The Epoch-Greedy Algorithm for Contextual Multi-armed Bandits
Tong Zhang
Department of Statistics
Rutgers University
[email protected]
John Langford
Yahoo! Research
[email protected]
Abstract
We present Epoch-Greedy, an algorithm for contextual multi-armed bandits (also
known as bandits with side information). Epoch-Greedy has the following properties:
1. No knowledge of a time horizon T is necessary.
2. The regret incurred by Epoch-Greedy is controlled by a sample complexity
bound for a hypothesis class.
3. The regret scales as $O(T^{2/3} S^{1/3})$ or better (sometimes, much better). Here $S$
is the complexity term in a sample complexity bound for standard supervised
learning.
1 Introduction
The standard k-armed bandits problem has been well-studied in the literature (Lai & Robbins, 1985;
Auer et al., 2002; Even-Dar et al., 2006, for example). It can be regarded as a repeated game
between two players, with every stage consisting of the following: the world chooses $k$ rewards
$r_1, \ldots, r_k \in [0,1]$; the player chooses an arm $i \in \{1, \ldots, k\}$ without knowledge of the world's chosen
rewards, and then observes the reward $r_i$. The contextual bandits setting considered in this paper
is the same except for a modification of the first step, in which the player also observes context
information $x$ which can be used to determine which arm to pull.
The contextual bandits problem has many applications and is often more suitable than the standard
bandits problem, because settings with no context information are rare in practice. The setting
considered in this paper is directly motivated by the problem of matching ads to web-page contents
on the internet. In this problem, a number of ads (arms) are available to be placed on a number of
web-pages (context information). Each page visit can be regarded as a random draw of the context
information (one may also include the visitor's online profile as context information if available)
from an underlying distribution that is not controlled by the player. A certain amount of revenue is
generated when the visitor clicks on an ad. The goal is to put the most relevant ad on each page to
maximize the expected revenue. Although one may potentially put multiple ads on each web-page,
we focus on the problem that only one ad is placed on each page (which is like pulling an arm given
context information). The more precise definition is given in Section 2.
Prior Work. The problem of bandits with context has been analyzed previously (Pandey et al., 2007;
Wang et al., 2005), typically under additional assumptions such as a correct prior or knowledge of
the relationship between the arms. This problem is also known as associative reinforcement learning
(Strehl et al., 2006, for example) or bandits with side information. A few results under as weak or
weaker assumptions are directly comparable.
1. The Exp4 algorithm (Auer et al., 1995) notably makes no assumptions about the world.
Epoch-Greedy has a worse regret bound in $T$ ($O(T^{2/3})$ rather than $O(T^{1/2})$) and is only
analyzed under an IID assumption. An important advantage of Epoch-Greedy is a much
better dependence on the size of the set of predictors. In the situation where the number
of predictors is infinite but with finite VC-dimension $d$, Exp4 has a vacuous regret bound
while Epoch-Greedy has a regret bound no worse than $O(T^{2/3} (\ln m)^{1/3})$. Sometimes we
can achieve much better dependence on $T$, depending on the structure of the hypothesis space. For example, we will show that it is possible to achieve an $O(\ln T)$ regret bound
using Epoch-Greedy, while this is not possible with Exp4 or any simple modification of
it. Another substantial advantage is reduced computational complexity. The ERM step in
Epoch-Greedy can be replaced with any standard learning algorithm that achieves approximate loss minimization, making guarantees that degrade gracefully with the approximation
factor. Exp4, on the other hand, requires computation proportional to the explicit count of
hypotheses in a hypothesis space.
2. The random trajectories method (Kearns et al., 2000) for learning policies in reinforcement
learning with hard horizon T = 1 is essentially the same setting. In this paper, bounds are
stated for a batch oriented setting where examples are formed and then used for choosing
a hypothesis. Epoch-Greedy takes advantage of this idea, but it also has analysis which
states that it trades off the number of exploration and exploitation steps so as to maximize
the sum of rewards incurred during both exploration and exploitation.
What we do. We present and analyze the Epoch-Greedy algorithm for multi-armed bandits with
context. This has all the nice properties stated in the abstract, resulting in a practical algorithm for
solving this problem.
The paper is broken up into the following sections.
1. In Section 2 we present basic definitions and background.
2. Section 3 presents the Epoch-Greedy algorithm along with a regret bound analysis which
holds without knowledge of T .
3. Section 4 analyzes the instantiation of the Epoch-Greedy algorithm in several settings.
2 Contextual bandits
We first formally define contextual bandit problems and algorithms to solve them.
Definition 2.1 (Contextual bandit problem) In a contextual bandits problem, there is a distribution $P$ over $(x, r_1, \ldots, r_k)$, where $x$ is context, $a \in \{1, \ldots, k\}$ is one of the $k$ arms to be pulled,
and $r_a \in [0,1]$ is the reward for arm $a$. The problem is a repeated game: on each round, a sample
$(x, r_1, \ldots, r_k)$ is drawn from $P$, the context $x$ is announced, and then for precisely one arm $a$ chosen
by the player, its reward $r_a$ is revealed.
Definition 2.2 (Contextual bandit algorithm) A contextual bandits algorithm $B$ determines an
arm $a \in \{1, \ldots, k\}$ to pull at each time step $t$, based on the previous observation sequence
$(x_1, a_1, r_{a,1}), \ldots, (x_{t-1}, a_{t-1}, r_{a,t-1})$, and the current context $x_t$.
Our goal is to maximize the expected total reward $\sum_{t=1}^{T} E_{(x_t,\vec{r}_t)\sim P}[r_{a,t}]$. Note that we use the
notation $r_{a,t} = r_{a_t}$ to improve readability. Similar to supervised learning, we assume that we are
given a set $H$ consisting of hypotheses $h : X \to \{1, \ldots, k\}$. Each hypothesis maps side information
$x$ to an arm $a$. A natural goal is to choose arms to compete with the best hypothesis in $H$. We
introduce the following definition.
Definition 2.3 (Regret) The expected reward of a hypothesis $h$ is
$$R(h) = E_{(x,\vec{r})\sim P}\,[r_{h(x)}].$$
Consider any contextual bandits algorithm $B$. Let $Z^T = \{(x_1, \vec{r}_1), \ldots, (x_T, \vec{r}_T)\}$, and let the expected
regret of $B$ with respect to a hypothesis $h$ be
$$\Delta R(B, h, T) = T\, R(h) - E_{Z^T \sim P^T} \sum_{t=1}^{T} r_{B(x_t),t}.$$
The expected regret of $B$ up to time $T$ with respect to hypothesis space $H$ is defined as
$$\Delta R(B, H, T) = \sup_{h \in H} \Delta R(B, h, T).$$
The main challenge of the contextual bandits problem is that when we pull an arm, rewards of
other arms are not observed. Therefore it is necessary to try all arms (explore) in order to form an
accurate estimation. In this context, methods we investigate in the paper make explicit distinctions
between exploration and exploitation steps. In an exploration step, the goal is to form unbiased
samples by randomly pulling all arms to improve the accuracy of learning. Because it does not
focus on the best arm, this step leads to large immediate regret but can potentially reduce regret
for the future exploitation steps. In an exploitation step, the learning algorithm suggests the best
hypothesis learned from the samples formed in the exploration steps, and the arm given by the
hypothesis is pulled: the goal is to maximize immediate reward (or minimize immediate regret).
Since the samples in the exploitation steps are biased (toward the arm suggested by the learning
algorithm using previous exploration samples), we do not use them to learn the hypothesis for the
future steps. That is, in methods we consider, exploitation does not help us to improve learning
accuracy for the future.
More specifically, in an exploration step, in order to form unbiased samples, we pull an arm $a \in \{1, \ldots, k\}$ uniformly at random. Therefore the expected regret compared to the best hypothesis
in $H$ can be as large as $O(1)$. In an exploitation step, the expected regret can be much smaller.
Therefore a central theme we examine in this paper is to balance the trade-off between exploration
and exploitation, so as to achieve a small overall expected regret up to some time horizon $T$.
Note that if we decide to pull a specific arm $a$ with side information $x$, we do not observe rewards
$r_{a'}$ for $a' \neq a$. In order to apply standard sample complexity analysis, we first show that exploration
samples, where $a$ is picked uniformly at random, can create a standard learning problem without
missing observations. This is simply achieved by setting fully observed rewards $r'$ such that
$$r'_{a'}(r_a) = k\, I(a' = a)\, r_a, \qquad (1)$$
where $I(\cdot)$ is the indicator function. The basic idea behind this transformation from partially observed to fully observed data dates back to the analysis of "Sample Selection Bias" (Heckman,
1979). The above rule is easily generalized to other distributions over actions $p(a)$ by replacing $k$
with $1/p(a)$.
The following lemma shows that this method of filling in missing reward components is unbiased.

Lemma 2.1 For all arms $a'$: $E_{\vec{r}\sim P|x}[r_{a'}] = E_{\vec{r}\sim P|x,\, a\sim U(1,\ldots,k)}\left[r'_{a'}(r_a)\right]$. Therefore for any hypothesis $h(x)$, we have $R(h) = E_{(x,\vec{r})\sim P,\, a\sim U(1,\ldots,k)}\left[r'_{h(x)}(r_a)\right]$.

Proof We have:
$$E_{\vec{r}\sim P|x,\, a\sim U(1,\ldots,k)}[r'_{a'}(r_a)] = E_{\vec{r}\sim P|x} \sum_{a=1}^{k} k^{-1}[r'_{a'}(r_a)] = E_{\vec{r}\sim P|x} \sum_{a=1}^{k} k^{-1}[k\, r_a\, I(a'=a)] = E_{\vec{r}\sim P|x}[r_{a'}].$$
Lemma 2.1 implies that we can estimate the reward $R(h)$ of any hypothesis $h(x)$ using an expectation
with respect to exploration samples $(x, a, r_a)$. The right-hand side can then be replaced by the empirical
sum $\sum_t I(h(x_t) = a_t)\, r_{a,t}$ for hypotheses in a hypothesis space $H$. The quality of this
estimation can be obtained with uniform convergence learning bounds.
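To make this concrete, here is a minimal sketch of the importance-weighted estimator (our own illustration, not code from the paper; the hypothesis can be any function mapping contexts to arms):

```python
def estimate_reward(h, samples, k):
    """Unbiased estimate of R(h) from exploration samples.

    samples: list of (x, a, r_a) triples where the arm a was drawn
    uniformly from {0, ..., k-1}. Each observed reward is weighted by
    k = 1 / P(a), the empirical analogue of Equation (1) and Lemma 2.1;
    with the k/n scaling this matches the empirical reward used in
    Section 4.2.
    """
    n = len(samples)
    if n == 0:
        return 0.0
    return sum(k * r_a * (h(x) == a) for (x, a, r_a) in samples) / n
```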
3 Exploration with the Epoch-Greedy algorithm
The problem with treating contextual bandits as standard bandits is that the information in $x$ is lost.
That is, the optimal arm to pull should be a function of the context $x$, but this is not captured by the
standard bandits setting. An alternative approach is to regard each hypothesis $h$ as a separate artificial "arm", and then apply a standard bandits algorithm to these artificial arms. Using this approach,
letting $m$ be the number of hypotheses, we can get a bound of $O(m)$. However, this solution ignores
the fact that many hypotheses can share the same arm, so that choosing an arm yields information
for many hypotheses. For this reason, with a simple algorithm, we can get a bound that depends on
$m$ logarithmically, instead of $O(m)$ as would be the case for the standard bandits solution discussed
above.
As discussed earlier, the key issue in the algorithm is to determine when to explore and when to
exploit, so as to achieve an appropriate balance. If we are given the time horizon $T$ in advance, and
would like to optimize performance for that $T$, then it is always advantageous to perform a
first phase of exploration steps, followed by a second phase of exploitation steps (until time step $T$).
The reason that there is no advantage to taking any exploitation step before the last exploration step is:
by switching the two steps, we can more accurately pick the optimal hypothesis in the exploitation
step due to more samples from exploration. With fixed $T$, assume that we have taken $n$ steps of
exploration and obtained an average regret bound of $\epsilon_n$ for each exploitation step at that point; then
we can bound the regret of the exploration phase by $n$, and that of the exploitation phase by $\epsilon_n (T - n)$. The
total regret is $n + (T - n)\epsilon_n$. Using this bound, we should switch from exploration to exploitation at
the point $n$ that minimizes the sum.

Without knowing $T$ in advance, but with the same generalization bound, we can run exploration/exploitation in epochs, where at the beginning of each epoch $\ell$, we perform one step of exploration, followed by $\lceil 1/\epsilon_\ell \rceil$ steps of exploitation. We then start the next epoch. After epoch $L$, the
total average regret is no more than $\sum_{\ell=1}^{L} (1 + \epsilon_\ell \lceil 1/\epsilon_\ell \rceil) \leq 3L$. Moreover, the epoch $L_\star$ containing $T$ is no more than the optimal regret bound $\min_n [n + (T - n)\epsilon_n]$ (with known $T$ and optimal
stopping point). Therefore the performance of our method (which does not need to know $T$) is no
worse than three times the optimal bound with known $T$ and optimal stopping point. This motivates
a modified algorithm in Figure 1. The idea described above is related to forcing in (Lai & Yakowitz,
1995).
Proposition 3.1 Consider a sequence of nonnegative and monotone non-increasing numbers $\{\epsilon_n\}$.
Let $L_\star = \min\{L : \sum_{\ell=1}^{L} (1 + \lceil 1/\epsilon_\ell \rceil) \geq T\}$; then
$$L_\star \leq \min_{n \in [0,T]} [n + (T - n)\epsilon_n].$$

Proof Let $n_\star = \arg\min_{n \in [0,T]} [n + (T - n)\epsilon_n]$. The bound is trivial if $n_\star \geq L_\star$. We only
need consider the case $n_\star \leq L_\star - 1$. By assumption, $\sum_{\ell=1}^{L_\star - 1} (1 + 1/\epsilon_\ell) \leq T - 1$. Since
$\sum_{\ell=1}^{L_\star - 1} 1/\epsilon_\ell \geq \sum_{\ell=n_\star}^{L_\star - 1} 1/\epsilon_\ell \geq (L_\star - n_\star)/\epsilon_{n_\star}$, we have $L_\star - 1 + (L_\star - n_\star)/\epsilon_{n_\star} \leq T - 1$.
Rearranging, we have $L_\star \leq n_\star + (T - L_\star)\epsilon_{n_\star}$.
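As a quick numerical sanity check of Proposition 3.1 (our own illustration, with an assumed non-increasing generalization bound $\epsilon_n = 1/\sqrt{n+1}$):

```python
import math

T = 10000
eps = lambda n: 1.0 / math.sqrt(n + 1)  # assumed non-increasing bound

# L_star: the first epoch count whose total steps reach T
steps, L_star = 0, 0
while steps < T:
    L_star += 1
    steps += 1 + math.ceil(1.0 / eps(L_star))

# optimal explore-then-exploit trade-off with T known in advance
best = min(n + (T - n) * eps(n) for n in range(T + 1))
print(L_star, best)  # Proposition 3.1 guarantees L_star <= best
```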
In Figure 1, $s(Z_1^n)$ is a sample-dependent (integer-valued) exploitation step count. Proposition 3.1
suggests that choosing $s(Z_1^n) = \lceil 1/\epsilon_n(Z_1^n) \rceil$, where $\epsilon_n(Z_1^n)$ is a sample-dependent average generalization bound, yields performance comparable to the optimal bound with known time horizon $T$.
Definition 3.1 (Epoch-Greedy Exploitation Cost) Consider a hypothesis space $H$ consisting of
hypotheses that take values in $\{1, 2, \ldots, k\}$. Let $Z_t = (x_t, a_t, r_{a,t})$ for $t = 1, \ldots, n$ be independent random samples, where $a_t$ is uniformly distributed in $\{1, \ldots, k\}$, and $r_{a,t} \in [0,1]$ is
the observed (random) reward. Let $Z_1^n = \{Z_1, \ldots, Z_n\}$, and define the empirical reward maximization
estimator
$$\hat{h}(Z_1^n) = \arg\max_{h \in H} \sum_{t=1}^{n} r_{a,t}\, I(h(x_t) = a_t).$$
Given any fixed $n$ and observation $Z_1^n$, we denote by $s(Z_1^n)$ a data-dependent exploitation
step count. Then the per-epoch exploitation cost is defined as
$$\mu_n(H, s) = E_{Z_1^n} \Big[ \Big( \sup_{h \in H} R(h) - R(\hat{h}(Z_1^n)) \Big)\, s(Z_1^n) \Big].$$
Epoch-Greedy (s(W_ℓ))   /* parameter s(W_ℓ): exploitation steps */
initialize: exploration samples W_0 = {} and t_1 = 1
iterate ℓ = 1, 2, . . .
    t = t_ℓ, and observe x_t   /* do one-step exploration */
    select an arm a_t ∈ {1, . . . , k} uniformly at random
    receive reward r_{a,t} ∈ [0, 1]
    W_ℓ = W_{ℓ-1} ∪ {(x_t, a_t, r_{a,t})}
    find the best hypothesis ĥ_ℓ ∈ H by solving max_{h ∈ H} Σ_{(x,a,r_a) ∈ W_ℓ} r_a I(h(x) = a)
    t_{ℓ+1} = t_ℓ + s(W_ℓ) + 1
    for t = t_ℓ + 1, . . . , t_{ℓ+1} − 1   /* do s(W_ℓ) steps of exploitation */
        select arm a_t = ĥ_ℓ(x_t)
        receive reward r_{a,t} ∈ [0, 1]
    end for
end iterate

Figure 1: Exploration by ε-greedy in epochs
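A minimal Python sketch of Figure 1 follows (our own rendering; the environment interface `env`, the finite hypothesis list, and the schedule `s` are assumptions for illustration):

```python
import random

def epoch_greedy(env, hypotheses, s, num_epochs):
    """Sketch of the Epoch-Greedy loop of Figure 1.

    env.observe() returns a context x_t; env.pull(a) returns a reward
    in [0, 1]; hypotheses is a finite list of functions h: x -> arm in
    {0, ..., env.num_arms - 1}; s(W) maps the exploration set W to an
    exploitation step count.
    """
    W = []  # exploration samples (x, a, r_a)
    for _ in range(num_epochs):
        # one step of exploration with a uniformly random arm
        x = env.observe()
        a = random.randrange(env.num_arms)
        W.append((x, a, env.pull(a)))
        # empirical reward maximization over the exploration samples
        h_best = max(hypotheses,
                     key=lambda h: sum(r for (xi, ai, r) in W if h(xi) == ai))
        # s(W) steps of exploitation with the learned hypothesis
        for _ in range(s(W)):
            env.pull(h_best(env.observe()))
```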
Theorem 3.1 For all $T, n_\ell, L$ such that $T \leq L + \sum_{\ell=1}^{L} n_\ell$, the expected regret of Epoch-Greedy
in Figure 1 is bounded by
$$\Delta R(\text{Epoch-Greedy}, H, T) \leq L + \sum_{\ell=1}^{L} \mu_\ell(H, s) + T \sum_{\ell=1}^{L} P[s(Z_1^\ell) < n_\ell].$$
This theorem statement is very general, because we want to allow sample-dependent bounds to be
used. When sample-independent bounds are used, the following simple corollary holds:

Corollary 3.1 Assume we choose $s(Z_1^\ell) = s_\ell \leq \lfloor 1/\mu_\ell(H, 1) \rfloor$ ($\ell = 1, \ldots$), and let
$L_T = \min\{L : L + \sum_{\ell=1}^{L} s_\ell \geq T\}$. Then the expected regret of Epoch-Greedy in Figure 1 is
bounded by
$$\Delta R(\text{Epoch-Greedy}, H, T) \leq 2 L_T.$$
Proof (of the main theorem) Let $B$ be the Epoch-Greedy algorithm. One of the following events
will occur:

- $A$: $s(Z_1^\ell) < n_\ell$ for some $\ell = 1, \ldots, L$.
- $B$: $s(Z_1^\ell) \geq n_\ell$ for all $\ell = 1, \ldots, L$.

If event $A$ occurs, then since each reward is in $[0,1]$, up to time $T$ the regret cannot be larger than $T$.
Thus the total expected contribution of $A$ to the regret $\Delta R(B, H, T)$ is at most
$$T\, P(A) \leq T \sum_{\ell=1}^{L} P[s(Z_1^\ell) < n_\ell]. \qquad (2)$$

If event $B$ occurs, then $t_{\ell+1} - t_\ell \geq n_\ell + 1$ for $\ell = 1, \ldots, L$, and thus $t_{L+1} > T$. Therefore the
expected contribution of $B$ to the regret $\Delta R(B, H, T)$ is at most the sum of the expected regret in the
first $L$ epochs.

By definition and construction, after the first step of epoch $\ell$, $W_\ell$ consists of $\ell$ random observations
$Z_j = (x_j, a_j, r_{a,j})$, where $a_j$ is drawn uniformly at random from $\{1, \ldots, k\}$, and $j = 1, \ldots, \ell$.
This is independent of the number of exploitation steps before epoch $\ell$. Therefore we can treat
$W_\ell$ as $\ell$ independent samples. This means that the expected regret associated with the exploitation
steps in epoch $\ell$ is $\mu_\ell(H, s)$. Since the exploration step in each epoch contributes at most 1 to the
expected regret, the total expected regret for each epoch $\ell$ is at most $1 + \mu_\ell(H, s)$. Therefore the
total expected regret for epochs $\ell = 1, \ldots, L$ is at most $L + \sum_{\ell=1}^{L} \mu_\ell(H, s)$. Combined with (2),
we obtain the desired bound.
In the theorem, we bound the expected regret of each exploration step by one. Clearly this assumes
the worst case scenario and can often be improved. Some consequences of the theorem with specific
function classes are given in Section 4.
4 Examples
Theorem 3.1 is quite general. In this section, we present a few simple examples to illustrate the
potential applications.
4.1 Finite hypothesis space worst case bound
Consider the finite hypothesis space situation, with $m = |H| < \infty$. We apply Theorem 3.1 with a
worst-case deviation bound.

Let $x_1, \ldots, x_n \in [0, k]$ be iid random variables such that $E x_i \leq 1$. Then Bernstein's inequality
implies that there exists a constant $c_0 > 0$ such that for all $\delta \in (0, 1)$, with probability $1 - \delta$:
$$\sum_{i=1}^{n} x_i - \sum_{i=1}^{n} E x_i \leq c_0 \sqrt{\ln(1/\delta) \sum_{i=1}^{n} E x_i^2} + c_0 k \ln(1/\delta) \leq c_0 \sqrt{n k \ln(1/\delta)} + c_0 k \ln(1/\delta).$$
It follows that there exists a universal constant $c > 0$ such that
$$\mu_n(H, 1) \leq c^{-1} \sqrt{k \ln m / n}.$$
Therefore in Figure 1, if we choose
$$s(Z_1^\ell) = \lfloor c \sqrt{\ell/(k \ln m)} \rfloor,$$
then $\mu_\ell(H, s) \leq 1$; this is consistent with the choice recommended in Proposition 3.1.

In order to obtain a performance bound for this scheme using Theorem 3.1, we can simply take
$$n_\ell = \lfloor c \sqrt{\ell/(k \ln m)} \rfloor.$$
This implies that $P(s(Z_1^\ell) < n_\ell) = 0$. Moreover, with this choice, for any $T$, we can pick an $L$ that
satisfies the condition $T \leq \sum_{\ell=1}^{L} n_\ell$. It follows that there exists a universal constant $c_0 > 0$ such
that for any given $T$, we can take
$$L = \lfloor c_0 T^{2/3} (k \ln m)^{1/3} \rfloor$$
in Theorem 3.1.

In summary, if we choose $s(Z_1^\ell) = \lfloor c \sqrt{\ell/(k \ln m)} \rfloor$ in Figure 1, then
$$\Delta R(\text{Epoch-Greedy}, H, T) \leq 2L \leq 2 c_0 T^{2/3} (k \ln m)^{1/3}.$$
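For reference, the worst-case schedule above is trivial to compute (a sketch; the universal constant $c$ is not specified by the analysis, so it is an assumed tuning parameter here):

```python
import math

def s_worst_case(ell, k, m, c=1.0):
    # s(Z_1^ell) = floor(c * sqrt(ell / (k ln m))), Section 4.1
    return int(c * math.sqrt(ell / (k * math.log(m))))
```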
Reducing the problem to standard bandits, as discussed at the beginning of Section 3, leads to
a bound of $O(m \ln T)$ (Lai & Robbins, 1985; Auer et al., 2002). Therefore when $m$ is large,
the Epoch-Greedy algorithm in Figure 1 can perform significantly better. In this particular situation,
Epoch-Greedy does not do as well as Exp4 (Auer et al., 1995), which achieves a regret of
$O(\sqrt{kT \ln m})$. However, the advantage of Epoch-Greedy is that any learning bound can be applied.
For many hypothesis classes, the $\ln m$ factor can be improved for Epoch-Greedy. In fact, a similar
result can be obtained for classes with infinitely many hypotheses but finite VC dimension. Moreover, as we will see next, under additional assumptions, it is possible to obtain much better bounds
in terms of $T$ for Epoch-Greedy, such as $O(k \ln m + k \ln T)$. This extends the classical $O(\ln T)$
bound for standard bandits, and is not possible to achieve using Exp4 or simple variations of it.
4.2 Finite hypothesis space with unknown expected reward gap
This example illustrates the importance of allowing a sample-dependent $s(Z_1^\ell)$. We still assume a
finite hypothesis space, with $m = |H| < \infty$. However, we would like to improve the performance
bound by imposing additional assumptions. In particular, we note that the standard bandits problem
has regret of the form $O(\ln T)$, while in the worst case, our method for the contextual bandits problem
has regret $O(T^{2/3})$. A natural question is then: what assumptions can we impose so that the
Epoch-Greedy algorithm has a regret of the form $O(\ln T)$?

The main technical reason that the standard bandits problem has regret $O(\ln T)$ is that the expected
reward of the best bandit and that of the second best bandit have a gap: the constant hidden in
the $O(\ln T)$ bound depends on this gap, and the bound becomes trivial (infinite) when the gap
approaches zero. In this example we show that a similar assumption for contextual bandits problems
leads to a similar regret bound of $O(\ln T)$ for the Epoch-Greedy algorithm.
Let $H = \{h_1, \ldots, h_m\}$, and assume without loss of generality that $R(h_1) \geq R(h_2) \geq \cdots \geq R(h_m)$. Suppose that we know $R(h_1) \geq R(h_2) + \Delta$ for some $\Delta > 0$, but the value of $\Delta$ is not
known in advance.
Although $\Delta$ is not known, it can be estimated from the data $Z_1^n$. Let the empirical reward of $h \in H$
be
$$\hat{R}(h|Z_1^n) = \frac{k}{n} \sum_{t=1}^{n} r_{a,t}\, I(h(x_t) = a_t).$$
Let $\hat{h}_1$ be the hypothesis with the highest empirical reward on $Z_1^n$, and $\hat{h}_2$ be the hypothesis with the second
highest empirical reward. We define the empirical gap as
$$\hat{\Delta}(Z_1^n) = \hat{R}(\hat{h}_1|Z_1^n) - \hat{R}(\hat{h}_2|Z_1^n).$$
Let $h_1$ be the hypothesis with the highest true expected reward; then we suffer a regret when $\hat{h}_1 \neq h_1$. Again, the standard large deviation bound implies that there exists a universal constant $c > 0$
such that for all $j \geq 1$:
$$P\big(\hat{\Delta}(Z_1^n) \leq (j-1)\Delta,\ \hat{h}_1 \neq h_1\big) \leq m\, e^{-c k^{-1} n (1 + j^2) \Delta^2}, \qquad P\big(\hat{\Delta}(Z_1^n) \leq 0.5\Delta\big) \leq m\, e^{-c k^{-1} n \Delta^2}.$$
Now we can set $s(Z_1^n) = \big\lfloor m^{-1} e^{(2k)^{-1} c\, n\, \hat{\Delta}(Z_1^n)^2} \big\rfloor$. With this choice, there exists a constant $c' > 0$
such that
$$\mu_n(H, s) \leq \sum_{j=1}^{\lceil \Delta^{-1} \rceil} \sup\big\{ s(Z_1^n) : \hat{\Delta}(Z_1^n) \leq j\Delta \big\}\; P\big(\hat{\Delta}(Z_1^n) \in [(j-1)\Delta, j\Delta],\ \hat{h}_1 \neq h_1\big)$$
$$\leq \sum_{j=1}^{\lceil \Delta^{-1} \rceil} m^{-1} e^{(2k)^{-1} c\, n\, j^2 \Delta^2}\; P\big(\hat{\Delta}(Z_1^n) \in [(j-1)\Delta, j\Delta],\ \hat{h}_1 \neq h_1\big)$$
$$\leq \sum_{j=1}^{\lceil \Delta^{-1} \rceil} e^{(2k)^{-1} c\, n\, j^2 \Delta^2 - c k^{-1} n (1 + j^2)\Delta^2} = \sum_{j=1}^{\lceil \Delta^{-1} \rceil} e^{-c k^{-1} n (0.5 j^2 + 1)\Delta^2} \leq c' \sqrt{k/n}\; \Delta^{-1} e^{-c k^{-1} n \Delta^2}.$$
There exists a constant $c'' > 0$ such that for any $L$:
$$\sum_{\ell=1}^{L} \mu_\ell(H, s) \leq L + c' \sum_{\ell=1}^{\infty} \sqrt{k/\ell}\; \Delta^{-1} e^{-c k^{-1} \ell \Delta^2} \leq L + c'' k \Delta^{-2}.$$
Now, consider any time horizon $T$. If we set $n_\ell = 0$ when $\ell < L$, $n_L = T$, and
$$L = \frac{8k(\ln m + \ln(T+1))}{c\, \Delta^2},$$
then
$$P(s(Z_1^L) < n_L) \leq P\big(\hat{\Delta}(Z_1^L) \leq 0.5\Delta\big) \leq m\, e^{-c k^{-1} L \Delta^2} \leq 1/T.$$
That is, if we choose $s(Z_1^n) = \big\lfloor m^{-1} e^{(2k)^{-1} c\, n\, \hat{\Delta}(Z_1^n)^2} \big\rfloor$ in Figure 1, then
$$\Delta R(\text{Epoch-Greedy}, H, T) \leq 2L + 1 + c'' k \Delta^{-2} \leq \frac{16k(\ln m + \ln(T+1))}{c\, \Delta^2} + 1 + c'' k \Delta^{-2}.$$
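The sample-dependent schedule of this subsection can likewise be sketched as follows (our own code; the large-deviation constant $c$ is unknown in practice and is treated as a tunable assumption):

```python
import math

def empirical_gap(hypotheses, W, k):
    """hat-Delta(Z_1^n): gap between the two best empirical rewards."""
    n = len(W)
    rewards = sorted((k / n) * sum(r for (x, a, r) in W if h(x) == a)
                     for h in hypotheses)
    return rewards[-1] - rewards[-2]

def s_gap(W, hypotheses, k, m, c=1.0):
    # s(Z_1^n) = floor(m^{-1} exp(c n hat-Delta^2 / (2k))), Section 4.2
    gap = empirical_gap(hypotheses, W, k)
    return int(math.exp(c * len(W) * gap ** 2 / (2 * k)) / m)
```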
The regret for this choice is $O(\ln T)$, which is better than the $O(T^{2/3})$ of Section 4.1. However, the
constant depends on the gap $\Delta$, which can be small. It is possible to combine the two strategies (that
is, use the $s(Z_1^n)$ choice of Section 4.1 when $\hat{\Delta}(Z_1^n)$ is small) and obtain bounds that not only work
well when the gap $\Delta$ is large, but are also not much worse than the bound of Section 4.1 when $\Delta$ is small.
As a special case, we can apply the method in this section to solve the standard bandits problem.
The $O(k \ln T)$ bound of the Epoch-Greedy method matches those of more specialized algorithms for
the standard bandits problem, although our algorithm has a larger constant.
5 Conclusion
We consider a generalization of the multi-armed bandits problem, where observable context can
be used to determine which arm to pull, and investigate the sample complexity of the exploration/exploitation trade-off for the Epoch-Greedy algorithm.
The Epoch-Greedy algorithm analysis leaves one important open problem behind. Epoch-Greedy is
much better at dealing with large hypothesis spaces or hypothesis spaces with special structures, due
to its ability to employ any data-dependent sample complexity bound. However, for a finite hypothesis
space, in the worst case scenario, Exp4 has better dependency on $T$. In such situations, it is possible
that a better designed algorithm can achieve both strengths.
References

Auer, P., Cesa-Bianchi, N., & Fischer, P. (2002). Finite time analysis of the multi-armed bandit problem. Machine Learning, 47, 235–256.

Auer, P., Cesa-Bianchi, N., Freund, Y., & Schapire, R. E. (1995). Gambling in a rigged casino: The adversarial multi-armed bandit problem. FOCS.

Even-Dar, E., Mannor, S., & Mansour, Y. (2006). Action elimination and stopping conditions for the multi-armed bandit and reinforcement learning problems. JMLR, 7, 1079–1105.

Heckman, J. (1979). Sample selection bias as a specification error. Econometrica, 47, 153–161.

Kearns, M., Mansour, Y., & Ng, A. Y. (2000). Approximate planning in large POMDPs via reusable trajectories. NIPS.

Lai, T., & Robbins, H. (1985). Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6, 4–22.

Lai, T., & Yakowitz, S. (1995). Machine learning and nonparametric bandit theory. IEEE TAC, 40, 1199–1209.

Pandey, S., Agarwal, D., Chakrabarti, D., & Josifovski, V. (2007). Bandits for taxonomies: a model-based approach. SIAM Data Mining Conference.

Strehl, A. L., Mesterharm, C., Littman, M. L., & Hirsh, H. (2006). Experience-efficient learning in associative bandit problems. ICML.

Wang, C.-C., Kulkarni, S. R., & Poor, H. V. (2005). Bandit problems with side observations. IEEE Transactions on Automatic Control, 50, 338–355.
Stable Dual Dynamic Programming
Tao Wang* Daniel Lizotte Michael Bowling Dale Schuurmans
Department of Computing Science
University of Alberta
{trysi,dlizotte,bowling,dale}@cs.ualberta.ca
Abstract
Recently, we have introduced a novel approach to dynamic programming and reinforcement learning that is based on maintaining explicit representations of stationary distributions instead of value functions. In this paper, we investigate the
convergence properties of these dual algorithms both theoretically and empirically,
and show how they can be scaled up by incorporating function approximation.
1 Introduction
Value function representations are dominant in algorithms for dynamic programming (DP) and reinforcement learning (RL). However, linear programming (LP) methods clearly demonstrate that the
value function is not a necessary concept for solving sequential decision making problems. In LP
methods, value functions only correspond to the primal formulation of the problem, while in the dual
they are replaced by the notion of state (or state-action) visit distributions [1, 2, 3]. Despite the well
known LP duality, dual representations have not been widely explored in DP and RL. Recently, we
have showed that it is entirely possible to solve DP and RL problems in the dual representation [4].
Unfortunately, [4] did not analyze the convergence properties nor implement the proposed ideas. In
this paper, we investigate the convergence properties of these newly proposed dual solution techniques, and show how they can be scaled up by incorporating function approximation. The proof
techniques we use to analyze convergence are simple, but lead to useful conclusions. In particular,
we find that the standard convergence results for value based approaches also apply to the dual case,
even in the presence of function approximation and off-policy updating. The dual approach appears
to hold an advantage over the standard primal view of DP/RL in one major sense: since the fundamental objects being represented are normalized probability distributions (i.e., belong to a bounded
simplex), dual updates cannot diverge. In particular, we find that dual updates converge (i.e. avoid
oscillation) in the very circumstance where primal updates can and often do diverge: gradient-based
off-policy updates with linear function approximation [5, 6].
2 Preliminaries
We consider the problem of computing an optimal behavior strategy in a Markov decision process
(MDP), defined by a set of actions $A$, a set of states $S$, an $|S||A| \times |S|$ transition matrix $P$, a reward
vector $r$, and a discount factor $\gamma$, where we assume the goal is to maximize the infinite horizon
discounted reward $r_0 + \gamma r_1 + \gamma^2 r_2 + \cdots = \sum_{t=0}^{\infty} \gamma^t r_t$. It is known that an optimal behavior
strategy can always be expressed by a stationary policy, whose entries $\pi(sa)$ specify the probability
of taking action $a$ in state $s$. Below, we represent a policy $\pi$ by an equivalent representation as
an $|S| \times |S||A|$ matrix $\Pi$, where $\Pi(s, s'a) = \pi(sa)$ if $s' = s$, and 0 otherwise. One can quickly verify
that the matrix product $\Pi P$ gives the state-to-state transition probabilities induced by the policy
$\pi$ in the environment $P$, and that $P \Pi$ gives the state-action to state-action transition probabilities
induced by policy $\pi$ in $P$. The problem is to compute an optimal policy given either (a) a complete
specification of the environmental variables $P$ and $r$ (the "planning problem"), or (b) limited access
to the environment through observed states and rewards and the ability to select actions to cause
further state transitions (the "learning problem"). The first problem is normally tackled by LP or DP
methods, and the second by RL methods. In this paper, we will restrict our attention to scenario (a).

* Current affiliation: Computer Sciences Laboratory, Australian National University, [email protected].
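The matrix conventions above are easy to reproduce directly; the following sketch (our own, on a small random MDP) builds $\Pi$ from a policy and checks the two induced transition operators:

```python
import numpy as np

S, A = 4, 2  # assumed small sizes for illustration
rng = np.random.default_rng(0)

# |S||A| x |S| transition matrix, rows indexed by pairs (s, a)
P = rng.random((S * A, S))
P /= P.sum(axis=1, keepdims=True)

pi = rng.random((S, A))
pi /= pi.sum(axis=1, keepdims=True)

# |S| x |S||A| policy matrix: Pi[s, s'a] = pi(s, a) if s' == s, else 0
Pi = np.zeros((S, S * A))
for s in range(S):
    Pi[s, s * A:(s + 1) * A] = pi[s]

assert (Pi @ P).shape == (S, S)          # state-to-state transitions
assert (P @ Pi).shape == (S * A, S * A)  # state-action transitions
```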
3 Dual Representations
Traditionally, DP methods for solving the MDP planning problem are expressed in terms of
the primal value function. However, [4] demonstrated that all the classical algorithms have natural
duals expressed in terms of state and state-action probability distributions.
In the primal representation, the policy state-action value function can be specified by an $|S||A| \times 1$
vector $q = \sum_{i=0}^{\infty} \gamma^i (P\Pi)^i r$, which satisfies $q = r + \gamma P \Pi q$. To develop a dual form of state-action policy evaluation, one considers the linear system $d^\top = (1-\gamma)\nu^\top + \gamma\, d^\top P \Pi$, where $\nu$ is
the initial distribution over state-action pairs. Not only is $d$ a proper probability distribution over
state-action pairs, it also allows one to easily compute the expected discounted return of the policy $\pi$.
However, recovering the state-action distribution $d$ is inadequate for policy improvement. Therefore,
one considers the following $|S||A| \times |S||A|$ matrix: $H = (1-\gamma)I + \gamma P \Pi H$. The matrix $H$ that
satisfies this linear relation is similar to $d^\top$, in that each row is a probability distribution and the
entries $H_{(sa, s'a')}$ correspond to the probability of discounted state-action visits to $(s'a')$ for a policy
$\pi$ starting in state-action pair $(sa)$. Unlike $d^\top$, however, $H$ drops the dependence on $\nu$, giving
$(1-\gamma)q = Hr$. That is, given $H$ we can easily recover the state-action values of $\pi$.
For policy improvement, in the primal representation one can derive an improved policy $\pi'$ via
the update $a^*(s) = \arg\max_a q_{(sa)}$ and $\pi'(sa) = 1$ if $a = a^*(s)$, 0 otherwise. The dual form
of the policy update, expressed in terms of the state-action matrix $H$ for $\pi$, is $a^*(s) = \arg\max_a [Hr]_{(sa)}$. In fact, since $(1-\gamma)q = Hr$, the two policy updates, given in the primal
and dual respectively, must lead to the same resulting policy $\pi'$. Further details are given in [4].
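Both representations can be computed by a single linear solve each; the sketch below (our own, continuing the conventions above) also checks the identity $(1-\gamma)q = Hr$ and the agreement of the two policy updates:

```python
import numpy as np

def evaluate_policy(P, Pi, r, gamma):
    """Return the primal q and dual H for a fixed policy Pi (sketch)."""
    n = P.shape[0]  # n = |S||A|
    A_mat = np.eye(n) - gamma * P @ Pi
    q = np.linalg.solve(A_mat, r)                        # q = r + gamma P Pi q
    H = np.linalg.solve(A_mat, (1 - gamma) * np.eye(n))  # H = (1-g)I + g P Pi H
    return q, H

# q, H = evaluate_policy(P, Pi, r, 0.9)
# np.allclose((1 - 0.9) * q, H @ r)  # holds: (1 - gamma) q = H r
# The greedy updates agree:
# q.reshape(S, A).argmax(axis=1) == (H @ r).reshape(S, A).argmax(axis=1)
```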
4 DP algorithms and convergence
We first investigate whether dynamic programming operators with the dual representations exhibit
the same (or better) convergence properties as their primal counterparts. These questions will be answered in the affirmative. In the tabular case, dynamic programming algorithms can be expressed by
operators that are successively applied to current approximations (vectors in the primal case, matrices in the dual), to bring them closer to a target solution; namely, the fixed point of a desired Bellman
equation. Consider two standard operators, the on-policy update and the max-policy update.

For a given policy $\pi$, the on-policy operator $\mathcal{O}$ is defined as
$$\mathcal{O}q = r + \gamma P \Pi q \qquad \text{and} \qquad \mathcal{O}H = (1-\gamma)I + \gamma P \Pi H,$$
for the primal and dual cases respectively. The goal of the on-policy update is to bring the current
representations closer to satisfying the policy-specific Bellman equations,
$$q = r + \gamma P \Pi q \qquad \text{and} \qquad H = (1-\gamma)I + \gamma P \Pi H.$$
The max-policy operator $\mathcal{M}$ is different in that it is neither linear nor defined by any reference policy;
instead it applies a greedy max update to the current approximations:
$$\mathcal{M}q = r + \gamma P\, \Upsilon[q] \qquad \text{and} \qquad \mathcal{M}H = (1-\gamma)I + \gamma P\, \Upsilon_r[H],$$
where $\Upsilon[q]_{(s)} = \max_a q_{(sa)}$ and $\Upsilon_r[H]_{(s,:)} = H_{(s a^*(s),:)}$ such that $a^*(s) = \arg\max_a [Hr]_{(sa)}$.
The goal of this greedy update is to bring the representations closer to satisfying the optimal-policy
Bellman equations $q = r + \gamma P\, \Upsilon[q]$ and $H = (1-\gamma)I + \gamma P\, \Upsilon_r[H]$.
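Rendered directly in code, the four operators look as follows (our own sketch; `S` and `A` are the state and action counts, with state-action pairs ordered as $(s,a) \mapsto sA + a$):

```python
import numpy as np

def O_primal(q, P, Pi, r, gamma):
    return r + gamma * P @ (Pi @ q)

def O_dual(H, P, Pi, gamma):
    n = H.shape[0]
    return (1 - gamma) * np.eye(n) + gamma * P @ (Pi @ H)

def M_primal(q, P, r, gamma, S, A):
    v = q.reshape(S, A).max(axis=1)      # Upsilon[q](s) = max_a q(sa)
    return r + gamma * P @ v

def M_dual(H, P, r, gamma, S, A):
    a_star = (H @ r).reshape(S, A).argmax(axis=1)
    # Upsilon_r[H](s,:) selects the row of H at (s, a*(s))
    rows = np.stack([H[s * A + a_star[s]] for s in range(S)])
    return (1 - gamma) * np.eye(H.shape[0]) + gamma * P @ rows
```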
4.1 On-policy convergence
For the on-policy operator $\mathcal{O}$, convergence to the Bellman fixed point is easily proved in the primal
case by establishing a contraction property of $\mathcal{O}$ with respect to a specific norm on $q$ vectors. In
particular, one defines a weighted 2-norm with weights given by the stationary distribution determined by the policy $\pi$ and transition model $P$: let $z \geq 0$ be a vector such that $z^\top P \Pi = z^\top$;
that is, $z$ is the stationary state-action visit distribution for $P\Pi$. Then the norm is defined as
$$\|q\|_z^2 = q^\top Z q = \sum_{(sa)} z_{(sa)}\, q_{(sa)}^2, \qquad \text{where } Z = \mathrm{diag}(z).$$
It can be shown that $\|P \Pi q\|_z \leq \|q\|_z$
and $\|\mathcal{O}q_1 - \mathcal{O}q_2\|_z \leq \gamma \|q_1 - q_2\|_z$ (see [7]). Crucially, for this norm, a state-action transition is
not an expansion [7]. By the contraction map fixed point theorem [2] there exists a unique fixed point
of $\mathcal{O}$ in the space of vectors $q$. Therefore, repeated applications of the on-policy operator converge
to a vector $q^*$ such that $q^* = \mathcal{O}q^*$; that is, $q^*$ satisfies the policy based Bellman equation.
Analogously, for the dual representation $H$, one can establish convergence of the on-policy operator by first defining an appropriate weighted norm over matrices and then verifying that $\mathcal{O}$ is a
contraction with respect to this norm. Define
$$\|H\|_{z,r}^2 = \|Hr\|_z^2 = \sum_{(sa)} z_{(sa)} \Big( \sum_{(s'a')} H_{(sa,s'a')}\, r_{(s'a')} \Big)^2. \qquad (1)$$
It is easily verified that this definition satisfies the properties of a pseudo-norm, and in particular
satisfies the triangle inequality. This weighted 2-norm is defined with respect to the stationary
distribution $z$, but also the reward vector $r$. Thus, the magnitude of a row-normalized matrix is
determined by the magnitude of the weighted reward expectations it induces.
Interestingly, this definition allows us to establish the same non-expansion and contraction results
as in the primal case. We have $\|P \Pi H\|_{z,r} \leq \|H\|_{z,r}$ by arguments similar to the primal case.
Moreover, the on-policy operator is a contraction with respect to $\|\cdot\|_{z,r}$.
Lemma 1 $\|\mathcal{O}H_1 - \mathcal{O}H_2\|_{z,r} \leq \gamma \|H_1 - H_2\|_{z,r}$

Proof: $\|\mathcal{O}H_1 - \mathcal{O}H_2\|_{z,r} = \gamma \|P\Pi(H_1 - H_2)\|_{z,r} \leq \gamma \|H_1 - H_2\|_{z,r}$, since $\|P \Pi H\|_{z,r} \leq \|H\|_{z,r}$.

Thus, once again by the contraction map fixed point theorem, there exists a fixed point of $\mathcal{O}$ among
row-normalized matrices $H$, and repeated applications of $\mathcal{O}$ will converge to a matrix $H^*$ such that
$\mathcal{O}H^* = H^*$; that is, $H^*$ satisfies the policy based Bellman equation for dual representations. This
argument shows that on-policy dynamic programming converges in the dual representation, without
making direct reference to the primal case. We will use these results below.
4.2 Max-policy convergence
The strategy for establishing convergence for the nonlinear max operator is similar to the on-policy
case, but involves working with a different norm. Instead of considering a 2-norm weighted by the
visit probabilities induced by a fixed policy, one simply uses the max-norm in this case: $\|q\|_\infty = \max_{(sa)} |q_{(sa)}|$. The contraction property of the $\mathcal{M}$ operator with respect to this norm can then
be easily established in the primal case: $\|\mathcal{M}q_1 - \mathcal{M}q_2\|_\infty \leq \gamma \|q_1 - q_2\|_\infty$ (see [2]). As in the
on-policy case, contraction suffices to establish the existence of a unique fixed point of $\mathcal{M}$ among
vectors $q$, and that repeated application of $\mathcal{M}$ converges to this fixed point $q^*$ such that $\mathcal{M}q^* = q^*$.

To establish convergence of the off-policy update in the dual representation, first define the max-norm for state-action visit distributions as
$$\|H\|_\infty = \max_{(sa)} \Big| \sum_{(s'a')} H_{(sa,s'a')}\, r_{(s'a')} \Big|. \qquad (2)$$
Then one can simply reduce the dual to the primal case by appealing to the relationship $(1-\gamma)\mathcal{M}q = \mathcal{M}Hr$ to prove convergence of $\mathcal{M}H$.
Lemma 2 If $(1-\gamma)q = Hr$, then $(1-\gamma)\mathcal{M}q = \mathcal{M}Hr$.

Proof: $(1-\gamma)\mathcal{M}q = (1-\gamma)r + \gamma P\, \Upsilon[(1-\gamma)q] = (1-\gamma)r + \gamma P\, \Upsilon[Hr] = (1-\gamma)r + \gamma P\, \Upsilon_r[H]\, r = \mathcal{M}Hr$, where the second equality holds since we assumed $(1-\gamma)q_{(sa)} = [Hr]_{(sa)}$ for all $(sa)$.

Thus, given convergence of $\mathcal{M}q$ to a fixed point $\mathcal{M}q^* = q^*$, the same must also hold for $\mathcal{M}H$.
However, one subtlety here is that the dual fixed point is not unique. This is not a contradiction,
because the norm on dual representations $\|\cdot\|_{z,r}$ is in fact just a pseudo-norm, not a proper norm.
That is, the relationship between $H$ and $q$ is many to one, and several matrices can correspond to
the same $q$. These matrices form a convex subspace (in fact, a simplex), since if $H_1 r = (1-\gamma)q$
and $H_2 r = (1-\gamma)q$, then $(\alpha H_1 + (1-\alpha)H_2) r = (1-\gamma)q$ for any $\alpha$, where furthermore $\alpha$ must be
restricted to $0 \leq \alpha \leq 1$ to maintain nonnegativity. The simplex of fixed points $\{H^* : \mathcal{M}H^* = H^*\}$
is given by matrices $H^*$ that satisfy $H^* r = (1-\gamma)q^*$.
5 DP with function approximation
Primal and dual updates exhibit strong equivalence in the tabular case, as they should. However,
when we begin to consider approximation, differences emerge. We next consider the convergence
properties of the dynamic programming operators in the context of linear basis approximation. We
focus on the on-policy case here, because, famously, the max operator does not always have a fixed
point when combined with approximation in the primal case [8], and consequently suffers the risk
of divergence [5, 6].
Note that the max operator cannot diverge in the dual case, even with basis approximation, by
boundedness alone; although the question of whether max updates always converge in this case
remains open. Here we establish that a similar bound on approximation error in the primal case can
be proved for the dual approach with respect to the on-policy operator.
In the primal case, linear approximation proceeds by fixing a small set of basis functions, forming
an $|S||A| \times k$ matrix $\Phi$, where $k$ is the number of bases. The approximation of $q$ can be expressed
by a linear combination of bases, $\hat{q} = \Phi w$, where $w$ is a $k \times 1$ vector of adjustable weights. This is
equivalent to maintaining the constraint that $\hat{q} \in \mathrm{colspan}(\Phi)$. In the dual, a linear approximation
to $H$ can be expressed as $\mathrm{vec}(\hat{H}) = \Psi w$, where the vec operator creates a column vector from
a matrix by stacking the column vectors of the matrix below one another, $w$ is a $k \times 1$ vector of
adjustable weights as it is in the primal case, and $\Psi$ is a $(|S||A|)^2 \times k$ matrix of basis functions.
To ensure that $\hat{H}$ remains a nonnegative, row-normalized approximation to $H$, we simply add the
constraints that
$$\hat{H} \in \mathrm{simplex}(\Psi) \equiv \{\hat{H} : \mathrm{vec}(\hat{H}) = \Psi w,\ \Psi \geq 0,\ (\mathbf{1}^\top \otimes I)\Psi = \mathbf{1}\mathbf{1}^\top,\ w \geq 0,\ w^\top \mathbf{1} = 1\},$$
where $\otimes$ is the Kronecker product.
In this section, we first introduce operators (projection and gradient step operators) that ensure the
approximations stay representable in the given basis. Then we consider their composition with the
on-policy and off-policy updates, and analyze their convergence properties. For the composition of
the on-policy update and projection operators, we establish a similar bound on approximation error
in the dual case as in the primal case.
5.1 Projection Operator
Recall that in the primal, the action value function $q$ is approximated by a linear combination of the
bases in $\Phi$. Unfortunately, there is no reason to expect $\mathcal{O}q$ or $\mathcal{M}q$ to stay in the column span of
$\Phi$, so a best approximation is required. The subtlety resolved by Tsitsiklis and Van Roy [7] is to
identify a particular form of best approximation (weighted least squares) that ensures convergence
is still achieved when combined with the on-policy operator $\mathcal{O}$. Unfortunately, the fixed point of this
combined update operator is not guaranteed to be the best representable approximation of $\mathcal{O}$'s fixed
point, $q^*$. Nevertheless, a bound can be proved on how close this altered fixed point is to the best
representable approximation.

We summarize a few details that will be useful below. First, the best least squares approximation is
computed with respect to the distribution $z$. The map from a general $q$ vector onto its best approximation in $\mathrm{colspan}(\Phi)$ is defined by another operator, $\mathcal{P}$, which projects $q$ into the column span of
$\Phi$:
$$\mathcal{P}q = \arg\min_{\hat{q} \in \mathrm{colspan}(\Phi)} \|q - \hat{q}\|_z^2 = \Phi(\Phi^\top Z \Phi)^{-1} \Phi^\top Z q,$$
where $\hat{q}$ is an approximation of the value function $q$. The important property of this weighted projection is that it is a non-expansion operator in $\|\cdot\|_z$, i.e., $\|\mathcal{P}q\|_z \leq \|q\|_z$, which follows from the generalized Pythagorean
theorem. Approximate dynamic programming then proceeds by composing the two operators (the
on-policy update $\mathcal{O}$ with the subspace projection $\mathcal{P}$) to compute the best representable approximation of the one-step update. This combined operator is guaranteed to converge, since the composition of a
non-expansion with a contraction is still a contraction, and its fixed point $q^+$ satisfies $\|q^+ - q^*\|_z \leq \frac{1}{1-\gamma} \|q^* - \mathcal{P}q^*\|_z$ [7].
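In code, the weighted projection is one weighted least-squares solve (our own sketch):

```python
import numpy as np

def project_primal(q, Phi, z):
    """P q = Phi (Phi' Z Phi)^{-1} Phi' Z q, the z-weighted projection.

    A non-expansion in the z-weighted 2-norm, as used by [7].
    """
    Z = np.diag(z)
    w = np.linalg.solve(Phi.T @ Z @ Phi, Phi.T @ Z @ q)
    return Phi @ w
```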
Linear function approximation in the dual case is a bit more complicated, because matrices are being
represented, not vectors, and moreover the matrices need to satisfy row normalization and nonnegativity constraints. Nevertheless, a very similar approach to the primal case can be successfully
applied. Recall that in the dual, the state-action visit distribution $H$ is approximated by a linear
combination of the bases in $\Psi$. As in the primal case, there is no reason to expect that an update like
$\mathcal{O}H$ should keep the matrix in the simplex. Therefore, a projection operator must be constructed
that determines the best representable approximation to $\mathcal{O}H$. One needs to be careful to define
this projection with respect to the right norm to ensure convergence. Here, the pseudo-norm $\|\cdot\|_{z,r}$
defined in Equation 1 suits this purpose. Define the weighted projection operator $\mathcal{P}$ over matrices
$$\mathcal{P}H = \arg\min_{\hat{H} \in \mathrm{simplex}(\Psi)} \|H - \hat{H}\|_{z,r}^2. \qquad (3)$$
The projection can be obtained by solving the above quadratic program. A key result is that this
projection operator is a non-expansion with respect to the pseudo-norm $\|\cdot\|_{z,r}$.
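One generic way to solve the quadratic program (3) is a simplex-constrained least squares over the weights $w$; the sketch below uses scipy's SLSQP solver (an assumed solver choice, not prescribed by the paper), and assumes each basis column of $\Psi$, reshaped to a matrix, is itself nonnegative and row-normalized, so that simplex weights keep $\hat{H}$ feasible:

```python
import numpy as np
from scipy.optimize import minimize

def project_dual(H, Psi, z, r):
    """Sketch of Eq. (3): min_w ||(H - H_hat(w)) r||_z^2 over the simplex,
    where vec(H_hat) = Psi w and H r = (r' kron I) vec(H)."""
    n, k = H.shape[0], Psi.shape[1]
    Lam = np.kron(r.reshape(1, -1), np.eye(n)) @ Psi  # (r' kron I) Psi
    target = H @ r
    obj = lambda w: np.sum(z * (target - Lam @ w) ** 2)
    cons = [{'type': 'eq', 'fun': lambda w: w.sum() - 1.0}]
    res = minimize(obj, np.full(k, 1.0 / k), method='SLSQP',
                   bounds=[(0.0, None)] * k, constraints=cons)
    return (Psi @ res.x).reshape(n, n, order='F')  # undo column-stacking vec
```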
Theorem 1 $\|\mathcal{P}H\|_{z,r} \leq \|H\|_{z,r}$

Proof: The easiest way to prove the theorem is to observe that the projection operator $\mathcal{P}$ is really
a composition of three orthogonal projections: first, onto the linear subspace $\mathrm{span}(\Psi)$; then onto
the subspace of row-normalized matrices $\mathrm{span}(\Psi) \cap \{H : H\mathbf{1} = \mathbf{1}\}$; and finally onto the space of
nonnegative matrices $\mathrm{span}(\Psi) \cap \{H : H\mathbf{1} = \mathbf{1}\} \cap \{H : H \geq 0\}$. Note that the last projection onto
the nonnegative halfspace is equivalent to a projection onto a linear subspace for some hyperplane
tangent to the simplex. Each one of these projections is a non-expansion in $\|\cdot\|_{z,r}$ in the same way:
a generalized Pythagorean theorem holds. Consider just one of these linear projections $\mathcal{P}_1$:
$$\|H\|_{z,r}^2 = \|\mathcal{P}_1 H + H - \mathcal{P}_1 H\|_{z,r}^2 = \|\mathcal{P}_1 Hr + Hr - \mathcal{P}_1 Hr\|_z^2 = \|\mathcal{P}_1 Hr\|_z^2 + \|Hr - \mathcal{P}_1 Hr\|_z^2 = \|\mathcal{P}_1 H\|_{z,r}^2 + \|H - \mathcal{P}_1 H\|_{z,r}^2.$$
Since the overall projection is just a composition of non-expansions, it must be a non-expansion.
As in the primal, approximate dynamic programming can be implemented by composing the on-policy
update $\mathcal{O}$ with the projection operator $\mathcal{P}$. Since $\mathcal{O}$ is a contraction and $\mathcal{P}$ a non-expansion,
$\mathcal{P}\mathcal{O}$ must also be a contraction, and it then follows that it has a fixed point. Note that, as in the tabular
case, this fixed point is only unique up to $Hr$-equivalence, since the pseudo-norm $\|\cdot\|_{z,r}$ does not
distinguish $H_1$ and $H_2$ such that $H_1 r = H_2 r$. Here too, the fixed point is actually a simplex
of equivalent solutions. For simplicity, we denote the simplex of fixed points for $\mathcal{P}\mathcal{O}$ by some
representative $H^+ = \mathcal{P}\mathcal{O}H^+$. Finally, we can recover an approximation bound that is analogous
to the primal bound, which bounds the approximation error between $H^+$ and the best representable
approximation to the on-policy fixed point $H^* = \mathcal{O}H^*$.

Theorem 2 $\|H^+ - H^*\|_{z,r} \leq \frac{1}{1-\gamma} \|\mathcal{P}H^* - H^*\|_{z,r}$

Proof: First note that $\|H^+ - H^*\|_{z,r} = \|H^+ - \mathcal{P}H^* + \mathcal{P}H^* - H^*\|_{z,r} \leq \|H^+ - \mathcal{P}H^*\|_{z,r} + \|\mathcal{P}H^* - H^*\|_{z,r}$ by the generalized Pythagorean theorem. Then, since $H^+ = \mathcal{P}\mathcal{O}H^+$ and $\mathcal{P}$ is a
non-expansion operator, we have $\|H^+ - \mathcal{P}H^*\|_{z,r} = \|\mathcal{P}\mathcal{O}H^+ - \mathcal{P}H^*\|_{z,r} \leq \|\mathcal{O}H^+ - H^*\|_{z,r}$.
Finally, using $H^* = \mathcal{O}H^*$ and Lemma 1, we obtain $\|\mathcal{O}H^+ - H^*\|_{z,r} = \|\mathcal{O}H^+ - \mathcal{O}H^*\|_{z,r} \leq \gamma \|H^+ - H^*\|_{z,r}$. Thus $(1-\gamma)\|H^+ - H^*\|_{z,r} \leq \|\mathcal{P}H^* - H^*\|_{z,r}$.
To compare the primal and dual results, note that despite the similarity of the bounds, the projection
operators do not preserve the tight relationship between primal and dual updates. That is, even if
$(1-\gamma)q = Hr$ and $(1-\gamma)(\mathcal{O}q) = (\mathcal{O}H)r$, it is not true in general that $(1-\gamma)(\mathcal{P}\mathcal{O}q) = (\mathcal{P}\mathcal{O}H)r$.
The most obvious difference comes from the fact that in the dual, the space of $H$ matrices has
bounded diameter, whereas in the primal, the space of $q$ vectors has unbounded diameter in the
natural norms. Automatically, the dual updates cannot diverge with compositions like $\mathcal{P}\mathcal{O}$ and $\mathcal{P}\mathcal{M}$;
yet, in the primal case, the update $\mathcal{P}\mathcal{M}$ is known to not have fixed points in some circumstances [8].
5.2 Gradient Operator
In large scale problems one does not normally have the luxury of computing full dynamic programming updates that evaluate complete expectations over the entire domain, since this requires
knowing the stationary visit distribution z for P ? (essentially requiring one to know the model of
the MDP). Moreover, full least squares projections are usually not practical to compute. A key intermediate step toward practical DP and RL algorithms is to formulate gradient step operators that
only approximate full projections. Conveniently, the gradient update and projection operators are
independent of the on-policy and off-policy updates and can be applied in either case. However, as
we will see below, the gradient update operator causes significant instability in the off-policy update,
to the degree that divergence is a common phenomenon (much more so than with full projections).
Composing approximation with an off-policy update (max operator) in the primal case can be very
dangerous. All other operator combinations are better behaved in practice, and even those that are
not known to converge usually behave reasonably. Unfortunately, composing the gradient step with
an off-policy update is a common algorithm attempted in reinforcement learning (Q-learning with
function approximation), despite being the most unstable.
In the dual representation, one can derive a gradient update operator in a similar way to the primal,
except that it is important to maintain the constraints on the parameters $w$, since the basis functions
are probability distributions. We start by considering the projection objective
$$J_H = \frac{1}{2} \|H - \hat{H}\|_{z,r}^2 \quad \text{subject to} \quad \mathrm{vec}(\hat{H}) = \Psi w,\; w \geq 0,\; w^\top \mathbf{1} = 1.$$
The unconstrained gradient of the above objective with respect to $w$ is
$$\nabla_w J_H = \Psi^\top (r^\top \otimes I)^\top Z (r^\top \otimes I)(\Psi w - h) = \Lambda^\top Z (r^\top \otimes I)(\hat{h} - h),$$
where $\Lambda = (r^\top \otimes I)\Psi$, $h = \mathrm{vec}(H)$, and $\hat{h} = \mathrm{vec}(\hat{H})$. However, this gradient step cannot be
followed directly, because we need to maintain the constraints. The constraint $w^\top \mathbf{1} = 1$ can be
maintained by first projecting the gradient onto it, obtaining $\Delta w = (I - \frac{1}{k}\mathbf{1}\mathbf{1}^\top)\nabla_w J_H$. Thus, the
weight vector can be updated by
$$w_{t+1} = w_t - \alpha\, \Delta w = w_t - \alpha \big(I - \tfrac{1}{k}\mathbf{1}\mathbf{1}^\top\big) \Lambda^\top Z (r^\top \otimes I)(\hat{h} - h),$$
where $\alpha$ is a step-size parameter. The gradient operator can then be defined by
$$\mathcal{G}_{\hat{h}}\, h = \hat{h} - \alpha \Psi\, \Delta w = \hat{h} - \alpha \Psi \big(I - \tfrac{1}{k}\mathbf{1}\mathbf{1}^\top\big) \Lambda^\top Z (r^\top \otimes I)(\hat{h} - h).$$
(Note that to further respect the box constraints $0 \leq h \leq 1$, the step size might need to be reduced,
and additional equality constraints might have to be imposed on some of the components of $h$ that
are at the boundary values.)

Similarly to the primal, since the target vector $H$ (i.e., $h$) is determined by the underlying dynamic
programming update, this gives the composed updates
$$\mathcal{G}\mathcal{O}\hat{h} = \hat{h} - \alpha \Psi \big(I - \tfrac{1}{k}\mathbf{1}\mathbf{1}^\top\big) \Lambda^\top Z (r^\top \otimes I)(\hat{h} - \mathcal{O}\hat{h}) \quad \text{and} \quad \mathcal{G}\mathcal{M}\hat{h} = \hat{h} - \alpha \Psi \big(I - \tfrac{1}{k}\mathbf{1}\mathbf{1}^\top\big) \Lambda^\top (r^\top \otimes I)(\hat{h} - \mathcal{M}\hat{h}),$$
respectively for the on-policy and off-policy cases (ignoring the additional equality constraints).
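A sketch of the on-policy composed update $\mathcal{G}\mathcal{O}$ in this notation (our own code, reusing `O_dual` from the earlier sketch; the step size, bases, and the crude clipping used to keep $w \geq 0$ are all assumptions):

```python
import numpy as np

def grad_step_on_policy(w, Psi, P, Pi, z, r, gamma, alpha):
    """One gradient step on the dual weights w toward O H_hat (sketch)."""
    n = P.shape[0]
    H_hat = (Psi @ w).reshape(n, n, order='F')
    target = O_dual(H_hat, P, Pi, gamma)              # the on-policy target
    Lam = np.kron(r.reshape(1, -1), np.eye(n)) @ Psi  # (r' kron I) Psi
    resid = (H_hat - target) @ r                      # (r' kron I)(h_hat - h)
    grad = Lam.T @ (z * resid)
    grad -= grad.mean()               # project onto the w' 1 = 1 constraint
    w_new = w - alpha * grad
    return np.clip(w_new, 0.0, None)  # crude handling of the w >= 0 bound
```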
Thus far, the dual approach appears to hold an advantage over the standard primal approach, since
convergence holds in every circumstance where the primal updates converge, and yet the dual updates are guaranteed never to diverge because the fundamental objects being represented are normalized probability distributions (i.e., belong to a bounded simplex). We now investigate the convergence properties of the various updates empirically.
6 Experimental Results
To investigate the effectiveness of the dual representations, we conducted experiments on various
domains: randomly synthesized MDPs, Baird's star problem [5], and the mountain car
problem. The randomly synthesized MDP domains allow us to test the general properties of the
algorithms. The star problem is perhaps the most-cited example of a problem where Q-learning
with linear function approximation diverges [5], and the mountain car domain has been prone to
divergence with some primal representations [9], although successful results were reported when
bases are selected by sparse tile coding [10].
For each problem domain, twelve algorithms were run over 100 repeats with a horizon of 1000 steps.
The algorithms were: tabular on-policy ($\mathcal{O}$), projection on-policy ($\mathcal{P}\mathcal{O}$), gradient on-policy ($\mathcal{G}\mathcal{O}$),
tabular off-policy ($\mathcal{M}$), projection off-policy ($\mathcal{P}\mathcal{M}$), and gradient off-policy ($\mathcal{G}\mathcal{M}$), for both the
primal and the dual. The discount factor was set to $\gamma = 0.9$. For on-policy algorithms, we measure
the difference between the values generated by the algorithms and those generated by the analytically
determined fixed point. For off-policy algorithms, we measure the difference between the values
generated by the resulting policy and the values of the optimal policy. The step size for the gradient
updates was 0.1 for primal representations and 100 for dual representations. The initial values of
the state-action value functions $q$ were set according to the standard normal distribution, and the state-action
visit distributions $H$ were chosen uniformly at random with row normalization. Since the goal is to
investigate the convergence of the algorithms without carefully crafting features, we also chose
random basis functions according to a standard normal distribution for the primal representations,
and random basis distributions according to a uniform distribution for the dual representations.
Randomly Synthesized MDPs. For the synthesized MDPs, we generated the transition and reward functions randomly: the transition function is uniformly distributed between 0
and 1 and the reward function is drawn from a standard normal. Here we only report the results
for random MDPs with 100 states, 5 actions, and 10 bases; we observed consistent convergence of the
dual representations on a variety of MDPs with different numbers of states, actions, and bases. In
Figure 1 (right), the curve for the gradient off-policy update ($\mathcal{G}\mathcal{M}$) in the primal case (dotted line
with the circle marker) blows up (diverges), while all the other algorithms in Figure 1 converge.
Interestingly, the approximation error of the dual algorithm $\mathcal{P}\mathcal{O}H$ ($4.60 \times 10^{-3}$) is much smaller than
the approximation error of the corresponding primal algorithm $\mathcal{P}\mathcal{O}q$ ($4.23 \times 10^{-2}$), even though their
theoretical bounds are the same (see Figure 1 (left)).
Figure 1: Updates of state-action value q and visit distribution H on randomly synthesized MDPs. (Two panels, "On-Policy Update on Random MDPs" and "Off-Policy Update on Random MDPs", plot the difference from the reference point against the number of steps, 100 to 1000, for the primal and dual variants of the O, PO, GO, M, PM, and GM updates.)
The Star Problem. The star problem has 7 states and 2 actions. The reward function is zero
for each transition. In these experiments, we used the same fixed policy and linear value function
approximation as in [5]. In the dual, the number of bases is also set to 14 and the initial values of the
state-action visit distribution matrix H are uniformly distributed random numbers between 0 and 1
with row normalization. The gradient off-policy update in the primal case diverges (see the dotted
line with the circle marker in Figure 2(right)). However, all the updates with the dual representation
algorithms converge.
Figure 2: Updates of state-action value q and visit distribution H on the star problem. (Two panels, "On-Policy Update on Star Problem" and "Off-Policy Update on Star Problem", plot the difference from the reference point against the number of steps, 100 to 1000, for the same twelve algorithms as Figure 1.)
The Mountain Car Problem The mountain car domain has continuous state and action spaces,
which we discretized with a simple grid, resulting in an MDP with 222 states and 3 actions. The
number of bases was chosen to be 5 for both the primal and dual algorithms. For the same reason
as before, we chose the bases for the algorithms randomly. In the primal representations with linear
function approximation, we randomly generated basis functions according to the standard normal
distribution. In the dual representations, we randomly picked the basis distributions according to
the uniform distribution. In Figure 3(right), we again observed divergence of the gradient off-policy
update on state-action values in the primal, and the convergence of all the dual algorithms (see Figure
3). Again, the approximation error of the projected on-policy update POH in the dual (1.90?10 1 )
is also considerably smaller than POq (3.26?102 ) in the primal.
On?Policy Update on Mountain Car
Off?Policy Update on Mountain Car
10
10
10
10
Oq
Mq
PMq
G Oq
Difference from Reference Point
Difference from Reference Point
POq
5
10
OH
POH
G OH
0
10
?5
10
?10
10
G Mq
5
10
MH
PMH
G MH
0
10
?5
10
?10
100
200
300
400
500
600
Number of Steps
700
800
900
1000
10
100
200
300
400
500
600
700
800
900
1000
Number of Steps
Figure 3: Updates of state-action value q and visit distribution H on the mountain car problem
7
Conclusion
Dual representations maintain an explicit representation of visit distributions as opposed to value
functions [4]. We extended the dual dynamic programming algorithms with linear function approximation, and studied the convergence properties of the dual algorithms for planning in MDPs.
We demonstrated that dual algorithms, since they are based on estimating normalized probability
distributions rather than unbounded value functions, avoid divergence even in the presence of approximation and off-policy updates. Moreover, dual algorithms remain stable in situations where
standard value function estimation diverges.
References
[1] M. Puterman. Markov Decision Processes: Discrete Dynamic Programming. Wiley, 1994.
[2] D. Bertsekas. Dynamic Programming and Optimal Control, volume 2. Athena Scientific, 1995.
[3] D. Bertsekas and J. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.
[4] T. Wang, M. Bowling, and D. Schuurmans. Dual representations for dynamic programming and reinforcement learning. In Proceeding of the IEEE International Symposium on ADPRL, pages 44?51, 2007.
[5] L. C. Baird. Residual algorithms: Reinforcement learning with function approximation. In International
Conference on Machine Learning, pages 30?37, 1995.
[6] R. Sutton and A. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[7] J. Tsitsiklis and B. Van Roy. An analysis of temporal-difference learning with function approximation.
IEEE Trans. Automat. Control, 42(5):674?690, 1997.
[8] D. de Farias and B. Van Roy. On the existence of fixed points for approximate value iteration and
temporal-difference learning. J. Optimization Theory and Applic., 105(3):589?608, 2000.
[9] J. A. Boyan and A. W. Moore. Generalization in reinforcement learning: Safely approximating the value
function. In NIPS 7, pages 369?376, 1995.
[10] R. S. Sutton. Generalization in reinforcement learning: Successful examples using sparse coarse coding.
In Advances in Neural Information Processing Systems, pages 1038?1044, 1996.
| 3179 |@word norm:20 open:1 crucially:1 contraction:12 automat:1 boundedness:1 initial:3 daniel:1 interestingly:2 current:4 yet:2 must:6 drop:1 update:54 stationary:6 greedy:2 alone:1 selected:1 coarse:1 unbounded:2 constructed:1 direct:1 symposium:1 khk:1 prove:2 introduce:1 theoretically:1 expected:1 behavior:2 p1:4 nor:2 planning:3 discretized:1 bellman:6 discounted:3 alberta:1 automatically:1 considering:2 begin:1 project:1 bounded:3 moreover:4 underlying:1 estimating:1 easiest:1 mountain:7 argmin:1 affirmative:1 maxa:4 q2:3 pseudo:5 temporal:2 every:1 safely:1 stateaction:1 pmh:3 scaled:2 control:2 normally:2 bertsekas:2 before:1 despite:3 sutton:2 establishing:2 might:2 chose:1 au:1 studied:1 equivalence:2 limited:1 unique:4 practical:2 practice:1 implement:1 projection:26 cannot:4 close:1 onto:5 operator:42 risk:1 context:1 instability:1 equivalent:4 map:3 demonstrated:2 imposed:1 go:1 attention:1 starting:1 convex:1 formulate:1 simplicity:1 contradiction:1 oh:14 mq:14 hkz:5 notion:1 traditionally:1 analogous:1 updated:1 target:2 gm:2 ualberta:1 programming:17 us:1 roy:3 satisfying:2 approximated:2 updating:1 onpolicy:1 observed:3 wang:3 verifying:1 mhr:3 ensures:1 environment:2 reward:8 dynamic:16 solving:3 tight:1 creates:1 basis:10 triangle:1 farias:1 easily:6 mh:10 resolved:1 po:4 represented:3 various:2 kp:4 whose:1 widely:1 solve:1 otherwise:2 ability:1 advantage:2 product:2 kh:10 convergence:25 r1:1 diverges:4 converges:2 object:2 derive:2 develop:1 fixing:1 sa:22 strong:1 recovering:1 c:1 involves:1 implemented:1 australian:1 come:1 duals:1 adprl:1 suffices:1 generalization:2 really:1 preliminary:1 hold:6 normal:4 major:1 purpose:1 estimation:1 successfully:1 weighted:8 mit:1 clearly:1 always:3 rather:1 avoid:2 barto:1 focus:1 improvement:2 hk:2 lizotte:1 sense:1 typically:1 entire:1 a0:9 relation:1 tao:2 arg:3 dual:56 among:2 overall:1 once:1 never:1 tabular:5 simplex:10 few:1 randomly:9 kp1:4 composed:1 preserve:1 national:1 divergence:5 replaced:1 maintain:4 suit:1 luxury:1 investigate:6 evaluation:1 poh:8 primal:46 closer:3 necessary:1 orthogonal:1 desired:1 goh:1 circle:2 pmq:3 theoretical:1 column:4 poq:6 stacking:1 entry:2 kq:3 uniform:2 successful:2 inadequate:1 conducted:1 too:1 reported:2 considerably:1 combined:4 cited:1 fundamental:2 twelve:1 international:2 stay:2 off:20 diverge:5 michael:1 analogously:1 quickly:1 again:3 successively:1 opposed:1 choose:1 tile:1 return:1 de:1 blow:1 star:7 coding:2 baird:2 satisfy:2 view:1 h1:6 picked:1 analyze:3 start:1 recover:2 complicated:1 halfspace:1 kph:3 square:3 oh2:2 correspond:3 kh1:2 identify:1 suffers:1 definition:2 obvious:1 proof:5 newly:1 proved:3 recall:2 car:7 carefully:1 actually:1 appears:2 specify:1 improved:1 formulation:1 box:1 though:1 furthermore:1 just:3 working:1 nonlinear:1 khr:1 marker:2 defines:1 perhaps:1 behaved:1 scientific:2 mdp:5 concept:1 normalized:7 verify:1 counterpart:1 true:1 equality:3 analytically:1 requiring:1 laboratory:1 moore:1 puterman:1 bowling:3 maintained:1 generalized:3 complete:2 demonstrate:1 bring:3 gh:1 novel:1 recently:2 common:2 empirically:2 rl:6 volume:1 belong:2 synthesized:5 significant:1 composition:5 vec:6 unconstrained:1 grid:1 pm:3 similarly:1 pq:2 stable:2 specification:1 access:1 similarity:1 base:10 add:1 dominant:1 showed:1 scenario:1 inequality:1 affiliation:1 additional:2 r0:1 converge:9 maximize:1 full:4 visit:13 neuro:1 circumstance:3 expectation:2 essentially:1 iteration:1 represent:1 normalization:3 achieved:1 whereas:1 argminq:1 unlike:1 induced:3 
subject:1 oq:10 effectiveness:1 presence:2 intermediate:1 variety:1 thet:1 restrict:1 reduce:1 idea:1 knowing:1 whether:2 kq1:2 cause:2 action:32 useful:2 discount:2 ph:6 induces:1 diameter:2 reduced:1 dotted:2 discrete:1 key:2 nevertheless:2 drawn:1 neither:1 verified:1 kqk:1 run:1 oscillation:1 decision:3 bit:1 entirely:1 bound:8 guaranteed:3 distinguish:1 tackled:1 followed:1 quadratic:1 nonnegative:3 dangerous:1 constraint:9 kronecker:1 answered:1 argument:2 span:8 sa0:1 department:1 according:5 combination:4 representable:6 smaller:2 remain:1 lp:4 appealing:1 making:2 projecting:1 restricted:1 koh:3 equation:6 remains:2 know:1 apply:1 observe:1 stepsize:1 existence:2 ensure:3 maintaining:2 giving:1 k1:1 establish:6 approximating:1 classical:1 crafting:1 objective:2 question:2 strategy:3 rt:1 dependence:1 exhibit:2 gradient:18 dp:9 subspace:5 athena:2 considers:2 unstable:1 reason:3 toward:1 relationship:3 unfortunately:4 proper:2 policy:77 adjustable:2 markov:2 behave:1 defining:1 extended:1 situation:1 introduced:1 pair:3 namely:1 specified:1 required:1 established:1 applic:1 nip:1 trans:1 proceeds:2 below:5 usually:2 summarize:1 program:1 max:12 including:1 natural:2 boyan:1 hr:10 residual:1 altered:1 mdps:10 tangent:1 expect:2 h2:7 degree:1 consistent:1 s0:10 famously:1 row:8 prone:1 repeat:1 last:1 tsitsiklis:3 jh:3 allow:1 taking:1 emerge:1 sparse:2 van:3 distributed:2 boundary:1 curve:1 transition:9 kz:27 dale:2 reinforcement:8 projected:1 far:1 approximate:7 keep:1 maxnorm:1 assumed:1 continuous:1 zq:2 reasonably:1 ca:1 composing:5 ignoring:1 obtaining:1 schuurmans:2 expansion:10 domain:6 diag:1 did:1 repeated:3 representative:1 wiley:1 nonnegativity:2 explicit:2 col:3 theorem:8 specific:2 explored:1 r2:2 incorporating:2 exists:2 sequential:1 magnitude:2 anu:1 horizon:2 simply:3 forming:1 conveniently:1 expressed:7 subtlety:2 applies:1 environmental:1 satisfies:6 determines:1 goal:4 consequently:1 careful:1 infinite:1 determined:4 except:1 uniformly:3 hyperplane:1 wt:3 lemma:3 duality:1 experimental:1 attempted:1 select:1 pythagorean:3 evaluate:1 phenomenon:1 |
2,402 | 318 | Evaluation of Adaptive Mixtures
of Competing Experts
Steven J. Nowlan and Geoffrey E. Hinton
Computer Science Dept.
University of Toronto
Toronto, ONT M5S 1A4
Abstract
We compare the performance of the modular architecture, composed of
competing expert networks, suggested by Jacobs, Jordan, Nowlan and
Hinton (1991) to the performance of a single back-propagation network
on a complex, but low-dimensional, vowel recognition task. Simulations
reveal that this system is capable of uncovering interesting decompositions
in a complex task. The type of decomposition is strongly influenced by
the nature of the input to the gating network that decides which expert
to use for each case. The modular architecture also exhibits consistently
better generalization on many variations of the task.
1
Introduction
If back-propagation is used to train a single, multilayer network to perform different
subtasks on different occasions, there will generally be strong interference effects
which lead to slow learning and poor generalization. If we know in advance that a set
of training cases may be naturally divideJ into subsets that correspond to distinct
subtasks, interference can be reduced by using a system (see Fig. 1) composed of
several different "expert" networks plus a gating network that decides which of the
experts should be used for each training case.
Systems of this type have been suggested by a number of authors (Hampshire and
Waibel, 1989; Jacobs, Jordan and Barto, 1990; Jacobs et al., 1991) (see also the
paper by Jacobs and Jordan in this volume (1991?. Jacobs, Jordan, Nowlan and
Hinton (1991) show that this system can be trained by performing gradient descent
774
Evaluation of Adaptive Mixtures of Competing Experts
-10
O2
Expert 1
t
Expert 2
x1 x 2 x3
Expert 3
Gating
Network
t
Intut~
Input
Figure 1: A system of expert and gating networks. Each expert is a feedforward
network and all experts receive the same input and have the same number of outputs. The gating network is also feedforward and may receive a different input than
the expert networks. It has normalized outputs Pj
exp(xj)/ L:i exp(xd, where
Xj is the total weighted input received by output unit j of the gating network. Pj
can be viewed as the probability of selecting expert j for a particular case.
=
in the following error function:
E C = _logLvie-lIdc-o,cIl2/2Q'2
(1)
where E C is the error on training case c, pi is the output of the gating network for
expert i, lc is the desired output vector and o{ is the output vector of expert i, and
u is constant.
The error defined by Equation 1 is simply the negative log probability of generating
the desired output vector under a mixture of gaussians model of the probability
distribution of possible output vectors given the current input. The output vector
of each expert specifies the mean of a multidimensional gaussian distribution. These
means are a function of the inputs to the experts. The outputs of the gating network
specify the mixing proportions of the experts, so these too are determined by the
current input.
During learning, the gradient descent in E has two effects. It raises the mixing
proportion of experts that do better than average in predicting the desired output
vector for a particular case, and it also makes each expert better at predicting the
desired output for those cases for which it has a high mixing proportion. The result
of these two effects is that, after learning, the gating network nearly always assigns
a mixing proportion near 1 to one expert on each case. So towards the end of
the learning, each expert can focus on modelling the cases it is good at without
interference from the cases for which it has a negligible mixing proportion.
775
776
Nowlan and Hinton
In this paper, we compare mixtures of experts to single back-propagation networks
on a vowel recognition task. We demonstrate that the mixtures are better at
fitting the training data and better at generalizing than comparable single backpropagation networks.
2
Data and Experimental Procedures
The data used in these experiments consisted of the frequencies of the first and
second formants for 10 vowels from 75 speakers (32 Males, 28 Females, and 15
Children) (Peterson and Barney, 1952).1 The vowels, which were uttered in an
hVd context, were {heed, hid, head, had, hud, hod, hawed, hood, who'd, heard}. The
word list was repeated twice by each speaker, with the words in a different random
order for each presentation. The resulting spectrograms were hand segmented and
the frequencies of the formants extracted from the middle portion of the vowel.
The simulations were performed using a conjugate gradient technique, with one
weight change after each pass through the training set. For the back-propagation
experiments, each simulation was initialised randomly with weight values in the
range [-0.5,0.5]. For the mixture systems, the last layer of weights in the gating
network was always initialised to 0 so that all experts initially had equal a priori
selection probabilities, Pi,k, while all other weights in the gating and expert networks
were initialized randomly with values in the range [-0.5,0.5] to break symmetry.
The value of u used was 0.25 for all of the mixture simulations. In all cases, the
input formant values were linearly scaled by dividing them by 1000, so the first
formant was in the range (0,1.5) and the second was in the range (0,4).
Two sets of experiments were performed: one in which the performance of different
systems on the training data was compared and a second in which the ability of
different systems to generalize was compared.
Five different types of input were used in each set of experiments:
1. Frequencies of first and second formants only (Form.).
2. Form. plus a localist encoding of the speaker identity (Form.
+ Speaker ID).
3. Form. plus a localist encoding of whether the speaker was a male, female, or
child (Form. + MFC).
4. Form. plus the minimum and maximum frequency for the first and second
formant (as real values) over all samples from the speaker (Form. + Range).
5. Form.
+
MFC
+
Range.
For the simulations in which a single back-propagation network was used the network received the entire set of input values. However, for the mixture systems the
expert networks saw only the formant frequencies, while the gating network saw
everything but the formant frequencies (except of course when the input consisted
only of the formant frequencies).
1 Obtained, with thanks, from Ray Watrous, who originally obtained the data from Ann
Syrdal at AT&T Bell Labs.
Evaluation of Adaptive Mixtures of Competing Experts
Type of Input
Form.
Form. + Speaker ID
Form. + MFC
Form. + MFC + Range
Form. + Range
#
Experts
20
10
10
10
10
#
Hid per Expert
3-5
25
25
25
25
#
Hid Gating
10
0
0
5
5
Table 1: Summary of mixture architecture used with each type of input.
Type of Input
Formants only
Form. + Speaker ID
Form. + MFC
Form. + MFC + Range
Form. + Range
Mixture Error %
13.9 ? 0.9
4.6 ? 0.7
13.0 ? 0.4
5.6 ? 0.6
11.6 ? 0.9
BP Error %
21.8 ? 0.6
6.2 ? 0.6
15.4 ? 0.3
13.1 ? 1.0
13.5 ? 0.4
Sig.(p)
? 0.9999
> 0.97
? 0.9999
~ 0.9999
> 0.998
Table 2: Performance comparison of associative mixture systems and single backpropagation networks on vowel classification task. Results reported are based on an
average over 25 simulations for each back-propagation network or mixture system.
The BP networks used in the single network simulations contained one layer of
hidden units. 2 In the mixture systems, the expert networks also contained one
layer of hidden units although the number of hidden units in each expert varied.
The gating network in some cases contained hidden units, while in other cases it
did not (see Table 1). Further details of the simulations may be found in (Nowlan,
1991).
3
Results of Performance Studies
In the set of performance experiments, each system was trained with the entire set
of 1494 tokens until the magnitude of the gradient vector was < 10- 8 . The error
rate (as percent of total cases) was evaluated on the training data (generalization
studies are described in the next section). The very high degree of class overlap in
this task makes it extremely difficult to find good solutions with a gradient descent
procedure and this is reflected by the far from optimal average performance of all
systems on the training data (see Table 2). For purposes of comparison, the best
performance ever obtained on this vowel data using speaker dependant classification
methods is about 2.5% (Gerstman, 1968; Watrous, 1990).
Table 2 reveals that in every case the mixture system performs significantly better 3
than a single network given the same input. The most striking, and interesting,
2The number of hidden units was selected by performing a number of initial simulations
with different numbers of hidden units for each network and choosing the smallest number
which gave near optimal performance. These numbers were 50, 150, 60, 150, and 80
respectively for the five types of input listed above.
3Based on a t-test with 48 degrees of freedom.
777
778
Nowlan and Hinton
Spec.
0
4
5
7
8
9
#
% Male
% Female
% Child
% Total
0.0
3.1
84.4
9.4
3.1
0.0
0.0
3.6
17.8
7.1
42.9
28.6
6.7
0.0
0.0
6.7
0.0
86.7
1.3
2.7
42.7
8.0
17.3
28.0
Table 3: Speaker decomposition in terms of Male, Female and Child categories for
a mixture with speaker identity as input to the gating network.
result in Table 2 is contained in the fourth row of the table. While the associative
mixture architecture is able to combine the two separate cues of MFC categories and
speaker formant range quite effectively, the single back-propagation network fails
to do so. The combination of these two different cues in the associative mixture
system was obtained by a hierarchical training procedure in which three different
experts were first created using the MFC cue alone, and copies of these networks
were further specialized when the formant range cue was added to the input received
by the gating network (see (Nowlan, 1990; Nowlan, 1991) for details). Since the
single back-propagation network is much less modular than the associative mixture
system, it is difficult to implement such a hierarchical training procedure in the
single network case. (A variety of techniques were explored and details may again
be found in (Nowlan, 1991).)
Another interesting aspect of the mixture systems, not revealed in Table 2, is the
manner in which the training cases were divided among the different expert networks. Once the network was trained, the training cases were clustered by assigning
each case to the expert that was selected most strongly by the gating network.
The mixture which used only the formant frequencies as input to both the gating
and expert networks tended to cluster training cases according to the position of
the tongue hump when the vowel is uttered. In all simulations, the four front vowels
were always clustered together and handled by a single expert. The low back and
high back vowels also tended to be grouped together, but each of these groups was
divided among several experts and not always in exactly the same way.
The mixture which received speaker identity as well as formant frequencies as input
tended to group speakers roughly according to the categories male, female, and
child. A typical grouping of speakers by the mixture is shown in Table 3.
4
Results of Generalization Studies
In the set of generalization experiments, for all but the input which contained
the speaker identity, each system was trained on data from 65 speakers until the
magnitude of the gradient vector was < 10- 4 . The performance was then tested
on the data from the 10 speakers not in the training set. Twenty different test sets
were created by leaving out different speakers for each, and results are an average
over one simulation with each of the test sets. Each test set consisted of 4 male, 3
Evaluation of Adaptive Mixtures of Competing Experts
Type of Input
Formants only
Form. + Speaker ID
Form. + MFC
Form. + MFC + Range
Form. + Range
Mixture Error %
15.1 ? 0.9
6.4 ? 1.3
13.5 ? 0.6
6.2 ? 0.9
12.8 ? 0.9
BP Error %
23.3 ? 1.2
18.4 ? 1.1
16.1 ? 1.0
16.2 ? 0.8
Sig.(p)
0.9999
0.9999
? 0.9999
~ 0.9999
> 0.9999
~
~
Table 4: Generalization comparison of associative mixture systems and single backpropagation networks on vowel classification task. Results reported are based on an
average over 20 simulations for each back- propagation network or mixture system.
female and 3 child speakers.
The generalization tests for the mixture in which speaker identity was part of the
input used a different testing strategy. In this case, the training set consisted of 70
speakers and the testing set contained the remaining 5 speakers (2 male, 2 female, 1
child). Again, results are averaged over 20 different testing sets. After the mixture
was trained, an expert was selected for each test speaker using one utterance of each
of the first 3 vowels, and the performance of the selected expert was tested on the
remaining 17 utterances of that speaker. No generalization results are reported for
the single back-propagation network which received the speaker identity as well as
the first and second formant values, since there is no straightforward way to perform
rapid speaker adaptation with this architecture. (See Watrous (Watrous, 1990) for
some approaches to speaker adaptation in single networks.) The percentage of
misclassifications on the test set for the mixture systems and corresponding single
back-propagation networks are summarized in Table 4, and in all cases the mixture
system generalizes significantly better 4 than a single network.
The relatively poor generalization performance of the single back-propagation networks is not due to overfitting on the training data because the single backpropagation networks perform worse on the training data than the mixture systems
on the test data. Also, the associative mixture systems initially contained even
more parameters than the corresponding back-propagation networks. (The associative mixture which received formant range data for gating input initially contained
almost 3600 parameters, while the corresponding single back-propagation network
contained only slightly more than 1200 parameters.) Part of the explanation for the
good generalization performance of the mixt ures is the pruning of excess parameters
as the system is trained. The number of effective parameters in the final mixture is
very often less than half the number in the original system, because a large number
of experts have negligible mixing proportions in the final mixture.
5
Discussion
The mixture systems outperform single back-propagation networks which receive
the same input, and show much better generalization properties when forced to
deal with relatively small training sets . In addition, the mixtures can easily be
4Based on a t-test with 38 degrees of freedom.
779
780
Nowlan and Hinton
refined hierarchically by learning a few experts and then making several copies of
each and adding additional contextual input to the gating network.
The best performance for either single networks or mixture systems is obtained
by including the speaker identity as part of the input. When given such input,
the mixture systems are capable of discovering speaker categories which give levels
of classification performance close to those obtained by speaker dependent classification schemes. Good performance can also be obtained on novel speakers by
determining which existing speaker category the new speaker is most similar to (using a small number oflabelled utterances). If, instead, the speaker is represented in
terms of features such as male, female, child, and formant range, the mixtures also
exhibit good generalization to novel speakers described in terms of these features.
Acknow ledgements
This research was supported by grants from the Natural Sciences and Engineering
Research Council, the Ontario Information Technology Research Center, and Apple
Computer Inc. Hinton is the Norand a fellow of the Canadian Institute for Advanced
Research.
References
Gerstman, L. J. (1968). Classification of self-normalized vowels. IEEE Trans. on
Audio and Electroacoustics, AU-16(1 ):78-80.
Hampshire, J. and Waibel, A. (1989). The Meta-Pi network: Building distributed
knowledge representations for robust pattern recognition. Technical Report
CMU-CS-89-166, Carnegie-Mellon, Pittsburgh, PA.
Jacobs, R. A. and Jordan, M. I. (1991). A competitive modular connectionist architecture. In Touretzky, D. S., editor, Neural Information Processing Systems
3. Morgan Kauffman, San Mateo, CA.
Jacobs, R. A., Jordan, M. I., and Barto, A. G. (1990). Task decomposition through
competition in a modular connectionist architecture: The what and where
vision tasks. Cognitive Science. In Press.
Jacobs, R. A., Jordan, M. I., Nowlan, S. J., and Hinton, G. E. (1991). Adaptive
mixtures of local experts. Neural Computation, 3(1).
Nowlan, S. J. (1990). Competing experts: An experimental investigation of asssociative mixture models. Technical Report CRG-TR-90-5, Department of Computer Science, University of Toronto.
Nowlan, S. J. (1991). Soft Competitive Adaptation: Neural Network Learning Algorithms based on Fitting Statistical Mixtures. PhD thesis, School of Computer
Science, Carnegie Mellon University, Pittsburgh, PA.
Peterson, G. E. and Barney, H. L. (1952). Control methods used in a study of
vowels. The Journal of the Acoustical Society of America, 24:175-184.
Watrous, R. L. (1990). Speaker normalization and adaptation using second order connectionist networks. Technical Report CRG-TR-90-6, University of
Toronto.
| 318 |@word middle:1 proportion:6 simulation:12 jacob:8 decomposition:4 tr:2 barney:2 initial:1 selecting:1 o2:1 existing:1 current:2 contextual:1 nowlan:13 assigning:1 alone:1 spec:1 selected:4 cue:4 half:1 discovering:1 toronto:4 five:2 fitting:2 combine:1 ray:1 manner:1 rapid:1 roughly:1 formants:5 ont:1 what:1 watrous:5 fellow:1 every:1 multidimensional:1 xd:1 exactly:1 scaled:1 control:1 unit:7 grant:1 negligible:2 engineering:1 local:1 encoding:2 id:4 plus:4 twice:1 au:1 mateo:1 range:16 averaged:1 hood:1 testing:3 implement:1 x3:1 backpropagation:4 procedure:4 bell:1 significantly:2 word:2 close:1 selection:1 context:1 center:1 uttered:2 straightforward:1 assigns:1 variation:1 sig:2 pa:2 recognition:3 steven:1 trained:6 raise:1 easily:1 represented:1 america:1 train:1 distinct:1 forced:1 effective:1 choosing:1 refined:1 quite:1 modular:5 ability:1 formant:13 final:2 associative:7 adaptation:4 hid:3 mixing:6 ontario:1 competition:1 cluster:1 generating:1 school:1 received:6 strong:1 dividing:1 c:1 everything:1 generalization:12 clustered:2 investigation:1 crg:2 exp:2 smallest:1 purpose:1 saw:2 council:1 grouped:1 weighted:1 gaussian:1 always:4 barto:2 focus:1 consistently:1 modelling:1 dependent:1 entire:2 initially:3 hidden:6 uncovering:1 classification:6 among:2 priori:1 equal:1 once:1 nearly:1 report:3 connectionist:3 few:1 hawed:1 randomly:2 composed:2 vowel:14 freedom:2 hump:1 evaluation:4 male:8 mixture:44 capable:2 initialized:1 desired:4 tongue:1 soft:1 localist:2 subset:1 too:1 front:1 reported:3 thanks:1 together:2 again:2 thesis:1 worse:1 cognitive:1 expert:45 summarized:1 inc:1 performed:2 break:1 hud:1 lab:1 portion:1 competitive:2 who:2 correspond:1 generalize:1 apple:1 m5s:1 influenced:1 tended:3 touretzky:1 frequency:9 initialised:2 naturally:1 knowledge:1 back:17 originally:1 reflected:1 specify:1 evaluated:1 strongly:2 until:2 hand:1 propagation:15 dependant:1 reveal:1 building:1 effect:3 normalized:2 consisted:4 deal:1 during:1 self:1 speaker:38 occasion:1 demonstrate:1 performs:1 percent:1 novel:2 mixt:1 specialized:1 heed:1 volume:1 mellon:2 had:2 mfc:10 female:8 meta:1 morgan:1 minimum:1 additional:1 spectrogram:1 segmented:1 technical:3 divided:2 multilayer:1 vision:1 cmu:1 normalization:1 receive:3 addition:1 ures:1 leaving:1 jordan:7 near:2 feedforward:2 revealed:1 canadian:1 variety:1 xj:2 gave:1 misclassifications:1 architecture:7 competing:6 whether:1 handled:1 generally:1 heard:1 listed:1 category:5 reduced:1 specifies:1 outperform:1 percentage:1 per:1 ledgements:1 carnegie:2 group:2 four:1 pj:2 fourth:1 striking:1 almost:1 comparable:1 layer:3 bp:3 aspect:1 extremely:1 performing:2 relatively:2 department:1 according:2 waibel:2 combination:1 poor:2 conjugate:1 slightly:1 making:1 interference:3 equation:1 know:1 electroacoustics:1 end:1 generalizes:1 gaussians:1 hierarchical:2 original:1 remaining:2 a4:1 society:1 added:1 strategy:1 exhibit:2 gradient:6 separate:1 acoustical:1 difficult:2 acknow:1 negative:1 twenty:1 perform:3 descent:3 hinton:8 ever:1 head:1 varied:1 subtasks:2 trans:1 able:1 suggested:2 pattern:1 kauffman:1 including:1 explanation:1 overlap:1 natural:1 predicting:2 advanced:1 scheme:1 technology:1 created:2 utterance:3 determining:1 interesting:3 geoffrey:1 degree:3 editor:1 pi:3 row:1 course:1 summary:1 token:1 supported:1 last:1 copy:2 institute:1 peterson:2 distributed:1 author:1 adaptive:5 san:1 far:1 excess:1 pruning:1 decides:2 reveals:1 overfitting:1 pittsburgh:2 table:12 nature:1 robust:1 ca:1 symmetry:1 complex:2 did:1 
hierarchically:1 linearly:1 child:8 repeated:1 x1:1 fig:1 slow:1 lc:1 fails:1 position:1 gating:20 list:1 explored:1 grouping:1 adding:1 effectively:1 phd:1 magnitude:2 hod:1 generalizing:1 simply:1 contained:9 hvd:1 extracted:1 viewed:1 presentation:1 identity:7 ann:1 towards:1 change:1 determined:1 except:1 typical:1 hampshire:2 total:3 pas:1 experimental:2 dept:1 audio:1 tested:2 |
2,403 | 3,180 | How SVMs can estimate quantiles and the median
Ingo Steinwart
Information Sciences Group CCS-3
Los Alamos National Laboratory
Los Alamos, NM 87545, USA
[email protected]
Andreas Christmann
Department of Mathematics
Vrije Universiteit Brussel
B-1050 Brussels, Belgium
[email protected]
Abstract
We investigate quantile regression based on the pinball loss and the ?-insensitive
loss. For the pinball loss a condition on the data-generating distribution P is
given that ensures that the conditional quantiles are approximated with respect to
k ? k1 . This result is then used to derive an oracle inequality for an SVM based
on the pinball loss. Moreover, we show that SVMs based on the ?-insensitive loss
estimate the conditional median only under certain conditions on P .
1
Introduction
Let P be a distribution on X ? Y , where X is an arbitrary set and Y ? R is closed. The goal of
quantile regression is to estimate the conditional quantile, i.e., the set valued function
?
F?,P
(x) := t ? R : P (??, t] | x ? ? and P [t, ?) | x ? 1 ? ? , x ? X,
where ? ? (0, 1) is a fixed constant and P( ? | x), x ? X, is the (regular) conditional probability. For
conceptual simplicity (though mathematically this is not necessary) we assume throughout this paper
?
?
that F?,P
(x) consists of singletons, i.e., there exists a function f?,P
: X ? R, called the conditional
?
?
? -quantile function, such that F?,P (x) = {f?,P (x)}, x ? X. Let us now consider the so-called
? -pinball loss L? : R ? R ? [0, ?) defined by L? (y, t) := ?? (y ? t), where ?? (r) = (? ? 1)r, if
r < 0, and ?? (r) = ? r, if r ? 0. Moreover, given a (measurable) function f : X ? R we define the
?
L? -risk of f by RL? ,P (f ) := E(x,y)?P L? (y, f (x)). Now recall that f?,P
is up to zero sets the only
?
function that minimizes the L? -risk, i.e. RL? ,P (f?,P ) = inf RL? ,P (f ) =: R?L? ,P , where the infimum is taken over all f : X ? R. Based on this observation several estimators minimizing a (modified) empirical L? -risk were proposed (see [5] for a survey on both parametric and non-parametric
methods) for situations where P is unknown, but i.i.d. samples D := ((x1 , y1 ), . . . , (xn , yn )) drawn
from P are given. In particular, [6, 4, 10] proposed an SVM that finds a solution fD,? ? H of
n
1X
arg min ?kf k2H +
L? (yi , f (xi )) ,
(1)
f ?H
n i=1
where ? > 0 is a regularization parameter and H is a reproducing kernel Hilbert space (RKHS) over
X. Note that this optimization problem can be solved by considering the dual problem [4, 10], but
since this technique is nowadays standard in machine learning we omit the details. Moreover, [10]
contains an exhaustive empirical study as well some theoretical considerations.
Empirical methods estimating quantiles with the help of the pinball loss typically obtain functions
fD for which RL? ,P (fD ) is close to R?L? ,P with high probability. However, in general this only
?
implies that fD is close to f?,P
in a very weak sense (see [7, Remark 3.18]), and hence there is so
far only little justification for using fD as an estimate of the quantile function. Our goal is to address
this issue by showing that under certain realistic assumptions on P we have an inequality of the form
q
?
(2)
kf ? f?,P
kL1 (PX ) ? cP RL? ,P (f ) ? R?L? ,P .
We then use this inequality to establish an oracle inequality for SVMs defined by (1). In addition,
we illustrate how this oracle inequality can be used to obtain learning rates and to justify a datadependent method for finding the hyper-parameter ? and H. Finally, we generalize the methods for
establishing (2) to investigate the role of ? in the ?-insensitive loss used in standard SVM regression.
2
Main results
In the following X is an arbitrary, non-empty set equipped with a ?-algebra, and Y ? R is a closed
non-empty set. Given a distribution P on X ? Y we further assume throughout this paper that the
?-algebra on X is complete with respect to the marginal distribution PX of P, i.e., every subset of a
PX -zero set is contained in the ?-algebra. Since the latter can always be ensured by increasing the
original ?-algebra in a suitable manner we note that this is not a restriction at all.
Definition 2.1 A distribution Q on R is said to have a ? -quantile of type ? > 0 if there exists a
? -quantile t? ? R and a constant cQ > 0 such that for all s ? [0, ?] we have
Q (t? , t? + s) ? cQ s
and
Q (t? ? s, t? ) ? cQ s .
(3)
It is not difficult to see that a distribution Q having a ? -quantile of some type ? has a unique ? quantile t? . Moreover, if Q has a Lebesgue density hQ then Q has a ? -quantile of type ? if hQ is
bounded away from zero on [t? ??, t? +?] since we can use cQ := inf{hQ (t) : t ? [t? ??, t? +?]}
in (3). This assumption is general enough to cover many distributions used in parametric statistics
such as Gaussian, Student?s t, and logistic distributions (with Y = R), Gamma and log-normal
distributions (with Y = [0, ?)), and uniform and Beta distributions (with Y = [0, 1]).
The following definition describes distributions on X ? Y whose conditional distributions P( ? |x),
x ? X, have the same ? -quantile type ?.
Definition 2.2 Let p ? (0, ?], ? ? (0, 1), and ? > 0. A distribution P on X ?Y is said to have a
? -quantile of p-average type ?, if Qx := P( ? |x) has PX -almost surely a ? -quantile type ? and b :
X ? (0, ?) defined by b(x) := cP( ? |x) , where cP( ? |x) is the constant in (3), satisfies b?1 ? Lp (PX ).
Let us now give some examples for distributions having ? -quantiles of p-average type ?.
Example 2.3 Let P be a distribution
on X ? R with marginal distribution PX and regular condi
tional probability Qx (??, y] := 1/(1+e?z ), y ? R, where z := y?m(x) /?(x), m : X ? R
describes a location shift, and ? : X ? [?, 1/?] describes a scale modification for some constant
? ? (0, 1]. Let us further assume that the functions m and ? are measurable. Thus Qx is a logistic
distribution having the positive and bounded Lebesgue density hQx (y) = e?z /(1 + e?z )2 , y ? R.
?
?
The ? -quantile function is t? (x) := f?,Q
= m(x) + ?(x) log( 1??
), x ? X, and we can choose
x
?
?
b(x) = inf{hQx (t) : t ? [t (x) ? ?, t (x) + ?]}. Note that hQx (m(x) + y) = hQx (m(x) ? y) for
all y ? R, and hQx (y) is strictly decreasing for y ? [m(x), ?). Some calculations show
n u (x)
u2 (x) o
1
1
b(x) = min hQx (t? (x)??), hQx (t? (x)+?) = min
,
?
c
,
,
?,?
(1+u1 (x))2 (1+u2 (x))2
4
??/?(x)
?/?(x)
where u1 (x) := 1??
, u2 (x) := 1??
and c?,? > 0 can be chosen independent of x,
? e
? e
?1
because ?(x) ? [?, 1/?]. Hence b ? L? (PX ) and P has a ? -quantile of ?-average type ?.
? be a distribution on X ? Y with marginal distribution P
? X and regular conExample 2.4 Let P
?
?
?
? X -almost surely
ditional probability Qx := P(? | x) on Y . Furthermore, assume that Qx is P
of ? -quantile type ?. Let us now consider the family of distributions P with
marginal
distribu
? X and regular conditional distributions Qx := P
? (? ? m(x))/?(x) x , x ? X, where
tion P
m : X ? R and ? : X ? (?, 1/?) are as in the previous example. Then Qx has a ? -quantile
?
?
f?,Q
= m(x) + ?(x)f?,
? x of type ??, because we obtain for s ? [0, ??] the inequality
x
Q
?
?
? x (f ? ? , f ? ? + s/?(x)) ? b(x)s/?(x) ? b(x)?s .
Qx (f?,Q
, f?,Q
+ s) = Q
x
x
?,Qx ?,Qx
? does have a ? -quantile of
Consequently, P has a ? -quantile of p-average type ?? if and only if P
p-average type ?.
The following theorem shows that for distributions having a quantile of p-average type the conditional quantile can be estimated by functions that approximately minimize the pinball risk.
p
Theorem 2.5 Let p ? (0, ?], ? ? (0, 1), ? > 0 be real numbers, and q := p+1
. Moreover, let
P be a distribution on X ? Y that has a ? -quantile of p-average type ?. Then for all f : X ? R
p+2
2p
satisfying RL? ,P (f ) ? R?L? ,P ? 2? p+1 ? p+1 we have
q
?
1/2
?
kf ? f?,P
kLq (PX ) ? 2 kb?1 kLp (PX ) RL? ,P (f ) ? R?L? ,P .
Our next goal is to establish an oracle inequality for SVMs defined by (1). To this end let us assume
Y = [?1, 1]. Then we have L? (y, t?) ? L? (y, t) for all y ? Y , t ? R, where t? denotes t clipped
to the interval [?1, 1], i.e., t? := max{?1, min{1, t}}. Since this yields RL? ,P (f?) ? RL? ,P (f ) for
all functions f : X ? R we will focus on clipped functions f? in the following. To describe the
approximation error of SVMs we need the approximation error function A(?) := inf f ?H ?kf k2H +
RL? ,P (f ) ? R?L? ,P , ? > 0. Recall that [8] showed lim??0 A(?) = 0 if the RKHS H is dense in
L1 (PX ). We also need the covering numbers which for ? > 0 are defined by
N BH , ?, L2 (?) := min n ? 1 : ? x1 , . . . , xn ? L2 (?) with BH ? ?ni=1 (xi + ?BL2 (?) ) , (4)
where ? is a distribution on X, and BH and BL2 (?) denote the closed unit balls ofH and the Hilbert
space L2 (?), respectively. Given a finite sequence D = ((x1 , y1 ), . . . , (xn , yn )) ? (X ? Y )n
we write DX := (x1 , . . . , xn ), and N (BH , ?, L2 (DX )) := N (BH , ?, L2 (?)) if ? is the empirical
measure defined by DX . Finally, we write L? ? f for the function (x, y) 7? L? (y, f (x)). With these
preparations we can now recall the following oracle inequality shown in more generality in [9].
Theorem 2.6 Let P be a distribution on X ?[?1, 1] for which there exist constants v ? 1, ? ? [0, 1]
with
2
?
?
?
EP L? ? f? ? L? ? f?,P
? v EP (L? ? f? ? L? ? f?,P
)
(5)
for all f : X ? R. Moreover, let H be a RKHS over X for which there exist ? ? (0, 1) and a ? 1
with
sup
log N BH , ?, L2 (DX ) ? a??2? ,
? > 0.
(6)
D?(X?Y )n
Then there exists a constant K?,v depending only on ? and v such that for all ? ? 1, n ? 1, and
? > 0 we have with probability not less than 1 ? 3e?? that
r
1
1
32v? 2??
K?,v a 2??+?(??1) K?,v a
A(?) ?
?
?
RL? ,P (fD,? ) ? RL? ,P ? 8A(?) + 30
+
+
5
+
.
? n
?? n
?? n
n
Moreover, [9] showed that oracle inequalities of the above type can be used to establish learning
rates and to investigate data-dependent parameter selection strategies. For example if we assume that
there exist constants c > 0 and ? ? (0, 1] such that A(?) ? c?? for all ? > 0 then RL? ,P (f?T,?n )
?
2?
converges to R?L? ,P with rate n?? where ? := min { ?(2??+?(??1))+?
, ?+1
} and ?n = n??/? .
Moreover, [9] shows that this rate can also be achieved by selecting ? in a data-dependent way with
the help of a validation set. Let us now consider how these learning rates in terms of risks translate
?
into rates for kf?T,? ? f?,P
kLq (PX ) . To this end we assume that P has a ? -quantile of p-average type
? for ? ? (0, 1). Using the Lipschitz continuity of L? and Theorem 2.5 we then obtain
2
q/2
?
? 2
?
?
? ? q
?
EP L? ?f??L? ?f?,P
? EP |f??f?,P
| ? kf??f?,P
k2?q
? EP |f ?f?,P | ? c RL? ,P (f )?RL? ,P
p+2
2p
for all f satisfying RL? ,P (f?)?R?L? ,P ? 2? p+1 ? p+1 , i.e. we have a variance bound (5) for ? := q/2
and clipped functions with small excess risk. Arguing carefully to handle the restriction on f? we
?
then see that kf?T,? ? f?,P
kLq (PX ) can converge as fast as n?? , where
n
o
?
?
? := min ?(4?q+?(q?2))+2?
, ?+1
.
To illustrate the latter let us assume that H is a Sobolev space W m (X) of order m ? N over X,
where X is the unit ball in Rd . Recall from [3] that H satisfies (6) for ? := d/(2m) if m > d/2 and
in this case H also consists of continuous functions. Furthermore, assume that we are in the ideal
?
?
situation f?,P
? W m (X) which implies ? = 1. Then the learning rate for kf?T,? ? f?,P
kLq (PX ) be?1/(4?q(1??))
?2m/(6m+d)
comes n
, which for ?-average type distributions reduces to n
? n?1/3 .
Let us finally investigate whether the ?-insensitive loss defined by L(y, t) := max{0, |y ? t| ? ?}
for y, t ? R and fixed ? > 0, can be used to estimate the median, i.e. the (1/2)-quantile.
Theorem 2.7 Let L be the ?-insensitive loss for some ? > 0 and P be a distribution on X ?R which
?
has a unique median f1/2,P
. Furthermore, assume that all conditional distributions P(?|x), x ? X,
are atom-free, i.e. P({y}|x) = 0 for all y ? R, and symmetric, i.e. P(h(x)+A|x) = P(h(x)?A|x)
for all measurable A ? R and a suitable function h : X ? R. If for the conditional distributions
?
?
have a positive mass concentrated around f1/2,P
? ? then f1/2,P
is the only minimizer of RL,P .
Note that using [7] one can show that for distributions specified in the above theorem the
?
SVM using the ?-insensitive loss approximates f1/2,P
whenever the SVM is RL,P -consistent,
?
i.e. RL,P (fT,? ) ? RL,P in probability, see [2]. More advanced results in the sense of Theorem
2.5 seem also possible, but are out of the scope of this paper.
3
Proofs
Let us first recall some notions from [7] who investigated surrogate losses in general and the question
how approximate risk minimizers approximate exact risk minimizers in particular. To this end let
L : X ? Y ? R ? [0, ?) be a measurable function which we call a loss in the following. For a
distribution P and an f : X ? R the L-risk is then defined by RL,P (f ) := E(x,y)?P L(x, y, f (x)),
and, as usual, the Bayes L-risk, is denoted by R?L,P := inf RL,P (f ), where the infimum is taken over
all (measurable) Rf : X ? R. In addition, given a distribution Q on Y the inner L-risks were defined
by CL,Q,x (t) := Y L(x, y, t) dQ(y), x ? X, t ? R, and the minimal inner L-risks were denoted by
?
CL,Q,x
:= inf CL,Q,x (t), x ? X, where the infimum is taken over all t ? R. Moreover, following
[7] we usually omit the indexes x or Q if L is independent of x or y, respectively. Obviously, we
have
Z
RL,P (f ) =
CL,P( ? |x),x f (x) dPX (x) ,
(7)
X
?
and [7, Theorem 3.2] further shows that x 7? CL,P(
? |x),x is measurable if Rthe ?-algebra on X is
?
complete. In this case it was also shown that the intuitive formula R?L,P = X CL,P(
? |x),x dPX (x)
holds, i.e. the Bayes L-risk is obtained by minimizing the inner risks and subsequently integrating
with respect to the marginal distribution PX . Based on this observation the basic idea in [7] is to
consider both steps
it turned
separately. In particular,
out that the sets of ?-approximate minimizers
?
ML,Q,x (?) := t ? R : CL,Q,x (t) < CL,Q,x
+ ? , ? ? [0, ?], and the set of exact minimizers
T
ML,Q,x (0+ ) := ?>0 ML,Q,x (?) play a crucial role. As in [7] we again omit the subscripts x and
Q in these definitions if L happens to be independent of x or y, respectively.
Now assume we have two losses Ltar : X ? Y ? R ? [0, ?] and Lsur : X ? Y ? R ? [0, ?], and
that our goal is to estimate the excess Ltar -risk by the excess Lsur -risk. This issue was investigated
in [7], where the main device was the so-called calibration function ?max ( ? , Q, x) defined by
(
inf t?R\MLtar ,Q,x (?) CLsur ,Q,x (t) ? CL? sur ,Q,x if CL? sur ,Q,x < ? ,
?max (?, Q, x) :=
?
if CL? sur ,Q,x = ? ,
for all ? ? [0, ?]. In the following we sometimes write ?max,Ltar ,Lsur (?, Q, x) := ?max (?, Q, x)
whenever we need to explicitly mention the target and surrogate losses. In addition, we follow our
convention which omits x or Q whenever this is possible. Now recall that [7, Lemma 2.9] showed
?max CLtar ,Q,x (t) ? CL? tar ,Q,x , Q, x ? CLsur ,Q,x (t) ? CL? sur ,Q,x ,
t?R
(8)
if both CL? tar ,Q,x < ? and CL? sur ,Q,x < ?. Before we use (8) to establish an inequality between
the excess risks of Ltar and Lsur , we finally recall that the Fenchel-Legendre bi-conjugate g ?? :
I ? [0, ?] of a function g : I ? [0, ?] defined on an interval I is the largest convex function
h : I ? [0, ?] satisfying h ? g. In addition, we write g ?? (?) := limt?? g ?? (t) if I = [0, ?).
With these preparations we can now establish the following generalization of [7, Theorem 2.18].
Theorem 3.1 Let P be a distribution on X ? Y with R?Ltar ,P < ? and R?Lsur ,P < ? and assume
that there exist p ? (0, ?] and functions b : X ? [0, ?] and ? : [0, ?) ? [0, ?) such that
and b
?1
?max (?, P( ? |x), x) ? b(x) ?(?) ,
? ? 0, x ? X,
(9)
p
q
? Lp (PX ). Then for q := p+1 , ?? := ? : [0, ?) ? [0, ?), and all f : X ? R we have
q
???? RLtar ,P (f ) ? R?Ltar ,P ? kb?1 kqLp (PX ) RLsur ,P (f ) ? R?Lsur ,P .
Proof: Let us first consider the case RLtar ,P (f ) < ?. Since ???? is convex and satisfies ???? (?) ?
? for all ? ? [0, ?) we see by Jensen?s inequality that
?(?)
Z
???? RLtar ,P (f ) ? R?Ltar ,P ?
?? CLtar ,P( ? |x),x (t) ? CL? tar ,P( ? |x),x dPX (x)
(10)
X
Moreover, using (8) and (9) we obtain
b(x) ? CLtar ,P( ? |x),x (t) ? CL? tar ,P( ? |x),x
? CLsur ,P( ? |x),x (t) ? CL? sur ,P( ? |x),x
? and H?older?s inequality in the
for PX -almost all x ? X and all t ? R. By (10), the definition of ?,
form of k ? kq ? k ? kp ? k ? k1 , we thus find that ???? RLtar ,P (f ) ? R?Ltar ,P is less than or equal to
Z
q/q
?q
q
b(x)
CLsur ,P( ? |x),x f (x) ? CL? sur ,P( ? |x),x dPX (x)
X
Z
q/p Z
q
?
b?p dPX
CLsur ,P( ? |x),x f (x) ? CL? sur ,P( ? |x),x dPX (x)
X
X
q
?1 q
? kb kLp (PX ) RLsur ,P (f ) ? R?Ltar ,P .
Let us finally deal with the case RLtar ,P (f ) = ?. If ???? (?) = 0 there is nothing to prove and
hence we assume ???? (?) > 0. Following the proof of [7, Theorem 2.13] we then see that there
exist constants c1 , c2 ? (0, ?) satisfying t ? c1 ? ?? (t) + c2 for all t ? [0, ?]. From this we obtain
Z
?
? = RLtar ,P (f ) ? RLtar ,P ? c1
???? CLtar ,P( ? |x),x (t) ? CL? tar ,P( ? |x),x dPX (x) + c2
X
Z
q
?q
? c1
b(x)
CLsur ,P( ? |x),x f (x) ? CL? sur ,P( ? |x),x dPX (x) + c2 ,
X
where the last step is analogous to our considerations for RLtar ,P (f ) < ?. By b?1 ? Lp (PX ) and
H?older?s inequality we then conclude RLsur ,P (f ) ? R?Lsur ,P = ?.
Our next goal is to determine the inner risks and their minimizers for the pinball loss. To this end
recall (see, e.g., [1, Theorem 23.8]) that given a distribution Q on R and a non-negative function
g : X ? [0, ?) we have
Z
Z
?
g dQ =
0
R
Q(g ? s) ds .
(11)
Proposition 3.2 Let ? ? (0, 1) and Q be a distribution on R with CL? ? ,Q < ? and t? be a ? -quantile
of Q. Then there exist q+ , q? ? [0, ?) with q+ + q? = Q({t? }), and for all t ? 0 we have
Z t
?
?
CL? ,Q (t + t) ? CL? ,Q (t ) = tq+ +
Q (t? , t? + s) ds , and
(12)
0
Z t
CL? ,Q (t? ? t) ? CL? ,Q (t? ) = tq? +
Q (t? ? s, t? ) ds .
(13)
0
?
?
Proof: Let us consider the distribution Q(t ) defined by Q(t ) (A) := Q(t? + A) for all measurable
?
A ? R. Then it is not hard to see that 0 is a ? -quantile of Q(t ) . Moreover, we obviously have
CL? ,Q (t? + t) = CL? ,Q(t? ) (t) and hence we may assume without loss of generality that t? = 0. Then
our assumptions together with Q((??, 0]) + Q([0, ?)) = 1 + Q({0}) yield ? ? Q((??, 0]) ?
? + Q({0}), i.e., there exists a q+ satisfying 0 ? q+ ? Q({0}) and
Q((??, 0]) = ? + q+ .
(14)
Let us now compute the inner risks of L? . To this end we first assume t ? 0. Then we have
Z
Z
Z
(y ? t) dQ(y) =
y dQ(y) ? tQ((??, t)) +
y dQ(y)
y<t
y<0
0?y<t
R
R
R
and y?t (y ? t) dQ(y) = y?0 y dQ(y) ? tQ([t, ?)) ? 0?y<t y dQ(y) and hence we obtain
Z
Z
CL? ,Q (t) = (? ? 1)
(y ? t) dQ(y) + ?
(y ? t) dQ(y)
y<t
y?t
Z
= CL? ,Q (0) ? ? t + tQ((??, 0)) + tQ([0, t)) ?
y dQ(y) .
0?y<t
Moreover, using (11) we find
Z
Z t
Z t
Z t
tQ([0, t)) ?
y dQ(y) =
Q([0, t))ds ?
Q([s, t)) ds = tQ({0}) +
Q((0, s))ds ,
0?y<t
0
0
0
and since (14) implies Q((??, 0)) + Q({0}) = Q((??, 0]) = ? + q+ we thus obtain (12).
Now (13) can be derived from (12) by considering the pinball loss with parameter 1 ? ? and the
? defined by Q(A)
?
distribution Q
:= Q(?A), A ? R measurable. This further yields a q? satisfying
0 ? q? ? Q({0}) and Q([0, ?) = 1 ? ? + q? . By (14) we then find q+ + q? = Q({0}).
For the proof of Theorem 2.5 we recall a few more concepts from [7]. To this end let us now assume
that our loss is independent of x, i.e. we consider a measurable function L : Y ? R ? [0, ?]. We
write
Qmin (L) := Q ? Qmin (L) : ? t?L,Q ? R such that ML,Q (0+ ) = {t?L,Q } ,
i.e. Qmin (L) contains the distributions on Y whose inner L-risks have exactly one exact minimizer.
?
Furthermore, note that this definition immediately yields CL,Q
< ? for all Q ? Qmin (L). Following [7] we now define the self-calibration loss of L by
?
L(Q,
t) := |t ? t?L,Q | ,
Q ? Qmin (L), t ? R .
(15)
This loss is a so-called template loss in the sense of [7], i.e., for a given distribution P on X ? Y ,
where X has a complete ?-algebra and P( ? |x) ? Qmin (L) for PX -almost all x ? X, the P-instance
? P (x, t) := |t ? t?
L
L,P( ? |x) | is measurable and hence a loss. [7] extended the definition of inner risks
?
to the self-calibration loss by setting CL,Q
? (t) := L(Q, t), and based on this the minimal inner risks
and their (approximate) minimizers were defined in the obvious way. Moreover, the self-calibration
?
function was defined by ?max,L,L
CL,Q (t) ? CL,Q
. As shown in [7] this
? (?, Q) = inf t?R; |t?t?
L,Q |??
self-calibration function has two important properties: first it satisfies
?
?max,L,L
|t ? t?L,Q |, Q ? CL,Q (t) ? CL,Q
,
t ? R,
(16)
?
i.e. it measures how well approximate L-risk minimizers t approximate the true minimizer t?L,Q , and
? P , i.e.
second it equals the calibration function of the P-instance L
?max,L? P ,L (?, P( ? |x), x) = ?max,L,L
? ? [0, ?], x ? X.
(17)
? (?, P( ? |x)) ,
In other words, the self-calibration function can be utilized in Theorem 3.1.
?
Proof of Theorem 2.5: Let Q be a distribution on R with CL,Q
< ? and t? be the only ? -quantile
of Q. Then the formulas of Proposition 3.2 show
Z ?
Z ?
n
o
? ?
?max,L,L
Q (t , t + s) ds, ?q? +
Q (t? ? s, t? ) ds , ? ? 0,
? (?, Q) = min ?q+ +
0
0
where q+ and q? are the real numbers defined in Proposition 3.2. Let us additionally assume that
the ? -quantile t? is of type ?. For the Huber type function ?(?) := ?2 /2 if ? ? [0, ?], and ?(?) :=
?? ? ?2 /2 if ? > ?, a simple calculation then yields ?max,L,L
? (?, Q) ? cQ ?(?), where cQ is the
?
?
constant satisfying (3). Let us further define ? : [0, ?) ? [0, ?) by ?(?)
:= ? q (?1/q ), ? ? 0. In
? ?
view of Theorem 3.1 we then need to find a convex
function
?? : [0, ?) ? [0, ?) such
that ? ? ?.
?
?
To this end we define ?(?)
:= spp ?2 if ? ? 0, sp ap and ?(?)
:= ap ? ? sp+2
a
if
?
>
sp ap ,
p
p
q
?q/p
?
where ap := ? and sp := 2
. Then ? : [0, ?) ? [0, ?) is continuously differentiable and its
derivative is increasing, and thus ?? is convex. Moreover, we have ??? ? ??? and hence ?? ? ?? which in
turn implies ?? ? ???? . Now we find the assertion by (16), (17), and Theorem 3.1.
The proof of Theorem 2.7 follows immediately from the following lemma.
Lemma 3.3 Let Q be a symmetric, atom-free distribution on R Rwith median t? = 0. Then for ? > 0
?
?
and L being the ?-insensitive loss we have CL,Q (0) = CL,Q
= 2 ? Q[s, ?)ds and if CL,Q (0) < ?
we further have
Z ?
Z ?+t
CL,Q (t) ? CL,Q (0) =
Q[s, ?] ds +
Q[?, s] ds,
if t ? [0, ?],
CL,Q (t) ? CL,Q (?)
=
??t
t??
Z
0
Q[s, ?) ds ?
?
?+t
Z
Zt??
Q[s, ?) ds + 2 Q[0, s] ds ? 0,
2?
if t > ?.
0
?
In particular, if Q[? ? ?, ? + ?] = 0 for some ? > 0 then CL,Q (?) = CL,Q
.
Proof: Because L(y, t) = L(?y, ?t) for all y, t ? R we only have to consider t ? 0. For later use
we note that for 0 ? a ? b ? ? Equation (11) yields
Z b
Z b
y dQ(y) = aQ([a, b]) +
Q([s, b])ds .
(18)
a
a
Moreover, the definition of L implies
Z t??
Z
CL,Q (t) =
t ? y ? ? dQ(y) +
??
?
t+?
y ? ? ? t dQ(y) .
R t??
R?
Using the symmetry of Q yields ? ?? y dQ(y) = ??t y dQ(y) and hence we obtain
Z t??
Z t+?
Z t+?
Z ?
CL,Q (t) =
Q(??, t ? ?]ds ?
Q[t + ?, ?)ds +
y dQ(y) + 2
y dQ(y) . (19)
0
0
??t
t+?
R t+?
R t+?
Let us first consider the case t ? ?. Then the symmetry of Q yields ??t y dQ(y) = t?? y dQ(y),
and hence (18) implies
Z t??
Z t??
Z t+?
CL,Q (t) =
Q[? ? t, ?)ds +
Q[t??, t+?] ds +
Q[s, t+?] ds
0
Z
+2
?
t+?
Q[s, ?) ds +
Z
0
0
t+?
t??
Q[t+?, ?) ds.
Using
Z
t+?
Q[s, t + ?) ds =
t??
Z
t+?
Q[s, t + ?) ds ?
0
t??
Z
Q[s, t + ?) ds
0
we further obtain
Z?
Zt??
Z?
Zt+?
Zt+?
Q[s, t + ?) ds + Q[t + ?, ?) ds + Q[s, ?) ds = Q[s, ?) ds ? Q[s, t + ?) ds .
t??
t+?
0
R t??
0
R t??
0
R t??
Q[t ? ?, t + ?] ds ? 0 Q[s, t + ?] ds = ? 0 Q[s, t ? ?] ds follows
Z t??
Z t??
Z ?
Z ?
CL,Q (t) = ?
Q[s, t ? ?] ds+
Q[? ? t, ?) ds+
Q[s, ?) ds+
Q[s, ?) ds .
From this and
0
0
0
t+?
0
R t??
R t??
The symmetry of Q implies 0 Q[? ? t, t ? ?] ds = 2 0 Q[0, t ? ?] ds, and we get
Z t??
Z t??
Z t??
Z t??
?
Q[s, t ? ?] ds +
Q[? ? t, ?) ds = 2
Q[0, s) ds +
Q[s, ?) ds .
0
0
0
0
This and
Z
?
t+?
Q[s, ?) ds +
Z
0
?
Z
Q[s, ?) ds = 2
?
t+?
Q[s, ?) ds +
Z
0
t+?
Q[s, ?) ds
yields
t??
Z
CL,Q (t) = 2
Q[0, s) ds +
0
By
Z
t??
0
t??
Q[s, ?) ds +
0
Z
we obtain
Z
CL,Q (t) = 2
Z
0
t??
0
t+?
Z
Q[s, ?) ds + 2
Z
Q[s, ?) ds = 2
Z
Q[0, ?) ds + 2
?
t+?
Q[s, ?) ds +
t??
0
Q[s, ?) ds +
?
t+?
Q[s, ?) ds +
Z
Z
t+?
Q[s, ?) ds .
0
t+?
Z
t??
Q[s, ?) ds
t+?
Q[s, ?) ds
t??
if t ? ?. Let us now consider the case t ? [0, ?]. Analogously we obtain from (19) that
Z ??t
Z ?+t
Z ?
CL,Q (t) =
Q[? ? t, t + ?] ds +
Q[s, t + ?] ds + 2
Q[s, ?) ds
0
Z
+2
0
??t
??t
?+t
Q[? + t, ?) ds ?
Combining this with
Z ??t
Z
Q[? ? t, t + ?] ds ?
0
and
Z
0
?+t
?+t
Q[? ? t, ?) ds ?
??t
0
Q[? ? t, ?) ds = ?
R ?+t
Z
Z
0
Q[? + t, ?) ds .
??t
0
Q[? + t, ?) ds
R ??t
R ?+t
Q[? + t, ?) ds ? 0 Q[? + t, ?) ds = ??t Q[? + t, ?) ds we get
Z ?+t
Z ?+t
Z ?
CL,Q (t) =
Q[? + t, ?) ds +
Q[s, t + ?] ds + 2
Q[s, ?) ds
0
??t
?+t
=
Z
??t
Z
Q[s, ?) ds + 2
??t
?
?+t
Q[s, ?) ds =
?+t
Z
?
??t
Q[s, ?) ds +
Z
?
?+t
Q[s, ?) ds.
R?
Hence CL,Q (0) = 2 ? Q[s, ?) ds. The expressions for CL,Q (t)?CL,Q (0), t ? (0, ?], and CL,Q (t)?
CL,Q (?), t > ?, given in Lemma 3.3 follow by using the same arguments. Hence one exact minimizer
of CL,Q (?) is the median t? = 0. The last assertion is a direct consequence of the formula for
CL,Q (t) ? CL,Q (0) in the case t ? (0, ?].
References
[1] H. Bauer. Measure and Integration Theory. De Gruyter, Berlin, 2001.
[2] A. Christmann and I. Steinwart. Consistency and robustness of kernel based regression.
Bernoulli, 15:799?819, 2007.
[3] D.E. Edmunds and H. Triebel. Function Spaces, Entropy Numbers, Differential Operators.
Cambridge University Press, 1996.
[4] C. Hwang and J. Shim. A simple quantile regression via support vector machine. In Advances
in Natural Computation: First International Conference (ICNC), pages 512 ?520. Springer,
2005.
[5] R. Koenker. Quantile Regression. Cambridge University Press, 2005.
[6] B. Sch?olkopf, A. J. Smola, R. C. Williamson, and P. L. Bartlett. New support vector algorithms.
Neural Computation, 12:1207?1245, 2000.
[7] I. Steinwart. How to compare different loss functions. Constr. Approx., 26:225?287, 2007.
[8] I. Steinwart, D. Hush, and C. Scovel. Function classes that approximate the Bayes risk. In
Proceedings of the 19th Annual Conference on Learning Theory, COLT 2006, pages 79?93.
Springer, 2006.
[9] I. Steinwart, D. Hush, and C. Scovel. An oracle inequality for clipped regularized risk minimizers. In Advances in Neural Information Processing Systems 19, pages 1321?1328, 2007.
[10] I. Takeuchi, Q.V. Le, T.D. Sears, and A.J. Smola. Nonparametric quantile estimation. J. Mach.
Learn. Res., 7:1231?1264, 2006.
| 3180 |@word mention:1 contains:2 selecting:1 rkhs:3 scovel:2 dx:4 realistic:1 device:1 location:1 c2:4 direct:1 beta:1 differential:1 consists:2 prove:1 manner:1 huber:1 decreasing:1 gov:1 little:1 equipped:1 considering:2 increasing:2 estimating:1 moreover:15 bounded:2 klq:4 mass:1 qmin:6 minimizes:1 finding:1 every:1 exactly:1 ensured:1 k2:1 unit:2 omit:3 yn:2 positive:2 before:1 consequence:1 mach:1 establishing:1 subscript:1 approximately:1 ap:4 bi:1 unique:2 arguing:1 dpx:8 empirical:4 word:1 integrating:1 regular:4 get:2 close:2 selection:1 operator:1 bh:6 risk:25 restriction:2 measurable:10 convex:4 survey:1 simplicity:1 immediately:2 estimator:1 handle:1 notion:1 justification:1 analogous:1 target:1 play:1 exact:4 approximated:1 satisfying:7 utilized:1 ep:5 role:2 ft:1 solved:1 ensures:1 algebra:6 sears:1 fast:1 describe:1 kp:1 hyper:1 exhaustive:1 whose:2 valued:1 klp:2 statistic:1 obviously:2 sequence:1 differentiable:1 turned:1 combining:1 translate:1 intuitive:1 olkopf:1 los:2 empty:2 generating:1 converges:1 help:2 derive:1 illustrate:2 ac:1 depending:1 christmann:3 implies:7 come:1 convention:1 subsequently:1 kb:3 f1:4 generalization:1 proposition:3 mathematically:1 strictly:1 hold:1 around:1 normal:1 k2h:2 scope:1 belgium:1 ditional:1 estimation:1 largest:1 always:1 gaussian:1 modified:1 tar:5 edmunds:1 derived:1 focus:1 bernoulli:1 sense:3 tional:1 dependent:2 minimizers:8 typically:1 arg:1 dual:1 issue:2 colt:1 denoted:2 integration:1 marginal:5 equal:2 having:4 atom:2 pinball:8 few:1 gamma:1 national:1 vrije:1 lebesgue:2 tq:8 fd:6 investigate:4 nowadays:1 necessary:1 re:1 theoretical:1 minimal:2 fenchel:1 vub:1 instance:2 cover:1 assertion:2 subset:1 kl1:1 alamo:2 uniform:1 kq:1 density:2 international:1 together:1 continuously:1 analogously:1 again:1 nm:1 choose:1 derivative:1 singleton:1 de:1 student:1 explicitly:1 tion:1 view:1 later:1 closed:3 sup:1 universiteit:1 bayes:3 minimize:1 ni:1 takeuchi:1 variance:1 who:1 yield:9 generalize:1 weak:1 cc:1 whenever:3 definition:8 obvious:1 proof:8 recall:9 lim:1 hilbert:2 carefully:1 follow:2 though:1 generality:2 furthermore:4 smola:2 d:78 steinwart:5 continuity:1 logistic:2 infimum:3 hwang:1 usa:1 concept:1 true:1 regularization:1 hence:11 symmetric:2 laboratory:1 deal:1 self:5 covering:1 complete:3 cp:3 l1:1 consideration:2 rl:23 insensitive:7 approximates:1 cambridge:2 rd:1 approx:1 consistency:1 mathematics:1 aq:1 calibration:7 showed:3 inf:8 certain:2 inequality:14 yi:1 surely:2 converge:1 determine:1 reduces:1 calculation:2 regression:6 basic:1 kernel:2 sometimes:1 limt:1 achieved:1 c1:4 condi:1 addition:4 separately:1 interval:2 median:6 crucial:1 sch:1 seem:1 call:1 ideal:1 enough:1 andreas:2 inner:8 idea:1 triebel:1 shift:1 bl2:2 whether:1 expression:1 bartlett:1 remark:1 nonparametric:1 brussel:1 concentrated:1 svms:5 exist:6 estimated:1 write:5 group:1 drawn:1 clipped:4 throughout:2 almost:4 family:1 sobolev:1 bound:1 oracle:7 annual:1 u1:2 argument:1 min:8 px:20 department:1 brussels:1 ball:2 legendre:1 conjugate:1 describes:3 lp:3 constr:1 modification:1 happens:1 taken:3 equation:1 turn:1 koenker:1 end:7 away:1 robustness:1 original:1 denotes:1 k1:2 quantile:31 establish:5 question:1 parametric:3 strategy:1 usual:1 surrogate:2 said:2 hq:3 berlin:1 sur:9 index:1 cq:6 minimizing:2 difficult:1 negative:1 zt:4 unknown:1 observation:2 ingo:2 finite:1 situation:2 extended:1 y1:2 reproducing:1 arbitrary:2 lanl:1 specified:1 omits:1 hush:2 address:1 usually:1 spp:1 rf:1 max:14 ofh:1 suitable:2 icnc:1 natural:1 
regularized:1 advanced:1 older:2 l2:6 kf:8 loss:26 shim:1 validation:1 consistent:1 dq:21 last:2 free:2 distribu:1 template:1 bauer:1 xn:4 far:1 qx:10 excess:4 approximate:7 rthe:1 ml:4 conceptual:1 conclude:1 xi:2 continuous:1 additionally:1 learn:1 symmetry:3 williamson:1 investigated:2 cl:63 sp:4 main:2 dense:1 nothing:1 x1:4 quantiles:4 theorem:18 formula:3 showing:1 jensen:1 svm:5 exists:4 entropy:1 datadependent:1 contained:1 u2:3 springer:2 minimizer:4 satisfies:4 gruyter:1 conditional:10 goal:5 consequently:1 lipschitz:1 hard:1 justify:1 lemma:4 called:4 support:2 latter:2 preparation:2 |
2,404 | 3,181 | Convex Clustering with Exemplar-Based Models
Danial Lashkari
Polina Golland
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139
{danial, polina}@csail.mit.edu
Abstract
Clustering is often formulated as the maximum likelihood estimation of a mixture
model that explains the data. The EM algorithm widely used to solve the resulting
optimization problem is inherently a gradient-descent method and is sensitive to
initialization. The resulting solution is a local optimum in the neighborhood of
the initial guess. This sensitivity to initialization presents a significant challenge
in clustering large data sets into many clusters. In this paper, we present a different approach to approximate mixture fitting for clustering. We introduce an
exemplar-based likelihood function that approximates the exact likelihood. This
formulation leads to a convex minimization problem and an efficient algorithm
with guaranteed convergence to the globally optimal solution. The resulting clustering can be thought of as a probabilistic mapping of the data points to the set of
exemplars that minimizes the average distance and the information-theoretic cost
of mapping. We present experimental results illustrating the performance of our
algorithm and its comparison with the conventional approach to mixture model
clustering.
1
Introduction
Clustering is one of the most basic problems of unsupervised learning with applications in a wide
variety of fields. The input is either vectorial data, that is, vectors of data points in the feature
space, or proximity data, the pairwise similarity or dissimilarity values between the data points. The
choice of the clustering cost function and the optimization algorithm employed to solve the problem
determines the resulting clustering [1]. Intuitively, most methods seek compact clusters of data
points, namely, clusters with relatively small intra-cluster and high inter-cluster distances. Other
approaches, such as Spectral Clustering [2], look for clusters of more complex shapes lying on some
low dimensional manifolds in the feature space. These methods typically transform the data such
that the manifold structures get mapped to compact point clouds in a different space. Hence, they
do not remove the need for efficient compact-cluster-finding techniques such as k-means.
The widely used Soft k-means method is an instance of maximum likelihood fitting of a mixture
model through the EM algorithm. Although this approach yields satisfactory results for problems
with a small number of clusters and is relatively fast, its use of a gradient-descent algorithm for
minimization of a cost function with many local optima makes it sensitive to initialization. As
the search space grows, that is, the number of data points or clusters increases, it becomes harder
to find a good initialization. This problem often arises in emerging applications of clustering for
large biological data sets such as gene-expression. Typically, one runs the algorithm many times
with different random initializations and selects the best solution. More sophisticated initialization
methods have been proposed to improve the results but the challenge of finding good initialization
for EM algorithm remains [4].
We aim to circumvent the initialization procedure by designing a convex problem whose global
optimum can be found with a simple algorithm. It has been shown that mixture modeling can
1
be formulated as an instance of iterative distance minimization between two sets of probability
distributions [3]. This formulation shows that the non-convexity of mixture modeling cost function
comes from the parametrization of the model components . More precisely, any mixture model is,
by definition, a convex combination of some set of distributions. However, for a fixed number of
mixture components, the set of all such mixture models is usually not convex when the distributions
have, say, free mean parameters in the case of normal distributions. Inspired by combinatorial,
non-parametric methods such as k-medoids [5] and affinity propagation [6], our main idea is to
employ the notion of exemplar finding, namely, finding the data points which could best describe
the data set. We assume that the clusters are dense enough such that there is always a data point
very close to the real cluster centroid and, thus, restrict the set of possible cluster means to the set
of data points. Further, by taking all data points as exemplar candidates, the modeling cost function
becomes convex. A variant of EM algorithm finds the globally optimal solution.
Convexity of the cost function means that the algorithm will unconditionally converge to the global
minimum. Moreover, since the number of clusters is not specified a priori, the algorithm automatically finds the number of clusters depending only on one temperature-like parameter. This parameter, which is equivalent to a common fixed variance in case of Gaussian models, defines the width
scale of the desired clusters in the feature space. Our method works exactly in the same way with
both proximity and vectorial data, unifying their treatment and providing insights into the modeling
assumptions underlying the conversion of feature vectors into pairwise proximity data.
In the next section, we introduce our maximum likelihood function and the algorithm that maximizes
it. In Section 3, we make a connection to the Rate-Distortion theory as a way to build intuition about
our objective function. Section 4 presents implementation details of our algorithm. Experimental
results comparing our method with a similar mixture model fitting method are presented in Section
5, followed by a discussion of the algorithm and the related work in Section 6.
2
Convex Cost Function
Given a set of data points X = {x1 , ? ? ? , xn } ? IRd , mixture model clustering seeks to maximize
the scaled log-likelihood function
l({qj }kj=1 , {mj }kj=1 ; X ) =
X
k
n
1X
qj f (xi ; mj ) ,
log
n i=1
j=1
(1)
where f (x; m) is an exponential family distribution on random variable X. It has been shown
that there is a bijection between regular exponential families and a broad family of divergences
called Bregman divergence [7]. Most of the well-known distance measures, such as Euclidean
distance or Kullback-Leibler divergence (KL-divergence) are included in this family. We employ this relationship and let our model be an exponential family distribution on X of the form
f (x; m) = C(x) exp(?d? (x, m)) where d? is some Bregman divergence and C(x) is independent of m. Note that with this representation, m is the expected value of X under the distribution
f (x; m). For instance, taking Euclidean distance as the divergence, we obtain normal distribution
as our model f .
In this work, we take models of the above form whose parameters m lie in the same space as data
vectors. Thus, we can restrict the set of mixture components to the distributions centered at the data
points, i.e., mj ? X . Yet, for a specified number of clusters k, the problem still has a combinatorial
nature of choosing the right k cluster centers among n data points. To avoid this problem, we
increase the number of possible components to n and represent all data points as cluster-center
candidates. The new log-likelihood function is
X
n
n
n
n
X
1X
1X
n
??d? (xi ,xj )
l({qj }j=1 ; X ) =
log
qj fj (xi ) =
log
qj e
+ const. , (2)
n i=1
n i=1
j=1
j=1
where fj (x) is an exponential family member with its expectation parameter equal to the jth data
vector and the constant denotes a term that does not depend on the unknown variables {qj }nj=1 .
The constant scaling factor ? in the exponent controls the sharpness
of mixture components.
We
n
o
Pn
maximize l(?; X ) over the set of all mixture distributions Q = Q|Q(?) = j=1 qj fj (?) .
2
The log-likelihood function (2) can be expressed in terms of the KL-divergence by defining P? (x) =
1/n, x ? X , to be the empirical distribution of the data on IRd and by noting that
X
D(P? kQ) = ?
P? (x) log Q(x) ? H(P? ) = ?l({qj }nj=1 ; X ) + const.
(3)
x?X
where H(P? ) is the entropy of the empirical distribution and does not depend on the unknown mixture
coefficients {qj }nj=1 . Consequently, the maximum likelihood problem can be equivalently stated as
the minimization of the KL-divergence between P? and the set of mixture distributions Q.
It is easy to see that unlike the unconstrained set of mixture densities considered by the likelihood
function (1), set Q is convex. Our formulation therefore leads to a convex minimization problem.
Furthermore, it is proved in [3] that for such a problem, the sequence of distributions Q(t) with
(t)
corresponding weights {qj }nj=1 defined iteratively via
(t+1)
qj
(t)
= qj
X
x?X
P? (x)fj (x)
Pn
(t)
0
j 0 =1 qj 0 fj (x)
(4)
is guaranteed to converge to the global optimum solution Q? if the support of the initial distribution
(0)
is the entire index set, i.e., qj > 0 for all j.
3
Connection to Rate-Distortion Problems
Now, we present an equivalent statement of our problem on the product set of exemplars and data
points. This alternative formulation views our method as an instance of lossy data compression and
directly implies the optimality of the algorithm (4).
The following proposition is introduced and proved in [3]:
Proposition 1. Let Q0 be the set of distributions of the complete data random variable (J, X) ?
{1, ? ? ? , n} ? IRd with elements Q0 (j, x) = qj fj (x). Let P 0 be the set of all distributions on the
same random variable (J, X) which have P? as their marginal on X. Then,
min D(P? kQ) =
Q?Q
min
P 0 ?P 0 ,Q0 ?Q0
D(P 0 kQ0 )
(5)
where Q is the set of all marginal distributions of elements of Q0 on X. Furthermore, if Q? and
(P 0? , Q0? ) are the corresponding optimal arguments, Q? is the marginal of Q0? .
This proposition implies that we can express our problem of minimizing (3) as minimization of
D(P 0 kQ0 ) where P 0 and Q0 are distributions of the random variable (J, X). Specifically, we define:
1
n rij , x = xi ? X ;
Q0 (j, x) = qj C(x)e??d? (x,xj )
P 0 (j, x) = P? (x)P 0 (j|x) =
(6)
0,
otherwise
where qj and rij = P 0 (j|x = xi ) are probability distributions over the set {j}nj=1 . This formulation
ensures that P 0 ? P 0 , Q0 ? Q0 and the objective function is expressed only in terms of variables
qj and P 0 (j|x) for x ? X . Our goal is then to solve the minimization problem in the space of
distributions of random variable (J, I) ? {j}nj=1 ?{j}nj=1 , namely, in the product space of exemplar
? data point indices. Substituting expressions (6) into the KL-divergence D(P 0 kQ0 ), we obtain the
equivalent cost function:
n
rij
1 X
0
0
rij log
+ ?d? (xi , xj ) + const.
(7)
D(P kQ ) =
n i,j=1
qj
P
It is straightforward to show that for any set of values rij , setting qj = n1 i rij minimizes (7).
Substituting this expression into the cost function, we obtain the final expression
n
1 X
rij
0
0?
0
D(P kQ (P )) =
rij log 1 P
+ ?d? (xi , xj ) + const. ,
n i,j=1
i 0 r i0 j
n
= I(I; J) + ?EI,J d? (xi , xj ) + const.
3
(8)
where the first term is the mutual information between the random variables I (data points) and
J (exemplars) under the distribution P 0 and the second term is the expected value of the pairwise
distances with the same distribution on indices. The n2 unknown values of rij lie on n separate
n-dimensional simplices. These parameters have the same role as cluster responsibilities in soft
k-means: they stand for the probability of data point xi choosing data point xj as its cluster-center.
The algorithm described in (4) is in fact the same as the standard Arimoto-Blahut algorithm [10]
commonly used for solving problems of the form (8).
We established that the problem of maximizing log-likelihood function (2) is equivalent to the minimization of objective function (8). This helps us to interpret this problem in the framework of
Rate-Distortion theory. The data set can be thought of as an information source with a uniform
distribution on the alphabet X . Such a source has entropy log n, which means that any scheme for
encoding an infinitely long i.i.d. sequence generated by this source requires on average this number
of bits per symbol, i.e., has a rate of at least log n. We cannot compress the information source
beyond this rate without tolerating some distortion, when the original data points are encoded into
other points with nonzero distances between them. We can then consider rij ?s as a probabilistic
encoding of our data set onto itself with the corresponding average distortion D = EI,J d? (xi , xj )
?
that minimizes (8) for some ? yields the least rate that can be
and the rate I(I; J). A solution rij
achieved having no more than the corresponding average distortion D. This rate is usually denoted
by R(D), a function of average distortion, and is called the rate-distortion function [8]. Note that
we have ?R/?D = ??, 0 < ? < ? at any point on the rate-distortion function graph. The weight
qj for the data point xj is a measure of how likely this point is to appear in the compressed representation of the data set, i.e., to be an exemplar. Here, we can rigorously quantify our intuitive idea
that higher number of clusters (corresponding to higher rates) is the inherent cost of attaining lower
average distortion. We will see an instance of this rate-distortion trade-off in Section 5.
4
Implementation
The implementation of our algorithm costs two matrix-vector multiplications per iteration, that
is, has a complexity of order n2 per iteration, if solved with no approximations. Letting sij =
exp(??d? (xi , xj )) and using two auxiliary vectors z and ?, we obtain the simple update rules
(t)
zi
=
n
X
j=1
n
(t)
sij qj
(t)
?j =
1 X sij
n i=1 z (t)
(t+1)
qj
(t) (t)
= ?j qj
(9)
i
(0)
where the initialization qj is nonzero for all the data points we want to consider as possible exemplars. At the fixed point, the values of ?j are equal to 1 for all data points in the support P
of qj and are
less than 1 otherwise [10]. In practice, we compute the gap between maxj (log ?j ) and j qj log ?j
in each iteration and stop the algorithm when this gap becomes less than a small threshold. Note
(t)
(t)
(t)
that the soft assignments rij = qj sij /nzi need to be computed only once after the algorithm has
converged.
Any value of ? ? [0, ?) yields a different solution to (8) with different number of nonzero qj values.
Smaller values of ? correspond to having wider clusters and greater values correspond to narrower
clusters. Neither extreme, one assigning all data points to the central exemplar and the other taking
all data points as exemplars, is interesting. For reasonable ranges of ?, the solution is sparse and the
resulting number of nonzero components of qj determines the final number of clusters.
Similar to other interior-point methods, the convergence of our algorithm becomes slow as we move
close to the vertices of the probability simplex where some qj ?s are very small. In order to improve
the convergence rate, after each iteration, we identify all qj ?s that are below a certain threshold
(10?3 /n in our experiments,) set them to zero and re-normalize the entire distribution over the
remaining indices. This effectively excludes the corresponding points as possible exemplars and
reduces the cost of the following iterations.
In order to further speed up the algorithm for very large data sets, we can search over values of
sij for any i and keep only the largest no values in any row turning the proximity matrix into a
sparse one. The reasoning is simply that we expect any point to be represented in the final solution
with exemplars relatively close to it. We observed that as long as no values are a few times greater
than the expected number of data points in each cluster, the final results remain almost the same
4
12
6
10
5
8
Rate (bits)
4
6
3
4
2
2
1
0
0
100
200
300
400
500
600
700
Average Distortion
800
900
0
0
1000
0.5
1
?/?o
1.5
2
2.5
Figure 1: Left: rate-distortion function for the example described in the text. The line with slope ??o is also
illustrated for comparison (dotted line) as well as the point corresponding to ? = ?o (cross) and the line tangent
to the graph at that point. Right: the exponential of rate (dotted line) and number of hard clusters for different
values of beta (solid line.) The rate is bounded above by logarithm of number of clusters.
with or without this preprocessing. However, this approximation decreases the running time of the
algorithm by a factor n/no .
5
Experimental Results
To illustrate some general properties of our method, we apply it to the set of 400 random data points
in IR2 shown in Figure 2. We use Euclidean distance and run the algorithm for different values of
?. Figure 1 (left) shows the resulting rate-distortion function for this example. As we expect, the
estimated rate-distortion function is smooth, monotonically decreasing and convex. To visualize the
clustering results, we turn the soft responsibilities into hard assignments. Here, we first choose the
set of exemplars to be the set of all indices j that are MAP estimate exemplars for some data point
i under P 0 (j|xi ). Then, any point is assigned to its closest exemplar. Figure 2 illustrates the shapes
of the resulting hard clusters for different values of ?. Since ? has dimensions
P of inverse variance in
the case of Gaussian models, we chose an empirical value ?o = n2 log n/ i,j kxi ? xj k2 so that
values ? around ?o give reasonable results. We can see how clusters split when we increase ?. Such
cluster splitting behavior also occurs in the case of a Gaussian mixture model with unconstrained
cluster centers and has been studied as the phase transitions of a corresponding statistical system
[9]. The nature of this connection remains to be further investigated.
The resulting number of hard clusters for different values of ? are shown in Figure 1 (right). The
figure indicates two regions of ? with relatively stable number of clusters, namely 4 and 10, while
other cluster numbers have a more transitory nature with varying ?. The distribution of data points
in Figure 2 shows that this is a reasonable choice of number of clusters for this data set. However,
we also observe some fluctuations in the number of clusters even in the more stable regime of values
of ?. Comparing this behavior with the monotonicity of our rate shows how, by turning the soft
assignments into the hard ones, we lose the strong optimality guarantees we have for the original soft
solution. Nevertheless, since our global optimum is minimum to a well justified cost function, we
expect to obtain relatively good hard assignments. We further discuss this aspect of the formulation
in Section 6.
The main motivation for developing a convex formulation of clustering is to avoid the well-known
problem of local optima and sensitivity to initialization. We compare our method with a regular
mixture model of the form (1) where f (x; m) is a Gaussian distribution and the problem is solved
using the EM algorithm. We will refer to this regular mixture model as the soft k-means. The kmeans algorithm is a limiting case of this mixture-model problem when ? ? ?, hence the name
soft k-means. The comparison will illustrate how employing convexity helps us better explore the
search space as the problem grows in complexity. We use synthetic data sets by drawing points from
unit variance Gaussian distributions centered around a set of vectors.
There is an important distinction between the soft k-means and our algorithm: although the results
of both algorithms depend on the choice of ?, only the soft k-means needs the number of clusters k
as an input. We run the two algorithms for five different values of ? which were empirically found
5
40
40
30
30
20
20
10
10
0
0
?10
?10
?20
?20
?30
?40
?40
40
(a)
?30
?30
?20
?10
0
10
20
30
?40
40 ?40
40
30
30
20
20
10
10
0
0
?10
?10
?20
?20
?30
?40
?40
40
(c)
?30
?30
?20
?10
0
10
20
30
?40
40 ?40
40
30
30
20
20
10
10
0
0
?10
?10
?20
?20
?30
?40
?40
(e)
?30
?30
?20
?10
0
10
20
30
?40
40 ?40
(b)
?30
?20
?10
0
10
20
30
40
?20
?10
0
10
20
30
40
?20
?10
0
10
20
30
40
(d)
?30
(f)
?30
Figure 2: The clusters found for different values of ?, (a) 0.1?o (b) 0.5?o (c) ?o (d) 1.2?o (e) 1.6?o (f)
1.7?o . The exemplar data point of each cluster is denoted by a cross. The range of normal distributions for any
mixture model is illustrated here by circles around these exemplar points with radius equal to the square root of
the variance corresponding to the value of ? used by the algorithm (? = (2?)?1/2 ). Shapes and colors denote
cluster labels.
to yield reasonable results for the problems presented here. As a measure of clustering quality, we
use micro-averaged precision. We form the contingency tables for the cluster assignments found by
the algorithm and the true cluster labels. The percentage of the total number of data points assigned
to the right cluster is taken as the precision value of the clustering result. Out of the five runs with
different values of ?, we take the result with the best precision value for any of the two algorithms.
In the first experiment, we look at the performance of the two algorithms as the number of clusters
increases. Different data sets are generated by drawing 3000 data points around some number of
cluster centers in IR20 with all clusters having the same number of data points. Each component of
any data-point vector comes from an independent Gaussian distribution with unit variance around
the value of the corresponding component of its cluster center. Further, we randomly generate
components of the cluster-center vectors from a Gaussian distribution with variance 25 around zero.
In this experiment, for any value of ?, we repeat soft k-means 1000 times with random initialization
and pick the solution with the highest likelihood value. Figure 3 (left) presents the precision values as
a function of the number of clusters in the mixture distribution that generates the 3000 data points.
The error bars summarize the standard deviation of precision over 200 independently generated
data sets. We can see that performance of soft k-means drops as the number of clusters increases
while our performance remains relatively stable. Consequently, as illustrated in Figure 3 (right),
6
25
Average Precision Gain
Average Precision
105
100
95
90
85
80
75
?
Convex?Clustering?
Soft?k?means?
56
8 10 12
15
20
25
Number of Clusters
30
20
15
10
5
0
-5
5 6 8 10 12
15
20
25
Number of Clusters
30
?
Figure 3: Left: average precision values of Convex Clustering and Soft k-means for different numbers of
clusters in 200 data sets of 3000 data points. Right: precision gain of using Convex Clustering in the same
experiment.
the average precision difference of the two algorithms increases with increasing number of clusters.
Since the total number of data points remains the same, increasing the number of clusters results in
increasing complexity of the problem with presumably more local minima to the cost function. This
trend agrees with our expectation that the results of the convex algorithm improves relative to the
original one with a larger search space.
As another way of exploring the complexity of the problem, in our second experiment, we generate
data sets with different dimensionality. We draw 100 random vectors, with unit variance Gaussian
distribution in each component, around any of the 40 cluster centers to make
? data sets of total 4000
data points. The cluster centers are chosen to be of the form (0, ? ? ? , 0, 50, 0, ? ? ? , 0) where we
change the position of the nonzero component to make different cluster centers. In this way, the
pairwise distance between all cluster centers is 50 by formation.
Figure 4 (left) presents the precision values found for the two algorithms when 4000 points lie in
spaces with different dimensionality. Soft k-means was repeated 100 times with random initialization for any value of ?. Again, the relative performance of Convex Clustering when compared to
soft k-means improves with the increasing problem complexity. This is another evidence that for
larger data sets the less precise nature of our constrained search, as compared to the full mixture
models, is well compensated by its ability to always find its global optimum. In general the value
of ? should be tuned to find the desired solution. We plan to develop a more systematic way for
choosing ?.
6
Discussion and Related Work
Since only the distances take part in our formulation and the values of data point vectors are not
required, we can extend this method to any proximity data. Given a matrix Dn?n = [dij ] that
describes the pairwise symmetric or asymmetric dissimilarities between data points, we can replace
d? (xi , xj )?s in (8) with dij ?s and solve the same minimization problem whose convexity can be
directly verified. The algorithm works in exactly the same way and all the aforementioned properties
carry over to this case as well.
A previous application of rate-distortion theoretic ideas in clustering led to the deterministic annealing (DA). In order to avoid local optima, DA gradually decreases an annealing parameter, tightening
the bound on the average distortion [9]. However, at each temperature the same standard EM updates
are used. Consequently, the method does not provide strong guarantees on the global optimality of
the resulting solution.
Affinity propagation is another recent exemplar-based clustering algorithm. It finds the exemplars
by forming a factor graph and running a message passing algorithm on the graph as a way to minimize the clustering cost function [6]. If the data point i is represented by the data point ci , assuming
a common preference parameter
value ? for all data points, the objective function of affinity propP
agation can be stated as i dici + ?k where k is the number of found clusters. The second term
is needed to put some cost on picking any point as an exemplar to prevent the trivial case of sending any point to itself. Outstanding results have been reported for the affinity propagation [6] but
theoretical guarantees on its convergence or optimality are yet to be established.
7
?
Average Precision Gain
Average Precision
?
100
95
Convex?Clustering?
Soft?k?means?
90
85
50
75
100
125
Number of Dimensions
150
18
16
14
12
10
8
50
75
100
125
Number of Dimensions
150
?
Figure 4: Left: average precision values of Convex Clustering and Soft k-means for different data dimensionality in 100 data sets of 4000 data points with 40 clusters. Right: precision gain of using Convex Clustering in
the same experiment.
We can interpret our algorithm as a relaxation of this combinatorial problem to the soft assignment
case by introducing probabilities
P P(ci = j) = rij of associating point i with an exemplar j. The
1
marginal distribution qj = n i rij is the probability that point j is an exemplar. In order to use
analytical tools for solving this problem, we have to turn the regularization term k into a continuous
function of assignments. A possible choice might be H(q), entropy of distribution qj , which is
bounded above by log k. However, the entropy function is concave and any local or global minimum
of a concave minimization problem over a simplex occurs in an extreme point of the feasible domain
which in our case corresponds to the original combinatorial hard assignments [11]. In contrast, using
mutual information I(I, J) induced by rij as the regularizing term turns the problem into a convex
problem. Mutual information is convex and serves as a lower bound on H(q) since it is always less
than the entropy of both of its random variables. Now, by letting ? = 1/? we arrive to our cost
function in (8). We can therefore see that our formulation is a convex relaxation of the original
combinatorial problem.
In conclusion, we proposed a framework for constraining the search space of general mixture models
to achieve global optimality of the solution. In particular, our method promises to be useful in
problems with large data sets where regular mixture models fail to yield consistent results due to
their sensitivity to initialization. We also plan to further investigate generalization of this idea to the
models with more elaborate parameterizations.
Acknowledgements. This research was supported in part by the NIH NIBIB NAMIC U54EB005149, NCRR NAC P41-RR13218 grants and by the NSF CAREER grant 0642971.
References
[1] J. Puzicha, T. Hofmann, and J. M. Buhmann, ?Theory of proximity based clustering: Structure detection
by optimization,? Pattern Recognition, Vol. 33, No. 4, pp. 617?634, 2000.
[2] A. Y. Ng, M. I. Jordan, and Y. Weiss, ?On Spectral Clustering: Analysis and an Algorithml,? Advances in
Neural Information Processing Systems, Vol. 14, pp. 849?856, 2001.
[3] I. Csisz?ar and P. Shields, ?Information Theory and Statistics: A Tutorial,? Foundations and Trends in
Communications and Information Theory, Vol. 1, No. 4, pp. 417?528, 2004.
[4] M. Meil?a, and D. Heckerman, ?An Experimental Comparison of Model-Based Clustering Methods,? Machine Learning, Vol. 42, No. 1-2, pp. 9?29, 2001.
[5] J. Han, and M. Kamber, Data Mining: Concepts and Techniques, Morgan Kaufmann, 2001.
[6] B. J. Frey, and D. Dueck, ?Clustering by Passing Messages Between Data Points,? Science, Vol. 315, No.
5814, pp. 972?976, 2007.
[7] A. Banerjee, S. Merugu, I. S.Dhillon, and J. Ghosh, ?Clustering with Bregman Divergences,? Journal of
Machine Learning Research, Vol. 6, No. 6, pp. 1705-1749, 2005.
[8] T. M. Cover, and J. A. Thomas, Elements of information theory, New York, Wiley, 1991.
[9] K. Rose, ?Deterministic Annealing for Clustering, Compression, Classification, Regression, and Related
Optimization Problems,? Proceedings of the IEEE, Vol. 86, No. 11, pp. 2210?2239, 1998.
[10] R. E. .Blahut, ?Computation of Channel Capacity and Rate-Distortion Functions,? IEEE Transactions on
Information Theory, Vol. IT-18, No. 4, pp. 460?473, 1974.
[11] M. Pardalos, and J. B. Rosen, ?Methods for Global Concave Minimization: A Bibliographic Survey,?
SIAM Review, Vol. 28, No. 3., pp. 367?379, 1986.
8
| 3181 |@word illustrating:1 compression:2 seek:2 pick:1 solid:1 harder:1 carry:1 initial:2 bibliographic:1 tuned:1 comparing:2 yet:2 assigning:1 shape:3 hofmann:1 remove:1 drop:1 update:2 intelligence:1 guess:1 parametrization:1 parameterizations:1 bijection:1 preference:1 five:2 dn:1 beta:1 fitting:3 introduce:2 pairwise:5 inter:1 expected:3 behavior:2 inspired:1 globally:2 decreasing:1 automatically:1 increasing:4 becomes:4 moreover:1 underlying:1 maximizes:1 bounded:2 namic:1 minimizes:3 emerging:1 finding:4 ghosh:1 nj:7 guarantee:3 dueck:1 concave:3 exactly:2 scaled:1 k2:1 control:1 unit:3 grant:2 appear:1 local:6 frey:1 encoding:2 propp:1 meil:1 fluctuation:1 might:1 chose:1 initialization:13 studied:1 range:2 averaged:1 practice:1 procedure:1 empirical:3 thought:2 regular:4 get:1 cannot:1 close:3 onto:1 interior:1 ir2:1 put:1 conventional:1 equivalent:4 map:1 center:11 maximizing:1 compensated:1 straightforward:1 deterministic:2 independently:1 convex:22 survey:1 sharpness:1 splitting:1 insight:1 rule:1 notion:1 limiting:1 exact:1 designing:1 element:3 trend:2 recognition:1 asymmetric:1 observed:1 cloud:1 role:1 rij:15 solved:2 region:1 ensures:1 trade:1 decrease:2 highest:1 lashkari:1 intuition:1 rose:1 convexity:4 complexity:5 rigorously:1 depend:3 solving:2 represented:2 alphabet:1 fast:1 describe:1 artificial:1 formation:1 neighborhood:1 choosing:3 whose:3 encoded:1 widely:2 solve:4 larger:2 say:1 distortion:18 otherwise:2 compressed:1 drawing:2 ability:1 statistic:1 transform:1 itself:2 final:4 sequence:2 analytical:1 product:2 achieve:1 intuitive:1 csisz:1 normalize:1 convergence:4 cluster:61 optimum:8 help:2 depending:1 wider:1 illustrate:2 develop:1 exemplar:24 strong:2 auxiliary:1 come:2 implies:2 quantify:1 radius:1 centered:2 pardalos:1 explains:1 generalization:1 proposition:3 biological:1 exploring:1 proximity:6 lying:1 considered:1 around:7 normal:3 exp:2 presumably:1 mapping:2 visualize:1 substituting:2 estimation:1 lose:1 combinatorial:5 label:2 sensitive:2 largest:1 agrees:1 kq0:3 tool:1 minimization:11 mit:1 always:3 gaussian:8 aim:1 avoid:3 pn:2 varying:1 likelihood:12 indicates:1 contrast:1 centroid:1 i0:1 typically:2 entire:2 selects:1 among:1 aforementioned:1 classification:1 denoted:2 priori:1 exponent:1 plan:2 constrained:1 mutual:3 marginal:4 field:1 equal:3 once:1 having:3 ng:1 broad:1 look:2 unsupervised:1 simplex:2 rosen:1 inherent:1 employ:2 few:1 micro:1 randomly:1 divergence:10 maxj:1 phase:1 blahut:2 n1:1 detection:1 message:2 investigate:1 mining:1 intra:1 mixture:26 extreme:2 bregman:3 ncrr:1 euclidean:3 logarithm:1 desired:2 re:1 circle:1 theoretical:1 instance:5 soft:19 modeling:4 ar:1 cover:1 assignment:8 cost:17 introducing:1 deviation:1 vertex:1 kq:4 uniform:1 dij:2 reported:1 kxi:1 synthetic:1 density:1 sensitivity:3 siam:1 csail:1 probabilistic:2 off:1 systematic:1 picking:1 again:1 central:1 choose:1 attaining:1 coefficient:1 view:1 root:1 responsibility:2 slope:1 minimize:1 square:1 variance:7 kaufmann:1 merugu:1 yield:5 correspond:2 identify:1 tolerating:1 converged:1 definition:1 pp:9 stop:1 gain:4 proved:2 treatment:1 massachusetts:1 color:1 improves:2 dimensionality:3 sophisticated:1 higher:2 wei:1 formulation:9 furthermore:2 transitory:1 ei:2 banerjee:1 propagation:3 defines:1 quality:1 lossy:1 grows:2 nac:1 name:1 concept:1 true:1 hence:2 assigned:2 regularization:1 q0:11 laboratory:1 satisfactory:1 leibler:1 iteratively:1 nonzero:5 illustrated:3 symmetric:1 dhillon:1 width:1 theoretic:2 complete:1 temperature:2 fj:6 reasoning:1 nih:1 
common:2 empirically:1 arimoto:1 extend:1 approximates:1 interpret:2 significant:1 refer:1 danial:2 cambridge:1 unconstrained:2 stable:3 han:1 similarity:1 closest:1 recent:1 certain:1 morgan:1 minimum:4 greater:2 employed:1 converge:2 maximize:2 monotonically:1 full:1 reduces:1 smooth:1 cross:2 long:2 variant:1 basic:1 regression:1 expectation:2 iteration:5 represent:1 achieved:1 golland:1 justified:1 want:1 annealing:3 source:4 unlike:1 induced:1 member:1 jordan:1 noting:1 agation:1 split:1 enough:1 easy:1 constraining:1 variety:1 xj:11 zi:1 restrict:2 associating:1 idea:4 qj:34 expression:4 ird:3 passing:2 york:1 useful:1 polina:2 generate:2 percentage:1 nsf:1 tutorial:1 dotted:2 nzi:1 estimated:1 per:3 promise:1 vol:9 express:1 threshold:2 nevertheless:1 prevent:1 neither:1 verified:1 graph:4 excludes:1 relaxation:2 run:4 inverse:1 arrive:1 family:6 reasonable:4 almost:1 draw:1 scaling:1 bit:2 bound:2 guaranteed:2 followed:1 rr13218:1 vectorial:2 precisely:1 generates:1 aspect:1 speed:1 argument:1 optimality:5 min:2 relatively:6 developing:1 combination:1 smaller:1 remain:1 em:6 describes:1 heckerman:1 intuitively:1 gradually:1 medoids:1 sij:5 taken:1 remains:4 turn:3 discus:1 fail:1 needed:1 letting:2 serf:1 sending:1 apply:1 observe:1 spectral:2 alternative:1 original:5 compress:1 denotes:1 clustering:32 remaining:1 running:2 thomas:1 unifying:1 const:5 build:1 objective:4 move:1 occurs:2 parametric:1 gradient:2 affinity:4 distance:11 separate:1 mapped:1 capacity:1 manifold:2 trivial:1 assuming:1 index:5 relationship:1 providing:1 minimizing:1 equivalently:1 statement:1 stated:2 tightening:1 implementation:3 unknown:3 conversion:1 descent:2 defining:1 communication:1 precise:1 introduced:1 namely:4 required:1 specified:2 kl:4 connection:3 distinction:1 established:2 beyond:1 bar:1 usually:2 below:1 pattern:1 regime:1 challenge:2 summarize:1 circumvent:1 turning:2 buhmann:1 scheme:1 improve:2 technology:1 unconditionally:1 kj:2 text:1 review:1 acknowledgement:1 tangent:1 multiplication:1 relative:2 expect:3 interesting:1 foundation:1 contingency:1 consistent:1 row:1 repeat:1 supported:1 free:1 jth:1 institute:1 wide:1 taking:3 sparse:2 dimension:3 xn:1 stand:1 transition:1 commonly:1 preprocessing:1 employing:1 transaction:1 approximate:1 compact:3 nibib:1 kullback:1 gene:1 keep:1 monotonicity:1 global:9 xi:13 search:6 iterative:1 continuous:1 table:1 mj:3 nature:4 channel:1 inherently:1 career:1 investigated:1 complex:1 domain:1 da:2 main:2 dense:1 motivation:1 n2:3 repeated:1 x1:1 elaborate:1 simplices:1 slow:1 wiley:1 shield:1 precision:15 position:1 exponential:5 candidate:2 lie:3 symbol:1 evidence:1 effectively:1 ci:2 dissimilarity:2 p41:1 illustrates:1 gap:2 entropy:5 led:1 simply:1 likely:1 infinitely:1 explore:1 forming:1 expressed:2 corresponds:1 determines:2 ma:1 goal:1 formulated:2 narrower:1 consequently:3 kmeans:1 replace:1 feasible:1 hard:7 change:1 included:1 specifically:1 called:2 total:3 experimental:4 puzicha:1 support:2 arises:1 outstanding:1 regularizing:1 |
2,405 | 3,182 | Random Features for Large-Scale Kernel Machines
Benjamin Recht
Caltech IST
Pasadena, CA 91125
[email protected]
Ali Rahimi
Intel Research Seattle
Seattle, WA 98105
[email protected]
Abstract
To accelerate the training of kernel machines, we propose to map the input data
to a randomized low-dimensional feature space and then apply existing fast linear
methods. The features are designed so that the inner products of the transformed
data are approximately equal to those in the feature space of a user specified shiftinvariant kernel. We explore two sets of random features, provide convergence
bounds on their ability to approximate various radial basis kernels, and show
that in large-scale classification and regression tasks linear machine learning algorithms applied to these features outperform state-of-the-art large-scale kernel
machines.
1
Introduction
Kernel machines such as the Support Vector Machine are attractive because they can approximate
any function or decision boundary arbitrarily well with enough training data. Unfortunately, methods that operate on the kernel matrix (Gram matrix) of the data scale poorly with the size of the
training dataset. For example, even with the most powerful workstation, it might take days to
train a nonlinear SVM on a dataset with half a million training examples. On the other hand, linear machines can be trained very quickly on large datasets when the dimensionality of the data is
small [1, 2, 3]. One way to take advantage of these linear training algorithms for training nonlinear
machines is to approximately factor the kernel matrix and to treat the columns of the factor matrix
as features in a linear machine (see for example [4]). Instead, we propose to factor the kernel function itself. This factorization does not depend on the data, and allows us to convert the training and
evaluation of a kernel machine into the corresponding operations of a linear machine by mapping
data into a relatively low-dimensional randomized feature space. Our experiments show that these
random features, combined with very simple linear learning techniques, compete favorably in speed
and accuracy with state-of-the-art kernel-based classification and regression algorithms, including
those that factor the kernel matrix.
The kernel trick is a simple way to generate features for algorithms that depend only on the inner
product between pairs of input points. It relies on the observation that any positive definite function
k(x, y) with x, y ? Rd defines an inner product and a lifting ? so that the inner product between
lifted datapoints can be quickly computed as h?(x), ?(y)i = k(x, y). The cost of this convenience
is that the algorithm accesses the data only through evaluations of k(x, y), or through the kernel
matrix consisting of k applied to all pairs of datapoints. As a result, large training sets incur large
computational and storage costs.
Instead of relying on the implicit lifting provided by the kernel trick, we propose explicitly mapping
the data to a low-dimensional Euclidean inner product space using a randomized feature map z :
Rd ? RD so that the inner product between a pair of transformed points approximates their kernel
evaluation:
k(x, y) = h?(x), ?(y)i ? z(x)0 z(y).
(1)
1
Unlike the kernel?s lifting ?, z is low-dimensional. Thus, we can simply transform the input with
z, and then apply fast linear learning methods to approximate the answer of the corresponding
nonlinear kernel machine. In what follows, we show how to construct feature spaces that uniformly
approximate popular shift-invariant kernels k(x ? y) to within with only D = O(d?2 log 12 )
dimensions, and empirically show that excellent regression and classification performance can be
obtained for even smaller D.
In addition to giving us access to extremely fast learning algorithms, these randomized feature maps
also provide a way to quickly evaluate the machine. With the kernel trick, evaluating the machine
PN
at a test point x requires computing f (x) = i=1 ci k(xi , x), which requires O(N d) operations to
compute and requires retaining much of the dataset unless the machine is very sparse. This is often
unacceptable for large datasets. On the other hand, after learning a hyperplane w, a linear machine
can be evaluated by simply computing f (x) = w0 z(x), which, with the randomized feature maps
presented here, requires only O(D + d) operations and storage.
We demonstrate two randomized feature maps for approximating shift invariant kernels. Our first
randomized map, presented in Section 3, consists of sinusoids randomly drawn from the Fourier
transform of the kernel function we seek to approximate. Because this map is smooth, it is wellsuited for interpolation tasks. Our second randomized map, presented in Section 4, partitions the
input space using randomly shifted grids at randomly chosen resolutions. This mapping is not
smooth, but leverages the proximity between input points, and is well-suited for approximating kernels that depend on the L1 distance between datapoints. Our experiments in Section 5 demonstrate
that combining these randomized maps with simple linear learning algorithms competes favorably
with state-of-the-art training algorithms in a variety of regression and classification scenarios.
2
Related Work
The most popular methods for large-scale kernel machines are decomposition methods for solving
Support Vector Machines (SVM). These methods iteratively update a subset of the kernel machine?s
coefficients using coordinate ascent until KKT conditions are satisfied to within a tolerance [5,
6]. While such approaches are versatile workhorses, they do not always scale to datasets with
more than hundreds of thousands of datapoints for non-linear problems. To extend learning with
kernel machines to these scales, several approximation schemes have been proposed for speeding
up operations involving the kernel matrix.
The evaluation of the kernel function can be sped up using linear random projections [7]. Throwing
away individual entries [7] or entire rows [8, 9, 10] of the kernel matrix lowers the storage and
computational cost of operating on the kernel matrix. These approximations either preserve the
separability of the data [8], or produce good low-rank or sparse approximations of the true kernel
matrix [7, 9]. Fast multipole and multigrid methods have also been proposed for this purpose,
but, while they appear to be effective on small and low-dimensional problems, they have not been
demonstrated on large datasets. Further, the quality of the Hermite or Taylor approximation that
these methods rely on degrades exponentially with the dimensionality of the dataset [11]. Fast
nearest neighbor lookup with KD-Trees has been used to approximate multiplication with the kernel
matrix, and in turn, a variety of other operations [12]. The feature map we present in Section 4 is
reminiscent of KD-trees in that it partitions the input space using multi-resolution axis-aligned grids
similar to those developed in [13] for embedding linear assignment problems.
3
Random Fourier Features
Our first set of random features project data points onto a randomly chosen line, and then pass the
resulting scalar through a sinusoid (see Figure 1 and Algorithm 1). The random lines are drawn
from a distribution so as to guarantee that the inner product of two transformed points approximates
the desired shift-invariant kernel.
The following classical theorem from harmonic analysis provides the key insight behind this transformation:
Theorem 1 (Bochner [15]). A continuous kernel k(x, y) = k(x ? y) on Rd is positive definite if
and only if k(?) is the Fourier transform of a non-negative measure.
2
x
Kernel Name
R2
?
Gaussian
Laplacian
Cauchy
RD
k(?)
?
k?k2
2
2
e
e?k?k1
Q
2
d 1+?2d
p(?)
D
?2 ?
(2?)
e
Q
1
k?k2
2
2
d ?(1+?d2 )
?k?k1
e
Figure 1: Random Fourier Features. Each component of the feature map z(x) projects x onto a random
direction ? drawn from the Fourier transform p(?) of k(?), and wraps this line onto the unit circle in R2 .
After transforming two points x and y in this way, their inner product is an unbiased estimator of k(x, y). The
table lists some popular shift-invariant kernels and their Fourier transforms. To deal with non-isotropic kernels,
the data may be whitened before applying one of these kernels.
If the kernel k(?) is properly scaled, Bochner?s theorem guarantees that its Fourier transform p(?)
0
is a proper probability distribution. Defining ?? (x) = ej? x , we have
Z
k(x ? y) =
0
p(?)ej? (x?y) d? = E? [?? (x)?? (y)? ],
(2)
Rd
so ?? (x)?? (y)? is an unbiased estimate of k(x, y) when ? is drawn from p.
To obtain a real-valued random feature for k, note that both the probability distribution p(?) and
0
the kernel k(?) are real, so the integrand ej? (x?y) may be replaced with cos ? 0 (x ? y). Defining
0
z? (x) = [ cos(x) sin(x) ] gives a real-valued mapping that satisfies the condition E[z??(x)0 z? (y)] =
k(x, y), since z? (x)0 z? (y) = cos ? 0 (x ? y). Other mappings such as z? (x) = 2 cos(? 0 x +
b), where ? is drawn from p(?) and b is drawn uniformly from [0, 2?], also satisfy the condition
E[z? (x)0 z? (y)] = k(x, y).
We can lower the variance of z? (x)0 z? (y) by ?
concatenating D randomly chosen z? into a column
vector z and normalizing each component by D. The inner product of points featureized by the
PD
1
2D-dimensional random feature z, z(x)0 z(y) = D
j=1 z?j (x)z?j (y) is a sample average of
z?j (x)z?j (y) and is therefore a lower variance approximation to the expectation (2).
Since z? (x)0 z? (y) is bounded between -1 and 1, for a fixed pair of points x and y, Hoeffding?s inequality guarantees exponentially fast convergence in D between z(x)0 z(y) and k(x, y):
Pr [|z(x)0 z(y) ? k(x, y)| ? ] ? 2 exp(?D2 /2). Building on this observation, a much stronger
assertion can be proven for every pair of points in the input space simultaneously:
Claim 1 (Uniform convergence of Fourier features). Let M be a compact subset of Rd with diameter diam(M). Then, for the mapping z defined in Algorithm 1, we have
2
?p diam(M)
D2
sup |z(x)0 z(y) ? k(x, y)| ? ? 28
exp ?
,
4(d + 2)
x,y?M
Pr
where ?p2 ? Ep [? 0 ?] is the second moment of the Fourier transform of k.
Further, supx,y?M |z(x)0 z(y) ? k(y, x)| ? with any constant probability when D =
? diam(M)
? d2 log p
.
The proof of this assertion first guarantees that z(x)0 z(y) is close to k(x ? y) for the centers of an
-net over M ? M. This result is then extended to the entire space using the fact that the feature
map is smooth with high probability. See the Appendix for details.
By a standard Fourier identity, the scalar ?p2 is equal to the trace of the Hessian of k at 0. It
quantifies the curvature
of the kernel at the origin. For the spherical Gaussian kernel, k(x, y) =
exp ??kx ? yk2 , we have ?p2 = 2d?.
3
Algorithm 1 Random Fourier Features.
Require: A positive definite shift-invariant kernel k(x, y) = k(x ? y).
Ensure: A randomized feature map z(x) : Rd ? R2D so that Rz(x)0 z(y) ? k(x ? y).
0
1
e?j? ? k(?) d?.
Compute the Fourier transform p of the kernel k: p(?) = 2?
Draw D iid q
samples ?1 , ? ? ? , ?D ? Rd from p.
Let z(x) ?
4
1
D
[ cos(?10 x) ???
0
0
0
cos(?D
x) sin(?10 x) ??? sin(?D
x) ] .
Random Binning Features
Our second random map partitions the input space using randomly shifted grids at randomly chosen
resolutions and assigns to an input point a binary bit string that corresponds to the bin in which it
falls (see Figure 2 and Algorithm 2). The grids are constructed so that the probability that two points
x and y are assigned to the same bin is proportional to k(x, y). The inner product between a pair of
transformed points is proportional to the number of times the two points are binned together, and is
therefore an unbiased estimate of k(x, y).
10000000
01000000
00100000
00010000
00001000
00000100
00000010
00000001
?
k(xi , xj )
+
z1 (xi )0 z1 (xj )
z2 (xi )0 z2 (xj )
+
z3 (xi )0 z3 (xj )
+??? =
z(xi )0 z(xj )
Figure 2: Random Binning Features. (left) The algorithm repeatedly partitions the input space using a randomly shifted grid at a randomly chosen resolution and assigns to each point x the bit string z(x) associated
with the bin to which it is assigned. (right) The binary adjacency matrix that describes this partitioning has
z(xi )0 z(xj ) in its ijth entry and is an unbiased estimate of kernel matrix.
We first
describe a randomized mapping to approximate the ?hat? kernel khat (x, y; ?) =
max 0, 1 ? |x?y|
on a compact subset of R ? R, then show how to construct mappings for
?
more general separable multi-dimensional kernels. Partition the real number line with a grid of
pitch ?, and shift this grid randomly by an amount u drawn uniformly at random from [0, ?]. This
grid partitions the real number line into intervals [u + n?, u + (n + 1)?] forall integers n.
The
|x?y|
[13].
probability that two points x and y fall in the same bin in this grid is max 0, 1 ? ?
In other words, if we number the bins of the grid so that a point x falls in bin x
? = b x?u
? c and y
y?u
falls in bin y? = b ? c, then Pru [?
x = y?|?] = khat (x, y; ?). If we encode x
? as a binary indicator
vector z(x) over the bins, z(x)0 z(y) = 1 if x and y fall in the same bin and zero otherwise, so
Pru [z(x)0 z(y) = 1|?] = Eu [z(x)0 z(y)|?] = khat (x, y; ?). Therefore z is a random map for khat .
Now consider shift-invariant kernels that
R ?can be written as convex combinations of hat kernels on a
compact subset of R ? R: k(x, y) = 0 khat (x, y; ?)p(?) d?. If the pitch ? of the grid is sampled
from p, z again gives a random map for k because E?,u [z(x)0 z(y)] = E? [Eu [z(x)0 z(y)|?]] =
E? [khat (x, y; ?)] = k(x, y). That is, if the pitch ? of the grid is sampled from p, and the shift u is
drawn uniformly from [0, ?] the probability that x and y are binned together is k(x, y). Lemma 1 in
?
the appendix shows that p can be easily recovered from k by setting p(?) = ? k(?).
For example, in
the case of the Laplacian kernel, kLaplacian (x, y) = exp(?|x ? y|), p(?) is the Gamma distribution
? exp(??). For the Gaussian kernel, k? is not everywhere positive, so this procedure does not yield a
random map.
Random maps for separable multivariate shift-invariant kernels of the form k(x ? y) =
Qd
m
m
m=1 km (|x ?y |) (such as the multivariate Laplacian kernel) can be constructed in a similar way
if each km can be written as a convex combination of hat kernels. We apply the above binning process over each dimension of Rd independently. The probability that xm and y m are binned together
in dimension m is km (|xm ? y m |). Since the binning process is independent across dimensions, the
4
Qd
probability that x and y are binned together in every dimension is m=1 km (|xm ?y m |) = k(x?y).
1
d
In this multivariate case, z(x) encodes the integer vector [ x? ,??? ,?x ] corresponding to each bin of the
d-dimensional grid as a binary indicator vector. In practice, to prevent overflows when computing
z(x) when d is large, our implementation eliminates unoccupied bins from the representation. Since
there are never more bins than training points, this ensures no overflow is possible.
We can again reduce the variance of the estimator z(x)'z(y) by concatenating P random binning
functions z_p into a larger list of features z and scaling by √(1/P). The inner product
z(x)'z(y) = (1/P) ∑_{p=1}^P z_p(x)'z_p(y) is the average of P independent estimates z_p(x)'z_p(y)
and therefore has lower variance. Since z_p(x)'z_p(y) is binary, Hoeffding's inequality guarantees
that for a fixed pair of points x and y, z(x)'z(y) converges exponentially quickly to k(x, y) as a
function of P. Again, a much stronger claim is that this convergence holds simultaneously for all
points:
Claim 2. Let M be a compact subset of R^d with diameter diam(M). Let α = E[1/δ] and let L_k
denote the Lipschitz constant of k with respect to the L1 norm. With z as above, we have

    Pr[ sup_{x,y∈M} |z(x)'z(y) − k(x, y)| ≤ ε ] ≥ 1 − 36 d P α diam(M) exp( −( Pε²/8 + ln(ε/L_k) ) / (d + 1) ).

Note that α = ∫₀^∞ (1/δ) p(δ) dδ = ∫₀^∞ k̈(δ) dδ is 1, and L_k = 1, for the Laplacian kernel. The proof
of the claim (see the appendix) partitions M × M into a few small rectangular cells over which
k(x, y) does not change much and z(x) and z(y) are constant. With high probability, at the centers
of these cells z(x)'z(y) is close to k(x, y), which guarantees that k(x, y) and z(x)'z(y) are close
throughout M × M.
Algorithm 2 Random Binning Features.
Require: A point x ∈ R^d. A kernel function k(x, y) = ∏_{m=1}^d k_m(|x^m − y^m|), so that
  p_m(δ) ≡ δ k̈_m(δ) is a probability distribution on δ ≥ 0.
Ensure: A randomized feature map z(x) so that z(x)'z(y) ≈ k(x − y).
for p = 1 . . . P do
  Draw grid parameters δ, u ∈ R^d with the pitch δ^m ~ p_m, and shift u^m drawn from the
  uniform distribution on [0, δ^m].
  Let z return the coordinate of the bin containing x as a binary indicator vector
  z_p(x) ≡ hash(⌈(x¹ − u¹)/δ¹⌉, · · · , ⌈(x^d − u^d)/δ^d⌉).
end for
z(x) ≡ √(1/P) [ z_1(x) · · · z_P(x) ]'.
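A compact NumPy rendering of Algorithm 2 might look as follows. This is our own sketch for the
multivariate Laplacian kernel: a Python dictionary stands in for the hash function, and the parameter
names (P, seed) are ours, not the paper's.

import numpy as np

def random_binning_features(X, P=30, seed=0):
    """X: (n, d) array. Returns Z with Z @ Z.T approximating the kernel matrix."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    blocks = []
    for _ in range(P):
        # pitch ~ p(t) = t * exp(-t), the Gamma(2, 1) density from Lemma 1
        delta = rng.gamma(shape=2.0, scale=1.0, size=d)
        u = rng.uniform(0.0, delta)                  # one shift per dimension
        bins = np.floor((X - u) / delta).astype(int)
        index = {}                                   # occupied bins -> column ids
        cols = np.array([index.setdefault(tuple(b), len(index)) for b in bins])
        Z = np.zeros((n, len(index)))
        Z[np.arange(n), cols] = 1.0                  # binary indicator features
        blocks.append(Z)
    return np.hstack(blocks) / np.sqrt(P)            # Z Z' averages the P grids

X = np.random.default_rng(1).uniform(0.0, 1.0, size=(200, 3))
Z = random_binning_features(X)
K_exact = np.exp(-np.abs(X[:, None, :] - X[None, :, :]).sum(-1))  # Laplacian
print(np.abs(Z @ Z.T - K_exact).mean())              # error shrinks as P grows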
5 Experiments
The experiments summarized in Table 1 show that ridge regression with our random features is a fast
way to approximate the training of supervised kernel machines. We focus our comparisons against
the Core Vector Machine [14] because it was shown in [14] to be both faster and more accurate than
other known approaches for training kernel machines, including, in most cases, random sampling of
datapoints [8]. The experiments were conducted on the five standard large-scale datasets evaluated
in [14], excluding the synthetic datasets. We replicated the results in the literature pertaining to the
CVM, SVMlight , and libSVM using binaries provided by the respective authors.1 For the random
feature experiments, we trained regressors and classifiers by solving the ridge regression problem
min_w ‖Z'w − y‖₂² + λ‖w‖₂², where y denotes the vector of desired outputs and Z denotes the
matrix of random features.
¹ We include KDDCUP99 results for completeness, but note this dataset is inherently oversampled: training
an SVM (or least squares with random features) on a random sampling of 50 training examples (0.001% of the
training dataset) is sufficient to consistently yield a test-error on the order of 8%. Also, while we were able
to replicate the CVM's 6.2% error rate with the parameters supplied by the authors, retraining after randomly
shuffling the training set results in 18% error and increases the computation time by an order of magnitude.
Even on the original ordering, perturbing the CVM's regularization parameter by a mere 15% yields 49% error
rate on the test set [16].
                           Fourier+LS              Binning+LS             CVM                     Exact SVM
CPU                        3.6%   20 secs  D=300   5.3%   3 mins   P=350  5.5%   51 secs          11%    31 secs   (ASVM)
(regression, 6500 instances, 21 dims)
Census                     5%     36 secs  D=500   7.5%   19 mins  P=30   8.8%   7.5 mins         9%     13 mins   (SVMTorch)
(regression, 18,000 instances, 119 dims)
Adult                      14.9%  9 secs   D=500   15.3%  1.5 mins P=30   14.8%  73 mins          15.1%  7 mins    (SVMlight)
(classification, 32,000 instances, 123 dims)
Forest Cover               11.6%  71 mins  D=5000  2.2%   25 mins  P=50   2.3%   7.5 hrs          2.2%   44 hrs    (libSVM)
(classification, 522,000 instances, 54 dims)
KDDCUP99 (see footnote)    7.3%   1.5 min  D=50    7.3%   35 mins  P=10   6.2% (18%)  1.4 secs (20 secs)  8.3%  < 1s  (SVM+sampling)
(classification, 4,900,000 instances, 127 dims)

Table 1: Comparison of testing error and training time between ridge regression with random features, Core
Vector Machine, and various state-of-the-art exact methods reported in the literature. For classification tasks,
the percent of testing points incorrectly predicted is reported, and for regression tasks, the RMS error
normalized by the norm of the ground truth.
Figure 3: Accuracy on test data continues to improve as the training set grows. On the Forest dataset, using
random binning, doubling the dataset size reduces testing error by up to 40% (left). Error decays quickly as P
grows (middle). Training time grows slowly as P grows (right). [Panels: testing error vs. training set size;
% error vs. P; training+testing time (sec) vs. P, for P = 10 to 50.]
To evaluate the resulting machine on a datapoint x, we can simply compute w'z(x). Despite its
simplicity, ridge regression with random features is faster than, and provides competitive accuracy
with, alternative methods. It also produces very compact functions because only w and a set of O(D)
random vectors or a hash-table of partitions need to be retained. Random Fourier features perform
better on the tasks that largely rely on interpolation. On the other hand, random binning features
perform better on memorization tasks (those for which the standard SVM requires many support
vectors), because they explicitly preserve locality in the input space. This difference is most dramatic
in the Forest dataset.
Figure 3 (left) illustrates the benefit of training classifiers on larger datasets, where accuracy
continues to improve as more data are used in training. Figure 3 (middle) and (right) show that good
performance can be obtained even from a modest number of features.
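To make the pipeline concrete, here is a small end-to-end sketch, on our own synthetic data, that fits
the ridge regression objective above on random binning features and evaluates predictions as w'z(x).
It reuses the random_binning_features sketch after Algorithm 2, and the regularizer value is an
arbitrary choice.

import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(1000, 3))
y = np.sin(X.sum(axis=1)) + 0.1 * rng.standard_normal(1000)

Z = random_binning_features(X, P=30)          # sketch from Algorithm 2 above
lam = 1e-3                                    # arbitrary regularization strength
w = np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ y)
y_hat = Z @ w                                 # evaluation is just w' z(x)
print(np.sqrt(np.mean((y_hat - y) ** 2)))     # training RMS error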
6 Conclusion
We have presented randomized features whose inner products uniformly approximate many popular
kernels. We showed empirically that providing these features as input to a standard linear learning
algorithm produces results that are competitive with state-of-the-art large-scale kernel machines in
accuracy, training time, and evaluation time.
It is worth noting that hybrids of Fourier features and binning features can be constructed by concatenating these features. While we have focused on regression and classification, our features can
be applied to accelerate other kernel methods, including semi-supervised and unsupervised learning algorithms. In all of these cases, a significant computational speed-up can be achieved by first
computing random features and then applying the associated linear technique.
7 Acknowledgements
We thank Eric Garcia for help on early versions of these features, Sameer Agarwal and James R.
Lee for helpful discussions, and Erik Learned-Miller and Andres Corrada-Emmanuel for helpful
corrections.
References
[1] T. Joachims. Training linear SVMs in linear time. In ACM Conference on Knowledge Discovery and
Data Mining (KDD), 2006.
[2] M. C. Ferris and T. S. Munson. Interior-point methods for massive Support Vector Machines. SIAM
Journal of Optimization, 13(3):783–804, 2003.
[3] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal Estimated sub-GrAdient SOlver for SVM.
In IEEE International Conference on Machine Learning (ICML), 2007.
[4] D. DeCoste and D. Mazzoni. Fast query-optimized kernel machine classification via incremental approximate nearest support vectors. In IEEE International Conference on Machine Learning (ICML), 2003.
[5] J. Platt. Using sparseness and analytic QP to speed training of Support Vector Machines. In Advances in
Neural Information Processing Systems (NIPS), 1999.
[6] C.-C. Chang and C.-J. Lin. LIBSVM: a library for support vector machines, 2001. Software available at
http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[7] D. Achlioptas, F. McSherry, and B. Schölkopf. Sampling techniques for kernel methods. In Advances in
Neural Information Processing Systems (NIPS), 2001.
[8] A. Blum. Random projection, margins, kernels, and feature-selection. LNCS, 3940:52–68, 2006.
[9] A. Frieze, R. Kannan, and S. Vempala. Fast Monte-Carlo algorithms for finding low-rank approximations.
In Foundations of Computer Science (FOCS), pages 378–390, 1998.
[10] P. Drineas and M. W. Mahoney. On the Nyström method for approximating a Gram matrix for improved
kernel-based learning. In COLT, pages 323–337, 2005.
[11] C. Yang, R. Duraiswami, and L. Davis. Efficient kernel machines using the improved fast Gauss transform.
In Advances in Neural Information Processing Systems (NIPS), 2004.
[12] Y. Shen, A. Y. Ng, and M. Seeger. Fast Gaussian process regression using KD-Trees. In Advances in
Neural Information Processing Systems (NIPS), 2005.
[13] P. Indyk and N. Thaper. Fast image retrieval via embeddings. In International Workshop on Statistical
and Computational Theories of Vision, 2003.
[14] I. W. Tsang, J. T. Kwok, and P.-M. Cheung. Core Vector Machines: Fast SVM training on very large data
sets. Journal of Machine Learning Research (JMLR), 6:363–392, 2005.
[15] W. Rudin. Fourier Analysis on Groups. Wiley Classics Library. Wiley-Interscience, New York, reprint
edition, 1994.
[16] G. Loosli and S. Canu. Comments on the "Core Vector Machines: Fast SVM training on very large data
sets". Journal of Machine Learning Research (JMLR), 8:291–301, February 2007.
[17] F. Cucker and S. Smale. On the mathematical foundations of learning. Bull. Amer. Math. Soc., 39:1–49, 2001.
A Proofs
Lemma 1. Suppose a function k(Δ) : R → R is twice differentiable and has the form
k(Δ) = ∫₀^∞ p(δ) max(0, 1 − Δ/δ) dδ. Then p(δ) = δ k̈(δ).
Proof. We want p so that

    k(Δ) = ∫₀^∞ p(δ) max(0, 1 − Δ/δ) dδ                                        (3)
         = ∫₀^Δ p(δ) · 0 dδ + ∫_Δ^∞ p(δ)(1 − Δ/δ) dδ
         = ∫_Δ^∞ p(δ) dδ − Δ ∫_Δ^∞ p(δ)/δ dδ.                                  (4)

To solve for p, differentiate twice w.r.t. Δ to find that k̇(Δ) = −∫_Δ^∞ p(δ)/δ dδ and
k̈(Δ) = p(Δ)/Δ.
Proof of Claim 1. Define s(x, y) ≡ z(x)'z(y) and f(x, y) ≡ s(x, y) − k(y, x). Since f and s
are shift invariant, as their argument we use Δ ≡ x − y ∈ M_Δ for notational simplicity.
M_Δ is compact and has diameter at most twice diam(M), so we can find an ε-net that covers M_Δ
using at most T = (4 diam(M)/r)^d balls of radius r [17]. Let {Δ_i}, i = 1 . . . T, denote the centers
of these balls, and let L_f denote the Lipschitz constant of f. We have |f(Δ)| < ε for all Δ ∈ M_Δ if
|f(Δ_i)| < ε/2 and L_f < ε/(2r) for all i. We bound the probability of these two events.
Since f is differentiable, L_f = ‖∇f(Δ*)‖, where Δ* = arg max_{Δ∈M_Δ} ‖∇f(Δ)‖. We have
E[L_f²] = E‖∇f(Δ*)‖² = E‖∇s(Δ*)‖² − E‖∇k(Δ*)‖² ≤ E‖∇s(Δ*)‖² ≤ E_p‖ω‖² = σ_p², so
by Markov's inequality, Pr[L_f² ≥ t] ≤ E[L_f²]/t, or

    Pr[ L_f ≥ ε/(2r) ] ≤ (2rσ_p/ε)².                                           (5)

The union bound followed by Hoeffding's inequality applied to the anchors in the ε-net gives

    Pr[ ∪_{i=1}^T |f(Δ_i)| ≥ ε/2 ] ≤ 2T exp(−Dε²/8).                           (6)

Combining (5) and (6) gives a bound in terms of the free variable r:

    Pr[ sup_{Δ∈M_Δ} |f(Δ)| ≤ ε ] ≥ 1 − 2 (4 diam(M)/r)^d exp(−Dε²/8) − (2rσ_p/ε)².   (7)

This has the form 1 − κ₁r^{−d} − κ₂r². Setting r = (κ₁/κ₂)^{1/(d+2)} turns this into
1 − 2κ₂^{d/(d+2)} κ₁^{2/(d+2)}, and assuming that σ_p diam(M) ≥ 1 and diam(M) ≥ 1, proves the
first part of the claim. To prove the second part of the claim, pick any probability for the RHS and
solve for D.
Proof of Claim 2. M can be covered by rectangles over each of which z is constant. Let δ_p^m be the
pitch of the pth grid along the mth dimension. Each grid has at most ⌈diam(M)/δ_p^m⌉ bins, and P
overlapping grids produce at most N_m = ∑_{g=1}^P ⌈diam(M)/δ_g^m⌉ ≤ P + diam(M) ∑_{p=1}^P 1/δ_p^m
partitions along the mth dimension. The expected value of the right hand side is P + P diam(M)α.
By Markov's inequality and the union bound, Pr[ ∩_{m=1}^d N_m ≤ t(P + P diam(M)α) ] ≥ 1 − d/t.
That is, with probability 1 − d/t, along every dimension, we have at most t(P + P diam(M)α)
one-dimensional cells. Denote by d_{mi} the width of the ith cell along the mth dimension and observe
that ∑_{i=1}^{N_m} d_{mi} ≤ diam(M). We further subdivide these cells into smaller rectangles of some
small width r to ensure that the kernel k varies very little over each of these cells. This results in at
most ∑_{i=1}^{N_m} ⌈d_{mi}/r⌉ ≤ N_m + diam(M)/r small one-dimensional cells over each dimension.
Plugging in the upper bound for N_m, setting t ≥ 1/(αP), and assuming α diam(M) ≥ 1, with
probability 1 − d/t, M can be covered with T ≤ (3tPα diam(M)/r)^d rectangles of side r centered
at {x_i}, i = 1 . . . T.
The condition |z(x, y) − k(x, y)| ≤ ε on M × M holds if |z(x_i, y_j) − k(x_i, y_j)| ≤ ε − L_k rd
and z(x) is constant throughout each rectangle. With rd = ε/(2L_k), the union bound followed by
Hoeffding's inequality gives

    Pr[ ∪_{ij} |z(x_i, y_j) − k(x_i, y_j)| ≥ ε/2 ] ≤ 2T² exp(−Pε²/8).          (8)

Combining this with the probability that z(x) is constant in each cell gives a bound in terms of t:

    Pr[ sup_{x,y∈M×M} |z(x, y) − k(x, y)| ≤ ε ] ≥ 1 − d/t − 2(3tPα diam(M) · 2L_k/ε)^d exp(−Pε²/8).

This has the form 1 − κ₁t^{−1} − κ₂t^d. To prove the claim, set t = (κ₁/(2κ₂))^{1/(d+1)}, which
results in an upper bound of 1 − 3κ₁^{d/(d+1)} κ₂^{1/(d+1)}.
normalized:1 true:1 unbiased:4 regularization:1 assigned:2 sinusoid:2 iteratively:1 deal:1 attractive:1 sin:3 width:2 davis:1 ridge:4 demonstrate:2 workhorse:1 l1:2 percent:1 image:1 harmonic:1 sped:1 empirically:2 perturbing:1 qp:1 exponentially:3 million:1 extend:1 ddiam:2 approximates:2 significant:1 shuffling:1 rd:15 grid:17 pm:5 canu:1 access:2 operating:1 yk2:1 curvature:1 multivariate:3 showed:1 scenario:1 inequality:6 binary:7 arbitrarily:1 yi:2 caltech:2 bochner:2 semi:1 sameer:1 reduces:1 rahimi:2 smooth:3 faster:2 lin:1 retrieval:1 plugging:1 laplacian:4 pitch:5 involving:1 regression:13 whitened:1 vision:1 expectation:1 kernel:67 agarwal:1 achieved:1 cell:8 addition:1 want:1 interval:1 sch:1 operate:1 unlike:1 eliminates:1 ascent:1 comment:1 kwk22:1 integer:2 leverage:1 svmlight:2 yk22:1 noting:1 enough:1 yang:1 embeddings:1 variety:2 xj:6 brecht:1 inner:12 reduce:1 shift:11 rms:1 hessian:1 york:1 repeatedly:1 covered:2 transforms:1 amount:1 svms:1 diameter:3 generate:1 http:1 outperform:1 supplied:1 shifted:3 dims:5 estimated:1 ist:2 key:1 group:1 blum:1 drawn:8 prevent:1 libsvm:4 rectangle:4 convert:1 compete:1 everywhere:1 powerful:1 throughout:2 draw:2 decision:1 appendix:3 scaling:1 bit:2 bound:9 followed:2 binned:4 throwing:1 software:1 encodes:1 fourier:16 speed:3 integrand:1 extremely:1 min:11 argument:1 separable:2 vempala:1 relatively:1 combination:2 ball:2 kd:3 smaller:2 describes:1 across:1 separability:1 tw:1 invariant:8 pr:10 census:1 ln:1 turn:2 cjlin:1 singer:1 end:1 available:1 operation:5 ferris:1 apply:3 kwok:1 observe:1 away:1 alternative:1 subdivide:1 hat:3 rz:1 original:1 denotes:2 multipole:1 ensure:3 include:1 giving:1 k1:2 emmanuel:1 overflow:2 approximating:3 classical:1 february:1 prof:1 mazzoni:1 degrades:1 gradient:1 dp:1 wrap:1 distance:1 thank:1 w0:2 cauchy:1 kannan:1 assuming:2 erik:1 retained:1 z3:2 providing:1 unfortunately:1 favorably:2 smale:1 trace:1 negative:1 implementation:1 proper:1 perform:2 upper:2 observation:2 datasets:7 markov:2 incorrectly:1 defining:2 extended:1 excluding:1 pair:7 specified:1 z1:3 oversampled:1 optimized:1 learned:1 nip:4 adult:1 able:1 xm:4 asvm:1 including:3 max:5 event:1 rely:2 hybrid:1 indicator:3 hr:2 scheme:1 improve:2 library:2 axis:1 lk:4 reprint:1 speeding:1 literature:2 acknowledgement:1 discovery:1 multiplication:1 proportional:2 proven:1 srebro:1 foundation:2 sufficient:1 row:1 free:1 side:2 neighbor:1 fall:5 sparse:2 tolerance:1 benefit:1 boundary:1 dimension:10 gram:2 evaluating:1 author:2 replicated:1 regressors:1 pth:1 approximate:10 compact:6 kkt:1 anchor:1 xi:12 shwartz:1 continuous:1 quantifies:1 table:4 ca:1 inherently:1 forest:3 excellent:1 rh:1 edition:2 intel:2 cvm:4 wiley:2 sub:1 concatenating:3 jmlr:2 theorem:3 r2:3 list:2 svm:9 decay:1 normalizing:1 workshop:1 ci:1 lifting:3 magnitude:1 illustrates:1 sparseness:1 kx:1 margin:1 suited:1 locality:1 garcia:1 simply:3 explore:1 scalar:2 doubling:1 chang:1 corresponds:1 truth:1 satisfies:1 relies:1 acm:1 kz0:1 diam:19 identity:1 cheung:1 lipschitz:2 change:1 uniformly:5 hyperplane:1 lemma:2 pas:1 gauss:1 shiftinvariant:1 support:7 svmtorch:1 evaluate:2 |
2,406 | 3,183 | Efficient Inference for Distributions on Permutations
Jonathan Huang
Carnegie Mellon University
[email protected]
Carlos Guestrin
Carnegie Mellon University
[email protected]
Leonidas Guibas
Stanford University
[email protected]
Abstract
Permutations are ubiquitous in many real world problems, such as voting,
rankings and data association. Representing uncertainty over permutations is
challenging, since there are n! possibilities, and typical compact representations
such as graphical models cannot efficiently capture the mutual exclusivity constraints associated with permutations. In this paper, we use the "low-frequency"
terms of a Fourier decomposition to represent such distributions compactly. We
present Kronecker conditioning, a general and efficient approach for maintaining
these distributions directly in the Fourier domain. Low order Fourier-based
approximations can lead to functions that do not correspond to valid distributions.
To address this problem, we present an efficient quadratic program defined
directly in the Fourier domain to project the approximation onto a relaxed form
of the marginal polytope. We demonstrate the effectiveness of our approach on a
real camera-based multi-people tracking setting.
1 Introduction
Permutations arise naturally in a variety of real situations such as card games, data association
problems, ranking analysis, etc. As an example, consider a sensor network that tracks the positions
of n people, but can only gather identity information when they walk near certain sensors. Such
mixed-modality sensor networks are an attractive alternative to exclusively using sensors which can
measure identity because they are potentially cheaper, easier to deploy, and less intrusive. See [1]
for a real deployment. A typical tracking system maintains tracks of n people and the identity of
the person corresponding to each track. What makes the problem difficult is that identities can be
confused when tracks cross in what we call mixing events. Maintaining accurate track-to-identity
assignments in the face of these ambiguities based on identity measurements is known as the
Identity Management Problem [2], and is known to be NP-hard. Permutations pose a challenge for
probabilistic inference, because distributions on the group of permutations on n elements require
storing at least n! − 1 numbers, which quickly becomes infeasible as n increases. Furthermore,
typical compact representations, such as graphical models, cannot capture the mutual exclusivity
constraints associated with permutations.
Diaconis [3] proposes maintaining a small subset of Fourier coefficients of the actual distribution, allowing for a principled tradeoff between accuracy and complexity. Schumitsch et al. [4] use similar
ideas to maintain a particular subset of Fourier coefficients of the log probability distribution. Kondor et al. [5] allow for general sets of coefficients, but assume a restrictive form of the observation
model in order to exploit an efficient FFT factorization. The main contributions of this paper are:
• A new, simple and general algorithm, Kronecker Conditioning, which performs all probabilistic
  inference operations completely in the Fourier domain. Our approach is general, in the sense that
  it can address any transition model or likelihood function that can be represented in the Fourier
  domain, such as those used in previous work, and can represent the probability distribution with
  any desired set of Fourier coefficients.
• We show that approximate conditioning can sometimes yield Fourier coefficients which do not
  correspond to any valid distribution, and present a method for projecting the result back onto a
  relaxation of the marginal polytope.
• We demonstrate the effectiveness of our approach on a real camera-based multi-people tracking
  setting.
2 Filtering over permutations
In identity management, a permutation σ represents a joint assignment of identities to internal tracks,
with σ(i) being the track belonging to the ith identity. When people walk too closely together, their
identities can be confused, leading to uncertainty over σ. To model this uncertainty, we use a Hidden
Markov Model on permutations, which is a joint distribution over P(σ^(1), . . . , σ^(T), z^(1), . . . , z^(T))
which factors as:

    P(σ^(1), . . . , σ^(T), z^(1), . . . , z^(T)) = P(z^(1) | σ^(1)) ∏_t P(z^(t) | σ^(t)) · P(σ^(t) | σ^(t−1)),

where the σ^(t) are latent permutations and the z^(t) denote observed variables. The conditional
probability distribution P(σ^(t) | σ^(t−1)) is called the transition model, and might reflect, for example,
that the identities belonging to two tracks were swapped with some probability. The distribution
P(z^(t) | σ^(t)) is called the observation model, which might capture a distribution over the color of
clothing for each individual.
We focus on filtering, in which one queries the HMM for the posterior at some timestep, conditioned
on all past observations. Given the distribution P(σ^(t) | z^(1), . . . , z^(t)), we recursively compute
P(σ^(t+1) | z^(1), . . . , z^(t+1)) in two steps: a prediction/rollup step and a conditioning step. The
first updates the distribution by multiplying by the transition model and marginalizing out the
previous timestep:

    P(σ^(t+1) | z^(1), . . . , z^(t)) = ∑_{σ^(t)} P(σ^(t+1) | σ^(t)) P(σ^(t) | z^(1), . . . , z^(t)).

The second conditions the distribution on an observation z^(t+1) using Bayes rule:

    P(σ^(t+1) | z^(1), . . . , z^(t+1)) ∝ P(z^(t+1) | σ^(t+1)) P(σ^(t+1) | z^(1), . . . , z^(t)).

Since there are n! permutations, a single update requires O((n!)²) flops and is consequently
intractable for all but very small n. The approach that we advocate is to maintain a compact
approximation to the true distribution based on the Fourier transform. As we discuss later, the
Fourier based approximation is equivalent to maintaining a set of low-order marginals, rather than
the full joint, which we regard as being analogous to an Assumed Density Filter [6].
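For very small n, the two filtering steps can be run exactly by enumerating S_n. The brute-force
sketch below is our own illustration (the 90%-reliable observation model is an invented example);
distributions are stored as dictionaries over permutation tuples:

from itertools import permutations

def predict_rollup(P, Q):
    """Mix P (dist over sigma) with Q (dist over pi); returns dist of pi o sigma."""
    out = {}
    for sigma, p in P.items():
        for pi, q in Q.items():
            tau = tuple(pi[i] for i in sigma)   # (pi o sigma)(j) = pi(sigma(j))
            out[tau] = out.get(tau, 0.0) + p * q
    return out

def condition(P, likelihood):
    """Bayes rule: posterior(sigma) is proportional to likelihood(sigma) * P(sigma)."""
    post = {s: likelihood(s) * p for s, p in P.items()}
    Z = sum(post.values())
    return {s: p / Z for s, p in post.items()}

n = 3
prior = {s: 1.0 / 6 for s in permutations(range(n))}
# invented observation: "identity 0 is on track 0" with 90% reliability
post = condition(prior, lambda s: 0.9 if s[0] == 0 else 0.05)
print(max(post, key=post.get), post)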
3 Fourier projections of functions on the Symmetric Group
Over the last 50 years, the Fourier Transform has been ubiquitously applied to everything digital,
particularly with the invention of the Fast Fourier Transform. On the real line, the Fourier Transform
is a well-studied method for decomposing a function into a sum of sine and cosine terms over
a spectrum of frequencies. Perhaps less familiar is its group theoretic generalization, which we
review in this section with an eye towards approximating functions on the group of permutations, the
Symmetric Group. For permutations on n objects, the Symmetric Group will be abbreviated by S_n.
The formal definition of the Fourier Transform relies on the theory of group representations, which
we briefly discuss first. Our goal in this section is to motivate the idea that the Fourier transform of
a distribution P is related to certain marginals of P . For references on this subject, see [3].
Definition 1. A representation of a group G is a map ρ from G to a set of invertible d_ρ × d_ρ
matrix operators which preserves algebraic structure in the sense that for all σ₁, σ₂ ∈ G,
ρ(σ₁σ₂) = ρ(σ₁) · ρ(σ₂). The matrices which lie in the image of this map are called the
representation matrices, and we will refer to d_ρ as the degree of the representation.
Representations play the role of basis functions, similar to that of sinusoids, in Fourier theory. The
simplest basis functions are constant functions, and our first example of a representation is the trivial
representation ρ₀ : G → R which maps every element of G to 1. As a more pertinent example, we
define the 1st order permutation representation of S_n to be the degree n representation, τ₁, which
maps a permutation σ to its corresponding permutation matrix given by: [τ₁(σ)]_ij = 1{σ(j) = i}.
For example, the permutation in S₃ which swaps the second and third elements maps to:

    τ₁(1 ↦ 1, 2 ↦ 3, 3 ↦ 2) = ( 1 0 0
                                 0 0 1
                                 0 1 0 ).
The τ₁ representation can be thought of as a collection of n² functions at once, one for each matrix
entry, [τ₁(σ)]_ij. There are other possible permutation representations: for example, the 2nd order
unordered permutation representation, τ₂, is defined by the action of a permutation on unordered
pairs of objects, ([τ₂(σ)]_{{i,j},{ℓ,k}} = 1{σ({ℓ,k}) = {i,j}}), and is a degree n(n−1)/2 representation.
And the list goes on to include many more complicated representations.
It is useful to think of two representations as being the same if the representation matrices are equal
up to some consistent change of basis. This idea is formalized by declaring two representations ρ
and τ to be equivalent if there exists an invertible matrix C such that C⁻¹ · ρ(σ) · C = τ(σ) for all
σ ∈ G. We write this as ρ ≡ τ.
Most representations can be seen as having been built up from smaller representations. We say that
a representation ρ is reducible if there exist smaller representations ρ₁, ρ₂ such that ρ ≡ ρ₁ ⊕ ρ₂,
where ⊕ is defined to be the direct sum representation:

    ρ₁ ⊕ ρ₂(g) ≜ ( ρ₁(g)    0
                     0    ρ₂(g) ).                                             (1)
In general, there are infinitely many inequivalent representations. However, for any finite group,
there is always a finite collection of atomic representations which can be used to build up any
other representation using direct sums. These representations are referred to as the irreducibles
of a group, and they are simply the collection of representations which are not reducible. We will
refer to the set of irreducibles by R. It can be shown that any representation of a finite group G
is equivalent to a direct sum of irreducibles [3], and hence, for any representation τ, there exists a
matrix C for which C⁻¹ · τ · C = ⊕_{ρᵢ∈R} ⊕ ρᵢ, where the inner ⊕ refers to some finite number
of copies of the irreducible ρᵢ.
Describing the irreducibles of S_n up to equivalence is a subject unto itself; we will simply say
that there is a natural way to order the irreducibles of S_n that corresponds to "simplicity" in the
same way that low frequency sinusoids are simpler than higher frequency ones. We will refer to the
irreducibles in this order as ρ₀, ρ₁, . . . . For example, the first two irreducibles form the first order
permutation representation (τ₁ ≡ ρ₀ ⊕ ρ₁), and the second order permutation representation can be
formed by the first 3 irreducibles.
Irreducible representation matrices are not always orthogonal, but they can always be chosen to be
so (up to equivalence). For notational convenience, the irreducible representations in this paper will
always be assumed to be orthogonal.
3.1 The Fourier transform
On the real line, the Fourier Transform corresponds to computing inner products of a function with
sines and cosines at varying frequencies. The analogous definition for finite groups replaces the
sinusoids by group representations.
Definition 2. Let f : G → R be any function on a group G and let ρ be any representation on G.
The Fourier Transform of f at the representation ρ is defined to be: f̂_ρ = ∑_σ f(σ)ρ(σ).
There are two important points which distinguish this Fourier Transform from the familiar version
on the real line: it is matrix-valued, and instead of real numbers, the inputs to f̂ are representations
of G. The collection of Fourier Transforms of f at all irreducibles forms the Fourier Transform of f.
As in the familiar case, there is an inverse transform given by:

    f(σ) = (1/|G|) ∑_k d_{ρ_k} Tr[ f̂_{ρ_k}ᵀ · ρ_k(σ) ],                       (2)

where k indexes over the collection of irreducibles of G.
We provide two examples for intuition. For functions on the real line, the Fourier Transform at
zero gives the DC component of a signal. This is also true for functions on a group; if f : G → R
is any function, then the Fourier Transform of f at the trivial representation is constant, with
f̂_{ρ₀} = ∑_σ f(σ). Thus, for any probability distribution P, we have P̂_{ρ₀} = 1. If P were the
uniform distribution, then P̂_ρ = 0 at all irreducibles except at the trivial representation.
The Fourier Transform at τ₁ also has a simple interpretation:

    [f̂_{τ₁}]_ij = ∑_{σ∈S_n} f(σ)[τ₁(σ)]_ij = ∑_{σ∈S_n} f(σ)1{σ(j) = i} = ∑_{σ:σ(j)=i} f(σ).

Thus, if P is a distribution, then P̂_{τ₁} is a matrix of marginal probabilities, where the ij-th element
is the marginal probability that a random permutation drawn from P maps element j to i. Similarly,
the Fourier transform of P at the second order permutation representation is a matrix of marginal
probabilities of the form P(σ({i, j}) = {k, ℓ}).
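To make the connection to marginals explicit, the short sketch below (our own illustration) computes
the Fourier transform of a distribution at the first-order permutation representation and recovers the
matrix of marginals P(σ(j) = i):

import numpy as np
from itertools import permutations

def perm_matrix(s):
    n = len(s)
    M = np.zeros((n, n))
    M[list(s), range(n)] = 1.0        # [tau_1(s)]_{ij} = 1{s(j) = i}
    return M

def fourier_tau1(P):
    """Fourier transform at tau_1: the matrix of marginals P(sigma(j) = i)."""
    return sum(p * perm_matrix(s) for s, p in P.items())

n = 3
uniform = {s: 1.0 / 6 for s in permutations(range(n))}
print(fourier_tau1(uniform))          # every marginal equals 1/3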
In Section 5, we will discuss function approximation by bandlimiting the Fourier coefficients, but
this example should illustrate the fact that maintaining Fourier coefficients at low-order irreducibles
is the same as maintaining low-order marginal probabilities, while higher order irreducibles
correspond to more complicated marginals.
4
Inference in the Fourier domain
Bandlimiting allows for compactly storing a distribution over permutations, but the idea is rather
moot if it becomes necessary to transform back to the primal domain each time an inference
operation is called. Naively, the Fourier Transform on Sn scales as O((n!)2 ), and even the fastest
Fast Fourier Transforms for functions on Sn are no faster than O(n! log(n!)) (see [7] for example).
To resolve this issue, we present a formulation of inference which operates solely in the Fourier
domain, allowing us to avoid a costly transform. We begin by discussing exact inference in the
Fourier domain, which is no more tractable than the original problem because there are n! Fourier
coefficients, but it will allow us to discuss the bandlimiting approximation in the next section. There
are two operations to consider: prediction/rollup, and conditioning. The assumption for the rest of
this section is that the Fourier Transforms of the transition and observation models are known. We
discuss methods for obtaining the models in Section 7.
4.1 Fourier prediction/rollup
We will consider one particular type of transition model: that of a random walk over a group.
This model assumes that σ^(t+1) is generated from σ^(t) by drawing a random permutation π^(t)
from some distribution Q^(t) and setting σ^(t+1) = π^(t)σ^(t). In our identity management example,
π^(t) represents a random identity permutation that might occur among tracks when they get close
to each other (a mixing event), but the random walk model appears in other applications such as
modeling card shuffles [3]. The Fourier domain prediction/rollup step is easily formulated using
the convolution theorem (see also [3]):
Proposition 3. Let Q and P be probability distributions on S_n. Define the convolution of Q and P
to be the function [Q ∗ P](σ₁) = ∑_{σ₂} Q(σ₁σ₂⁻¹)P(σ₂). Then for any representation ρ,
(Q ∗ P)ˆ_ρ = Q̂_ρ · P̂_ρ, where the operation on the right side is matrix multiplication.
The prediction/rollup step for the random walk transition model can be written as a convolution:

    P(σ^(t+1)) = ∑_{(π^(t),σ^(t)) : σ^(t+1)=π^(t)σ^(t)} Q^(t)(π^(t)) P(σ^(t))
               = ∑_{σ^(t)} Q^(t)(σ^(t+1)(σ^(t))⁻¹) P(σ^(t)) = [Q^(t) ∗ P](σ^(t+1)).

Then, assuming that P̂_ρ^(t) and Q̂_ρ^(t) are given, the prediction/rollup update rule is simply:

    P̂_ρ^(t+1) ← Q̂_ρ^(t) · P̂_ρ^(t).

Note that the update requires only knowledge of P̂ and does not require P. Furthermore, the update
is pointwise in the Fourier domain in the sense that the coefficients at the representation ρ affect
P̂_ρ^(t+1) only at ρ.
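This update can be checked numerically at τ₁, reusing the predict_rollup and fourier_tau1 helper
sketches above (the random test distributions are our own inputs): the transform of the convolution
equals the matrix product of the transforms.

import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)
perms = list(permutations(range(3)))
P = dict(zip(perms, rng.dirichlet(np.ones(len(perms)))))
Q = dict(zip(perms, rng.dirichlet(np.ones(len(perms)))))

lhs = fourier_tau1(predict_rollup(P, Q))     # transform of Q * P
rhs = fourier_tau1(Q) @ fourier_tau1(P)      # product of the transforms
print(np.abs(lhs - rhs).max())               # agrees to machine precision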
4.2 Fourier conditioning
An application of Bayes rule to find a posterior distribution P(σ|z) after observing some evidence z
requires a pointwise product of the likelihood L(z|σ) and the prior P(σ), followed by a normalization
step. We showed earlier that the normalization constant ∑_σ L(z|σ) · P(σ) is given by the Fourier
transform of L^(t)P^(t) at the trivial representation ρ₀, and therefore the normalization step of
conditioning can be implemented by simply dividing each Fourier coefficient by the scalar
[L^(t)P^(t)]ˆ_{ρ₀}.
The pointwise product of two functions f and g, however, is trickier to formulate in the Fourier
domain. For functions on the real line, the pointwise product of functions can be implemented
by convolving the Fourier coefficients of f̂ and ĝ, and so a natural question is: can we apply a
similar operation for functions over other groups? Our answer to this is that there is an analogous
(but more complicated) notion of convolution in the Fourier domain of a general finite group. We
present a convolution-based conditioning algorithm which we call Kronecker Conditioning, which,
in contrast to the pointwise nature of the Fourier domain prediction/rollup step, and much like
convolution, smears the information at an irreducible ρ_k to other irreducibles.
Fourier transforming the pointwise product. Our approach to Fourier transforming the pointwise
product in terms of f̂ and ĝ is to manipulate the function f(σ)g(σ) so that it can be seen as the
result of an inverse Fourier Transform. Hence, the goal will be to find matrices A_k (as a function of
f̂, ĝ) such that for any σ ∈ G,

    f(σ) · g(σ) = (1/|G|) ∑_k d_{ρ_k} Tr[ A_kᵀ · ρ_k(σ) ],                     (3)

where A_k = [f̂g]_{ρ_k}. For any σ ∈ G we can write the pointwise product in terms of f̂ and ĝ using
the inverse Fourier Transform (Equation 2):

    f(σ) · g(σ) = [ (1/|G|) ∑_i d_{ρ_i} Tr( f̂_{ρ_i}ᵀ · ρ_i(σ) ) ] · [ (1/|G|) ∑_j d_{ρ_j} Tr( ĝ_{ρ_j}ᵀ · ρ_j(σ) ) ]
                = (1/|G|)² ∑_{i,j} d_{ρ_i} d_{ρ_j} Tr( f̂_{ρ_i}ᵀ · ρ_i(σ) ) · Tr( ĝ_{ρ_j}ᵀ · ρ_j(σ) ).   (4)
Now we want to manipulate this product of traces in the last line to be just one trace (as in
Equation 3), by appealing to some properties of the matrix Kronecker product. The connection
to the pointwise product (first observed in [8]) lies in the property that for any matrices U, V,
Tr(U ⊗ V) = (Tr U) · (Tr V). Applying this to Equation 4, we have:

    Tr( f̂_{ρ_i}ᵀ · ρ_i(σ) ) · Tr( ĝ_{ρ_j}ᵀ · ρ_j(σ) )
        = Tr[ ( f̂_{ρ_i}ᵀ · ρ_i(σ) ) ⊗ ( ĝ_{ρ_j}ᵀ · ρ_j(σ) ) ]
        = Tr[ ( f̂_{ρ_i} ⊗ ĝ_{ρ_j} )ᵀ · ( ρ_i(σ) ⊗ ρ_j(σ) ) ],                 (5)

where the last line follows by standard matrix properties.
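The trace identity driving this step is easy to confirm numerically (our own two-line check, with
arbitrary random matrices):

import numpy as np

rng = np.random.default_rng(0)
U, V = rng.standard_normal((3, 3)), rng.standard_normal((4, 4))
print(np.trace(np.kron(U, V)), np.trace(U) * np.trace(V))   # identical values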
The term on the right, ρ_i(σ) ⊗ ρ_j(σ), itself happens to be a representation, called the Kronecker
product representation. In general, the Kronecker product representation is reducible, and so it can
be decomposed into a direct sum of irreducibles. This means that if ρ_i and ρ_j are any two
irreducibles of G, there exists a similarity transform C_ij such that for any σ ∈ G,

    C_ij⁻¹ · [ρ_i ⊗ ρ_j](σ) · C_ij = ⊕_k ⊕_{ℓ=1}^{z_ijk} ρ_k(σ).

The ⊕ symbols here refer to a matrix direct sum as in Equation 1, k indexes over all irreducible
representations of S_n, while ℓ indexes over a number of copies of ρ_k which appear in the
decomposition. We index blocks on the right side of this equation by pairs of indices (k, ℓ). The
number of copies of each ρ_k is denoted by the integer z_ijk, the collection of which, taken over
all triples (i, j, k), are commonly referred to as the Clebsch-Gordan series. Note that we allow
the z_ijk to be zero, in which case ρ_k does not contribute to the direct sum. The matrices C_ij are
known as the Clebsch-Gordan coefficients. The Kronecker product decomposition problem is
that of finding the irreducible components of the Kronecker product representation, and thus to
find the Clebsch-Gordan series/coefficients for each pair of representations (ρ_i, ρ_j). Decomposing
the Kronecker product inside Equation 5 using the Clebsch-Gordan series/coefficients yields the
desired Fourier Transform, which we summarize here:
desired Fourier Transform, which we summarize here:
Proposition 4. Let f?, g? be the Fourier Transforms of functions f and g respectively,
and for each
?1
?
ordered pair of irreducibles (?i , ?j ), define the matrix: Aij , C ? f? ? g?? ? Cij . Then the
ij
i
j
Fourier tranform of the pointwise product f g is:
h i
fcg
?k
zijk
X k?
1 X
Aij ,
d?i d?j
=
d?k |G| ij
(6)
?=1
z
ijk
where Ak?
?k .
ij is the block of Aij corresponding to the (k, ?) block in ?k ??
See the Appendix for a full proof of Proposition 4. The Clebsch-Gordan series, z_ijk, plays an
important role in Equation 6, which says that the (ρ_i, ρ_j) cross-term contributes to the pointwise
product at ρ_k only when z_ijk > 0. For example,

    ρ₁ ⊗ ρ₁ ≡ ρ₀ ⊕ ρ₁ ⊕ ρ₂ ⊕ ρ₃.                                               (7)

So z_{1,1,k} = 1 for k ≤ 3 and is zero otherwise.
Unfortunately, there are no analytical formulas for finding the Clebsch-Gordan series or coefficients,
and in practice, these computations can take a long time. We emphasize, however, that as fundamental
quantities, like the digits of π, they need only be computed once and stored in a table for future
reference. Due to space limitations, we will not provide complete details on computing these numbers.
We refer the reader to Murnaghan [9], who provides general formulas for computing Clebsch-Gordan
series for pairs of low-order irreducibles, and to Appendix 1 for details about computing
Clebsch-Gordan coefficients. We will also make precomputed coefficients available on the web.
Approximate inference by bandlimiting
We approximate the probability distribution P (?) by fixing a bandlimit B and maintaining the
Fourier transform of P only at irreducibles ?0 , . . . ?B . We refer to this set of irreducibles as B. As on
the real line, smooth functions are generally well approximated by only a few Fourier coefficients,
while ?wigglier? functions require more. For example, when B = 3, B is the set ?0 , ?1 , ?2 , and
?3 , which corresponds to maintaining marginal probabilities of the form P (?((i, j)) = (k, ?)).
During inference, we follow the procedure outlined in the previous section but ignore the higher
order terms which are not maintained. Pseudocode for bandlimited prediction/rollup and Kronecker
conditioning is given in Figures 1 and 2.
Since the Prediction/Rollup step is pointwise in the Fourier domain, the update is exact for the
maintained irreducibles because higher order irreducibles cannot affect those below the bandlimit.
As in [5], we find that the error from bandlimiting creeps in through the conditioning step. For
example, Equation 7 shows that if B = 1 (so that we maintain first-order marginals), then the
pointwise product spreads information to second-order marginals. Conversely, pairs of higher-order
irreducibles may propagate information to lower-order irreducibles. If a distribution is diffuse, then
most of the energy is stored in low-order Fourier coefficients anyway, and so this is not a big problem. However, it is when the distribution is sharply concentrated at a small subset of permutations,
that the low-order Fourier projection is unable to faithfully approximate the distribution, in many
circumstances, resulting in a bandlimited Fourier Transform with negative ?marginal probabilities?!
To combat this problem, we present a method for enforcing nonnnegativity.
Projecting to a relaxed marginal polytope The marginal polytope, M, is the set of marginals
which are consistent with some joint distribution over permutations. We project our approximation
onto a relaxation of the marginal polytope, M? , defined by linear inequality constraints that
marginals be nonnegative, and linear equality constraints that they correspond to some legal Fourier
transform. Intuitively, our relaxation produces matrices of marginals which are doubly stochastic
(rows and columns sum to one and all entries are nonnegative), and satisfy lower-order marginal
consistency (different high-order marginals are consistent at lower orders).
After each conditioning step, we apply a ?correction? to the approximate posterior P (t) by finding
the bandlimited function in M? which is closest to P (t) in an L2 sense. To perform the projection,
we employ the Plancherel Theorem [3] which relates the L2 distance between functions on Sn to a
distance metric in the Fourier domain.
Proposition 5.

    ∑_σ (f(σ) − g(σ))² = (1/|G|) ∑_k d_{ρ_k} Tr[ ( f̂_{ρ_k} − ĝ_{ρ_k} )ᵀ · ( f̂_{ρ_k} − ĝ_{ρ_k} ) ].   (8)

We formulate the optimization as a quadratic program where the objective is to minimize the right
side of Equation 8 (the sum is taken only over the set of maintained irreducibles, B), subject
to the linear constraints which define M'.
We remark that even though the projection will always produce a Fourier transform corresponding
to nonnegative marginals, there might not necessarily exist a joint probability distribution on Sn
consistent with those marginals. In the case of first-order marginals, however, the existence of
a consistent joint distribution is guaranteed by the Birkhoff-von Neumann theorem [10], which
states that a matrix is doubly stochastic if and only if it can be written as a convex combination of
permutation matrices. And so for the case of first-order marginals, our relaxation is, in fact, exact.
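For first-order marginals, the quadratic program therefore reduces to an L2 projection onto doubly
stochastic matrices. The sketch below is our own illustration of one way to compute it, via Dykstra's
alternating projections between the affine row/column-sum constraints and the nonnegative orthant;
the paper itself solves a quadratic program directly in the Fourier domain, and the iteration count
here is an arbitrary choice.

import numpy as np

def proj_affine(M):
    """Closed-form L2 projection onto {X : rows and columns each sum to 1}."""
    n = M.shape[0]
    one = np.ones(n)
    r, c, s = M.sum(axis=1), M.sum(axis=0), M.sum()
    return (M + np.outer(one - r, one) / n + np.outer(one, one - c) / n
              + (s - n) / n**2 * np.outer(one, one))

def project_doubly_stochastic(M, iters=200):
    """Dykstra's algorithm: L2 projection onto the doubly stochastic matrices."""
    X, p, q = M.copy(), np.zeros_like(M), np.zeros_like(M)
    for _ in range(iters):
        Y = proj_affine(X + p); p = X + p - Y          # affine constraints
        X = np.maximum(Y + q, 0.0); q = Y + q - X      # nonnegativity
    return X

B = np.array([[0.7, 0.4, -0.1],      # a "marginal matrix" with a negative entry
              [0.2, 0.5,  0.3],
              [0.1, 0.1,  0.8]])
D = project_doubly_stochastic(B)
print(np.round(D, 3), D.sum(axis=0), D.sum(axis=1))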
6 Related Work
The Identity Management problem was first introduced in [2], which maintains a doubly stochastic
first order belief matrix to reason over data associations. Schumitsch et al. [4] exploit a similar
idea, but formulate the problem in log-space.
Figure 1: Pseudocode for the Fourier Prediction/Rollup Algorithm.
PREDICTIONROLLUP
  foreach ρ_k ∈ B do P̂_{ρ_k}^(t+1) ← Q̂_{ρ_k}^(t) · P̂_{ρ_k}^(t);
Figure 2: Pseudocode for the Kronecker Conditioning Algorithm.
KRONECKERCONDITIONING
  foreach ρ_k ∈ B do [L^(t)P^(t)]ˆ_{ρ_k} ← 0   // Initialize posterior
  // Pointwise product
  foreach ρ_i ∈ B do
    foreach ρ_j ∈ B do
      z ← CGseries(ρ_i, ρ_j);
      C_ij ← CGcoefficients(ρ_i, ρ_j);  A_ij ← C_ijᵀ · ( f̂_{ρ_i} ⊗ ĝ_{ρ_j} ) · C_ij;
      for ρ_k ∈ B such that z_ijk ≠ 0 do
        for ℓ = 1 to z_ijk do
          [L^(t)P^(t)]ˆ_{ρ_k} ← [L^(t)P^(t)]ˆ_{ρ_k} + (d_{ρ_i} d_{ρ_j} / (d_{ρ_k} n!)) · A_ij^{kℓ}
          // A_ij^{kℓ} is the (k, ℓ) block of A_ij
  Z ← [L^(t)P^(t)]ˆ_{ρ₀};
  // Normalization
  foreach ρ_k ∈ B do [L^(t)P^(t)]ˆ_{ρ_k} ← (1/Z) [L^(t)P^(t)]ˆ_{ρ_k}
Kondor et al. [5] were the first to show that the data association problem could be approximately
handled via the Fourier Transform. For conditioning, they exploit a modified FFT factorization
which works on certain simplified observation models. Our approach generalizes the type of
observations that can be handled in [5] and is equivalent in the simplified model that they present.
We require O(D³n²) time in their setting. Their FFT method saves a factor of D due to the fact that
certain representation matrices can be shown to be sparse. Though we do not prove it, we observe
that the Clebsch-Gordan coefficients, Cij are typically similarly sparse, which yields an equivalent
running time in practice. In addition, Kondor et al. do not address the issue of projecting onto valid
marginals, which, as we show in our experimental results, is fundamental in practice.
Willsky [8] was the first to formulate a nonabelian version of the FFT algorithm (for Metacyclic
groups) as well as to note the connection between pointwise products and Kronecker product
decompositions for general finite groups. In this paper, we address approximate inference, which is
necessary given the n! complexity of inference for the Symmetric group.
7 Experimental results
For small n, we compared our algorithm to exact inference on synthetic datasets in which tracks are
drawn at random to be observed or swapped. For validation we measure the L1 distance between true
and approximate marginal distributions. In (Fig. 3(a)), we call several mixings followed by a single
observation, after which we measured error. As expected, the Fourier approximation is better when
there are either more mixing events, or when more Fourier coefficients are maintained. In (Fig. 3(b))
we allow for consecutive conditioning steps and we see that the projection step is fundamental,
especially when mixing events are rare, reducing the error dramatically. Comparing running times,
it is clear that our algorithm scales gracefully compared to the exact solution (Fig. 3(c)).
We also evaluated our algorithm on data taken from a real network of 8 cameras (Fig. 3(d)). In the
data, there are n = 11 people walking around a room in fairly close proximity. To handle the fact
that people can freely leave and enter the room, we maintain a list of the tracks which are external
to the room. Each time a new track leaves the room, it is added to the list and a mixing event is
called to allow for m² pairwise swaps amongst the m external tracks.
The number of mixing events is approximately the same as the number of observations. For each
observation, the network returns a color histogram of the blob associated with one track. The
task after conditioning on each observation is to predict identities for all tracks inside the room,
and the evaluation metric is the fraction of accurate predictions. We compared against a baseline
approach of predicting the identity of a track based on the most recently observed histogram
at that track. This approach is expected to be accurate when there are many observations and
discriminative appearance models, neither of which our problem afforded. As (Fig. 3(e)) shows,
7
Figure 3: Evaluation on synthetic ((a)-(c)) and real camera network ((d),(e)) data. [Panels: (a) Error of
Kronecker Conditioning, n=8: L1 error at first order marginals vs. number of mixing events, for bandlimits
b=1,2,3; (b) Projection versus No Projection (n=6): L1 error at first order marginals vs. fraction of observation
events, for b=1,2,3 with and without projection and for b=0 (uniform distribution), averaged over 250 timesteps;
(c) Running time of 10 forward algorithm iterations: running time in seconds vs. n, for b=1,2,3 and exact
inference; (d) sample image from the camera network; (e) accuracy for camera data: % tracks correctly
identified for the baseline, b=1 without projection, b=1 with projection, and an omniscient tracker.]
both the baseline and first order model (without projection) fared poorly, while the projection step
dramatically boosted the accuracy. To illustrate the difficulty of predicting based on appearance
alone, the rightmost bar reflects the performance of an omniscient tracker who knows the result of
each mixing event and is therefore left only with the task of distinguishing between appearances.
8 Conclusions
We presented a formulation of hidden Markov model inference in the Fourier domain. In particular,
we developed the Kronecker Conditioning algorithm which performs a convolution-like operation
on Fourier coefficients to find the Fourier transform of the posterior distribution. We argued that
bandlimited conditioning can result in Fourier coefficients which correspond to no distribution, but
that the problem can be remedied by projecting to a relaxation of the marginal polytope. Our evaluation on data from a camera network shows that our methods perform well when compared to
the optimal solution in small problems, or to an omniscient tracker in large problems. Furthermore,
we demonstrated that our projection step is fundamental to obtaining these high-quality results.
We conclude by remarking that the mathematical framework developed in this paper is quite general.
In fact, both the prediction/rollup and conditioning formulations hold over any finite group, providing a principled method for approximate inference for problems with underlying group structure.
Acknowledgments
This work is supported in part by the ONR under MURI N000140710747, the ARO under grant
W911NF-06-1-0275, the NSF under grants DGE-0333420, EEEC-540865, Nets-NOSS 0626151
and TF 0634803, and by the Pennsylvania Infrastructure Technology Alliance (PITA). Carlos
Guestrin was also supported in part by an Alfred P. Sloan Fellowship. We thank Kyle Heath for
helping with the camera data, and Emre Oto and Robert Hough for valuable discussions.
References
[1] Y. Ivanov, A. Sorokin, C. Wren, and I. Kaur. Tracking people in mixed modality systems. Technical
Report TR2007-11, MERL, 2007.
[2] J. Shin, L. Guibas, and F. Zhao. A distributed algorithm for managing multi-target identities in wireless
ad-hoc sensor networks. In IPSN, 2003.
[3] P. Diaconis. Group Representations in Probability and Statistics. IMS Lecture Notes, 1988.
[4] B. Schumitsch, S. Thrun, G. Bradski, and K. Olukotun. The information-form data association filter. In
NIPS, 2006.
[5] R. Kondor, A. Howard, and T. Jebara. Multi-object tracking with representations of the symmetric group.
In AISTATS, 2007.
[6] X. Boyen and D. Koller. Tractable inference for complex stochastic processes. In UAI, 1998.
[7] R. Kondor. Snob: a C++ library for fast Fourier transforms on the symmetric group, 2006. Available at
http://www.cs.columbia.edu/~risi/Snob/.
[8] A. Willsky. On the algebraic structure of certain partially observable finite-state Markov processes.
Information and Control, 38:179–212, 1978.
[9] F. D. Murnaghan. The analysis of the Kronecker product of irreducible representations of the symmetric
group. American Journal of Mathematics, 60(3):761–784, 1938.
[10] J. van Lint and R. M. Wilson. A Course in Combinatorics. Cambridge University Press, 2001.
2,407 | 3,184 | Invariant Common Spatial Patterns: Alleviating
Nonstationarities in Brain-Computer Interfacing
Benjamin Blankertz1,2
Motoaki Kawanabe2
Friederike U. Hohlefeld4
Ryota Tomioka3
Vadim Nikulin5
Klaus-Robert Müller1,2
1
TU Berlin, Dept. of Computer Science, Machine Learning Laboratory, Berlin, Germany
2 Fraunhofer FIRST (IDA), Berlin, Germany
3 Dept. Mathematical Informatics, IST, The University of Tokyo, Japan
4 Berlin School of Mind and Brain, Berlin, Germany
5 Dept. of Neurology, Campus Benjamin Franklin, Charité University Medicine Berlin, Germany
{blanker,krm}@cs.tu-berlin.de
Abstract
Brain-Computer Interfaces can suffer from a large variance of the subject conditions within and across sessions. For example, vigilance fluctuations in the individual, variable task involvement, workload, etc., alter the characteristics of EEG
signals and thus challenge a stable BCI operation. In the present work we aim to
define features based on a variant of the common spatial patterns (CSP) algorithm
that are constructed invariant with respect to such nonstationarities. We enforce
invariance properties by adding terms to the denominator of a Rayleigh coefficient
representation of CSP such as disturbance covariance matrices from fluctuations
in visual processing. In this manner physiological prior knowledge can be used
to shape the classification engine for BCI. As a proof of concept we present a
BCI classifier that is robust to changes in the level of parietal α-activity. In other
words, the EEG decoding still works when there are lapses in vigilance.
1 Introduction
Brain-Computer Interfaces (BCIs) translate the intent of a subject measured from brain signals directly into control commands, e.g. for a computer application or a neuroprosthesis ([1, 2, 3, 4, 5, 6]).
The classical approach to brain-computer interfacing is operant conditioning ([2, 7]) where a fixed
translation algorithm is used to generate a feedback signal from the electroencephalogram (EEG).
Users are not equipped with a mental strategy they should use, rather they are instructed to watch
a feedback signal and using the feedback to find out ways to voluntarily control it. Successful BCI
operation is reinforced by a reward stimulus. In such BCI systems the user adaption is crucial and
typically requires extensive training. Recently machine learning techniques were applied to the BCI
field and allowed to decode the subject?s brain signals, placing the learning task on the machine side,
i.e. a general translation algorithm is trained to infer the specific characteristics of the user?s brain
signals [8, 9, 10, 11, 12, 13, 14]. This is done by a statistical analysis of a calibration measurement
in which the subject performs well-defined mental acts like imagined movements. Here, in principle
no adaption of the user is required, but it is to be expected that users will adapt their behaviour
during feedback operation. The idea of the machine learning approach is that a flexible adaption
of the system relieves a good amount of the learning load from the subject. Most BCI systems are
somewhere between those extremes.
Although the proof-of-concept of machine learning based BCI systems1 was given some years ago,
several major challenges are still to be faced. One of them is to make the system invariant to non
task-related fluctuations of the measured signals during feedback. These fluctuations may be caused
by changes in the subject's brain processes, e.g. change of task involvement, fatigue etc., or by
artifacts such as swallowing, blinking or yawning. The calibration measurement that is used for
training in machine learning techniques is recorded during 10-30 min, i.e. a relatively short period
of time and typically in a monotone atmosphere, so this data does not contain all possible kinds of
variations to be expected during on-line operation.
The present contribution focusses on invariant feature extraction for BCI. In particular we aim to
enhance the invariance properties of the common spatial patterns (CSP, [15]) algorithm. CSP is the
solution of a generalized eigenvalue problem and has as such a strong link to the maximization of a
Rayleigh coefficient, similar to Fisher's discriminant analysis. Prior work by Mika et al. [16] in the
context of kernel Fisher?s discriminant analysis contains the key idea that we will follow: noise and
distracting signal aspects with respect to which we want to make our feature extractor invariant is
added to the denominator of a Rayleigh coefficient. In other words, our prior knowledge about the
noise type helps to re-design the optimization of CSP feature extraction. We demonstrate how our
invariant CSP (iCSP) technique can be used to make a BCI system invariant to changes in the power
of the parietal α-rhythm (see Section 2) reflecting, e.g., changes in vigilance. Vigilance changes
are among the most pressing challenges when robustifying a BCI system for long-term real-world
applications.
In principle we could also use an adaptive BCI, however, adaptation typically has a limited time
scale which might not allow to follow fluctuations quickly enough. Furthermore online adaptive BCI
systems have so far only been operated with 4-9 channels. We would like to stress that adaptation and
invariant classification are no mutually exclusive alternatives but rather complementary approaches
when striving for the same goal: a BCI system that is invariant to undesired distortions and nonstationarities.
2 Neurophysiology and Experimental Paradigms
Neurophysiological background. Macroscopic brain activity during resting wakefulness contains
distinct ?idle? rhythms located over various brain areas, e.g. the parietal ? -rhythm (7-13 Hz) can
be measured over the visual cortex [17] and the ? -rhythm can be measured over the pericentral
sensorimotor cortices in the scalp EEG, usually with a frequency of about 8?14 Hz ([18]). The
strength of the parietal ? -rhythm reflects visual processing load as well as attention and fatigue
resp. vigilance.
The moment-to-moment amplitude fluctuations of these local rhythms reflect variable functional
states of the underlying neuronal cortical networks and can be used for brain-computer interfacing.
Specifically, the pericentral ? - and ? rythms are diminished, or even almost completely blocked, by
movements of the somatotopically corresponding body part, independent of their active, passive or
reflexive origin. Blocking effects are visible bilateral but with a clear predominance contralateral to
the moved limb. This attenuation of brain rhythms is termed event-related desynchronization (ERD)
and the dual effect of enhanced brain rhythms is called event-related synchronization (ERS) (see
[19]).
Since a focal ERD can be observed over the motor and/or sensory cortex even when a subject is only
imagining a movement or sensation in the specific limb, this feature can be used for BCI control: The
discrimination of the imagination of movements of left hand vs. right hand vs. foot can be based on
the somatotopic arrangement of the attenuation of the μ and/or β rhythms. However, the challenge
is that due to the volume conduction EEG signal recorded at the scalp is a mixture of many cortical
activities that have different spatial localizations; for example, at the electrodes over the mortor
cortex, the signal not only contains the ? -rhythm that we are interested in but also the projection of
parietal ? -rhythm that has little to do with the motor-imagination. To this end, spatial filtering is an
indispensable technique; that is to take a linear combination of signals recorded over EEG channels
and extract only the component that we are interested in. In particular the CSP algorithm that
optimizes spatial filters with respect to discriminability is a good candidate for feature extraction.
Experimental Setup. In this paper we evaluate the proposed algorithm on off-line data in which
the nonstationarity is induced by having two different background conditions for the same primary
1 Note: In our exposition we focus on EEG-based BCI systems that do not rely on evoked potentials (for
an extensive overview of BCI systems including invasive and systems based on evoked potentials see [1]).
Figure 1: Topographies of r²-values (multiplied by the sign of the difference) quantifying the difference in log band-power in the alpha band (8–12 Hz) between different recording sessions. Left: Difference between imag_move and imag_lett. Due to lower visual processing demands, alpha power in occipital areas is stronger in imag_lett. Right: Difference between imag_move and sham_feedback. The latter has decreased alpha power in centro-parietal areas. Note the different sign in the colormaps.
task. The ultimate challenge will be on-line feedback with strong fluctuations of task demands etc.,
a project envisioned for the near future.
We investigate EEG recordings from 4 subjects (for all of whom we have an "invariance measurement", see below). Brain activity was recorded from the scalp with multi-channel amplifiers using
55 EEG channels.
In the "calibration measurement", every 4.5–6 seconds one of 3 different visual stimuli indicated for 3
seconds which mental task the subject should accomplish during that period. The investigated mental tasks were imagined movements of the left hand, the right hand, and the right foot. There were
two types of visual stimulation: (1: imag_lett) targets were indicated by letters (L, R, F) appearing at
a central fixation cross and (2: imag_move) a randomly moving small rhomboid with either its left,
right or bottom corner filled to indicate left or right hand or foot movement, respectively. Since the
movement of the object was independent of the indicated targets, target-uncorrelated eye movements were induced. Due to the different demands in visual processing, the background brain activity
can be expected to differ substantially in those two types of recordings. The topography of the r²-values (bi-serial correlation coefficient of feature values with labels) of the log band-power difference between imag_move and imag_lett is shown in the left plot of Fig. 1. It shows a pronounced difference in parietal areas.
A sham_feedback paradigm was designed in order to characterize invariance properties needed for
stable real-world BCI applications. In this measurement the subjects received a fake feedback sequence which was preprogrammed. The aim of this recording was to collect data during a large
variety of mental states and actions that are not correlated with the BCI control states (motor imagery of hands and feet). Subjects were told that they could control the feedback in some way that
they should find out, e.g. with eye movements or muscle activity. They were instructed not to perform movements of hands, arms, legs and feet. The type of feedback was a standard 1D cursor
control. In each trial the cursor starts in the middle and should be moved to either the left or right
side as indicated by a target cue. When the cursor touched the left or right border, a response (correct
or false) was shown. Furthermore the number of hits and misses was shown. The preprogrammed
"feedback" signal was constructed such that it was random in the beginning and then alternated between periods of increasingly more hits and periods with chance-level performance. This was done to motivate the subjects to try a variety of different actions and to induce different states of mood (satisfaction during "successful" periods and anger resp. disfavor during "failure"). The right plot of Fig. 1 visualizes the difference in log band-power between imag_move and sham_feedback. A decreased alpha
power in centro-parietal areas during sham_feedback can be observed. Note that this recording includes much more variations of background mental activity than the difference between imag_move
and imag_lett.
3 Methods
Common Spatial Patterns (CSP) Analysis. The CSP technique ([15]) allows to determine spatial
filters that maximize the variance of signals of one condition and at the same time minimize the
variance of signals of another condition. Since the variance of band-pass filtered signals is equal to band-power, CSP filters are well suited to discriminate mental states that are characterized by ERD/ERS
effects ([20]). As such it has been well used in BCI systems ([8, 14]) where CSP filters are calculated
individually for each subject on the data of a calibration measurement.
Technically the Common Spatial Pattern (CSP) [21] algorithm gives spatial filters based on a discriminative criterion. Let X_1 and X_2 be the (time × channel) data matrices of the band-pass filtered EEG signals (concatenated trials) under the two conditions (e.g., right-hand or left-hand imagination, respectively2) and Σ_1 and Σ_2 be the corresponding estimates of the covariance matrices Σ_i = X_i^T X_i. We define the two matrices S_d and S_c as follows:

S_d = Σ_1 - Σ_2 : discriminative activity matrix,
S_c = Σ_1 + Σ_2 : common activity matrix.
The CSP spatial filter v ∈ R^C (C is the number of channels) can be obtained by extremizing the Rayleigh coefficient:

{max, min}_{v ∈ R^C}  (v^T S_d v) / (v^T S_c v).   (1)

This can be done by solving a generalized eigenvalue problem

S_d v = λ S_c v.   (2)
The eigenvalue λ is bounded between -1 and 1; a large positive eigenvalue corresponds to a projection of the signal given by v that has large power in the first condition but small power in the second condition; the converse is true for a large negative eigenvalue. The largest and the smallest eigenvalues correspond to the maximum and the minimum of the Rayleigh coefficient problem (Eq. (1)). Note that v^T S_d v = v^T Σ_1 v - v^T Σ_2 v is the average power difference in the two conditions that we want to maximize. On the other hand, the projection of the activity that is common to the two classes, v^T S_c v, should be minimized because it does not contribute to the discriminability. Using the same idea from [16] we can rewrite the Rayleigh problem (Eq. (1)) as follows:

min_{v ∈ R^C}  v^T S_c v,   s.t.  v^T Σ_1 v - v^T Σ_2 v = δ,

which can be interpreted as finding the minimum norm v under the condition that the average power difference between the two conditions equals δ. The norm is defined by the common activity matrix S_c. In the next section, we extend the notion of S_c to incorporate any disturbances that are common to the two classes and that we can measure a priori.
In this paper we call filters the generalized eigenvectors v_j (j = 1, ..., C) of the generalized eigenvalue problem (Eq. (2)) or of a similar problem discussed in the next section. Moreover, we denote by V the matrix we obtain by putting the C generalized eigenvectors into columns, namely V = [v_j]_{j=1}^C ∈ R^{C×C}, and call patterns the row vectors of the inverse A = V^{-1}. Note that a filter v_j ∈ R^C has its corresponding pattern a_j ∈ R^C; a filter v_j extracts only the activity spanned by a_j and cancels out all other activities spanned by a_i (i ≠ j); therefore a pattern a_j tells what the filter v_j is extracting (see Fig. 2).
For classification the features of single-trials are calculated as the log-variance in CSP projected
signals. Here only a few (2 to 6) patterns are used. The selection of patterns is typically based on
eigenvalues. But when a large amount of calibration data is not available it is advisable to use a
more refined technique to select the patterns or to manually choose them by visual inspection. The
variance features are approximately chi-square distributed. Taking the logarithm makes them similar
to gaussian distributions, so a linear classifier (e.g., linear discriminant analysis) is fine.
For the evaluation in this paper we used the CSPs corresponding to the two largest and the two smallest eigenvalues and used linear discriminant analysis for classification. The CSP algorithm, several extensions, as well as practical issues are reviewed in detail in [15].
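To make the construction concrete, here is a minimal Python/NumPy sketch of CSP training and log-variance feature extraction (our own illustration, not the authors' implementation; the epoch layout and the use of scipy.linalg.eigh for Eq. (2) are assumptions):

```python
import numpy as np
from scipy.linalg import eigh

def train_csp(X1, X2, n_filters=2):
    """CSP filters for two-class, band-pass filtered EEG.

    X1, X2: arrays of shape (trials, samples, channels), one per class.
    Returns a (channels, 2*n_filters) filter matrix solving Eq. (2).
    """
    def class_cov(X):
        # average per-trial channel covariance
        return np.mean([np.cov(trial.T) for trial in X], axis=0)

    S1, S2 = class_cov(X1), class_cov(X2)
    Sd, Sc = S1 - S2, S1 + S2
    # generalized eigenvalue problem Sd v = lambda Sc v (ascending eigenvalues)
    evals, V = eigh(Sd, Sc)
    # keep the filters with the most extreme eigenvalues (both signs)
    idx = np.r_[np.arange(n_filters), np.arange(len(evals) - n_filters, len(evals))]
    return V[:, idx]

def log_var_features(X, W):
    """Log band-power of the spatially filtered signals, one row per trial."""
    return np.array([np.log(np.var(trial @ W, axis=0)) for trial in X])
```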
Invariant CSP. The CSP spatial filters extracted as above are optimized for the calibration measurement. However, in online operation of the BCI system different non task-related modulations
of brain signals may occur which are not suppressed by the CSP filters. The reason may be that
these modulations have not been recorded in the calibration measurement or that they have been so
infrequent that they are not consistently reflected in the statistics (e.g. when they are not equally
distributed over the two conditions).
The proposed iCSP method minimizes the influence of modulations that can be characterized in
advance by a covariance matrix. In this manner we can code neurophysiological prior knowledge
2 We use the term covariance for zero-delay second-order statistics between channels and not for the statistical variability. Since we assume the signal to be band-pass filtered, the second-order statistics reflects band-power.
or further information, such as the tangent covariance matrix ([22]), into such a covariance matrix Ξ. In the following motivation we assume that Ξ is the covariance matrix of a signal matrix Y. Using the notions from above, the objective is then to calculate spatial filters v_j^{(1)} such that var(X_1 v_j^{(1)}) is maximized while var(X_2 v_j^{(1)}) and var(Y v_j^{(1)}) are minimized. Dually, spatial filters v_j^{(2)} are determined that maximize var(X_2 v_j^{(2)}) and minimize var(X_1 v_j^{(2)}) and var(Y v_j^{(2)}).

Practically this can be accomplished by solving the following two generalized eigenvalue problems:

V^{(1)T} Σ_1 V^{(1)} = D^{(1)}  and  V^{(1)T} ( (1-ξ)(Σ_1 + Σ_2) + ξΞ ) V^{(1)} = I   (3)
V^{(2)T} Σ_2 V^{(2)} = D^{(2)}  and  V^{(2)T} ( (1-ξ)(Σ_1 + Σ_2) + ξΞ ) V^{(2)} = I   (4)
where ξ ∈ [0, 1] is a hyperparameter to trade off the discrimination of the training classes (X_1, X_2) against invariance (as characterized by Ξ). Section 4 discusses the selection of parameter ξ. Filters v_j^{(1)} with high eigenvalues d_j^{(1)} provide not only high var(X_1 v_j^{(1)}) but also small v_j^{(1)T} ( (1-ξ)Σ_2 + ξΞ ) v_j^{(1)} = 1 - (1-ξ) d_j^{(1)}, i.e. small var(X_2 v_j^{(1)}) and small var(Y v_j^{(1)}). The dual is true for the selection of filters from v_j^{(2)}.
Note that for ξ = 0.5 there is a strong connection to the one-vs-rest strategy for 3-class CSP ([23]). Features for classification are calculated as log-variance using the two filters from each of v_j^{(1)} and v_j^{(2)} corresponding to the largest eigenvalues. Note that the idea of iCSP is in the spirit of the invariance constraints in (kernel) Fisher's Discriminant proposed in [16].
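A corresponding sketch for iCSP, under the same assumptions as above (here Xi stands for the disturbance covariance Ξ, e.g. estimated from an eyes open/eyes closed recording; this is our own illustration of Eqs. (3) and (4)):

```python
import numpy as np
from scipy.linalg import eigh

def train_icsp(S1, S2, Xi, xi=0.5, n_filters=2):
    """Invariant CSP filters from Eqs. (3) and (4).

    S1, S2: class covariance matrices; Xi: disturbance covariance matrix;
    xi in [0, 1] trades discrimination against invariance.
    """
    denom = (1.0 - xi) * (S1 + S2) + xi * Xi
    _, V1 = eigh(S1, denom)  # maximize class-1 power, suppress class 2 and Xi
    _, V2 = eigh(S2, denom)  # dual problem for class 2
    # the filters with the largest eigenvalues from each problem
    return np.column_stack([V1[:, -n_filters:], V2[:, -n_filters:]])
```

For xi = 0 this reduces to ordinary CSP; increasing xi shifts the denominator towards the disturbance covariance and enforces invariance at the cost of discrimination.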
A Theoretical Investigation of iCSP by Influence Analysis. As mentioned, iCSP aims at robust spatial filtering against disturbances whose covariance Ξ can be anticipated from prior knowledge. Influence analysis is a statistical tool with which we can assess the robustness of inference procedures [24]. Basically, it evaluates the effect on inference procedures if we add a small perturbation of O(ε), where ε ≪ 1. For example, influence functions for component analyses such as PCA and CCA have been discussed [25, 26]. We applied this machinery to iCSP in order to check whether iCSP really reduces the influence caused by the disturbance, at least in a local sense. For this purpose, we have the following lemma (its proof is included in the Appendix).
Lemma 1 (Influence of generalized eigenvalue problems) Let λ_k and w_k be the k-th eigenvalue and eigenvector of the generalized eigenvalue problem

A w = λ B w,   (5)

respectively. Suppose that the matrices A and B are perturbed with small matrices εΔ and εP where ε ≪ 1. Then the eigenvalues λ̃_k and eigenvectors w̃_k of the perturbed problem

(A + εΔ) w̃ = λ̃ (B + εP) w̃   (6)

can be expanded as λ_k + ε η_k + o(ε) and w_k + ε ψ_k + o(ε), where

η_k = w_k^T (Δ - λ_k P) w_k,
ψ_k = -M_k (Δ - λ_k P) w_k - (1/2) (w_k^T P w_k) w_k,   (7)

M_k := B^{-1/2} ( B^{-1/2} A B^{-1/2} - λ_k I )^+ B^{-1/2}, and the suffix '+' denotes the Moore-Penrose matrix inverse.
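The first-order expansion of Lemma 1 is easy to verify numerically; the sketch below (our own check, not part of the paper) compares the exact eigenvalue of a randomly perturbed pencil with the prediction λ_k + ε w_k^T(Δ - λ_k P)w_k:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
d = 6

def rand_spd(dim):
    M = rng.standard_normal((dim, dim))
    return M @ M.T + dim * np.eye(dim)

A, B = rand_spd(d), rand_spd(d)
Delta = rng.standard_normal((d, d)); Delta = (Delta + Delta.T) / 2
P = rng.standard_normal((d, d)); P = (P + P.T) / 2
eps = 1e-5

lam, W = eigh(A, B)                       # eigh normalizes W so W.T @ B @ W = I
lam_pert, _ = eigh(A + eps * Delta, B + eps * P)

k = 2
eta_k = W[:, k] @ (Delta - lam[k] * P) @ W[:, k]
print(lam_pert[k] - lam[k], eps * eta_k)  # the two numbers agree up to O(eps^2)
```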
The generalized eigenvalue problems, Eqs. (3) and (4), can be rephrased as

Σ_1 v = d { (1-ξ)(Σ_1 + Σ_2) + ξΞ } v,    Σ_2 u = c { (1-ξ)(Σ_1 + Σ_2) + ξΞ } u.   (8)

For simplicity, we consider here the simplest perturbation of the covariances, Σ_1 → Σ_1 + εΔ and Σ_2 → Σ_2 + εΔ. In this case, the perturbation matrices in the lemma can be expressed as Δ_1 = Δ, Δ_2 = Δ, P = 2(1-ξ)Δ. Therefore, we get the expansions of the eigenvalues and eigenvectors as d_k + ε η_1k, c_k + ε η_2k, v_k + ε ψ_1k and u_k + ε ψ_2k, where

η_1k = { 1 - 2(1-ξ) d_k } v_k^T Δ v_k,    η_2k = { 1 - 2(1-ξ) c_k } u_k^T Δ u_k,   (9)

ψ_1k = -{ 1 - 2(1-ξ) d_k } M_1k Δ v_k - (1-ξ) (v_k^T Δ v_k) v_k,
ψ_2k = -{ 1 - 2(1-ξ) c_k } M_2k Δ u_k - (1-ξ) (u_k^T Δ u_k) u_k,   (10)
Figure 2: Comparison of CSP and iCSP on test data with artificially increased occipital alpha. The upper plots show the classifier output on the test data with different degrees of alpha added (factors 0, 0.5, 1, 2); errors: original CSP 10.7% / 11.4% / 12.9% / 37.9%, invariant CSP 9.3% / 10.0% / 9.3% / 11.4%. The lower panel shows the filter/pattern coefficients topographically mapped on the scalp from original CSP (left) and iCSP (right). Here the invariance property was defined with respect to the increase in the alpha activity in the visual cortex (occipital location) using an eyes open/eyes closed recording. See Section 3 for the definition of filter and pattern.
M_1k := Θ^{-1/2} ( Θ^{-1/2} Σ_1 Θ^{-1/2} - d_k I )^+ Θ^{-1/2}, M_2k := Θ^{-1/2} ( Θ^{-1/2} Σ_2 Θ^{-1/2} - c_k I )^+ Θ^{-1/2}, and Θ := (1-ξ)(Σ_1 + Σ_2) + ξΞ. The implication of the result is the following: if ξ = 1 - 1/(2d_k) (resp. ξ = 1 - 1/(2c_k)) is satisfied, the O(ε) term η_1k (resp. η_2k) of the k-th eigenvalue vanishes and the k-th eigenvector also coincides with the one for the original problem up to order ε, because the first term of ψ_1k (resp. ψ_2k) becomes zero (we note that d_k and c_k also depend on ξ).
4 Evaluation
Test Case with Constructed Test Data. To validate the proposed iCSP, we first applied it to specifically constructed test data. iCSP was trained (ξ = 0.5) on motor imagery data with the invariance characterized by data from a measurement during "eyes open" (approx. 40 s) and "eyes closed" (approx. 20 s). The motor imagery test data was used in its original form and in variants that were modified in a controlled manner: from another data set recorded during "eyes closed" we extracted activity related to increased occipital alpha (backprojection of 5 ICA components) and added it to the test data with three different factors (0.5, 1, 2).
The upper plots of Fig. 2 display the classifier output on the constructed test data. While the performance of the original CSP is more and more deteriorated with increased alpha mixed in, the
proposed iCSP method maintains a stable performance independent of the amount of increased alpha activity. The spatial filters that were extracted by CSP analysis vs. the proposed iCSP often
look quite similar. However, tiny but apparently important differences exist. In the lower panel of
Fig. 2 the filter (v_j) / pattern (a_j) pairs from original CSP (left) and iCSP (right) are shown. The filters from the two approaches resemble each other strongly. However, the corresponding patterns reveal an important difference. While the pattern of the original CSP has positive weights at the right occipital side, which might be susceptible to α-modulations, the corresponding iCSP pattern has not. A more detailed
inspection shows that both filters have a focus over the right (sensori-) motor cortex, but only the
invariant filter has a spot of opposite sign right posterior to it. This spot will filter out contributions
coming from occipital/parietal sites.
Model selection for iCSP. For each subject, a cross-validation was performed for different values of ξ on the training data (session imag_move) and the ξ resulting in minimum error was chosen. For the same values of ξ the iCSP filters + LDA classifier trained on imag_move were applied to calculate the test error on data from imag_lett.
Figure 3: Model selection and evaluation. Left subplots (error [%] vs. ξ for subjects cv, zv, zk, zq): Selection of the hyperparameter ξ of the iCSP method. For each subject, a cross-validation was performed for different values of ξ on the training data (session imag_move), see thin black line, and the ξ resulting in minimum error was chosen (circle). For the same values of ξ the iCSP filters + LDA classifier trained on imag_move were applied to calculate the test error on data from imag_lett (thick colorful line). Right plot: Test error in all four recordings for classical CSP and the proposed iCSP (with model parameter ξ chosen by cross-validation on the training set as described in Section 4).
Fig. 3 (left plots) shows the result of this procedure. The
shape of the cross-validation error on the training set and the test error is very similar. Accordingly,
the selection of values for parameter ξ is successful. For subject zq, ξ = 0 was chosen, i.e. classical CSP. The case of subject zk shows that the selection of ξ may be a delicate issue. For large values of ξ the cross-validation error and the test error differ dramatically. A choice of ξ > 0.5 would result in bad performance of iCSP, while this effect could not have been predicted so severely from the
cross-validation of the training set.
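A sketch of this selection loop is given below, reusing train_icsp and log_var_features from the earlier sketches; the fold count, the xi grid, and the LDA classifier from scikit-learn are our own choices:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fit_icsp(trials, labels, Xi, xi, n_filters=2):
    # class covariances from the training fold, then Eqs. (3) and (4)
    S1 = np.mean([np.cov(t.T) for t in trials[labels == 0]], axis=0)
    S2 = np.mean([np.cov(t.T) for t in trials[labels == 1]], axis=0)
    return train_icsp(S1, S2, Xi, xi, n_filters)  # from the sketch above

def select_xi(trials, labels, Xi, xi_grid=np.linspace(0.0, 0.8, 9), n_splits=5):
    """Pick the invariance trade-off xi by cross-validation on the training set."""
    cv_error = []
    for xi in xi_grid:
        fold_err = []
        for tr, te in KFold(n_splits, shuffle=True, random_state=0).split(trials):
            W = fit_icsp(trials[tr], labels[tr], Xi, xi)
            clf = LinearDiscriminantAnalysis().fit(
                log_var_features(trials[tr], W), labels[tr])
            pred = clf.predict(log_var_features(trials[te], W))
            fold_err.append(np.mean(pred != labels[te]))
        cv_error.append(np.mean(fold_err))
    return xi_grid[int(np.argmin(cv_error))]
```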
Evaluation of Performance with Real BCI Data. For evaluation we used the imag_move session
(see Section 2) as training set and the imag_lett session as test set. Fig. 3 (right plot) compares the classification error obtained by classical CSP and by the proposed method iCSP with model parameter ξ chosen by cross-validation on the training set as described above. Again an excellent
improvement is visible.
5 Concluding discussion
EEG data from Brain-Computer Interface experiments are highly challenging to evaluate due to
noise, nonstationarity and diverse artifacts. Thus, BCI provides an excellent testbed for testing the
quality and applicability of robust machine learning methods (cf. the BCI Competitions [27, 28]).
Obviously BCI users are subject to variations in attention and motivation. These types of nonstationarities can considerably deteriorate the BCI classifier performance. In the present paper we proposed a novel method to alleviate this problem.
A limitation of our method is that variations need to be characterized in advance (by estimating an
appropriate covariance matrix). At the same time this is also a strength of our method as neurophysiological prior knowledge about possible sources of non-stationarity is available and can thus
be taken into account in a controlled manner. Also the selection of the hyperparameter ξ needs more
investigation, cf. the case of subject zk in Fig. 3. One strategy to pursue is to update the covariance
matrix ? online with incoming test data. (Note that no label information is needed.) Online learning
(learning algorithms for adaptation within a BCI session) could also be used to further stabilize the
system against unforeseen changes. It remains to future research to explore this interesting direction.
Appendix: Proof of Lemma 1.

By substituting the expansions of λ̃_k and w̃_k into Eq. (6) and taking the O(ε) term, we get

A ψ_k + Δ w_k = λ_k B ψ_k + λ_k P w_k + η_k B w_k.   (11)

Eq. (7) can be obtained by multiplying w_k^T to Eq. (11) and applying Eq. (5). Then, from Eq. (11),

(A - λ_k B) ψ_k = -(Δ - λ_k P) w_k + η_k B w_k = -(A - λ_k B) M_k (Δ - λ_k P) w_k

holds, where we used the constraints w_j^T B w_k = δ_jk and

(A - λ_k B) M_k = Σ_{j≠k} B w_j w_j^T = I - B w_k w_k^T.   (12)

Eq. (12) can be proven by B^{-1/2} A B^{-1/2} - λ_k I = Σ_{j≠k} (λ_j - λ_k) B^{1/2} w_j w_j^T B^{1/2} and (B^{-1/2} A B^{-1/2} - λ_k I)^+ = Σ_{j≠k} 1/(λ_j - λ_k) B^{1/2} w_j w_j^T B^{1/2}. Since span{w_k} is the kernel of the operator A - λ_k B, ψ_k can be expressed as ψ_k = -M_k (Δ - λ_k P) w_k + c w_k. By a multiplication with w_k^T B, the constant c turns out to be c = -w_k^T P w_k / 2, where we used the facts w_k^T B M_k = 0^T and w_k^T B ψ_k = -w_k^T P w_k / 2 derived from the normalization w̃_k^T (B + εP) w̃_k = 1.
References
[1] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, "Brain-computer interfaces for communication and control", Clin. Neurophysiol., 113: 767–791, 2002.
[2] N. Birbaumer, N. Ghanayim, T. Hinterberger, I. Iversen, B. Kotchoubey, A. Kübler, J. Perelmouter, E. Taub, and H. Flor, "A spelling device for the paralysed", Nature, 398: 297–298, 1999.
[3] G. Pfurtscheller, C. Neuper, C. Guger, W. Harkam, R. Ramoser, A. Schlögl, B. Obermaier, and M. Pregenzer, "Current Trends in Graz Brain-computer Interface (BCI)", IEEE Trans. Rehab. Eng., 8(2): 216–219, 2000.
[4] J. Millán, Handbook of Brain Theory and Neural Networks, MIT Press, Cambridge, 2002.
[5] E. A. Curran and M. J. Stokes, "Learning to control brain activity: A review of the production and control of EEG components for driving brain-computer interface (BCI) systems", Brain Cogn., 51: 326–336, 2003.
[6] G. Dornhege, J. del R. Millán, T. Hinterberger, D. McFarland, and K.-R. Müller, eds., Toward Brain-Computer Interfacing, MIT Press, Cambridge, MA, 2007.
[7] T. Elbert, B. Rockstroh, W. Lutzenberger, and N. Birbaumer, "Biofeedback of Slow Cortical Potentials. I", Electroencephalogr. Clin. Neurophysiol., 48: 293–301, 1980.
[8] C. Guger, H. Ramoser, and G. Pfurtscheller, "Real-time EEG analysis with subject-specific spatial patterns for a Brain Computer Interface (BCI)", IEEE Trans. Neural Sys. Rehab. Eng., 8(4): 447–456, 2000.
[9] B. Blankertz, G. Curio, and K.-R. Müller, "Classifying Single Trial EEG: Towards Brain Computer Interfacing", in: T. G. Diettrich, S. Becker, and Z. Ghahramani, eds., Advances in Neural Inf. Proc. Systems (NIPS 01), vol. 14, 157–164, 2002.
[10] L. Parra, C. Alvino, A. C. Tang, B. A. Pearlmutter, N. Yeung, A. Osman, and P. Sajda, "Linear spatial integration for single trial detection in encephalography", NeuroImage, 7(1): 223–230, 2002.
[11] E. Curran, P. Sykacek, S. Roberts, W. Penny, M. Stokes, I. Jonsrude, and A. Owen, "Cognitive tasks for driving a Brain Computer Interfacing System: a pilot study", IEEE Trans. Rehab. Eng., 12(1): 48–54, 2004.
[12] J. Millán, F. Renkens, J. Mouriño, and W. Gerstner, "Non-invasive brain-actuated control of a mobile robot by human EEG", IEEE Trans. Biomed. Eng., 51(6): 1026–1033, 2004.
[13] N. J. Hill, T. N. Lal, M. Schröder, T. Hinterberger, B. Wilhelm, F. Nijboer, U. Mochty, G. Widman, C. E. Elger, B. Schölkopf, A. Kübler, and N. Birbaumer, "Classifying EEG and ECoG Signals without Subject Training for Fast BCI Implementation: Comparison of Non-Paralysed and Completely Paralysed Subjects", IEEE Trans. Neural Sys. Rehab. Eng., 14(6): 183–186, 2006.
[14] B. Blankertz, G. Dornhege, M. Krauledat, K.-R. Müller, and G. Curio, "The non-invasive Berlin Brain-Computer Interface: Fast Acquisition of Effective Performance in Untrained Subjects", NeuroImage, 37(2): 539–550, 2007, URL http://dx.doi.org/10.1016/j.neuroimage.2007.01.051.
[15] B. Blankertz, R. Tomioka, S. Lemm, M. Kawanabe, and K.-R. Müller, "Optimizing Spatial Filters for Robust EEG Single-Trial Analysis", IEEE Signal Proc. Magazine, 25(1): 41–56, 2008, URL http://dx.doi.org/10.1109/MSP.2008.4408441.
[16] S. Mika, G. Rätsch, J. Weston, B. Schölkopf, A. Smola, and K.-R. Müller, "Invariant Feature Extraction and Classification in Kernel Spaces", in: S. Solla, T. Leen, and K.-R. Müller, eds., Advances in Neural Information Processing Systems, vol. 12, 526–532, MIT Press, 2000.
[17] H. Berger, "Über das Elektroenkephalogramm des Menschen", Archiv für Psychiatrie und Nervenkrankheiten, 99(6): 555–574, 1933.
[18] H. Jasper and H. Andrews, "Normal differentiation of occipital and precentral regions in man", Arch. Neurol. Psychiat. (Chicago), 39: 96–115, 1938.
[19] G. Pfurtscheller and F. H. L. da Silva, "Event-related EEG/MEG synchronization and desynchronization: basic principles", Clin. Neurophysiol., 110(11): 1842–1857, 1999.
[20] Z. J. Koles, "The quantitative extraction and topographic mapping of the abnormal components in the clinical EEG", Electroencephalogr. Clin. Neurophysiol., 79(6): 440–447, 1991.
[21] K. Fukunaga, Introduction to Statistical Pattern Recognition, Academic Press, Boston, 2nd edn., 1990.
[22] B. Schölkopf, Support Vector Learning, Oldenbourg Verlag, Munich, 1997.
[23] G. Dornhege, B. Blankertz, G. Curio, and K.-R. Müller, "Boosting bit rates in non-invasive EEG single-trial classifications by feature combination and multi-class paradigms", IEEE Trans. Biomed. Eng., 51(6): 993–1002, 2004.
[24] F. R. Hampel, E. M. Ronchetti, P. J. Rousseeuw, and W. A. Stahel, Robust Statistics: The Approach Based on Influence Functions, Wiley, New York, 1986.
[25] F. Critchley, "Influence in principal components analysis", Biometrika, 72(3): 627–636, 1985.
[26] M. Romanazzi, "Influence in Canonical Correlation Analysis", Psychometrika, 57(2): 237–259, 1992.
[27] B. Blankertz, K.-R. Müller, G. Curio, T. M. Vaughan, G. Schalk, J. R. Wolpaw, A. Schlögl, C. Neuper, G. Pfurtscheller, T. Hinterberger, M. Schröder, and N. Birbaumer, "The BCI Competition 2003: Progress and Perspectives in Detection and Discrimination of EEG Single Trials", IEEE Trans. Biomed. Eng., 51(6): 1044–1051, 2004.
[28] B. Blankertz, K.-R. Müller, D. Krusienski, G. Schalk, J. R. Wolpaw, A. Schlögl, G. Pfurtscheller, J. del R. Millán, M. Schröder, and N. Birbaumer, "The BCI Competition III: Validating Alternative Approaches to Actual BCI Problems", IEEE Trans. Neural Sys. Rehab. Eng., 14(2): 153–159, 2006.
2,408 | 3,185 | Learning with Tree-Averaged Densities and
Distributions
Sergey Kirshner
AICML and Dept of Computing Science
University of Alberta
Edmonton, Alberta, Canada T6G 2E8
[email protected]
Abstract
We utilize the ensemble-of-trees framework, a tractable mixture over a super-exponential number of tree-structured distributions [1], to develop a new model for multivariate density estimation. The model is based on a construction of tree-structured copulas: multivariate distributions with all univariate marginals uniform on [0, 1].
By averaging over all possible tree structures, the new model can approximate
distributions with complex variable dependencies. We propose an EM algorithm
to estimate the parameters for these tree-averaged models for both the real-valued
and the categorical case. Based on the tree-averaged framework, we propose a
new model for joint precipitation amounts data on networks of rain stations.
1 Introduction
Multivariate real-valued data appears in many real-world data sets, and a lot of research is being
focused on the development of multivariate real-valued distributions. One of the challenges in constructing such distributions is that univariate continuous distributions commonly do not have a clear
multivariate generalization. The most studied exception is the multivariate Gaussian distribution owing to properties such as closed form density expression with a convenient generalization to higher
dimensions and closure over the set of linear projections. However, not all problems can be addressed fairly with Gaussians (e.g., mixtures, multimodal distributions, heavy-tailed distributions),
and new approaches are needed for such problems.
While modeling multivariate distributions is in general difficult due to complicated functional forms
and the curse of dimensionality, learning models for individual variables (univariate marginals) is
often straightforward. Once the univariate marginals are known (or assumed known), the rest can
be modeled using copulas, multivariate distributions with all univariate marginals equal to uniform
distributions on [0, 1] (e.g., [2, 3]). A large portion of copula research concentrated on bivariate
copulas as extensions to higher dimensions are often difficult. Thus if the desired distribution decomposes into its univariate marginals and only bivariate distributions, the machinery of copulas can
be effectively utilized.
Distributions with undirected tree-structured graphical models (e.g., [4]) have exactly these properties, as probability density functions over the variables with tree-structured conditional independence graphs can be written as a product involving univariate marginals and bivariate marginals
corresponding to the edges of the tree. While tree-structured dependence is perhaps too restrictive,
a richer variable dependence can be obtained by averaging over a small number of different tree
structures [5] or all possible tree structures; the latter can be done analytically for categorical-valued
distributions with an ensemble-of-trees model [1]. In this paper, we extend this tree-averaged model
to continuous variables with the help of copulas and derive a learning algorithm to estimate the
parameters within the maximum likelihood framework with EM [6]. Within this framework, the
parameter estimation for tree-structured and tree-averaged models requires optimization over only
univariate and bivariate densities potentially avoiding the curse of dimensionality, a property not
shared by alternative models that relax the dependence restriction of trees (e.g., vines [7]).
The main contributions of the paper are the new tree-averaged model for multivariate copulas, a
parameter estimation algorithm for the tree-averaged framework (for both categorical and real-valued complete data), and a new model for multi-site daily precipitation amounts, an important application in hydrology. In the process, we introduce a previously unexplored tree-structured copula density and
an algorithm for estimation of its structure and parameters. The paper is organized as follows.
First, we describe copulas, their densities, and some of their useful properties (Section 2). We then
construct multivariate copulas with tree-structured dependence from bivariate copulas (Section 3.1)
and show how to estimate the parameters of the bivariate copulas and perform the edge selection.
To allow more complex dependencies between the variables, we describe a tree-averaged copula,
a novel copula object constructed by averaging over all possible spanning trees for tree-structured
copulas, and derive a learning algorithm for the estimation of the parameters from data for the treeaveraged copulas (Section 4). We apply our new method to a benchmark data set (Section 5.1);
we also develop a new model for multi-site precipitation amounts, a problem involving both binary
(rain/no rain) and continuous (how much rain) variables (Section 5.2).
2 Copulas
Let X = (X1 , . . . , Xd ) be a vector random variable with corresponding probability distribution
F (cdf) defined on Rd . We denote by V the set of d components (variables) of X and refer
to individual variables as X_v for v ∈ V. For simplicity, we will refer to assignments to random variables by lower case letters, e.g., X_v = x_v will be denoted by x_v. Let F_v(x_v) = F(X_v = x_v, X_u = ∞ : u ∈ V \ {v}) denote a univariate marginal of F over the variable X_v.
Let pv (xv ) denote the probability density function (pdf) of Xv . Let av = Fv (xv ), and let
a = (a1 , . . . , ad ), so a is a vector of quantiles of components of x with respect to corresponding
univariate marginals. Next, we define copula, a multivariate distribution over vectors of quantiles.
Definition 1. The copula associated with F is a distribution function C : [0, 1]^d → [0, 1] that satisfies

F(x) = C(F_1(x_1), ..., F_d(x_d)),  x ∈ R^d.   (1)

If F is a continuous distribution on R^d with univariate marginals F_1, ..., F_d, then C(a) = F( F_1^{-1}(a_1), ..., F_d^{-1}(a_d) ) is the unique choice for (1).
Assuming that F has d-th order partial derivatives, the probability density function (pdf) can be obtained from the distribution function via differentiation and expressed in terms of a derivative of a copula:

p(x) = ∂^d F(x) / (∂x_1 ... ∂x_d) = ∂^d C(a) / (∂x_1 ... ∂x_d) = [ ∂^d C(a) / (∂a_1 ... ∂a_d) ] Π_{v∈V} ∂a_v/∂x_v = c(a) Π_{v∈V} p_v(x_v)   (2)

where c(a) = ∂^d C(a) / (∂a_1 ... ∂a_d) is referred to as a copula density function.
Suppose we are given a complete data set D = {x^1, ..., x^N} of d-component real-valued vectors x^n = (x_1^n, ..., x_d^n) under the i.i.d. assumption. A maximum likelihood (ML) estimate for the parameters of c (or p) from data can be obtained by maximizing the log-likelihood of D

ln p(D) = Σ_{v∈V} Σ_{n=1}^N ln p_v(x_v^n) + Σ_{n=1}^N ln c( F_1(x_1^n), ..., F_d(x_d^n) ).   (3)
The first term of the log-likelihood corresponds to the total log-likelihood of all univariate marginals
of p, and the second term to the log-likelihood of its d-variate copula. These terms are not independent as the second term in the sum is defined in terms of the probability expressions in the first
summand; except for a few special cases, a direct optimization of (3) is prohibitively complicated.
However a useful (and asymptotically consistent) heuristic is first to maximize the log-likelihood for
the marginals (first term only), and then to estimate the parameters for the copula given the solution
for the marginals. The univariate marginals can be accurately estimated by either fitting the parameters for some appropriately chosen univariate distributions or by applying non-parametric methods,1 as the marginals are estimated independently of each other and do not suffer from the curse of dimensionality. Let p̂_v(x_v) be the estimated pdf for component v, and F̂_v be the corresponding cdf. Let A = {a^1, ..., a^N}, where a^n = (a_1^n, ..., a_d^n) = ( F̂_1(x_1^n), ..., F̂_d(x_d^n) ), be a set of estimated quantiles. Under the above heuristic, the ML estimate for the copula density c is computed by maximizing ln c(A) = Σ_{n=1}^N ln c(a^n).
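For instance, with non-parametric (CML-style) margins, the quantile set A can be formed from normalized ranks; a minimal sketch (our own illustration):

```python
import numpy as np
from scipy.stats import rankdata

def pseudo_observations(D):
    """Map an (N, d) data matrix to estimated quantiles in (0, 1).

    Uses the rescaled empirical cdf rank/(N+1) per column, so no value hits
    0 or 1 exactly (which would break many copula densities).
    """
    N = D.shape[0]
    return np.column_stack([rankdata(D[:, v]) / (N + 1.0)
                            for v in range(D.shape[1])])
```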
3 Exploiting Tree-Structured Dependence
Joint probability distributions are often modeled with probabilistic graphical models where the structure of the graph captures the conditional independence relations of the variables. The joint distribution is then represented as a product of functions over subsets of variables. We would like to
keep the number of variables for each of the functions small as the number of parameters and the
number of points needed for parameter estimation often grows exponentially with the number of
variables. Thus, we focus on copulas with tree dependence. Trees play an important role in probabilistic graphical models as they allow for efficient exact inference [10] as well as structure and
parameter learning [4]. They can also be placed in a fully Bayesian framework with decomposable
priors allowing to compute expected values (over all possible spanning trees) of product of functions
defined on the edges of the trees [1]. As we will see later in this section, under the tree-structured dependence, a copula density can be computed as products of bivariate copula densities over the edges
of the graph. This property allows us to estimate the parameters for the edge copulas independently.
3.1 Tree-Structured Copulas
We consider tree-structured Markov networks, i.e., undirected graphs that do not have loops. For a
distribution F admitting a tree-structured Markov network (referred to from now on as a tree-structured distribution), assuming that p(x) > 0 and p(x) < ∞ for x ∈ R ⊆ R^d, the density (for x ∈ R) can be rewritten as

p(x) = [ Π_{v∈V} p_v(x_v) ] Π_{{u,v}∈E} [ p_uv(x_u, x_v) / ( p_u(x_u) p_v(x_v) ) ].   (4)
This formulation easily follows from the Hammersley-Clifford theorem [11]. Note that for {u,v} ∈ E, a copula density c_uv(a_u, a_v) for F(x_u, x_v) can be computed using Equation 2:

c_uv(a_u, a_v) = p_uv(x_u, x_v) / ( p_u(x_u) p_v(x_v) ).   (5)
Using Equations 2, 4, and 5, c_p(a) for F(x) can be computed as

c_p(a) = p(x) / Π_{v∈V} p_v(x_v) = Π_{{u,v}∈E} p_uv(x_u, x_v) / ( p_u(x_u) p_v(x_v) ) = Π_{{u,v}∈E} c_uv(a_u, a_v).   (6)
Equation 6 states that a copula density for a tree-structured distribution decomposes as a product
of bivariate copulas over its edges. The converse is true as well; a tree-structured copula can be
constructed by specifying copulas for the edges of the tree.
Theorem 1. Given a tree or a forest G = (V, E) and copula densities c_uv(a_u, a_v) for {u,v} ∈ E,

c_E(a) = Π_{{u,v}∈E} c_uv(a_u, a_v)

is a valid copula density.
For a tree-structured density, the copula log-likelihood can be rewritten as

ln c(A) = Σ_{{u,v}∈E} Σ_{n=1}^N ln c_uv(a_u^n, a_v^n),

and the parameters can be fitted by maximizing Σ_{n=1}^N ln c_uv(a_u^n, a_v^n) independently for different pairs {u,v} ∈ E. The tree structure can be learned from the data as well, as in the Chow-Liu algorithm [4]. The full algorithm can be found in an extended version of the paper [12].

1 These approaches for copula estimation are referred to as inference for the margins (IFM) [8] and canonical maximum likelihood (CML) [9] for parametric and non-parametric forms for the marginals, respectively.
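As an illustration of the edge fitting and tree selection, the sketch below pairs a bivariate Gaussian copula (our own choice of family, not prescribed by the paper; the correlation is estimated from the normal scores, a standard approximation to the MLE) with a maximum-weight spanning tree computed via scipy:

```python
import numpy as np
from scipy.stats import norm
from scipy.sparse.csgraph import minimum_spanning_tree

def gauss_copula_loglik(au, av):
    """Fitted log-likelihood of a bivariate Gaussian copula on quantile pairs."""
    zu, zv = norm.ppf(au), norm.ppf(av)
    rho = np.clip(np.corrcoef(zu, zv)[0, 1], -0.999, 0.999)  # approximate MLE
    ll = (-0.5 * np.log(1 - rho ** 2)
          + (2 * rho * zu * zv - rho ** 2 * (zu ** 2 + zv ** 2))
          / (2 * (1 - rho ** 2)))
    return ll.sum(), rho

def fit_tree_copula(A):
    """Chow-Liu style: maximum-weight spanning tree over edge log-likelihoods."""
    d = A.shape[1]
    weight = np.zeros((d, d))
    for u in range(d):
        for v in range(u + 1, d):
            ll, _ = gauss_copula_loglik(A[:, u], A[:, v])
            weight[u, v] = weight[v, u] = ll
    # max-weight spanning tree == min spanning tree on shifted, negated weights;
    # the shift keeps all off-diagonal costs positive (scipy reads 0 as "no edge")
    cost = (weight.max() + 1.0) - weight
    np.fill_diagonal(cost, 0.0)
    mst = minimum_spanning_tree(cost)
    return [(int(u), int(v)) for u, v in zip(*mst.nonzero())]
```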
4 Tree-Averaged Copulas
While the framework from Section 3.1 is computationally efficient and convenient for implementation, the imposed tree-structured dependence is too restrictive for real-world problems. Vines [7],
for example, deal with this problem by allowing recursive refinements for the bivariate probabilities
over variables not connected by the tree edges. However, vines require estimation of additional characteristics of the distribution (e.g., conditional rank correlations) requiring estimation over large sets
of variables, which is not advisable when the amount of available data is not large. Our proposed
method would only require optimization of parameters of bivariate copulas from the corresponding
two components of weighted data vectors. Using the Bayesian framework for spanning trees from
[1], it is possible to construct an object constituting a convex combination over all possible spanning
trees allowing a much richer set of conditional independencies than a single tree.
Meilă and Jaakkola [1] proposed a decomposable prior over all possible spanning tree structures. Let β be a symmetric matrix of non-negative weights for all pairs of distinct variables and zeros on the diagonal. Let E be the set of all possible spanning trees over V. The probability distribution over all spanning tree structures over V is defined as

P(E ∈ E | β) = (1/Z) Π_{{u,v}∈E} β_uv   where   Z = Σ_{E∈E} Π_{{u,v}∈E} β_uv.   (7)

Even though the sum is over |E| = d^{d-2} trees, Z can be efficiently computed in closed form using a weighted generalization of Kirchhoff's Matrix Tree Theorem (e.g., [1]).
Theorem 2. Let P(E) be a distribution over spanning tree structures defined by (7). Then the normalization constant Z is equal to the determinant |L*(β)|, with matrix L*(β) representing the first (d-1) rows and columns of the matrix L(β) given by:

L_uv(β) = L_vu(β) = { -β_uv,           u, v ∈ V, u ≠ v;
                      Σ_{w∈V} β_vw,    u, v ∈ V, u = v. }

β is a generalization of an adjacency matrix, and L(β) is a generalization of the Laplacian matrix.
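In code, Z therefore costs a single determinant; a short sketch under the notation above:

```python
import numpy as np

def log_partition(beta):
    """log Z = log |L*(beta)| for a symmetric non-negative weight matrix beta
    with zero diagonal (Theorem 2)."""
    L = np.diag(beta.sum(axis=1)) - beta           # weighted graph Laplacian
    sign, logdet = np.linalg.slogdet(L[:-1, :-1])  # keep the first d-1 rows/cols
    assert sign > 0
    return logdet
```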
The decomposability property of the tree prior (Equation 7) allows us to compute the average of the tree-structured distributions over all d^{d-2} tree structures. In [1], such averaging was applied to tree-structured distributions over categorical variables. Similarly, we define a tree-averaged copula density as a convex combination of copula densities of the form (6):

r(a) = Σ_{E∈E} P(E|β) c(a) = (1/Z) Σ_{E∈E} [ Π_{{u,v}∈E} β_uv ] [ Π_{{u,v}∈E} c_uv(a_u, a_v) ] = |L*(βc(a))| / |L*(β)|

where entry (u,v) of the matrix βc(a) denotes β_uv c_uv(a_u, a_v). A finite convex combination of copulas is a copula, so r(a) is a copula density.
4.1 Parameter Estimation
Given a set of estimated quantile values $A$, suitable parameter values $\beta$ (the edge weight matrix) and $\theta$ (the parameters for the bivariate edge copulas) can be found by maximizing the log-likelihood of $A$:
$$l(\beta, \theta) = \ln r(A \mid \beta, \theta) = \sum_{n=1}^{N} \ln r(\mathbf{a}^n \mid \beta, \theta) = \sum_{n=1}^{N} \ln |L^*(\beta_c(\mathbf{a}^n \mid \theta))| - N \ln |L^*(\beta)|. \qquad (8)$$
However, the parameter optimization of $l(\beta, \theta)$ cannot be done analytically. Instead, noticing that we are dealing with a mixture model (granted, one where the number of mixture components is super-exponential), we propose performing the parameter optimization with the EM algorithm [6].
Footnote 2: The possibility of an EM algorithm for ensembles-of-trees with categorical data was mentioned in [1], but the idea was abandoned due to concerns about the M-step.
Algorithm TREEAVERAGEDCOPULADENSITY(D, c)
Inputs: A complete data set D of d-component real-valued vectors; a set of bivariate parametric copula densities $c = \{c_{uv} : u, v \in V\}$.
1. Estimate univariate margins $\hat{F}_v(X_v)$ for all components $v \in V$, treating all components independently.
2. Replace D with $A$ consisting of vectors $\mathbf{a}^n = (\hat{F}_1(x_1^n), \ldots, \hat{F}_d(x_d^n))$ for each vector $\mathbf{x}^n$ in D.
3. Initialize $\beta$ and $\theta$.
4. Run until convergence (as determined by the change in log-likelihood, Equation 8):
   - E-step: For all vectors $\mathbf{a}^n$ and pairs $\{u,v\}$, compute $P(\{u,v\} \in E \mid \mathbf{a}^n, \beta, \theta)$.
   - M-step:
     - Update $\beta$ with gradient ascent.
     - Update $\theta_{uv}$ for all pairs by setting the partial derivative with respect to the parameters of $\theta_{uv}$ (Equation 9) to zero and solving the corresponding equations.
Output: Denoting $a_u = \hat{F}_u(x_u)$ and $a_v = \hat{F}_v(x_v)$, $\hat{p}(\mathbf{x}) = \frac{|L^*(\beta_c(\mathbf{a}))|}{|L^*(\beta)|} \prod_{v \in V} \hat{p}_v(x_v)$.
Figure 1: Algorithm for estimation of a pdf with tree-averaged copulas.
While there are $d^{d-2}$ possible mixture components (spanning trees), in the E-step we only need to compute the posterior probabilities for $d(d-1)/2$ edges. Each step of EM consists of finding parameters $(\beta', \theta')$ maximizing the expected joint log-likelihood $M(\beta', \theta'; \beta, \theta)$ given the current parameter values $(\beta, \theta)$, where
$$M(\beta', \theta'; \beta, \theta) = \sum_{n=1}^{N} \sum_{E_n \in \mathcal{E}} P(E_n \mid \mathbf{a}^n, \beta, \theta) \ln \left[ P(E_n \mid \beta')\, c(\mathbf{a}^n \mid E_n, \theta') \right]$$
$$= \sum_{\{u,v\}} \sum_{n=1}^{N} s_n(\{u,v\}) \left( \ln \beta'_{uv} + \ln c_{uv}(a_u^n, a_v^n \mid \theta'_{uv}) \right) - N \ln |L^*(\beta')|;$$
$$s_n(\{u,v\}) = \sum_{E \in \mathcal{E}:\, \{u,v\} \in E} P(E_n \mid \mathbf{a}^n, \beta, \theta) = \sum_{E \in \mathcal{E}:\, \{u,v\} \in E} \frac{\prod_{\{u',v'\} \in E} \beta_{u'v'}\, c_{u'v'}(a_{u'}^n, a_{v'}^n \mid \theta_{u'v'})}{|L^*(\beta_c(\mathbf{a}^n))|}.$$
The probability distribution $P(E_n \mid \mathbf{a}^n, \beta, \theta)$ is of the same form as the tree prior, so to compute $s_n(\{u,v\})$ one needs to compute the sum of the probabilities of all trees containing the edge $\{u,v\}$.
Theorem 3. Let $P(E \mid \beta)$ be the tree prior defined in Equation 7. Let $Q(\beta) = (L^*(\beta))^{-1}$, where $L^*$ is obtained by removing row and column $w$ from $L$. Then
$$\sum_{E \in \mathcal{E}:\, \{u,v\} \in E} P(E \mid \beta) = \begin{cases} \beta_{uv} \left( Q_{uu}(\beta) + Q_{vv}(\beta) - 2 Q_{uv}(\beta) \right) & u \neq v,\ u \neq w,\ v \neq w, \\ \beta_{uw}\, Q_{uu}(\beta) & v = w, \\ \beta_{wv}\, Q_{vv}(\beta) & u = w. \end{cases}$$
As a consequence of Theorem 3, for each $\mathbf{a}^n$, all $d(d-1)/2$ edge probabilities $s_n(\{u,v\})$ can be computed simultaneously with the time complexity of a single $(d-1) \times (d-1)$ matrix inversion, $O(d^3)$. Assuming a candidate bivariate copula $c_{uv}$ has one free parameter $\theta_{uv}$, $\theta_{uv}$ can be optimized by setting
$$\frac{\partial M(\beta', \theta'; \beta, \theta)}{\partial \theta'_{uv}} = \sum_{n=1}^{N} s_n(\{u,v\})\, \frac{\partial \ln c_{uv}(a_u^n, a_v^n; \theta'_{uv})}{\partial \theta'_{uv}} \qquad (9)$$
to 0. (See [12] for more details.) The parameters of the tree prior can be updated by maximizing
$$\sum_{\{u,v\}} \left( \frac{1}{N} \sum_{n=1}^{N} s_n(\{u,v\}) \right) \ln \beta'_{uv} - \ln |L^*(\beta')|,$$
an expression concave in $\ln \beta'_{uv}\ \forall \{u,v\}$. $\beta'$ can be updated using a gradient ascent algorithm on $\ln \beta'_{uv}\ \forall \{u,v\}$, with time complexity $O(d^3)$ per iteration. The outline of the EM algorithm is shown in Figure 1. Assuming the complexity of each bivariate copula update is $O(N)$, the time complexity of each EM iteration is $O(N d^3)$.
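To make the E-step concrete, here is a numpy sketch of the computation that Theorem 3 licenses: all edge posteriors for one data point from a single matrix inversion. The array `beta_c` holds the products $\beta_{uv} c_{uv}(a_u^n, a_v^n)$, and node $w$ is taken to be the last index (both choices are ours, for illustration):

```python
import numpy as np

def edge_posteriors(beta_c):
    """All s_n({u,v}) from one (d-1)x(d-1) inversion, O(d^3) total.
    beta_c[u, v] = beta_uv * c_uv(a_u^n, a_v^n); symmetric, zero diagonal."""
    d = beta_c.shape[0]
    L = np.diag(beta_c.sum(axis=1)) - beta_c
    Q = np.linalg.inv(L[:-1, :-1])             # remove row/column w = d-1
    s = np.zeros((d, d))
    for u in range(d - 1):
        for v in range(u + 1, d - 1):
            s[u, v] = beta_c[u, v] * (Q[u, u] + Q[v, v] - 2.0 * Q[u, v])
        s[u, d - 1] = beta_c[u, d - 1] * Q[u, u]   # edges incident to w
    return s + s.T
```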
The EM algorithm can be easily transferred to tree averaging for categorical data. The E-step does
not change, and in the M-step, the parameters for the univariate marginals are updated ignoring
bivariate terms. Then, the parameters for the bivariate distributions for each edge are updated constrained on the new values of the parameters for the univariate distributions. While the algorithm
does not guarantee a maximization of the expected log-likelihood, it nonetheless worked well in our
experiments.
5 Experiments
5.1 MAGIC Gamma Telescope Data Set
First, we tested our tree-averaged density estimator on the MAGIC Gamma Telescope Data Set from the UCI Machine Learning Repository [13]. We considered only the examples from class gamma (signal); this set consists of 12332 vectors of d = 10 real-valued components. The univariate marginals are not Gaussian (some are bounded; some have multiple modes). Fig. 2 shows the average log-likelihood of models trained on training sets with N = 50, 100, 200, 500, 1000, 2000, 5000, 10000 and evaluated on 2000-example test sets (averaged over 10 training and test sets). The marginals were estimated using Gaussian kernel density estimators (KDE) with rule-of-thumb bandwidth selection. All of the models except for the full Gaussian have the same marginals and differ only in the multivariate dependence (copula). As expected from the curse of dimensionality, product KDE improves only logarithmically with the amount of data. Not only are the marginals non-Gaussian (evidenced by a Gaussian copula with KDE marginals outperforming a Gaussian distribution); the multivariate dependence is also non-Gaussian, evidenced by a tree-structured Frank copula outperforming both a tree-structured and a full Gaussian copula. However, model averaging even with the wrong dependence model (tree-averaged Gaussian copula) yields superior performance.
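A sketch of the first two stages of this pipeline (CML-style: kernel-CDF marginals, then quantiles). The O(N^2) CDF evaluation is a simplification for illustration, not the paper's implementation:

```python
import numpy as np
from scipy import stats

def kde_quantiles(X):
    """Map each column of X to (0,1) with a Gaussian-KDE estimate of its
    marginal CDF (rule-of-thumb bandwidth), as in step 2 of Figure 1."""
    N, d = X.shape
    A = np.empty((N, d))
    for v in range(d):
        x = X[:, v]
        h = stats.gaussian_kde(x).factor * x.std(ddof=1)   # bandwidth
        # F_hat(x_i) = average of Normal CDFs centred at the samples
        A[:, v] = stats.norm.cdf((x[:, None] - x[None, :]) / h).mean(axis=1)
    return np.clip(A, 1e-6, 1 - 1e-6)   # keep strictly inside (0,1)
```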
5.2 Multi-Site Precipitation Modeling
We applied the tree-averaged framework to the problem of modeling daily rainfall amounts for a
regional spatial network of stations. The task is to build a generative model capturing the spatial
and temporal properties of the data. This model can be used in at least two ways: first, to sample
sequences from it and to use them as inputs for other models, e.g., crop models; and second, as
a descriptive model of the data. Hidden Markov models (possibly with non-homogeneous transitions) are frequently used for this task (e.g., [14]), with the transition distribution responsible for modeling the temporal dependence and the emission distributions capturing most of the spatial dependence. Additionally, HMMs can be viewed as assigning daily rainfall patterns to "weather states" (the corresponding emission components), and both these states (as described by either their parameters or the statistics of the patterns associated with them) and their temporal evolution often
offer useful synoptic insight. We will use HMMs as the wrapper model with tree-averaged (and
tree-structured) distributions to model the emission components.
The distribution of daily rainfall amounts for any given station can be viewed as a non-overlapping mixture with one component corresponding to zero precipitation and the other component to positive precipitation. For a station $v$, let $r_v$ be the precipitation amount, $\pi_v$ the probability of positive precipitation, and $f_v(r_v \mid \eta_v)$ a probability density function for amounts given positive precipitation:
$$p(r_v \mid \pi_v, \eta_v) = \begin{cases} 1 - \pi_v & r_v = 0, \\ \pi_v\, f_v(r_v \mid \eta_v) & r_v > 0. \end{cases}$$
For a pair of stations $\{u, v\}$, let $\pi_{uv}$ denote the probability of simultaneous positive amounts and $c_{uv}(F_u(r_u \mid \eta_u), F_v(r_v \mid \eta_v) \mid \theta_{uv})$ denote the copula density for simultaneous positive amounts; then
$$p(r_u, r_v \mid \pi_u, \pi_v, \pi_{uv}, \eta_u, \eta_v) = \begin{cases} 1 - \pi_u - \pi_v + \pi_{uv} & r_u = 0,\ r_v = 0, \\ (\pi_v - \pi_{uv})\, f_v(r_v \mid \eta_v) & r_u = 0,\ r_v > 0, \\ (\pi_u - \pi_{uv})\, f_u(r_u \mid \eta_u) & r_u > 0,\ r_v = 0, \\ \pi_{uv}\, f_u(r_u)\, f_v(r_v)\, c_{uv}(F_u(r_u), F_v(r_v)) & r_u > 0,\ r_v > 0. \end{cases}$$
We can now define tree-structured and tree-averaged probability distributions, $p_t(\mathbf{r})$ and $p_{ta}(\mathbf{r})$ respectively, over the amounts:
$$\gamma_{uv}(\mathbf{r}) = \frac{p(r_u, r_v \mid \pi_u, \pi_v, \pi_{uv}, \eta_u, \eta_v)}{p(r_u \mid \pi_u, \eta_u)\, p(r_v \mid \pi_v, \eta_v)}, \qquad p_t(\mathbf{r} \mid \pi, \eta, \theta, E) = \left[ \prod_{v \in V} p(r_v \mid \pi_v, \eta_v) \right] \prod_{\{u,v\} \in E} \gamma_{uv}(\mathbf{r}),$$
$$p_{ta}(\mathbf{r} \mid \pi, \eta, \theta, \beta) = \sum_{E \in \mathcal{E}} P(E \mid \beta)\, p_t(\mathbf{r} \mid \pi, \eta, \theta, E) = \left[ \prod_{v \in V} p(r_v \mid \pi_v, \eta_v) \right] \frac{|L^*(\beta_\gamma(\mathbf{r}))|}{|L^*(\beta)|}.$$
We employ univariate exponential distributions $f_v(r_v) = \lambda_v e^{-\lambda_v r_v}$ and bivariate Gaussian copulas
$$c_{uv}(a_u, a_v) = \frac{1}{\sqrt{1 - \rho_{uv}^2}} \exp\left( -\frac{\rho_{uv}^2 \left( \Phi^{-1}(a_u)^2 + \Phi^{-1}(a_v)^2 \right) - 2 \rho_{uv}\, \Phi^{-1}(a_u)\, \Phi^{-1}(a_v)}{2 (1 - \rho_{uv}^2)} \right).$$
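The bivariate Gaussian copula density above, transcribed directly into code (scipy assumed):

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_density(a_u, a_v, rho):
    """Bivariate Gaussian copula density with correlation rho, as above."""
    x, y = norm.ppf(a_u), norm.ppf(a_v)
    num = rho * rho * (x * x + y * y) - 2.0 * rho * x * y
    return np.exp(-num / (2.0 * (1.0 - rho * rho))) / np.sqrt(1.0 - rho * rho)
```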
We applied the models to a data set collected from 30 stations in a region of Southeastern Australia (Fig. 3), 1986-2005, April-October (20 sequences of 214 30-dimensional vectors each). We used a 5-state HMM with three different types of emission distributions: tree-averaged ($p_{ta}$), tree-structured ($p_t$), and conditionally independent (the first term of $p_t$ and $p_{ta}$). We will refer to these models as HMM-TA, HMM-Tree, and HMM-CI, respectively. For HMM-TA, we reduced the number of free parameters by only allowing edges between stations adjacent to each other as determined by the Delaunay triangulation (Fig. 3). We also did not learn the edge weights ($\beta$), setting them to 1 for the selected edges and to 0 for the rest. To make sure that the models do not overfit, we computed their out-of-sample log-likelihood with cross-validation, leaving out one year at a time (not shown). (5 states were chosen because the leave-one-year-out log-likelihood starts to flatten out for HMM-TA at 5 states.) The resulting log-likelihoods divided by the number of days and stations are -0.9392, -0.9522, and -1.0222 for HMM-TA, HMM-Tree, and HMM-CI, respectively. To see how well the models capture the properties of the data, we trained each model on the whole data set (with 50 restarts of EM), and then simulated 500 sequences of length 214. We are particularly interested in how well they measure pairwise dependence; we concentrate on two measures: the log-odds ratio for occurrence, and Kendall's $\tau$ measure of concordance for pairs when both stations had positive amounts. Both are shown in Fig. 4. Both plots suggest that HMM-CI underestimates the pairwise dependence for strongly dependent pairs (as indicated by its trend to predict lower absolute values for log-odds and concordance); HMM-Tree estimates the dependence correctly mostly for strongly dependent pairs (as indicated by good prediction for high values) but underestimates it for moderate dependence; and HMM-TA performs the best for most pairs, except for the ones with very strong dependence.
Acknowledgements
This work has been supported by the Alberta Ingenuity Fund through the AICML. We thank Stephen
Charles (CSIRO, Australia) for providing us with precipitation data.
References
[1] M. Meilă and T. Jaakkola. Tractable Bayesian learning of tree belief networks. Statistics and Computing, 16(1):77-92, 2006.
[2] H. Joe. Multivariate Models and Dependence Concepts, volume 73 of Monographs on Statistics and Applied Probability. Chapman & Hall/CRC, 1997.
[3] R. B. Nelsen. An Introduction to Copulas. Springer Series in Statistics. Springer, 2nd edition, 2006.
[4] C. K. Chow and C. N. Liu. Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory, IT-14(3):462-467, May 1968.
[5] M. Meilă and M. I. Jordan. Learning with mixtures of trees. Journal of Machine Learning Research, 1(1):1-48, October 2000.
[Figure 2 plot omitted: x-axis training set size (50-10000), y-axis log-likelihood per feature.]
Figure 2: Averaged test set per-feature log-likelihood for MAGIC data: independent KDE (black solid), product KDE (blue dashed), Gaussian (brown solid), Gaussian copula (orange solid +), Gaussian tree-copula (magenta dashed x), Frank tree-copula (blue dashed), Gaussian tree-averaged copula (red solid x).
[Figure 3 map omitted: x-axis longitude 143-150, y-axis latitude -38 to -33.]
Figure 3: Station map with station locations (red dots), coastline, and the pairs of stations selected according to the Delaunay triangulation (dotted lines).
[Figure 4 scatter-plots omitted: historical vs. simulated values with a y = x reference line; legend: HMM-TA, HMM-Tree, HMM-CI.]
Figure 4: Scatter-plots of log-odds ratios for occurrence (left) and Kendall's $\tau$ measure of concordance (right) for all pairs of stations, for the historical data vs HMM-TA (red o), HMM-Tree (blue x), and HMM-CI (green).
[6] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological), 39(1):1-38, 1977.
[7] T. Bedford and R. M. Cooke. Vines - a new graphical model for dependent random variables. The Annals of Statistics, 30(4):1031-1068, 2002.
[8] H. Joe and J. J. Xu. The estimation method of inference functions for margins for multivariate models. Technical report, Department of Statistics, University of British Columbia, 1996.
[9] C. Genest, K. Ghoudi, and L.-P. Rivest. A semiparametric estimation procedure of dependence parameters in multivariate families of distributions. Biometrika, 82:543-552, 1995.
[10] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers, Inc., San Francisco, California, 1988.
[11] J. Besag. Spatial interaction and the statistical analysis of lattice systems. Journal of the Royal Statistical Society, Series B (Methodological), 36(2):192-236, 1974.
[12] S. Kirshner. Learning with tree-averaged densities and distributions. Technical Report TR 08-01, Department of Computing Science, University of Alberta, 2008.
[13] A. Asuncion and D. J. Newman. UCI machine learning repository, 2007.
[14] E. Bellone. Nonhomogeneous Hidden Markov Models for Downscaling Synoptic Atmospheric Patterns to Precipitation Amounts. PhD thesis, Department of Statistics, University of Washington, 2000.
| 3185 |@word determinant:1 version:1 inversion:1 repository:2 nd:1 closure:1 cml:1 tr:1 solid:4 wrapper:1 liu:2 series:3 denoting:1 current:1 assigning:1 scatter:1 written:1 treating:1 plot:2 update:3 fund:1 v:1 generative:1 selected:3 underestimating:1 location:1 constructed:2 direct:1 consists:2 fitting:1 introduce:1 pairwise:2 expected:4 ingenuity:1 frequently:1 multi:3 ifm:1 alberta:4 curse:4 precipitation:11 xx:1 bounded:1 estimating:1 rivest:1 differentiation:1 guarantee:1 temporal:3 unexplored:1 concave:1 xd:4 exactly:1 prohibitively:1 wrong:1 rainfall:3 biometrika:1 converse:1 positive:6 xv:26 consequence:1 meil:3 ree:1 black:1 coastline:2 luv:1 au:10 studied:1 specifying:1 hmms:2 downscaling:1 averaged:22 unique:1 responsible:1 recursive:1 procedure:1 weather:1 convenient:2 projection:1 flatten:1 suggest:1 cannot:1 selection:2 applying:1 restriction:1 imposed:1 map:1 maximizing:6 straightforward:1 independently:3 convex:3 focused:1 simplicity:1 decomposable:2 estimator:2 rule:1 insight:1 updated:4 annals:1 construction:1 suppose:1 play:1 ualberta:1 exact:1 pt:5 homogeneous:1 logarithmically:1 trend:1 particularly:1 utilized:1 xnd:2 role:1 capture:2 vine:4 region:1 connected:1 e8:1 mentioned:1 monograph:1 dempster:1 complexity:4 trained:2 solving:1 multimodal:1 joint:4 easily:2 represented:1 distinct:1 describe:2 newman:1 richer:2 heuristic:2 valued:8 plausible:1 loglikelihood:1 relax:1 statistic:7 laird:1 sequence:3 descriptive:1 propose:3 interaction:1 product:8 uci:2 loop:1 exploiting:1 convergence:1 nelsen:1 leave:1 object:2 help:1 derive:2 develop:2 advisable:1 strong:1 longitude:1 c:1 differ:1 concentrate:1 owing:1 australia:2 kirshner:2 adjacency:1 require:2 crc:1 f1:4 generalization:5 extension:1 considered:1 hall:1 predict:1 kirchoff:1 estimation:13 treestructured:2 weighted:2 gaussian:19 super:1 pn:2 jaakkola:2 focus:1 emission:4 methodological:2 rank:1 likelihood:19 besag:1 inference:4 dependent:3 chow:2 hidden:2 relation:1 interested:1 denoted:1 development:1 constrained:1 special:1 copula:63 fairly:1 marginal:1 equal:2 once:1 construct:2 initialize:1 washington:1 orange:1 chapman:1 pta:4 report:2 intelligent:1 summand:1 few:1 employ:1 simultaneously:1 gamma:3 individual:2 consisting:1 csiro:1 fd:4 possibility:1 mixture:7 admitting:1 edge:17 fu:4 partial:2 daily:4 machinery:1 tree:88 incomplete:1 desired:1 fitted:1 column:2 modeling:4 bedford:1 assignment:1 maximization:1 lattice:1 subset:1 decomposability:1 entry:1 uniform:2 too:2 dependency:2 my:1 density:28 probabilistic:3 clifford:1 thesis:1 containing:1 derivative:3 concordance:3 inc:1 ad:4 later:1 lot:1 closed:2 kendall:4 portion:1 start:1 red:3 complicated:2 asuncion:1 contribution:1 kaufmann:1 characteristic:1 efficiently:1 ensemble:3 yield:1 bayesian:3 thumb:1 accurately:1 simultaneous:2 definition:1 underestimate:1 nonetheless:1 hydrology:1 associated:2 xn1:3 dimensionality:4 improves:1 organized:1 appears:1 puv:3 higher:2 ta:8 day:1 restarts:1 april:1 formulation:1 done:2 though:1 evaluated:1 strongly:2 correlation:1 until:1 overfit:1 overlapping:1 mode:1 indicated:2 perhaps:1 grows:1 requiring:1 true:1 brown:1 concept:1 evolution:1 analytically:2 symmetric:1 deal:1 conditionally:1 adjacent:1 pdf:4 outline:1 complete:3 cp:3 reasoning:1 novel:1 charles:1 superior:1 functional:1 exponentially:1 volume:1 extend:1 marginals:21 refer:3 rd:4 uv:36 similarly:1 had:1 dot:1 pu:3 delaunay:2 multivariate:16 posterior:1 triangulation:2 moderate:1 binary:1 wv:1 outperforming:2 morgan:1 additional:1 maximize:1 signal:1 
stephen:1 rv:21 full:3 multiple:1 dashed:3 ing:1 technical:2 offer:1 cross:1 divided:1 ensity:1 a1:4 laplacian:1 prediction:1 involving:2 crop:1 iteration:2 sergey:2 normalization:1 kernel:1 cuv:15 semiparametric:1 addressed:1 leaving:1 publisher:1 appropriately:1 rest:2 regional:1 ascent:2 sure:1 undirected:2 jordan:1 odds:5 vw:1 independence:2 variate:1 bandwidth:1 methods1:1 idea:1 expression:3 granted:1 suffer:1 useful:3 clear:1 amount:14 concentrated:1 telescope:2 reduced:1 canonical:1 dotted:1 estimated:6 per:3 correctly:1 blue:3 discrete:1 independency:1 d3:3 ce:1 utilize:1 uw:1 graph:4 asymptotically:1 sum:3 year:2 run:1 letter:1 noticing:1 family:1 capturing:2 worked:1 performing:2 transferred:1 structured:24 department:3 according:1 combination:3 em:10 ln:21 equation:8 computationally:1 previously:1 needed:2 tractable:2 available:1 gaussians:1 rewritten:2 apply:1 occurrence:2 alternative:1 abandoned:1 rain:4 anv:5 denotes:1 graphical:4 synoptic:2 restrictive:2 quantile:1 build:1 approximating:1 society:2 parametric:4 dependence:21 diagonal:1 gradient:2 thank:1 simulated:3 hmm:21 collected:1 spanning:9 assuming:4 ru:11 length:1 aicml:2 modeled:2 ratio:2 providing:1 difficult:2 october:2 mostly:1 potentially:1 kde:7 frank:3 quu:2 negative:1 magic:3 implementation:1 perform:1 allowing:4 av:11 markov:4 benchmark:1 finite:1 extended:1 superexponential:1 station:14 canada:1 atmospheric:1 evidenced:2 pair:12 optimized:1 california:1 fv:9 learned:1 pearl:1 pattern:3 latitude:1 challenge:1 hammersley:1 green:1 royal:2 belief:1 suitable:1 representing:1 xd1:1 xnv:1 categorical:6 columbia:1 sn:6 prior:6 acknowledgement:1 fully:1 validation:1 t6g:1 consistent:1 rubin:1 dd:3 heavy:1 cooke:1 row:2 placed:1 supported:1 free:2 allow:2 absolute:1 dimension:2 xn:5 world:2 valid:1 transition:2 commonly:1 refinement:1 san:1 historical:3 constituting:1 transaction:1 approximate:1 keep:1 dealing:1 ml:2 assumed:1 francisco:1 continuous:4 tailed:1 decomposes:2 an1:1 additionally:1 learn:1 nonhomogeneous:1 ca:1 ignoring:1 forest:1 genest:1 complex:2 constructing:1 did:1 main:1 whole:1 edition:1 x1:5 xu:10 site:3 referred:3 fig:4 edmonton:1 quantiles:3 en:4 quv:1 pv:8 exponential:2 candidate:1 spatial:4 theorem:6 removing:1 magenta:1 british:1 concern:1 bivariate:17 joe:2 effectively:1 ci:6 phd:1 anu:5 margin:3 univariate:18 expressed:1 qvv:2 springer:2 corresponds:1 satisfies:1 cdf:2 conditional:4 viewed:2 shared:1 replace:1 change:2 determined:2 except:3 averaging:6 total:1 exception:1 latter:1 dept:1 tested:1 avoiding:1 |
2,409 | 3,186 | Local Algorithms for Approximate Inference in
Minor-Excluded Graphs
Kyomin Jung
Dept. of Mathematics, MIT
[email protected]
Devavrat Shah
Dept. of EECS, MIT
[email protected]
Abstract
We present a new local approximation algorithm for computing the MAP and the log-partition function for arbitrary exponential family distributions represented by a finite-valued pair-wise Markov random field (MRF), say G. Our algorithm is based on decomposing G into appropriately chosen small components, computing estimates locally in each of these components, and then producing a good global solution. We prove that the algorithm can provide an approximate solution within arbitrary accuracy when G excludes some finite-sized graph as its minor and G has bounded degree: all planar graphs with bounded degree are examples of such graphs. The running time of the algorithm is O(n) (n is the number of nodes in G), with the constant dependent on the accuracy, the degree of the graph, and the size of the graph that is excluded as a minor (a constant for planar graphs).
Our algorithm for minor-excluded graphs uses the decomposition scheme of Klein, Plotkin and Rao (1993). In general, our algorithm works with any decomposition scheme and provides a quantifiable approximation guarantee that depends on the decomposition scheme.
1 Introduction
Markov Random Field (MRF) based exponential families of distributions allow for representing distributions in an intuitive parametric form. Therefore, they have been successful for modeling in many applications. Specifically, consider an exponential family on $n$ random variables $X = (X_1, \ldots, X_n)$ represented by a pair-wise (undirected) MRF with graph structure $G = (V, E)$, where the vertex set is $V = \{1, \ldots, n\}$ and the edge set is $E \subset V \times V$. Each $X_i$ takes value in a finite set $\Sigma$ (e.g. $\Sigma = \{0, 1\}$). The joint distribution of $X = (X_i)$ is: for $\mathbf{x} = (x_i) \in \Sigma^n$,
$$\Pr[X = \mathbf{x}] \propto \exp\left( \sum_{i \in V} \phi_i(x_i) + \sum_{(i,j) \in E} \psi_{ij}(x_i, x_j) \right). \qquad (1)$$
Here, the functions $\phi_i : \Sigma \to \mathbb{R}_+ = \{x \in \mathbb{R} : x \geq 0\}$ and $\psi_{ij} : \Sigma^2 \to \mathbb{R}_+$ are assumed to be arbitrary non-negative (real-valued) functions (see Footnote 1). The two most important computational questions of interest are: (i) finding the maximum a-posteriori (MAP) assignment $\mathbf{x}^*$, where $\mathbf{x}^* = \arg\max_{\mathbf{x} \in \Sigma^n} \Pr[X = \mathbf{x}]$; and (ii) the marginal distributions of variables, i.e. $\Pr[X_i = x]$ for $x \in \Sigma$, $1 \leq i \leq n$. MAP is equivalent to a minimal energy assignment (or ground state), where the energy $E(\mathbf{x})$ of a state $\mathbf{x} \in \Sigma^n$ is defined as $E(\mathbf{x}) = -H(\mathbf{x}) + \text{Constant}$, with $H(\mathbf{x}) = \sum_{i \in V} \phi_i(x_i) + \sum_{(i,j) \in E} \psi_{ij}(x_i, x_j)$. Similarly, computing marginals is equivalent to computing the log-partition function, defined as $\log Z = \log \sum_{\mathbf{x} \in \Sigma^n} \exp\left( \sum_{i \in V} \phi_i(x_i) + \sum_{(i,j) \in E} \psi_{ij}(x_i, x_j) \right)$.
In this paper, we will find $\varepsilon$-approximate solutions of MAP and the log-partition function: that is, $\hat{\mathbf{x}}$ and $\log \hat{Z}$ such that
$$(1-\varepsilon) H(\mathbf{x}^*) \leq H(\hat{\mathbf{x}}) \leq H(\mathbf{x}^*), \qquad (1-\varepsilon) \log Z \leq \log \hat{Z} \leq (1+\varepsilon) \log Z.$$
Footnote 1: Here, we assume the positivity of the $\phi_i$'s and $\psi_{ij}$'s for simplicity of analysis.
Previous Work. The question of finding the MAP (or a ground state) comes up in many important application areas such as coding theory, discrete optimization, and image denoising. Similarly, the log-partition function is used in counting combinatorial objects, loss-probability computation in computer networks, etc. Both problems are NP-hard for exact and even (constant) approximate computation for an arbitrary graph G. However, applications require solving these problems using very simple algorithms. A plausible approach is as follows. First, identify a wide class of graphs that have simple algorithms for computing the MAP and the log-partition function. Then, try to build the system (e.g. codes) so that such good graph structure emerges, and use the simple algorithm, or else use the algorithm as a heuristic. Such an approach has resulted in many interesting recent results, starting with the Belief Propagation (BP) algorithm designed for tree graphs [1]. Since there is a vast literature on this topic, we will recall only a few results. Two important algorithms are generalized belief propagation (BP) [2] and the tree-reweighted algorithm (TRW) [3,4]. Key properties of interest for these iterative procedures are the correctness of fixed points and convergence. Many results characterizing properties of the fixed points are known, starting from [2]. Various sufficient conditions for their convergence are known, starting with [5]. However, simultaneous convergence and correctness of such algorithms are established only for specific problems, e.g. [6].
Finally, we discuss two relevant results. The first result is about properties of TRW. The TRW algorithm provides a provable upper bound on the log-partition function for an arbitrary graph [3]. However, to the best of the authors' knowledge, the error is not quantified. TRW for MAP estimation has a strong connection to a specific Linear Programming (LP) relaxation of the problem [4]. This was made precise in a sequence of works by Kolmogorov [7] and Kolmogorov and Wainwright [8] for binary MRFs. It is worth noting that the LP relaxation can be poor even for simple problems.
The second is an approximation algorithm proposed by Globerson and Jaakkola [9] to compute the log-partition function using planar graph decomposition (PDC). PDC uses techniques of [3] in conjunction with a known result about exact computation of the partition function for a binary MRF when G is planar and the exponential family has a specific form. Their algorithm provides a provable upper bound for an arbitrary graph. However, they do not quantify the error incurred. Further, their algorithm is limited to binary MRFs.
Contribution. We propose a novel local algorithm for approximate computation of the MAP and the log-partition function. For any $\varepsilon > 0$, our algorithm can produce an $\varepsilon$-approximate solution for MAP and the log-partition function for an arbitrary MRF G, as long as G excludes a finite graph as a minor (precise definition later). For example, planar graphs exclude $K_{3,3}$ and $K_5$ as minors. The running time of the algorithm is O(n), with the constant dependent on $\varepsilon$, the maximum vertex degree of G, and the size of the graph that is excluded as a minor. Specifically, for a planar graph with bounded degree, it takes at most $C(\varepsilon)\, n$ time to find an $\varepsilon$-approximate solution, with $\log \log C(\varepsilon) = O(1/\varepsilon)$. In general, our algorithm works for any G and we can quantify a bound on the error incurred by our algorithm. It is worth noting that, unlike many previous works, our algorithm provides a provable lower bound on the log-partition function as well.
The precise results for minor-excluded graphs are stated in Theorems 1 and 2. The result concerning
general graphs are stated in the form of Lemmas 2-3-4 for log-partition and Lemmas 5-6-7 for MAP.
Techniques. Our algorithm is based on the following idea: First, decompose G into small-size connected components, say $G_1, \ldots, G_k$, by removing a few edges of G. Second, compute estimates (either MAP or log-partition) in each $G_i$ separately. Third, combine these estimates to produce a global estimate while taking care of the effect induced by the removed edges. We show that the error in the estimate depends only on the edges removed. This error bound characterization is applicable to arbitrary graphs.
Klein, Plotkin and Rao [10] introduced a clever and simple decomposition method for minor-excluded graphs to study the gap between max-flow and min-cut for multicommodity flows. We use their method to obtain a good edge-set for decomposing a minor-excluded G so that the error induced in our estimate is small (it can be made as small as required).
In general, as long as G allows for such a good edge-set for decomposing G into small components, our algorithm will provide a good estimate. To compute estimates in individual components, we use dynamic programming. Since each component is small, this is not computationally burdensome. However, one may obtain even simpler heuristics by replacing dynamic programming with other methods such as BP or TRW for computation in the components.
Here we present useful definitions and previous results about decomposition of minor-excluded
graphs from [10,11].
Definition 1 (Minor Exclusion) A graph H is called minor of G if we can transform G into H
through an arbitrary sequence of the following two operations: (a) removal of an edge; (b) merge
two connected vertices u, v: that is, remove edge (u, v) as well as vertices u and v; add a new vertex
and make all edges incident on this new vertex that were incident on u or v. Now, if H is not a minor
of G then we say that G excludes H as a minor.
The explanation of the following statement may help understand the definition: any graph H with
r nodes is a minor of Kr , where Kr is a complete graph of r nodes. This is true because one may
obtain H by removing edges from Kr that are absent in H. More generally, if G is a subgraph of
G0 and G has H as a minor, then G0 has H as its minor. Let Kr,r denote a complete bipartite graph
with r nodes in each partition. Then Kr is a minor of Kr,r . An important implication of this is as
follows: to prove property P for graph G that excludes H, of size r, as a minor, it is sufficient to
prove that any graph that excludes Kr,r as a minor has property P. This fact was cleverly used by
Klein et. al. [10] to obtain a good decomposition scheme described next. First, a definition.
Definition 2 (($\varepsilon$, $\Delta$)-decomposition) Given a graph G = (V, E), a randomly chosen subset of edges $B \subset E$ is called an ($\varepsilon$, $\Delta$)-decomposition of G if the following holds: (a) For any edge $e \in E$, $\Pr(e \in B) \leq \varepsilon$. (b) Let $S_1, \ldots, S_K$ be the connected components of the graph $G' = (V, E \setminus B)$ obtained by removing the edges of B from G. Then, for any such component $S_j$, $1 \leq j \leq K$, and any $u, v \in S_j$, the shortest-path distance between u and v in the original graph G is at most $\Delta$ with probability 1.
The existence of an ($\varepsilon$, $\Delta$)-decomposition implies that it is possible to remove an $\varepsilon$ fraction of the edges so that the graph decomposes into connected components whose diameter is small. We describe a simple and explicit construction of such a decomposition for the minor-excluded class of graphs. This scheme was proposed by Klein, Plotkin, Rao [10] and Rao [11].
DeC(G, r, $\Lambda$)
(0) Input is a graph G = (V, E) and $r, \Lambda \in \mathbb{N}$. Initially, $i = 0$, $G^0 = G$, $B = \emptyset$.
(1) For $i = 0, \ldots, r-1$, do the following.
    (a) Let $S_1^i, \ldots, S_{k_i}^i$ be the connected components of $G^i$.
    (b) For each $S_j^i$, $1 \leq j \leq k_i$, pick an arbitrary node $v_j \in S_j^i$.
        - Create a breadth-first search tree $T_j^i$ rooted at $v_j$ in $S_j^i$.
        - Choose a number $L_j^i$ uniformly at random from $\{0, \ldots, \Lambda-1\}$.
        - Let $B_j^i$ be the set of edges at levels $L_j^i$, $\Lambda + L_j^i$, $2\Lambda + L_j^i, \ldots$ in $T_j^i$.
        - Update $B = B \cup (\cup_{j=1}^{k_i} B_j^i)$.
    (c) Set $i = i + 1$.
(2) Output B and the graph $G' = (V, E \setminus B)$.
As stated above, the basic idea is to use the following step recursively (up to depth r of recursion): in each connected component, say S, choose a node arbitrarily and create a breadth-first search tree, say T. Choose a number, say L, uniformly at random from $\{0, \ldots, \Lambda-1\}$. Remove (add to B) all edges that are at level $L + k\Lambda$, $k \geq 0$, in T. Clearly, the total running time of such an algorithm is $O(r(n + |E|))$ for a graph G = (V, E) with |V| = n, with possible parallel implementation across different connected components.
The algorithm DeC(G, r, $\Lambda$) is designed to provide a good decomposition for the class of graphs that exclude $K_{r,r}$ as a minor. Figure 1 explains the algorithm for a line graph of n = 9 nodes, which excludes $K_{2,2}$ as a minor. The example is a sample run of DeC(G, 2, 3) (Figure 1 shows the first iteration of the algorithm); a code sketch of one round follows.
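A minimal sketch of one round of the decomposition (Python with `networkx` assumed; the truncation parameter $\Lambda$ is written `Lam`). Whether $B_j^i$ contains only BFS-tree edges or all edges crossing the cut levels is a reading of the text above; this sketch cuts the tree edges whose deeper endpoint sits at a cut level:

```python
import random
import networkx as nx

def dec_round(G, Lam, B):
    """One iteration of DeC: in each surviving connected component, BFS
    from an arbitrary root and cut edges at depths L, Lam+L, 2*Lam+L, ..."""
    H = G.copy()
    H.remove_edges_from(B)
    for comp in nx.connected_components(H):
        sub = H.subgraph(comp)
        root = next(iter(comp))
        depth = nx.single_source_shortest_path_length(sub, root)
        L = random.randrange(Lam)
        for u, v in nx.bfs_edges(sub, root):       # v is the deeper endpoint
            if depth[v] % Lam == L:                # edge sits at a cut level
                B.add(tuple(sorted((u, v))))
    return B

# DeC(G, r, Lam): start with B = set() and call dec_round(G, Lam, B) r times.
```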
[Figure 1 illustration omitted: the 9-node line graph G = G^0, the BFS tree T_1 rooted at node 1 with cut levels L_1, the graph G^1, and the resulting components S_1, ..., S_5.]
Figure 1: The first of two iterations in execution of DeC(G, 2, 3) is shown.
Lemma 1 If G excludes $K_{r,r}$ as a minor, then algorithm DeC(G, r, $\Lambda$) outputs B which is an $(r/\Lambda, O(\Lambda))$-decomposition of G.
It is known that planar graphs exclude $K_{3,3}$ as a minor. Hence, Lemma 1 implies the following.
Corollary 1 Given a planar graph G, the algorithm DeC(G, 3, $\Lambda$) produces a $(3/\Lambda, O(\Lambda))$-decomposition for any $\Lambda \geq 1$.
3 Approximate log Z
Here, we describe an algorithm for approximate computation of log Z for any graph G. The algorithm uses a decomposition algorithm as a sub-routine. In what follows, we use the term DECOMP for a generic decomposition algorithm. The key point is that our algorithm provides provable upper and lower bounds on log Z for any graph; the approximation guarantee and the computation time depend on the properties of DECOMP. Specifically, for $K_{r,r}$ minor-excluded G (e.g. a planar graph with r = 3), we will use DeC(G, r, $\Lambda$) in place of DECOMP. Using Lemma 1, we show that our algorithm based on DeC provides an approximation up to arbitrary multiplicative accuracy by tuning the parameter $\Lambda$.
LOGPARTITION(G)
(1) Use DECOMP(G) to obtain $B \subset E$ such that
    (a) $G' = (V, E \setminus B)$ is made of connected components $S_1, \ldots, S_K$.
(2) For each connected component $S_j$, $1 \leq j \leq K$, do the following:
    (a) Compute the partition function $Z_j$ restricted to $S_j$ by dynamic programming (or exhaustive computation).
(3) Let $\psi_{ij}^L = \min_{(x,x') \in \Sigma^2} \psi_{ij}(x, x')$ and $\psi_{ij}^U = \max_{(x,x') \in \Sigma^2} \psi_{ij}(x, x')$. Then
$$\log \hat{Z}_{LB} = \sum_{j=1}^{K} \log Z_j + \sum_{(i,j) \in B} \psi_{ij}^L; \qquad \log \hat{Z}_{UB} = \sum_{j=1}^{K} \log Z_j + \sum_{(i,j) \in B} \psi_{ij}^U.$$
(4) Output: lower bound $\log \hat{Z}_{LB}$ and upper bound $\log \hat{Z}_{UB}$.
In words, LOGPARTITION(G) produces upper and lower bounds on log Z of the MRF G as follows: decompose the graph G into (small) components $S_1, \ldots, S_K$ by removing (few) edges $B \subset E$ using DECOMP(G). Compute the exact log-partition function in each of the components. To produce the bounds $\log \hat{Z}_{LB}$, $\log \hat{Z}_{UB}$, take the sum of the component-wise log-partition functions thus computed, together with the minimal and maximal effect of the edges from B.
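A runnable sketch of steps (2)-(3), with brute-force enumeration standing in for the dynamic programming of step (2a); here `phi` maps each node to its potential function and `psi` maps edge tuples to theirs (these conventions are ours, for illustration):

```python
import itertools
import math

def log_partition_bounds(components, B, phi, psi, Sigma=(0, 1)):
    """LOGPARTITION: exact log Z_j per small component, then add the
    min/max contribution of every removed edge in B."""
    logZ = 0.0
    for S in components:                       # step (2): exact per component
        S = list(S)
        inner = [e for e in psi if e[0] in S and e[1] in S and e not in B]
        Z_j = 0.0
        for xs in itertools.product(Sigma, repeat=len(S)):
            x = dict(zip(S, xs))
            h = sum(phi[i](x[i]) for i in S)
            h += sum(psi[e](x[e[0]], x[e[1]]) for e in inner)
            Z_j += math.exp(h)
        logZ += math.log(Z_j)
    edge_vals = [[psi[e](a, b) for a in Sigma for b in Sigma] for e in B]
    lb = logZ + sum(min(v) for v in edge_vals)   # step (3)
    ub = logZ + sum(max(v) for v in edge_vals)
    return lb, ub
```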
Analysis of LOGPARTITION for General G: Here, we analyze the performance of LOGPARTITION for any G. In the next section, we will specialize our analysis to minor-excluded G when LOGPARTITION uses DeC as the DECOMP algorithm.
Lemma 2 Given an MRF G described by (1), LOGPARTITION produces $\log \hat{Z}_{LB}$, $\log \hat{Z}_{UB}$ such that
$$\log \hat{Z}_{LB} \leq \log Z \leq \log \hat{Z}_{UB}, \qquad \log \hat{Z}_{UB} - \log \hat{Z}_{LB} = \sum_{(i,j) \in B} \left( \psi_{ij}^U - \psi_{ij}^L \right).$$
It takes $O(|E| K |\Sigma|^{|S^*|}) + T_{DECOMP}$ time to produce this estimate, where $|S^*| = \max_{j=1}^{K} |S_j|$, with DECOMP producing the decomposition of G into $S_1, \ldots, S_K$ in time $T_{DECOMP}$.
Lemma 3 If G has maximum vertex degree D, then $\log Z \geq \frac{1}{D+1} \left[ \sum_{(i,j) \in E} \left( \psi_{ij}^U - \psi_{ij}^L \right) \right]$.
Lemma 4 If G has maximum vertex degree D and DECOMP(G) produces B that is an ($\varepsilon$, $\Delta$)-decomposition, then
$$\mathbb{E}\left[ \log \hat{Z}_{UB} - \log \hat{Z}_{LB} \right] \leq \varepsilon (D+1) \log Z$$
with respect to the randomness in B, and LOGPARTITION takes time $O(n D |\Sigma|^{D^{\Delta}}) + T_{DECOMP}$.
Analysis of LOGPARTITION for Minor-excluded G: Here, we specialize the analysis of LOGPARTITION to minor-excluded graphs G. For G that excludes the minor $K_{r,r}$, we use algorithm DeC(G, r, $\Lambda$). Now, we state the main result for log-partition function computation.
Theorem 1 Let G exclude $K_{r,r}$ as a minor and have D as its maximum vertex degree. Given $\varepsilon > 0$, use the LOGPARTITION algorithm with DeC(G, r, $\Lambda$) where $\Lambda = \lceil r(D+1)/\varepsilon \rceil$. Then,
$$\log \hat{Z}_{LB} \leq \log Z \leq \log \hat{Z}_{UB}; \qquad \mathbb{E}\left[ \log \hat{Z}_{UB} - \log \hat{Z}_{LB} \right] \leq \varepsilon \log Z.$$
Further, the algorithm takes $O(n\, C(D, |\Sigma|, \varepsilon))$ time, where the constant $C(D, |\Sigma|, \varepsilon) = D |\Sigma|^{D^{O(rD/\varepsilon)}}$.
We obtain the following immediate implication of Theorem 1.
Corollary 2 For any $\varepsilon > 0$, the LOGPARTITION algorithm with the DeC algorithm, for an MRF G based on a constant-degree planar graph, produces $\log \hat{Z}_{LB}$, $\log \hat{Z}_{UB}$ so that
$$(1-\varepsilon) \log Z \leq \log \hat{Z}_{LB} \leq \log Z \leq \log \hat{Z}_{UB} \leq (1+\varepsilon) \log Z,$$
in time $O(n C(\varepsilon))$ where $\log \log C(\varepsilon) = O(1/\varepsilon)$.
4 Approximate MAP
Now, we describe an algorithm to compute the MAP approximately. It is very similar to the LOGPARTITION algorithm: given G, decompose it into (small) components $S_1, \ldots, S_K$ by removing (few) edges $B \subset E$. Then, compute an approximate MAP assignment by computing the exact MAP restricted to the components. As in LOGPARTITION, the computation time and performance of the algorithm depend on the properties of the decomposition scheme. We describe the algorithm for any graph G; it will be specialized for $K_{r,r}$ minor-excluded G using DeC(G, r, $\Lambda$).
MODE(G)
(1) Use DECOMP(G) to obtain $B \subset E$ such that
    (a) $G' = (V, E \setminus B)$ is made of connected components $S_1, \ldots, S_K$.
(2) For each connected component $S_j$, $1 \leq j \leq K$, do the following:
    (a) Through dynamic programming (or exhaustive computation), find the exact MAP $\mathbf{x}^{*,j}$ for component $S_j$, where $\mathbf{x}^{*,j} = (x_i^{*,j})_{i \in S_j}$.
(3) Produce the output $\hat{\mathbf{x}}^*$, which is obtained by assigning values to the nodes using $\mathbf{x}^{*,j}$, $1 \leq j \leq K$.
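The same conventions as in the earlier sketch give MODE's step (2) in code; again brute force stands in for dynamic programming on the small components:

```python
import itertools

def mode_assignment(components, B, phi, psi, Sigma=(0, 1)):
    """MODE: exact MAP on each component; concatenate into a global x_hat."""
    x_hat = {}
    for S in components:
        S = list(S)
        inner = [e for e in psi if e[0] in S and e[1] in S and e not in B]
        best, best_h = None, float("-inf")
        for xs in itertools.product(Sigma, repeat=len(S)):
            x = dict(zip(S, xs))
            h = sum(phi[i](x[i]) for i in S)
            h += sum(psi[e](x[e[0]], x[e[1]]) for e in inner)
            if h > best_h:
                best, best_h = x, h
        x_hat.update(best)
    return x_hat
```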
Analysis of MODE for General G: Here, we analyze the performance of MODE for any G. Later, we will specialize our analysis to minor-excluded G when it uses DeC as the DECOMP algorithm.
Lemma 5 Given an MRF G described by (1), the MODE algorithm produces an output $\hat{\mathbf{x}}^*$ such that
$$H(\mathbf{x}^*) - \sum_{(i,j) \in B} \left( \psi_{ij}^U - \psi_{ij}^L \right) \leq H(\hat{\mathbf{x}}^*) \leq H(\mathbf{x}^*).$$
It takes $O(|E| K |\Sigma|^{|S^*|}) + T_{DECOMP}$ time to produce this estimate, where $|S^*| = \max_{j=1}^{K} |S_j|$, with DECOMP producing the decomposition of G into $S_1, \ldots, S_K$ in time $T_{DECOMP}$.
Lemma 6 If G has maximum vertex degree D, then
$$H(\mathbf{x}^*) \geq \frac{1}{D+1} \sum_{(i,j) \in E} \psi_{ij}^U \geq \frac{1}{D+1} \sum_{(i,j) \in E} \left( \psi_{ij}^U - \psi_{ij}^L \right).$$
Lemma 7 If G has maximum vertex degree D and DECOMP(G) produces B that is an ($\varepsilon$, $\Delta$)-decomposition, then
$$\mathbb{E}\left[ H(\mathbf{x}^*) - H(\hat{\mathbf{x}}^*) \right] \leq \varepsilon (D+1) H(\mathbf{x}^*),$$
where the expectation is with respect to the randomness in B. Further, MODE takes time $O(n D |\Sigma|^{D^{\Delta}}) + T_{DECOMP}$.
Analysis of MODE for Minor-excluded G: Here, we specialize the analysis of MODE to minor-excluded graphs G. For G that excludes the minor $K_{r,r}$, we use algorithm DeC(G, r, $\Lambda$). Now, we state the main result for MAP computation.
Theorem 2 Let G exclude $K_{r,r}$ as a minor and have D as its maximum vertex degree. Given $\varepsilon > 0$, use the MODE algorithm with DeC(G, r, $\Lambda$) where $\Lambda = \lceil r(D+1)/\varepsilon \rceil$. Then,
$$(1-\varepsilon) H(\mathbf{x}^*) \leq H(\hat{\mathbf{x}}^*) \leq H(\mathbf{x}^*).$$
Further, the algorithm takes $n \cdot C(D, |\Sigma|, \varepsilon)$ time, where the constant $C(D, |\Sigma|, \varepsilon) = D |\Sigma|^{D^{O(rD/\varepsilon)}}$.
We obtain the following immediate implication of Theorem 2.
Corollary 3 For any $\varepsilon > 0$, the MODE algorithm with the DeC algorithm, for an MRF G based on a constant-degree planar graph, produces an estimate $\hat{\mathbf{x}}^*$ so that
$$(1-\varepsilon) H(\mathbf{x}^*) \leq H(\hat{\mathbf{x}}^*) \leq H(\mathbf{x}^*),$$
in time $O(n C(\varepsilon))$ where $\log \log C(\varepsilon) = O(1/\varepsilon)$.
5 Experiments
Our algorithm provides a provably good approximation for any MRF with a minor-excluded graph structure, with planar graphs as a special case. In this section, we present an experimental evaluation of our algorithm on a popular synthetic model.
Setup 1.² Consider a binary (i.e. $\Sigma = \{0, 1\}$) MRF on an $n \times n$ lattice G = (V, E):
$$\Pr(\mathbf{x}) \propto \exp\left( \sum_{i \in V} \theta_i x_i + \sum_{(i,j) \in E} \theta_{ij} x_i x_j \right), \quad \text{for } \mathbf{x} \in \{0, 1\}^{n^2}.$$
Figure 2 shows a lattice or grid graph with n = 4 (on the left side). There are two scenarios for choosing the parameters (with the notation U[a, b] denoting the uniform distribution over the interval [a, b]):
(1) Varying interaction. $\theta_i$ is chosen independently from the distribution U[-0.05, 0.05] and $\theta_{ij}$ is chosen independently from U[$-\alpha$, $\alpha$] with $\alpha \in \{0.2, 0.4, \ldots, 2\}$.
(2) Varying field. $\theta_{ij}$ is chosen independently from the distribution U[-0.5, 0.5] and $\theta_i$ is chosen independently from U[$-\alpha$, $\alpha$] with $\alpha \in \{0.2, 0.4, \ldots, 2\}$.
The grid graph is planar. Hence, we run our algorithms LOGPARTITION and MODE with the decomposition scheme DeC(G, 3, $\Lambda$), $\Lambda \in \{3, 4, 5\}$. We consider two measures to evaluate performance: error in log Z, defined as $\frac{1}{n^2} |\log Z^{alg} - \log Z|$; and error in $H(\mathbf{x}^*)$, defined as $\frac{1}{n^2} |H(\mathbf{x}^{alg}) - H(\mathbf{x}^*)|$. We compare our algorithm, for error in log Z, with two recently very successful algorithms: the tree-reweighted algorithm (TRW) and the planar decomposition algorithm (PDC). The comparison is plotted in Figure 3, where n = 7 and the results are averaged over 40 trials. Figure 3(A) plots the error with respect to varying interaction, while Figure 3(B) plots the error with respect to varying field strength. Our algorithm essentially outperforms TRW for these values of $\Lambda$ and performs very competitively with respect to PDC.
The key feature of our algorithm is scalability. Specifically, the running time of our algorithm with a given parameter value $\Lambda$ scales linearly in n, while keeping the relative error bound exactly the
same. To explain this important feature, we plot the theoretically evaluated bounds on the error in log Z in Figure 4 with tags (A), (B) and (C). (Footnote 2: Though this setup has $\theta_i$, $\theta_{ij}$ taking negative values, it is equivalent to the setup considered in the paper, since the function values are bounded from below and an affine shift will make them non-negative without changing the distribution.) Note that the error bound plot is the same for n = 100 (A) and
n = 1000 (B). Clearly, the actual error is likely to be smaller than these theoretically plotted bounds. We note that these bounds depend only on the interaction strengths and not on the values of the field strengths (C).
Results similar to those of LOGPARTITION are expected from MODE. We plot the theoretically evaluated bounds on the error in MAP in Figure 4 with tags (A), (B) and (C). Again, the bound on the MAP relative error for a given $\Lambda$ parameter remains the same for all values of n, as shown in (A) for n = 100 and (B) for n = 1000. There is no change in the error bound with respect to the field strength (C).
Setup 2. Everything is exactly the same as in the above setup, with the only difference that the grid graph is replaced by the cris-cross graph, which is obtained by adding four extra neighboring edges per node (with the exception of boundary nodes). Figure 2 shows the cris-cross graph with n = 4 (on the right side). We again run the same algorithms as in the above setup on this graph. For the cris-cross graph, we obtained its graph decomposition from the decomposition of its grid sub-graph. Though the cris-cross graph is not planar, due to its structure it can be shown (proved) that the running time of our algorithm remains the same (in order) and the error bound becomes only 3 times weaker than that for the grid graph! We compute these theoretical error bounds for log Z and MAP, which are plotted in Figure 5. This figure is similar to Figure 4 for the grid graph. This clearly exhibits the generality of our algorithm even beyond minor-excluded graphs.
References
[1] J. Pearl, "Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference," San Francisco, CA: Morgan Kaufmann, 1988.
[2] J. Yedidia, W. Freeman and Y. Weiss, "Generalized Belief Propagation," Mitsubishi Elect. Res. Lab., TR-2000-26, 2000.
[3] M. J. Wainwright, T. Jaakkola and A. S. Willsky, "Tree-based reparameterization framework for analysis of sum-product and related algorithms," IEEE Trans. on Info. Theory, 2003.
[4] M. J. Wainwright, T. S. Jaakkola and A. S. Willsky, "MAP estimation via agreement on (hyper)trees: Message-passing and linear-programming approaches," IEEE Trans. on Info. Theory, 51(11), 2005.
[5] S. C. Tatikonda and M. I. Jordan, "Loopy Belief Propagation and Gibbs Measure," Uncertainty in Artificial Intelligence, 2002.
[6] M. Bayati, D. Shah and M. Sharma, "Maximum Weight Matching via Max-Product Belief Propagation," IEEE ISIT, 2005.
[7] V. Kolmogorov, "Convergent Tree-reweighted Message Passing for Energy Minimization," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006.
[8] V. Kolmogorov and M. Wainwright, "On optimality of tree-reweighted max-product message-passing," Uncertainty in Artificial Intelligence, 2005.
[9] A. Globerson and T. Jaakkola, "Bound on Partition function through Planar Graph Decomposition," NIPS, 2006.
[10] P. Klein, S. Plotkin and S. Rao, "Excluded minors, network decomposition, and multicommodity flow," ACM STOC, 1993.
[11] S. Rao, "Small distortion and volume preserving embeddings for Planar and Euclidian metrics," ACM SCG, 1999.
[Figure 2 drawings omitted: node-and-edge diagrams of the two graphs, labeled "Grid" and "Cris".]
Figure 2: Example of grid graph (left) and cris-cross graph (right) with n = 4.
[Figure 3 plots omitted: panel (1-A) grid, n = 7, error in log Z vs. interaction strength; panel (1-B) grid, n = 7, error in log Z vs. field strength; curves for TRW, PDC, and our algorithm with Lambda = 3, 4, 5.]
Figure 3: Comparison of TRW, PDC and our algorithm for the grid graph with n = 7 with respect to error in log Z. Our algorithm outperforms TRW and is competitive with respect to PDC.
[Figure 4 plots omitted: panels (2-A) grid n = 100 and (2-B) grid n = 1000 show the log Z error bound vs. interaction strength, panel (2-C) grid n = 1000 vs. field strength; panels (3-A)-(3-C) show the corresponding MAP error bounds; curves for Lambda = 5, 10, 20.]
Figure 4: The theoretically computable error bounds for log Z and MAP under our algorithm for the grid with n = 100 and n = 1000 under the varying-interaction and varying-field models. This clearly shows the scalability of our algorithm.
[Figure 5 plots omitted: panels (4-A) cris-cross n = 100 and (4-B), (4-C) cris-cross n = 1000 show the log Z error bound vs. interaction and field strength; panels (5-A)-(5-C) show the corresponding MAP error bounds; curves for Lambda = 5, 10, 20.]
Figure 5: The theoretically computable error bounds for log Z and MAP under our algorithm for the cris-cross graph with n = 100 and n = 1000 under the varying-interaction and varying-field models. This clearly shows the scalability of our algorithm and its robustness to graph structure.
| 3186 |@word trial:1 nd:2 scg:1 mitsubishi:1 decomposition:27 pick:1 euclidian:1 multicommodity:2 recursively:1 outperforms:2 assigning:1 partition:29 remove:3 designed:2 plot:5 update:1 intelligence:3 provides:7 characterization:1 node:10 simpler:1 along:1 become:1 prove:3 specialize:4 combine:1 theoretically:5 x0:4 expected:1 freeman:1 actual:1 bounded:4 notation:1 what:1 finding:2 guarantee:2 exactly:2 k2:1 producing:3 t1:1 local:3 path:1 merge:1 approximately:1 quantified:1 limited:1 globerson:2 procedure:1 area:1 maxx:1 matching:1 word:1 clever:1 equivalent:3 map:29 logpartition:2 starting:3 independently:3 simplicity:1 parti:1 reparameterization:1 n12:2 construction:1 exact:5 programming:6 us:5 agreement:1 cut:1 s1i:1 connected:11 removed:2 dynamic:4 depend:1 solving:1 bipartite:1 joint:1 various:1 represented:2 kolmogorov:4 describe:4 artificial:2 hyper:1 choosing:1 exhaustive:2 whose:1 heuristic:2 valued:2 plausible:2 say:6 distortion:1 gi:2 g1:2 transform:1 sequence:2 propose:1 interaction:14 maximal:1 product:3 neighboring:1 relevant:1 subgraph:1 intuitive:1 scalability:3 quantifiable:1 convergence:3 produce:14 object:1 help:1 ij:24 minor:40 strong:1 come:1 implies:2 quantify:2 tji:2 everything:1 explains:1 require:1 decompose:3 preliminary:1 isit:1 summation:1 hold:1 considered:1 ground:2 exp:3 k3:2 estimation:2 applicable:1 combinatorial:1 tatikonda:1 correctness:2 create:2 weighted:1 minimization:1 mit:4 clearly:5 og:15 varying:8 jaakkola:4 conjunction:1 corollary:3 burdensome:1 posteriori:1 inference:2 dependent:2 initially:1 provably:1 arg:1 special:1 marginal:2 field:13 np:1 intelligent:1 few:4 randomly:1 resulted:1 individual:1 replaced:1 interest:2 message:3 evaluation:1 implication:3 edge:20 tree:9 re:2 plotted:3 theoretical:1 minimal:2 modeling:1 rao:6 assignment:3 lattice:2 loopy:1 vertex:12 subset:1 uniform:1 successful:2 eec:1 plotkin:4 cris:13 synthetic:1 probabilistic:1 again:2 choose:3 positivity:1 tition:2 exclude:7 coding:1 depends:4 later:2 try:1 multiplicative:1 tion:1 lab:1 analyze:2 competitive:1 parallel:1 contribution:1 accuracy:3 kaufmann:1 identify:1 worth:2 randomness:2 simultaneous:1 explain:1 definition:6 energy:3 proved:1 popular:1 recall:1 knowledge:1 emerges:1 routine:1 trw:11 planar:17 wei:1 evaluated:2 though:2 generality:1 replacing:1 propagation:5 effect:2 true:1 hence:3 excluded:17 reweighted:3 rooted:1 elect:1 generalized:2 complete:2 l1:1 reasoning:1 image:1 wise:3 novel:1 recently:1 specialized:1 volume:1 s5:1 gibbs:1 tuning:1 rd:2 grid:17 mathematics:1 similarly:2 hp:1 ecomp:12 etc:1 add:2 recent:1 exclusion:1 scenario:1 sji:3 binary:4 arbitrarily:1 morgan:1 preserving:1 care:1 sharma:1 shortest:1 ii:1 cross:13 long:2 dept:2 concerning:1 mrf:14 basic:1 essentially:1 expectation:1 metric:1 iteration:2 dec:18 separately:1 ode:11 interval:1 else:1 appropriately:1 extra:1 unlike:1 induced:2 undirected:1 flow:3 jordan:1 counting:1 noting:2 embeddings:1 xj:3 criss:1 idea:2 computable:2 absent:1 shift:1 passing:3 useful:1 generally:1 s4:1 locally:1 diameter:1 zj:3 s3:1 per:1 klein:5 discrete:1 key:3 four:1 changing:1 breadth:2 vast:1 graph:74 excludes:9 relaxation:2 fraction:1 sum:1 run:3 uncertainty:2 place:1 family:4 bound:35 ki:1 convergent:1 strength:18 bp:3 tag:2 min:2 optimality:1 kyomin:1 poor:1 cleverly:1 remain:1 smaller:1 across:1 lp:2 s1:8 restricted:2 pr:5 computationally:1 remains:1 devavrat:2 discus:1 decomposing:3 operation:1 k5:1 competitively:1 yedidia:1 upto:2 generic:1 robustness:1 shah:2 existence:1 original:1 running:5 
build:1 pdc:8 g0:8 question:2 parametric:1 exhibit:1 distance:1 topic:1 provable:4 willsky:2 code:1 nc:3 setup:6 statement:1 stoc:1 gk:1 info:2 negative:3 stated:3 implementation:1 ski:1 perform:1 upper:5 markov:2 finite:4 immediate:2 maxk:2 precise:3 arbitrary:11 lb:11 introduced:1 pair:2 required:1 connection:1 established:1 pearl:1 nip:1 trans:2 beyond:1 pattern:1 max:4 explanation:1 belief:5 wainwright:4 recursion:1 representing:1 scheme:7 lij:4 kj:1 literature:1 removal:1 relative:2 loss:1 par:2 interesting:1 bayati:1 incurred:2 degree:13 incident:2 affine:1 sufficient:2 jung:1 keeping:1 side:2 weaker:1 understand:1 wide:1 characterizing:1 taking:2 boundary:1 depth:1 xn:1 author:1 made:4 san:1 transaction:1 sj:9 approximate:10 global:2 assumed:1 francisco:1 xi:9 search:2 iterative:1 sk:7 decomposes:1 ca:1 alg:1 vj:2 main:2 linearly:1 s2:1 x1:1 sub:2 explicit:1 exponential:4 third:1 theorem:5 removing:5 specific:3 adding:1 kr:15 execution:1 gap:1 likely:1 acm:2 bji:2 sized:1 hard:1 change:1 specifically:4 uniformly:2 denoising:1 lemma:11 called:2 total:1 experimental:1 exception:1 ub:11 evaluate:1 |
2,410 | 3,187 | A Spectral Regularization Framework for
Multi-Task Structure Learning
Andreas Argyriou
Department of Computer Science
University College London
Gower Street, London WC1E 6BT, UK
[email protected]
Charles A. Micchelli
Department of Mathematics and Statistics
SUNY Albany
1400 Washington Avenue
Albany, NY, 12222, USA
Massimiliano Pontil
Department of Computer Science
University College London
Gower Street, London WC1E 6BT, UK
[email protected]
Yiming Ying
Department of Engineering Mathematics
University of Bristol
University Walk, Bristol, BS8 1TR, UK
[email protected]
Abstract
Learning the common structure shared by a set of supervised tasks is an important
practical and theoretical problem. Knowledge of this structure may lead to better generalization performance on the tasks and may also facilitate learning new
tasks. We propose a framework for solving this problem, which is based on regularization with spectral functions of matrices. This class of regularization problems exhibits appealing computational properties and can be optimized efficiently
by an alternating minimization algorithm. In addition, we provide a necessary
and sufficient condition for convexity of the regularizer. We analyze concrete examples of the framework, which are equivalent to regularization with Lp matrix
norms. Experiments on two real data sets indicate that the algorithm scales well
with the number of tasks and improves on state of the art statistical performance.
1 Introduction
Recently, there has been renewed interest in the problem of multi-task learning; see [2, 4, 5, 14, 16, 19] and references therein. This problem is important in a variety of applications, ranging from conjoint analysis [12], to object detection in computer vision [18], to multiple microarray data set integration in computational biology [8], to mention just a few. A key objective in many multi-task learning algorithms is to implement mechanisms for learning the possible structure underlying the tasks. Finding this common structure is important because it allows pooling information across the tasks, a property which is particularly appealing when there are many tasks but only few data points per task. Moreover, knowledge of the common structure may facilitate learning new tasks (transfer learning); see [6] and references therein.
In this paper, we extend the formulation of [4], where the structure shared by the tasks is described
by a positive definite matrix. In Section 2, we propose a framework in which the task parameters and
the structure matrix are jointly computed by minimizing a regularization function. This function has
the following appealing property. When the structure matrix is fixed, the function decomposes across
the tasks, which can hence be learned independently with standard methods such as SVMs. When
the task parameters are fixed, the optimal structure matrix is a spectral function of the covariance of
the tasks and can often be explicitly computed. As we shall see, spectral functions are of particular
interest in this context because they lead to an efficient alternating minimization algorithm.
The contribution of this paper is threefold. First, in Section 3 we provide a necessary and sufficient
condition for convexity of the optimization problem. Second, in Section 4 we characterize the spectral functions which relate to Schatten Lp regularization and present the alternating minimization
algorithm. Third, in Section 5 we discuss the connection between our framework and the convex
optimization method for learning the kernel [11, 15], which leads to a much simpler proof of the
convexity in the kernel than the one given in [15]. Finally, in Section 6 we present experiments on
two real data sets. The experiments indicate that the alternating algorithm runs significantly faster
than gradient descent and that our method improves on state of the art statistical performance on
these data sets. They also highlight that our approach can be used for transfer learning.
2 Modelling Tasks' Structure
In this section, we introduce our multi-task learning framework. We denote by $S^d$ the set of $d \times d$ symmetric matrices, by $S^d_+$ ($S^d_{++}$) the subset of positive semidefinite (definite) ones, and by $O^d$ the set of $d \times d$ orthogonal matrices. For every positive integer $n$, we define $\mathbb{N}_n = \{1, \ldots, n\}$. We let $T$ be the number of tasks which we want to learn simultaneously. We assume for simplicity that each task $t \in \mathbb{N}_T$ is well described by a linear function defined, for every $x \in \mathbb{R}^d$, as $w_t^\top x$, where $w_t$ is a fixed vector of coefficients. For each task $t \in \mathbb{N}_T$, there are $m$ data examples $\{(x_{tj}, y_{tj}) : j \in \mathbb{N}_m\} \subseteq \mathbb{R}^d \times \mathbb{R}$ available. In practice, the number of examples per task may vary, but we have kept it constant for simplicity of notation.
Our goal is to learn the vectors $w_1, \ldots, w_T$, as well as the common structure underlying the tasks, from the data examples. In this paper we follow the formulation in [4], where the tasks' structure is summarized by a positive definite matrix $D$ which is linked to the covariance matrix between the tasks, $W W^\top$. Here, $W$ denotes the $d \times T$ matrix whose $t$-th column is given by the vector $w_t$ (we have assumed for simplicity that the mean task is zero). Specifically, we learn $W$ and $D$ by minimizing the function
$$\mathrm{Reg}(W, D) := \mathrm{Err}(W) + \gamma\, \mathrm{Penalty}(W, D), \qquad (2.1)$$
where $\gamma$ is a positive parameter which balances the importance of the error and the penalty. The former may be any convex function, bounded from below, evaluated at the values $w_t^\top x_{tj}$,
$t \in \mathbb{N}_T$, $j \in \mathbb{N}_m$. Typically, it will be the average error on the tasks, namely, $\mathrm{Err}(W) = \sum_{t \in \mathbb{N}_T} L_t(w_t)$, where $L_t(w_t) = \sum_{j \in \mathbb{N}_m} \ell(y_{tj}, w_t^\top x_{tj})$ and $\ell : \mathbb{R} \times \mathbb{R} \to [0, \infty)$ is a prescribed loss function (e.g. quadratic, SVM, logistic, etc.). We shall assume that the loss $\ell$ is convex in its second argument, which ensures that the function $\mathrm{Err}$ is also convex. The latter term favors the tasks sharing some common structure and is given by
$\mathrm{Penalty}(W, D) = \mathrm{tr}(F(D)\, W W^\top) = \sum_{t=1}^{T} w_t^\top F(D)\, w_t,$   (2.2)
where $F : S^d_{++} \to S^d_{++}$ is a prescribed spectral matrix function. This is to say that $F$ is induced by applying a function $f : (0, \infty) \to (0, \infty)$ to the eigenvalues of its argument. That is, for every $D \in S^d_{++}$ we write $D = U \Lambda U^\top$, where $U \in O^d$, $\Lambda = \mathrm{Diag}(\lambda_1, \ldots, \lambda_d)$, and define

$F(D) = U F(\Lambda) U^\top, \qquad F(\Lambda) = \mathrm{Diag}(f(\lambda_1), \ldots, f(\lambda_d)).$   (2.3)
In the rest of the paper, we will always use F to denote a spectral matrix function and f to denote
the associated real function, as above.
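To make the definition concrete, a spectral matrix function can be evaluated through an eigendecomposition. The following minimal NumPy sketch is our own illustration (the helper name spectral_apply is ours, not from the paper):

```python
import numpy as np

def spectral_apply(f, D):
    """Evaluate F(D) = U Diag(f(lambda_1), ..., f(lambda_d)) U^T
    for a symmetric positive definite D = U Diag(lambda) U^T."""
    lam, U = np.linalg.eigh(D)
    return (U * f(lam)) @ U.T   # scales column j of U by f(lam_j)

# Sanity check: f(x) = 1/x induces F(D) = D^{-1}.
D = np.array([[2.0, 0.5],
              [0.5, 1.0]])
assert np.allclose(spectral_apply(lambda x: 1.0 / x, D), np.linalg.inv(D))
```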
Minimization of the function $\mathrm{Reg}$ allows us to learn the tasks and, at the same time, a good representation for them, which is summarized by the eigenvectors and eigenvalues of the matrix $D$. Different choices of the function $f$ reflect different properties which we would like the tasks to share. In the special case that $f$ is a constant, the tasks are totally independent and the regularizer (2.2) is a sum of $T$ independent $L_2$ regularizers. In the case $f(\lambda) = \lambda^{-1}$, which is considered in [4], the regularizer favors a sparse representation in the sense that the tasks share a small common set of features. More generally, functions of the form $f(\lambda) = \lambda^{-\alpha}$, $\alpha \geq 0$, allow for combining shared features and task-specific features to some degree tuned by the exponent $\alpha$. Moreover, the regularizer (2.2) ensures that the optimal representation (optimal $D$) is a function of the tasks' covariance $W W^\top$.
Thus, we propose to solve the minimization problem

$\inf \big\{ \mathrm{Reg}(W, D) : W \in \mathbb{R}^{d \times T},\ D \in S^d_{++},\ \mathrm{tr}\, D \leq 1 \big\}$   (2.4)
for functions f belonging to an appropriate class. As we shall see in Section 4, the upper bound
on the trace of D in (2.4) prevents the infimum from being zero, which would lead to overfitting.
Moreover, even though the infimum above is not attained in general, the problem in W resulting
after partial minimization over D admits a minimizer.
Since the first term in (2.1) is independent of $D$, we can first optimize the second term with respect to $D$. That is, we can compute the infimum

$\Omega_f(W) := \inf \big\{ \mathrm{tr}(F(D)\, W W^\top) : D \in S^d_{++},\ \mathrm{tr}\, D \leq 1 \big\}.$   (2.5)
In this way we could end up with an optimization problem in W only. However, in general this
would be a complex matrix optimization problem. It may require sophisticated optimization tools
such as semidefinite programming, which may not scale well with the size of W . Fortunately, as
we shall show, problem (2.4) can be efficiently solved by alternately minimizing over D and W . In
particular, in Section 4 we shall show that $\Omega_f$ is a function of the singular values of $W$ only. Hence,
the only matrix operation required by alternate minimization is singular value decomposition and
the rest are merely vector problems.
Finally, we note that the ideas above may be extended naturally to a reproducing kernel Hilbert space
setting [3].
3 Joint Convexity via Matrix Concave Functions
In this section, we address the issue of convexity of the regularization function (2.1). Our main result characterizes the class of spectral functions $F$ for which the term $w^\top F(D)\, w$ is jointly convex in $(w, D)$, which in turn implies that (2.4) is a convex optimization problem.
To illustrate our result, we require the matrix analytic concept of concavity; see, for example, [7]. We say that the real-valued function $g : (0, \infty) \to \mathbb{R}$ is matrix concave of order $d$ if

$\lambda G(A) + (1 - \lambda) G(B) \preceq G(\lambda A + (1 - \lambda) B) \qquad \forall\, A, B \in S^d_{++},\ \lambda \in [0, 1],$

where $G$ is defined as in (2.3). The notation $\preceq$ denotes the Loewner partial order on $S^d$: $C \preceq D$ if and only if $D - C$ is positive semidefinite. If $g$ is a matrix concave function of order $d$ for any $d \in \mathbb{N}$, we simply say that $g$ is matrix concave. We also say that $g$ is matrix convex (of order $d$) if $-g$ is matrix concave (of order $d$). Clearly, matrix concavity implies matrix concavity of smaller orders (and hence standard concavity).
Theorem 3.1. Let $F : S^d_{++} \to S^d_{++}$ be a spectral function. Then the function $\varphi : \mathbb{R}^d \times S^d_{++} \to [0, \infty)$ defined as $\varphi(w, D) = w^\top F(D)\, w$ is jointly convex if and only if $1/f$ is matrix concave of order $d$.

Proof. By definition, $\varphi$ is convex if and only if, for any $w_1, w_2 \in \mathbb{R}^d$, $D_1, D_2 \in S^d_{++}$ and $\lambda \in (0, 1)$, it holds that

$\varphi(\lambda w_1 + (1 - \lambda) w_2, \lambda D_1 + (1 - \lambda) D_2) \leq \lambda \varphi(w_1, D_1) + (1 - \lambda) \varphi(w_2, D_2).$

Let $C := F(\lambda D_1 + (1 - \lambda) D_2)$, $A := F(D_1)/\lambda$, $B := F(D_2)/(1 - \lambda)$, $w := \lambda w_1 + (1 - \lambda) w_2$ and $z := \lambda w_1$. Using this notation, the above inequality can be rewritten as

$w^\top C w \leq z^\top A z + (w - z)^\top B (w - z) \qquad \forall\, w, z \in \mathbb{R}^d.$   (3.1)

The right hand side in (3.1) is minimized for $z = (A + B)^{-1} B w$ and hence (3.1) is equivalent to

$w^\top C w \leq w^\top \Big( B(A+B)^{-1} A (A+B)^{-1} B + \big( I - (A+B)^{-1} B \big)^\top B \big( I - (A+B)^{-1} B \big) \Big) w \qquad \forall\, w \in \mathbb{R}^d,$

or to

$C \preceq B(A+B)^{-1} A (A+B)^{-1} B + \big( I - (A+B)^{-1} B \big)^\top B \big( I - (A+B)^{-1} B \big)$
$\phantom{C \preceq} = B(A+B)^{-1} A (A+B)^{-1} B + B - 2 B(A+B)^{-1} B + B(A+B)^{-1} B (A+B)^{-1} B$
$\phantom{C \preceq} = B - B(A+B)^{-1} B = (A^{-1} + B^{-1})^{-1},$

where the last equality follows from the matrix inversion lemma [10, Sec. 0.7]. The above inequality is identical to (see e.g. [10, Sec. 7.7])

$A^{-1} + B^{-1} \preceq C^{-1},$

or, using the initial notation,

$\lambda \big( F(D_1) \big)^{-1} + (1 - \lambda) \big( F(D_2) \big)^{-1} \preceq \big( F(\lambda D_1 + (1 - \lambda) D_2) \big)^{-1}.$

By definition, this inequality holds for any $D_1, D_2 \in S^d_{++}$, $\lambda \in (0, 1)$ if and only if $1/f$ is matrix concave of order $d$.
Examples of matrix concave functions on $(0, \infty)$ are $\log(x + 1)$ and the function $x^s$ for $s \in [0, 1]$; see [7] for other examples and theoretical results. We conclude with the remark that, whenever $1/f$ is matrix concave of order $d$, the function $\Omega_f$ in (2.5) is convex, because it is the partial infimum of a jointly convex function [9, Sec. IV.2.4].
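As a quick numerical illustration of Theorem 3.1 (a check we add here, not part of the paper), take $f(x) = 1/x$, so that $1/f(x) = x$ is trivially matrix concave and $\varphi(w, D) = w^\top D^{-1} w$ should be jointly convex:

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(w, D):
    # phi(w, D) = w^T D^{-1} w, i.e. F(D) = D^{-1}, f(x) = 1/x.
    return float(w @ np.linalg.solve(D, w))

def random_spd(d):
    A = rng.standard_normal((d, d))
    return A @ A.T + d * np.eye(d)

d = 4
for _ in range(1000):
    w1, w2 = rng.standard_normal(d), rng.standard_normal(d)
    D1, D2 = random_spd(d), random_spd(d)
    lam = rng.uniform()
    lhs = phi(lam * w1 + (1 - lam) * w2, lam * D1 + (1 - lam) * D2)
    rhs = lam * phi(w1, D1) + (1 - lam) * phi(w2, D2)
    assert lhs <= rhs + 1e-8   # the convexity inequality holds
```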
4 Regularization with Schatten $L_p$ Prenorms
4.1 Partial Minimization of the Penalty Term
In this section, we focus on the family of negative power functions $f$ and obtain that the function $\Omega_f$ in (2.5) relates to the Schatten $L_p$ prenorms. We start by showing that problem (2.5) reduces to a minimization problem in $\mathbb{R}^d$, by application of a useful matrix inequality. In the following, we let $B$ take the place of $W W^\top$ for brevity.
Lemma 4.1. Let $F : S^d \to S^d$ be a spectral function, $B \in S^d$ and $\lambda_i$, $i \in \mathbb{N}_d$, the eigenvalues of $B$. Then,

$\inf \{ \mathrm{tr}(F(D) B) : D \in S^d_{++},\ \mathrm{tr}\, D \leq 1 \} = \inf \Big\{ \sum_{i \in \mathbb{N}_d} f(\delta_i)\, \lambda_i : \delta_i > 0,\ i \in \mathbb{N}_d,\ \sum_{i \in \mathbb{N}_d} \delta_i \leq 1 \Big\}.$

Moreover, for the infimum on the left to be attained, $F(D)$ has to share a set of eigenvectors with $B$, so that the corresponding eigenvalues are in the reverse order as the $\lambda_i$.
Proof. We use an inequality of Von Neumann [13, Sec. H.1.h] to obtain, for all $X, Y \in S^d$, that

$\mathrm{tr}(X Y) \geq \sum_{i \in \mathbb{N}_d} \alpha_i \beta_i,$

where $\alpha_i$ and $\beta_i$ are the eigenvalues of $X$ and $Y$ in nonincreasing and nondecreasing order, respectively. The equality is attained whenever $X = U\,\mathrm{Diag}(\alpha)\, U^\top$, $Y = U\,\mathrm{Diag}(\beta)\, U^\top$ for some $U \in O^d$. Applying this inequality for $X = F(D)$, $Y = B$ and denoting $f(\delta_i) = \alpha_i$, $i \in \mathbb{N}_d$, the result follows.
Using this lemma, we can now derive the solution of problem (2.5) in the case that f is a negative
power function.
Proposition 4.2. Let $B \in S^d_+$ and $s \in (0, 1]$. Then we have that

$\big( \mathrm{tr}\, B^s \big)^{\frac{1}{s}} = \inf \big\{ \mathrm{tr}\big( D^{\frac{s-1}{s}} B \big) : D \in S^d_{++},\ \mathrm{tr}\, D \leq 1 \big\}.$

Moreover, if $B \in S^d_{++}$ the infimum is attained and the minimizer is given by $D = B^s / \mathrm{tr}\, B^s$.
Proof. By Lemma 4.1, it suffices to show the analogous statement for vectors, namely that

$\Big( \sum_{i \in \mathbb{N}_d} \lambda_i^s \Big)^{\frac{1}{s}} = \inf \Big\{ \sum_{i \in \mathbb{N}_d} \delta_i^{\frac{s-1}{s}} \lambda_i : \delta_i > 0,\ i \in \mathbb{N}_d,\ \sum_{i \in \mathbb{N}_d} \delta_i \leq 1 \Big\},$

where $\lambda_i \geq 0$, $i \in \mathbb{N}_d$. To this end, we apply Hölder's inequality with $p = \frac{1}{s}$ and $q = \frac{1}{1-s}$:

$\sum_{i \in \mathbb{N}_d} \lambda_i^s = \sum_{i \in \mathbb{N}_d} \big( \delta_i^{\frac{s-1}{s}} \lambda_i \big)^s\, \delta_i^{1-s} \leq \Big( \sum_{i \in \mathbb{N}_d} \delta_i^{\frac{s-1}{s}} \lambda_i \Big)^s \Big( \sum_{i \in \mathbb{N}_d} \delta_i \Big)^{1-s} \leq \Big( \sum_{i \in \mathbb{N}_d} \delta_i^{\frac{s-1}{s}} \lambda_i \Big)^s.$

When $\lambda_i > 0$, $i \in \mathbb{N}_d$, the equality is attained for $\delta_i = \lambda_i^s \big/ \sum_{j \in \mathbb{N}_d} \lambda_j^s$, $i \in \mathbb{N}_d$. To show that the inequality is sharp in all other cases, we replace $\lambda_i$ by $\lambda_{i,\varepsilon} := \lambda_i + \varepsilon$, $i \in \mathbb{N}_d$, $\varepsilon > 0$, define $\delta_{i,\varepsilon} = \lambda_{i,\varepsilon}^s \big/ \big( \sum_j \lambda_{j,\varepsilon}^s \big)$ and take the limits as $\varepsilon \to 0$.
The above result implies that the regularization problem (2.4) is conceptually equivalent to regularization with a Schatten $L_p$ prenorm of $W$, when the coupling function $f$ takes the form $f(x) = x^{1 - \frac{2}{p}}$ with $p \in (0, 2]$, $p = 2s$. The Schatten $L_p$ prenorm is the $L_p$ prenorm of the singular values of a matrix. In particular, trace norm regularization (see [1, 17]) corresponds to the case $p = 1$. We also note that generalization error bounds for Schatten $L_p$ norm regularization can be derived along the lines of [14].
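The closed form of Proposition 4.2 is easy to verify numerically. Below is a small check we add for illustration (the helper spd_power is ours):

```python
import numpy as np

def spd_power(M, a):
    """M^a for a symmetric positive definite matrix M."""
    lam, U = np.linalg.eigh(M)
    return (U * lam**a) @ U.T

rng = np.random.default_rng(0)
G = rng.standard_normal((5, 5))
B = G @ G.T + np.eye(5)        # a positive definite B
s = 0.5

D = spd_power(B, s) / np.trace(spd_power(B, s))     # claimed minimizer
value = np.trace(spd_power(D, (s - 1) / s) @ B)     # objective tr(D^{(s-1)/s} B)
assert np.isclose(value, np.trace(spd_power(B, s)) ** (1 / s))
```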
4.2 Learning Algorithm
Lemma 4.1 demonstrates that optimization problems such as (2.4) with spectral regularizers of the form (2.2) are computationally appealing, since they decompose into vector problems in $d$ variables along with a singular value decomposition of the matrix $W$. In particular, for the Schatten $L_p$ prenorm with $p \in (0, 2]$, the proof of Proposition 4.2 suggests a way to solve problem (2.4). We modify the penalty term (2.2) as

$\mathrm{Penalty}_\varepsilon(W, D) = \mathrm{tr}\big( F(D) (W W^\top + \varepsilon I) \big),$   (4.1)

where $\varepsilon > 0$, and let $\mathrm{Reg}_\varepsilon(W, D) = \mathrm{Err}(W) + \gamma\,\mathrm{Penalty}_\varepsilon(W, D)$ be the corresponding regularization function. By Proposition 4.2, for a fixed $W \in \mathbb{R}^{d \times T}$ there is a unique minimizer of $\mathrm{Penalty}_\varepsilon$ (under the constraints in (2.5)), given by the formula

$D_\varepsilon(W) = \dfrac{(W W^\top + \varepsilon I)^{\frac{p}{2}}}{\mathrm{tr}\,(W W^\top + \varepsilon I)^{\frac{p}{2}}}.$   (4.2)

Moreover, there exists a minimizer of problem (2.4), which is unique if $p \in (1, 2]$.
Therefore, we can solve problem (2.4) using an alternating minimization algorithm, which is an extension of the one presented in [4] for the special case $F(D) = D^{-1}$. Each iteration of the algorithm consists of two steps. In the first step, we keep $D$ fixed and minimize over $W$. This consists in solving the problem

$\min \Big\{ \sum_{t \in \mathbb{N}_T} L_t(w_t) + \gamma \sum_{t \in \mathbb{N}_T} w_t^\top F(D)\, w_t : W \in \mathbb{R}^{d \times T} \Big\}.$

This minimization can be carried out independently for each task, since the regularizer decouples when $D$ is fixed. Specifically, introducing new variables for $(F(D))^{\frac{1}{2}} w_t$ yields a standard $L_2$ regularization problem for each task with the same kernel $K(x, z) = x^\top (F(D))^{-1} z$, $x, z \in \mathbb{R}^d$. In other words, we simply learn the parameters $w_t$ (the columns of the matrix $W$) independently by a regularization method, for example by an SVM or ridge regression method, for which well-developed toolboxes exist. In the second step, we keep the matrix $W$ fixed and minimize over $D$ using equation (4.2).
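A minimal sketch of the resulting alternating scheme, for the square loss and the power family $f(x) = x^{1-2/p}$, might look as follows. This is our own illustration, not the authors' Matlab code; spd_power is as in the sketch above:

```python
import numpy as np

def spd_power(M, a):
    lam, U = np.linalg.eigh(M)
    return (U * lam**a) @ U.T

def alternating_mtl(X, Y, gamma=1.0, p=1.0, eps=1e-6, n_iter=30):
    """X: (T, m, d) task inputs, Y: (T, m) task outputs.
    Alternates the per-task ridge step with the closed-form
    structure update (4.2) for the Schatten L_p prenorm."""
    T, m, d = X.shape
    D = np.eye(d) / d                          # feasible start, tr D = 1
    W = np.zeros((d, T))
    for _ in range(n_iter):
        F = spd_power(D, 1.0 - 2.0 / p)        # F(D) for f(x) = x^{1-2/p}
        for t in range(T):                     # supervised step: T ridge fits
            A = X[t].T @ X[t] / m + gamma * F
            W[:, t] = np.linalg.solve(A, X[t].T @ Y[t] / m)
        M = spd_power(W @ W.T + eps * np.eye(d), p / 2.0)
        D = M / np.trace(M)                    # structure step, eq. (4.2)
    return W, D
```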
Space limitations prevent us from providing a convergence proof of the algorithm. We only note that, following the proof detailed in [3] for the case $p = 1$, one can show that the sequence produced by the algorithm converges to the unique minimizer of $\mathrm{Reg}_\varepsilon$ if $p \in [1, 2]$, or to a local minimizer if $p \in (0, 1)$. Moreover, by [3, Thm. 3], as $\varepsilon$ goes to zero the algorithm converges to a solution of problem (2.4), if $p \in [1, 2]$. In theory, an algorithm without the $\varepsilon$-perturbation does not converge to a minimizer, since the columns of $W$ and $D$ always remain in the initial column space. In practice, however, we have observed that even such an algorithm converges to an optimal solution, because of round-off effects.
5 Relation to Learning the Kernel
In this section, we discuss the connection between the multi-task framework (2.1)-(2.4) and the framework for learning the kernel; see [11, 15] and references therein. To this end, we define the kernel $K_{f(D)}(x, z) = x^\top (F(D))^{-1} z$, $x, z \in \mathbb{R}^d$, the set of kernels $\mathcal{K}_f = \{ K_{f(D)} : D \in S^d_{++},\ \mathrm{tr}\, D \leq 1 \}$ and, for every kernel $K$, the task kernel matrix $K_t = \big( K(x_{ti}, x_{tj}) : i, j \in \mathbb{N}_m \big)$, $t \in \mathbb{N}_T$. It is easy to prove, using Weyl's monotonicity theorem [10, Sec. 4.3] and [7, Thm. V.2.5], that the set $\mathcal{K}_f$ is convex if and only if $1/f$ is matrix concave. By the well-known representer theorem (see e.g. [11]), problem (2.4) is equivalent to minimizing the function

$\sum_{t \in \mathbb{N}_T} \Big( \sum_{i \in \mathbb{N}_m} \ell\big( y_{ti}, (K_t c_t)_i \big) + \gamma\, c_t^\top K_t c_t \Big)$   (5.1)
over $c_t \in \mathbb{R}^m$ (for $t \in \mathbb{N}_T$) and $K \in \mathcal{K}_f$. It is apparent that the function (5.1) is not jointly convex in the $c_t$ and $K$. However, minimizing each term over the vector $c_t$ gives a convex function of $K$.
Proposition 5.1. Let $\mathcal{K}$ be the set of all reproducing kernels on $\mathbb{R}^d$. If $\ell(y, \cdot)$ is convex for any $y \in \mathbb{R}$, then the function $E_t : \mathcal{K} \to [0, \infty)$ defined for every $K \in \mathcal{K}$ as

$E_t(K) = \min \Big\{ \sum_{i \in \mathbb{N}_m} \ell\big( y_{ti}, (K_t c)_i \big) + \gamma\, c^\top K_t c : c \in \mathbb{R}^m \Big\}$

is convex.
Proof. Without loss of generality, we can assume, as in [15], that the $K_t$ are invertible for all $t \in \mathbb{N}_T$. For every $a \in \mathbb{R}^m$ and $K \in \mathcal{K}$, we define the function $G_t(a, K) = \sum_{i \in \mathbb{N}_m} \ell(y_{ti}, a_i) + \gamma\, a^\top K_t^{-1} a$, which is jointly convex by Theorem 3.1. Clearly, $E_t(K) = \min \{ G_t(a, K) : a \in \mathbb{R}^m \}$. Recalling that the partial minimum of a jointly convex function is convex [9, Sec. IV.2.4], we obtain the convexity of $E_t$.
The fact that the function $E_t$ is convex has already been proved in [15], using minimax theorems and Fenchel duality. Here, we were able to simplify the proof of this result by appealing to the joint convexity property stated in Theorem 3.1.
6 Experiments
In this section, we first report a comparison of the computational cost between the alternating minimization algorithm and the gradient descent algorithm. We then study how performance varies for
different Lp regularizers, compare our approach with other multi-task learning methods and report
experiments on transfer learning.
We used two data sets in our experiments. The first one is the computer survey data from [12]. It
was taken from a survey of 180 persons who rated the likelihood of purchasing one of 20 different
personal computers. Here the persons correspond to tasks and the computer models to examples.
The input represents 13 different computer characteristics (price, CPU, RAM etc.) while the output
is an integer rating on the scale 0 to 10. Following [12], we used the first 8 examples per task as the
training data and the last 4 examples per task as the test data. We measured the root mean square
error of the predicted from the actual ratings for the test data, averaged across people.
The second data set is the school data set from the Inner London Education Authority (see
http://www.cmm.bristol.ac.uk/learning-training/multilevel-m-support/datasets.shtml). It consists of
examination scores of 15362 students from 139 secondary schools in London. Thus, there are 139
tasks, corresponding to predicting student performance in each school. The input consists of the year
of the examination, 4 school-specific and 3 student-specific attributes. Following [5], we replaced
categorical attributes with binary ones, to obtain 27 attributes in total. We generated the training and
test sets by 10 random splits of the data, so that 75% of the examples from each school (task) belong
to the training set and 25% to the test set. Here, in order to compare our results with those in [5], we
used the measure of percentage explained variance, which is defined as one minus the mean squared
test error over the variance of the test data and indicates the percentage of variance explained by
the prediction model. Finally, we note that in both data sets we used the square loss, tuned the
regularization parameter $\gamma$ with 5-fold cross-validation and added an additional input component
accounting for the bias term.
In the first experiment, we study the computational cost of the alternating minimization algorithm against the gradient descent algorithm, both implemented in Matlab, for the Schatten $L_{1.5}$ norm. The left plot in Figure 1 shows the value of the objective function (2.1) versus the number of iterations, on the computer survey data. The curves for different learning rates $\eta$ are shown, whereas for rates greater than 0.05 gradient descent diverges. The alternating algorithm curve for $\varepsilon = 10^{-16}$ is also shown. We further note that, for both data sets, our algorithm typically needed fewer than 30 iterations to converge. The right plot depicts the CPU time (in seconds) needed to reach a value of the objective function which is less than $10^{-5}$ away from the minimum, versus the number of tasks. It is clear
that our algorithm is at least an order of magnitude faster than gradient descent with the optimal
learning rate and scales better with the number of tasks. We note that the computational cost of our
method is mainly due to the T ridge regressions in the supervised step (learning W ) and the singular
value decomposition in the unsupervised step (learning D). A singular value decomposition is also
needed in gradient descent, for computing the gradient of the Schatten Lp norm. We have observed
that the cost per iteration is smaller for gradient descent but the number of iterations is at least an
order of magnitude larger, leading to the large difference in time cost.
[Figure 1 plots: left, objective value Reg versus iterations for gradient descent with rates eta = 0.05, 0.03, 0.01 and for the alternating algorithm; right, CPU time in seconds versus the number of tasks for the alternating algorithm and gradient descent with eta = 0.05.]
Figure 1: Comparison between the alternating algorithm and the gradient descent algorithm.
[Figure 2 plots: left, RMSE versus p on the computer survey data; right, explained variance versus p on the school data.]
Figure 2: Performance versus p for the computer survey data (left) and the school data (right).
Table 1: Comparison of different methods on the computer survey data (left) and school data (right).

Computer survey data:            School data:
Method                   RMSE    Method                   Explained variance
p = 2                    3.88    p = 2                    23.5 ± 2.0%
p = 1                    1.93    p = 1                    26.7 ± 2.0%
p = 0.7                  1.86    Hierarchical Bayes [5]   29.5 ± 0.4%
Hierarchical Bayes [12]  1.90
In the second experiment we study the statistical performance of our method as the spectral function
changes. Specifically, we choose functions giving rise to Schatten Lp prenorms, as discussed in
Section 4. The results, shown in Figure 2, indicate that the trace norm is the best norm on these
data sets. However, on the computer survey data a value of p less than one gives the best result
overall. From this we speculate that our method can even approximate well the solutions of certain
non-convex problems. In contrast, on the school data the trace norm gives almost the best result.
Next, in Table 1, we compare our algorithm with the hierarchical Bayes (HB) method described in
[5, 12]. This method also learns a matrix D using Bayesian inference. Our method improves on
the HB method on the computer survey data and is competitive on the school data (even though our
regularizer is simpler than HB and the data splits of [5] are not available).
Finally, we present preliminary results on transfer learning. On the computer survey data, we trained
our method with p = 1 on 150 randomly selected tasks and then used the learned structure matrix D
for training 30 ridge regressions on the remaining tasks. We obtained an RMSE of 1.98 on these 30
'new' tasks, which is not much worse than an RMSE of 1.88 on the 150 tasks. In comparison, when using the raw data ($D = I/d$) on the 30 tasks we obtained an RMSE of 3.83. A similar experiment was
performed on the school data, first training on a random subset of 110 schools and then transferring
D to the remaining 29 schools. We obtained an explained variance of 19.2% on the new tasks. This
was worse than the explained variance of 24.8% on the 110 tasks but still better than the explained
variance of 13.9% with the raw representation.
7 Conclusion
We have presented a spectral regularization framework for learning the structure shared by many supervised tasks. This structure is summarized by a positive definite matrix which is a spectral function of the tasks' covariance matrix. The framework is appealing both theoretically and practically. Theoretically, it brings to bear the rich class of spectral functions, which is well studied in matrix analysis. Practically, we have argued, via the concrete example of negative power spectral functions, that the tasks' parameters and the structure matrix can be efficiently computed using an alternating minimization algorithm, improving upon state-of-the-art statistical performance on two real data sets. A natural question is to which extent the framework can be generalized to allow for more complex task-sharing mechanisms, in which the structure parameters depend on higher-order statistical properties of the tasks.
Acknowledgements
This work was supported by EPSRC Grant EP/D052807/1, NSF Grant DMS 0712827 and by the
IST Programme of the European Commission, PASCAL Network of Excellence IST-2002-506778.
References
[1] J. Abernethy, F. Bach, T. Evgeniou, and J-P. Vert. Low-rank matrix factorization with attributes. Technical Report N24/06/MM, Ecole des Mines de Paris, 2006.
[2] R. K. Ando and T. Zhang. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research, 6:1817-1853, 2005.
[3] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Machine Learning, 2007. In press.
[4] A. Argyriou, T. Evgeniou, and M. Pontil. Multi-task feature learning. In Advances in Neural Information Processing Systems 19, pages 41-48. 2007.
[5] B. Bakker and T. Heskes. Task clustering and gating for Bayesian multi-task learning. Journal of Machine Learning Research, 4:83-99, 2003.
[6] J. Baxter. A model for inductive bias learning. J. of Artificial Intelligence Research, 12:149-198, 2000.
[7] R. Bhatia. Matrix Analysis. Graduate Texts in Mathematics. Springer, 1997.
[8] R. Chari, W.W. Lockwood, B.P. Coe, et al. SIGMA: a system for integrative genomic microarray analysis of cancer genomes. BMC Genomics, 7:324, 2006.
[9] J.-B. Hiriart-Urruty and C. Lemaréchal. Convex Analysis and Minimization Algorithms. Springer, 1996.
[10] R. A. Horn and C. R. Johnson. Matrix Analysis. Cambridge University Press, 1985.
[11] G.R.G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M.I. Jordan. Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research, 5:27-72, 2005.
[12] P. J. Lenk, W. S. DeSarbo, P. E. Green, and M. R. Young. Hierarchical Bayes conjoint analysis: recovery of partworth heterogeneity from reduced experimental designs. Marketing Science, 15(2):173-191, 1996.
[13] A. W. Marshall and I. Olkin. Inequalities: Theory of Majorization and its Applications. Academic Press, 1979.
[14] A. Maurer. Bounds for linear multi-task learning. J. of Machine Learning Research, 7:117-139, 2006.
[15] C.A. Micchelli and M. Pontil. Learning the kernel function via regularization. Journal of Machine Learning Research, 6:1099-1125, 2005.
[16] R. Raina, A. Y. Ng, and D. Koller. Constructing informative priors using transfer learning. In Proceedings of the 23rd International Conference on Machine Learning, 2006.
[17] N. Srebro, J. D. M. Rennie, and T. S. Jaakkola. Maximum-margin matrix factorization. In Advances in Neural Information Processing Systems 17, pages 1329-1336. 2005.
[18] A. Torralba, K. P. Murphy, and W. T. Freeman. Sharing features: efficient boosting procedures for multiclass object detection. In Proc. of Conf. on Computer Vision and Pattern Recognition, 2:762-769, 2004.
[19] J. Zhang, Z. Ghahramani, and Y. Yang. Learning multiple related tasks using latent independent component analysis. In Advances in Neural Information Processing Systems 18, pages 1585-1592. 2006.
Catching Change-points with Lasso
Zaid Harchaoui, Céline Lévy-Leduc
LTCI, TELECOM ParisTech and CNRS
37/39 Rue Dareau, 75014 Paris, France
{zharchao,levyledu}@enst.fr
Abstract
We propose a new approach for dealing with the estimation of the location of change-points in one-dimensional piecewise constant signals observed in white noise. Our approach consists in reframing this task as a variable selection problem. We use a penalized least-squares criterion with an $\ell_1$-type penalty for this purpose. We prove some theoretical results on the estimated change-points and on the underlying estimated piecewise constant function. Then, we explain how to implement this method in practice by combining the LAR algorithm and a reduced version of the dynamic programming algorithm, and we apply it to synthetic and real data.
1 Introduction
Change-point detection tasks are pervasive in various fields, ranging from audio [10] to EEG segmentation [5]. The goal is to partition a signal into several homogeneous segments of variable durations, in which some quantity remains approximately constant over time. This issue was addressed in a large literature (see [20], [11]), where the problem was tackled both from an online (sequential) [1] and an off-line (retrospective) [5] point of view. Most off-line approaches rely on a Dynamic Programming algorithm (DP), allowing one to retrieve $K$ change-points within $n$ observations of a signal with a complexity of $O(K n^2)$ in time [11]. Such a cost deters practitioners from applying these methods to large datasets. Moreover, one often observes sub-optimal behavior of the raw DP algorithm on real datasets.
We suggest here to slightly depart from this line of research, by focusing on a reformulation of
change-point estimation in a variable selection framework. Then, estimating change-point locations off-line turns into performing variable selection on dummy variables representing all possible
change-point locations. This allows us to take advantage of the latest theoretical [23], [3] and practical [7] advances in regression with Lasso penalty. Indeed, Lasso provides us with a very efficient
method for selecting potential change-point locations. This selection is then refined by using the DP
algorithm to estimate the change-point locations.
Let us outline the paper. In Section 2, we first describe our theoretical reformulation of off-line change-point estimation as regression with a Lasso penalty. Then, we show that the estimated magnitudes of the jumps are close in mean, in a sense made precise below, to the true magnitudes. We also give a non-asymptotic inequality upper-bounding the $\ell_2$-loss between the true underlying piecewise constant function and the estimated one. We describe our algorithm in Section 3. In Section 4, we discuss related works. Finally, we provide experimental evidence of the relevance of our approach.
2 Theoretical approach
2.1 Framework
We describe, in this section, how off-line change-point estimation can be cast as a variable selection
problem. Off-line estimation of change-point locations within a signal $(Y_t)$ consists in estimating the $\tau_k^\star$'s in the following model:

$Y_t = \mu_k^\star + \varepsilon_t, \quad t = 1, \ldots, n, \quad \text{such that } \tau_{k-1}^\star + 1 \leq t \leq \tau_k^\star,\ 1 \leq k \leq K^\star, \text{ with } \tau_0^\star = 0,$   (1)
where the $\varepsilon_t$ are i.i.d. zero-mean random variables with finite variance. This problem can be reformulated as follows. Let us consider

$Y_n = X_n \beta^n + \varepsilon_n,$   (2)

where $Y_n$ is an $n \times 1$ vector of observations, $X_n$ is an $n \times n$ lower triangular matrix with nonzero elements equal to one, and $\varepsilon_n = (\varepsilon_{n1}, \ldots, \varepsilon_{nn})'$ is a zero-mean random vector such that the $\varepsilon_{nj}$'s are i.i.d. with finite variance. As for $\beta^n$, it is an $n \times 1$ vector having all its components equal to zero except those corresponding to the change-point instants. The above multiple change-point estimation problem (1) can thus be tackled as a variable selection one:
$\min_{\beta}\ \| Y_n - X_n \beta \|_n^2 \quad \text{subject to} \quad \| \beta \|_1 \leq s,$   (3)

where $\|u\|_1$ and $\|u\|_n$ are defined for a vector $u = (u_1, \ldots, u_n) \in \mathbb{R}^n$ by $\|u\|_1 = \sum_{j=1}^{n} |u_j|$ and $\|u\|_n^2 = n^{-1} \sum_{j=1}^{n} u_j^2$, respectively. Indeed, the above formulation amounts to minimizing the following counterpart objective in model (1):
$\min_{u_1, \ldots, u_n}\ \frac{1}{n} \sum_{t=1}^{n} (Y_t - u_t)^2 \quad \text{subject to} \quad \sum_{t=1}^{n-1} |u_{t+1} - u_t| \leq s,$   (4)
which consists in imposing an $\ell_1$-constraint on the magnitude of the jumps. The underpinning insight is the sparsity-enforcing property of the $\ell_1$-constraint, which is expected to give a sparse vector whose non-zero components would match those of $\beta^n$ and thus the change-point locations. It is related to the popular Least Absolute Shrinkage and Selection Operator (LASSO) in least-squares regression of [21], used for efficient variable selection.
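For concreteness, the design matrix $X_n$ and the change of variables behind (3)-(4) take a few lines of NumPy. This is our own illustration (indices are 0-based here, unlike the 1-based convention in the text); the jumps 5, -3, 4, -2 at positions 30, 50, 70, 90 anticipate the toy example of Section 3:

```python
import numpy as np

def changepoint_design(n):
    """Lower-triangular matrix of ones: (X_n beta)_t = sum_{j <= t} beta_j,
    so beta_j is the jump of the reconstructed signal at time j."""
    return np.tril(np.ones((n, n)))

beta = np.zeros(100)
beta[[30, 50, 70, 90]] = [5.0, -3.0, 4.0, -2.0]
signal = changepoint_design(100) @ beta
assert np.allclose(signal, np.cumsum(beta))   # a piecewise constant signal
```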
In the next section, we provide two results supporting the use of the formulation (3) for off-line multiple change-point estimation. We show that the estimates of the jumps minimizing (3) are consistent in mean, and we provide a non-asymptotic upper bound for the $\ell_2$ loss between the estimated piecewise constant function and the true one. This inequality shows that, at a rate made precise below, the estimated piecewise constant function tends to the true piecewise constant function with probability tending to one.
2.2 Main results
In this section, we shall study the properties of the solutions of the problem (3), defined by

$\hat{\beta}^n(\lambda) = \arg\min_{\beta} \big\{ \| Y_n - X_n \beta \|_n^2 + \lambda \| \beta \|_1 \big\}.$   (5)
Let us now introduce the notation $\mathrm{sign}$: it maps a positive entry to 1, a negative entry to -1 and a null entry to zero. Let

$A = \{ k : \beta_k^n \neq 0 \} \quad \text{and} \quad \bar{A} = \{1, \ldots, n\} \setminus A,$   (6)

and let the covariance matrix $C^n$ be defined by

$C^n = n^{-1} X_n' X_n.$   (7)
In a general regression framework, [18] recall that, with probability tending to one, $\hat{\beta}^n(\lambda)$ and $\beta^n$ have the same sign for a well-chosen $\lambda$ only if the following condition holds element-wise:

$\big| C^n_{\bar{A}A} \big( C^n_{AA} \big)^{-1} \mathrm{sign}(\beta^n_A) \big| < 1,$   (8)

where $C^n_{IJ}$ is the sub-matrix of $C^n$ obtained by keeping the rows with index in the set $I$ and the columns with index in $J$. The vector $\beta^n_A$ is defined by $\beta^n_A = (\beta^n_k)_{k \in A}$. The condition (8) is not fulfilled in the
change-point framework implying that we cannot have a perfect estimation of the change-points as
it is already known, see [13]. But, following [18] and [3], we can prove some consistency results,
see Propositions 1 and 2 below.
In the following, we shall assume that the number of break points is equal to $K^\star$.
The following proposition ensures that for a large enough value of n the estimated change-point
locations are close to the true change-points.
Proposition 1. Assume that the observations $(Y_n)$ are given by (2) and that the $\varepsilon_{nj}$'s are centered. If $\lambda = \lambda_n$ is such that $\lambda_n \sqrt{n} \to 0$ as $n$ tends to infinity, then

$\| \mathbb{E}(\hat{\beta}^n(\lambda_n)) - \beta^n \|_n \to 0.$
Proof. We shall follow the proof of Theorem 1 in [18]. For this, we denote by $\zeta^n(\lambda)$ the estimator $\hat{\beta}^n(\lambda)$ under the absence of noise and by $\delta_n(\lambda)$ the bias associated with the Lasso estimator: $\delta_n(\lambda) = \zeta^n(\lambda) - \beta^n$. For notational simplicity, we shall write $\delta$ instead of $\delta_n(\lambda)$. Note that $\delta$ satisfies the following minimization: $\delta = \arg\min_{\delta \in \mathbb{R}^n} f(\delta)$, where

$f(\delta) = \delta' C^n \delta + \lambda \sum_{k \in A} |\beta^n_k + \delta_k| + \lambda \sum_{k \in \bar{A}} |\delta_k|.$

Since $f(\delta) \leq f(0)$, we get

$\delta' C^n \delta + \lambda \sum_{k \in A} |\beta^n_k + \delta_k| + \lambda \sum_{k \in \bar{A}} |\delta_k| \leq \lambda \sum_{k \in A} |\beta^n_k|.$

We thus obtain, using the Cauchy-Schwarz inequality, the following upper bound:

$\delta' C^n \delta \leq \lambda \sum_{k \in A} |\delta_k| \leq \lambda \sqrt{K^\star} \Big( \sum_{k=1}^{n} |\delta_k|^2 \Big)^{1/2}.$

Using that $\delta' C^n \delta \geq n^{-1} \sum_{k=1}^{n} |\delta_k|^2$, we obtain $\|\delta\|_n \leq \lambda \sqrt{n K^\star}$.
The following proposition ensures, thanks to a non-asymptotic result, that the estimated underlying piecewise constant function is close to the true one.
Proposition 2. Assume that the observations $(Y_n)$ are given by (2) and that the $\varepsilon_{nj}$'s are centered i.i.d. Gaussian random variables with variance $\sigma^2 > 0$. Assume also that the $(\beta^n_k)_{k \in A}$ belong to $(\beta_{\min}, \beta_{\max})$, where $\beta_{\min} > 0$. For all $n \geq 1$ and $A > \sqrt{2}$, with probability larger than $1 - n^{1 - A^2/2}$, if $\lambda_n = A \sigma \sqrt{\log n / n}$,

$\| X_n \big( \hat{\beta}^n(\lambda_n) - \beta^n \big) \|_n^2 \leq 2 A \sigma \beta_{\max} K^\star \sqrt{\frac{\log n}{n}}.$
Proof. By definition of $\hat{\beta}^n(\lambda)$ in (5) as the minimizer of a criterion, we have

$\| Y_n - X_n \hat{\beta}^n(\lambda) \|_n^2 + \lambda \| \hat{\beta}^n(\lambda) \|_1 \leq \| Y_n - X_n \beta^n \|_n^2 + \lambda \| \beta^n \|_1.$

Using (2), we get

$\| X_n ( \beta^n - \hat{\beta}^n(\lambda) ) \|_n^2 + \frac{2}{n} ( \beta^n - \hat{\beta}^n(\lambda) )' X_n' \varepsilon_n + \lambda \sum_{j=1}^{n} | \hat{\beta}^n_j(\lambda) | \leq \lambda \sum_{j=1}^{n} | \beta^n_j |.$

Thus,

$\| X_n ( \beta^n - \hat{\beta}^n(\lambda) ) \|_n^2 \leq \frac{2}{n} ( \hat{\beta}^n(\lambda) - \beta^n )' X_n' \varepsilon_n + \lambda \sum_{j \in A} \big( |\beta^n_j| - |\hat{\beta}^n_j(\lambda)| \big) - \lambda \sum_{j \in \bar{A}} | \hat{\beta}^n_j(\lambda) |.$

Observe that

$\frac{2}{n} ( \hat{\beta}^n(\lambda) - \beta^n )' X_n' \varepsilon_n = 2 \sum_{j=1}^{n} ( \hat{\beta}^n_j(\lambda) - \beta^n_j ) \Big( \frac{1}{n} \sum_{i=j}^{n} \varepsilon_{ni} \Big).$

Let us define the event $E = \bigcap_{j=1}^{n} \big\{ \big| n^{-1} \sum_{i=j}^{n} \varepsilon_{ni} \big| \leq \lambda \big\}$. Then, using the fact that the $\varepsilon_{ni}$'s are i.i.d. zero-mean Gaussian random variables, we obtain

$\mathbb{P}(\bar{E}) \leq \sum_{j=1}^{n} \mathbb{P}\Big( \Big| n^{-1} \sum_{i=j}^{n} \varepsilon_{ni} \Big| > \lambda \Big) \leq \sum_{j=1}^{n} \exp\Big( - \frac{n^2 \lambda^2}{2 \sigma^2 (n - j + 1)} \Big).$

Thus, if $\lambda = \lambda_n = A \sigma \sqrt{\log n / n}$,

$\mathbb{P}(\bar{E}) \leq n^{1 - A^2/2}.$

With probability larger than $1 - n^{1 - A^2/2}$, we get

$\| X_n ( \beta^n - \hat{\beta}^n(\lambda) ) \|_n^2 \leq \lambda_n \sum_{j=1}^{n} | \hat{\beta}^n_j(\lambda) - \beta^n_j | + \lambda_n \sum_{j \in A} \big( |\beta^n_j| - |\hat{\beta}^n_j| \big) - \lambda_n \sum_{j \in \bar{A}} | \hat{\beta}^n_j |.$

We thus obtain, with probability larger than $1 - n^{1 - A^2/2}$, the following upper bound:

$\| X_n ( \beta^n - \hat{\beta}^n(\lambda) ) \|_n^2 \leq 2 \lambda_n \sum_{j \in A} | \beta^n_j | = 2 A \sigma \sqrt{\frac{\log n}{n}} \sum_{j \in A} | \beta^n_j | \leq 2 A \sigma \beta_{\max} K^\star \sqrt{\frac{\log n}{n}}.$
3 Practical approach
The previous results need to be efficiently implemented to cope with finite datasets. Our algorithm,
called Cachalot (CAtching CHAnge-points with LassO), can be split into the following three steps
described hereafter.
Estimation with a Lasso penalty. We compute the first $K_{\max}$ non-null coefficients $\hat{\beta}_{\tau_1}, \ldots, \hat{\beta}_{\tau_{K_{\max}}}$ on the regularization path of the LASSO problem (3). The LAR/LASSO algorithm, as described in [7], provides an efficient way to compute the entire regularization path of the LASSO problem. Since $\sum_j |\beta_j| \leq s$ is a sparsity-enforcing constraint, the set $\{ j : \hat{\beta}_j \neq 0 \} = \{ \tau_j \}$ becomes larger as we run through the regularization path. We shall denote by $S$ the set of $K_{\max}$ selected variables:

$S = \{ \tau_1, \ldots, \tau_{K_{\max}} \}.$   (9)

The computational complexity of the $K_{\max}$-long regularization path of LASSO solutions is $O(K_{\max}^3 + K_{\max}^2 n)$. Most of the time, we observe that the Lasso effectively catches the true change-points, but also irrelevant change-points in the vicinity of the true ones. Therefore, we propose to refine the set of change-points caught by the Lasso by performing a post-selection.
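One possible implementation of this first step uses scikit-learn's LARS path. The sketch below is ours (the function name lasso_candidates is hypothetical), and it assumes that variables rarely drop out of the active set, so that the first entries of the active list approximate the first variables entering the path:

```python
import numpy as np
from sklearn.linear_model import lars_path

def lasso_candidates(y, k_max):
    """Return (up to) the first k_max variables entering the
    LASSO regularization path on the cumulative-sum design."""
    n = len(y)
    X = np.tril(np.ones((n, n)))
    # 'active' lists the active variables in their order of entry.
    _, active, _ = lars_path(X, y, method="lasso", max_iter=8 * k_max)
    return sorted(active[:k_max])
```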
Reduced Dynamic Programming algorithm. One can consider several strategies to remove irrelevant change-points from the ones retrieved by the Lasso. Among them, since in applications one is usually only interested in change-point estimation up to a given accuracy, we could launch the Lasso on a subsample of the signal. Here, we suggest performing post-selection with the standard Dynamic Programming algorithm (DP), thoroughly described in [11] (Chapter 12, p. 450), but on the reduced set $S$ instead of $\{1, \ldots, n\}$. This algorithm allows one to efficiently minimize the following objective for each $K$ in $\{1, \ldots, K_{\max}\}$:

$J(K) = \min_{\substack{\tau_1 < \cdots < \tau_K \\ \tau_1, \ldots, \tau_K \in S}}\ \sum_{k=1}^{K} \sum_{i = \tau_{k-1} + 1}^{\tau_k} ( Y_i - \bar{\mu}_k )^2,$   (10)

where $\bar{\mu}_k$ is the empirical mean of $Y$ on the $k$-th segment, $S$ being defined in (9); it outputs, for each $K$, the corresponding subset of change-points $(\hat{\tau}_1, \ldots, \hat{\tau}_K)$. The DP algorithm has a computational complexity of $O(K_{\max} n^2)$ if we look for at most $K_{\max}$ change-points within the signal. Here, our reduced DP calculations (rDP) scale as $O(K_{\max} \cdot K_{\max}^2)$, where $K_{\max}$ is the maximum number of change-points/variables selected by the LAR/LASSO algorithm. Since typically $K_{\max} \ll n$, our method thus provides a reduction of the computational burden associated with the classical change-point detection approach, which consists in running the DP algorithm over all the $n$ observations.
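A compact sketch of the reduced DP over the candidate set S, together with the selection rule described in the next paragraph, might look as follows. This is again our own illustration, assuming S is a subset of {1, ..., n-1} and k_max does not exceed |S|:

```python
import numpy as np

def reduced_dp(y, S, k_max):
    """Minimize (10) for K = 0..k_max with change-points restricted to S.
    Returns the risks J and a function recovering the change-points."""
    y = np.asarray(y, dtype=float)
    ends = sorted(S) + [len(y)]                 # admissible segment ends
    c1 = np.concatenate([[0.0], np.cumsum(y)])
    c2 = np.concatenate([[0.0], np.cumsum(y**2)])

    def sse(i, j):                              # best constant fit on y[i:j]
        return c2[j] - c2[i] - (c1[j] - c1[i]) ** 2 / (j - i)

    p = len(ends)
    D = np.full((k_max + 2, p), np.inf)         # D[k, j]: k segments to ends[j]
    back = np.zeros((k_max + 2, p), dtype=int)
    for j in range(p):
        D[1, j] = sse(0, ends[j])
    for k in range(2, k_max + 2):
        for j in range(k - 1, p):
            costs = [D[k - 1, i] + sse(ends[i], ends[j]) for i in range(j)]
            back[k, j] = int(np.argmin(costs))
            D[k, j] = costs[back[k, j]]

    J = D[1:k_max + 2, p - 1]                   # J[K]: best risk, K change-points

    def changepoints(K):                        # trace back the K boundaries
        cps, j = [], p - 1
        for k in range(K + 1, 1, -1):
            j = back[k, j]
            cps.append(ends[j])
        return sorted(cps)

    return J, changepoints

def select_k(J, nu=0.05):
    """Smallest K with J(K+1)/J(K) >= 1 - nu."""
    rho = J[1:] / J[:-1]
    hits = np.where(rho >= 1 - nu)[0]
    return int(hits[0]) if hits.size else len(J) - 1
```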
Selecting the number of change-points. The point is now to select the adequate number of change-points. As $n \to \infty$, according to [15], the ratio $\rho_k = J(k+1)/J(k)$ should show a different qualitative behavior when $k \leq K^\star$ and when $k > K^\star$, $K^\star$ being the true number of change-points. In particular, $\rho_k \approx C_n$ for $k > K^\star$, where $C_n \to 1$ as $n \to \infty$. Actually, we found that $C_n$ was close to 1, even in small-sample settings, for various experimental designs in terms of noise variance and true number of change-points. Hence, conciliating theoretical guidance in the large-sample setting and experimental findings in the fixed-sample setting, we suggest the following rule of thumb for selecting the number of change-points: $\hat{K} = \min_{k \geq 1} \{ k : \rho_k \geq 1 - \nu \}$, where $\rho_k = J(k+1)/J(k)$.
Cachalot Algorithm

Input
- Vector of observations $Y \in \mathbb{R}^n$
- Upper bound $K_{\max}$ on the number of change-points
- Model selection threshold $\nu$

Processing
1. Compute the first $K_{\max}$ non-null coefficients $(\hat{\beta}_{\tau_1}, \ldots, \hat{\beta}_{\tau_{K_{\max}}})$ on the regularization path with the LAR/LASSO algorithm.
2. Launch the rDP algorithm on the set of potential change-points $(\tau_1, \ldots, \tau_{K_{\max}})$.
3. Select the smallest subset of the potential change-points $(\tau_1, \ldots, \tau_{K_{\max}})$ selected by the rDP algorithm for which $\rho_k \geq 1 - \nu$.

Output: change-point location estimates $\hat{\tau}_1, \ldots, \hat{\tau}_{\hat{K}}$.
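Putting the pieces together on simulated data, using the hedged sketches lasso_candidates, reduced_dp and select_k defined above:

```python
import numpy as np

rng = np.random.default_rng(0)
beta = np.zeros(100)
beta[[30, 50, 70, 90]] = [5.0, -3.0, 4.0, -2.0]
y = np.cumsum(beta) + rng.standard_normal(100)    # model (2), identity noise

S = lasso_candidates(y, k_max=9)                  # step 1: LAR/LASSO
J, changepoints = reduced_dp(y, S, k_max=len(S))  # step 2: reduced DP
K = select_k(J, nu=0.05)                          # step 3: model selection
print(K, changepoints(K))                         # ideally 4 change-points
```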
To illustrate our algorithm, we consider observations $(Y_n)$ satisfying model (2) with $(\beta_{30}, \beta_{50}, \beta_{70}, \beta_{90}) = (5, -3, 4, -2)$, the other $\beta_j$ being equal to zero, $n = 100$, and $\varepsilon_n$ a Gaussian random vector with covariance matrix equal to $\mathrm{Id}$, $\mathrm{Id}$ being the $n \times n$ identity matrix. The set of the first nine active variables caught by the Lasso along the regularization path, i.e. the set $\{ k : \hat{\beta}_k \neq 0 \}$, is given in this case by $S = \{21, 23, 28, 29, 30, 50, 69, 70, 90\}$. The set $S$ contains the true change-points but also irrelevant ones close to the true change-points. Moreover, the most significant variables do not necessarily appear at the beginning. This supports the use of the reduced version of the DP algorithm hereafter. Table 1 gathers the $J(K)$, $K = 1, \ldots, K_{\max}$, and the corresponding $(\hat{\tau}_1, \ldots, \hat{\tau}_K)$.
Table 1: Toy example: the empirical risk $J$ and the estimated change-points as a function of the possible number of change-points $K$.

K   J(K)     Estimated change-points
0   696.28   -
1   249.24   30
2   209.94   (30, 70)
3   146.29   (30, 50, 69)
4   120.21   (30, 50, 70, 90)
5   118.22   (30, 50, 69, 70, 90)
6   116.97   (21, 30, 50, 69, 70, 90)
7   116.66   (21, 29, 30, 50, 69, 70, 90)
8   116.65   (21, 23, 29, 30, 50, 69, 70, 90)
9   116.64   (21, 23, 28, 29, 30, 50, 69, 70, 90)
The different values of the ratio $\rho_k$ for $k = 0, \ldots, 8$ of the model selection procedure are given in Table 2. Here we took $\nu = 0.05$. We conclude, as expected, that $\hat{K} = 4$ and that the change-points are $(30, 50, 70, 90)$, thanks to the results obtained in Table 1.
4 Discussion
Off-line multiple change-point estimation has recently received much attention in theoretical works, both in a non-asymptotic and in an asymptotic setting, by [17] and [13] respectively. From a practical point of view, retrieving the set of change-point locations $\{\tau_1^\star, \ldots, \tau_{K^\star}^\star\}$ is challenging, since it is
Table 2: Toy example: the values of the ratio $\rho_k = J(k+1)/J(k)$, $k = 0, \ldots, 8$.

k      0        1        2        3        4        5        6        7        8
rho_k  0.3580   0.8423   0.6968   0.8218   0.9834   0.9894   0.9974   0.9999   1.0000
plagued by the curse of dimensionality. Indeed, all of the $n$ observation times have to be considered as potential change-point instants. Yet, a dynamic programming algorithm (DP), proposed by [9] and [2], allows one to explore all the configurations with a complexity of $O(n^3)$ in time. Then, selecting the number of change-points is usually performed thanks to a Schwarz-like penalty $\beta_n K$, where $\beta_n$ has to be calibrated on data [13] [12], or a penalty $K(a + b \log(n/K))$ as in [17] [14], where $a$ and $b$ are data-driven as well. We should also mention that an abundant literature tackles both change-point estimation and model selection issues from a Bayesian point of view (see [20] [8] and references therein). All the approaches cited above rely on DP, or variants of it in Bayesian settings, and hence yield a computational complexity of $O(n^3)$, which makes them inappropriate for very large-scale signal segmentation. Moreover, despite its theoretical optimality in a maximum likelihood framework, raw DP may sometimes have poor performance when applied to very noisy observations. Our alternative framework for multiple change-point estimation was previously elusively mentioned several times, e.g. in [16] [4] [19]. However, up to our knowledge, neither a successful practical implementation nor theoretical grounding had been given so far to support such an approach to change-point estimation. Let us also mention [22], where the Fused Lasso is applied in a similar yet different way to perform hot-spot detection. However, that approach includes an additional penalty, penalizing departures from the overall mean of the observations, and should thus rather be considered as an outlier detection method.
5 Comparison with other methods
5.1 Synthetic data
We propose to compare our algorithm with a recent method based on a penalized least-squares criterion studied by [12]. The main difficulty in such approaches is the choice of the constants appearing
in the penalty. In [12], a very efficient approach to overcome this difficulty has been proposed: the
choice of the constants is completely data-driven and has been implemented in a toolbox available
online at http://www.math.u-psud.fr/~lavielle/programs/index.html.
In the following, we benchmark our algorithm: A together with the latter method: B. We shall
use Recall and Precision as relevant performance measures to analyze the previous two algorithms.
More precisely, the Recall corresponds to the ratio of change-points retrieved by a method with
those really present in the data. As for the Precision, it corresponds to the number of change-points
retrieved divided by the number of suggested change-points. We shall also estimate the probability
of false alarm corresponding to the number of suggested change-points which are not present in the
signal divided by the number of true change-points.
To compute the precision and the recall of methods A and B, we ran Monte-Carlo experiments. More precisely, we sampled 30 configurations of change-points for each true number of change-points $K^\star$ equal to 5, 10, 15 and 20, within a signal containing 500 observations. Change-points were at least 10 observations apart. We sampled 30 configurations of levels from a Gaussian distribution.
We used the following setting for the noise: for each configuration of change-points and levels, we synthesized a Gaussian white noise such that the standard deviation is a multiple of the minimum magnitude jump between two contiguous segments, i.e. $\sigma = m \min_k |\mu^\star_{k+1} - \mu^\star_k|$, $\mu^\star_k$ being the level of the $k$-th segment. The number of noise replications was set to 10.
As shown in Tables 3, 4 and 5 below, our method A yields competitive results compared to method B, with $1 - \nu = 0.99$ and $K_{\max} = 50$. Performance in recall is comparable, whereas method A provides better results than method B in terms of precision and false alarm rate.
5.2 Real data
In this section, we propose to apply the method previously described to real data which have already been analyzed by Bayesian methods: the well-log data, which are described in [20] and [6] and
Table 3: Precision of methods A and B.

          K* = 5                  K* = 10                 K* = 15                 K* = 20
Method    A          B            A          B            A          B            A          B
m = 0.1   0.81±0.15  0.71±0.29    0.89±0.08  0.8±0.22     0.95±0.05  0.86±0.13    0.97±0.03  0.91±0.09
m = 0.5   0.8±0.16   0.73±0.29    0.89±0.08  0.8±0.21     0.95±0.05  0.86±0.13    0.97±0.03  0.92±0.09
m = 1.0   0.78±0.17  0.71±0.27    0.88±0.09  0.78±0.21    0.93±0.06  0.85±0.13    0.96±0.04  0.9±0.09
m = 1.5   0.73±0.19  0.66±0.28    0.84±0.1   0.79±0.2     0.93±0.06  0.84±0.13    0.95±0.04  0.9±0.1

Table 4: Recall of methods A and B.

          K* = 5                  K* = 10                 K* = 15                 K* = 20
Method    A          B            A          B            A          B            A          B
m = 0.1   0.99±0.02  0.99±0.02    1±0        1±0          0.99±0     0.99±0       0.99±0     1±0
m = 0.5   0.98±0.04  0.99±0.03    0.99±0.01  0.99±0.01    0.99±0.01  0.99±0.01    0.99±0.01  1±0
m = 1.0   0.95±0.08  0.94±0.08    0.96±0.06  0.96±0.05    0.97±0.03  0.97±0.04    0.97±0.03  0.98±0.02
m = 1.5   0.85±0.16  0.87±0.15    0.92±0.07  0.91±0.09    0.94±0.06  0.94±0.06    0.95±0.04  0.96±0.04
Table 5: False alarm rate of methods A and B.

          K* = 5                  K* = 10                 K* = 15                 K* = 20
Method    A          B            A          B            A          B            A          B
m = 0.1   0.13±0.03  0.23±0.2     0.24±0.03  0.33±0.19    0.34±0.02  0.42±0.13    0.44±0.02  0.51±0.12
m = 0.5   0.13±0.03  0.22±0.2     0.23±0.03  0.32±0.18    0.33±0.02  0.41±0.13    0.44±0.02  0.5±0.11
m = 1.0   0.13±0.03  0.21±0.18    0.23±0.03  0.32±0.18    0.33±0.02  0.4±0.13     0.43±0.03  0.5±0.12
m = 1.5   0.13±0.03  0.21±0.2     0.23±0.03  0.29±0.16    0.31±0.03  0.4±0.15     0.42±0.03  0.48±0.11
displayed in Figure 1. They consist in nuclear magnetic response measurements expected to carry
information about rock structure and especially its stratification.
One distinctive feature of these data is that they typically contain a non-negligible amount of outliers. The multiple change-point estimation method should then either be used after a data-cleaning step (median filtering [6]), or explicitly make a heavy-tailed noise distribution assumption. We restricted ourselves to a median-filtering pre-processing. The results given by our method applied to the well-log data processed with a median filter are displayed in Figure 1, for $K_{\max} = 200$ and $1 - \nu = 0.99$. The vertical lines locate the change-points. We can note that they are close to those found by [6] (p. 206), who used Bayesian techniques to perform change-point detection.
[Figure 1 plots: left, the raw well-log data (nuclear magnetic response, vertical scale x 10^5) against time index 0-4500; right, the median-filtered data with the estimated change-point locations marked by vertical lines.]
Figure 1: Left: raw well-log data. Right: change-point locations obtained with our method on the well-log data processed with a median filter.
6 Conclusion and prospects
We proposed here to cast the multiple change-point estimation as a variable selection problem. A
least-square criterion with a Lasso-penalty yields an efficient primary estimation of change-point
locations. Yet these change-point location estimates can be further refined thanks to a reduced
dynamic programming algorithm. We obtained competitive performances on both artificial and real
data, in terms of precision, recall and false alarm. Thus, Cachalot is a computationally efficient
multiple change-point estimation method, paving the way for processing large datasets.
References
[1] M. Basseville and N. Nikiforov. The Detection of Abrupt Changes. Information and System Sciences Series. Prentice-Hall, 1993.
[2] R. Bellman. On the approximation of curves by line segments using dynamic programming. Communications of the ACM, 4(6), 1961.
[3] P. Bickel, Y. Ritov, and A. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. Preprint, 2007.
[4] L. Boysen, A. Kempe, A. Munk, V. Liebscher, and O. Wittich. Consistencies and rates of convergence of jump penalized least squares estimators. Annals of Statistics, in revision.
[5] B. Brodsky and B. Darkhovsky. Non-parametric Statistical Diagnosis: Problems and Methods. Kluwer Academic Publishers, 2000.
[6] O. Cappé, E. Moulines, and T. Rydén. Inference in Hidden Markov Models (Springer Series in Statistics). Springer-Verlag New York, Inc., 2005.
[7] B. Efron, T. Hastie, and R. Tibshirani. Least angle regression. Annals of Statistics, 32:407-499, 2004.
[8] P. Fearnhead. Exact and efficient Bayesian inference for multiple changepoint problems. Statistics and Computing, 16:203-213, 2006.
[9] W. D. Fisher. On grouping for maximum homogeneity. Journal of the American Statistical Society, 53:789-798, 1958.
[10] O. Gillet, S. Essid, and G. Richard. On the correlation of automatic audio and visual segmentation of music videos. IEEE Transactions on Circuits and Systems for Video Technology, 2007.
[11] S. M. Kay. Fundamentals of Statistical Signal Processing: Detection Theory. Prentice-Hall, Inc., 1993.
[12] M. Lavielle. Using penalized contrasts for the change-points problems. Signal Processing, 85(8):1501-1510, 2005.
[13] M. Lavielle and E. Moulines. Least-squares estimation of an unknown number of shifts in a time series. Journal of Time Series Analysis, 21(1):33-59, 2000.
[14] E. Lebarbier. Detecting multiple change-points in the mean of a Gaussian process by model selection. Signal Processing, 85(4):717-736, 2005.
[15] C.-B. L. Lee. Estimating the number of change-points in a sequence of independent random variables. Statistics and Probability Letters, 25:241-248, 1995.
[16] E. Mammen and S. Van De Geer. Locally adaptive regression splines. Annals of Statistics, 1997.
[17] P. Massart. A non asymptotic theory for model selection. Pages 309-323, European Mathematical Society, 2005.
[18] N. Meinshausen and B. Yu. Lasso-type recovery of sparse representations for high-dimensional data. Preprint, 2006.
[19] S. Rosset and J. Zhu. Piecewise linear regularized solution paths. Annals of Statistics, 35, 2007.
[20] J. Ruanaidh and W. Fitzgerald. Numerical Bayesian Methods Applied to Signal Processing. Statistics and Computing. Springer, 1996.
[21] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58(1):267-288, 1996.
[22] R. Tibshirani and P. Wang. Spatial smoothing and hot spot detection for CGH data using the fused lasso. Biostatistics, 9(1):18-29, 2008.
[23] P. Zhao and B. Yu. On model selection consistency of Lasso. Journal of Machine Learning Research, 7, 2006.
enough:1 hastie:1 lasso:26 cn:3 shift:1 retrospective:1 penalty:9 reformulated:1 york:1 nine:1 adequate:1 k2n:4 amount:2 tsybakov:1 locally:1 processed:2 reduced:6 http:1 sign:3 estimated:9 fulfilled:1 dummy:1 tibshirani:3 diagnosis:1 write:1 shall:7 reformulation:2 threshold:1 neither:1 penalizing:1 run:1 angle:1 letter:1 comparable:1 bound:5 tackled:2 refine:1 constraint:3 infinity:1 precisely:2 n3:2 u1:1 min:5 optimality:1 performing:2 according:1 poor:1 enst:1 slightly:1 outlier:2 restricted:1 computationally:1 remains:1 previously:2 turn:1 discus:1 caa:1 available:1 nikiforov:1 apply:2 observe:1 magnetic:1 appearing:1 alternative:1 paving:1 jn:12 running:1 instant:2 music:1 k1:4 uj:2 especially:1 classical:1 society:3 objective:2 already:2 quantity:1 depart:1 strategy:1 primary:1 parametric:1 dp:11 kth:1 cauchy:1 enforcing:2 index:3 ratio:4 minimizing:1 cij:1 negative:1 mink:2 design:1 implementation:1 unknown:1 perform:3 allowing:1 upper:5 vertical:1 observation:12 datasets:4 markov:1 benchmark:1 finite:3 displayed:2 supporting:1 communication:1 locate:1 rn:3 cast:2 paris:1 toolbox:1 suggested:2 below:2 usually:2 departure:1 sparsity:2 program:1 max:3 royal:1 video:2 hot:2 event:1 difficulty:2 rely:2 regularized:1 largescale:1 zhu:1 representing:1 technology:1 catch:1 literature:2 asymptotic:6 loss:2 filtering:2 gather:1 consistent:1 heavy:1 row:1 penalized:4 keeping:1 bias:1 absolute:1 sparse:2 van:1 overcome:1 curve:1 xn:7 cgh:1 jump:6 adaptive:1 far:1 cope:1 transaction:1 selector:1 dealing:1 active:1 conclude:1 un:1 tailed:1 table:9 eeg:1 necessarily:1 european:1 rue:1 main:2 noise:7 subsample:1 alarm:4 n2:2 telecom:1 precision:6 sub:2 theorem:1 lebarbier:1 evidence:1 grouping:1 burden:1 consist:1 false:4 sequential:1 effectively:1 magnitude:4 nk:1 explore:1 visual:1 brodsky:1 springer:3 aa:1 corresponds:2 minimizer:1 satisfies:1 acm:1 lavielle:3 goal:1 identity:1 basseville:1 absence:1 fisher:1 change:73 paristech:1 except:1 called:1 geer:1 experimental:3 xn0:4 select:3 support:2 latter:1 relevance:1 audio:2 |
2,412 | 3,189 | Multi-task Gaussian Process Prediction
Edwin V. Bonilla, Kian Ming A. Chai, Christopher K. I. Williams
School of Informatics, University of Edinburgh, 5 Forrest Hill, Edinburgh EH1 2QL, UK
[email protected], [email protected], [email protected]
Abstract
In this paper we investigate multi-task learning in the context of Gaussian Processes (GP). We propose a model that learns a shared covariance function on
input-dependent features and a ?free-form? covariance matrix over tasks. This allows for good flexibility when modelling inter-task dependencies while avoiding
the need for large amounts of data for training. We show that under the assumption of noise-free observations and a block design, predictions for a given task
only depend on its target values and therefore a cancellation of inter-task transfer occurs. We evaluate the benefits of our model on two practical applications:
a compiler performance prediction problem and an exam score prediction task.
Additionally, we make use of GP approximations and properties of our model in
order to provide scalability to large data sets.
1
Introduction
Multi-task learning is an area of active research in machine learning and has received a lot of attention over the past few years. A common set up is that there are multiple related tasks for which
we want to avoid tabula rasa learning by sharing information across the different tasks. The hope is
that by learning these tasks simultaneously one can improve performance over the ?no transfer? case
(i.e. when each task is learnt in isolation). However, as pointed out in [1] and supported empirically
by [2], assuming relatedness in a set of tasks and simply learning them together can be detrimental.
It is therefore important to have models that will generally benefit related tasks and will not hurt
performance when these tasks are unrelated. We investigate this in the context of Gaussian Process
(GP) prediction.
We propose a model that attempts to learn inter-task dependencies based solely on the task identities
and the observed data for each task. This contrasts with approaches in [3, 4] where task-descriptor
features t were used in a parametric covariance function over different tasks?such a function may
be too constrained by both its parametric form and the task descriptors to model task similarities
effectively. In addition, for many real-life scenarios task-descriptor features are either unavailable
or difficult to define correctly. Hence we propose a model that learns a ?free-form? task-similarity
matrix, which is used in conjunction with a parameterized covariance function over the input features
x.
For scenarios where the number of input observations is small, multi-task learning augments the data
set with a number of different tasks, so that model parameters can be estimated more confidently;
this helps to minimize over-fitting. In our model, this is achieved by having a common covariance
function over the features x of the input observations. This contrasts with the semiparametric latent
factor model [5] where, with the same set of input observations, one has to estimate the parameters
of several covariance functions belonging to different latent processes.
For our model we can show the interesting theoretical property that there is a cancellation of intertask transfer in the specific case of noise-free observations and a block design. We have investigated
both gradient-based and EM-based optimization of the marginal likelihood for learning the hyperparameters of the GP. Finally, we make use of GP approximations and properties of our model in
order to scale our approach to large multi-task data sets, and evaluate the benefits of our model on
two practical multi-task applications: a compiler performance prediction problem and a exam score
prediction task.
The structure of the paper is as follows: in section 2 we outline our model for multi-task learning,
and discuss some approximations to speed up computations in section 3. Related work is described
in section 4. We describe our experimental setup in section 5 and give results in section 6.
2
The Model
Given a set X of N distinct inputs x1 , . . . , xN we define the complete set of responses for M tasks
as y = (y11 , . . . , yN 1 , . . . , y12 , . . . , yN 2 , . . . , y1M , . . . , yN M )T , where yil is the response for the lth
task on the ith input xi . Let us also denote the N ? M matrix Y such that y = vec Y .
Given a set of observations yo , which is a subset of y, we want to predict some of the unobserved
response-values yu at some input locations for certain tasks.
We approach this problem by placing a GP prior over the latent functions {fl } so that we directly
induce correlations between tasks. Assuming that the GPs have zero mean we set
f x
hfl (x)fk (x0 )i = Klk
k (x, x0 )
yil ? N (fl (xi ), ?l2 ),
(1)
where K f is a positive semi-definite (PSD) matrix that specifies the inter-task similarities, k x is a
covariance function over inputs, and ?l2 is the noise variance for the lth task. Below we focus on
stationary covariance functions k x ; hence, to avoid redundancy in the parametrization, we further
let k x be only a correlation function (i.e. it is constrained to have unit variance), since the variance
can be explained fully by K f .
The important property of this model is that the joint Gaussian distribution over y is not blockdiagonal wrt tasks, so that observations of one task can affect the predictions on another task. In
[4, 3] this property also holds, but instead of specifying a general PSD matrix K f , these authors set
f
Klk
= k f (tl , tk ), where k f (?, ?) is a covariance function over the task-descriptor features t.
One popular setup for multi-task learning is to assume that tasks can be clustered, and that there are
inter-task correlations between tasks in the same cluster. This can be easily modelled with a general
task-similarity K f matrix: if we assume that the tasks are ordered with respect to the clusters, then
K f will have a block diagonal structure. Of course, as we are learning a ?free form? K f the ordering
of the tasks is irrelevant in practice (and is only useful for explanatory purposes).
2.1
Inference
Inference in our model can be done by using the standard GP formulae for the mean and variance
of the predictive distribution with the covariance function given in equation (1). For example, the
mean prediction on a new data-point x? for task l is given by
f?l (x? ) = (kfl ? kx? )T ??1 y
? = Kf ? Kx + D ? I
(2)
where ? denotes the Kronecker product, kfl selects the lth column of K f , kx? is the vector of
covariances between the test point x? and the training points, K x is the matrix of covariances
between all pairs of training points, D is an M ? M diagonal matrix in which the (l, l)th element is
?l2 , and ? is an M N ? M N matrix.
In section 2.3 we show that when there is no noise in the data (i.e. D = 0), there will be no transfer
between tasks.
2.2
Learning Hyperparameters
Given the set of observations yo , we wish to learn the parameters ? x of k x and the matrix K f
to maximize the marginal likelihood p(yo |X, ? x , K f ). One way to achieve this is to use the
fact that y|X ? N (0, ?). Therefore, gradient-based methods can be readily applied to maximize the marginal likelihood. In order to guarantee positive-semidefiniteness of K f , one possible
parametrization is to use the Cholesky decomposition K f = LLT where L is lower triangular.
Computing the derivatives of the marginal likelihood with respect to L and ? x is straightforward. A
drawback of this approach is its computational cost as it requires the inversion of a matrix of potential size M N ? M N (or solving an M N ? M N linear system) at each optimization step. Note,
however, that one only needs to actually compute the Gram matrix and its inverse at the visible
locations corresponding to yo .
Alternatively, it is possible to exploit the Kronecker product structure of the full covariance matrix
as in [6], where an EM algorithm is proposed such that learning of ? x and K f in the M-step is
decoupled. This has the advantage that closed-form updates for K f and D can be obtained (see
equation (5)), and that K f is guaranteed to be positive-semidefinite. The details of the EM algorithm
are as follows: Let f be the vector of function values corresponding to y, and similarly for F wrt
Y . Further, let y?l denote the vector (y1l , . . . , yN l )T and similarly for f ?l . Given the missing data,
which in this case is f , the complete-data log-likelihood is
i
?1 T
N
M
1 h
?1
Lcomp = ? log |K f | ?
log |K x | ? tr K f
F (K x ) F
2
2
2
M
X
MN
1
N
log 2? (3)
log ?l2 ? tr (Y ? F )D?1 (Y ? F )T ?
?
2
2
2
l=1
from which we have following updates:
D
E
bx = arg min N log F T (K x (? x ))?1 F + M log |K x (? x )|
?
?x
?1
D
E
T
cx )
b f = N ?1 F T K x (?
K
F
?
bl2 = N ?1 (y?l ? f ?l ) (y?l ? f ?l )
(4)
(5)
where the expectations h?i are taken with respect to p f |yo , ? x , K f , and b? denotes the updated
parameters. For
Then
clarity, let us consider the case where yo = y, i.e. a block design.
p f |y, ? x , K f = N (K f ? K x )??1 y, (K f ? K x ) ? (K f ? K x )??1 (K f ? K x ) .
We have seen that ? needs to be inverted (in time O(M 3 N 3 )) for both making predictions and
learning the hyperparameters (when considering noisy observations). This can lead to computational
problems if M N is large. In section 3 we give some approximations that can help speed up these
computations.
2.3
Noiseless observations and the cancellation of inter-task transfer
One particularly interesting case to consider is noise-free observations at the same locations for
all tasks (i.e. a block-design) so that y|X ? Normal(0, K f ? K x ). In this case maximizing the
marginal likelihood p(y|X) wrt the parameters ? x of k x reduces to maximizing ?M log |K x | ?
N log |Y T (K x )?1 Y |, an expression that does not depend on K f . After convergence we can obtain
? f = 1 Y T (K x )?1 Y . The intuition behind is this: The responses Y are correlated via K f
K f as K
N
x
and K . We can learn K f by decorrelating Y with (K x )?1 first so that only correlation with respect
to K f is left. Then K f is simply the sample covariance of the de-correlated Y .
Unfortunately, in this case there is effectively no transfer between the tasks (given the kernels). To
see this, consider making predictions at a new location x? for all tasks. We have (using the mixedproduct property of Kronecker products) that
T
?1
f (x? ) = K f ? kx?
Kf ? Kx
y
(6)
f T
x T
f ?1
x ?1
= (K ) ? (k? )
(K ) ? (K )
y
(7)
f f ?1
x T
x ?1
= K (K )
? (k? ) (K )
y
(8)
?
?
x T
x ?1
(k? ) (K ) y?1
?
?
..
(9)
=?
?,
.
(kx? )T (K x )?1 y?M
and similarly for the covariances. Thus, in the noiseless case with a block design, the predictions
for task l depend only on the targets y?l . In other words, there is a cancellation of transfer. One can
in fact generalize this result to show that the cancellation of transfer for task l does still hold even
if the observations are only sparsely observed at locations X = (x1 , . . . , xN ) on the other tasks.
After having derived this result we learned that it is known as autokrigeability in the geostatistics
literature [7], and is also related to the symmetric Markov property of covariance functions that is
discussed in [8]. We emphasize that if the observations are noisy, or if there is not a block design,
then this result on cancellation of transfer will not hold. This result can also be generalized to
multidimensional tensor product covariance functions and grids [9].
3
Approximations to speed up computations
The issue of dealing with large N has been much studied in the GP literature, see [10, ch. 8] and
[11] for overviews. In particular, one can use sparse approximations where only Q out of N data
points are selected as inducing inputs[11]. Here, we use the Nystr?om approximation of K x in the
def
x
x ?1 x
ex =
marginal likelihood, so that K x ? K
K?I
(KII
) KI? , where I indexes Q rows/columns of
x
K . In fact for the posterior at the training points this result is obtained from both the subset of
regressors (SoR) and projected process (PP) approximations described in [10, ch. 8].
Specifying a full rank K f requires M (M + 1)/2 parameters, and for large M this would be a lot of
parameters to estimate. One parametrization of K f that reduces this problem is to use a PPCA model
def
ef =
[12] K f ? K
U ?U T + s2 IM , where U is an M ? P matrix of the P principal eigenvectors
f
of K , ? is a P ? P diagonal matrix of the corresponding eigenvalues, and s2 can be determined
analytically from the eigenvalues of K f (see [12] and references therein). For numerical stability,
?L
? T , where L
? is a
we may further use the incomplete-Cholesky decomposition setting U ?U T = L
M ? P matrix. Below we consider the case s = 0, i.e. a rank-P approximation to K f .
def
e =
?f ? K
? x + D ? IN , we have, after using the
Applying both approximations to get ? ? ?
K
def
??
e ?1 = ??1 ? ??1 B I ? K x + B T ??1 B ?1 B T ??1 where B =
(L
Woodbury identity, ?
II
def
x
?f ?K
? x has rank P Q, we have that computation
), and ? = D ? IN is a diagonal matrix. As K
K?I
?1
2 2
?
of ? y takes O(M N P Q ).
e x poses a problem in (4) because for the rank-deficient
For the EM algorithm, the approximation of K
x
e
matrix K , its log-determinant is negative infinity, and its matrix inverse is undefined. We overcome
e x = lim??0 (K x (K x )?1 K x +? 2 I), so that we solve an equivalent optimizathis by considering K
I?
II
?I
x
x
x
|,
| ? log |KII
K?I
tion problem where the log-determinant is replaced by the well-defined log |KI?
and the matrix inverse is replaced by the pseudo-inverse. With these approximations the computational complexity of hyperparameter learning can be reduced to O(M N P 2 Q2 ) per iteration for
both the Cholesky and EM methods.
4
Related work
There has been a lot of work in recent years on multi-task learning (or inductive transfer) using
methods such as Neural Networks, Gaussian Processes, Dirichlet Processes and Support Vector
Machines, see e.g. [2, 13] for early references. The key issue concerns what properties or aspects
should be shared across tasks. Within the GP literature, [14, 15, 16, 17, 18] give models where
the covariance matrix of the full (noiseless) system is block diagonal, and each of the M blocks is
induced from the same kernel function. Under these models each y?i is conditionally independent,
but inter-task tying takes place by sharing the kernel function across tasks. In contrast, in our model
and in [5, 3, 4] the covariance is not block diagonal.
The semiparametric latent factor model (SLFM) of Teh et al [5] involves having P latent processes
(where P ? M ) and each of these latent processes has its own covariance function. The noiseless
outputs are obtained by linear mixing of these processes with a M ? P matrix ?. The covariance
matrix of the system under this model has rank at most P N , so that when P < M the system
corresponds to a degenerate GP. Our model is similar to [5] but simpler, in that all of the P latent
processes share the same covariance function; this reduces the number of free parameters to be fitted
and should help to minimize overfitting. With a common covariance function k x , it turns out that
K f is equal to ??T , so a K f that is strictly positive definite corresponds to using P = M latent
processes. Note that if P > M one can always find an M ? M matrix ?0 such that ?0 ?0T = ??T .
We note also that the approximation methods used in [5] are different to ours, and were based on the
subset of data (SoD) method using the informative vector machine (IVM) selection heuristic.
In the geostatistics literature, the prior model for f? given in eq. (1) is known as the intrinsic correlation model [7], a specific case of co-kriging. A sum of such processes is known as the linear
coregionalization model (LCM) [7] for which [6] gives an EM-based algorithm for parameter estimation. Our model for the observations corresponds to an LCM model with two processes: the
process for f? and the noise process. Note that SLFM can also be seen as an instance of the LCM
model. To see this, let Epp be a P ? P diagonal matrix with 1 at (p, p) and zero elsewhere. Then we
PP
PP
can write the covariance in SLFM as (??I)( p=1 Epp ?Kpx )(??I)T = p=1 (?Epp ?T )?Kpx ,
where ?Epp ?T is of rank 1.
Evgeniou et al. [19] consider methods for inducing correlations between tasks based on a correlated
prior over linear regression parameters. In fact this corresponds to a GP prior using the kernel
k(x, x0 ) = xT Ax0 for some positive definite matrix A. In their experiments they use a restricted
f
form of K f with Klk
= (1 ? ?) + ?M ?lk (their eq. 25), i.e. a convex combination of a rank-1
matrix of ones and a multiple of the identity. Notice the similarity to the PPCA form of K f given in
section 3.
5
Experiments
We evaluate our model on two different applications. The first application is a compiler performance
prediction problem where the goal is to predict the speed-up obtained in a given program (task) when
applying a sequence of code transformations x. The second application is an exam score prediction
problem where the goal is to predict the exam score obtained by a student x belonging to a specific
school (task). In the sequel, we will refer to the data related to the first problem as the compiler data
and the data related to the second problem as the school data.
We are interested in assessing the benefits of our approach not only with respect to the no-transfer
case but also with respect to the case when a parametric GP is used on the joint input-dependent and
task-dependent space as in [3]. To train the parametric model note that the parameters of the covariance function over task descriptors k f (t, t0 ) can be tuned by maximizing the marginal likelihood,
as in [3]. For the free-form K f we initialize this (given k x (?, ?)) by using the noise-free expression
? f = 1 Y T (K x )?1 Y given in section 2.3 (or the appropriate generalization when the design is
K
N
not complete). For both applications we have used a squared-exponential (or Gaussian) covariance
function k x and a non-parametric form for K f . Where relevant the parametric covariance function
k f was also taken to be of squared-exponential form. Both k x and k f used an automatic relevance
determination (ARD) parameterization, i.e. having a length scale for each feature dimension. All
the length scales in k x and k f were initialized to 1, and all ?l2 were constrained to be equal for all
tasks and initialized to 0.01.
5.1
Description of the Data
Compiler Data. This data set consists of 11 C programs for which an exhaustive set of 88214
sequences of code transformations have been applied and their corresponding speed-ups have been
recorded. Each task is to predict the speed-up on a given program when applying a specific transformation sequence. The speed-up after applying a transformation sequence on a given program
is defined as the ratio of the execution time of the original program (baseline) over the execution
time of the transformed program. Each transformation sequence is described as a 13-dimensional
vector x that records the absence/presence of one-out-of 13 single transformations. In [3] the taskdescriptor features (for each program) are based on the speed-ups obtained on a pre-selected set of
8 transformations sequences, so-called ?canonical responses?. The reader is referred to [3, section
3] for a more detailed description of the data.
School Data. This data set comes from the Inner London Education Authority (ILEA) and
has been used to study the effectiveness of schools. It is publicly available under the name
of ?school effectiveness? at http://www.cmm.bristol.ac.uk/learning-training/
multilevel-m-support/datasets.shtml. It consists of examination records from 139
secondary schools in years 1985, 1986 and 1987. It is a random 50% sample with 15362 students.
This data has also been used in the context of multi-task learning by Bakker and Heskes [20] and
Evgeniou et al. [19]. In [20] each task is defined as the prediction of the exam score of a student
belonging to a specific school based on four student-dependent features (year of the exam, gender, VR band and ethnic group) and four school-dependent features (percentage of students eligible
for free school meals, percentage of students in VR band 1, school gender and school denomination). For comparison with [20, 19] we evaluate our model following the set up described above
and similarly, we have created dummy variables for those features that are categorical forming a
total of 19 student-dependent features and 8 school-dependent features. However, we note that
school-descriptor features such as the percentage of students eligible for free school meals and the
percentage of students in VR band 1 actually depend on the year the particular sample was taken.
It is important to emphasize that for both data sets there are task-descriptor features available. However, as we have described throughout this paper, our approach learns task similarity directly without
the need for task-dependent features. Hence, we have neglected these features in the application of
our free-form K f method.
6
Results
For the compiler data we have M = 11 tasks and we have used a Cholesky decomposition
K^f = LL^T. For the school data we have M = 139 tasks and we have preferred a reduced-rank parameterization K^f ≈ K̃^f = L̃L̃^T, with ranks 1, 2, 3 and 5. We have learnt the parameters of the models so as to maximize the marginal likelihood p(y_o | X, K^f, θ^x) using gradient-based search in MATLAB with Carl Rasmussen's minimize.m. In our experiments this method usually
outperformed EM in the quality of solutions found and in the speed of convergence.
Compiler Data: For this particular application, in a real-life scenario it is critical to achieve good
performance with a low number of training data-points per task given that a training data-point
requires the compilation and execution of a (potentially) different version of a program. Therefore,
although there are a total of 88214 training points per program we have followed a similar set up
to [3] by considering N = 16, 32, 64 and 128 transformation sequences per program for training.
All the M = 11 programs (tasks) have been used for training, and predictions have been done at
the (unobserved) remaining 88214 − N inputs. For comparison with [3] the mean absolute error
(between the actual speed-ups of a program and the predictions) has been used as the measure
of performance. Due to the variability of the results depending on training set selection we have
considered 10 different replications.
Figure 1 shows the mean absolute errors obtained on the compiler data for some of the tasks (top
row and bottom left) and on average for all the tasks (bottom right). Sample task 1 (histogram)
is an example where learning the tasks simultaneously brings major benefits over the no transfer
case. Here, multi-task GP (transfer free-form) provides a reduction on the mean absolute error of
up to 6 times. Additionally, it is consistently (although only marginally) superior to the parametric
approach. For sample task 2 (fir), our approach not only significantly outperforms the no transfer
case but also provides greater benefits over the parametric method (which for N = 64 and 128
is worse than no transfer). Sample task 3 (adpcm) is the only case out of all 11 tasks where our
approach degrades performance, although it should be noted that all the methods perform similarly.
Further analysis of the data indicates that learning on this task is hard as there is a lot of variability
that cannot be explained by the 1-out-of-13 encoding used for the input features. Finally, for all tasks
on average (bottom right) our approach brings significant improvements over single task learning
and consistently outperforms the parametric method. For all tasks except one our model provides
better or roughly equal performance than the non-transfer case and the parametric model.
School Data: For comparison with [20, 19] we have made 10 random splits of the data into training
(75%) data and test (25%) data. Due to the categorical nature of the data there are a maximum
of N = 202 different student-dependent feature vectors x. Given that there can be multiple observations of a target value for a given task at a specific input x, we have taken the mean of these
observations and corrected the noise variances by dividing them over the corresponding number of
observations. As in [19], the percentage explained variance is used as the measure of performance.
This measure can be seen as the percentage version of the well known coefficient of determination
r2 between the actual target values and the predictions.
[Figure 1 plots omitted: four panels — (a) Sample Task 1, (b) Sample Task 2, (c) Sample Task 3, and (d) All Tasks — each showing MAE against N ∈ {16, 32, 64, 128} for no transfer, transfer parametric, and transfer free-form.]
Figure 1: Panels (a), (b) and (c) show the average mean absolute error on the compiler data as a
function of the number of training points for specific tasks. no transfer stands for the use of a single
GP for each task separately; transfer parametric is the use of a GP with a joint parametric (SE)
covariance function as in [3]; and transfer free-form is multi-task GP with a "free form" covariance matrix over tasks. The error bars show ± one standard deviation taken over the 10 replications.
Panel (d) shows the average MAE over all 11 tasks, and the error bars show the average of the
standard deviations over all 11 tasks.
The results are shown in Table 1; note that larger figures are better. The parametric result given in
the table was obtained from the school-descriptor features; in the cases where these features varied
for a given school over the years, an average was taken. The results show that better results can
be obtained by using multi-task learning than without. For the non-parametric K f , we see that the
rank-2 model gives best performance. This performance is also comparable with the best (29.5%)
found in [20]. We also note that our no transfer result of 21.1% is much better than the baseline of
9.7% found in [20] using neural networks.
no transfer    parametric     rank 1         rank 2         rank 3         rank 5
21.05 (1.15)   31.57 (1.61)   27.02 (2.03)   29.20 (1.60)   24.88 (1.62)   21.00 (2.42)
Table 1: Percentage variance explained on the school dataset for various situations. The figures in
brackets are standard deviations obtained from the ten replications.
On the school data the parametric approach for K f slightly outperforms the non-parametric method,
probably due to the large size of this matrix relative to the amount of data. One can also run the
parametric approach creating a task for every unique school-features descriptor¹; this gives rise to 288 tasks rather than 139 schools, and a performance of 33.08% (±1.57). Evgeniou et al [19] use a
linear predictor on all 8 features (i.e. they combine both student and school features into x) and then
introduce inter-task correlations as described in section 4. This approach uses the same information
as our 288 task case, and gives similar performance of around 34% (as shown in Figure 3 of [19]).
1
Recall from section 5.1 that the school features can vary over different years.
7
Conclusion
In this paper we have described a method for multi-task learning based on a GP prior which has
inter-task correlations specified by the task similarity matrix K^f. We have shown that in a noise-free block design, there is actually a cancellation of transfer in this model, but not in general. We
have successfully applied the method to the compiler and school problems. An advantage of our
method is that task-descriptor features are not required (c.f. [3, 4]). However, such features might
be beneficial if we consider a setup where there are only few datapoints for a new task, and where
the task-descriptor features convey useful information about the tasks.
Acknowledgments
CW thanks Dan Cornford for pointing out the prior work on autokrigeability. KMC thanks DSO NL for support.
This work is supported under EPSRC grant GR/S71118/01, EU FP6 STREP MILEPOST IST-035307, and in part by the IST Programme of the European Community, under the PASCAL Network of Excellence, IST-2002-506778. This publication only reflects the authors' views.
References
[1] Jonathan Baxter. A Model of Inductive Bias Learning. JAIR, 12:149–198, March 2000.
[2] Rich Caruana. Multitask Learning. Machine Learning, 28(1):41–75, July 1997.
[3] Edwin V. Bonilla, Felix V. Agakov, and Christopher K. I. Williams. Kernel Multi-task Learning using
Task-specific Features. In Proceedings of the 11th AISTATS, March 2007.
[4] Kai Yu, Wei Chu, Shipeng Yu, Volker Tresp, and Zhao Xu. Stochastic Relational Models for Discriminative Link Prediction. In NIPS 19, Cambridge, MA, 2007. MIT Press.
[5] Yee Whye Teh, Matthias Seeger, and Michael I. Jordan. Semiparametric latent factor models. In Proceedings of the 10th AISTATS, pages 333–340, January 2005.
[6] Hao Zhang. Maximum-likelihood estimation for multivariate spatial linear coregionalization models.
Environmetrics, 18(2):125–139, 2007.
[7] Hans Wackernagel. Multivariate Geostatistics: An Introduction with Applications. Springer-Verlag,
Berlin, 2nd edition, 1998.
[8] A. O'Hagan. A Markov property for covariance structures. Statistics Research Report 98-13, Nottingham
University, 1998.
[9] C. K. I. Williams, K. M. A. Chai, and E. V. Bonilla. A note on noise-free Gaussian process prediction
with separable covariance functions and grid designs. Technical report, University of Edinburgh, 2007.
[10] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, Cambridge, Massachusetts, 2006.
[11] Joaquin Quiñonero-Candela, Carl Edward Rasmussen, and Christopher K. I. Williams. Approximation
Methods for Gaussian Process Regression. In Large Scale Kernel Machines. MIT Press, 2007. To appear.
[12] Michael E. Tipping and Christopher M. Bishop. Probabilistic principal component analysis. Journal of
the Royal Statistical Society, Series B, 61(3):611–622, 1999.
[13] S. Thrun. Is Learning the n-th Thing Any Easier Than Learning the First? In NIPS 8, 1996.
[14] Thomas P. Minka and Rosalind W. Picard. Learning How to Learn is Learning with Point Sets. 1999.
[15] Neil D. Lawrence and John C. Platt. Learning to learn with the Informative Vector Machine. In Proceedings of the 21st International Conference on Machine Learning, July 2004.
[16] Kai Yu, Volker Tresp, and Anton Schwaighofer. Learning Gaussian Processes from Multiple Tasks. In
Proceedings of the 22nd International Conference on Machine Learning, 2005.
[17] Anton Schwaighofer, Volker Tresp, and Kai Yu. Learning Gaussian Process Kernels via Hierarchical
Bayes. In NIPS 17, Cambridge, MA, 2005. MIT Press.
[18] Shipeng Yu, Kai Yu, Volker Tresp, and Hans-Peter Kriegel. Collaborative Ordinal Regression. In Proceedings of the 23rd International Conference on Machine Learning, June 2006.
[19] Theodoros Evgeniou, Charles A. Micchelli, and Massimiliano Pontil. Learning Multiple Tasks with
Kernel Methods. Journal of Machine Learning Research, 6:615–637, April 2005.
[20] Bart Bakker and Tom Heskes. Task Clustering and Gating for Bayesian Multitask Learning. Journal of
Machine Learning Research, 4:83–99, May 2003.
2,413 | 319 | Dynamics of Learning in Recurrent
Feature-Discovery Networks
Todd K. Leen
Department of Computer Science and Engineering
Oregon Graduate Institute of Science & Technology
Beaverton, OR 97006-1999
Abstract
The self-organization of recurrent feature-discovery networks is studied
from the perspective of dynamical systems. Bifurcation theory reveals parameter regimes in which multiple equilibria or limit cycles coexist with the
equilibrium at which the networks perform principal component analysis.
1
Introduction
Oja (1982) made the remarkable observation that a simple model neuron with an
Hebbian adaptation rule develops into a filter for the first principal component of
the input distribution. Several researchers have extended Oja's work, developing
networks that perform a complete principal component analysis (PCA). Sanger
(1989) proposed an algorithm that uses a single layer of weights with a set of
cascaded feedback projections to force nodes to filter for the principal components.
This architecture singles out a particular node for each principal component. Oja
(1989) and Oja and Karhunen (1985) give a related algorithm that projects inputs
onto an orthogonal basis spanning the principal subspace, but does not necessarily
filter for the principal components themselves.
In another class of models, nodes are forced to learn different statistical features
by a set of lateral connections. Rubner and Schulten (1990) use cascaded lateral
connections; the ith node receives signals from the input and all nodes j with j < i.
The lateral connections are modified by an anti-Hebbian learning rule that tends
to de-correlate the node responses . Like Sanger's scheme, this architecture singles
out a particular node for each principal component. Kung and Diamantaras (1990)
propose a different learning rule on the same network topology. Foldiak (1989)
simulates a network with full lateral connectivity, but does not discuss convergence.
Dynamics of Learning in Recurrent &ature-Discovery Networks
71
The goal of this paper is to help form a more complete picture of feature-discovery
models that use lateral signal flow. We discuss two models with particular emphasis on their learning dynamics. The models incorporate Hebbian and anti-Hebbian
adaptation, and recurrent lateral connections. We give stability analyses and derive
bifurcation diagrams for the models. Stability analysis gives a lower bound on the
rate of adaptation the lateral connections, below which the equilibrium corresponding to peA is unstable. Bifurcation theory provides a description of the behavior
near loss of stability. The bifurcation analyses reveal stable equilibria in which the
weight vectors from the input are combinations of the eigenvectors of the input
correlation. Limit cycles are also found.
2
The Single-Neuron Model
In Oja's model the input, x E R N , is a random vector assumed to be drawn from
a stationary probability distribution. The vector of synaptic weights is denoted w
and the post-synaptic response is linear; y
x . w. The continuous-time, ensemble
averaged form of the learning rule is
=
w
<
xy
> - < y2 >
w
Rw - (w. Rw) w
(1)
=
where < ... > denotes the average over the ensemble of inputs, and R
< X xT >
is the correlation matrix. The unit-magnitude eigenvectors of R are denoted
ei, i
1 ... N and are assumed to be ordered in decreasing magnitude of the
associated eigenvalues Al > A2 > ... > AN > O. Oja shows that the weight vector
asymptotically approaches ?el' The variance of the node's response is thus maximized and the node acts as a filter for the first principal component of the input
distribution.
=
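A discrete, pattern-by-pattern version of (1) is easy to simulate. The sketch below is our own illustration (the data model, learning rate and step count are ad hoc assumptions); it shows w aligning with the leading eigenvector e_1:

```python
import numpy as np

rng = np.random.default_rng(0)
N, steps, lr = 5, 20000, 0.01

A = rng.normal(size=(N, N))
R = A @ A.T / N                                # input correlation matrix
e1 = np.linalg.eigh(R)[1][:, -1]               # leading unit eigenvector

w = rng.normal(size=N) * 0.1
chol = np.linalg.cholesky(R + 1e-9 * np.eye(N))
for _ in range(steps):
    x = chol @ rng.normal(size=N)              # sample with <x x^T> = R
    y = x @ w
    w += lr * (y * x - y * y * w)              # discrete Oja update, cf. eq. (1)

print(abs(w @ e1))                             # close to 1: w -> +/- e_1
```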
3
Extending the Single Neuron Model
To extend the model to a system of M ≤ N nodes we consider a set of linear neurons with weight vectors (called the forward weights) w_1 ... w_M connecting each to the N-dimensional input. Without interactions between the nodes in the array, all M weight vectors would converge to ±e_1.
We consider two approaches to building interactions that force nodes to filter for
different statistical features. In the first approach an internode potential is constructed. This formulation results in a non-local model. The model is made local
by introducing lateral connections that naturally acquire anti-Hebbian adaptation.
For reasons that will become clear, the resulting model is referred to as a minimal coupling scheme. In the second approach, we write equations of motion of
the forward weights based directly on (1). The evolution of the lateral connection
strengths will follow a simple anti-Hebbian rule.
3.1
Minimal Coupling
The response of the i-th node in the array is taken to be linear in the input,

y_i = w_i · x .   (2)
The adaptation of the forward weights is derived from the potential
U = −(1/2) Σ_{i=1..M} ⟨y_i²⟩ + (C/2) Σ_{i,k; i≠k} ⟨y_i y_k⟩²
  = −(1/2) Σ_{j=1..M} (w_j · R w_j) + (C/2) Σ_{j,k; j≠k} (w_j · R w_k)²,   (3)
where C is a coupling constant. The first term of U generates the Hebb law,
while the second term penalizes correlated node activity (Yuille et al. 1989). The
equations of motion are constructed to perform gradient descent on U with a term
added to bound the weight vectors,
ẇ_i = −∇_{w_i} U − ⟨y_i²⟩ w_i
    = ⟨x y_i⟩ − C Σ_{j≠i} ⟨y_i y_j⟩ ⟨x y_j⟩ − ⟨y_i²⟩ w_i
    = R w_i − C Σ_{j≠i} (w_i · R w_j) R w_j − (w_i · R w_i) w_i.   (4)

Note that w_i refers to the weight vector from the input to the i-th node, not the i-th component of the weight vector.
Equation (4) is non-local as it involves correlations, ⟨y_i y_j⟩, between nodes. In order to provide a purely local adaptation, we introduce a symmetric matrix of lateral connections η_ij, i, j = 1, ..., M, with η_ii = 0. These evolve according to

η̇_ij = −d (η_ij + C ⟨y_i y_j⟩) = −d (η_ij + C w_i · R w_j),   (5)

where d is a rate constant. In the limit of fast adaptation (large d), η_ij → −C ⟨y_i y_j⟩. With this limiting behavior in mind, we replace (4) with

ẇ_i = ⟨x y_i⟩ + Σ_{j≠i} η_ij ⟨x y_j⟩ − ⟨y_i²⟩ w_i = R w_i + Σ_{j≠i} η_ij R w_j − (w_i · R w_i) w_i.   (6)
Equations (5) and (6) specify the adaptation of the network.
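Because both (5) and (6) are expressed through R, the ensemble-averaged dynamics can be integrated directly. The sketch below is our own Euler-integration illustration with ad hoc parameters; when the stability conditions (8)–(9) below hold, the matrix of projected correlations w_i · R w_j becomes approximately diagonal:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, C, d, dt, steps = 6, 2, 3.0, 5.0, 0.01, 30000

A = rng.normal(size=(N, N))
R = A @ A.T / N                                   # input correlation
W = rng.normal(size=(M, N)) * 0.1                 # forward weights, one row per node
eta = np.zeros((M, M))                            # lateral connections, eta_ii = 0

for _ in range(steps):
    RW = W @ R                                    # row i is (R w_i)^T
    # eq. (5): eta_dot = -d (eta_ij + C w_i . R w_j), diagonal clamped to zero
    eta += dt * (-d) * (eta + C * (W @ RW.T))
    np.fill_diagonal(eta, 0.0)
    # eq. (6): w_dot_i = R w_i + sum_{j != i} eta_ij R w_j - (w_i . R w_i) w_i
    W += dt * (RW + eta @ RW - np.sum(W * RW, axis=1)[:, None] * W)

print(np.round(W @ R @ W.T, 3))                   # near-diagonal when stable
```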
Notice that the response of the ith node is given by (2) and is thus independent of
the signals carried on the lateral connections. In this sense the lateral signals affect
node plasticity but not node response. This minimal coupling can also be derived
as a low-order approximation to the model in §3.2 below.
3.1.1
Stability and Bifurcation
By inspection the weight dynamics given by (5) and (6) have an equilibrium at

Xo :  w_i = ±e_i ,  η_ij = 0 .   (7)

At this equilibrium the outputs are the first M principal components of input vectors. In suitable coordinates the linear part of the equations of motion break into block diagonal form with any possible instabilities constrained to 3 × 3 sub-blocks. Details of the stability and bifurcation analysis are given in Leen (1991). The principal component subspace is always asymptotically stable. However the equilibrium Xo is linearly stable if and only if

d > do ≡ 1 / (λ_i + λ_j),   (8)
C > Co ≡ (λ_i − λ_j)² (λ_i + λ_j) / (λ_i² + λ_j²),   1 ≤ (i, j) ≤ M.   (9)
At Co or do there is a qualitative change (a bifurcation) in the learning dynamics. If
the condition on d is violated, then there is a Hopf bifurcation to oscillating weights.
At the critical value Co there is a bifurcation to multiple equilibria. The bifurcation
normal form was found by Liapunov-Schmidt reduction (see e.g. Golubitsky and
Schaeffer 1984) performed at the bifurcation point (Xo, Co). To deal effectively with
the large dimensional phase space of the network, the calculations were performed
on a symbolic algebra program.
At the critical point (Xo, Co) there is a supercritical pitchfork bifurcation. Two
unstable equilibria appear near Xo for C > Co. At these equilibria the forward
weights are mixtures of eM and eM -1 and the lateral connection strengths are
non-zero. Generically one expects a saddle-node bifurcation. However Xo is an
equilibrium for all values of C, and the system has an inversion symmetry. These
conditions preclude the saddle-node and transcritical bifurcations, and we are left
with the pitchfork.
The position of stable equilibria away from (Xo, Co) can be found by examining
terms of order five and higher in the bifurcation expansion. Alternatively we examine the bifurcation from the homogeneous solution, Xh, in which all weight vectors
are proportional to e_1. For a system of two nodes this equilibrium is asymptotically stable provided

C < Ch .   (10)

If λ_1 < 3λ_2, then there is a supercritical pitchfork bifurcation at Ch. Two stable
equilibria emerge from Xh for C > Ch. At these stable equilibria, the forward
weight vectors are mixtures of the first two correlation eigenvectors and the lateral
connection strengths are nonzero.
The complete bifurcation diagram for a system of two nodes is shown in Fig. 1. The
upper portion of the figure shows the bifurcation at (Xo, Co). The horizontal line
corresponds to the PCA equilibrium Xo. This equilibrium is stable (heavy line) for
C > Co, and unstable (light line) for C < Co. The subsidiary, unstable, equilibria
that emerge from (Xo, Co) lie on the light, parabolic branches of the top diagram.
Calculations indicate that the form of this bifurcation is independent of the number
of nodes, and of the input dimension. Of course the value of Co increases with
increasing number of nodes, c.f. (9).
The lower portion of Fig. 1 shows the bifurcation from (Xh, Ch) for a system of two
nodes. The horizontal line corresponds to the homogeneous equilibrium X h. This
is stable for C < Ch and unstable for C > Ch. The stable equilibria consisting of
mixtures of the correlation eigenvectors lie on the heavy parabolic branches of the
diagram. For networks with more nodes, there are presumably further bifurcations
along the supercritical stable branches emerging from (Xh, Ch); equilibria with
qualitatively different eigenvector mixtures are observed in simulations.
Each inset in the figure shows equilibrium forward weight vectors for both nodes in
a two-node network. These configurations were generated by numerical integration
of the equations of motion (5) and (6). The correlation matrix corresponds to an
ensemble of noise vectors with short-range correlations between the components.
Simulations of the corresponding discrete, pattern-by-pattern learning rule confirm
the form of the weight vectors shown here.
Figure 1: Bifurcation diagram for the minimal model.

Fig 2: Regions in the (λ_1, λ_2) plane corresponding to supercritical (shaded) and subcritical (unshaded) Hopf bifurcation.

3.2
Full Coupling
In a more conventional coupling scheme, the signals carried on the lateral connections affect the node activities directly. For linear node response, the vector of activities is given by

y = u w x ,   u ≡ (I − η)^{-1} ,   (11)

where y ∈ R^M, η is the M × M matrix of lateral connection strengths and w is an M × N matrix whose i-th row is the forward weight vector to the i-th node. The adaptation rule is

ẇ = ⟨y x^T⟩ − Diag(⟨y y^T⟩) w ,   (12)
η̇ = −D η − C ⟨y y^T⟩ ,   η_ii = 0 ,   (13)

where D and C are constants and Diag sets the off-diagonal elements of its argument
equal to zero. This system also has the PCA equilibrium Xo. This is linearly stable if

D > 0 ,   (14)
C > Co D .   (15)
Equation (14) tells us that the PCA equilibrium is structurally unstable without the Dη term in (13). Without this term, the model reduces to that given by Foldiak (1989). That the latter generally does not converge to the PCA equilibrium is
consistent with the condition in (14).
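For concreteness, the following sketch simulates the ensemble-averaged full-coupling dynamics. It is based on our reconstruction of (11)–(13) above (in particular y = u w x with u = (I − η)^{-1}), so the parameter choices and details are assumptions rather than the author's implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, C, D, dt, steps = 6, 2, 2.0, 1.0, 0.005, 40000

A = rng.normal(size=(N, N))
R = A @ A.T / N                               # input correlation
W = rng.normal(size=(M, N)) * 0.1
eta = np.zeros((M, M))

for _ in range(steps):
    U = np.linalg.inv(np.eye(M) - eta)        # node response y = u w x, per eq. (11)
    yx = U @ W @ R                            # <y x^T>
    yy = yx @ W.T @ U.T                       # <y y^T> = u w R w^T u^T
    W += dt * (yx - np.diag(np.diag(yy)) @ W) # eq. (12)
    eta += dt * (-D * eta - C * yy)           # eq. (13)
    np.fill_diagonal(eta, 0.0)

print(np.round(W @ R @ W.T, 3))               # near-diagonal near the PCA equilibrium
```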
If, on the other hand, the condition on C is violated then the network undergoes a
Hopf bifurcation leading to oscillations. Depending on the eigenvalue spectrum of
the input correlation, this bifurcation may be subcritical (with stable limit cycles
near Xo for C < Co), or supercritical (with unstable limit cycles near Xo for
C > Co). Figure 2 shows the corresponding regions in the (λ_1, λ_2) plane for a network of two nodes with D = 1. Simulations show that even in the supercritical regime, stable limit cycles are found for C < Co, and for C > Co sufficiently close to Co. This suggests that the complete bifurcation diagram in the supercritical regime is shaped like the bottom of a wine bottle, with only the indentation shown in figure 2. Under the approximation u ≈ 1 + η, the supercritical regime is significantly narrowed.
4
Discussion
The primary goal of this study has been to give a theoretical description of learning
in feature-discovery models; in particular models that use lateral interactions to
ensure that nodes tune to different statistical features. The models presented here
have several different limit sets (equilibria and cycles) whose stability and location
in the weight space depends on the relative learning rates in the network, and
on the eigenvalue spectrum of the input correlation. We have applied tools from
bifurcation theory to qualitatively describe the location and determine stability of
these different limiting solutions. This theoretical approach provides a unifying
framework within which similar algorithms can be studied.
Both models have equilibria at which the network performs PCA. In addition, the
minimal model has stable equilibria for which the forward weight vectors are mixtures of the correlation eigenvectors. Both models have regimes in which the weight
vectors oscillate. The model given by Rubner et al. (1990) also loses stability
through Hopf bifurcation for small values of the lateral learning rate.
The minimal values of C in (9) and (15) for the stability of the PCA equilibrium can become quite large for small correlation eigenvalues. These stringent conditions can be ameliorated in both models by the replacement

d η_ij → (⟨y_i²⟩ + ⟨y_j²⟩) η_ij .
However in the minimal model, this leads to degenerate bifurcations which have not
been thoroughly examined.
Finally, it remains to be seen whether the techniques employed here extend to similar
systems with non-linear node activation (e.g. Carlson 1991) or to the problem of
locating multiple minima in cost functions for supervised learning models.
Acknowledgments
This work was supported by the Office of Naval Research under contract N0001490-1349 and by DARPA grant MDA 972-88-J-1004 to the Department of Computer
Science and Engineering. The author thanks Bill Baird for stimulating e-mail discussion.
References
Carlson, A. (1991) Anti-Hebbian learning in a non-linear neural network. Biol. Cybern.,
64:171-176.
Foldiak, P. (1989) Adaptive network for optimal linear feature extraction. In Proceedings
of the IJCNN, pages I:401-405.
Golubitsky, Martin and Schaeffer, David (1984) Singularities and Groups in Bifurcation
Theory, Vol. I. Springer-Verlag, New York.
Kung, S. and Diamantaras K. (1990) A neural network learning algorithm for adaptive
principal component extraction (APEX). In Proceedings of the IEEE International
Conference on Acoustics Speech and Signal Processing, pages 861-864.
Leen, T. K. (1991) Dynamics of learning in linear feature-discovery networks. Network:
Computation in Neural Systems, to appear.
Oja, E. (1982) A simplified neuron model as a principal component analyzer. J. Math.
Biology, 15:267-273.
Oja, E. (1989) Neural networks, principal components, and subspaces. International Journal of Neural Systems, 1:61-68.
Oja, E. and Karhunen, J. (1985) On stochastic approximation of the eigenvectors and
eigenvalues of the expectation of a random matrix. J. of Math. Anal. and Appl.,
106:69-84.
Rubner, J. and Schulten K. (1990) Development of feature detectors by self-organization:
A network model. Biol. Cybern., 62:193-199.
Sanger, T. (1989) An optimality principle for unsupervised learning. In D.S. Touretzky,
editor, Advances in Neural Information Processing Systems 1. Morgan Kaufmann.
Yuille, A.L, Kammen, D.M. and Cohen, D.S. (1989) Quadrature and the development of
orientation selective cortical cells by Hebb rules. Biol. Cybern., 61:183-194.
2,414 | 3,190 | Evaluating Search Engines by Modeling the
Relationship Between Relevance and Clicks
Ben Carterette*
Center for Intelligent Information Retrieval
University of Massachusetts Amherst
Amherst, MA 01003
[email protected]
Rosie Jones
Yahoo! Research
3333 Empire Ave
Burbank, CA 91504
[email protected]
Abstract
We propose a model that leverages the millions of clicks received by web search
engines to predict document relevance. This allows the comparison of ranking
functions when clicks are available but complete relevance judgments are not.
After an initial training phase using a set of relevance judgments paired with click
data, we show that our model can predict the relevance score of documents that
have not been judged. These predictions can be used to evaluate the performance
of a search engine, using our novel formalization of the confidence of the standard
evaluation metric discounted cumulative gain (DCG), so comparisons can be made
across time and datasets. This contrasts with previous methods which can provide
only pair-wise relevance judgments between results shown for the same query.
When no relevance judgments are available, we can identify the better of two
ranked lists up to 82% of the time, and with only two relevance judgments for
each query, we can identify the better ranking up to 94% of the time. While our
experiments are on sponsored search results, which is the financial backbone of
web search, our method is general enough to be applicable to algorithmic web
search results as well. Furthermore, we give an algorithm to guide the selection of
additional documents to judge to improve confidence.
1 Introduction
Web search engine evaluation is an expensive process: it requires relevance judgments that indicate
the degree of relevance of each document retrieved for each query in a testing set. In addition,
reusing old relevance judgments to evaluate an updated ranking function can be problematic, since
documents disappear or become obsolete, and the distribution of queries entered changes [15]. Click
data from web searchers, used in aggregate, can provide valuable evidence about the relevance of
each document. The general problem with using clicks as relevance judgments is that clicks are
biased. They are biased to the top of the ranking [12], to trusted sites, to attractive abstracts; they
are also biased by the type of query and by other things shown on the results page. To cope with
this, we introduce a family of models relating clicks to relevance. By conditioning on clicks, we can
predict the relevance of a document or a set of documents.
Joachims et al. [12] used eye-tracking devices to track what documents users looked at before clicking. They found that users tend to look at results ranked higher than the one they click on more
often than they look at results ranked lower, and this information can in principle be used to train
a search engine using these "preference judgments" [10]. The problem with using preference judgments inferred from clicks for learning is that they will tend to learn to reverse the list. A click at the
lowest rank is preferred to everything else, while a click at the highest rank is preferred to nothing
*Work done while the author was at Yahoo!
else. Radlinski and Joachims [13] suggest an antidote to this: randomly swapping adjacent pairs of
documents. This ensures that users will not prefer document i to document i + 1 solely because of
rank. However, we may not wish to show a suboptimal document ordering in order to acquire data.
Our approach instead will be to use discounted cumulative gain (DCG [9]), an evaluation metric
commonly used in search engine evaluation. Using click data, we can estimate the confidence that
a difference in DCG exists between two rankings without having any relevance judgments for the
documents ranked. We will show how a comparison of ranking functions can be performed when
clicks are available but complete relevance judgments are not. After an initial training phase with a
few relevance judgments, the relevance of unjudged documents can be predicted from clickthrough
rates. The confidence in the evaluation can be estimated with the knowledge of which documents are
most frequently clicked. Confidence can be dramatically increased with only a few more judiciously
chosen relevance judgments.
Our contributions are (1) a formalization of the information retrieval metric DCG as a random variable (2) analysis of the sign of the difference between two DCGs as an indication that one ranking is
better than another (3) empirical demonstration that combining click-through rates over all results on
the page is better at predicting the relevance of the document at position i than just the click-through
rate at position i (4) empirically modeling relevance of documents using clicks, and using this model
to estimate DCG (5) empirical evaluation of comparison of different rankings using DCG derived
from clicks (6) an algorithm for selection of minimal numbers of documents for manual relevance
judgment to improve the confidence in DCG over the estimate derived from clicks alone.
Section 2 covers previous work on using clickthrough rates and on estimating evaluation metrics.
Section 3 describes the evaluation of web retrieval systems using the metric discounted cumulative
gain (DCG) and shows how to estimate the confidence that a difference exists when relevance judgments are missing. Our model for predicting relevance from clicks is described in Section 4. We
discuss our data in Section 5 and in Section 6 we return to the task of estimating relevance for the
evaluation of search engines. Our experiments are conducted in the context of sponsored search, but
the methods we use are general enough to translate to general web search engines.
2 Previous Work
There has been a great deal of work on low-cost evaluation in TREC-type settings ([20, 6, 16, 5] are a
few), but we are aware of little for the web. As discussed above, Joachims [10, 12] and Radlinski and
Joachims [13] conducted seminal work on using clicks to infer user preferences between documents.
Agichtein et al.[2, 1] used and applied models of user interaction to predict preference relationships
and to improve ranking functions. They use many features beyond clickthrough rate, and show that
they can learn preference relationships using these features. Our work is superficially similar, but
we explicitly model dependencies among clicks for results at different ranks with the purpose of
learning probabilistic relevance judgments. These relevance judgments are a stronger result than
preference ordering, since preference ordering can be derived from them. In addition, given a strong
probabilistic model of relevance from clicks, better combined models can be built.
Dupret et al. [7] give a theoretical model for the rank-position effects of click-through rate, and
build theoretical models for search engine quality using them. They do not evaluate estimates of
document quality, while we empirically compare relevance estimated from clicks to manual relevance judgments. Joachims [11] investigated the use of clickthrough rates for evaluation, showing
that relative differences in performance could be measured by interleaving results from two ranking
functions, then observing which function produced results that are more frequently clicked. As we
will show, interleaving results can change user behavior, and not necessarily in a way that will lead
to the user clicking more relevant documents.
Soboroff [15] proposed methods for maintaining the relevance judgments in a corpus that is constantly changing. Aslam et al. [3] investigated minimum variance unbiased estimators of system
performance, and Carterette et al. [5] introduced the idea of treating an evaluation measure as a random variable with a distribution over all possible relevance judgments. This can be used to create
an optimal sampling strategy to obtain judgments, and to estimate the confidence in an evaluation
measure. We extend their methods to DCG.
3 Evaluating Search Engines
Search results are typically evaluated using Discounted Cumulative Gain (DCG) [9]. DCG is defined as the sum of the "gain" of presenting a particular document times a "discount" of presenting it at a particular rank, up to some maximum rank ℓ: DCG_ℓ = Σ_{i=1}^{ℓ} gain_i · discount_i. For web search, "gain" is typically a relevance score determined from a human labeling, and "discount" is the reciprocal of the log of the rank, so that putting a document with a high relevance score at a low rank results in a much lower discounted gain than putting the same document at a high rank.
DCG_ℓ = rel_1 + Σ_{i=2}^{ℓ} rel_i / log_2 i
The constants reli are the relevance scores. Human assessors typically judge documents on an
ordinal scale, with labels such as "Perfect", "Excellent", "Good", "Fair", and "Bad". These are then
mapped to a numeric scale for use in DCG computation. We will denote five levels of relevance aj ,
with a1 > a2 > a3 > a4 > a5 . In this section we will show that we can compare ranking functions
without having labeled all the documents.
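For concreteness, the following is a minimal Python sketch of this computation (the function name and the example numeric gains are our own illustration; the paper does not fix the numeric values of the labels a_1, ..., a_5):

    import math

    def dcg(rels, ell=10):
        # DCG at rank ell; rels[0] is the gain of the document at rank 1.
        rels = rels[:ell]
        return rels[0] + sum(rels[i] / math.log2(i + 1) for i in range(1, len(rels)))

    # Example with assumed gains a1..a5 mapped to 4..0:
    print(dcg([4, 2, 3, 0, 1]))  # 4 + 2/1 + 3/log2(3) + 0 + 1/log2(5)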
3.1 Estimating DCG from Incomplete Information
DCG requires that the ranked documents have been judged with respect to a query. If the index has
recently been updated, or a new algorithm is retrieving new results, we have documents that have not
been judged. Rather than ask a human assessor for a judgment, we may be able to infer something
about DCG based on the judgments we already have.
Let X_i be a random variable representing the relevance of document i. Since relevance is ordinal, the distribution of X_i is multinomial. We will define p_ij = p(X_i = a_j) for 1 ≤ j ≤ 5 with Σ_{j=1}^{5} p_ij = 1. The expectation of X_i is E[X_i] = Σ_{j=1}^{5} p_ij a_j, and its variance is Var[X_i] = Σ_{j=1}^{5} p_ij a_j² − E[X_i]².
We can then express DCG as a random variable:

DCG_ℓ = X_1 + Σ_{i=2}^{ℓ} X_i / log_2 i
Its expectation and variance are:

E[DCG_ℓ] = E[X_1] + Σ_{i=2}^{ℓ} E[X_i] / log_2 i    (1)

Var[DCG_ℓ] = Var[X_1] + Σ_{i=2}^{ℓ} Var[X_i] / (log_2 i)² + 2 Σ_{i=2}^{ℓ} Cov(X_1, X_i) / log_2 i + 2 Σ_{1<i<j≤ℓ} Cov(X_i, X_j) / (log_2 i · log_2 j)    (2)
If the relevances of documents i and j are independent, the covariance Cov(X_i, X_j) is zero.
When some relevance judgments are not available, Eq. (1) and (2) can be used to estimate confidence
intervals for DCG. Thus we can compare ranking functions without having judged all the documents.
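As an illustration, the following sketch computes E[DCG_ℓ] and, under the independence assumption just mentioned (all covariance terms zero), Var[DCG_ℓ] from per-rank multinomial distributions; the numeric gains and all names are our own:

    import math

    GAINS = [4, 3, 2, 1, 0]  # assumed numeric values for a1 > ... > a5

    def dcg_moments(dists):
        # dists[i]: multinomial p(X = a_j) for the document at rank i + 1.
        e_total, v_total = 0.0, 0.0
        for rank, p in enumerate(dists, start=1):
            mean = sum(pj * a for pj, a in zip(p, GAINS))
            var = sum(pj * a * a for pj, a in zip(p, GAINS)) - mean ** 2
            disc = 1.0 if rank == 1 else 1.0 / math.log2(rank)
            e_total += disc * mean
            v_total += disc ** 2 * var  # covariance terms assumed zero
        return e_total, v_total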
3.2 Comparative Evaluation
If we only care about whether one index or ranking function outperforms another, the actual values
of DCG matter less than the sign of their difference. We now turn our attention to estimating the
sign of the difference with high confidence. We redefine DCG in terms of an arbitrary indexing of
documents, instead of the indexing by rank we used in the previous section. Let rj (i) be the rank
at which document i was retrieved by system j. We define the discounted gain gij of document i to
the DCG of system j as g_ij = rel_i if r_j(i) = 1, g_ij = rel_i / log_2 r_j(i) if 1 < r_j(i) ≤ ℓ, and g_ij = 0 if document i was not ranked by system j. Then we can write the difference in DCG for systems 1 and 2 as

ΔDCG_ℓ = DCG_ℓ¹ − DCG_ℓ² = Σ_{i=1}^{N} (g_i1 − g_i2)    (3)
where N is the number of documents in the entire collection. In practice we need only consider
those documents returned in the top ℓ by either of the two systems. We can define a random variable G_ij by replacing rel_i with X_i in g_ij; we can then compute the expectation of ΔDCG:

E[ΔDCG_ℓ] = Σ_{i=1}^{N} (E[G_i1] − E[G_i2])
We can compute its variance as well, which is omitted here due to space constraints.
3.3 Confidence in a Difference in DCG
Following Carterette et al. [5], we define the confidence in a difference in DCG as the probability that ΔDCG = DCG1 − DCG2 is less than zero. If P(ΔDCG < 0) ≥ 0.95, we say that we have 95% confidence that system 1 is worse than system 2: over all possible judgments that could be made to the unjudged documents, 95% of them will result in ΔDCG < 0.
To compute this probability, we must consider the distribution of ΔDCG. For web search, we are typically most interested in performance in the top 10 retrieved. Ten documents is too few for any convergence results, so instead we will estimate the confidence using Monte Carlo simulation. We simply draw relevance scores for the unjudged documents according to the multinomial distribution p(X_i) and calculate ΔDCG using those scores. After T trials, the probability that ΔDCG is less than 0 is simply the number of times ΔDCG was computed to be less than 0 divided by T.
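A minimal Monte Carlo sketch of this estimate (our own illustration; a judged document simply carries a point-mass distribution):

    import math
    import random

    GAINS = [4, 3, 2, 1, 0]  # assumed numeric gains for the five labels

    def discounted(rel, rank):
        if rank is None:  # document not ranked by this system
            return 0.0
        return rel if rank == 1 else rel / math.log2(rank)

    def prob_delta_dcg_negative(dists, ranks1, ranks2, trials=10000):
        # dists[i]: multinomial over GAINS for document i;
        # ranksJ[i]: rank of document i in system J, or None.
        neg = 0
        for _ in range(trials):
            rels = [random.choices(GAINS, weights=p)[0] for p in dists]
            delta = sum(discounted(r, a) - discounted(r, b)
                        for r, a, b in zip(rels, ranks1, ranks2))
            if delta < 0:
                neg += 1
        return neg / trials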
How can we estimate the distribution p(Xi )? In the absence of any other information, we may
assume it to be uniform over all five relevance labels. Relevance judgments that have been made in the past provide a useful prior distribution. As we shall see below, clicks are a useful source of
information that we can leverage to estimate this distribution.
3.4 Selecting Documents to Judge
If confidence estimates are low, we may want to obtain more relevance judgments to improve it. In
order to do as little work as necessary, we should select the documents that are likely to tell us a
lot about ΔDCG and therefore tell us a lot about confidence. The most informative document is the one that would have the greatest effect on ΔDCG. Since ΔDCG is linear, it is quite easy to determine which document should be judged next. Eq. (3) tells us to simply choose the document i that is unjudged and has maximum |E[G_i1] − E[G_i2]|. Algorithm 1 shows how relevance judgments
would be acquired iteratively until confidence is sufficiently high. This algorithm is provably optimal
in the sense that after k judgments, we know more about the difference in DCG than we would with
any other k judgments.
Algorithm 1 Iteratively select documents to judge until we have high confidence in ΔDCG.
1: while δ ≤ P(ΔDCG < 0) ≤ 1 − δ do
2:   i* ← arg max_i |E[G_i1] − E[G_i2]| over all unjudged documents i
3:   judge document i* (human annotator provides rel_i*)
4:   P(X_i* = rel_i*) ← 1
5:   P(X_i* ≠ rel_i*) ← 0
6:   estimate P(ΔDCG < 0) using Monte Carlo simulation
7: end while
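A Python sketch of this loop (our own illustration, reusing GAINS, discounted, and prob_delta_dcg_negative from the Monte Carlo sketch above; judge stands in for the human annotator):

    def acquire_judgments(dists, ranks1, ranks2, judge, delta=0.05):
        # Judge documents until P(dDCG < 0) leaves [delta, 1 - delta].
        def expected_gain(p, rank):
            mean = sum(pj * a for pj, a in zip(p, GAINS))
            return discounted(mean, rank)

        unjudged = set(range(len(dists)))
        conf = prob_delta_dcg_negative(dists, ranks1, ranks2)
        while delta <= conf <= 1 - delta and unjudged:
            # document with the largest possible effect on dDCG
            i = max(unjudged,
                    key=lambda d: abs(expected_gain(dists[d], ranks1[d]) -
                                      expected_gain(dists[d], ranks2[d])))
            rel = judge(i)  # human relevance label, one of GAINS
            dists[i] = [1.0 if a == rel else 0.0 for a in GAINS]
            unjudged.remove(i)
            conf = prob_delta_dcg_negative(dists, ranks1, ranks2)
        return conf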
4 Modeling Clicks and Relevance
Our goal is to model the relationship between clicks and relevance in a way that will allow us
to estimate a distribution of relevance p(Xi ) from the clicks on document i and on surrounding
documents. We first introduce a joint probability distribution including the query q, the relevance X_i of each document retrieved (where i indicates the rank), and their respective clickthrough rates c_i:

p(q, X_1, X_2, ..., X_ℓ, c_1, c_2, ..., c_ℓ) = P(q, X, c)    (4)
Boldface X and c indicate vectors of length ℓ.
Suppose we have a query for which we have few or no relevance judgments (perhaps because it has
only recently begun to appear in the logs, or because it reflects a trend for which new documents are
rapidly being indexed). We can nevertheless obtain click-through data. We are therefore interested
in the conditional probability p(X|q, c).
Note that X = {X_1, X_2, ...} is a vector of discrete ordinal variables; doing inference in this model is not easy. To simplify, we make the assumption that the relevance of document i and document j are conditionally independent given the query and the clickthrough rates:

p(X|q, c) = Π_{i=1}^{ℓ} p(X_i|q, c)    (5)
This gives us a separate model for each rank, while still conditioning the relevance at rank i on the
clickthrough rates at all of the ranks. We do not lose the dependence between relevance at each rank
and clickthrough rates on other ranks. We will see the importance of this empirically in Section 6.
The independence assumption allows us to model p(Xi ) using ordinal regression. Ordinal regression
is a generalization of logistic regression to a variable with two or more outcomes that are ranked by
preference.
The proportional odds model for our ordinal response variable is

log [ p(X > a_j | q, c) / p(X ≤ a_j | q, c) ] = α_j + β q + Σ_{i=1}^{ℓ} β_i c_i + Σ_{i<k} β_ik c_i c_k
where aj is one of the five relevance levels. The sums are over all ranks in the list; this models the
dependence of the relevance of the document to the clickthrough rates of everything else that was
retrieved, as well as any multiplicative dependence between the clickthrough rates at any two ranks.
After the model is trained, we can obtain p(X ≤ a_j | q, c) using the inverse logit function. Then p(X = a_j | q, c) = p(X ≤ a_j | q, c) − p(X ≤ a_{j−1} | q, c).
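For illustration, the arithmetic of this last step looks as follows (a generic cumulative-logit sketch with our own names, not the exact VGAM parameterization):

    import math

    def level_probs(alphas, linear_term):
        # alphas: one intercept per cumulative boundary P(X <= a_j),
        # in increasing order; linear_term: fitted covariate term.
        inv_logit = lambda z: 1.0 / (1.0 + math.exp(-z))
        cum = [inv_logit(a + linear_term) for a in alphas] + [1.0]
        return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]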
A generalization of the proportional odds model is the vector generalized additive model (VGAM)
described by Yee and Wild [19]. VGAM has the same relationship to ordinal regression that
GAM [8] has to logistic regression. It is useful in our case because clicks do not necessarily have
linear relationships to relevance. VGAM is implemented in the R library VGAM. Once the model is
trained, we have p(X = aj ) using the same arithmetic as for the proportional odds model.
5 Data
We obtained data from Yahoo! sponsored search logs for April 2006. Although we limited our data
to advertisements, there is no reason in principle our method should not be applicable to general web
search, since we see the same effects of bias towards the top of search results, to trusted sites and
so on. We have a total of 28,961 relevance judgments for 2,021 queries. The queries are a random
sample of all queries entered in late 2005 and early 2006. Relevance judgments are based on details
of the advertisement, such as title, summary, and URL.
We filtered out queries for which we had no relevance judgments. We then aggregated records
into distinct lists of advertisements for a query as follows: Each record L consisted of a query, a
search identification string, a set of advertisement ids, and for each advertisement id, the rank the
advertisement appeared at and the number of times it was clicked. Different sets of results for a
query, or results shown in a different order, were treated as distinct lists. We aggregated distinct lists
of results to obtain a clickthrough rate at each rank for a given list of results for a given query. The
clickthrough rate on each ad is simply the number of times it was clicked when served as part of list
L divided by the impressions, the number of times L was shown to any user. We did not adjust for
impression bias.
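A sketch of this aggregation step (the record fields are our own names; the paper describes the records without fixing a format):

    from collections import defaultdict

    def clickthrough_rates(records):
        # records: (query, list_id, rank, clicks, impressions) tuples.
        clicks = defaultdict(int)
        shown = defaultdict(int)
        for query, list_id, rank, c, n in records:
            clicks[(query, list_id, rank)] += c
            shown[(query, list_id, rank)] += n
        return {key: clicks[key] / shown[key]
                for key in clicks if shown[key] > 0}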
5.1 Dependence of Clicks on Entire Result List

Our model takes into account the clicks at all ranks to estimate the relevance of the document at position i. As the figure to the right shows, when there is an "Excellent" document at rank 1, its clickthrough rate varies depending on the relevance of the document at rank 2. For example, a "Perfect" document at rank 2 may decrease the likelihood of a click on the "Excellent" document at rank 1, while a "Fair" document at rank 2 may increase the clickthrough rate for rank 1. Clickthrough rate at rank 1 more than doubles as the relevance of the document at rank 2 drops from "Perfect" to "Fair".

[Figure: relative clickthrough rate at rank 1 (y-axis, 0.0 to 1.0) vs. relevance at rank 2 (x-axis: Bad, Fair, Good, Excellent, Perfect).]

6 Experiments

6.1 Fit of Document Relevance Model
We first want to test our proposed model (Eq. (5)) for predicting relevance from clicks. If the model
fits well, the distributions of relevance it produces should compare favorably to the actual relevance
of the documents. We will compare it to a simpler model that does not take into account the click
dependence. The two models are contrasted below:

dependence model: p(X|q, c) = Π_i p(X_i | q, c)
independence model: p(X|q, c) = Π_i p(X_i | q, c_i)
The latter models the relevance being conditional only on the query and its own clickthrough rate,
ignoring the clickthrough rates of the other items on the page. Essentially, it discretizes clicks into
relevance label bins at each rank using the query as an aid.
We removed all instances for which we had fewer than 500 impressions, then performed 10-fold
cross-validation. For simplicity, the query q is modeled as the aggregate clickthrough rate over
all results ever returned for that query. Both models produce a multinomial distribution for the
probability of relevance of a document p(X_i). Predicted relevance is the expected value of this distribution: E[X_i] = Σ_{j=1}^{5} p(X_i = a_j) a_j.
The correlation between predicted relevance and actual relevance starts from 0.754 at rank 1 and
trends downward as we move down the list; by rank 5 it has fallen to 0.527. Lower ranks are
clicked less often; there are fewer clicks to provide evidence for relevance. Correlations for the
independence model are significantly lower at each point.
Figure 1 depicts boxplots for each value of relevance for both models. Each box represents the
distribution of predictions for the true value on the x axis. The center line is the median prediction;
the edges are the 25% and 75% quantiles. The whiskers are roughly a 95% confidence interval,
with the points outside being outliers. When dependence is modeled (Figure 1(a)), the distributions
are much more clearly separated from each other, as shown by the fact that there is little overlap
in the boxes. The correlation between predicted and actual relevance is 18% higher, a statistically
significant difference.
6.2 Estimating DCG
Since our model works fairly well, we now turn our attention to using relevance predictions to
estimate DCG for the evaluation of search engines. Recall that we are interested in comparative
evaluation?determining the sign of the difference in DCG rather than its magnitude. Our confidence
in the sign is P (?DCG < 0), which is estimated using the simulation procedure described in
Section 3.3. The simulation samples from the multinomial distributions p(Xi ).
Methodology: To be able to calculate the exact DCG to evaluate our models, we need all ads
in a list to have a relevance judgment. Therefore our test set will consist of all of the lists for
which we have complete relevance judgments and at least 500 impressions. The remainder will
be used for training. The size of the test set is 1720 distinct lists. The training sets will include
all lists for which we have at least 200 impressions, over 5000 lists. After training the model, we
Figure 1: Predicted vs. actual relevance for rank 1 (boxplots; y-axis: expected relevance, 0.0 to 3.0; x-axis: Bad, Fair, Good, Excellent, Perfect). (a) Dependence model; ρ = 0.754. (b) No dependence modeled; ρ = 0.638. Correlation increases 18% when dependence of the relevance of the document at rank 1 on clickthrough at all ranks is modeled.
Confidence    Accuracy clicks-only    Accuracy 2 judgments
0.5 – 0.6     0.522                   0.572
0.6 – 0.7     0.617                   0.678
0.7 – 0.8     0.734                   0.697
0.8 – 0.9     0.818                   0.890
0.9 – 0.95    –                       0.918
0.95 – 1.0    –                       0.940
Table 1: Confidence vs. accuracy of predicting the better ranking for pairs of ranked lists using the
relevance predictions of our model based on clicks alone, and with two additional judgments for
each pair of lists. Confidence estimates are good predictions of accuracy.
predict relevance for the ads in the test set. We then use these expected relevances to calculate the
expectation E[DCG]. We will compare these expectations to the true DCG calculated using the
actual relevance judgments. As a baseline for automatic evaluation, we will compare to the average clickthrough rate on the list, E[CTR] = (1/k) Σ_i c_i, the naive approach described in our introduction.
We then estimate the confidence P(ΔDCG < 0) for pairs of ranked lists for the same query and compare it to the actual percentage of pairs that had ΔDCG < 0. Confidence should be less than or equal to this percentage; if it is, we can "trust" it in some sense.
[Figure: actual vs. predicted relevance for ads in the test set (y-axis: predicted relevance, 0.0 to 3.0; x-axis: Bad, Fair, Good, Excellent, Perfect).]
The figure to the right shows actual vs. predicted relevance for ads in
the test set. (This is slightly different from Figure 1: the earlier figure
shows predicted results for all data from cross-validation while this
one only shows predicted results on our test data.) The separation of
the boxes shows that our model is doing quite well on the testing data,
at least for rank 1. Performance degrades quite a bit as rank increases
(not shown), but it is important to note that the upper ranks have the
greatest effect on DCG, so getting those right is most important.
Results: We first looked at the ability of E[DCG] to predict DCG, as well as the ability of
the average clickthrough rate E[CTR] to predict DCG. The correlation between the latter two
is 0.622, while the correlation between the former two is 0.876. This means we can approximate DCG better using our model than just using the mean clickthrough rate as a predictor.
In Table 1, we have binned pairs of ranked lists by their estimated confidence. We computed the
accuracy of our predictions (the percent of pairs for which the difference in DCG was correctly
identified) for each bin. The first line shows results when evaluating with no additional relevance
judgments beyond those used for training the model: although confidence estimates tend to be low,
they are accurate in the sense that a confidence estimate predicts how well we were able to distinguish between the two lists. This means that the confidence estimates provide a guide for identifying which evaluations require "hole-filling" (additional judgments).
The second line shows how results improve when only two judgments are made. Confidence estimates increase a great deal (to a mean of over 0.8 from a mean of 0.6), and the accuracy of the
confidence estimates is not affected.
In general, performance is very good: using only the predictions of our model based on clicks, we
have a very good sense of the confidence we should have in our evaluation. Judging only two more
documents dramatically improves our confidence: there are many more pairs in high-confidence
bins after two judgments.
7 Conclusion
We have shown how to compare ranking functions using expected DCG. After a single initial training phase, ranking functions can be compared by predicting relevance from clickthrough rates. Estimates of confidence can be computed; the confidence gives a lower bound on how accurately
we have predicted that a difference exists. With just a few additional relevance judgments chosen cleverly, we significantly increase our success at predicting whether a difference exists. Using
our method, the cost of acquiring relevance judgments for web search evaluation is dramatically
reduced, when we have access to click data.
References
[1] E. Agichtein, E. Brill, and S. T. Dumais. Improving web search ranking by incorporating user behavior information. In Proceedings of SIGIR, pages 19–26, 2006.
[2] E. Agichtein, E. Brill, S. T. Dumais, and R. Ragno. Learning user interaction models for predicting web search result preferences. In Proceedings of SIGIR, pages 3–10, 2006.
[3] J. A. Aslam, V. Pavlu, and E. Yilmaz. A sampling technique for efficiently estimating measures of query retrieval performance using incomplete judgments. In Proceedings of the 22nd ICML Workshop on Learning with Partially Classified Training Data, pages 57–66, 2005.
[4] A. Broder. A taxonomy of web search. SIGIR Forum, 36(2):3–10, 2002.
[5] B. Carterette, J. Allan, and R. K. Sitaraman. Minimal test collections for retrieval evaluation. In Proceedings of SIGIR, pages 268–275, 2006.
[6] G. V. Cormack, C. R. Palmer, and C. L. Clarke. Efficient construction of large test collections. In Proceedings of SIGIR, pages 282–289, 1998.
[7] G. Dupret, B. Piwowarski, C. Hurtado, and M. Mendoza. A statistical model of query log generation. In SPIRE, LNCS 4209, pages 217–228. Springer, 2006.
[8] T. Hastie and R. Tibshirani. Generalized additive models. Statistical Science, 1:297–318, 1986.
[9] K. Jarvelin and J. Kekalainen. Cumulated gain-based evaluation of IR techniques. ACM Trans. Inf. Syst., 20(4):422–446, 2002.
[10] T. Joachims. Optimizing search engines using clickthrough data. In Proceedings of KDD, pages 133–142, 2002.
[11] T. Joachims. Evaluating retrieval performance using clickthrough data. In Text Mining, pages 79–96, 2003.
[12] T. Joachims, L. A. Granka, B. Pan, H. Hembrooke, and G. Gay. Accurately interpreting clickthrough data as implicit feedback. In Proceedings of SIGIR, pages 154–161, 2005.
[13] F. Radlinski and T. Joachims. Minimally invasive randomization for collecting unbiased preferences from clickthrough logs. In Proceedings of AAAI, 2006.
[14] M. Richardson, E. Dominowska, and R. Ragno. Predicting clicks: Estimating the click-through rate for new ads. In Proceedings of WWW, 2007.
[15] I. Soboroff. Dynamic test collections: Measuring search effectiveness on the live web. In Proceedings of SIGIR, pages 276–283, 2006.
[16] I. Soboroff, C. Nicholas, and P. Cahan. Ranking retrieval systems without relevance judgments. In Proceedings of SIGIR, pages 66–73, 2001.
[17] L. Wasserman. All of Nonparametric Statistics. Springer, 2006.
[18] S. N. Wood. Thin plate regression splines. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 65(1):95–114, 2003.
[19] T. W. Yee and C. J. Wild. Vector generalized additive models. Journal of the Royal Statistical Society, Series B (Methodological), 58(3):481–493, 1996.
[20] J. Zobel. How reliable are the results of large-scale information retrieval experiments? In Proceedings of SIGIR, pages 307–314, 1998.
2,415 | 3,191 | Random Projections for Manifold Learning
Chinmay Hegde
ECE Department
Rice University
[email protected]
Michael B. Wakin
EECS Department
University of Michigan
[email protected]
Richard G. Baraniuk
ECE Department
Rice University
[email protected]
Abstract
We propose a novel method for linear dimensionality reduction of manifold modeled data. First, we show that with a small number M of random projections of
sample points in RN belonging to an unknown K-dimensional Euclidean manifold, the intrinsic dimension (ID) of the sample set can be estimated to high accuracy. Second, we rigorously prove that using only this set of random projections,
we can estimate the structure of the underlying manifold. In both cases, the number of random projections required is linear in K and logarithmic in N , meaning
that K < M ≪ N. To handle practical situations, we develop a greedy algorithm
to estimate the smallest size of the projection space required to perform manifold
learning. Our method is particularly relevant in distributed sensing systems and
leads to significant potential savings in data acquisition, storage and transmission
costs.
1 Introduction
Recently, we have witnessed a tremendous increase in the sizes of data sets generated and processed
by acquisition and computing systems. As the volume of the data increases, memory and processing
requirements need to correspondingly increase at the same rapid pace, and this is often prohibitively
expensive. Consequently, there has been considerable interest in the task of effective modeling of
high-dimensional observed data and information; such models must capture the structure of the
information content in a concise manner.
A powerful data model for many applications is the geometric notion of a low-dimensional manifold. Data that possesses merely K "intrinsic" degrees of freedom can be assumed to lie on a
K-dimensional manifold in the high-dimensional ambient space. Once the manifold model is identified, any point on it can be represented using essentially K pieces of information. Thus, algorithms
in this vein of dimensionality reduction attempt to learn the structure of the manifold given high-dimensional training data.
While most conventional manifold learning algorithms are adaptive (i.e., data dependent) and nonlinear (i.e., involve construction of a nonlinear mapping), a linear, nonadaptive manifold dimensionality reduction technique has recently been introduced that employs random projections [1].
Consider a K-dimensional manifold M in the ambient space RN and its projection onto a random
subspace of dimension M = CK log(N); note that K < M ≪ N. The result of [1] is that the
pairwise metric structure of sample points from M is preserved with high accuracy under projection
from RN to RM .
Figure 1: Manifold learning using random projections. (a) Input data consisting of 1000 images of a shifted disk, each of size N = 64×64 = 4096. (b) True θ1 and θ2 values of the sampled data. (c,d) Isomap embedding learned from (c) original data in R^N, and (d) a randomly projected version of the data into R^M with M = 15.
This result has far reaching implications. Prototypical devices that directly and inexpensively acquire random projections of certain types of data (signals, images, etc.) have been developed [2, 3];
these devices are hardware realizations of the mathematical tools developed in the emerging area of
Compressed Sensing (CS) [4, 5]. The theory of [1] suggests that a wide variety of signal processing
tasks can be performed directly on the random projections acquired by these devices, thus saving
valuable sensing, storage and processing costs.
The advantages of random projections extend even to cases where the original data is available in
the ambient space RN . For example, consider a wireless network of cameras observing a scene. To
perform joint image analysis, the following steps might be executed:
1. Collate: Each camera node transmits its respective captured image (of size N ) to a central
processing unit.
2. Preprocess: The central processor estimates the intrinsic dimension K of the underlying
image manifold.
3. Learn: The central processor performs a nonlinear embedding of the data points (for instance, using Isomap [6]) into a K-dimensional Euclidean space, using the estimate of
K from the previous step.
In situations where N is large and communication bandwidth is limited, the dominating costs will be
in the first transmission/collation step. On the one hand, to reduce the communication needs one may
perform nonlinear image compression (such as JPEG) at each node before transmitting to the central
processor. But this requires a good deal of processing power at each sensor, and the compression
would have to be undone during the learning step, thus adding to overall computational costs. On the
other hand, every camera could encode its image by computing (either directly or indirectly) a small
number of random projections to communicate to the central processor. These random projections
are obtained by linear operations on the data, and thus are cheaply computed. Clearly, in many
situations it will be less expensive to store, transmit, and process such randomly projected versions
of the sensed images. The question now becomes: how much information about the manifold is
conveyed by these random projections, and is there any advantage in analyzing such measurements from
a manifold learning perspective?
In this paper, we provide theoretical and experimental evidence that reliable learning of a K-dimensional manifold can be performed not just in the high-dimensional ambient space R^N but also
in an intermediate, much lower-dimensional random projection space RM , where M = CK log(N ).
See, for example, the toy example of Figure 1. Our contributions are as follows. First, we present a
theoretical bound on the minimum number of measurements per sample point required to estimate
the intrinsic dimension (ID) of the underlying manifold, up to an accuracy level comparable to that
of the Grassberger-Procaccia algorithm [7, 8], a widely used geometric approach for dimensionality
estimation. Second, we present a similar bound on the number of measurements M required for
Isomap [6] (a popular manifold learning algorithm) to be "reliably" used to discover the nonlinear
structure of the manifold. In both cases, M is shown to be linear in K and logarithmic in N . Third,
we formulate a procedure to determine, in practical settings, this minimum value of M with no a
priori information about the data points. This paves the way for a weakly adaptive, linear algorithm
(ML-RP) for dimensionality reduction and manifold learning.
The rest of the paper is organized as follows. Section 2 recaps the manifold learning approaches we
utilize. Section 3 presents our main theoretical contributions, namely, the bounds on M required to perform reliable dimensionality estimation and manifold learning from random projections. Section 4 describes a new adaptive algorithm that estimates the minimum value of M required to provide
a faithful representation of the data so that manifold learning can be performed. Experimental results on a variety of real and simulated data are provided in Section 5. Section 6 concludes with
discussion of potential applications and future work.
2 Background
An important input parameter for all manifold learning algorithms is the intrinsic dimension (ID) of
a point cloud. We aim to embed the data points in as low-dimensional a space as possible in order to
avoid the curse of dimensionality. However, if the embedding dimension is too small, then distinct
data points might be collapsed onto the same embedded point. Hence a natural question to ask is:
given a point cloud in N -dimensional Euclidean space, what is the dimension of the manifold that
best captures the structure of this data set? This problem has received considerable attention in the
literature and remains an active area of research [7, 9, 10].
For the purposes of this paper, we focus our attention on the Grassberger-Procaccia (GP) [7] algorithm for ID estimation. This is a widely used geometric technique that takes as input the set of
pairwise distances between sample points. It then computes the scale-dependent correlation dimension of the data, defined as follows.
Definition 2.1 Suppose X = (x_1, x_2, ..., x_n) is a finite dataset of underlying dimension K. Define

C_n(r) = (1 / (n(n − 1))) Σ_{i≠j} I_{‖x_i − x_j‖ < r},

where I is the indicator function. The scale-dependent correlation dimension of X is defined as

D̂_corr(r_1, r_2) = (log C_n(r_1) − log C_n(r_2)) / (log r_1 − log r_2).
The best possible approximation to K (call this K̂) is obtained by fixing r_1 and r_2 to the biggest range over which the plot is linear and calculating D̂_corr in that range. There are a number of
practical issues involved with this approach; indeed, it has been shown that geometric ID estimation
algorithms based on finite sampling yield biased estimates of intrinsic dimension [10, 11]. In our
theoretical derivations, we do not attempt to take into account this bias; instead, we prove that
the effect of running the GP algorithm on a sufficient number of random projections produces a
dimension estimate that well-approximates the GP estimate obtained from analyzing the original
point cloud.
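A direct sketch of the estimator of Definition 2.1 (implementation choices are our own):

    import numpy as np

    def correlation_dimension(X, r1, r2):
        # Scale-dependent correlation dimension of an n x N point cloud X.
        n = X.shape[0]
        diffs = X[:, None, :] - X[None, :, :]
        d = np.sqrt((diffs ** 2).sum(axis=-1))
        pair_dists = d[np.triu_indices(n, k=1)]  # each unordered pair once

        def C(r):
            return 2.0 * np.count_nonzero(pair_dists < r) / (n * (n - 1))

        return (np.log(C(r1)) - np.log(C(r2))) / (np.log(r1) - np.log(r2))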
The estimate K̂ of the ID of the point cloud is used by nonlinear manifold learning algorithms (e.g., Isomap [6], Locally Linear Embedding (LLE) [12], and Hessian Eigenmaps [13], among many others) to generate a K̂-dimensional
coordinate representation of the input data points. Our main
analysis will be centered around Isomap. Isomap attempts to preserve the metric structure of the
manifold, i.e., the set of pairwise geodesic distances of any given point cloud sampled from the
manifold. In essence, Isomap approximates the geodesic distances using a suitably defined graph
and performs classical multidimensional scaling (MDS) to obtain a reduced K-dimensional representation of the data [6]. A key parameter in the Isomap algorithm is the residual variance, which is
equivalent to the stress function encountered in classical MDS. The residual variance is a measure
of how well the given dataset can be embedded into a Euclidean space of dimension K. In the next
section, we prescribe a specific number of measurements per data point so that performing Isomap
on the randomly projected data yields a residual variance that is arbitrarily close to the variance
produced by Isomap on the original dataset.
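For illustration, the residual variance can be computed from scikit-learn's Isomap as 1 − ρ² between the graph geodesic distances and the embedded Euclidean distances (the standard definition; the neighborhood size is an assumed choice):

    import numpy as np
    from sklearn.manifold import Isomap

    def isomap_residual_variance(X, K, n_neighbors=8):
        iso = Isomap(n_neighbors=n_neighbors, n_components=K)
        Y = iso.fit_transform(X)
        D_geo = iso.dist_matrix_  # graph geodesic distances
        D_emb = np.sqrt(((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1))
        iu = np.triu_indices(X.shape[0], k=1)
        rho = np.corrcoef(D_geo[iu], D_emb[iu])[0, 1]
        return 1.0 - rho ** 2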
We conclude this section by revisiting the results derived in [1], which form the basis for our development. Consider the effect of projecting a smooth K-dimensional manifold residing in RN
onto a random M -dimensional subspace (isomorphic to RM ). If M is sufficiently large, a stable
near-isometric embedding of the manifold in the lower-dimensional subspace is ensured. The key
advantage is that M needs only to be linear in the intrinsic dimension of the manifold K. In addition,
M depends only logarithmically on other properties of the manifold, such as its volume, curvature,
etc. The result can be summarized in the following theorem.
Theorem 2.2 [1] Let M be a compact K-dimensional manifold in R^N having volume V and condition number 1/τ. Fix 0 < ε < 1 and 0 < ρ < 1. Let Φ be a random orthoprojector¹ from R^N to R^M with

M ≥ O( K log(N V τ⁻¹) log(ρ⁻¹) / ε² ).    (1)

Suppose M < N. Then, with probability exceeding 1 − ρ, the following statement holds: for every pair of points x, y ∈ M and i ∈ {1, 2},

(1 − ε) √(M/N) ≤ d_i(Φx, Φy) / d_i(x, y) ≤ (1 + ε) √(M/N),    (2)

where d_1(x, y) (respectively, d_2(x, y)) stands for the geodesic (respectively, ℓ_2) distance between points x and y.
points x and y.
The condition number ? controls the local, as well as global, curvature of the manifold ? the smaller
the ? , the less well-conditioned the manifold with higher ?twistedness? [1]. Theorem 2.2 has been
proved by first specifying a finite high-resolution sampling on the manifold, the nature of which
depends on its intrinsic properties; for instance, a planar manifold can be sampled coarsely. Then the
Johnson-Lindenstrauss Lemma [14] is applied to these points to guarantee the so-called ?isometry
constant? ?, which is nothing but (2).
3 Bounds on the performance of ID estimation and manifold learning algorithms under random projection
We saw above that random projections essentially ensure that the metric structure of a high-dimensional input point cloud (i.e., the set of all pairwise distances between points belonging to the dataset) is preserved up to a distortion that depends on ε. This immediately suggests that geometry-based ID estimation and manifold learning algorithms could be applied to the lower-dimensional,
randomly projected version of the dataset.
The first of our main results establishes a sufficient dimension of random projection M required to
maintain the fidelity of the estimated correlation dimension using the GP algorithm. The proof of
the following is detailed in [15].
Theorem 3.1 Let M be a compact K-dimensional manifold in R^N having volume V and condition number 1/τ. Let X = {x_1, x_2, ...} be a sequence of samples drawn from a uniform density supported on M. Let K̂ be the dimension estimate of the GP algorithm on X over the range (r_min, r_max). Let β = ln(r_max / r_min). Fix 0 < ε < 1 and 0 < ρ < 1. Suppose the following condition holds:

r_max < τ/2.    (3)

Let Φ be a random orthoprojector from R^N to R^M with M < N and

M ≥ O( K log(N V τ⁻¹) log(ρ⁻¹) / (β² ε²) ).    (4)

Let K̂_Φ be the estimated correlation dimension on ΦX in the projected space over the range (r_min √(M/N), r_max √(M/N)). Then, K̂_Φ is bounded by

(1 − ε) K̂ ≤ K̂_Φ ≤ (1 + ε) K̂    (5)

with probability exceeding 1 − ρ.
Theorem 3.1 is a worst-case bound and serves as a sufficient condition for stable ID estimation using random projections. Thus, if we choose sufficiently small values for ε and ρ, we are guaranteed estimation accuracy levels as close as desired to those obtained with ID estimation in the original signal space. Note that the bound on K̂_Φ is multiplicative. This implies that in the worst case, the number of projections required to estimate K̂_Φ very close to K̂ (say, within integer roundoff error) becomes higher with increasing manifold dimension K.

¹Such a matrix is formed by orthogonalizing M vectors of length N having, for example, i.i.d. Gaussian or Bernoulli distributed entries.
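A sketch of this construction via a QR decomposition of a Gaussian matrix (our own implementation choice):

    import numpy as np

    def random_orthoprojector(M, N, seed=None):
        # M x N matrix with orthonormal rows, per footnote 1.
        rng = np.random.default_rng(seed)
        A = rng.standard_normal((N, M))
        Q, _ = np.linalg.qr(A)  # reduced QR: N x M, orthonormal columns
        return Q.T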
The second of our main results prescribes the minimum dimension of random projections required to maintain the residual variance produced by Isomap in the projected domain within an arbitrary additive constant of that produced by Isomap with the full data in the ambient space. The proof of this theorem [15] relies on the proof technique used in [16].
Theorem 3.2 Let M be a compact K-dimensional manifold in R^N having volume V and condition number 1/τ. Let X = {x_1, x_2, ..., x_n} be a finite set of samples drawn from a sufficiently fine density supported on M. Let Φ be a random orthoprojector from R^N to R^M with M < N. Fix 0 < ε < 1 and 0 < ρ < 1. Suppose

M ≥ O( K log(N V τ⁻¹) log(ρ⁻¹) / ε² ).

Define the diameter Γ of the dataset as follows:

Γ = max_{1≤i,j≤n} d_iso(x_i, x_j),

where d_iso(x, y) stands for the Isomap estimate of the geodesic distance between points x and y. Define R and R_Φ to be the residual variances obtained when Isomap generates a K-dimensional embedding of the original dataset X and the projected dataset ΦX, respectively. Under suitable constructions of the Isomap connectivity graphs, R_Φ is bounded by

R_Φ < R + C Γ² ε

with probability exceeding 1 − ρ. C is a function only of the number of sample points n.
Since the choice of ε is arbitrary, we can choose a large enough M (which is still only logarithmic in N) such that the residual variance yielded by Isomap on the randomly projected version of the dataset is arbitrarily close to the variance produced with the data in the ambient space. Again, this result is derived from a worst-case analysis. Note that Γ acts as a measure of the scale of the dataset. In practice, we may enforce the condition that the data is normalized (i.e., every pairwise distance calculated by Isomap is divided by Γ). This ensures that the K-dimensional embedded representation is contained within a ball of unit norm centered at the origin.
Thus, we have proved that with only an M-dimensional projection of the data (with M ≪ N)
we can perform ID estimation and subsequently learn the structure of a K-dimensional manifold,
up to accuracy levels obtained by conventional methods. In Section 4, we utilize these sufficiency
results to motivate an algorithm for performing practical manifold structure estimation using random
projections.
4 How many random projections are enough?
In practice, it is hard to know or estimate the parameters V and τ of the underlying manifold. Also, since we have no a priori information regarding the data, it is impossible to fix K̂ and R, the outputs of GP and Isomap on the point cloud in the ambient space. Thus, often, we may not be able to fix a definitive value for M. To circumvent this problem we develop the following empirical procedure, which we dub ML-RP, for manifold learning using random projections.
We initialize M to a small number and compute M random projections of the data set X = {x_1, x_2, ..., x_n} (here n denotes the number of points in the point cloud). Using the set ΦX = {Φx : x ∈ X}, we estimate the intrinsic dimension using the GP algorithm. This estimate, say K̂, is used by the Isomap algorithm to produce an embedding into K̂-dimensional space. The residual variance produced by this operation is recorded. We then increment M by 1 and repeat the entire process. The algorithm terminates when the residual variance obtained is smaller than some tolerance parameter ∆. A full-length description is provided in Algorithm 1.
The essence of ML-RP is as follows. A sufficient number M of random projections is determined by
a nonlinear procedure (i.e., sequential computation of Isomap residual variance) so that conventional
Algorithm 1 ML-RP
M ← 1
Φ ← random orthoprojector of size M × N
while residual variance ≥ ∆ do
  Run the GP algorithm on ΦX.
  Use the ID estimate (K̂) to perform Isomap on ΦX.
  Calculate the residual variance.
  M ← M + 1
  Add one row to Φ.
end while
return M, K̂
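A Python sketch of ML-RP (implementation choices are our own; estimate_id stands for a wrapper around the GP estimator of Section 2, and isomap_residual_variance is the helper sketched there):

    import numpy as np

    def ml_rp(X, tol, max_M=None):
        # Grow a random orthoprojector one row at a time until the Isomap
        # residual variance of the projected data drops below tol.
        n, N = X.shape
        max_M = max_M or N
        Phi = np.zeros((0, N))
        M, K_hat = 0, 1
        while M < max_M:
            row = np.random.randn(N)
            if M > 0:  # orthogonalize against the existing rows
                row -= Phi.T @ (Phi @ row)
            Phi = np.vstack([Phi, row / np.linalg.norm(row)])
            M += 1
            Y = X @ Phi.T  # M-dimensional projections of all points
            K_hat = estimate_id(Y)
            if isomap_residual_variance(Y, K_hat) < tol:
                break
        return M, K_hat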
Figure 2: Performance of ID estimation using GP as a function of random projections. Sample size n = 1000,
ambient dimension N = 150. (a) Estimated intrinsic dimension for underlying hyperspherical manifolds of
increasing dimension. The solid line indicates the value of the ID estimate obtained by GP performed on the
original data. (b) Minimum number of projections required for GP to work with 90% accuracy as compared to
GP on native data.
manifold learning does almost as well on the projected dataset as the original. On the other hand,
the random linear projections provide a faithful representation of the data in the geodesic sense.
In this manner, ML-RP helps determine the number of rows that Φ requires in order to act as an
operator that preserves metric structure. Therefore, ML-RP can be viewed as an adaptive method
for linear reduction of data dimensionality. It is only weakly adaptive in the sense that only the
stopping criterion for ML-RP is determined by monitoring the nature of the projected data.
The results derived in Section 3 can be viewed as convergence proofs for ML-RP. The existence of
a certain minimum number of measurements for any chosen error value ε ensures that eventually, M in the ML-RP algorithm is going to become high enough to ensure "good" Isomap performance. Also, due to the built-in parsimonious nature of ML-RP, we are ensured not to "overmeasure" the manifold, i.e., just the requisite number of projections of points is obtained.
5 Experimental results
This section details the results of simulations of ID estimation and subsequent manifold learning on
real and synthetic datasets. First, we examine the performance of the GP algorithm on random projections of K-dimensional hyperspheres embedded in an ambient space of dimension
N = 150. Figure 2(a) shows the variation of the dimension estimate produced by GP as a function
of the number of projections M . The sampled dataset in each of the cases is obtained from drawing
n = 1000 samples from a uniform distribution supported on a hypersphere of corresponding dimension. Figure 2(b) displays the minimum number of projections per sample point required to estimate
the scale-dependent correlation dimension directly from the random projections, up to 10% error,
when compared to GP estimation on the original data.
We observe that the ID estimate stabilizes quickly with increasing number of projections, and indeed
converges to the estimate obtained by running the GP algorithm on the original data. Figure 2(b)
illustrates the variation of the minimum required projection dimension M vs. K, the intrinsic
Figure 3: Standard databases. Ambient dimension for the face database N = 4096; ambient dimension for the
hand rotation database N = 3840.
Figure 4: Performance of ML-RP on the above databases. (left) ML-RP on the face database (N = 4096).
Good approximations are obtained for M > 50. (right) ML-RP on the hand rotation database (N = 3840). For
M > 60, the Isomap variance is indistinguishable from the variance obtained in the ambient space.
dimension of the underlying manifold. We plot the intrinsic dimension of the dataset against the minimum
number of projections required such that K̂_Φ is within 10% of the conventional GP estimate K̂ (this
is equivalent to choosing δ = 0.1 in Theorem 3.1). We observe the predicted linearity (Theorem 3.1)
in the variation of M vs. K.
Finally, we turn our attention to two common datasets (Figure 3) found in the literature on dimension
estimation: the face database2 [6], and the hand rotation database [17].3 The face database is a
collection of 698 artificial snapshots of a face (N = 64 × 64 = 4096) varying under 3 degrees of
freedom: 2 angles for pose and 1 for lighting. The signals are therefore believed to reside
on a 3D manifold in an ambient space of dimension 4096. The hand rotation database is a set of
90 images (N = 64 × 60 = 3840) of rotations of a hand holding an object. Although the image
appearance manifold is ostensibly one-dimensional, estimators in the literature always overestimate
its ID [11].
Random projections of each sample in the databases were obtained by computing the inner product
of the image samples with an increasing number of rows of the random orthoprojector Φ. We
note that in the case of the face database, for M > 60, the Isomap variance on the randomly
projected points closely approximates the variance obtained with full image data. This behavior of
convergence of the variance to the best possible value is even more sharply observed in the hand
rotation database, in which the two variance curves are indistinguishable for M > 60. These results
are particularly encouraging and demonstrate the validity of the claims made in Section 3.
6 Discussion
Our main theoretical contributions in this paper are the explicit values for the lower bounds on the
minimum number of random projections required to perform ID estimation and subsequent manifold
learning using Isomap, with high guaranteed accuracy levels. We also developed an empirical greedy
algorithm (ML-RP) for practical situations. Experiments on simple cases, such as uniformly generated hyperspheres of varying dimension, and more complex situations, such as the image databases
displayed in Figure 3, provide sufficient evidence of the nature of the bounds described above.
2 http://isomap.stanford.edu
3 http://vasc.ri.cmu.edu//idb/html/motion/hand/index.html. Note that we use a subsampled version of the
database used in the literature, both in terms of resolution of the image and sampling of the manifold.
The method of random projections is thus a powerful tool for ensuring the stable embedding of low-dimensional manifolds into an intermediate space of reasonable size. The motivation for developing
results and algorithms that involve random measurements of high-dimensional data is significant,
particularly due to the increasing attention that Compressive Sensing (CS) has received recently. It
is now possible to think of settings involving a huge number of low-power devices that inexpensively capture, store, and transmit a very small number of measurements of high-dimensional data.
ML-RP is applicable in all such situations. In situations where the bottleneck lies in the transmission
of the data to the central processing node, ML-RP provides a simple solution to the manifold learning problem and ensures that with a minimal amount of transmitted information, effective manifold
learning can be performed. The metric structure of the projected dataset upon termination of ML-RP closely resembles that of the original dataset with high probability; thus, ML-RP can be viewed
as a novel adaptive algorithm for finding an efficient, reduced representation of data of very large
dimension.
References
[1] R. G. Baraniuk and M. B. Wakin. Random projections of smooth manifolds. 2007. To appear in Foundations of Computational Mathematics.
[2] M. B. Wakin, J. N. Laska, M. F. Duarte, D. Baron, S. Sarvotham, D. Takhar, K. F. Kelly, and R. G. Baraniuk. An architecture for compressive imaging. In IEEE International Conference on Image Processing (ICIP), pages 1273–1276, Oct. 2006.
[3] S. Kirolos, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, T. Ragheb, Y. Massoud, and R. G. Baraniuk. Analog-to-information conversion via random demodulation. In Proc. IEEE Dallas Circuits and Systems Workshop (DCAS), 2006.
[4] E. J. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Info. Theory, 52(2):489–509, Feb. 2006.
[5] D. L. Donoho. Compressed sensing. IEEE Trans. Info. Theory, 52(4):1289–1306, September 2006.
[6] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290:2319–2323, 2000.
[7] P. Grassberger and I. Procaccia. Measuring the strangeness of strange attractors. Physica D Nonlinear Phenomena, 9:189–208, 1983.
[8] J. Theiler. Statistical precision of dimension estimators. Physical Review A, 41(6):3038–3051, 1990.
[9] F. Camastra. Data dimensionality estimation methods: a survey. Pattern Recognition, 36:2945–2954, 2003.
[10] J. A. Costa and A. O. Hero. Geodesic entropic graphs for dimension and entropy estimation in manifold learning. IEEE Trans. Signal Processing, 52(8):2210–2221, August 2004.
[11] E. Levina and P. J. Bickel. Maximum likelihood estimation of intrinsic dimension. In Advances in NIPS, volume 17. MIT Press, 2005.
[12] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323–2326, 2000.
[13] D. Donoho and C. Grimes. Hessian eigenmaps: locally linear embedding techniques for high dimensional data. Proc. of National Academy of Sciences, 100(10):5591–5596, 2003.
[14] S. Dasgupta and A. Gupta. An elementary proof of the JL lemma. Technical Report TR-99-006, University of California, Berkeley, 1999.
[15] C. Hegde, M. B. Wakin, and R. G. Baraniuk. Random projections for manifold learning - proofs and analysis. Technical Report TREE 0710, Rice University, 2007.
[16] M. Bernstein, V. de Silva, J. Langford, and J. Tenenbaum. Graph approximations to geodesics on embedded manifolds, 2000. Technical report, Stanford University.
[17] B. Kégl. Intrinsic dimension estimation using packing numbers. In Advances in NIPS, volume 14. MIT Press, 2002.
2,416 | 3,192 | Anytime Induction of Cost-sensitive Trees
Saher Esmeir
Computer Science Department
Technion–Israel Institute of Technology
Haifa 32000, Israel
[email protected]
Shaul Markovitch
Computer Science Department
Technion–Israel Institute of Technology
Haifa 32000, Israel
[email protected]
Abstract
Machine learning techniques are increasingly being used to produce a wide range
of classifiers for complex real-world applications that involve nonuniform testing
costs and misclassification costs. As the complexity of these applications grows,
the management of resources during the learning and classification processes becomes a challenging task. In this work we introduce ACT (Anytime Cost-sensitive
Trees), a novel framework for operating in such environments. ACT is an anytime
algorithm that allows trading computation time for lower classification costs. It
builds a tree top-down and exploits additional time resources to obtain better estimations for the utility of the different candidate splits. Using sampling techniques
ACT approximates for each candidate split the cost of the subtree under it and favors the one with a minimal cost. Due to its stochastic nature ACT is expected to
be able to escape local minima, into which greedy methods may be trapped. Experiments with a variety of datasets were conducted to compare the performance
of ACT to that of the state of the art cost-sensitive tree learners. The results show
that for most domains ACT produces trees of significantly lower costs. ACT is
also shown to exhibit good anytime behavior with diminishing returns.
1 Introduction
Suppose that a medical center has decided to use machine learning techniques to induce a diagnostic
tool from records of previous patients. The center aims to obtain a comprehensible model, with low
expected test costs (the costs of testing attribute values) and high expected accuracy. Moreover, in
many cases there are costs associated with the predictive errors. In such a scenario, the task of the
inducer is to produce a model with low expected test costs and low expected misclassification costs.
A good candidate for achieving the goals of comprehensibility and reduced costs is a decision
tree model. Decision trees are easily interpretable because they mimic the way doctors think
[13][chap. 9]. In the context of cost-sensitive classification, decision trees are the natural form
of representation: they ask only for the values of the features along a single path from the root to
a leaf. Indeed, cost-sensitive trees have been the subject of many research efforts. Several works
proposed learners that consider different misclassification costs [7, 18, 6, 9, 10, 14, 1]. These methods, however, do not consider test costs. Other authors designed tree learners that take into account
test costs, such as IDX [16], CSID3 [22], and EG2 [17]. These methods, however, do not consider
misclassification costs. The medical center scenario exemplifies the need for considering both types
of cost together: doctors do not perform a test before considering both its cost and its importance to
the diagnosis.
Minimal Cost trees, a method that attempts to minimize both types of costs simultaneously, has been
proposed in [21]. A tree is built top-down. The immediate reduction in total cost each split results
in is estimated, and a split with the maximal reduction is selected. Although efficient, the Minimal
Cost approach can be trapped into a local minimum and produce trees that are not globally optimal.
1
[Figure 1 diagrams: left, a tree testing a9 and a10, annotated cost(a1-10) = $$; right, a tree rooted at a1 with subtrees over a7, a9 and a4, a6, annotated cost(a1-8) = $$ and cost(a9,10) = $$$$$$.]
Figure 1: A difficulty for greedy learners (left). Importance of context-based evaluation (right).
For example, consider a problem with 10 attributes a1-10, of which only a9 and a10 are relevant.
The cost of a9 and a10, however, is significantly higher than the others but lower than the cost
of misclassification. This may hide their usefulness, and mislead the learner into fitting a large expensive
tree. The problem is intensified if a9 and a10 are interdependent with a low immediate information
gain (e.g., a9 ⊕ a10), as illustrated in Figure 1 (left). In such a case, even if the costs were uniform,
local measures would fail in recognizing the relevance of a9 and a10 and other attributes might be
preferred. The Minimal Cost method is appealing when resources are very limited. However, it
requires a fixed runtime and cannot exploit additional resources. In many real-life applications, we
are willing to wait longer if a better tree can be induced. For example, due to the importance of the
model, the medical center is ready to allocate 1 week to learn it. Algorithms that can exploit more
time to produce solutions of better quality are called anytime algorithms [5].
One way to exploit additional time when searching for a tree of lower costs is to widen the search
space. In [2] the cost-sensitive learning problem is formulated as a Markov Decision Process (MDP)
and a systematic search is used to solve the MDP. Although the algorithm searches for an optimal
strategy, the time and memory limits prevent it from always finding optimal solutions.
The ICET algorithm [24] was a pioneer in searching non-greedily for a tree that minimizes both
costs together. ICET uses genetic search to produce a new set of costs that reflects both the original
costs and the contribution each attribute can make to reduce misclassification costs. Then it builds
a tree using the greedy EG2 algorithm but with the evolved costs instead of the original ones. ICET
was shown to produce trees of lower total cost. It can use additional time resources to produce more
generations and hence to widen its search in the space of costs. Nevertheless, it is limited in the
way it can exploit extra time. Firstly, it builds the final tree using EG2. EG2 prefers attributes with
high information gain (and low test cost). Therefore, when the concept to learn hides interdependency between attributes, the greedy measure may underestimate the usefulness of highly relevant
attributes, resulting in more expensive trees. Secondly, even if ICET may overcome the above problem by reweighting the attributes, it searches the space of parameters globally, regardless of the
context. This imposes a problem if an attribute is important in one subtree but useless in another. To
illustrate the above consider the concept in Figure 1 (right). There are 10 attributes of similar costs.
Depending on the value of a1, the target concept is a7 ⊕ a9 or a4 ⊕ a6. Due to interdependencies,
all attributes will have a low gain. Because ICET assigns costs globally, they will have similar costs
as well. Therefore, ICET will not be able to recognize which attribute is relevant in what context.
Recently, we have introduced LSID3, a cost-insensitive algorithm, which can induce more accurate
trees when given more time [11]. The algorithm uses stochastic sampling techniques to evaluate
candidate splits. It is not designed, however, to minimize test and misclassification costs. In this
work we build on LSID3 and propose ACT, an Anytime Cost-sensitive Tree learner that can exploit
additional time to produce trees of lower costs. Applying the sampling mechanism to the cost-sensitive setup, however, is not trivial and imposes several challenges, which we address in Section
2. An extensive set of experiments that compares ACT to EG2 and to ICET is reported in Section 3. The
results show that ACT is significantly better for the majority of problems. In addition ACT is shown
to exhibit good anytime behavior with diminishing returns. The major contributions of this paper
are: (1) a non-greedy algorithm for learning trees of lower costs that allows handling complex cost
structures, (2) an anytime framework that allows learning time to be traded for reduced classification
costs, and (3) a parameterized method for automatic assigning of costs for existing datasets.
Note that costs may also be involved during example acquisition [12, 15]. In this work, however,
we assume that the full training examples are in hand. Moreover, we assume that during the test
phase, all tests in the relevant path will be taken. Several test strategies that determine which values
to query for and at what order have been recently studied [21]. These strategies are orthogonal to
our work because they assume a given tree.
2 The ACT Algorithm
Offline concept learning consists of two stages: learning from labelled examples; and using the
induced model to classify unlabelled instances. These two stages involve different types of cost
[23]. Our primary goal in this work is to trade the learning time for reduced test and misclassification
costs. To make the problem well defined, we need to specify how to: (1) represent misclassification
costs, (2) calculate test costs, and (3) combine both types of cost.
To answer these questions, we adopt the model described by Turney [24]. In a problem with |C|
different classes, a classification cost matrix M is a |C| × |C| matrix whose M_{i,j} entry defines the
penalty of assigning the class ci to an instance that actually belongs to the class cj . To calculate
the test costs of a particular case, we sum the cost of the tests along the path from the root to the
appropriate leaf. For tests that appear several times we charge only for the first occurrence. The
model handles two special test types, namely grouped and delayed. Grouped tests share a common
cost that is charged only once per group. Each test also has an extra cost charged when the test is
actually made. For example, consider a tree path with tests like cholesterol level and glucose level.
For both values to be measured, a blood test is needed. Clearly, once blood samples are taken to
measure the cholesterol level, the cost for measuring the glucose level is lower. Delayed tests are
tests whose outcome cannot be obtained immediately, e.g., lab test results. Such tests force us to
wait until the outcome is available. Alternatively, we can take into account all possible outcomes
and follow several paths in the tree simultaneously (and pay for their costs). Once the result of the
delayed test is available, the prediction is in hand. Note that we might be charged for tests that we
would not perform if the outcome of the delayed tests were available. In this work we do not handle
delayed costs but we do explain how to adapt our framework to scenarios that involve them.
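As a small illustration of this charging scheme (our own sketch; the field names group, group_cost,
and marginal_cost are illustrative, not notation from the paper), the following computes the test
cost of one root-to-leaf path:

def path_test_cost(path_tests, costs):
    # Cost of the tests along one root-to-leaf path.  Each test is
    # charged once; a group's shared cost is charged only the first
    # time some attribute of that group is tested.
    # costs[a] = {'group': id or None, 'group_cost': shared cost,
    #             'marginal_cost': extra cost of this particular test}
    seen_tests, seen_groups = set(), set()
    total = 0.0
    for a in path_tests:
        if a in seen_tests:
            continue                     # repeated test: no extra charge
        seen_tests.add(a)
        c = costs[a]
        g = c.get('group')
        if g is not None and g not in seen_groups:
            seen_groups.add(g)
            total += c['group_cost']     # e.g., drawing the blood sample
        total += c['marginal_cost']      # e.g., the individual assay
    return total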
Having measured the test costs and misclassification costs, an important question is how to combine
them. Following [24] we assume that both types of cost are given in the same scale. Alternatively,
Qin et al. [19] presented a method to handle the two kinds of cost scales by setting a maximal
budget for one kind and minimizing the other.
ACT, our proposed anytime framework for induction of cost-sensitive trees, builds on the recently
introduced LSID3 algorithm [11]. LSID3 adopts the general top-down induction of decision trees
scheme (TDIDT): it starts from the entire set of training examples, partitions it into subsets by testing
the value of an attribute, and then recursively builds subtrees. Unlike greedy inducers, LSID3 invests
more time resources for making better split decisions. For every candidate split, LSID3 attempts to
estimate the size of the resulting subtree were the split to take place and following Occam?s razor
[4] it favors the one with the smallest expected size. The estimation is based on a biased sample
of the space of trees rooted at the evaluated attribute. The sample is obtained using a stochastic
version of ID3, called SID3 [11]. In SID3, rather than choosing an attribute that maximizes the
information gain ΔI (as in ID3), the splitting attribute is chosen semi-randomly. The likelihood that
an attribute will be chosen is proportional to its information gain. LSID3 is a contract algorithm
parameterized by r, the sample size. When r is larger, the resulting estimations are expected to be
more accurate, therefore improving the final tree. Let m = |E| be the number of examples and
n = |A| be the number of attributes. The runtime complexity of LSID3 is O(rmn³) [11]. LSID3
was shown to exhibit a good anytime behavior with diminishing returns. When applied to hard
concepts, it produced significantly better trees than ID3 and C4.5. ACT takes the same sampling
approach as in LSID3. However, three major components of LSID3 need to be replaced for the
cost-sensitive setup: (1) sampling the space of trees, (2) evaluating a tree, and (3) pruning.
Obtaining the Sample. LSID3 uses SID3 to bias the samples towards small trees. In ACT, however,
we would like to bias our sample towards low-cost trees. For this purpose, we designed a stochastic
version of the EG2 algorithm that attempts to build low-cost trees greedily. In EG2, a tree is built
top-down, and the attribute that maximizes ICF (Information Cost Function) is chosen for splitting
a node, where ICF(a) = (2^{ΔI(a)} − 1) / ((cost(a) + 1)^w).
In Stochastic EG2 (SEG2), we choose splitting attributes semi-randomly, proportionally to their ICF.
Due to the stochastic nature of SEG2 we expect to be able to escape local minima for at least some
of the trees in the sample. To obtain a sample of size r, ACT uses EG2 once and SEG2 r − 1 times.
Unlike ICET, we give EG2 and SEG2 a direct access to context-based costs, i.e., if an attribute has
already been tested its cost would be zero and if another attribute that belongs to the same group
has been tested, a group discount is applied. The parameter w controls the bias towards lower cost
attributes. While ICET tunes this parameter using genetic search, we set w inversely proportional to
the misclassification cost: a high misclassification cost results in a smaller w, reducing the effect of
attribute costs. One direction for future work would be to tune w a priori.
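A minimal sketch of these two selection rules is given below; icf follows the EG2 formula above,
and seg2_choice draws an attribute with probability proportional to its ICF score. The gain and cost
dictionaries are assumed to be precomputed for the current node; this is our illustration, not the
authors' code.

import random

def icf(delta_i, cost, w):
    # EG2's Information Cost Function for one attribute.
    return (2.0 ** delta_i - 1.0) / ((cost + 1.0) ** w)

def seg2_choice(gains, costs, w, rng=random):
    # Stochastic EG2: pick an attribute semi-randomly, with probability
    # proportional to its ICF score (deterministic EG2 takes the max).
    attrs = list(gains)
    scores = [icf(gains[a], costs[a], w) for a in attrs]
    total = sum(scores)
    if total <= 0:                       # degenerate case: uniform choice
        return rng.choice(attrs)
    r = rng.uniform(0, total)
    acc = 0.0
    for a, s in zip(attrs, scores):
        acc += s
        if acc >= r:
            return a
    return attrs[-1]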
Evaluating a Subtree. As a cost insensitive learner, the main goal of LSID3 is to maximize the
expected accuracy of the learned tree. Following Occam's razor, it uses the tree size as a preference
bias and favors splits that are expected to reduce the final tree size. In a cost-sensitive setup, our goal
is to minimize the expected cost of classification. Following the same lookahead strategy as LSID3,
we sample the space of trees under each candidate split. However, instead of choosing an attribute
that minimizes the size, we would like to choose one that minimizes costs. Therefore, given a tree,
we need to come up with a procedure that estimates the expected costs when classifying a future
case. This cost consists of two components: the test cost and misclassification cost.
Assuming that the distribution of future cases would be similar to that of the learning examples, we
can estimate the test costs using the training data. Given a tree, we calculate the average test cost
of the training examples and use it to approximate the test cost of new cases. For a tree T and a set
of training examples E, we denote the average cost of traversing T for an example from E (average
testing cost) by tst-cost(T, E). Note that group discounts and delayed cost penalties do not need
special care because they will be incorporated when calculating the average test costs.
Estimating the cost of errors is not obvious. One can no longer use the tree size as a heuristic for predictive errors. Occam's razor allows us to compare two consistent trees but does not provide a means to
estimate accuracy. Moreover, tree size is measured in a different currency than accuracy and hence
cannot be easily incorporated in the cost function. Instead, we propose using a different estimator:
the expected error [20]. For a leaf with m training examples, of which e are misclassified, the expected error is defined as the upper limit on the probability of error, i.e., EE(m, e, cf) = U_cf(e, m),
where cf is the confidence level and U is the confidence interval for the binomial distribution. The expected error of a tree is the sum of the expected errors in its leaves. Originally, the expected error was
used by C4.5 to predict whether a subtree performs better than a leaf. Although it lacks a theoretical
basis, it was shown experimentally to be a good heuristic. In ACT we use the expected error to
approximate the misclassification cost. Assume a problem with |C| classes and a misclassification
cost matrix M. Let c be the class label in a leaf l. Let m be the total number of examples in l and
m_i be the number of examples in l that belong to class i. The expected misclassification cost in l is
(the rightmost expression assumes a uniform misclassification cost M_{i,j} = mc):

mc-cost(l) = EE(m, m − m_c, cf) · (1 / (|C| − 1)) · Σ_{i ≠ c} M_{c,i} = EE(m, m − m_c, cf) · mc
The expected error of a tree is the sum of the expected errors in its leaves. In our experiments we use
cf = 0.25, as in C4.5. In the future, we intend to tune cf if the allocated time allows. Alternatively,
we also plan to estimate the error using a set-aside validation set, when the training set size allows.
To conclude, let E be the set of examples used to learn a tree T, and let m be the size of E. Let L
be the set of leaves in T. The expected total cost of T when classifying an instance is:

tst-cost(T, E) + (1/m) · Σ_{l ∈ L} mc-cost(l).
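Putting the two cost components together for a tree summarized by its leaf statistics might look like
the sketch below. We compute the binomial upper confidence limit with the exact Clopper-Pearson
bound (C4.5 uses a slightly different approximation), and we read each leaf's expected error as a
count, m_l · U_cf(e_l, m_l), so that dividing by the total m yields a per-instance cost; since the
prose above calls EE a probability bound, this unit convention is our assumption.

from scipy.stats import beta

def upper_error_limit(m, e, cf=0.25):
    # Clopper-Pearson upper confidence limit on a leaf's error rate,
    # given m examples of which e are misclassified.
    if m == 0 or e >= m:
        return 1.0
    return float(beta.ppf(1.0 - cf, e + 1, m - e))

def tree_total_cost(leaves, tst_cost, mc, cf=0.25):
    # tst-cost(T, E) + (1/m) * sum over leaves of mc-cost(l), for a
    # uniform misclassification cost M_{i,j} = mc.  Each leaf is a
    # (m_l, e_l) pair: examples reaching it and those not of its class.
    m_total = sum(m_l for m_l, _ in leaves)
    mc_sum = sum(m_l * upper_error_limit(m_l, e_l, cf) * mc
                 for m_l, e_l in leaves)
    return tst_cost + mc_sum / m_total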
Having decided about the sampler and the tree utility function we are ready to formalize the tree
growing phase in ACT. A tree is built top-down. The procedure for selecting the splitting test at each
node is listed in Figure 2 (left), and exemplified in Figure 2 (right). The selection procedure, as
formalized in Figure 2 (left), needs to be slightly modified when an attribute is numeric: instead
of iterating over the values the attribute can take, we examine r cutting points, each evaluated
with a single invocation of EG2. This guarantees that numeric and nominal attributes get the same
resources. The r points are chosen dynamically, according to their information gain.
Cost-sensitive Pruning. Pruning plays an important role in decision tree induction. In cost-insensitive environments, the main goal of pruning is to simplify the tree in order to avoid overfitting.
A subtree is pruned if the resulting tree is expected to yield a lower error. When test costs are taken
into account, pruning has another important role: reducing costs. It is worthwhile to keep a subtree
only if its expected reduction to the misclassification cost is larger that the cost of its tests. If the
misclassification cost was zero, it makes no sense to keep any split in the tree. If, on the other hand,
Procedure ACT-CHOOSE-ATTRIBUTE(E, A, r)
  If r = 0 Return EG2-CHOOSE-ATTRIBUTE(E, A)
  Foreach a ∈ A
    Foreach v_i ∈ domain(a)
      E_i ← {e ∈ E | a(e) = v_i}
      T ← EG2(a, E_i, A − {a})
      min_i ← COST(T, E_i)
      Repeat r − 1 times
        T ← SEG2(a, E_i, A − {a})
        min_i ← min(min_i, COST(T, E_i))
    total_a ← COST(a) + Σ_{i=1..|domain(a)|} min_i
  Return a for which total_a is minimal
[Figure 2 diagram: a node split on attribute a; one branch with subtree estimates cost(EG2) = 4.1 and cost(SEG2) = 5.1, the other with cost(EG2) = 8.9 and cost(SEG2) = 4.9.]
Figure 2: Attribute selection (left) and split evaluation (right) in ACT. Assume that the cost of a in the current
context is 1. The estimated cost of a subtree rooted at a is therefore 1 + min(4.1, 5.1) + min(8.9, 4.9) = 9.
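Read as runnable code, the selection step might look like the following sketch; examples are
represented as dicts mapping attributes to values, and eg2_build, seg2_build, cost, and attr_cost
are caller-supplied stand-ins for the subroutines named in the procedure above.

def act_choose_attribute(E, A, r, eg2_build, seg2_build, cost, attr_cost):
    # Pick the attribute whose estimated subtree cost is minimal.
    # E: list of examples (dicts attribute -> value); A: attributes;
    # r: sample size; eg2_build/seg2_build(a, E_i, attrs) grow a subtree;
    # cost(T, E_i) evaluates it; attr_cost(a) is a's context-based cost.
    best_attr, best_total = None, float('inf')
    for a in A:
        total = attr_cost(a)
        rest = [b for b in A if b != a]
        for v in {e[a] for e in E}:          # one branch per observed value
            E_i = [e for e in E if e[a] == v]
            best = cost(eg2_build(a, E_i, rest), E_i)
            for _ in range(r - 1):           # r-1 stochastic repetitions
                best = min(best, cost(seg2_build(a, E_i, rest), E_i))
            total += best
        if total < best_total:
            best_attr, best_total = a, total
    return best_attr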
the misclassification cost was very large, we would expect similar behavior to the cost-insensitive
setup. To handle this challenge, we propose a novel approach for cost-sensitive pruning. Similarly
to error-based pruning [20], we scan the tree bottom-up. For each subtree, we compare its expected
total cost to that of a leaf. Formally, assume that e examples in E do not belong to the default class.1
We prune a subtree T into a leaf l' if:

(1/m) · mc-cost(l') ≤ tst-cost(T, E) + (1/m) · Σ_{l ∈ L} mc-cost(l),

where l' denotes the leaf that would replace T.
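In code, the pruning test for a single subtree can be sketched as follows (the argument names are
ours):

def should_prune(leaf_mc_costs, m, pruned_leaf_mc_cost, tst_cost):
    # True if collapsing the subtree T into a single leaf does not
    # increase the expected total cost.
    # leaf_mc_costs: mc-cost(l) for each leaf l of T; m: examples in T;
    # pruned_leaf_mc_cost: mc-cost of the replacing leaf;
    # tst_cost: average test cost tst-cost(T, E) of the subtree.
    keep_cost = tst_cost + sum(leaf_mc_costs) / m
    prune_cost = pruned_leaf_mc_cost / m
    return prune_cost <= keep_cost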
3 Empirical Evaluation
A variety of experiments were conducted to test the performance and behavior of ACT. First we
describe and motivate our experimental methodology. We then present and discuss our results.
3.1 Methodology
We start our experimental evaluation by comparing ACT, given a fixed resource allocation, with
EG2 and ICET. EG2 was selected as a representative for greedy learners. We also tested the performance of CSID3 and IDX but found the results very similar to EG2, confirming the report in
[24]. Our second set of experiments compares the anytime behavior of ACT to that of ICET. Because the code of EG2 and ICET is not publicly available we have reimplemented them. To verify
the reimplementation results, we compared them with those reported in the literature. We followed the
same experimental setup and used the same 5 datasets. The results are indeed similar with the basic
version of ICET achieving an average cost of 49.9 in our reimplementation vs. 49 in Turney's paper
[24]. One possible reason for the slight difference may be the randomization involved in the genetic
search as well as in data partitioning into training, validating, and testing sets.
Datasets. Typically, machine learning researchers use datasets from the UCI repository [3]. Only
five UCI datasets, however, have assigned test costs [24]. To gain a wider perspective, we developed
an automatic method that assigns costs to existing datasets randomly. The method is parameterized
with: (1) cr the cost range, (2) g the number of desired groups as a percentage of the number of
attributes, and (3) sc the group shared cost as a percentage of the maximal marginal cost in the
group. Using this method we assigned costs to 25 datasets: 21 arbitrarily chosen UCI datasets2
and 4 datasets that represent hard concepts and have been used in previous research. The online
appendix3 gives detailed descriptions of these datasets. Two versions of each dataset have been
created, both with cost range of 1-100. In the first g and sc were set to 20% and in the second
they were set to 80%. These parameters were chosen arbitrarily, in attempt to cover different types
of costs. In total we have 55 datasets: 5 with costs assigned as in [24] and 50 with random costs.
Cost-insensitive learning algorithms focus on accuracy and therefore are expected to perform well
1 The default class is the one that minimizes the misclassification cost in the node.
2 The chosen UCI datasets vary in their size, type of attributes, and dimension.
3 http://www.cs.technion.ac.il/~esaher/publications/nips07
Table 1: Average cost of classification as a percentage of the standard cost of classification. The table also lists
for each of ACT and ICET the number of significant wins they had using t-test. The last row shows the winner,
if any, as implied by a Wilcoxon test over all datasets with α = 5%.
              mc = 10               mc = 100              mc = 1000             mc = 10000
          EG2    ICET   ACT     EG2    ICET   ACT     EG2    ICET   ACT     EG2    ICET   ACT
AVERAGE   22.37  10.23  2.21    25.93  17.15  11.86   38.69  35.28  34.38   54.22  47.47  41.62
BETTER           0      34             0      25             3      11             10     12
WILCOXON                ACT                   ACT                   none                  ACT
Figure 3: Illustration of the differences in performance between ACT and ICET for misclassification costs
(from left to right: 10, 100, 1000, and 10000). Each point represents a dataset. The x-axis represents the cost
of ICET while the y-axis represents that of ACT. The dashed line indicates equality. Points are below it if ACT
performs better and above it if ICET is better.
when testing costs are negligible relative to misclassification costs. On the other hand, when testing
costs are significant, ignoring them would result in expensive classifiers. Therefore, to evaluate a
cost-sensitive learner a wide spectrum of misclassification costs is needed. For each problem out of
the 55, we created 4 instances, with uniform misclassification costs mc = 10, 100, 1000, 10000.
Normalized Cost. As pointed out by Turney [24], using the average cost is problematic because:
(1) the differences in costs among the algorithms become small as misclassification cost increases,
(2) it is difficult to combine the results for the multiple datasets, and (3) it is difficult to combine average costs for different misclassification costs. To overcome these problems, Turney suggests normalizing the average cost of classification by dividing it by the standard cost, defined as
TC + min_i(1 − f_i) · max_{i,j}(M_{i,j}). The standard cost is an approximation of the maximal cost
in a given problem. It consists of two components: (1) TC, the cost if we take all tests, and (2) the
misclassification cost if the classifier achieves only the baseline accuracy. f_i denotes the frequency
of class i in the data, and hence (1 − f_i) would be the error if the response were always class i.
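Written directly from this definition (with illustrative argument names), the normalization is:

def standard_cost(total_test_cost, class_freqs, M):
    # TC + min_i(1 - f_i) * max_{i,j} M_{i,j}
    baseline_error = min(1.0 - f for f in class_freqs)
    worst_penalty = max(max(row) for row in M)
    return total_test_cost + baseline_error * worst_penalty

def normalized_cost(avg_cost, total_test_cost, class_freqs, M):
    # Average classification cost as a fraction of the standard cost.
    return avg_cost / standard_cost(total_test_cost, class_freqs, M)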
Statistical Significance. For each problem, one 10-fold cross-validation experiment has been conducted. The same partition into train-test sets was used for all compared algorithms. To test the
statistical significance of the differences between ACT and ICET we used two tests. The first is a
t-test with α = 5% confidence: for each method we counted how many times it was a significant winner. The second is the Wilcoxon test [8], which compares classifiers over multiple datasets and
states whether one method is significantly better than the other (α = 5%).
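Both checks are routine with scipy; the sketch below takes paired average costs for two learners (a
simplification: in the protocol above the t-test is applied per dataset over folds, while the Wilcoxon
test is applied across datasets):

from scipy.stats import ttest_rel, wilcoxon

def compare_learners(costs_a, costs_b, alpha=0.05):
    # Paired t-test and Wilcoxon signed-rank test on matched cost arrays.
    _, t_p = ttest_rel(costs_a, costs_b)
    _, w_p = wilcoxon(costs_a, costs_b)
    return {'t_significant': t_p < alpha, 'wilcoxon_significant': w_p < alpha}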
3.2 Fixed-time Comparison
For each of the 55 × 4 problem instances, we run the seeded version of ICET with its default
parameters (20 generations),4 EG2, and ACT with r = 5. We choose r = 5 so the average runtime
of ACT would be shorter than ICET for all problems. EG2 and ICET use the same post-pruning
mechanism as in C4.5. In EG2 the default confidence factor is used (0.25) while in ICET this value
is tuned using the genetic search.
Table 1 lists the average results, Figure 3 illustrates the differences between ICET and ACT, and
Figure 4 (left) plots the average cost for the different values of mc. The full results are available
in the online appendix. Similarly to the results reported in [24] ICET is clearly better than EG2,
because the latter does not consider misclassification costs. When mc is set to 10 and to 100 ACT
significantly outperforms ICET for most datasets. In these cases ACT was able to produce very
small trees, sometimes consist of one node, neglecting the accuracy of the learned model. For mc
set to 1000 and 10000 there are fewer significant wins, yet it is clear that ACT is dominating: the
4 Seeded ICET includes the true costs in the initial population and was reported to perform better [24].
[Figure 4 plots: four panels with legend entries EG2/C4.5, ICET, and ACT; x-axes: Misclassification Cost (10 to 10000) and Time [sec]; y-axes: Average Cost and Average Accuracy.]
Figure 4: Average cost (left most) and accuracy (mid-left) as a function of misclassification cost. Average cost
as a function of time for Breast-cancer-20 (mid-right) and Multi-XOR-80 (right most).
number of ACT wins is higher and the average results indicate that ACT trees are cheaper. The
Wilcoxon test states that for mc = 10, 100, 10000, ACT is significantly better than ICET, and that
for mc = 1000 no significant winner was found.
When misclassification costs are low, an optimal algorithm would produce a very shallow tree.
When misclassification costs are dominant, an optimal algorithm would produce a highly accurate
tree. Some concepts, however, are not easily learnable and even cost-insensitive algorithms fail
to achieve perfect accuracy on them. Hence, with the increase in the importance of accuracy the
normalized cost increases: the predictive errors affect the cost more dramatically. To learn more
about the effect of accuracy, we compared the accuracy of ACT to that of C4.5 and ICET across mc
values. Figure 4 (mid-left) shows the results. An important property of both ICET and ACT is their
ability to compromise on accuracy when needed. ACT's flexibility, however, is more noteworthy:
from the least accurate method it becomes the most accurate one. Interestingly, when accuracy is
extremely important both ICET and ACT achieves even better accuracy than C4.5. The reason is
their non-greedy nature. ICET performs an implicit lookahead by reweighting attributes according
to their importance. ACT performs lookahead by sampling the space of subtrees under every split.
Among the two, the results indicates that ACT?s lookahead is more efficient in terms of accuracy.
We also compared ACT to LSID3. As expected, ACT was significantly better for mc ≤ 1000.
For mc = 10000 their performance was similar. In addition, we compared the studied methods on
nonuniform misclassification costs and found ACT's advantage to be consistent.
3.3 Anytime Comparison
Both ICET and ACT are anytime algorithms that improve their performance with time. ICET is
expected to exploit extra time by producing more generations and hence better tuning the parameters
for the final invocation of EG2. ACT can use additional time to acquire larger samples and hence
achieve better cost estimations. A typical anytime algorithm would produce improved results with
the increase in resources. The improvements diminish with time, reaching a stable performance.
To examine the anytime behavior of ICET and ACT, we run each of them on 2 problems, namely
Breast-cancer-20 and Multi-XOR-80, with exponentially increasing time allocation. ICET was run
with 2, 4, 8, ... generations and ACT with a sample size of 1, 2, 4, .... Figure 4 plots the results. The
results show a good anytime behavior of both ICET and ACT. For both algorithms, it is worthwhile
to allocate more time. ACT dominates ICET for both domains and is able to produce trees of lower
costs in shorter time. The Multi-XOR dataset is an example for a concept with attributes being
important only in one sub-concept. As we expected, ACT outperforms ICET significantly because
the latter cannot assign context-based costs. Allowing ICET to produce more and more generations
(up to 128) does not result in trees comparable to those obtained by ACT.
4 Conclusions
Machine learning techniques are increasingly being used to produce a wide-range of classifiers for
real-world applications that involve nonuniform testing costs and misclassification costs. As the
complexity of these applications grows, the management of resources during the learning and classification processes becomes a challenging task. In this work we introduced a novel framework for
operating in such environments. Our framework has 4 major advantages: (1) it uses a non-greedy
approach to build a decision tree and therefore is able to overcome local minima problems, (2) it
evaluates entire trees and therefore can be adjusted to any cost scheme that is defined over trees, (3)
it exhibits good anytime behavior and produces significantly better trees when more time is available, and (4) it can be easily parallelized and hence can benefit from distributed computer power.
To evaluate ACT we have designed an extensive set of experiments with a wide range of costs. The
experimental results show that ACT is superior over ICET and EG2. Significance tests found the
differences to be statistically strong. ACT also exhibited good anytime behavior: with the increase
in time allocation, there was a decrease in the cost of the learned models. ACT is a contract anytime
algorithm that requires its sample size to be pre-determined. In the future we intend to convert
ACT into an interruptible anytime algorithm, by adopting the IIDT general framework [11]. In
addition, we plan to apply monitoring techniques for optimal scheduling of ACT and to examine
other strategies for evaluating subtrees.
References
[1] N. Abe, B. Zadrozny, and J. Langford. An iterative method for multi-class cost-sensitive learning. In KDD, 2004.
[2] V. Bayer-Zubek and T. Dietterich. Integrating learning from examples into the search for diagnostic policies. Artificial Intelligence, 24:263–303, 2005.
[3] C. L. Blake and C. J. Merz. UCI repository of machine learning databases, 1998.
[4] A. Blumer, A. Ehrenfeucht, D. Haussler, and M. K. Warmuth. Occam's Razor. Information Processing Letters, 24(6):377–380, 1987.
[5] M. Boddy and T. L. Dean. Deliberation scheduling for problem solving in time constrained environments. Artificial Intelligence, 67(2):245–285, 1994.
[6] J. Bradford, C. Kunz, R. Kohavi, C. Brunk, and C. Brodley. Pruning decision trees with misclassification costs. In ECML, 1998.
[7] L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees. Wadsworth and Brooks, Monterey, CA, 1984.
[8] J. Demsar. Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research, 7:1–30, 2006.
[9] P. Domingos. MetaCost: A general method for making classifiers cost-sensitive. In KDD, 1999.
[10] C. Elkan. The foundations of cost-sensitive learning. In IJCAI, 2001.
[11] S. Esmeir and S. Markovitch. Anytime learning of decision trees. Journal of Machine Learning Research, 8, 2007.
[12] R. Greiner, A. J. Grove, and D. Roth. Learning cost-sensitive active classifiers. Artificial Intelligence, 139(2):137–174, 2002.
[13] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. New York: Springer-Verlag, 2001.
[14] D. Margineantu. Active cost-sensitive learning. In IJCAI, 2005.
[15] P. Melville, M. Saar-Tsechansky, F. Provost, and R. J. Mooney. Active feature acquisition for classifier induction. In ICDM, 2004.
[16] S. W. Norton. Generating better decision trees. In IJCAI, 1989.
[17] M. Nunez. The use of background knowledge in decision tree induction. Machine Learning, 6:231–250, 1991.
[18] F. Provost and B. Buchanan. Inductive policy: The pragmatics of bias selection. Machine Learning, 20(1-2):35–61, 1995.
[19] Z. Qin, S. Zhang, and C. Zhang. Cost-sensitive decision trees with multiple cost scales. Lecture Notes in Computer Science, AI, volume 3339/2004:380–390, 2004.
[20] J. R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann, 1993.
[21] S. Sheng, C. X. Ling, A. Ni, and S. Zhang. Cost-sensitive test strategies. In AAAI, 2006.
[22] M. Tan and J. C. Schlimmer. Cost-sensitive concept learning of sensor use in approach and recognition. In Proceedings of the 6th International Workshop on Machine Learning, 1989.
[23] P. Turney. Types of cost in inductive concept learning. In Workshop on Cost-Sensitive Learning at ICML, 2000.
[24] P. D. Turney. Cost-sensitive classification: Empirical evaluation of a hybrid genetic decision tree induction algorithm. Journal of Artificial Intelligence Research, 2:369–409, 1995.
2,417 | 3,193 | Estimating divergence functionals and the likelihood
ratio by penalized convex risk minimization
XuanLong Nguyen
SAMSI & Duke University
Martin J. Wainwright
UC Berkeley
Michael I. Jordan
UC Berkeley
Abstract
We develop and analyze an algorithm for nonparametric estimation of divergence
functionals and the density ratio of two probability distributions. Our method is
based on a variational characterization of f -divergences, which turns the estimation into a penalized convex risk minimization problem. We present a derivation
of our kernel-based estimation algorithm and an analysis of convergence rates for
the estimator. Our simulation results demonstrate the convergence behavior of the
method, which compares favorably with existing methods in the literature.
1 Introduction
An important class of "distances" between multivariate probability distributions P and Q are the Ali-Silvey or f-divergences [1, 6]. These divergences, to be defined formally in the sequel, are all of the form $D_\phi(P, Q) = \int \phi(dQ/dP)\, dP$, where $\phi$ is a convex function of the likelihood ratio. This family,
including the Kullback-Leibler (KL) divergence and the variational distance as special cases, plays
an important role in various learning problems, including classification, dimensionality reduction,
feature selection and independent component analysis. For all of these problems, if f -divergences
are to be used as criteria of merit, one has to be able to estimate them efficiently from data.
With this motivation, the focus of this paper is the problem of estimating an f-divergence based on i.i.d. samples from each of the distributions P and Q. Our starting point is a variational characterization of f-divergences, which allows our problem to be tackled via an M-estimation procedure. Specifically, the likelihood ratio function dP/dQ and the divergence functional $D_\phi(P, Q)$ can be estimated by solving a convex minimization problem over a function class. In this paper, we estimate the likelihood ratio and the KL divergence by optimizing a penalized convex risk. In particular, we restrict
the estimate to a bounded subset of a reproducing kernel Hilbert Space (RKHS) [17]. The RKHS
is sufficiently rich for many applications, and also allows for computationally efficient optimization
procedures. The resulting estimator is nonparametric, in that it entails no strong assumptions on the
form of P and Q, except that the likelihood ratio function is assumed to belong to the RKHS.
The bulk of this paper is devoted to the derivation of the algorithm, and a theoretical analysis of the
performance of our estimator. The key to our analysis is a basic inequality relating a performance
metric (the Hellinger distance) of our estimator to the suprema of two empirical processes (with
respect to P and Q) defined on a function class of density ratios. Convergence rates are then obtained
using techniques for analyzing nonparametric M -estimators from empirical process theory [20].
Related work. The variational representation of divergences has been derived independently and
exploited by several authors [5, 11, 14]. Broniatowski and Keziou [5] studied testing and estimation
problems based on dual representations of f-divergences, but working in a parametric setting as opposed to the nonparametric framework considered here. Nguyen et al. [14] established a one-to-one correspondence between the family of f-divergences and the family of surrogate loss functions [2], through which the (optimum) "surrogate risk" is equal to the negative of an associated f-divergence. Another link is to the problem of estimating integral functionals of a single density, with the Shannon entropy being a well-known example, which has been studied extensively dating back to early
work [9, 13] as well as the more recent work [3, 4, 12]. See also [7, 10, 8] for the problem of
(Shannon) entropy functional estimation. In another branch of related work, Wang et al. [22] proposed an algorithm for estimating the KL divergence for continuous distributions, which exploits
histogram-based estimation of the likelihood ratio by building data-dependent partitions of equivalent (empirical) Q-measure. The estimator was empirically shown to outperform direct plug-in
methods, but no theoretical results on its convergence rate were provided.
This paper is organized as follows. Sec. 2 provides a background of f -divergences. In Sec. 3, we
describe an estimation procedure based on penalized risk minimization and accompanying convergence rates analysis results. In Sec. 4, we derive and implement efficient algorithms for solving
these problems using RKHS. Sec. 5 outlines the proof of the analysis. In Sec. 6, we illustrate the
behavior of our estimator and compare it to other methods via simulations.
2 Background
We begin by defining f-divergences, and then provide a variational representation of the f-divergence, which we later exploit to develop an M-estimator.

Consider two distributions P and Q, both assumed to be absolutely continuous with respect to Lebesgue measure $\mu$, with positive densities $p_0$ and $q_0$, respectively, on some compact domain $\mathcal{X} \subset \mathbb{R}^d$. The class of Ali-Silvey or f-divergences [6, 1] are "distances" of the form:

$$D_\phi(P, Q) = \int p_0\, \phi(q_0/p_0)\, d\mu, \qquad (1)$$

where $\phi : \mathbb{R} \to \bar{\mathbb{R}}$ is a convex function. Different choices of $\phi$ result in many divergences that play important roles in information theory and statistics, including the variational distance, Hellinger distance, KL divergence and so on (see, e.g., [19]). As an important example, the Kullback-Leibler (KL) divergence between P and Q is given by $D_K(P, Q) = \int p_0 \log(p_0/q_0)\, d\mu$, corresponding to the choice $\phi(t) = -\log(t)$ for $t > 0$ and $+\infty$ otherwise.
Variational representation: Since $\phi$ is a convex function, by Legendre-Fenchel convex duality [16] we can write $\phi(u) = \sup_{v \in \mathbb{R}} (uv - \phi^*(v))$, where $\phi^*$ is the convex conjugate of $\phi$. As a result,

$$D_\phi(P, Q) = \int p_0 \sup_f \big( f q_0/p_0 - \phi^*(f) \big)\, d\mu = \sup_f \Big( \int f\, dQ - \int \phi^*(f)\, dP \Big),$$

where the supremum is taken over all measurable functions $f : \mathcal{X} \to \mathbb{R}$, and $\int f\, dP$ denotes the expectation of $f$ under distribution P. Denoting by $\partial\phi$ the subdifferential [16] of the convex function $\phi$, it can be shown that the supremum is achieved for functions $f$ such that $q_0/p_0 \in \partial\phi^*(f)$, where $q_0$, $p_0$ and $f$ are evaluated at any $x \in \mathcal{X}$. By convex duality [16], this is true if $f \in \partial\phi(q_0/p_0)$ for any $x \in \mathcal{X}$. Thus, we have proved [15, 11]:

Lemma 1. Letting $\mathcal{F}$ be any class of functions $\mathcal{X} \to \mathbb{R}$, there holds:

$$D_\phi(P, Q) \ge \sup_{f \in \mathcal{F}} \int f\, dQ - \int \phi^*(f)\, dP, \qquad (2)$$

with equality if $\mathcal{F} \cap \partial\phi(q_0/p_0) \ne \emptyset$.
To illustrate this result in the special case of the KL divergence, here the function $\phi$ has the form $\phi(u) = -\log(u)$ for $u > 0$ and $+\infty$ for $u \le 0$. The convex dual of $\phi$ is $\phi^*(v) = \sup_u (uv - \phi(u)) = -1 - \log(-v)$ if $v < 0$ and $+\infty$ otherwise. By Lemma 1,

$$D_K(P, Q) = \sup_{f < 0} \int f\, dQ - \int \big(-1 - \log(-f)\big)\, dP = \sup_{g > 0} \int \log g\, dP - \int g\, dQ + 1. \qquad (3)$$

In addition, the supremum is attained at $g = p_0/q_0$.
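To make the variational bound (3) concrete, here is a small numerical sanity check (our own sketch, not code from the paper): for any candidate ratio function g > 0, the empirical plug-in of the right-hand side gives a lower bound on D_K(P, Q), tight exactly when g = p0/q0. The Gaussian example and sample sizes below are arbitrary illustrative choices.

```python
import numpy as np

def empirical_kl_lower_bound(g, xs_q, ys_p):
    """Empirical version of Eq. (3)/(4): mean_P[log g] - mean_Q[g] + 1.

    g     : callable, a candidate density-ratio function g(x) > 0
    xs_q  : samples X_1..X_n drawn from Q
    ys_p  : samples Y_1..Y_n drawn from P
    """
    return np.mean(np.log(g(ys_p))) - np.mean(g(xs_q)) + 1.0

# Sanity check with P = N(1,1), Q = N(0,1): the true ratio is
# g0(x) = p0(x)/q0(x) = exp(x - 1/2) and KL(P, Q) = 1/2.
rng = np.random.default_rng(0)
ys_p = rng.normal(1.0, 1.0, size=20000)
xs_q = rng.normal(0.0, 1.0, size=20000)

g_true = lambda x: np.exp(x - 0.5)          # optimal ratio: bound is tight
g_bad = lambda x: np.exp(0.5 * (x - 0.5))   # suboptimal g: strictly smaller bound

print(empirical_kl_lower_bound(g_true, xs_q, ys_p))  # ~ 0.5
print(empirical_kl_lower_bound(g_bad, xs_q, ys_p))   # ~ 0.37 < 0.5
```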
3 Penalized M-estimation of KL divergence and the density ratio

Let $X_1, \ldots, X_n$ be a collection of n i.i.d. samples from the distribution Q, and let $Y_1, \ldots, Y_n$ be n i.i.d. samples drawn from the distribution P. Our goal is to develop an estimator of the KL divergence and the density ratio $g_0 = p_0/q_0$ based on the samples $\{X_i\}_{i=1}^n$ and $\{Y_i\}_{i=1}^n$.
The variational representation in Lemma 1 motivates the following estimator of the KL divergence. First, let $\mathcal{G}$ be a function class of $\mathcal{X} \to \mathbb{R}^+$. We then compute

$$\hat{D}_K = \sup_{g \in \mathcal{G}} \int \log g\, dP_n - \int g\, dQ_n + 1, \qquad (4)$$

where $\int \cdot\, dP_n$ and $\int \cdot\, dQ_n$ denote the expectation under the empirical measures $P_n$ and $Q_n$, respectively. If the supremum is attained at $\hat{g}_n$, then $\hat{g}_n$ serves as an estimator of the density ratio $g_0 = p_0/q_0$.

In practice, the "true" size of $\mathcal{G}$ is not known. Accordingly, our approach in this paper is an alternative approach based on controlling the size of $\mathcal{G}$ by using penalties. More precisely, let $I(g)$ be a non-negative measure of complexity for $g$ such that $I(g_0) < \infty$. We decompose the function class $\mathcal{G}$ as follows:

$$\mathcal{G} = \bigcup_{1 \le M \le \infty} \mathcal{G}_M, \qquad (5)$$

where $\mathcal{G}_M := \{g \mid I(g) \le M\}$ is a ball determined by $I(\cdot)$.

The estimation procedure involves solving the following program:

$$\hat{g}_n = \arg\min_{g \in \mathcal{G}} \int g\, dQ_n - \int \log g\, dP_n + \frac{\lambda_n}{2} I^2(g), \qquad (6)$$

where $\lambda_n > 0$ is a regularization parameter. The minimizing argument $\hat{g}_n$ is plugged into (4) to obtain an estimate of the KL divergence $D_K$.
For the KL divergence, the difference $|\hat{D}_K - D_K(P, Q)|$ is a natural performance measure. For estimating the density ratio, various metrics are possible. Viewing $g_0 = p_0/q_0$ as a density function with respect to the Q measure, one useful metric is the (generalized) Hellinger distance:

$$h_Q^2(g_0, g) := \frac{1}{2} \int \big(g_0^{1/2} - g^{1/2}\big)^2\, dQ. \qquad (7)$$

For the analysis, several assumptions are in order. First, assume that $g_0$ (not all of $\mathcal{G}$) is bounded from above and below:

$$0 < \eta_0 \le g_0 \le \eta_1 \quad \text{for some constants } \eta_0, \eta_1. \qquad (8)$$

Next, the uniform norm of $\mathcal{G}_M$ is Lipschitz with respect to the penalty measure $I(g)$, i.e.:

$$\sup_{g \in \mathcal{G}_M} |g|_\infty \le cM \quad \text{for any } M \ge 1. \qquad (9)$$

Finally, on the bracket entropy of $\mathcal{G}$ [21]: for some $0 < \gamma < 2$,

$$H_\delta^B(\mathcal{G}_M, L_2(Q)) = O(M/\delta)^\gamma \quad \text{for any } \delta > 0. \qquad (10)$$
The following is our main theoretical result, whose proof is given in Section 5:

Theorem 2. (a) Under assumptions (8), (9) and (10), and letting $\lambda_n \to 0$ so that

$$\lambda_n^{-1} = O_P(n^{2/(2+\gamma)})(1 + I(g_0)),$$

then under P:

$$h_Q(g_0, \hat{g}_n) = O_P(\lambda_n^{1/2})(1 + I(g_0)), \qquad I(\hat{g}_n) = O_P(1 + I(g_0)).$$

(b) If, in addition to (8), (9) and (10), there holds $\inf_{g \in \mathcal{G}} g(x) \ge \eta_0$ for any $x \in \mathcal{X}$, then

$$|\hat{D}_K - D_K(P, Q)| = O_P(\lambda_n^{1/2})(1 + I(g_0)). \qquad (11)$$

4 Algorithm: Optimization and dual formulation
Our algorithm involves solving program (6), for some choice of function class $\mathcal{G}$.

$\mathcal{G}$ is an RKHS. In our implementation, relevant function classes are taken to be a reproducing kernel Hilbert space induced by a Gaussian kernel. The RKHSs are chosen because they are sufficiently rich [17], and, as in many learning tasks, they are quite amenable to efficient optimization procedures [18].
Let $K : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ be a Mercer kernel function [17]. Thus, $K$ is associated with a feature map $\Phi : \mathcal{X} \to \mathcal{H}$, where $\mathcal{H}$ is a Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and, for all $x, x' \in \mathcal{X}$, $K(x, x') = \langle \Phi(x), \Phi(x') \rangle$. As a reproducing kernel Hilbert space, any function $g \in \mathcal{H}$ can be expressed as an inner product $g(x) = \langle w, \Phi(x) \rangle$, where $\|g\|_\mathcal{H} = \|w\|_\mathcal{H}$. A kernel used in our simulation is the Gaussian kernel:

$$K(x, y) := e^{-\|x - y\|^2 / \sigma},$$

where $\|\cdot\|$ is the Euclidean metric in $\mathbb{R}^d$, and $\sigma > 0$ is a parameter for the function class.
Let $\mathcal{G} := \mathcal{H}$, and let the complexity measure be $I(g) = \|g\|_\mathcal{H}$. Thus, Eq. (6) becomes:

$$\min_w J := \min_w \frac{1}{n} \sum_{i=1}^n \langle w, \Phi(x_i) \rangle - \frac{1}{n} \sum_{j=1}^n \log \langle w, \Phi(y_j) \rangle + \frac{\lambda_n}{2} \|w\|_\mathcal{H}^2, \qquad (12)$$

where $\{x_i\}$ and $\{y_j\}$ are realizations of empirical data drawn from Q and P, respectively. The log function is extended to take value $-\infty$ for negative arguments.
Lemma 3. $\min_w J$ has the following dual form:

$$-\min_{\alpha > 0}\ \sum_{j=1}^n \Big(-\frac{1}{n} - \frac{1}{n} \log n\alpha_j\Big) + \frac{1}{2\lambda_n} \sum_{i,j} \alpha_i \alpha_j K(y_i, y_j) + \frac{1}{2\lambda_n n^2} \sum_{i,j} K(x_i, x_j) - \frac{1}{\lambda_n n} \sum_{i,j} \alpha_j K(x_i, y_j).$$

Proof. Let $\psi_i(w) := \frac{1}{n} \langle w, \Phi(x_i) \rangle$, $\varphi_j(w) := -\frac{1}{n} \log \langle w, \Phi(y_j) \rangle$, and $\Omega(w) = \frac{\lambda_n}{2} \|w\|_\mathcal{H}^2$. We have

$$\min_w J = -\max_w \big( \langle 0, w \rangle - J(w) \big) = -J^*(0) = -\min_{u_i, v_j}\ \sum_{i=1}^n \psi_i^*(u_i) + \sum_{j=1}^n \varphi_j^*(v_j) + \Omega^*\Big(-\sum_{i=1}^n u_i - \sum_{j=1}^n v_j\Big),$$

where the last step is due to the inf-convolution theorem [16]. Simple calculations yield:

$$\varphi_j^*(v) = -\frac{1}{n} - \frac{1}{n} \log n\alpha_j \ \text{ if } v = -\alpha_j \Phi(y_j), \text{ and } +\infty \text{ otherwise};$$
$$\psi_i^*(u) = 0 \ \text{ if } u = \tfrac{1}{n} \Phi(x_i), \text{ and } +\infty \text{ otherwise};$$
$$\Omega^*(v) = \frac{1}{2\lambda_n} \|v\|_\mathcal{H}^2.$$

So $\min_w J = -\min_{\alpha > 0} \sum_{j=1}^n \big(-\frac{1}{n} - \frac{1}{n} \log n\alpha_j\big) + \frac{1}{2\lambda_n} \big\| \sum_{j=1}^n \alpha_j \Phi(y_j) - \frac{1}{n} \sum_{i=1}^n \Phi(x_i) \big\|_\mathcal{H}^2$, which implies the lemma immediately.

If $\hat{\alpha}$ is the solution of the dual formulation, it is not difficult to show that the optimal $\hat{w}$ is attained at

$$\hat{w} = \frac{1}{\lambda_n} \Big( \sum_{j=1}^n \hat{\alpha}_j \Phi(y_j) - \frac{1}{n} \sum_{i=1}^n \Phi(x_i) \Big).$$
For an RKHS based on a Gaussian kernel, the entropy condition (10) holds for any $\gamma > 0$ [23]. Furthermore, (9) trivially holds via the Cauchy-Schwarz inequality: $|g(x)| = |\langle w, \Phi(x) \rangle| \le \|w\|_\mathcal{H} \|\Phi(x)\|_\mathcal{H} \le I(g) \sqrt{K(x, x)} \le I(g)$. Thus, by Theorem 2(a), $\|\hat{w}\|_\mathcal{H} = \|\hat{g}_n\|_\mathcal{H} = O_P(\|g_0\|_\mathcal{H})$, so the penalty term $\lambda_n \|\hat{w}\|^2$ vanishes at the same rate as $\lambda_n$. We have arrived at the following estimator for the KL divergence:

$$\hat{D}_K = 1 + \sum_{j=1}^n \Big(-\frac{1}{n} - \frac{1}{n} \log n\hat{\alpha}_j\Big) = -\frac{1}{n} \sum_{j=1}^n \log n\hat{\alpha}_j.$$
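As one concrete way to carry out this computation, the sketch below (our own illustration, not the authors' code; the L-BFGS-B solver and the small positivity floor on alpha are pragmatic choices standing in for a dedicated convex solver) minimizes the dual objective of Lemma 3 numerically and returns the plug-in estimate D̂_K = -(1/n) Σ_j log(n α̂_j).

```python
import numpy as np
from scipy.optimize import minimize

def gauss_gram(a, b, sigma):
    """Gaussian kernel Gram matrix K[i, j] = exp(-||a_i - b_j||^2 / sigma)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma)

def kl_estimate_rkhs(xs_q, ys_p, lam, sigma):
    """Solve the dual of Lemma 3 and return the KL estimate of Section 4."""
    n = len(xs_q)
    Kyy = gauss_gram(ys_p, ys_p, sigma)
    Kxx = gauss_gram(xs_q, xs_q, sigma)
    Kxy = gauss_gram(xs_q, ys_p, sigma)   # Kxy[i, j] = K(x_i, y_j)
    const = Kxx.sum() / (2 * lam * n**2)  # alpha-independent term

    def dual(alpha):
        return (np.sum(-1.0 / n - np.log(n * alpha) / n)
                + alpha @ Kyy @ alpha / (2 * lam)
                + const
                - Kxy.sum(axis=0) @ alpha / (lam * n))

    res = minimize(dual, x0=np.full(n, 1.0 / n), method="L-BFGS-B",
                   bounds=[(1e-10, None)] * n)
    alpha = res.x
    return -np.mean(np.log(n * alpha))

# Example: P = N(1, 1), Q = N(0, 1) in one dimension, true KL = 0.5.
rng = np.random.default_rng(0)
ys_p = rng.normal(1.0, 1.0, (200, 1))
xs_q = rng.normal(0.0, 1.0, (200, 1))
print(kl_estimate_rkhs(xs_q, ys_p, lam=1.0 / 200, sigma=0.5))
```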
$\log \mathcal{G}$ is an RKHS. Alternatively, we could set $\log \mathcal{G}$ to be the RKHS, letting $g(x) = \exp \langle w, \Phi(x) \rangle$, and letting $I(g) = \|\log g\|_\mathcal{H} = \|w\|_\mathcal{H}$. Theorem 2 is not applicable in this case, because condition (9) no longer holds, but this choice nonetheless seems reasonable and worth investigating, because in effect we have a far richer function class which might improve the bias of our estimator when the true density ratio is not very smooth.
A derivation similar to the previous case yields the following convex program:

$$\min_w J := \min_w \frac{1}{n} \sum_{i=1}^n e^{\langle w, \Phi(x_i) \rangle} - \frac{1}{n} \sum_{j=1}^n \langle w, \Phi(y_j) \rangle + \frac{\lambda_n}{2} \|w\|_\mathcal{H}^2 = -\min_{\alpha > 0} \sum_{i=1}^n \big( \alpha_i \log(n\alpha_i) - \alpha_i \big) + \frac{1}{2\lambda_n} \Big\| \sum_{i=1}^n \alpha_i \Phi(x_i) - \frac{1}{n} \sum_{j=1}^n \Phi(y_j) \Big\|_\mathcal{H}^2.$$

Letting $\hat{\alpha}$ be the solution of the above convex program, the KL divergence can be estimated by:

$$\hat{D}_K = 1 + \sum_{i=1}^n \hat{\alpha}_i \log \hat{\alpha}_i + \hat{\alpha}_i \log \frac{n}{e}.$$

5 Proof of Theorem 2
We now sketch out the proof of the main theorem. The key to our analysis is the following lemma:

Lemma 4. If $\hat{g}_n$ is an estimate of $g_0$ using (6), then:

$$\frac{1}{4} h_Q^2(g_0, \hat{g}_n) + \frac{\lambda_n}{2} I^2(\hat{g}_n) \le -\int (\hat{g}_n - g_0)\, d(Q_n - Q) + 2 \int \log \frac{\hat{g}_n + g_0}{2 g_0}\, d(P_n - P) + \frac{\lambda_n}{2} I^2(g_0).$$

Proof. Define $d_l(g_0, g) = \int (g - g_0)\, dQ - \int \log \frac{g}{g_0}\, dP$. Note that for $x > 0$, $\frac{1}{2} \log x \le \sqrt{x} - 1$. Thus, $\int \log \frac{g}{g_0}\, dP \le 2 \int (g^{1/2} g_0^{-1/2} - 1)\, dP$. As a result, for any $g$, $d_l$ is related to $h_Q$ as follows:

$$d_l(g_0, g) \ge \int (g - g_0)\, dQ - 2 \int (g^{1/2} g_0^{-1/2} - 1)\, dP = \int (g - g_0)\, dQ - 2 \int (g^{1/2} g_0^{1/2} - g_0)\, dQ = \int (g^{1/2} - g_0^{1/2})^2\, dQ = 2 h_Q^2(g_0, g).$$

By the definition (6) of our estimator, we have:

$$\int \hat{g}_n\, dQ_n - \int \log \hat{g}_n\, dP_n + \frac{\lambda_n}{2} I^2(\hat{g}_n) \le \int g_0\, dQ_n - \int \log g_0\, dP_n + \frac{\lambda_n}{2} I^2(g_0).$$

Both sides (modulo the regularization term $I^2$) are convex functionals of $g$. By Jensen's inequality, if $F$ is a convex function, then $F((u+v)/2) - F(v) \le (F(u) - F(v))/2$. We obtain:

$$\int \frac{\hat{g}_n + g_0}{2}\, dQ_n - \int \log \frac{\hat{g}_n + g_0}{2}\, dP_n + \frac{\lambda_n}{4} I^2(\hat{g}_n) \le \int g_0\, dQ_n - \int \log g_0\, dP_n + \frac{\lambda_n}{4} I^2(g_0).$$

Rearranging,

$$\int \frac{\hat{g}_n - g_0}{2}\, d(Q_n - Q) - \int \log \frac{\hat{g}_n + g_0}{2 g_0}\, d(P_n - P) + \frac{\lambda_n}{4} I^2(\hat{g}_n) \le -\int \frac{\hat{g}_n - g_0}{2}\, dQ + \int \log \frac{g_0 + \hat{g}_n}{2 g_0}\, dP + \frac{\lambda_n}{4} I^2(g_0)$$
$$= -d_l\Big(g_0, \frac{g_0 + \hat{g}_n}{2}\Big) + \frac{\lambda_n}{4} I^2(g_0) \le -2 h_Q^2\Big(g_0, \frac{g_0 + \hat{g}_n}{2}\Big) + \frac{\lambda_n}{4} I^2(g_0) \le -\frac{1}{8} h_Q^2(g_0, \hat{g}_n) + \frac{\lambda_n}{4} I^2(g_0),$$

where the last inequality is a standard result for the (generalized) Hellinger distance (cf. [20]).

Let us now proceed to part (a) of the theorem. Define $f_g := \log \frac{g + g_0}{2 g_0}$, and let $\mathcal{F}_M := \{ f_g \mid g \in \mathcal{G}_M \}$. Since $f_g$ is a Lipschitz function of $g$, conditions (8) and (10) imply that

$$H_\delta^B(\mathcal{F}_M, L_2(P)) = O(M/\delta)^\gamma. \qquad (13)$$
Apply Lemma 5.14 of [20] using the distance metric $d_2(g_0, g) = \|g - g_0\|_{L_2(Q)}$; the following is true under Q (and so true under P as well, since $dP/dQ$ is bounded from above):

$$\sup_{g \in \mathcal{G}} \frac{\big| \int (g - g_0)\, d(Q_n - Q) \big|}{n^{-1/2} d_2(g_0, g)^{1 - \gamma/2} (1 + I(g) + I(g_0))^{\gamma/2} \vee n^{-\frac{2}{2+\gamma}} (1 + I(g) + I(g_0))} = O_P(1). \qquad (14)$$

In the same vein, we obtain that under the P measure:

$$\sup_{g \in \mathcal{G}} \frac{\big| \int f_g\, d(P_n - P) \big|}{n^{-1/2} d_2(g_0, g)^{1 - \gamma/2} (1 + I(g) + I(g_0))^{\gamma/2} \vee n^{-\frac{2}{2+\gamma}} (1 + I(g) + I(g_0))} = O_P(1). \qquad (15)$$

By condition (9), we have: $d_2(g_0, g) = \|g - g_0\|_{L_2(Q)} \le 2 c^{1/2} (1 + I(g) + I(g_0))^{1/2} h_Q(g_0, g)$. Combining Lemma 4 and Eqs. (15), (14), we obtain the following:

$$\frac{1}{4} h_Q^2(g_0, \hat{g}_n) + \frac{\lambda_n}{2} I^2(\hat{g}_n) \le \lambda_n I(g_0)^2 / 2 + O_P\Big( n^{-1/2} h_Q(g_0, \hat{g}_n)^{1 - \gamma/2} (1 + I(\hat{g}_n) + I(g_0))^{1/2 + \gamma/4} \vee n^{-\frac{2}{2+\gamma}} (1 + I(\hat{g}_n) + I(g_0)) \Big). \qquad (16)$$
From this point, the proof involves simple algebraic manipulation of (16). To simplify notation, let $\hat{h} = h_Q(g_0, \hat{g}_n)$, $\hat{I} = I(\hat{g}_n)$, and $I_0 = I(g_0)$. There are four possibilities:

Case a. $\hat{h} \ge n^{-1/(2+\gamma)} (1 + \hat{I} + I_0)^{1/2}$ and $\hat{I} \ge 1 + I_0$. From (16), either

$$\hat{h}^2/4 + \lambda_n \hat{I}^2/2 \le O_P(n^{-1/2}) \hat{h}^{1 - \gamma/2} \hat{I}^{1/2 + \gamma/4} \quad \text{or} \quad \hat{h}^2/4 + \lambda_n \hat{I}^2/2 \le \lambda_n I_0^2/2,$$

which implies, respectively, either

$$\hat{h} \le \lambda_n^{-1/2} O_P(n^{-2/(2+\gamma)}), \quad \hat{I} \le \lambda_n^{-1} O_P(n^{-2/(2+\gamma)}) \quad \text{or} \quad \hat{h} \le O_P(\lambda_n^{1/2} I_0), \quad \hat{I} \le O_P(I_0).$$

Both scenarios conclude the proof if we set $\lambda_n^{-1} = O_P(n^{2/(\gamma+2)} (1 + I_0))$.

Case b. $\hat{h} \ge n^{-1/(2+\gamma)} (1 + \hat{I} + I_0)^{1/2}$ and $\hat{I} < 1 + I_0$. From (16), either

$$\hat{h}^2/4 + \lambda_n \hat{I}^2/2 \le O_P(n^{-1/2}) \hat{h}^{1 - \gamma/2} (1 + I_0)^{1/2 + \gamma/4} \quad \text{or} \quad \hat{h}^2/4 + \lambda_n \hat{I}^2/2 \le \lambda_n I_0^2/2,$$

which implies, respectively, either

$$\hat{h} \le (1 + I_0)^{1/2} O_P(n^{-1/(\gamma+2)}), \quad \hat{I} \le 1 + I_0 \quad \text{or} \quad \hat{h} \le O_P(\lambda_n^{1/2} I_0), \quad \hat{I} \le O_P(I_0).$$

Both scenarios conclude the proof if we set $\lambda_n^{-1} = O_P(n^{2/(\gamma+2)} (1 + I_0))$.

Case c. $\hat{h} \le n^{-1/(2+\gamma)} (1 + \hat{I} + I_0)^{1/2}$ and $\hat{I} \ge 1 + I_0$. From (16),

$$\hat{h}^2/4 + \lambda_n \hat{I}^2/2 \le O_P(n^{-2/(2+\gamma)}) \hat{I},$$

which implies that $\hat{h} \le O_P(n^{-1/(2+\gamma)}) \hat{I}^{1/2}$ and $\hat{I} \le \lambda_n^{-1} O_P(n^{-2/(2+\gamma)})$. This means that $\hat{h} \le O_P(\lambda_n^{1/2})(1 + I_0)$, $\hat{I} \le O_P(1 + I_0)$ if we set $\lambda_n^{-1} = O_P(n^{2/(2+\gamma)})(1 + I_0)$.

Case d. $\hat{h} \le n^{-1/(2+\gamma)} (1 + \hat{I} + I_0)^{1/2}$ and $\hat{I} \le 1 + I_0$. Part (a) of the theorem is immediate.

Finally, part (b) is a simple consequence of part (a), using the same argument as in Thm. 9 of [15].

6 Simulation results
In this section, we describe the results of various simulations that demonstrate the practical viability of our estimators, as well as their convergence behavior. We experimented with our estimators using various choices of P and Q, including Gaussian, beta, mixture of Gaussians, and multivariate Gaussian distributions. Here we report results in terms of KL estimation error. For each of the eight estimation problems described here, we experiment with increasing sample sizes (the sample size n ranges from 100 to 10^4 or more). Error bars are obtained by replicating each set-up 250 times. For all simulations, we report our estimator's performance using the simple fixed rate $\lambda_n \propto 1/n$, noting that this may be a suboptimal rate. We set the kernel width to be relatively small ($\sigma = .1$) for one-dimensional data, and larger for higher dimensions. We use M1 to denote the method in which $\mathcal{G}$ is the RKHS, and M2 for the method in which $\log \mathcal{G}$ is the RKHS.
[Figure 1 occupied this region: eight panels plotting estimated KL divergence against sample size (100 to 50000, log scale) for the pairs KL(Beta(1,2), Unif[0,1]); KL(1/2 Nt(0,1) + 1/2 Nt(1,1), Unif[-5,5]); KL(Nt(0,1), Nt(4,2)); KL(Nt(4,2), Nt(0,1)); KL(Nt(0,I2), Unif[-3,3]^2); KL(Nt(0,I2), Nt(1,I2)); KL(Nt(0,I3), Unif[-3,3]^3); KL(Nt(0,I3), Nt(1,I3)). Each panel shows M1 and M2 (with the kernel width sigma and rate lambda given in the legend) against WKV with partition sizes s = n^{1/3}, n^{1/2}, n^{2/3}; a horizontal line marks the true KL value (0.1931, 0.414624, 1.9492, 4.72006, 0.777712, 0.959316, 1.16657 and 1.43897, respectively).]

Figure 1. Results of estimating KL divergences for various choices of probability distributions. In all plots, the X-axis is the number of data points plotted on a log scale, and the Y-axis is the estimated value. The error bar is obtained by replicating the experiment 250 times. Nt(a, Ik) denotes a truncated normal distribution of k dimensions with mean (a, ..., a) and identity covariance matrix.
Our methods are compared to algorithm A in Wang et al. [22], which was shown empirically to be one of the best methods in the literature. Their method, denoted by WKV, is based on data-dependent partitioning of the covariate space. Naturally, the performance of WKV is critically dependent on the amount s of data allocated to each partition; here we report results with $s \propto n^\gamma$, where $\gamma = 1/3, 1/2, 2/3$.

The first four plots present results with univariate distributions. In the first two, our estimators M1 and M2 appear to have a faster convergence rate than WKV. The WKV estimator performs very well in the third example, but rather badly in the fourth. The next four plots present results with two- and three-dimensional data. Again, M1 has the best convergence rate in all examples. The M2 estimator does not converge in the last example, suggesting that the underlying function class exhibits very strong bias. The WKV methods have weak convergence rates despite different choices of the partition sizes. It is worth noting that as one increases the number of dimensions, histogram-based methods such as WKV become increasingly difficult to implement, whereas increasing dimension has only a mild effect on our method.
References
[1] S. M. Ali and S. D. Silvey. A general class of coefficients of divergence of one distribution from another. J. Royal Stat. Soc. Series B, 28:131-142, 1966.
[2] P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101:138-156, 2006.
[3] P. Bickel and Y. Ritov. Estimating integrated squared density derivatives: Sharp best order of convergence estimates. Sankhyā Ser. A, 50:381-393, 1988.
[4] L. Birgé and P. Massart. Estimation of integral functionals of a density. Ann. Statist., 23(1):11-29, 1995.
[5] M. Broniatowski and A. Keziou. Parametric estimation and tests through divergences. Technical report, LSTA, Université Pierre et Marie Curie, 2004.
[6] I. Csiszár. Information-type measures of difference of probability distributions and indirect observation. Studia Sci. Math. Hungar, 2:299-318, 1967.
[7] L. Gyorfi and E. C. van der Meulen. Density-free convergence properties of various estimators of entropy. Computational Statistics and Data Analysis, 5:425-436, 1987.
[8] P. Hall and S. Morton. On estimation of entropy. Ann. Inst. Statist. Math., 45(1):69-88, 1993.
[9] I. A. Ibragimov and R. Z. Khasminskii. On the nonparametric estimation of functionals. In Symposium in Asymptotic Statistics, pages 41-52, 1978.
[10] H. Joe. Estimation of entropy and other functionals of a multivariate density. Ann. Inst. Statist. Math., 41:683-697, 1989.
[11] A. Keziou. Dual representation of φ-divergences and applications. C. R. Acad. Sci. Paris, Ser. I 336, pages 857-862, 2003.
[12] B. Laurent. Efficient estimation of integral functionals of a density. Ann. Statist., 24(2):659-681, 1996.
[13] B. Ya. Levit. Asymptotically efficient estimation of nonlinear functionals. Problems Inform. Transmission, 14:204-209, 1978.
[14] X. Nguyen, M. J. Wainwright, and M. I. Jordan. On divergences, surrogate losses and decentralized detection. Technical Report 695, Dept. of Statistics, UC Berkeley, October 2005.
[15] X. Nguyen, M. J. Wainwright, and M. I. Jordan. Nonparametric estimation of the likelihood ratio and divergence functionals. In International Symposium on Information Theory (ISIT), 2007.
[16] G. Rockafellar. Convex Analysis. Princeton University Press, Princeton, 1970.
[17] S. Saitoh. Theory of Reproducing Kernels and its Applications. Longman, Harlow, UK, 1988.
[18] B. Schölkopf and A. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
[19] F. Topsoe. Some inequalities for information divergence and related measures of discrimination. IEEE Transactions on Information Theory, 46:1602-1609, 2000.
[20] S. van de Geer. Empirical Processes in M-Estimation. Cambridge University Press, 2000.
[21] A. W. van der Vaart and J. Wellner. Weak Convergence and Empirical Processes. Springer-Verlag, New York, NY, 1996.
[22] Q. Wang, S. R. Kulkarni, and S. Verdú. Divergence estimation of continuous distributions based on data-dependent partitions. IEEE Transactions on Information Theory, 51(9):3064-3074, 2005.
[23] D. X. Zhou. The covering number in learning theory. Journal of Complexity, 18:739-767, 2002.
2,418 | 3,194 | SpAM: Sparse Additive Models

Pradeep Ravikumar, Han Liu, John Lafferty, Larry Wasserman
Machine Learning Department, Department of Statistics, Computer Science Department
Carnegie Mellon University
Pittsburgh, PA 15213
Abstract
We present a new class of models for high-dimensional nonparametric regression
and classification called sparse additive models (SpAM). Our methods combine
ideas from sparse linear modeling and additive nonparametric regression. We derive a method for fitting the models that is effective even when the number of
covariates is larger than the sample size. A statistical analysis of the properties of
SpAM is given together with empirical results on synthetic and real data, showing that SpAM can be effective in fitting sparse nonparametric models in high
dimensional data.
1 Introduction
Substantial progress has been made recently on the problem of fitting high-dimensional linear regression models of the form $Y_i = X_i^T \beta^* + \varepsilon_i$, for $i = 1, \ldots, n$. Here $Y_i$ is a real-valued response, $X_i$ is a p-dimensional predictor and $\varepsilon_i$ is a mean zero error term. Finding an estimate of $\beta^*$ when $p > n$ that is both statistically well-behaved and computationally efficient has proved challenging; however, the lasso estimator (Tibshirani (1996)) has been remarkably successful. The lasso estimator $\hat{\beta}$ minimizes the $\ell_1$-penalized sum of squares

$$\sum_i (Y_i - X_i^T \beta)^2 + \lambda \sum_{j=1}^p |\beta_j| \qquad (1)$$

with the $\ell_1$ penalty $\|\beta\|_1$ encouraging sparse solutions, where many components $\hat{\beta}_j$ are zero. The good empirical success of this estimator has recently been backed up by results confirming that it has strong theoretical properties; see (Greenshtein and Ritov, 2004; Zhao and Yu, 2007; Meinshausen and Yu, 2006; Wainwright, 2006).
The nonparametric regression model $Y_i = m(X_i) + \varepsilon_i$, where $m$ is a general smooth function, relaxes the strong assumptions made by a linear model, but is much more challenging in high dimensions. Hastie and Tibshirani (1999) introduced the class of additive models of the form

$$Y_i = \sum_{j=1}^p m_j(X_{ij}) + \varepsilon_i \qquad (2)$$

which is less general, but can be more interpretable and easier to fit; in particular, an additive model can be estimated using a coordinate descent Gauss-Seidel procedure called backfitting. An extension of the additive model is the functional ANOVA model

$$Y_i = \sum_{1 \le j \le p} m_j(X_{ij}) + \sum_{j < k} m_{j,k}(X_{ij}, X_{ik}) + \sum_{j < k < \ell} m_{j,k,\ell}(X_{ij}, X_{ik}, X_{i\ell}) + \cdots + \varepsilon_i \qquad (3)$$
which allows interactions among the variables. Unfortunately, additive models only have good
statistical and computational behavior when the number of variables p is not large relative to the
sample size n.
In this paper we introduce sparse additive models (SpAM) that extend the advantages of sparse linear
models to the additive, nonparametric setting. The underlying model is the same as in (2), but constraints are placed on the component functions $\{m_j\}_{1 \le j \le p}$ to simultaneously encourage smoothness
of each component and sparsity across components; the penalty is similar to that used by the COSSO
of Lin and Zhang (2006). The SpAM estimation procedure we introduce allows the use of arbitrary
nonparametric smoothing techniques, and in the case where the underlying component functions are
linear, it reduces to the lasso. It naturally extends to classification problems using generalized additive models. The main results of the paper are (i) the formulation of a convex optimization problem
for estimating a sparse additive model, (ii) an efficient backfitting algorithm for constructing the
estimator, (iii) simulations showing the estimator has excellent behavior on some simulated and real
data, even when p is large, and (iv) a statistical analysis of the theoretical properties of the estimator
that support its good empirical performance.
2 The SpAM Optimization Problem

In this section we describe the key idea underlying SpAM. We first present a population version of the procedure that intuitively suggests how sparsity is achieved. We then present an equivalent convex optimization problem. In the following section we derive a backfitting procedure for solving this optimization problem in the finite sample setting.

To motivate our approach, we first consider a formulation that scales each component function $g_j$ by a scalar $\beta_j$, and then imposes an $\ell_1$ constraint on $\beta = (\beta_1, \ldots, \beta_p)^T$. For $j \in \{1, \ldots, p\}$, let $\mathcal{H}_j$ denote the Hilbert space of measurable functions $f_j(x_j)$ of the single scalar variable $x_j$, such that $E(f_j(X_j)) = 0$ and $E(f_j(X_j)^2) < \infty$, furnished with the inner product

$$\langle f_j, f_j' \rangle = E\big[ f_j(X_j)\, f_j'(X_j) \big]. \qquad (4)$$
Let $\mathcal{H}_{add} = \mathcal{H}_1 + \mathcal{H}_2 + \cdots + \mathcal{H}_p$ denote the Hilbert space of functions of $(x_1, \ldots, x_p)$ that have an additive form: $f(x) = \sum_j f_j(x_j)$. The standard additive model optimization problem, in the population setting, is

$$\min_{f_j \in \mathcal{H}_j,\ 1 \le j \le p} E\Big(Y - \sum_{j=1}^p f_j(X_j)\Big)^2 \qquad (5)$$

where $m(x) = E(Y \mid X = x)$ is the unknown regression function. Now consider the following modification of this problem that imposes additional constraints:

$$(P) \quad \min_{\beta \in \mathbb{R}^p,\ g_j \in \mathcal{H}_j} E\Big(Y - \sum_{j=1}^p \beta_j g_j(X_j)\Big)^2 \qquad (6a)$$
$$\text{subject to} \quad \sum_{j=1}^p |\beta_j| \le L \qquad (6b)$$
$$E(g_j^2) = 1, \quad j = 1, \ldots, p \qquad (6c)$$
$$E(g_j) = 0, \quad j = 1, \ldots, p \qquad (6d)$$

noting that $g_j$ is a function while $\beta$ is a vector. Intuitively, the constraint that $\beta$ lies in the $\ell_1$-ball $\{\beta : \|\beta\|_1 \le L\}$ encourages sparsity of the estimated $\beta$, just as for the parametric lasso. When $\beta$ is sparse, the estimated additive function $f(x) = \sum_{j=1}^p f_j(x_j) = \sum_{j=1}^p \beta_j g_j(x_j)$ will also be sparse, meaning that many of the component functions $f_j(\cdot) = \beta_j g_j(\cdot)$ are identically zero. The constraints (6c) and (6d) are imposed for identifiability; without (6c), for example, one could always satisfy (6b) by rescaling.
While this optimization problem makes plain the role of $\ell_1$ regularization of $\beta$ in achieving sparsity, it has the unfortunate drawback of not being convex. More specifically, while the optimization problem is convex in $\beta$ and $\{g_j\}$ separately, it is not convex in $\beta$ and $\{g_j\}$ jointly.
However, consider the following related optimization problem:

$$(Q) \quad \min_{f_j \in \mathcal{H}_j} E\Big(Y - \sum_{j=1}^p f_j(X_j)\Big)^2 \qquad (7a)$$
$$\text{subject to} \quad \sum_{j=1}^p \sqrt{E(f_j^2(X_j))} \le L \qquad (7b)$$
$$E(f_j) = 0, \quad j = 1, \ldots, p. \qquad (7c)$$

This problem is convex in $\{f_j\}$. Moreover, the solutions to problems (P) and (Q) are equivalent:

$\{\beta_j^*\}, \{g_j^*\}$ optimizes (P) implies $\{f_j^* = \beta_j^* g_j^*\}$ optimizes (Q);
$\{f_j^*\}$ optimizes (Q) implies $\beta^* = (\|f_j^*\|_2)^T$, $\{g_j^* = f_j^* / \|f_j^*\|_2\}$ optimizes (P).
While optimization problem (Q) has the important virtue of being convex, the way it encourages sparsity is not intuitive; the following observation provides some insight. Consider the set $C \subset \mathbb{R}^4$ defined by

$$C = \Big\{ (f_{11}, f_{12}, f_{21}, f_{22})^T \in \mathbb{R}^4 : \sqrt{f_{11}^2 + f_{12}^2} + \sqrt{f_{21}^2 + f_{22}^2} \le L \Big\}.$$

Then the projection $\pi_{12} C$ onto the first two components is an $\ell_2$ ball. However, the projection $\pi_{13} C$ onto the first and third components is an $\ell_1$ ball. In this way, it can be seen that the constraint $\sum_j \|f_j\|_2 \le L$ acts as an $\ell_1$ constraint across components to encourage sparsity, while it acts as an $\ell_2$ constraint within components to encourage smoothness, as in a ridge regression penalty. It is thus crucial that the norm $\|f_j\|_2$ appears in the constraint, and not its square $\|f_j\|_2^2$. For the purposes of sparsity, this constraint could be replaced by $\sum_j \|f_j\|_q \le L$ for any $q \ge 1$. In case each $f_j$ is linear, $(f_j(x_{1j}), \ldots, f_j(x_{nj})) = \beta_j (x_{1j}, \ldots, x_{nj})$, the optimization problem reduces to the lasso.
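A quick numerical illustration of this point (our own sketch, not from the paper): the proximal operator associated with the sum-of-norms penalty λ Σ_j ||f_j||_2 shrinks each block toward zero multiplicatively and sets an entire block exactly to zero once its norm drops below λ — ℓ1-style selection across blocks, ℓ2-style shrinkage within.

```python
import numpy as np

def group_soft_threshold(blocks, lam):
    """Prox of lam * sum_j ||f_j||_2, applied blockwise."""
    out = []
    for f in blocks:
        norm = np.linalg.norm(f)
        out.append(max(0.0, 1.0 - lam / norm) * f if norm > 0 else f)
    return out

blocks = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]
print(group_soft_threshold(blocks, lam=1.0))
# first block shrunk to 4/5 of its length; second block set exactly to zero
```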
The use of scaling coefficients together with a nonnegative garrote penalty, similar to our problem
(P), is considered by Yuan (2007). However, the component functions g j are fixed, so that the
procedure is not asymptotically consistent. The form of the optimization problem (Q) is similar
to that of the COSSO for smoothing spline ANOVA models (Lin and Zhang, 2006); however, our
method differs significantly from the COSSO, as discussed below. In particular, our method is
scalable and easy to implement even when p is much larger than n.
3 A Backfitting Algorithm for SpAM

We now derive a coordinate descent algorithm for fitting a sparse additive model. We assume that we observe $Y = m(X) + \varepsilon$, where $\varepsilon$ is mean zero Gaussian noise. We write the Lagrangian for the optimization problem (Q) as

$$\mathcal{L}(f, \lambda, \mu) = \frac{1}{2} E\Big(Y - \sum_{j=1}^p f_j(X_j)\Big)^2 + \lambda \sum_{j=1}^p \sqrt{E(f_j^2(X_j))} + \sum_j \mu_j E(f_j). \qquad (8)$$

Let $R_j = Y - \sum_{k \ne j} f_k(X_k)$ be the j-th residual. The stationary condition for minimizing $\mathcal{L}$ as a function of $f_j$, holding the other components $f_k$ fixed for $k \ne j$, is expressed in terms of the Frechet derivative $\delta\mathcal{L}$ as

$$\delta\mathcal{L}(f, \lambda, \mu; \delta f_j) = E\big[ (f_j - R_j + \lambda v_j)\, \delta f_j \big] = 0 \qquad (9)$$

for any $\delta f_j \in \mathcal{H}_j$ satisfying $E(\delta f_j) = 0$, where $v_j \in \partial \sqrt{E(f_j^2)}$ is an element of the subgradient, satisfying $E(v_j^2) \le 1$ and $v_j = f_j / \sqrt{E(f_j^2)}$ if $E(f_j^2) \ne 0$. Therefore, conditioning on $X_j$, the stationary condition (9) implies

$$f_j + \lambda v_j = E(R_j \mid X_j). \qquad (10)$$

Letting $P_j = E[R_j \mid X_j]$ denote the projection of the residual onto $\mathcal{H}_j$, the solution satisfies

$$\Big(1 + \frac{\lambda}{\sqrt{E(f_j^2)}}\Big) f_j = P_j \quad \text{if } \sqrt{E(P_j^2)} > \lambda \qquad (11)$$
Input: Data $(X_i, Y_i)$, regularization parameter $\lambda$.
Initialize $f_j = f_j^{(0)}$, for $j = 1, \ldots, p$.
Iterate until convergence:
    For each $j = 1, \ldots, p$:
        Compute the residual: $R_j = Y - \sum_{k \ne j} f_k(X_k)$;
        Estimate the projection $P_j = E[R_j \mid X_j]$ by smoothing: $\hat{P}_j = S_j R_j$;
        Estimate the norm $s_j = \sqrt{E[P_j^2]}$ using, for example, (15) or (35);
        Soft-threshold: $f_j = \big[1 - \lambda/\hat{s}_j\big]_+ \hat{P}_j$;
        Center: $f_j \leftarrow f_j - \mathrm{mean}(f_j)$.
Output: Component functions $f_j$ and estimator $\hat{m}(X_i) = \sum_j f_j(X_{ij})$.

Figure 1: THE SPAM BACKFITTING ALGORITHM
and $f_j = 0$ otherwise. Condition (11), in turn, implies

$$\Big(1 + \frac{\lambda}{\sqrt{E(f_j^2)}}\Big) \sqrt{E(f_j^2)} = \sqrt{E(P_j^2)} \quad \text{or} \quad \sqrt{E(f_j^2)} = \sqrt{E(P_j^2)} - \lambda. \qquad (12)$$

Thus, we arrive at the following multiplicative soft-thresholding update for $f_j$:

$$f_j = \Big[1 - \frac{\lambda}{\sqrt{E(P_j^2)}}\Big]_+ P_j \qquad (13)$$

where $[\cdot]_+$ denotes the positive part. In the finite sample case, as in standard backfitting (Hastie and Tibshirani, 1999), we estimate the projection $E[R_j \mid X_j]$ by a smooth of the residuals:

$$\hat{P}_j = S_j R_j \qquad (14)$$

where $S_j$ is a linear smoother, such as a local linear or kernel smoother. Let $\hat{s}_j$ be an estimate of $\sqrt{E[P_j^2]}$. A simple but biased estimate is

$$\hat{s}_j = \frac{1}{\sqrt{n}} \|\hat{P}_j\|_2 = \sqrt{\mathrm{mean}(\hat{P}_j^2)}. \qquad (15)$$

More accurate estimators are possible; an example is given in the appendix. We have thus derived the SpAM backfitting algorithm given in Figure 1.
While the motivating optimization problem (Q) is similar to that considered in the COSSO (Lin
and Zhang, 2006) for smoothing splines, the SpAM backfitting algorithm decouples smoothing and
sparsity, through a combination of soft-thresholding and smoothing. In particular, SpAM backfitting
can be carried out with any nonparametric smoother; it is not restricted to splines. Moreover, by
iteratively estimating over the components and using soft thresholding, our procedure is simple to
implement and scales to high dimensions.
3.1 SpAM for Nonparametric Logistic Regression

The SpAM backfitting procedure can be extended to nonparametric logistic regression for classification. The additive logistic model is

$$P(Y = 1 \mid X) \equiv p(X; f) = \frac{\exp\big(\sum_{j=1}^p f_j(X_j)\big)}{1 + \exp\big(\sum_{j=1}^p f_j(X_j)\big)} \qquad (16)$$

where $Y \in \{0, 1\}$, and the population log-likelihood is $\ell(f) = E\big[ Y f(X) - \log(1 + \exp f(X)) \big]$. Recall that in the local scoring algorithm for generalized additive models (Hastie and Tibshirani, 1999) in the logistic case, one runs the backfitting procedure within Newton's method. Here one iteratively computes the transformed response for the current estimate $f_0$

$$Z_i = f_0(X_i) + \frac{Y_i - p(X_i; f_0)}{p(X_i; f_0)(1 - p(X_i; f_0))} \qquad (17)$$

and weights $w(X_i) = p(X_i; f_0)(1 - p(X_i; f_0))$, and carries out a weighted backfitting of $(Z, X)$ with weights $w$. The weighted smooth is given by

$$\hat{P}_j = \frac{S_j (w R_j)}{S_j w}. \qquad (18)$$
To incorporate the sparsity penalty, we first note that the Lagrangian is given by

$$\mathcal{L}(f, \lambda, \mu) = E\big[ \log(1 + \exp f(X)) - Y f(X) \big] + \lambda \sum_{j=1}^p \sqrt{E(f_j^2(X_j))} + \sum_j \mu_j E(f_j) \qquad (19)$$

and the stationary condition for component function $f_j$ is $E[p - Y \mid X_j] + \lambda v_j = 0$, where $v_j$ is an element of the subgradient $\partial \sqrt{E(f_j^2)}$. As in the unregularized case, this condition is nonlinear in $f$, and so we linearize the gradient of the log-likelihood around $f_0$. This yields the linearized condition $E[w(X)(f(X) - Z) \mid X_j] + \lambda v_j = 0$. When $E(f_j^2) \ne 0$, this implies the condition

$$\Big( E[w \mid X_j] + \frac{\lambda}{\sqrt{E(f_j^2)}} \Big) f_j(X_j) = E[w R_j \mid X_j]. \qquad (20)$$

In the finite sample case, in terms of the smoothing matrix $S_j$, this becomes

$$f_j = \frac{S_j (w R_j)}{S_j w + \lambda / \sqrt{E(f_j^2)}}. \qquad (21)$$

If $\|S_j (w R_j)\|_2 < \lambda$, then $f_j = 0$. Otherwise, this implicit, nonlinear equation for $f_j$ cannot be solved explicitly, so we propose to iterate until convergence:

$$f_j \leftarrow \frac{S_j (w R_j)}{S_j w + \lambda \sqrt{n} / \|f_j\|_2}. \qquad (22)$$

When $\lambda = 0$, this yields the standard local scoring update (18). An example of logistic SpAM is given in Section 5.
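For concreteness, here is a sketch of one outer local-scoring pass implementing updates (17), (18) and (22) (our own illustration under the same smoothing-matrix setup as the regression sketch above; the warm start and the inner iteration count are our choices, not prescribed by the paper):

```python
import numpy as np

def logistic_spam_step(F, Y, S, lam, n_inner=10):
    """One outer local-scoring pass for logistic SpAM (Section 3.1).

    F : (n, p) current component fits at the sample points.
    S : list of p smoothing matrices (one per covariate).
    """
    n, p = F.shape
    f0 = F.sum(axis=1)
    prob = 1.0 / (1.0 + np.exp(-f0))
    w = np.maximum(prob * (1.0 - prob), 1e-6)   # Newton weights
    Z = f0 + (Y - prob) / w                     # transformed response (17)
    for j in range(p):
        R = Z - (F.sum(axis=1) - F[:, j])       # partial residual
        num = S[j] @ (w * R)
        if np.linalg.norm(num) < lam:           # exact-zero condition
            F[:, j] = 0.0
            continue
        fj = num / (S[j] @ w)                   # warm start (lam = 0 case, Eq. 18)
        for _ in range(n_inner):                # fixed-point iteration (22)
            fj = num / (S[j] @ w + lam * np.sqrt(n) / np.linalg.norm(fj))
        F[:, j] = fj - fj.mean()                # center, as in the linear case
    return F
```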
4 Properties of SpAM

4.1 SpAM is Persistent

The notion of risk consistency, or persistence, was studied by Juditsky and Nemirovski (2000) and Greenshtein and Ritov (2004) in the context of linear models. Let $(X, Y)$ denote a new pair (independent of the observed data) and define the predictive risk when predicting $Y$ with $f(X)$ by

$$R(f) = E(Y - f(X))^2. \qquad (23)$$

Since we consider predictors of the form $f(x) = \sum_j \beta_j g_j(x_j)$, we also write the risk as $R(\beta, g)$ where $\beta = (\beta_1, \ldots, \beta_p)$ and $g = (g_1, \ldots, g_p)$. Following Greenshtein and Ritov (2004), we say that an estimator $\hat{m}_n$ is persistent relative to a class of functions $\mathcal{M}_n$ if

$$R(\hat{m}_n) - R(m_n^*) \xrightarrow{P} 0 \qquad (24)$$

where $m_n^* = \arg\min_{f \in \mathcal{M}_n} R(f)$ is the predictive oracle. Greenshtein and Ritov (2004) showed that the lasso is persistent for the class of linear models $\mathcal{M}_n = \{f(x) = x^T \beta : \|\beta\|_1 \le L_n\}$ if $L_n = o((n/\log n)^{1/4})$. We show a similar result for SpAM.

Theorem 4.1. Suppose that $p_n \le e^{n^\xi}$ for some $\xi < 1$. Then SpAM is persistent relative to the class of additive models $\mathcal{M}_n = \big\{ f(x) = \sum_{j=1}^p \beta_j g_j(x_j) : \|\beta\|_1 \le L_n \big\}$ if $L_n = o\big(n^{(1-\xi)/4}\big)$.
4.2 SpAM is Sparsistent

In the case of linear regression, with $m_j(X_j) = \beta_j^T X_j$, Wainwright (2006) shows that under certain conditions on $n$, $p$, $s = |\mathrm{supp}(\beta)|$, and the design matrix $X$, the lasso recovers the sparsity pattern asymptotically; that is, the lasso estimator $\hat{\beta}_n$ is sparsistent: $P\big(\mathrm{supp}(\beta) = \mathrm{supp}(\hat{\beta}_n)\big) \to 1$. We show a similar result for SpAM with the sparse backfitting procedure.

For the purpose of analysis, we use orthogonal function regression as the smoothing procedure. For each $j = 1, \ldots, p$ let $\psi_j$ be an orthogonal basis for $\mathcal{H}_j$. We truncate the basis to finite dimension $d_n$, and let $d_n \to \infty$ such that $d_n/n \to 0$. Let $\Psi_j$ denote the $n \times d$ matrix $\Psi_j(i, k) = \psi_{jk}(X_{ij})$. If $A \subset \{1, \ldots, p\}$, we denote by $\Psi_A$ the $n \times d|A|$ matrix where for each $i \in A$, $\Psi_i$ appears as a submatrix in the natural way. The SpAM optimization problem can then be written as

$$\min_\beta \frac{1}{2n} \Big\| Y - \sum_{j=1}^p \Psi_j \beta_j \Big\|^2 + \lambda_n \sum_{j=1}^p \sqrt{\frac{1}{n} \beta_j^T \Psi_j^T \Psi_j \beta_j} \qquad (25)$$

where each $\beta_j$ is a d-dimensional vector. Let $S$ denote the true set of variables $\{j : m_j \ne 0\}$, with $s = |S|$, and let $S^c$ denote its complement. Let $\hat{S}_n = \{j : \hat{\beta}_j \ne 0\}$ denote the estimated set of variables from the minimizer $\hat{\beta}_n$ of (25).

Theorem 4.2. Suppose that $\Psi$ satisfies the conditions

$$\Lambda_{\max}\Big(\frac{1}{n} \Psi_S^T \Psi_S\Big) \le C_{\max} < \infty \quad \text{and} \quad \Lambda_{\min}\Big(\frac{1}{n} \Psi_S^T \Psi_S\Big) \ge C_{\min} > 0 \qquad (26)$$

$$\Big\| \frac{1}{n} \Psi_{S^c}^T \Psi_S \Big(\frac{1}{n} \Psi_S^T \Psi_S\Big)^{-1} \Big\| \le \sqrt{\frac{C_{\min}}{C_{\max}}}\, \frac{1 - \delta}{\sqrt{s}}, \quad \text{for some } 0 < \delta \le 1. \qquad (27)$$

Let the regularization parameter $\lambda_n \to 0$ be chosen to satisfy

$$\lambda_n \sqrt{s\, d_n} \to 0, \qquad \frac{s}{d_n \lambda_n} \to 0, \qquad \text{and} \qquad \frac{d_n (\log d_n + \log(p - s))}{n \lambda_n^2} \to 0. \qquad (28)$$

Then SpAM is sparsistent: $P\big(\hat{S}_n = S\big) \to 1$.

5 Experiments
In this section we present experimental results for SpAM applied to both synthetic and real data, including regression and classification examples that illustrate the behavior of the algorithm in various conditions. We first use simulated data to investigate the performance of the SpAM backfitting algorithm, where the true sparsity pattern is known. We then apply SpAM to some real data. If not explicitly stated otherwise, the data are always rescaled to lie in a d-dimensional cube $[0, 1]^d$, and a kernel smoother with Gaussian kernel is used. To tune the penalization parameter $\lambda$, we use a $C_p$ statistic, which is defined as

$$C_p(\hat{f}) = \frac{1}{n} \sum_{i=1}^n \Big(Y_i - \sum_{j=1}^p \hat{f}_j(X_{ij})\Big)^2 + \frac{2 \hat{\sigma}^2}{n} \sum_{j=1}^p \mathrm{trace}(S_j)\, 1[\hat{f}_j \ne 0] \qquad (29)$$

where $S_j$ is the smoothing matrix for the j-th dimension and $\hat{\sigma}^2$ is the estimated variance.
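A direct transcription of (29) (our own sketch; how the variance estimate is obtained is left to the caller):

```python
import numpy as np

def cp_score(Y, F, S, sigma2):
    """C_p statistic of Eq. (29).

    Y : (n,) response; F : (n, p) fitted components f_j(X_ij);
    S : list of smoothing matrices; sigma2 : estimated noise variance.
    """
    n = len(Y)
    rss = np.mean((Y - F.sum(axis=1)) ** 2)
    df = sum(np.trace(S[j]) for j in range(F.shape[1])
             if not np.allclose(F[:, j], 0))
    return rss + 2 * sigma2 * df / n
```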
5.1 Simulations

We first apply SpAM to an example from (Härdle et al., 2004). A dataset with sample size n = 150 is generated from the following 200-dimensional additive model:

$$Y_i = f_1(x_{i1}) + f_2(x_{i2}) + f_3(x_{i3}) + f_4(x_{i4}) + \varepsilon_i \qquad (30)$$

$$f_1(x) = -2\sin(2x), \quad f_2(x) = x^2 - \tfrac{1}{3}, \quad f_3(x) = x - \tfrac{1}{2}, \quad f_4(x) = e^{-x} + e^{-1} - 1 \qquad (31)$$

and $f_j(x) = 0$ for $j \ge 5$, with noise $\varepsilon_i \sim N(0, 1)$. These data therefore have 196 irrelevant dimensions. The results of applying SpAM with the plug-in bandwidths are summarized in Figure 2.
[Figure 2 occupied this region: a regularization-path plot of the component norms, a plot of the C_p score against the tuning parameter, a plot of the probability of correct support recovery against sample size (for p = 128 and p = 256), and panels comparing the estimated and true component functions f_1 through f_6.]

Figure 2: (Simulated data) Upper left: The empirical $\ell_2$ norm of the estimated components plotted against the tuning parameter $\lambda$; the value on the x-axis is proportional to $\sum_j \|\hat{f}_j\|_2$. Upper center: The $C_p$ scores against the tuning parameter $\lambda$; the dashed vertical line corresponds to the value of $\lambda$ which has the smallest $C_p$ score. Upper right: The proportion of 200 trials where the correct relevant variables are selected, as a function of sample size n. Lower (from left to right): Estimated (solid lines) versus true additive component functions (dashed lines) for the first 6 dimensions; the remaining components are zero.
5.2 Boston Housing

The Boston housing data was collected to study house values in the suburbs of Boston; there are altogether 506 observations with 10 covariates. The dataset has been studied by many other authors (Härdle et al., 2004; Lin and Zhang, 2006), with various transformations proposed for different covariates. To explore the sparsistency properties of our method, we add 20 irrelevant variables. Ten of them are randomly drawn from Uniform(0, 1); the remaining ten are a random permutation of the original ten covariates, so that they have the same empirical densities.
The full model (containing all 10 chosen covariates) for the Boston Housing data is:

$$\text{medv} = \alpha + f_1(\text{crim}) + f_2(\text{indus}) + f_3(\text{nox}) + f_4(\text{rm}) + f_5(\text{age}) + f_6(\text{dis}) + f_7(\text{tax}) + f_8(\text{ptratio}) + f_9(\text{b}) + f_{10}(\text{lstat}) \qquad (32)$$
The result of applying SpAM to this 30-dimensional dataset is shown in Figure 3. SpAM identifies 6 nonzero components. It correctly zeros out both types of irrelevant variables. From the full solution path, the important variables are seen to be rm, lstat, ptratio, and crim. The importance of variables nox and b is borderline. These results are basically consistent with those obtained by other authors (Härdle et al., 2004). However, using $C_p$ as the selection criterion, the variables indus, age, dis, and tax are estimated to be irrelevant, a result not seen in other studies.
5.3 SpAM for Spam

Here we consider an email spam classification problem, using the logistic SpAM backfitting algorithm from Section 3.1. This dataset has been studied by Hastie et al. (2001), using a set of 3,065 emails as a training set, and conducting hypothesis tests to choose significant variables; there are a total of 4,601 observations with p = 57 attributes, all numeric. The attributes measure the percentage of specific words or characters in the email, the average and maximum run lengths of upper case letters, and the total number of such letters. To demonstrate how SpAM performs well with sparse data, we only sample n = 300 emails as the training set, with the remaining 4,301 data points used as the test set. We also use the test data as the hold-out set to tune the penalization parameter $\lambda$. The results of a typical run of logistic SpAM are summarized in Figure 4, using plug-in bandwidths.
[Figure 3 occupied this region: three panels for the Boston housing data — component norms versus the regularization parameter, the C_p score versus the regularization parameter, and additive fits for the covariates crim, rm, ptratio and lstat.]

Figure 3: (Boston housing) Left: The empirical $\ell_2$ norm of the estimated components versus the regularization parameter $\lambda$. Center: The $C_p$ scores against $\lambda$; the dashed vertical line corresponds to the best $C_p$ score. Right: Additive fits for four relevant variables.

The spam results table accompanying Figure 4:

$\lambda$ (x10^-3) | Error | # zeros | Selected variables
5.5 | 0.2009 | 55 | {8, 54}
5.0 | 0.1725 | 51 | {8, 9, 27, 53, 54, 57}
4.5 | 0.1354 | 46 | {7, 8, 9, 17, 18, 27, 53, 54, 57, 58}
4.0 | 0.1083 (best) | 20 | {4, 6-10, 14-22, 26, 27, 38, 53-58}
3.5 | 0.1117 | 0 | ALL
3.0 | 0.1174 | 0 | ALL
2.5 | 0.1251 | 0 | ALL
2.0 | 0.1259 | 0 | ALL

[Figure 4 also plotted the empirical prediction error against the penalization parameter $\lambda$.]

Figure 4: (Email spam) Classification accuracies and variable selection for logistic SpAM.
6 Acknowledgments

This research was supported in part by NSF grant CCF-0625879 and a Siebel Scholarship to PR.
References

Greenshtein, E. and Ritov, Y. (2004). Persistency in high dimensional linear predictor-selection and the virtue of over-parametrization. Journal of Bernoulli 10 971-988.
Härdle, W., Müller, M., Sperlich, S. and Werwatz, A. (2004). Nonparametric and Semiparametric Models. Springer-Verlag Inc.
Hastie, T. and Tibshirani, R. (1999). Generalized additive models. Chapman & Hall Ltd.
Hastie, T., Tibshirani, R. and Friedman, J. H. (2001). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer-Verlag.
Juditsky, A. and Nemirovski, A. (2000). Functional aggregation for nonparametric regression. Ann. Statist. 28 681-712.
Lin, Y. and Zhang, H. H. (2006). Component selection and smoothing in multivariate nonparametric regression. Ann. Statist. 34 2272-2297.
Meinshausen, N. and Yu, B. (2006). Lasso-type recovery of sparse representations for high-dimensional data. Tech. Rep. 720, Department of Statistics, UC Berkeley.
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, Methodological 58 267-288.
Wainwright, M. (2006). Sharp thresholds for high-dimensional and noisy recovery of sparsity. Tech. Rep. 709, Department of Statistics, UC Berkeley.
Yuan, M. (2007). Nonnegative garrote component selection in functional ANOVA models. In Proceedings of AI and Statistics, AISTATS.
Zhao, P. and Yu, B. (2007). On model selection consistency of lasso. J. of Mach. Learn. Res. 7 2541-2567.
2,419 | 3,195 | Learning the structure of manifolds using random
projections
Yoav Freund∗
UC San Diego
Sanjoy Dasgupta†
UC San Diego
Mayank Kabra
UC San Diego
Nakul Verma
UC San Diego
Abstract
We present a simple variant of the k-d tree which automatically adapts to intrinsic
low dimensional structure in data.
1 Introduction
The curse of dimensionality has traditionally been the bane of nonparametric statistics, as reflected
for instance in convergence rates that are exponentially slow in dimension. An exciting way out of
this impasse is the recent realization by the machine learning and statistics communities that in many
real world problems the high dimensionality of the data is only superficial and does not represent
the true complexity of the problem. In such cases data of low intrinsic dimension is embedded in a
space of high extrinsic dimension.
For example, consider the representation of human motion generated by a motion capture system.
Such systems typically track marks located on a tight-fitting body suit. The number of markers, say
N , is set sufficiently large in order to get dense coverage of the body. A posture is represented by a
(3N)-dimensional vector that gives the 3D location of each of the N marks. However, despite this
seeming high dimensionality, the number of degrees of freedom is relatively small, corresponding
to the dozen-or-so joint angles in the body. The marker positions are more or less deterministic
functions of these joint angles. Thus the data lie in R^{3N}, but on (or very close to) a manifold [4] of
small dimension.
In the last few years, there has been an explosion of research investigating methods for learning in
the context of low-dimensional manifolds. Some of this work (for instance, [2]) exploits the low
intrinsic dimension to improve the convergence rate of supervised learning algorithms. Other work
(for instance, [12, 11, 1]) attempts to find an embedding of the data into a low-dimensional space,
thus finding an explicit mapping that reduces the dimensionality.
In this paper, we describe a new way of modeling data that resides in R^D but has lower intrinsic
dimension d < D. Unlike many manifold learning algorithms, we do not attempt to find a single
unified mapping from R^D to R^d. Instead, we hierarchically partition R^D into pieces in a manner
that is provably sensitive to low-dimensional structure. We call this spatial data structure a random
projection tree (RP tree). It can be thought of as a variant of the k-d tree that is provably manifold-adaptive.
k-d trees, RP trees, and vector quantization
Recall that a k-d tree [3] partitions R^D into hyperrectangular cells. It is built in a recursive manner,
splitting along one coordinate direction at a time. The succession of splits corresponds to a binary
tree whose leaves contain the individual cells in R^D. These trees are among the most widely-used
methods for spatial partitioning in machine learning and computer vision.
∗ Corresponding author: yfreund@cs.ucsd.edu.
† Dasgupta and Verma acknowledge the support of NSF, under grants IIS-0347646 and IIS-0713540.
Figure 1: Left: A spatial partitioning of R^2 induced by a k-d tree with three levels. The dots are data
vectors; each circle represents the mean of the vectors in one cell. Right: Partitioning induced by an
RP tree.
On the left part of Figure 1 we illustrate a k-d tree for a set of vectors in R^2. The leaves of the tree
partition R^D into cells; given a query point q, the cell containing q is identified by traversing down
the k-d tree. Each cell can be thought of as having a representative vector: its mean, depicted in the
figure by a circle. The partitioning together with these mean vectors define a vector quantization
(VQ) of R^2: a mapping from R^2 to a finite set of representative vectors (called a "codebook" in the
context of lossy compression methods). A good property of this tree-structured vector quantization
is that a vector can be mapped efficiently to its representative. The design goal of VQ is to minimize
the error introduced by replacing vectors with their representative.
We quantify the VQ error by the average squared Euclidean distance between a vector in the set and
the representative vector to which it is mapped. This error is closely related (in fact, proportional) to
the average diameter of cells, that is, the average squared distance between pairs of points in a cell.¹
As the depth of the k-d tree increases the diameter of the cells decreases and so does the VQ error.
However, in high dimension, the rate of decrease of the average diameter can be very slow. In fact,
as we show in the supplementary material, there are data sets in R^D for which a k-d tree requires D
levels in order to halve the diameter. This slow rate of decrease of cell diameter is fine if D = 2 as
in Figure 1, but it is disastrous if D = 1000. Constructing 1000 levels of the tree requires 2^{1000} data
points! This problem is a real one that has been observed empirically: k-d trees are prone to a curse
of dimensionality.
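As a concrete handle on this quantity, the following sketch computes the average VQ error of a partition into cells (an illustrative Python helper, not code from the paper):

import numpy as np

def avg_vq_error(cells):
    """Average squared distance from each point to the mean of its cell.

    `cells` is a list of (n_i, D) arrays, one array per leaf of the tree.
    """
    total_sq_err = sum(((S - S.mean(axis=0)) ** 2).sum() for S in cells)
    n = sum(len(S) for S in cells)
    return total_sq_err / n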
What if the data have low intrinsic dimension? In general, k-d trees will not be able to benefit from
this; in fact the bad example mentioned above has intrinsic dimension d = 1. But we show that
a simple variant of the k-d tree does indeed decrease cell diameters much more quickly. Instead
of splitting along coordinate directions, we use randomly chosen unit vectors, and instead of splitting data exactly at the median, we use a more carefully chosen split point. We call the resulting
data structure a random projection tree (Figure 1, right) and we show that it admits the following
theoretical guarantee (formal statement is in the next section).
Pick any cell C in the RP tree, and suppose the data in C have intrinsic dimension
d. Pick a descendant cell ≥ d levels below; then with constant probability, this
descendant has average diameter at most half that of C.²
There is no dependence at all on the extrinsic dimensionality (D) of the data. We thus have a
vector quantization construction method for which the diameter of the cells depends on the intrinsic
dimension, rather than the extrinsic dimension of the data.
A large part of the benefit of RP trees comes from the use of random unit directions, which is
rather like running k-d trees with a preprocessing step in which the data are projected into a random
¹ This is in contrast to the max diameter, the maximum distance between two vectors in a cell.
² Here the probability is taken over the randomness in constructing the tree.
low-dimensional subspace. In fact, a recent experimental study of nearest neighbor algorithms [8]
observes that a similar pre-processing step improves the performance of nearest neighbor schemes
based on spatial data structures. Our work provides a theoretical explanation for this improvement
and shows both theoretically and experimentally that this improvement is significant. The explanation we provide is based on the assumption that the data has low intrinsic dimension.
Another spatial data structure based on random projections is the locality sensitive hashing scheme
[6].
Manifold learning and near neighbor search
The fast rate of diameter decrease in random projection trees has many consequences beyond the
quality of vector quantization. In particular, the statistical theory of tree-based statistical estimators
? whether used for classification or regression ? is centered around the rate of diameter decrease;
for details, see for instance Chapter 20 of [7]. Thus RP trees generically exhibit faster convergence
in all these contexts.
Another case of interest is nearest neighbor classification. If the diameter of cells is small, then it
is reasonable to classify a query point according to the majority label in its cell. It is not necessary
to find the nearest neighbor; after all, the only thing special about this point is that it happens to be
close to the query. The classical work of Cover and Hart [5] on the Bayes risk of nearest neighbor
methods applies equally to the majority vote in a small enough cell.
Figure 2: Distributions with low intrinsic dimension. The purple areas in these figures indicate regions in which the density of the data is significant, while the complementary white areas indicate areas where data density is very low. The left figure depicts data concentrated near a one-dimensional
manifold. The ellipses represent mean+PCA approximations to subsets of the data. Our goal is to
partition data into small diameter regions so that the data in each region is well-approximated by its
mean+PCA. The right figure depicts a situation where the dimension of the data is variable. Some of
the data lies close to a one-dimensional manifold, some of the data spans two dimensions, and some
of the data (represented by the red dot) is concentrated around a single point (a zero-dimensional
manifold).
Finally, we return to our original motivation: modeling data which lie close to a low-dimensional
manifold. In the literature, the most common way to capture this manifold structure is to create a
graph in which nodes represent data points and edges connect pairs of nearby points. While this is
a natural representation, it does not scale well to very large datasets because the computation time
of closest neighbors grows like the square of the size of the data set. Our approach is fundamentally
different. Instead of a bottom-up strategy that starts with individual data points and links them
together to form a graph, we use a top-down strategy that starts with the whole data set and partitions
it, in a hierarchical manner, into regions of smaller and smaller diameter. Once these individual cells
are small enough, the data in them can be well-approximated by an affine subspace, for instance that
given by principal component analysis. In Figure 2 we show how data in two dimensions can be
approximated by such a set of local ellipses.
2 The RP tree algorithm
2.1 Spatial data structures
In what follows, we assume the data lie in R^D, and we consider spatial data structures built by
recursive binary splits. They differ only in the nature of the split, which we define in a subroutine
called CHOOSERULE. The core tree-building algorithm is called MAKETREE, and takes as input a
data set S ⊂ R^D.

procedure MAKETREE(S)
    if |S| < MinSize
        then return (Leaf)
        else
            Rule ← CHOOSERULE(S)
            LeftTree ← MAKETREE({x ∈ S : Rule(x) = true})
            RightTree ← MAKETREE({x ∈ S : Rule(x) = false})
            return ([Rule, LeftTree, RightTree])
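A minimal Python rendering of MAKETREE is sketched below. The function name, the MinSize default, and the node representation are our own choices, and choose_rule stands for any split chooser with the interface above (for instance the RP version sketched in Section 2.2):

import numpy as np

def make_tree(S, min_size=20):
    """Recursive tree builder following MAKETREE; an illustrative sketch."""
    if len(S) < min_size:
        return ("Leaf", S.mean(axis=0))           # store the cell mean for VQ
    rule = choose_rule(S)                         # any CHOOSERULE implementation
    mask = np.array([rule(x) for x in S])
    return ("Node", rule,
            make_tree(S[mask], min_size),         # left subtree: Rule(x) = true
            make_tree(S[~mask], min_size))        # right subtree: Rule(x) = false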
A natural way to try building a manifold-adaptive spatial data structure is to split each cell along its
principal component direction (for instance, see [9]).
procedure CHOOSERULE(S)
    comment: PCA tree version
    let u be the principal eigenvector of the covariance of S
    Rule(x) := x · u ≤ median({z · u : z ∈ S})
    return (Rule)
This method will do a good job of adapting to low intrinsic dimension (details omitted). However,
it has two significant drawbacks in practice. First, estimating the principal eigenvector requires a
significant amount of data; recall that only about 1/2k fraction of the data winds up at a cell at level
k of the tree. Second, when the extrinsic dimension is high, the amount of memory and computation
required to compute the dot product between the data vectors and the eigenvectors becomes the
dominant part of the computation. As each node in the tree is likely to have a different eigenvector
this severely limits the feasible tree depth. We now show that using random projections overcomes
these problems while maintaining the adaptivity to low intrinsic dimension.
2.2 Random projection trees
We shall see that the key benefits of PCA-based splits can be realized much more simply, by picking
random directions. To see this pictorially, consider data that is concentrated on a subspace, as in the
following figure. PCA will of course correctly identify this subspace, and a split along the principal
eigenvector u will do a good job of reducing the diameter of the data. But a random direction v will
also have some component in the direction of u, and splitting along the median of v will not be all
that different from splitting along u.
Figure 3: Intuition: a random direction is almost as good as the principal eigenvector.
Now only medians need to be estimated, not principal eigenvectors; this significantly reduces the
data requirements. Also, we can use the same random projection in different places in the tree; all
we need is to choose a large enough set of projections that, with high probability, there is a good
projection direction for each node in the tree. In our experience setting the number of projections
equal to the depth of the tree is sufficient. Thus, for a tree of depth k, we use only k projection
vectors v, as opposed to 2^k with a PCA tree. When preparing data to train a tree we can compute
the k projection values before building the tree. This also reduces the memory requirements for
the training set, as we can replace each high dimensional data point with its k projection values
(typically we use 10 ≤ k ≤ 20).
We now define RP trees formally. For a cell containing points S, let Δ(S) be the diameter of S (the
distance between the two furthest points in the set), and Δ_A(S) the average diameter, that is, the
average distance between points of S:

$$\Delta_A^2(S) = \frac{1}{|S|^2} \sum_{x,y \in S} \|x - y\|^2 = \frac{2}{|S|} \sum_{x \in S} \|x - \mathrm{mean}(S)\|^2.$$
We use two different types of splits: if Δ²(S) is less than c·Δ_A²(S) (for some constant c) then we
use the hyperplane split discussed above. Otherwise, we split S into two groups based on distance
from the mean.
procedure CHOOSERULE(S)
    comment: RP tree version
    if Δ²(S) ≤ c · Δ_A²(S)
        then
            choose a random unit direction v
            sort the projection values: a(x) = v · x for all x ∈ S, generating the list a₁ ≤ a₂ ≤ ⋯ ≤ a_n
            for i = 1, …, n − 1 compute
                μ₁ = (1/i) Σ_{j=1}^{i} a_j,   μ₂ = (1/(n−i)) Σ_{j=i+1}^{n} a_j
                c_i = Σ_{j=1}^{i} (a_j − μ₁)² + Σ_{j=i+1}^{n} (a_j − μ₂)²
            find the i that minimizes c_i and set θ = (a_i + a_{i+1})/2
            Rule(x) := v · x ≤ θ
        else
            Rule(x) := ‖x − mean(S)‖ ≤ median{‖z − mean(S)‖ : z ∈ S}
    return (Rule)
In the first type of split, the data in a cell are projected onto a random direction and an appropriate
split point is chosen. This point is not necessarily the median (as in k-d trees), but rather the position
that maximally decreases average squared interpoint distance. In panel 4 of Figure 4, for instance, splitting
the bottom cell at the median would lead to a messy partition, whereas the RP tree split produces
two clean, connected clusters.
Figure 4: An illustration of the RP-Tree algorithm. 1: The full data set and the PCA ellipse that
approximates it. 2: The first level split. 3: The two PCA ellipses corresponding to the two cells after
the first split. 4: The two splits in the second level. 5: The four PCA ellipses for the cells at the third
level. 6: The four splits at the third level. As the cells get smaller, their individual PCAs reveal 1D
manifold structure. Note: the ellipses are for comparison only; the RP tree algorithm does not look
at them.
The second type of split, based on distance from the mean of the cell, is needed to deal with cases in
which the cell contains data at very different scales. In Figure 2, for instance, suppose that the vast
majority of data is concentrated at the singleton ?0-dimensional? point. If only splits by projection
were allowed, then a large number of splits would be devoted to uselessly subdividing this point
mass. The second type of split separates it from the rest of the data in one go. For a more concrete
example, suppose that the data are image patches. A large fraction of them might be ?empty?
background patches, in which case they'd fall near the center of the cell in a very tight cluster. The
remaining image patches will be spread out over a much larger space. The effect of the split is then
to separate out these two clusters.
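The following Python sketch implements CHOOSERULE with both split types, assuming the points are the rows of a NumPy array. The constant c and the cheap upper bound used in place of the exact max diameter Δ²(S) are illustrative choices, not the authors' implementation; prefix sums make the scan over candidate split points linear after sorting:

import numpy as np

def choose_rule(S, c=10.0, rng=None):
    """RP-tree split chooser following CHOOSERULE; an illustrative sketch."""
    rng = rng or np.random.default_rng()
    mean = S.mean(axis=0)
    dists = np.linalg.norm(S - mean, axis=1)
    avg_diam_sq = 2.0 * np.mean(dists ** 2)       # Delta_A^2(S)
    max_diam_sq = (2.0 * dists.max()) ** 2        # upper bound on Delta^2(S)

    if max_diam_sq <= c * avg_diam_sq:            # split by projection
        v = rng.standard_normal(S.shape[1])
        v /= np.linalg.norm(v)                    # random unit direction
        a = np.sort(S @ v)
        n = len(a)
        pre, pre2 = np.cumsum(a), np.cumsum(a * a)
        i = np.arange(1, n)                       # size of the left group
        left_ss = pre2[:-1] - pre[:-1] ** 2 / i   # sum of squared deviations, left
        right_sum = pre[-1] - pre[:-1]
        right_ss = (pre2[-1] - pre2[:-1]) - right_sum ** 2 / (n - i)
        best = int(np.argmin(left_ss + right_ss)) # the i minimizing c_i
        theta = (a[best] + a[best + 1]) / 2.0
        return lambda x: x @ v <= theta
    else:                                         # split by distance from mean
        med = np.median(dists)
        return lambda x: np.linalg.norm(x - mean) <= med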
2.3 Theoretical foundations
In analyzing RP trees, we consider a statistical notion of dimension: we say set S has local covariance dimension (d, ε) if a (1 − ε) fraction of the variance is concentrated in a d-dimensional subspace.
To make this precise, start by letting σ₁² ≥ σ₂² ≥ ⋯ ≥ σ_D² denote the eigenvalues of the covariance
matrix; these are the variances in each of the eigenvector directions.

Definition 1 S ⊂ R^D has local covariance dimension (d, ε) if the largest d eigenvalues of its
covariance matrix satisfy σ₁² + ⋯ + σ_d² ≥ (1 − ε) · (σ₁² + ⋯ + σ_D²). (Note that σ₁² + ⋯ + σ_D² =
(1/2)Δ_A²(S).)
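Local covariance dimension is easy to check empirically. A small helper (with our own naming) computes the fraction of variance captured by the top d eigendirections, so that S has local covariance dimension (d, ε) exactly when the returned fraction is at least 1 − ε:

import numpy as np

def top_d_variance_fraction(S, d):
    """Fraction of the total variance in the top d eigendirections of cov(S)."""
    evals = np.linalg.eigvalsh(np.cov(S, rowvar=False))[::-1]  # descending order
    return evals[:d].sum() / evals.sum()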
Now, suppose an RP tree is built from a data set X ⊂ R^D, not necessarily finite. Recall that there
are two different types of splits; let's call them splits by distance and splits by projection.

Theorem 2 There are constants 0 < c₁, c₂, c₃ < 1 with the following property. Suppose an RP
tree is built using data set X ⊂ R^D. Consider any cell C for which X ∩ C has local covariance
dimension (d, ε), where ε < c₁. Pick a point x ∈ S ∩ C at random, and let C′ be the cell that
contains it at the next level down.

• If C is split by distance then E[Δ(S ∩ C′)] ≤ c₂ Δ(S ∩ C).

• If C is split by projection, then E[Δ_A²(S ∩ C′)] ≤ (1 − c₃/d) Δ_A²(S ∩ C).

In both cases, the expectation is over the randomization in splitting C and the choice of
x ∈ S ∩ C.
3 Experimental Results
3.1 A streaming version of the algorithm
The version of the RP algorithm we use in practice differs from the one above in three ways. First
of all, both splits operate on the projected data; for the second type of split (split by distance), data
that fall in an interval around the median are separated from data outside that interval. Second,
the tree is built in a streaming manner: that is, the data arrive one at a time, and are processed (to
update the tree) and immediately discarded. This is managed by maintaining simple statistics at
each internal node of the tree and updating them appropriately as the data streams by (more details
in the supplementary matter). The resulting efficiency is crucial to the large-scale applications we
have in mind. Finally, instead of choosing a new random projection in each cell, a dictionary of a
few random projections is chosen at the outset. In each cell, every one of these projections is tried
out and the best one (that gives the largest decrease in ?2A (S)) is retained. This last step has the
effect of boosting the probability of a good split.
3.2 Synthetic datasets
We start by considering two synthetic datasets that illustrate the shortcomings of k-d trees. We
will see that RP trees adapt well to such cases. For the first dataset, points x₁, …, x_n ∈ R^D are
generated by the following process: for each point x_i,
[Figure 5 plots Avg VQ Error against tree depth (levels 1–5) for four methods — k-d tree (random coordinate), k-d tree (max-variance coordinate), RP tree, and PCA tree — on the two synthetic datasets.]
Figure 5: Performance of RP trees compared with k-d trees on the first synthetic dataset (left) and the second
synthetic dataset (right)
• choose p_i uniformly at random from [0, 1], and
• select each coordinate x_ij independently from N(p_i, 1).
For the second dataset, we choose n points from two D-dimensional Gaussians (with equal probability) with means at (−1, −1, …, −1) and (1, 1, …, 1), and identity covariances.
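Both datasets are straightforward to generate; a sketch with illustrative defaults:

import numpy as np

def first_dataset(n=10_000, D=1_000, seed=0):
    rng = np.random.default_rng(seed)
    p = rng.uniform(0.0, 1.0, size=n)             # one latent p_i per point
    return rng.normal(loc=p[:, None], scale=1.0, size=(n, D))

def second_dataset(n=10_000, D=1_000, seed=0):
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1.0, 1.0], size=n)       # pick one of the two Gaussians
    return rng.normal(loc=signs[:, None] * np.ones(D), scale=1.0)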
We compare the performance of different trees according to the average VQ error they incur at
various levels. We consider four types of trees: (1) k-d trees in which the coordinate for a split is
chosen at random; (2) k-d trees in which at each split, the best coordinate is chosen (the one that
most improves VQ error); (3) RP trees; and (4) for reference, PCA trees.
Figure 5 shows the results for the two datasets (D = 1,000 and n = 10,000) averaged over 15 runs.
In both cases, RP trees outperform both k-d tree variants and are close to the performance of PCA
trees without having to explicitly compute any principal components.
3.3 MNIST dataset
We next demonstrate RP trees on the all-familiar MNIST dataset of handwritten digits. This dataset
consists of 28 ? 28 grayscale images of the digits zero through nine, and is believed to have low
intrinsic dimension (for instance, see [10]). We restrict our attention to digit 1 for this discussion.
Figure 6 (top) shows the first few levels of the RP tree for the images of digit 1. Each node is
represented by the mean of the datapoints falling into that cell. Hence, the topmost node shows the
mean of the entire dataset; its left and the right children show the means of the points belonging to
their respective partitions, and so on. The bar underneath each node shows the fraction of points
going to the left and to the right, to give a sense of how balanced each split is. Alongside each mean,
we also show a histogram of the 20 largest eigenvalues of the covariance matrix, which reveal how
closely the data in the cell is concentrated near a low-dimensional subspace. The last bar in the
histogram is the variance unaccounted for.
Notice that most of the variance lies in a small number of directions, as might be expected. And
this rapidly becomes more pronounced as we go further down in the tree. Hence, very quickly, the
cell means become good representatives of the dataset: an experimental corroboration that RP trees
adapt to the low intrinsic dimension of the data.
This is also brought out in Figure 6 (bottom), where the images are shown projected onto the plane
defined by their top two principal components. (The outer ring of images correspond to the linear
combinations of the two eigenvectors at those locations in the plane.) The left image shows how the
data was split at the topmost level (dark versus light). Observe that this random cut is actually quite
close to what the PCA split would have been, corroborating our earlier intuition (recall Figure 3).
The right image shows the same thing, but for the first two levels of the tree: data is shown in four
colors corresponding to the four different cells.
Figure 6: Top: Three levels of the RP tree for MNIST digit 1. Bottom: Images projected onto the
first two principal components. Colors represent different cells in the RP tree, after just one split
(left) or after two levels of the tree (right).
References
[1] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, 2003.
[2] M. Belkin, P. Niyogi, and V. Sindhwani. On manifold regularization. Conference on AI and Statistics, 2005.
[3] J. Bentley. Multidimensional binary search trees used for associative searching. Communications of the ACM, 18(9):509–517, 1975.
[4] W. Boothby. An Introduction to Differentiable Manifolds and Riemannian Geometry. Academic Press, 2003.
[5] T. M. Cover and P. E. Hart. Nearest neighbor pattern classifications. IEEE Transactions on Information Theory, 13(1):21–27, 1967.
[6] M. Datar, N. Immorlica, P. Indyk, and V. Mirrokni. Locality sensitive hashing scheme based on p-stable distributions. Symposium on Computational Geometry, 2004.
[7] L. Devroye, L. Gyorfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer, 1996.
[8] T. Liu, A. Moore, A. Gray, and K. Yang. An investigation of practical approximate nearest neighbor algorithms. Advances in Neural Information Processing Systems, 2004.
[9] J. McNames. A fast nearest neighbor algorithm based on a principal axis search tree. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(9):964–976, 2001.
[10] M. Raginsky and S. Lazebnik. Estimation of intrinsic dimensionality using high-rate vector quantization. Advances in Neural Information Processing Systems, 18, 2006.
[11] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323–2326, 2000.
[12] J. Tenenbaum, V. de Silva, and J. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
2,420 | 3,196 | A Probabilistic Approach to Language Change
Alexandre Bouchard-Côté∗
Percy Liang∗
Thomas L. Griffiths†
Dan Klein∗
∗ Computer Science Division, † Department of Psychology
University of California at Berkeley
Berkeley, CA 94720
Abstract
We present a probabilistic approach to language change in which word forms
are represented by phoneme sequences that undergo stochastic edits along the
branches of a phylogenetic tree. This framework combines the advantages of
the classical comparative method with the robustness of corpus-based probabilistic models. We use this framework to explore the consequences of two different schemes for defining probabilistic models of phonological change, evaluating
these schemes by reconstructing ancient word forms of Romance languages. The
result is an efficient inference procedure for automatically inferring ancient word
forms from modern languages, which can be generalized to support inferences
about linguistic phylogenies.
1 Introduction
Languages evolve over time, with words changing in form, meaning, and the ways in which they can
be combined into sentences. Several centuries of linguistic analysis have shed light on some of the
key properties of this evolutionary process, but many open questions remain. A classical example is
the hypothetical Proto-Indo-European language, the reconstructed common ancestor of the modern
Indo-European languages. While the existence and general characteristics of this proto-language are
widely accepted, there is still debate regarding its precise phonology, the original homeland of its
speakers, and the date of various events in its evolution. The study of how languages change over
time is known as diachronic (or historical) linguistics (e.g., [4]).
Most of what we know about language change comes from the comparative method, in which words
from different languages are compared in order to identify their relationships. The goal is to identify
regular sound correspondences between languages and use these correspondences to infer the forms
of proto-languages and the phylogenetic relationships between languages. The motivation for basing
the analysis on sounds is that phonological changes are generally more systematic than syntactic or
morphological changes. Comparisons of words from different languages are traditionally carried
out by hand, introducing an element of subjectivity into diachronic linguistics. Early attempts to
quantify the similarity between languages (e.g., [15]) made drastic simplifying assumptions that
drew strong criticism from diachronic linguists. In particular, many of these approaches simply
represent the appearance of a word in two languages with a single bit, rather than allowing for
gradations based on correspondences between sequences of phonemes.
We take a quantitative approach to diachronic linguistics that alleviates this problem by operating
at the phoneme level. Our approach combines the advantages of the classical, phoneme-based,
comparative method with the robustness of corpus-based probabilistic models. We focus on the
case where the words are etymological cognates across languages, e.g. French faire and Spanish
hacer from Latin facere (to do). Following [3], we use this information to estimate a contextualized
model of phonological change expressed as a probability distribution over rules applied to individual
phonemes. The model is fully generative, and thus can be used to solve a variety of problems. For
example, we can reconstruct ancestral word forms or inspect the rules learned along each branch of
a phylogeny to identify sound laws. Alternatively, we can observe a word in one or more modern
languages, say French and Spanish, and query the corresponding word form in another language,
say Italian. Finally, models of this kind can potentially be used as a building block in a system for
inferring the topology of phylogenetic trees [3].
In this paper, we use this general approach to evaluate the performance of two different schemes for
defining probability distributions over rules. The first scheme, used in [3], treats these distributions
as simple multinomials and uses a Dirichlet prior on these multinomials. This approach makes it
difficult to capture rules that apply at different levels of granularity. Inspired by the prevalence
of multi-scale rules in diachronic phonology and modern phonological theory, we develop a new
scheme in which rules possess a set of features, and a distribution over rules is defined using a loglinear model. We evaluate both schemes in reconstructing ancient word forms, showing that the new
linguistically-motivated change can improve performance significantly.
2 Background and previous work
Most previous computational approaches to diachronic linguistics have focused on the reconstruction of phylogenetic trees from a Boolean matrix indicating the properties of words in different
languages [10, 6, 14, 13]. These approaches descend from glottochronology [15], which measures
the similarity between languages (and the time since they diverged) using the number of words in
those languages that belong to the same cognate set. This information is obtained from manually
curated cognate lists such as the data of [5]. The modern instantiations of this approach rely on sophisticated techniques for inferring phylogenies borrowed from evolutionary biology (e.g., [11, 7]).
However, they still generally use cognate sets as the basic data for evaluating the similarity between
languages (although some approaches incorporate additional manually constructed features [14]).
As an example of a cognate set encoding, consider the meaning ?eat?. There would be one column
for the cognate set which appears in French as manger and Italian as mangiare since both descend
from the Latin mandere (to chew). There would be another column for the cognate set which appears
in both Spanish and Portuguese as comer, descending from the Latin comedere (to consume). If
these were the only data, algorithms based on this data would tend to conclude that French and Italian
were closely related and that Spanish and Portuguese were equally related. However, the cognate
set representation has several disadvantages: it does not capture the fact that the cognate is closer
between Spanish and Portuguese than between French and Spanish, nor do the resulting models let
us conclude anything about the regular processes which caused these languages to diverge. Also,
curating cognate data can be expensive. In contrast, each word in our work is tracked using an
automatically obtained cognate list. While these cognates may be noisier, we compensate for this
by modeling phonological changes rather than Boolean mutations in cognate sets.
Another line of computational work has explored using phonological models as a way to capture
the differences between languages. [16] describes an information theoretic measure of the distance
between two dialects of Chinese. They use a probabilistic edit model, but do not consider the reconstruction of ancient word forms, nor do they present a learning algorithm for such models. There
have also been several approaches to the problem of cognate prediction in machine translation (essentially transliteration), e.g., [12]. Compared to our work, the phenomena of interest, and therefore
the models, are different. [12] presents a model for learning ?sound laws,? general phonological
changes governing two completely observed aligned cognate lists. This model can be viewed as a
special case of ours using a simple two-node topology.
3 A generative model of phonological change
In this section, we outline the framework for modeling phonological change that we will use throughout the paper. Assume we have a fixed set of word types (cognate sets) in our vocabulary V and a set
of languages L. Each word type i has a word form w_i^l in each language l ∈ L, which is represented
as a sequence of phonemes which might or might not be observed. The languages are arranged
according to some tree topology T (see Figure 2(a) for examples). It is possible to also induce the
topology or cognate set assignments, but in this paper we assume that the topology is fixed and
cognates have already been identified.
For each word i ∈ V:
    w_i^ROOT ~ LanguageModel
For each branch (k → l) ∈ T:
    θ_{k→l} ~ Rules(σ²)                       [choose edit parameters]
    For each word i ∈ V:
        w_i^l ~ Edit(w_i^k, θ_{k→l})          [sample word form]

(a) Generative description

[Figure 1(b), "Example of edits": a phoneme-level alignment of Latin /fokus/ with Italian /fwOko/, showing, under the headers "Edits applied" and "Rules used", which context-specific rules transform each phoneme. Figure 1(c), "Graphical model": for each word type i = 1 … |V|, word forms w_i^A, w_i^B, w_i^C, w_i^D are linked along the tree branches through edit variables e_i^{A→B}, e_i^{B→C}, e_i^{B→D}, each governed by branch parameters θ_{A→B}, θ_{B→C}, θ_{B→D}.]

Figure 1: (a) A description of the generative model. (b) An example of edits that were used to transform
the Latin word focus (/fokus/) into the Italian word fuoco (/fwOko/) (fire) along with the context-specific rules
that were applied. (c) The graphical model representation of our model: θ are the parameters specifying the
stochastic edits e, which govern how the words w evolve.
The probabilistic model specifies a distribution over the word forms {w_i^l} for each word type i ∈ V
and each language l ∈ L via a simple generative process (Figure 1(a)). The generative process
starts at the root language and generates all the word forms in each language in a top-down manner.
The w ~ LanguageModel distribution is a simple bigram phoneme model. A root word form w
consisting of n phonemes x_1 ⋯ x_n is generated with probability

$$p_{\mathrm{lm}}(x_1) \prod_{j=2}^{n} p_{\mathrm{lm}}(x_j \mid x_{j-1}),$$

where p_lm is the distribution of the language model. The stochastic edit model w′ ~ Edit(w, θ)
describes how a single old word form w = x_1 ⋯ x_n changes along one branch of the phylogeny
with parameters θ to produce a new word form w′. This process is parametrized by rule probabilities
θ_{k→l}, which are specific to branch (k → l).
The generative process used in the edit model is as follows: for each phoneme x_i in the old word
form, walking from left to right, choose a rule to apply. There are three types of rules: (1) deletion
of the phoneme, (2) substitution with some phoneme (possibly the same one), or (3) insertion of
another phoneme, either before or after the existing one. The probability of applying a rule depends
on the context (x_{i−1}, x_{i+1}). Context-dependent rules are often used to characterize phonological
changes in diachronic linguistics [4]. Figure 1(b) shows an example of the rules being applied. The
context-dependent form of these rules allows us to represent phenomena such as the likely deletion
of s in word-final positions.
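A minimal sketch of one draw from the edit model, under the assumption that the rule distributions are given as a mapping from contexts to (right-hand side, probability) pairs; all names are illustrative, not the authors' implementation:

import numpy as np

def sample_edit(word, theta, rng=None):
    """Sample w' ~ Edit(w, theta): apply one context-dependent rule per phoneme.

    theta[(left, x, right)] is a list of (rhs, prob) pairs, where rhs is a
    tuple of 0 phonemes (deletion), 1 (substitution), or 2 (insertion).
    """
    rng = rng or np.random.default_rng()
    padded = ("#",) + tuple(word) + ("#",)        # word-boundary symbols
    out = []
    for i in range(1, len(padded) - 1):           # walk the old form left to right
        context = (padded[i - 1], padded[i], padded[i + 1])
        rhss, probs = zip(*theta[context])
        out.extend(rhss[rng.choice(len(rhss), p=np.array(probs))])
    return tuple(out)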
4 Defining distributions over rules
In the model defined in the previous section, each branch (k → l) ∈ T has a collection of context-dependent rule probabilities θ_{k→l}. Specifically, θ_{k→l} specifies a collection of multinomial distributions, one for each C = (c_l, x, c_r), where c_l is the left phoneme, x is the old phoneme, and c_r is the right
phoneme. Each multinomial distribution is over possible right-hand sides α of the rule, which could
consist of 0, 1, or 2 phonemes. We write θ_{k→l}(C, α) for the probability of rule x → α / c_l _ c_r.
Previous work using this probabilistic framework simply placed independent Dirichlet priors on
each of the multinomial distributions [3]. While this choice results in a simple estimation procedure,
it has some severe limitations. Sound changes happen at many granularities. For example, from
Latin to Vulgar Latin, u → o occurs in many contexts while s → ∅ occurs only in word-final contexts. Using independent Dirichlets forces us to commit to a single context granularity for C. Since
the different multinomial distributions are not tied together, generalization becomes very difficult,
especially as data is limited. It is also difficult to interpret the learned rules, since the evidence
for a coarse phenomenon such as u → o would be unnecessarily fragmented across many different
context-dependent rules. We would like to ideally capture a phenomenon using a single rule or feature. We could relate the rule probabilities via a simple hierarchical Bayesian model, but we would
still have to define a single hierarchy of contexts. This restriction might be inappropriate given that
sound changes often depend on different contexts that are not necessarily nested.
For these reasons, we propose using a feature-based distribution over the rule probabilities. Let
F(C, α) be a feature vector that depends on the context-dependent rule (C, α), and λ_{k→l} be the
log-linear weights for branch (k → l). We use a Normal prior on the log-linear weights, λ_{k→l} ~
N(0, σ²I). The rule probabilities are then deterministically related to the weights via the softmax
function:

$$\theta_{k\to l}(C,\alpha;\lambda_{k\to l}) = \frac{e^{\lambda_{k\to l}^T F(C,\alpha)}}{\sum_{\alpha'} e^{\lambda_{k\to l}^T F(C,\alpha')}}. \quad (1)$$
For each rule x → α / c_l _ c_r, we defined features based on whether x = α (i.e. self-substitution),
and whether |α| = n for each n = 0, 1, 2 (corresponding to deletion, substitution, and insertion).
We also defined sets of features using three partitions of phonemes c into "natural classes". These
correspond to looking at the place of articulation (denoted A₂(c)), testing whether c is a vowel,
consonant, or boundary symbol (A₁(c)), and the trivial wildcard partition (A₀(c)), which allows
rules to be insensitive to c. Using these partitions, the final set of features corresponded to whether
A_{k_l}(c_l) = a_l and A_{k_r}(c_r) = a_r for each type of partitioning k_l, k_r ∈ {0, 1, 2} and natural classes
a_l, a_r.
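Given a weight vector and a feature function, Eq. (1) is a standard softmax; a small sketch with our own naming:

import numpy as np

def rule_probs(lmbda, features, C, alphas):
    """theta(C, alpha; lambda) from Eq. (1) for every alpha in alphas."""
    scores = np.array([lmbda @ features(C, a) for a in alphas])
    scores -= scores.max()                        # stabilize the exponentials
    w = np.exp(scores)
    return w / w.sum()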
The move towards using a feature-based scheme for defining rule probabilities is not just motivated
by the greater expressive capacity of this scheme. It also provides a connection with contemporary
phonological theory. Recent work in computational linguistics on probabilistic forms of optimality
theory has begun to use a similar approach, characterizing the distribution over word forms within a
language using a log-linear model applied to features of the words [17, 9]. Using similar features to
define a distribution over phonological changes thus provides a connection between synchronic and
diachronic linguistics in addition to a linguistically-motivated method for improving reconstruction.
5 Learning and inference
We use a Monte Carlo EM algorithm to fit the parameters of both models. The algorithm iterates
between a stochastic E-step, which computes reconstructions based on the current edit parameters,
and an M-step, which updates the edit parameters based on the reconstructions.
5.1 Monte Carlo E-step: sampling the edits
The E-step computes the expected sufficient statistics required for the M-step, which in our case is
the expected number of times each edit (such as o → O) was used in each context. Note that the
sufficient statistics do not depend on the prior over rule probabilities; in particular, both the model
based on independent Dirichlet priors and the one based on a log-linear prior require the same E-step
computation.
An exact E-step would require summing over all possible edits involving all languages in the phylogeny (all unobserved {e}, {w} variables in Figure 1(c)), which does not permit a tractable dynamic
program. Therefore, we resort to a Monte Carlo E-step, where many samples of the edit variables
are collected, and counts are computed based on these samples. Samples are drawn using Gibbs
sampling [8]: for each word form of a particular language w_i^l, we fix all other variables in the
model and sample w_i^l along with its corresponding edits.
Consider the simple four-language topology in Figure 1(c). Suppose that the words in languages A,
C and D are fixed, and we wish to sample the word at language B along with the three corresponding
sets of edits (remember that the edits fully determine the words). While there are an exponential
number of possible words/edits, we can exploit the Markov structure in the edit model to consider
all such words/edits using dynamic programming, in a way broadly similar to the forward-backward
algorithm for HMMs. See [3] for details of the dynamic program.
[Figure 2(a), "Topologies": Topology 1 is a tree with root la and leaves es and it; Topology 2 adds hidden nodes vl and ib below la, with leaves es, pt, and it.]

(b) Experimental conditions:

Experiment                  | Topology | Model      | Heldout
Latin reconstruction (6.1)  | 1        | Dirichlet  | la:293
Latin reconstruction (6.1)  | 1        | Log-linear | la:293
Sound changes (6.2)         | 2        | Log-linear | None
Figure 2: Conditions under which each of the experiments presented in this section were performed. The
topology indices correspond to those displayed at the left. The heldout column indicates how many words, if
any, were held out for edit distance evaluation, and from which language. All the experiments were run on a
data set of 582 cognates from [3].
5.2 M-step: updating the parameters
In the M-step, we estimate the distribution over rules for each branch (k → l). In the Dirichlet
model, this can be done in closed form [3]. In the log-linear model, we need to optimize the feature
weights λ_{k→l}. Let us fix a single branch and drop the subscript. Let N(C, α) be the expected
number of times the rule (C, α) was used in the E-step. Given these sufficient statistics, the estimate
of λ is given by optimizing the expected complete log-likelihood plus the regularization penalty
from the prior on λ,

$$O(\lambda) = \sum_{C,\alpha} N(C,\alpha)\Big[\lambda^T F(C,\alpha) - \log \sum_{\alpha'} e^{\lambda^T F(C,\alpha')}\Big] - \frac{\|\lambda\|^2}{2\sigma^2}. \quad (2)$$

We use L-BFGS to optimize this convex objective, which only requires the partial derivatives:

$$\frac{\partial O(\lambda)}{\partial \lambda_j} = \sum_{C,\alpha} N(C,\alpha)\Big[F_j(C,\alpha) - \sum_{\alpha'} \theta(C,\alpha';\lambda) F_j(C,\alpha')\Big] - \frac{\lambda_j}{\sigma^2} \quad (3)$$

$$= \bar F_j - \sum_{C,\alpha'} N(C,\cdot)\,\theta(C,\alpha';\lambda) F_j(C,\alpha') - \frac{\lambda_j}{\sigma^2}, \quad (4)$$

where $\bar F_j \stackrel{\text{def}}{=} \sum_{C,\alpha} N(C,\alpha) F_j(C,\alpha)$ is the empirical feature vector and $N(C,\cdot) \stackrel{\text{def}}{=} \sum_{\alpha} N(C,\alpha)$
is the number of times context C was used. $\bar F_j$ and $N(C,\cdot)$ do not depend on λ and thus can be
precomputed at the beginning of the M-step, thereby speeding up each L-BFGS iteration.
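A sketch of this M-step using SciPy's L-BFGS, with illustrative names: N holds the expected rule counts from the E-step and features(C, α) returns F(C, α) as a length-dim array. The gradient follows Eq. (3)–(4):

import numpy as np
from scipy.optimize import minimize

def m_step(N, features, contexts, alphas, dim, sigma2=1.0):
    """Maximize Eq. (2) over the log-linear weights lambda; a sketch."""
    def neg_obj_and_grad(lmbda):
        obj = -(lmbda @ lmbda) / (2.0 * sigma2)   # Gaussian log-prior term
        grad = -lmbda / sigma2
        for C in contexts:
            F = np.array([features(C, a) for a in alphas])  # |alphas| x dim
            scores = F @ lmbda
            logZ = np.logaddexp.reduce(scores)
            p = np.exp(scores - logZ)             # theta(C, .; lambda)
            nC = np.array([N.get((C, a), 0.0) for a in alphas])
            obj += nC @ (scores - logZ)           # Eq. (2)
            grad += F.T @ nC - nC.sum() * (F.T @ p)  # Eq. (3)-(4)
        return -obj, -grad                        # minimize the negative
    res = minimize(neg_obj_and_grad, np.zeros(dim), jac=True, method="L-BFGS-B")
    return res.x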
6 Experiments
In this section, we summarize the results of the experiments testing our different probabilistic models
of phonological change. The experimental conditions are summarized in Table 2. Training and test
data sets were taken from [3].
6.1 Reconstruction of ancient word forms
We ran the two models using Topology 1 in Figure 2 to assess the relative performance of Dirichlet-parametrized versus log-linear-parametrized models. Half of the Latin words at the root of the tree
were held out, and the (uniform cost) Levenshtein edit distance from the predicted reconstruction to
the truth was computed. While the uniform-cost edit distance misses important aspects of phonology (all phoneme substitutions are not equal, for instance), it is parameter-free and still seems to
correlate to a large extent with linguistic quality of reconstruction. It is also superior to held-out
log-likelihood, which fails to penalize errors in the modeling assumptions, and to measuring the
percentage of perfect reconstructions, which ignores the degree of correctness of each reconstructed
word.
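For reference, the uniform-cost Levenshtein distance used here is the classic dynamic program:

def levenshtein(a, b):
    """Uniform-cost edit distance between two phoneme sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, start=1):
        curr = [i]
        for j, y in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,               # delete x
                            curr[j - 1] + 1,           # insert y
                            prev[j - 1] + (x != y)))   # substitute x -> y
        prev = curr
    return prev[-1]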
Model              | Baseline | Model | Improvement
Dirichlet          | 3.59     | 3.33  | 7%
Log-linear (0)     | 3.59     | 3.21  | 11%
Log-linear (0,1)   | 3.59     | 3.14  | 12%
Log-linear (0,1,2) | 3.59     | 3.10  | 14%
Table 1: Results of the edit distance experiment. The language column corresponds to the language held out for
evaluation. We show the mean edit distance across the evaluation examples. Improvement rate is computed by
comparing the score of the algorithm against the baseline described in Section 6.1. The numbers in parentheses
for the log-linear model indicate which levels of granularity were used to construct the features (see Section 4).
[Figure 3 tree: Latin /dEntis/ at the root, with edits i → E and E → jE yielding Spanish /djEntes/, and s → ∅ yielding Italian /dEnti/.]
Figure 3: An example of the proper Latin reconstruction given the Spanish and Italian word forms. Our model
produces /dEntes/, which is nearly correct, capturing two out of three of the phenomena.
We ran EM for 10 iterations for each model, and evaluated performance via a Viterbi derivation produced using these parameters. Our baseline for comparison was picking randomly, for each heldout
node in the tree, an observed neighboring word (i.e., copy one of the modern forms). Both models outperformed this baseline (see Table 1), and the log-linear model outperformed the Dirichlet
model, suggesting that the featurized system better captures the phonological changes. Moreover,
adding more features further improved the performance, indicating that being able to express rules
at multiple levels of granularity allows the model to capture the underlying phonological changes
more accurately.
To give a qualitative feel for the operation of the system (good and bad), consider the example
in Figure 3, taken from the Dirichlet-parametrized experiment. The Latin dentis /dEntis/ (teeth) is
nearly correctly reconstructed as /dEntes/, reconciling the appearance of the /j/ in the Spanish and
the disappearance of the final /s/ in the Italian. Note that the /is/ vs. /es/ ending is difficult to predict
in this context (indeed, it was one of the early distinctions to be eroded in Vulgar Latin).
6.2 Inference of phonological changes
Another use of this model is to automatically recover the phonological drift processes between
known or partially-known languages. To facilitate evaluation, we continued in the well-studied Romance evolutionary tree. Again, the root is Latin, but we now add an additional modern language,
Portuguese, and two additional hidden nodes. One of the nodes characterizes the least common ancestor of modern Spanish and Portuguese; the other, the least common ancestor of all three modern
languages. In Figure 2, Topology 2, these two nodes are labeled vl (Vulgar Latin) and ib (ProtoIbero Romance), respectively. Since we are omitting many other branches, these names should not
be understood as referring to actual historical proto-languages, but, at best, to collapsed points representing several centuries of evolution. Nonetheless, the major reconstructed rules still correspond
to well-known phenomena and the learned model generally places them on reasonable branches.
Figure 4 shows the top four general rules for each of the evolutionary branches recovered by the
log-linear model. The rules are ranked by the number of times they were used in the derivations
during the last iteration of EM. The la, es, pt, and it forms are fully observed while the vl and
ib forms are automatically reconstructed. Figure 4 also shows a specific example of the evolution
of the Latin VERBUM (word), along with the specific edits employed by the model.
For this particular example, both the Dirichlet and the log-linear models produced the same reconstruction in the internal nodes. However, the log-linear parametrization makes inspection of sound
laws easier. Indeed, with the Dirichlet model, since the natural classes are of fixed granularity, some
[Figure 4 residue. The tree traces Latin /werbum/ to /verbo/ (vl; edits m → ∅, u → o, w → v); from vl it derives /vErbo/ (it; e → E) and /veRbo/ (ib; r → R); from ib it derives /beRbo/ (es; v → b) and /veRbu/ (pt; o → u). The boxes list the top four nontrivial rules per branch, written as context-dependent rewrites, among them r → R / * _ *, e → ∅ / ALV _ #, t → d / * _ *, ∅ → s / * _ *, u → o / * _ *, o → os / C _ #, v → b / * _ *, t → te / * _ *, i → ∅ / * _ V, ∅ → n / * _ *, s → ∅ / * _ #, m → ∅ / * _ #, e → E / * _ *, i → ∅ / C _ V, a → ja / * _ *, n → m / * _ *, a → 5 / * _ *, o → u / * _ *, and e → 1 / * _ *.]
Figure 4: The tree shows the system?s hypothesized transformation of a selected Latin word form, VERBUM
(word) into the modern Spanish, Italian, and Portuguese pronunciations. The Latin root and modern leaves were
observed while the hidden nodes as well as all the derivations were obtained using the parameters computed
by our model after 10 iterations of EM. Nontrivial rules (i.e. rules that are not identities) used at each stage are
shown along the corresponding edge. The boxes display the top four nontrivial rules corresponding to each of
these evolutionary branches, ordered by the number of times they were applied during the last E step. These
are grouped and labeled by their active feature of highest weight. ALV stands for alveolar consonant.
rules must be redundantly discovered, which tends to flood the top of the rule lists with duplicates.
In contrast, the log-linear model groups rules with features of the appropriate degree of generality.
While quantitative evaluation such as measuring edit distance is helpful for comparing results, it is
also illuminating to consider the plausibility of the learned parameters in a historical light, which we
do here briefly. In particular, we consider rules on the branch between la and vl, for which we have
historical evidence. For example, documents such as the Appendix Probi [2] provide indications of
orthographic confusions which resulted from the growing gap between Classical Latin and Vulgar
Latin phonology around the 3rd and 4th centuries AD. The Appendix lists common misspellings of
Latin words, from which phonological changes can be inferred.
On the la to vl branch, rules for word-final deletion of classical case markers dominate the list. It
is indeed likely that these were generally eliminated in Vulgar Latin. For the deletion of the /m/, the
Appendix Probi contains pairs such as PASSIM NON PASSI and OLIM NON OLI. For the deletion of
final /s/, this was observed in early inscriptions, e.g. CORNELIO for CORNELIOS [1]. The frequent
leveling of the distinction between /o/ and /u/ (which was ranked 5, but was not included for space
reasons) can be also be found in the Appendix Probi: COLUBER NON COLOBER. Note that in the
specific example shown, the model lowers the original /u/ and then re-raises it in the pt branch due
to a later process along that branch.
Similarly, major canonical rules were discovered in other branches as well, for example, /v/ to /b/
fortition in Spanish, palatalization along several branches, and so on. Of course, the recovered
words and rules are not perfect. For example, reconstructed Ibero /trinta/ to Spanish /treinta/ (thirty)
is generated in an odd fashion using rules /e/ to /i/ and /n/ to /in/. In the Dirichlet model, even when
otherwise reasonable systematic sound changes are captured, the crudeness of the fixed-granularity
contexts can prevent the true context from being captured, resulting in either rules applying with
low probability in overly coarse environments or rules being learned redundantly in overly fine
environments. The featurized model alleviates this problem.
7 Conclusion
Probabilistic models have the potential to replace traditional methods used for comparing languages
in diachronic linguistics with quantitative methods for reconstructing word forms and inferring
phylogenies. In this paper, we presented a novel probabilistic model of phonological change, in
which the rules governing changes in the sound of words are parametrized using the features of the
phonemes involved. This model goes beyond previous work in this area, providing more accurate
reconstructions of ancient word forms and connections to current work on phonology in synchronic
linguistics. Using a log-linear model to define the probability of a rule being applied results in a
straightforward inference procedure which can be used to both produce accurate reconstructions as
measured by edit distance and identify linguistically plausible rules that account for phonological
changes. We believe that this probabilistic approach has the potential to support quantitative analysis
of the history of languages in a way that can scale to large datasets while remaining sensitive to the
concerns that have traditionally motivated diachronic linguistics.
Acknowledgments We would like to thank Bonnie Chantarotwong for her help with the IPA converter and our reviewers for their comments. This work was supported by a FQRNT fellowship to
the first author, a NDSEG fellowship to the second author, NSF grant number BCS-0631518 to the
third author, and a Microsoft Research New Faculty Fellowship to the fourth author.
References
[1] W. Sidney Allen. Vox Latina: The Pronunciation of Classical Latin. Cambridge University Press, 1989.
[2] W.A. Baehrens. Sprachlicher Kommentar zur vulgärlateinischen Appendix Probi. Halle (Saale): M. Niemeyer, 1922.
[3] A. Bouchard-Côté, P. Liang, T. Griffiths, and D. Klein. A Probabilistic Approach to Diachronic Phonology. In Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP/CoNLL), 2007.
[4] L. Campbell. Historical Linguistics. The MIT Press, 1998.
[5] I. Dyen, J.B. Kruskal, and P. Black. FILE IE-DATA1. Available at http://www.ntu.edu.au/education/langs/ielex/IE-DATA1, 1997.
[6] S. N. Evans, D. Ringe, and T. Warnow. Inference of divergence times as a statistical inverse problem. In P. Forster and C. Renfrew, editors, Phylogenetic Methods and the Prehistory of Languages. McDonald Institute Monographs, 2004.
[7] J. Felsenstein. Inferring Phylogenies. Sinauer Associates, 2003.
[8] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6:721-741, 1984.
[9] S. Goldwater and M. Johnson. Learning OT constraint rankings using a maximum entropy model. Proceedings of the Workshop on Variation within Optimality Theory, 2003.
[10] R. D. Gray and Q. Atkinson. Language-tree divergence times support the Anatolian theory of Indo-European origins. Nature, 2003.
[11] J. P. Huelsenbeck, F. Ronquist, R. Nielsen, and J. P. Bollback. Bayesian inference of phylogeny and its impact on evolutionary biology. Science, 2001.
[12] G. Kondrak. Algorithms for Language Reconstruction. PhD thesis, University of Toronto, 2002.
[13] L. Nakhleh, D. Ringe, and T. Warnow. Perfect phylogenetic networks: A new methodology for reconstructing the evolutionary history of natural languages. Language, 81:382-420, 2005.
[14] D. Ringe, T. Warnow, and A. Taylor. Indo-European and computational cladistics. Transactions of the Philological Society, 100:59-129, 2002.
[15] M. Swadesh. Towards greater accuracy in lexicostatistic dating. Journal of American Linguistics, 21:121-137, 1955.
[16] A. Venkataraman, J. Newman, and J.D. Patrick. A complexity measure for diachronic Chinese phonology. In J. Coleman, editor, Computational Phonology. Association for Computational Linguistics, 1997.
[17] C. Wilson and B. Hayes. A maximum entropy model of phonotactics and phonotactic learning. Linguistic Inquiry, 2007.
2,421 | 3,197 | Online Linear Regression and Its Application to
Model-Based Reinforcement Learning
Alexander L. Strehl*
Yahoo! Research
New York, NY
[email protected]
Michael L. Littman
Department of Computer Science
Rutgers University
Piscataway, NJ USA
[email protected]
Abstract
We provide a provably efficient algorithm for learning Markov Decision Processes
(MDPs) with continuous state and action spaces in the online setting. Specifically,
we take a model-based approach and show that a special type of online linear
regression allows us to learn MDPs with (possibly kernelized) linearly parameterized dynamics. This result builds on Kearns and Singh's work that provides a
provably efficient algorithm for finite state MDPs. Our approach is not restricted
to the linear setting, and is applicable to other classes of continuous MDPs.
Introduction
Current reinforcement-learning (RL) techniques hold great promise for creating a general type of
artificial intelligence (AI), specifically autonomous (software) agents that learn difficult tasks with
limited feedback (Sutton & Barto, 1998). Applied RL has been very successful, producing worldclass computer backgammon players (Tesauro, 1994) and model helicopter flyers (Ng et al., 2003).
Many applications of RL, including the two above, utilize supervised-learning techniques for the
purpose of generalization. Such techniques enable an agent to act intelligently in new situations by
learning from past experience in different but similar situations.
Provably efficient RL for finite state and action spaces is accomplished by Kearns and Singh (2002)
and hugely contributes to our understanding of the relationship between exploration and sequential
decision making. The achievement of the current paper is to provide an efficient RL algorithm that
learns in Markov Decision Processes (MDPs) with continuous state and action spaces. We prove that
it learns linearly-parameterized MDPs, a model introduced by Abbeel and Ng (2005), with sample
(or experience) complexity that grows only polynomially with the number of state space dimensions.
Our new RL algorithm utilizes a special linear regresser, based on least-squares regression, whose
analysis may be of interest to the online learning and statistics communities. Although our primary
result is for linearly-parameterized MDPs, our technique is applicable to other classes of continuous
MDPs and our framework is developed specifically with such future applications in mind. The linear dynamics case should be viewed as only an interesting example of our approach, which makes
substantial progress in the goal of understanding the relationship between exploration and generalization in RL.
An outline of the paper follows. In Section 1, we discuss online linear regression and pose a new
online learning framework that requires an algorithm to not only provide predictions for new data
points but also provide formal guarantees about its predictions. We also develop a specific algorithm
and prove that it solves the problem. In Section 2, using the algorithm and result from the first
section, we develop a provably efficient RL algorithm. Finally, we conclude with future work.
* Some of the work presented here was conducted while the author was at Rutgers University.
1 Online Linear Regression
Linear Regression (LR) is a well-known and tremendously powerful technique for prediction of
the value of a variable (called the response or output) given the value of another variable (called
the explanatory or input). Suppose we are given some data consisting of input-output pairs:
(x_1, y_1), (x_2, y_2), . . . , (x_m, y_m), where x_i ∈ R^n and y_i ∈ R for i = 1, . . . , m. Further, suppose that the data satisfy a linear relationship, that is, y_i ≈ θ^T x_i for all i ∈ {1, . . . , m}, where θ ∈ R^n is an n-dimensional parameter vector. When a new input x arrives, we would like to make a prediction of the corresponding output by estimating θ from our data. A standard approach is to approximate θ with the least-squares estimator θ̂ defined by θ̂ = (X^T X)^{-1} X^T y, where X ∈ R^{m×n} is a matrix whose i-th row consists of the i-th input x_i^T and y ∈ R^m is a vector whose i-th component is the i-th output y_i.
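For concreteness, the least-squares estimator just defined can be checked in a few lines of numpy; the toy data below are our own illustration, not from the paper.

import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 3
theta = np.array([0.5, -0.3, 0.2])             # true parameter vector
X = rng.normal(size=(m, n))                    # rows are the inputs x_i^T
y = X @ theta + 0.01 * rng.normal(size=m)      # outputs with small noise
theta_hat = np.linalg.solve(X.T @ X, X.T @ y)  # (X^T X)^{-1} X^T y
print(theta_hat)                               # close to theta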
Although there are many analyses of the linear regression problem, none is quite right for an application to model-based reinforcement learning (MBRL). In particular, in MBRL, we cannot assume
that X is fixed ahead of time, and we require more than just a prediction of θ but also knowledge about
whether this prediction is sufficiently accurate. A robust learning agent must not only infer an approximate model of its environment but also maintain an idea about the accuracy of the parameters
of this model. Without such meta-knowledge, it would be difficult to determine when to explore (or
when to trust the model) and how to explore (to improve the model). We coined the term KWIK
("know what it knows") for algorithms that have this special property. With this idea in mind, we
present the following online learning problem related to linear regression. Let ||v|| denote the Euclidean norm of a vector v and let Var [X] denote the variance of a random variable X.
Definition 1 (KWIK Linear Regression Problem or KLRP) On every timestep t = 1, 2, . . . an input vector x_t ∈ R^n satisfying ||x_t|| ≤ 1 and an output number y_t ∈ [−1, 1] are provided. The input x_t may be chosen in any way that depends on the previous inputs and outputs (x_1, y_1), . . . , (x_{t−1}, y_{t−1}). The output y_t is chosen probabilistically from a distribution that depends only on x_t and satisfies E[y_t] = θ^T x_t and Var[y_t] ≤ σ², where θ ∈ R^n is an unknown parameter vector satisfying ||θ|| ≤ 1 and σ ∈ R is a known constant. After observing x_t and before observing y_t, the learning algorithm must produce an output ŷ_t ∈ [−1, 1] ∪ {⊥} (a prediction of E[y_t | x_t]). Furthermore, it should be able to provide an output ŷ(x) for any input vector x ∈ {0, 1}^n.
A key aspect of our problem that distinguishes it from other online learning models is that the algorithm is allowed to output a special value ⊥ rather than make a valid prediction (an output other than ⊥). An output of ⊥ signifies that the algorithm is not sure of what to predict and therefore declines to make a prediction. The algorithm would like to minimize the number of times it predicts ⊥, and, furthermore, when it does make a valid prediction the prediction must be accurate, with high probability. Next, we formalize the above intuition and define the properties of a "solution" to KLRP.
Definition 2 We define an admissible algorithm for the KWIK Linear Regression Problem to be one that takes two inputs 0 < ε ≤ 1 and 0 ≤ δ < 1 and, with probability at least 1 − δ, satisfies the following conditions:
1. Whenever the algorithm predicts ŷ_t(x) ∈ [−1, 1], we have that |ŷ_t(x) − θ^T x| ≤ ε.
2. The number of timesteps t for which ŷ_t(x_t) = ⊥ is bounded by some function ζ(ε, δ, n), polynomial in n, 1/ε and 1/δ, called the sample complexity of the algorithm.
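To make Definitions 1 and 2 concrete, here is a minimal interface sketch of what an admissible algorithm must expose (a hypothetical Python rendering of ours; the class and method names are not from the paper).

import numpy as np

class KWIKRegressor:
    """Interface for an admissible KLRP algorithm (Definition 2).

    predict(x) returns a number in [-1, 1], or None to stand in for the
    special "don't know" output; Definition 2 bounds how often None may
    be returned and requires valid predictions to be epsilon-accurate.
    """

    def predict(self, x: np.ndarray):
        raise NotImplementedError

    def update(self, x: np.ndarray, y: float):
        raise NotImplementedError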
1.1 Solution
First, we present an algorithm and then a proof that it solves KLRP. Let X denote an m × n matrix whose rows we interpret as transposed input vectors. We let X(i) denote the transpose of the i-th row of X. Since X^T X is symmetric, we can write it as

    X^T X = U Λ U^T,    (Singular Value Decomposition)    (1)

where U = [v_1, . . . , v_n] ∈ R^{n×n}, with v_1, . . . , v_n being a set of orthonormal eigenvectors of X^T X. Let the corresponding eigenvalues be λ_1 ≥ λ_2 ≥ · · · ≥ λ_k ≥ 1 > λ_{k+1} ≥ · · · ≥ λ_n ≥ 0. Note that Λ = diag(λ_1, . . . , λ_n) is diagonal but not necessarily invertible. Now, define Ū = [v_1, . . . , v_k] ∈ R^{n×k} and Λ̄ = diag(λ_1, . . . , λ_k) ∈ R^{k×k}. For a fixed input x_t (a new input provided to the algorithm at time t), define

    q̄ := X Ū Λ̄^{−1} Ū^T x_t ∈ R^m,    (2)
    v̄ := [0, . . . , 0, v_{k+1}^T x_t, . . . , v_n^T x_t]^T ∈ R^n.    (3)
Algorithm 1 KWIK Linear Regression
0: Inputs: α_1, α_2
1: Initialize X = [ ] and y = [ ].
2: for t = 1, 2, 3, · · · do
3:    Let x_t denote the input at time t.
4:    Compute q̄ and v̄ using Equations 2 and 3.
5:    if ||q̄|| ≤ α_1 and ||v̄|| ≤ α_2 then
6:       Choose θ̂ ∈ R^n that minimizes Σ_i [y(i) − θ̂^T X(i)]² subject to ||θ̂|| ≤ 1, where X(i) is the transpose of the i-th row of X and y(i) is the i-th component of y.
7:       Output valid prediction x_t^T θ̂.
8:    else
9:       Output ⊥.
10:      Receive output y_t.
11:      Append x_t^T as a new row to the matrix X.
12:      Append y_t as a new element to the vector y.
13:   end if
14: end for
Our algorithm for solving the KWIK Linear Regression Problem uses these quantities and is provided in pseudocode by Algorithm 1. Our first main result of the paper is the following theorem.
Theorem 1 With appropriate parameter settings, Algorithm 1 is an admissible algorithm for the KWIK Linear Regression Problem with a sample complexity bound of Õ(n³/ε⁴).
Although the analysis of Algorithm 1 is somewhat complicated, the algorithm itself has a simple
interpretation. Given a new input xt , the algorithm considers making a prediction of the output yt
using the norm-constrained least-squares estimator (specifically, θ̂ defined in line 6 of Algorithm 1). The norms of the vectors q̄ and v̄ provide a quantitative measure of uncertainty about this estimate. When both norms are small, the estimate is trusted and a valid prediction is made. When either norm is large, the estimate is not trusted and the algorithm produces an output of ⊥.
One may wonder why q̄ and v̄ provide a measure of uncertainty for the least-squares estimate. Consider the case when all eigenvalues of X^T X are greater than 1. In this case, note that x = X^T X (X^T X)^{−1} x = X^T q̄. Thus, x can be written as a linear combination, with coefficients given by q̄, of previously experienced input vectors (the rows of X). As shown by Auer (2002), this particular linear combination minimizes ||q|| over all linear combinations x = X^T q. Intuitively, if the norm of q̄ is small, then there are many previous training samples (actually, combinations of inputs) "similar" to x, and hence our least-squares estimate is likely to be accurate for x. For the case of ill-conditioned X^T X (when X^T X has eigenvalues close to 0), X(X^T X)^{−1} x may be undefined or have a large norm. In this case, we must consider the directions corresponding to small eigenvalues separately, and this consideration is dealt with by v̄.
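A compact numpy sketch of Algorithm 1 follows (our own code, not the authors' implementation, matching the interface sketched after Definition 2). The constrained least-squares step of line 6 is handled here by noting that, when the unconstrained solution violates the norm bound, the minimizer is a ridge solution whose regularizer we locate by bisection so that ||θ̂|| = 1; that implementation choice is ours, and α_1, α_2 are the threshold inputs from the pseudocode.

import numpy as np

def constrained_lsq(X, y, tol=1e-8):
    """Solve min_theta ||y - X theta||^2 subject to ||theta|| <= 1.

    If the unconstrained least-squares solution already satisfies the norm
    bound it is optimal; otherwise the minimizer lies on the boundary and
    equals a ridge solution (X^T X + lam I)^{-1} X^T y, with lam found by
    bisection so that the norm equals 1 (our implementation choice).
    """
    theta = np.linalg.lstsq(X, y, rcond=None)[0]
    if np.linalg.norm(theta) <= 1.0:
        return theta
    A, b = X.T @ X, X.T @ y
    ridge = lambda lam: np.linalg.solve(A + lam * np.eye(A.shape[0]), b)
    lo, hi = 0.0, 1.0
    while np.linalg.norm(ridge(hi)) > 1.0:
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if np.linalg.norm(ridge(mid)) > 1.0 else (lo, mid)
    return ridge(hi)

class KWIKLinReg:
    """A sketch of Algorithm 1 (KWIK Linear Regression); our code."""

    def __init__(self, n, alpha1, alpha2):
        self.alpha1, self.alpha2 = alpha1, alpha2
        self.X = np.empty((0, n))
        self.y = np.empty(0)

    def predict(self, x):
        """Return a prediction in [-1, 1], or None for the "don't know" output."""
        if self.X.shape[0] == 0:
            return None
        lam, U = np.linalg.eigh(self.X.T @ self.X)       # X^T X = U diag(lam) U^T, Eq. (1)
        big = lam >= 1.0
        Ubar = U[:, big]
        q = self.X @ (Ubar @ ((Ubar.T @ x) / lam[big]))  # q_bar of Eq. (2)
        v = U[:, ~big].T @ x                             # nonzero entries of v_bar, Eq. (3)
        if np.linalg.norm(q) <= self.alpha1 and np.linalg.norm(v) <= self.alpha2:
            theta = constrained_lsq(self.X, self.y)      # line 6 of Algorithm 1
            return float(theta @ x)
        return None

    def update(self, x, y):
        """Append (x_t, y_t); meant to be called only on abstention rounds (lines 10-12)."""
        self.X = np.vstack([self.X, x])
        self.y = np.append(self.y, y)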
1.2 Analysis
We provide a sketch of the analysis of Algorithm 1. Please see our technical report for full details.
The analysis hinges on two key lemmas that we now present.
In the following lemma, we analyze the behavior of the squared error of predictions based on an incorrect estimator θ̂ ≠ θ versus the squared error of using the true parameter vector θ. Specifically, we show that the squared error of the former is very likely to be larger than the latter when the predictions based on θ̂ (of the form θ̂^T x for input x) are highly inaccurate. The proof uses Hoeffding's bound and is omitted.
Lemma 1 Let θ ∈ R^n and θ̂ ∈ R^n be two fixed parameter vectors satisfying ||θ|| ≤ 1 and ||θ̂|| ≤ 1. Suppose that (x_1, y_1), . . . , (x_m, y_m) is any sequence of samples satisfying x_i ∈ R^n, y_i ∈ R, ||x_i|| ≤ 1, y_i ∈ [−1, 1], E[y_i | x_i] = θ^T x_i, and Var[y_i | x_i] ≤ σ². For any 0 < δ′ < 1 and fixed positive constant z, if

    Σ_{i=1}^m [(θ − θ̂)^T x_i]² ≥ 2√(8m ln(2/δ′)) + z,    (4)

then

    Σ_{i=1}^m (y_i − θ̂^T x_i)² > Σ_{i=1}^m (y_i − θ^T x_i)² + z    (5)

with probability at least 1 − 2δ′.
The following lemma, whose proof is fairly straightforward and therefore omitted, relates the error of an estimate θ̂^T x for a fixed input x based on an inaccurate estimator θ̂ to the quantities ||q̄||, ||v̄||, and Δ_E(θ̂) := √( Σ_{i=1}^m [(θ − θ̂)^T X(i)]² ). Recall that when ||q̄|| and ||v̄|| are both small, our algorithm becomes confident of the least-squares estimate. In precisely this case, the lemma shows that |(θ − θ̂)^T x| is bounded by a quantity proportional to Δ_E(θ̂).

Lemma 2 Let θ ∈ R^n and θ̂ ∈ R^n be two fixed parameter vectors satisfying ||θ|| ≤ 1 and ||θ̂|| ≤ 1. Suppose that (x_1, y_1), . . . , (x_m, y_m) is any sequence of samples satisfying x_i ∈ R^n, y_i ∈ R, ||x_i|| ≤ 1, y_i ∈ [−1, 1]. Let x ∈ R^n be any vector. Let q̄ and v̄ be defined as above. Let Δ_E(θ̂) denote the error term √( Σ_{i=1}^m [(θ − θ̂)^T x_i]² ). We have that

    |(θ − θ̂)^T x| ≤ ||q̄|| Δ_E(θ̂) + 2||v̄||.    (6)
Proof sketch: (of Theorem 1)
The proof has three steps. The first is to bound the sample complexity of the algorithm (the number of times the algorithm makes a prediction of ⊥) in terms of the input parameters α_1 and α_2. The second is to choose the parameters α_1 and α_2. The third is to show that, with high probability, every valid prediction made by the algorithm is accurate.
Step 1 We derive an upper bound m̄ on the number of timesteps for which either ||q̄|| > α_1 holds or ||v̄|| > α_2 holds. Observing that the algorithm trains on only those samples experienced during precisely these timesteps and applying Lemma 13 from the paper by Auer (2002), we have that

    m̄ = O( n ln(n/α_1)/α_1² + n/α_2² ).    (7)
Step 2 We choose α_1 = C√(Q ln Q), where C is a constant and Q = ε² / (σ n ln(1/(εδ)) ln(n)), and α_2 = ε/4.
Step 3 Consider some fixed timestep t during the execution of Algorithm 1 such that the algorithm makes a valid prediction (not ⊥). Let θ̂ denote the solution of the norm-constrained least-squares minimization (line 6 in the pseudocode). By definition, since ⊥ was not predicted, we have that ||q̄|| ≤ α_1 and ||v̄|| ≤ α_2. We would like to show that |θ̂^T x − θ^T x| ≤ ε so that Condition 1 of Definition 2 is satisfied. Suppose not, namely that |(θ̂ − θ)^T x| > ε. Using Lemma 2, we can lower bound the quantity Δ_E(θ̂)² = Σ_{i=1}^m [(θ − θ̂)^T X(i)]², where m denotes the number of rows of the matrix X (equivalently, the number of samples used by the algorithm for training, which we upper-bounded by m̄) and X(i) denotes the transpose of the i-th row of X. Finally, we would like to apply Lemma 1 to prove that, with high probability, the squared error of θ̂ will be larger than the squared error of predictions based on the true parameter vector θ, which contradicts the fact that θ̂ was chosen to minimize the term Σ_{i=1}^m (y_i − θ̂^T X(i))². One problem with this approach is that Lemma 1 applies to a fixed θ̂, and the least-squares computation of Algorithm 1 may choose any θ̂ in the infinite set {θ̂ ∈ R^n : ||θ̂|| ≤ 1}. Therefore, we use a uniform discretization to form a finite cover of [−1, 1]^n and apply the lemma to the member of the cover closest to θ̂. To guarantee that the total failure probability of the algorithm is at most δ, we apply the union bound over all (finitely many) applications of Lemma 1. □
1.3 Notes
In our formulation of KLRP we assumed an upper bound of 1 on the two-norm of the inputs x_i, the outputs y_i, and the true parameter vector θ. By appropriate scaling of the inputs and/or outputs, we
could instead allow a larger (but still finite) bound.
Our analysis of Algorithm 1 showed that it is possible to solve KLRP with polynomial sample complexity (where the sample complexity is defined as the number of timesteps t that the algorithm
outputs ⊥ for the current input x_t), with high probability. We note that the algorithm also has polynomial computational complexity per timestep, given the tractability of solving norm-constrained
least-squares problems (see Chapter 12 of the book by Golub and Van Loan (1996)).
1.4 Related Work
Work on linear regression is abundant in the statistics community (Seber & Lee, 2003). The use
of the quantities v̄ and q̄ to quantify the level of certainty of the linear estimator was introduced by
Auer (2002). Our analysis differs from that by Auer (2002) because we do not assume that the input
vectors xi are fixed ahead of time, but rather that they may be chosen in an adversarial manner. This
property is especially important for the application of regression techniques to the full RL problem,
rather than the Associative RL problem considered by Auer (2002). Our analysis has a similar flavor
to some, but not all, parts of the analysis by Abbeel and Ng (2005). However, a crucial difference
of our framework and analysis is the use of the output ⊥ to signify uncertainty in the current estimate,
which allows for efficient exploration in the application to RL as described in the next section.
2 Application to Reinforcement Learning
The general reinforcement-learning (RL) problem is how to enable an agent (computer program,
robot, etc.) to maximize an external reward signal by acting in an unknown environment. To ensure
a well-defined problem, we make assumptions about the types of possible worlds. To make the
problem tractable, we settle for near-optimal (rather than optimal) behavior on all but a polynomial
number of timesteps, as well as a small allowable failure probability. This type of performance
metric was introduced by Kakade (2003), in the vein of recent RL analyses (Kearns & Singh, 2002;
Brafman & Tennenholtz, 2002).
In this section, we formalize a specific RL problem where the environment is mathematically modeled by a continuous MDP taken from a rich class of MDPs. We present an algorithm and prove
that it learns efficiently within this class. The algorithm is ?model-based? in the sense that it constructs an explicit MDP that it uses to reason about future actions in the true, but unknown, MDP
environment. The algorithm uses, as a subroutine, any admissible algorithm for the KWIK Linear
Regression Problem introduced in Section 1. Although our main result is for a specific class of continuous MDPs, albeit an interesting and previously studied one, our technique is more general and
should be applicable to many other classes of MDPs as described in the conclusion.
2.1 Problem Formulation
The model we use is slightly modified from the model described by Abbeel and Ng (2005). The
main difference is that we consider discounted rather than undiscounted MDPs, and we don't require the agent to have a "reset" action that takes it to a specified start state (or distribution). Let P_S denote
the set of all (measurable) probability distributions over the set S. The environment is described by
a discounted MDP M = ⟨S, A, T, R, γ⟩, where S = R^{n_S} is the state space, A = R^{n_A} is the action space, T : S × A → P_S is the unknown transition dynamics, γ ∈ [0, 1) is the discount factor, and R : S × A → R is the known reward function.¹ For each timestep t, let x_t ∈ S denote the current
¹ All of our results can easily be extended to the case of an unknown reward function with a suitable linearity assumption.
state and u_t ∈ A the current action. The transition dynamics T satisfy

    x_{t+1} = M φ(x_t, u_t) + w_t,    (8)

where x_{t+1} ∈ S, φ(·, ·) : R^{n_S + n_A} → R^n is a (basis or kernel) function satisfying ||φ(·, ·)|| ≤ 1, and M is an n_S × n matrix. We assume that the 2-norm of each row of M is bounded by 1.² Each component of the noise term w_t ∈ R^{n_S} is chosen i.i.d. from a normal distribution with mean 0 and variance σ² for a known constant σ. If an MDP satisfies the above conditions, we say that it is linearly parameterized, because the next state x_{t+1} is a linear function of the vector φ(x_t, u_t) (which describes the current state and action) plus a noise term.
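As a toy illustration of Equation (8), the snippet below (our own; the particular feature map φ and all dimensions are arbitrary illustrative choices, not the paper's) rolls out a few transitions of a linearly parameterized MDP.

import numpy as np

rng = np.random.default_rng(0)
n_S, n_A = 2, 1                      # state / action dimensions

def phi(x, u):
    """A toy (basis) function of the state-action pair with ||phi|| <= 1."""
    z = np.concatenate([x, u, x * x])            # arbitrary illustrative features
    return z / max(1.0, float(np.linalg.norm(z)))

n = n_S + n_A + n_S                  # feature dimension of this toy phi
M = rng.uniform(-1.0, 1.0, size=(n_S, n))
M /= np.maximum(1.0, np.linalg.norm(M, axis=1, keepdims=True))  # row 2-norms <= 1
sigma = 0.05

x = np.zeros(n_S)
for t in range(5):
    u = rng.uniform(-1.0, 1.0, size=n_A)         # an arbitrary action
    w = rng.normal(0.0, sigma, size=n_S)         # i.i.d. Gaussian noise
    x = M @ phi(x, u) + w                        # Eq. (8)
    print(t, x)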
We assume that the learner (also called the agent) receives n_S, n_A, n, R, φ(·, ·), γ, and σ as input,
with T initially being unknown. The learning problem is defined as follows. The agent always
occupies a single state s of the MDP M. The agent is given s and chooses an action a. It then receives an immediate reward r ∼ R(s, a) and is transported to a next state s′ ∼ T(s, a). This procedure then repeats forever. The first state occupied by the agent may be chosen arbitrarily.
A policy is any strategy for choosing actions. We assume (unless noted otherwise) that rewards all lie in the interval [0, 1]. For any policy π, let V_M^π(s) (Q_M^π(s, a)) denote the discounted, infinite-horizon value (action-value) function for π in M (which may be omitted from the notation) from state s. Specifically, let s_t and r_t be the t-th encountered state and received reward, respectively, resulting from execution of policy π in some MDP M from state s_0. Then, V_M^π(s) = E[ Σ_{j=0}^∞ γ^j r_j | s_0 = s ]. The optimal policy is denoted π* and has value functions V_M*(s) and Q_M*(s, a). Note that a policy cannot have a value greater than v_max := 1/(1 − γ), by the assumption of a maximum reward of 1.
2.2 Algorithm
First, we discuss how to use an admissible learning algorithm for KLRP to construct an MDP model.
We proceed by specifying the transition model for each of the (infinitely many) state-action pairs.
Given a fixed state-action pair (s, a), we need to estimate the next-state distribution of the MDP from
past experience, which consists of input state-action pairs (transformed by the nonlinear function φ) and output next states. For each state component i ∈ {1, . . . , n_S}, we have a separate learning problem that can be solved by any instance A_i of an admissible KLRP algorithm.³ If each instance makes a valid prediction (not ⊥), then we simply construct an approximate next-state distribution whose i-th component is normally distributed with variance σ² and whose mean is given by the prediction of A_i (this procedure is equivalent to constructing an approximate transition matrix M̂ whose i-th row is equal to the transpose of the approximate parameter vector θ̂ learned by A_i).
If any instance of our KLRP algorithm predicts ⊥ for state-action pair (s, a), then we cannot estimate the next-state distribution. Instead, we make s highly rewarding in the MDP model to encourage exploration, as done in the R-MAX algorithm (Brafman & Tennenholtz, 2002). Following the terminology introduced by Kearns and Singh (2002), we call such a state (state-action) an "unknown" state (state-action), and we ensure that the value function of our model assigns v_max (the maximum possible value) to state s. The standard way to satisfy this condition for finite MDPs is to make the transition function for action a from state s a self-loop with reward 1 (yielding a value of v_max = 1/(1 − γ) for state s). We can effect the exact same result in a continuous MDP by adding a component to each state vector s and to each vector φ(s, a) for every state-action pair (s, a). If (s, a) is "unknown", we set the value of the additional components (of φ(s, a) and s) to 1; otherwise we set them to 0. We add an additional row and column to M that preserves this extra component (during the transformation from φ(s, a) to the next state s′) and otherwise doesn't change the next-state distribution. Finally, we give a reward of 1 to any unknown state, leaving rewards for the known states unchanged. Pseudocode for the resulting KWIK-RMAX algorithm is provided in Algorithm 2.
Theorem 2 For any ε and δ, the KWIK-RMAX algorithm executes an ε-optimal policy on all but at most a polynomial (in n, n_S, 1/ε, 1/δ, and 1/(1 − γ)) number of steps, with probability at least 1 − δ.
² The algorithm can be modified to deal with bounds (on the norms of the rows of M) that are larger than one.
³ One minor technical detail is that our KLRP setting requires bounded outputs (see Definition 1), while our application to MBRL requires dealing with normal, and hence unbounded, outputs. This is easily dealt with by ignoring any extremely large (or small) outputs and showing that the resulting norm of the truncated normal distribution learned by each instance A_i is very close to the norm of the untruncated distribution.
Algorithm 2 KWIK-RMAX Algorithm
0: Inputs: n_S, n_A, n, R, φ(·, ·), γ, σ, ε, δ, and an admissible learning algorithm ModelLearn.
1: for all state components i ∈ {1, . . . , n_S} do
2:    Initialize a new instantiation of ModelLearn, denoted A_i, with inputs Cε(1 − γ)²/(σ²√n) and δ/n_S (for the inputs ε and δ, respectively, in Definition 2), where C is some constant determined by the analysis.
3: end for
4: Initialize an MDP Model with state space S, action space A, reward function R, discount factor γ, and transition function specified by A_i for i ∈ {1, . . . , n_S} as described above.
5: for t = 1, 2, 3, · · · do
6:    Let s denote the state at time t.
7:    Choose action a := π̂*(s), where π̂* is the optimal policy of the MDP Model.
8:    Let s′ be the next state after executing action a.
9:    for all state components i ∈ {1, . . . , n_S} do
10:      Present input-output pair (φ(s, a), s′(i)) to A_i.
11:   end for
12:   Update MDP Model.
13: end for
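The per-step model logic of Algorithm 2 can be sketched as follows (our code, not the authors'; `learners` is any list of n_S objects exposing the predict/update interface sketched in Section 1, one per state component).

import numpy as np

def predict_next_state(learners, phi_sa):
    """Query one KWIK regressor per state component for phi(s, a).

    If every learner makes a valid prediction, the model's next-state
    distribution is N(mean, sigma^2 I); if any learner abstains, the
    state-action pair is "unknown" and the planner should treat it
    optimistically (value v_max), as described above.
    """
    means = [lrn.predict(phi_sa) for lrn in learners]
    if any(m is None for m in means):
        return None, False                 # unknown state-action
    return np.array(means), True

def observe_transition(learners, phi_sa, s_next):
    """Line 10 of Algorithm 2: present (phi(s, a), s'(i)) to each A_i."""
    for i, lrn in enumerate(learners):
        lrn.update(phi_sa, float(s_next[i]))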
2.3 Analysis
Proof sketch: (of Theorem 2)
It can be shown that, with high probability, the policy π̂* is either an ε-optimal policy (V^π̂*(s) ≥ V*(s) − ε) or it is very likely to lead to an unknown state. However, the number of times the latter event can occur is bounded by the maximum number of times the instances A_i can predict ⊥, which is polynomial in the relevant parameters. □
2.4 The Planning Assumption
We have shown that the KWIK-RMAX Algorithm acts near-optimally on all but a small (polynomial) number of timesteps, with high probability. Unfortunately, to do so, the algorithm must
solve its internal MDP model completely and exactly. It is easy to extend the analysis to allow approximate solution. However, it is not clear whether even this approximate computation can be
done efficiently. In any case, discretization of the state space can be used, which yields computational complexity that is exponential in the number of (state and action) dimensions of the problem,
similar to the work of Chow and Tsitsiklis (1991). Alternatively, sparse sampling can be used, whose
complexity has no dependence on the size of the state space but depends exponentially on the time
horizon (≈ 1/(1 − γ)) (Kearns et al., 1999). Practically, there are many promising techniques that
make use of value-function approximation for fast and efficient solution (planning) of MDPs (Sutton
& Barto, 1998). Nevertheless, it remains future work to fully analyze the complexity of planning.
2.5 Related Work
The general exploration problem in continuous state spaces was considered by Kakade et al. (2003),
and at a high level our approach to exploration is similar in spirit. However, a direct application
of Kakade et al.'s (2003) algorithm to linearly-parameterized MDPs results in an algorithm whose
sample complexity scales exponentially, rather than polynomially, with the state-space dimension.
That is because the analysis uses a factor of the size of the "cover" of the metric space. Reinforcement learning in continuous MDPs with linear dynamics was studied by Fiechter (1997). However,
an exact linear relationship between the current state and next state is required for this analysis to
go through, while we allow the current state to be transformed (for instance, adding non-linear state
features) through the non-linear function φ. Furthermore, Fiechter's algorithm relied on the existence
of a ?reset? action and a specific form of reward function. These assumptions admit a solution
that follows a fixed policy and doesn?t depend on the actual history of the agent or the underlying
MDP. The model that we consider, linearly parameterized MDPs, is taken directly from the work by
Abbeel and Ng (2005), where it was justified in part by an application to robotic helicopter flight. In
that work, a provably efficient algorithm was developed in the apprenticeship RL setting. In this setting, the algorithm is given limited access (polynomial number of calls) to a fixed policy (called the
teacher?s policy). With high probably, a policy is learned that is nearly as good as the teacher?s policy. Although this framework is interesting and perhaps more useful for certain applications (such as
helicopter flying), it requires a priori expert knowledge (to construct the teacher) and alleviates the
problem of exploration altogether. In addition, Abbeel and Ng?s (2005) algorithm also relies heavily
on a reset assumption, while ours does not.
Conclusion
We have provided a provably efficient RL algorithm that learns a very rich and important class of
MDPs with continuous state and action spaces. Yet, many real-world MDPs do not satisfy the linearity assumption, a concern we now address. Our RL algorithm utilized a specific online linear
regression algorithm. We have identified certain interesting and general properties (see Definition 2)
of this particular algorithm that support online exploration. These properties are meaningful without
the linearity assumption and should be useful for the development of new algorithms under different modeling assumptions. The real goal of this paper is to work towards developing a general technique for
applying regression algorithms (as black boxes) to model-based reinforcement-learning algorithms
in a robust and formally justified way. We believe the approach used with linear regression can be
repeated for other important classes, but we leave the details as interesting future work.
Acknowledgements
We thank NSF and DARPA IPTO for support.
References
Abbeel, P., & Ng, A. Y. (2005). Exploration and apprenticeship learning in reinforcement learning. ICML '05: Proceedings of the 22nd international conference on Machine learning (pp. 1-8). New York, NY, USA: ACM Press.
Auer, P. (2002). Using confidence bounds for exploitation-exploration trade-offs. Journal of Machine Learning Research, 3, 397-422.
Brafman, R. I., & Tennenholtz, M. (2002). R-MAX - a general polynomial time algorithm for near-optimal reinforcement learning. Journal of Machine Learning Research, 3, 213-231.
Chow, C.-S., & Tsitsiklis, J. N. (1991). An optimal one-way multigrid algorithm for discrete time stochastic control. IEEE Transactions on Automatic Control, 36, 898-914.
Fiechter, C.-N. (1997). PAC adaptive control of linear systems. Tenth Annual Conference on Computational Learning Theory (COLT) (pp. 72-80).
Golub, G. H., & Van Loan, C. F. (1996). Matrix computations. Baltimore, Maryland: The Johns Hopkins University Press. 3rd edition.
Kakade, S. M. (2003). On the sample complexity of reinforcement learning. Doctoral dissertation, Gatsby Computational Neuroscience Unit, University College London.
Kakade, S. M., Kearns, M. J., & Langford, J. C. (2003). Exploration in metric state spaces. Proceedings of the 20th International Conference on Machine Learning (ICML-03).
Kearns, M., Mansour, Y., & Ng, A. Y. (1999). A sparse sampling algorithm for near-optimal planning in large Markov decision processes. Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence (IJCAI-99) (pp. 1324-1331).
Kearns, M. J., & Singh, S. P. (2002). Near-optimal reinforcement learning in polynomial time. Machine Learning, 49, 209-232.
Ng, A. Y., Kim, H. J., Jordan, M. I., & Sastry, S. (2003). Autonomous helicopter flight via reinforcement learning. Advances in Neural Information Processing Systems 16 (NIPS-03).
Seber, G. A. F., & Lee, A. J. (2003). Linear regression analysis. Wiley-Interscience.
Sutton, R. S., & Barto, A. G. (1998). Reinforcement learning: An introduction. The MIT Press.
Tesauro, G. (1994). TD-Gammon, a self-teaching backgammon program, achieves master-level play. Neural Computation, 6, 215-219.
2,422 | 3,198 | Semi-Supervised Multitask Learning
Qiuhua Liu, Xuejun Liao, and Lawrence Carin
Department of Electrical and Computer Engineering
Duke University
Durham, NC 27708-0291, USA
Abstract
A semi-supervised multitask learning (MTL) framework is presented, in which
M parameterized semi-supervised classifiers, each associated with one of M partially labeled data manifolds, are learned jointly under the constraint of a soft-sharing prior imposed over the parameters of the classifiers. The unlabeled data
are utilized by basing classifier learning on neighborhoods, induced by a Markov
random walk over a graph representation of each manifold. Experimental results
on real data sets demonstrate that semi-supervised MTL yields significant improvements in generalization performance over either semi-supervised single-task
learning (STL) or supervised MTL.
1 Introduction
Supervised learning has proven an effective technique for learning a classifier when the quantity of
labeled data is large enough to represent a sufficient sample from the true labeling function. Unfortunately, a generous provision of labeled data is often not available since acquiring the label of
a datum is expensive in many applications. A classifier supervised by a limited amount of labeled
data is known to generalize poorly even if it produces zero training errors. There has been much
recent work on improving the generalization of classifiers based on using information sources beyond the labeled data. These studies fall into two major categories: (i) semi-supervised learning
[9, 12, 15, 10] and (ii) multitask learning (MTL) [3, 1, 13]. The former employs the information
from the data manifold, in which the manifold information provided by the usually abundant unlabeled data is exploited, while the latter leverages information from related tasks.
In this paper we attempt to integrate the benefits offered by semi-supervised learning and MTL, by
proposing semi-supervised multitask learning. The semi-supervised MTL framework consists of M
semi-supervised classifiers coupled by a joint prior distribution over the parameters of all classifiers.
Each classifier provides the solution for a partially labeled data classification task. The solutions for
the M tasks are obtained simultaneously under the unified framework.
Existing semi-supervised algorithms are often not directly amenable to MTL extensions. Transductive algorithms directly operate on labels. Since the label is a local property of the associated data
point, information sharing must be performed at the level of data locations, instead of at the task
level. The inductive algorithm in [10] employs a data-dependent prior to encode manifold information. Since the information transferred from related tasks is also often represented by a prior, the
two priors will compete and need to be balanced; moreover, this precludes using a Dirichlet process [6] or its variants to represent the sharing prior across tasks, because the base distribution of a Dirichlet
process cannot be dependent on any particular manifold.
We develop a new semi-supervised formulation, which enjoys several nice properties that make the
formulation immediately amenable to an MTL extension. First, the formulation has a parametric
classifier built for each task, thus multitask learning can be performed efficiently at the task level,
using the parameters of the classifiers. Second, the formulation encodes the manifold information
of each task inside the associated likelihood function, sparing the prior for exclusive use by the
information from related tasks. Third, the formulation lends itself to a Dirichlet process, allowing
the tasks to share information in a complex manner.
The new semi-supervised formulation is used as a key component of our semi-supervised MTL
framework. In the MTL setting, we have M partially labeled data manifolds, each defining a classification task and involving design of a semi-supervised classifier. The M classifiers are designed
simultaneously within a unified sharing structure. The key component of the sharing structure is a
soft variant of the Dirichlet process (DP), which implements a soft-sharing prior over the parameters
of all classifiers. The soft-DP retains the clustering property of DP and yet does not require exact
sharing of parameters, which increases flexibility and promotes robustness in information sharing.
2 Parameterized Neighborhood-Based Classification
The new semi-supervised formulation, termed parameterized neighborhood-based classification
(PNBC), represents the class probability of a data point by mixing over all data points in the neighborhood, which is formed via Markov random walk over a graph representation of the manifold.
2.1 Neighborhoods Induced by Markov Random Walk
Let G = (X, W) be a weighted graph such that X = {x_1, x_2, · · ·, x_n} is a set of vertices that coincide with the data points in a finite data manifold, and W = [w_ij]_{n×n} is the affinity matrix with the (i, j)-th element w_ij indicating the immediate affinity between data points x_i and x_j. We follow [12, 15] to define w_ij = exp(−0.5 ||x_i − x_j||²/σ_i²), where ||·|| is the Euclidean norm and σ_i > 0. A Markov random walk on graph G = (X, W) is characterized by a matrix of one-step transition probabilities A = [a_ij]_{n×n}, where a_ij is the probability of transiting from x_i to x_j via a single step and is given by a_ij = w_ij / Σ_{k=1}^n w_ik [4]. Let B = [b_ij]_{n×n} = A^t. Then the (i, j)-th element b_ij represents the probability of transiting from x_i to x_j in t steps.
Data point x_j is said to be a t-step neighbor of x_i if b_ij > 0. The t-step neighborhood of x_i, denoted N_t(x_i), is defined by all t-step neighbors of x_i along with the associated t-step transition probabilities, i.e., N_t(x_i) = {(x_j, b_ij) : b_ij > 0, x_j ∈ X}. The appropriateness of a t-step neighborhood depends on the right choice of t. A rule for choosing t is given in [12], based on maximizing the margin of the associated classifier on both labeled and unlabeled data points.
The σ_i in specifying w_ij represents the step size (the distance traversed in a single step) for x_i to reach its immediate neighbors, and we have used a distinct σ for each data point. Location-dependent step sizes allow one to account for possible heterogeneities in the data manifold: at locations with dense data distributions a small step size is suitable, while at locations with sparse data distributions a large step size is appropriate. A simple choice of heterogeneous σ is to let σ_i be related to the distance between x_i and close-by data points, where closeness is measured by Euclidean distance. Such a choice ensures each data point is immediately connected to some neighbors.
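A minimal numpy sketch of this construction follows (our own code; setting σ_i to the distance from x_i to its k-th nearest neighbor is one simple instance of the location-dependent choice described above).

import numpy as np

def t_step_neighborhoods(X, t, k_sigma=3):
    """Rows of the returned B give the t-step transition probabilities b_ij
    of a Markov random walk on the data graph.

    sigma_i is set heuristically to the distance from x_i to its k_sigma-th
    nearest neighbor, so every point connects to some neighbors.
    """
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    sigma = np.sort(D, axis=1)[:, k_sigma]                      # per-point step size
    W = np.exp(-0.5 * D**2 / sigma[:, None]**2)                 # affinities w_ij
    A = W / W.sum(axis=1, keepdims=True)                        # one-step transitions a_ij
    return np.linalg.matrix_power(A, t)                         # B = A^t

# toy usage
X = np.random.default_rng(0).normal(size=(20, 2))
B = t_step_neighborhoods(X, t=4)
assert np.allclose(B.sum(axis=1), 1.0)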
2.2 Formulation of the PNBC Classifier
Let p_0(y_i | x_i, θ) be a base classifier parameterized by θ, which gives the probability of class label y_i of data point x_i, given x_i alone (which is a zero-step neighborhood of x_i). The base classifier can be implemented by any parameterized probabilistic classifier. For binary classification with y ∈ {−1, 1}, the base classifier can be chosen as logistic regression with parameters θ, which expresses the conditional class probability as

    p_0(y_i | x_i, θ) = [1 + exp(−y_i θ^T x_i)]^{−1}    (1)

where a constant element 1 is assumed to be prefixed to each x (the prefixed x is still denoted as x for notational simplicity), and thus the first element in θ is a bias term.
Let $p(y_i|N_t(x_i), \theta)$ denote a neighborhood-based classifier parameterized by θ, representing the probability of class label y_i for x_i, given the neighborhood of x_i. The PNBC classifier is defined as a mixture
$$p(y_i|N_t(x_i), \theta) = \sum_{j=1}^{n} b_{ij}\, p^0(y_i|x_j, \theta) \quad (2)$$
where the j-th component is the base classifier applied to (x_j, y_i) and the associated mixing proportion is defined by the probability of transiting from x_i to x_j in t steps. Since the magnitude of b_ij automatically determines the contribution of x_j to the mixture, we let index j run over the entire X for notational simplicity.
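A minimal sketch of Eqs. (1) and (2) together, assuming the logistic base classifier and the matrix B computed above (all function and variable names are ours):

```python
import numpy as np

def pnbc_prob(B, X, y, theta):
    """p(y_i | N_t(x_i), theta) for every i, as in Eq. (2).

    B     : (n, n) t-step transition matrix
    X     : (n, d) data matrix with a constant 1 prefixed to each row
    y     : (n,) labels in {-1, +1}
    theta : (d,) logistic-regression parameters (first entry is the bias)
    """
    scores = X @ theta                                          # theta^T x_j for all j
    base = 1.0 / (1.0 + np.exp(-y[:, None] * scores[None, :]))  # p0(y_i | x_j, theta)
    return (B * base).sum(axis=1)                               # mix over the neighborhood
```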
The utility of unlabeled data in (2) is conspicuous: in order for x_i to be labeled y_i, each neighbor x_j must be labeled consistently with y_i, with the strength of consistency proportional to b_ij; in such a manner, y_i implicitly propagates over the neighborhood of x_i. By taking neighborhoods into account, it is possible to obtain an accurate estimate of θ based on a small amount of labeled data. The over-fitting problem associated with limited labeled data is ameliorated in the PNBC formulation through enforcing consistent labeling over each neighborhood.
Let L ⊂ {1, 2, ..., n} denote the index set of labeled data in X. Assuming the labels are conditionally independent, we write the neighborhood-conditioned likelihood function
$$p\big(\{y_i, i \in L\} \mid \{N_t(x_i) : i \in L\}, \theta\big) = \prod_{i \in L} p(y_i|N_t(x_i), \theta) = \prod_{i \in L} \sum_{j=1}^{n} b_{ij}\, p^0(y_i|x_j, \theta) \quad (3)$$
3 The Semi-Supervised MTL Framework
3.1 The sharing prior
Suppose we are given M tasks, defined by M partially labeled data sets
$$D_m = \{x_i^m : i = 1, 2, \ldots, n_m\} \cup \{y_i^m : i \in L_m\}$$
for m = 1, ..., M, where y_i^m is the class label of x_i^m and L_m ⊂ {1, 2, ..., n_m} is the index set of labeled data in task m. We consider M PNBC classifiers, parameterized by θ_m, m = 1, ..., M, with θ_m responsible for task m. The M classifiers are not independent but coupled by a prior joint distribution over their parameters
$$p(\theta_1, \ldots, \theta_M) = \prod_{m=1}^{M} p(\theta_m|\theta_1, \ldots, \theta_{m-1}) \quad (4)$$
with the conditional distributions in the product defined by
$$p(\theta_m|\theta_1, \ldots, \theta_{m-1}) = \frac{1}{\alpha + m - 1}\Big[\alpha\, p(\theta_m|\gamma) + \sum_{l=1}^{m-1} N(\theta_m; \theta_l, \sigma^2 I)\Big] \quad (5)$$
where α > 0, p(θ_m|γ) is a base distribution parameterized by γ, and N(·; θ_l, σ²I) is a normal distribution with mean θ_l and covariance matrix σ²I. As discussed below, the prior in (4) is linked to Dirichlet processes and thus is more general than a parametric prior, as used, for example, in [5].
Each normal distribution represents the prior transferred from a previous task; it is the meta-knowledge indicating how the present task should be learned, based on the experience with a previous task. It is through these normal distributions that information sharing between tasks is enforced. Taking into account the data likelihood, unrelated tasks cannot share, since they have dissimilar solutions and forcing them to share the same solution will decrease their respective likelihoods; whereas related tasks have close solutions, and sharing information helps them find their solutions and improve their data likelihoods.
The base distribution represents the baseline prior, which is exclusively used when there are no previous tasks available, as is seen from (5) by setting m = 1. When there are m − 1 previous tasks, one uses the baseline prior with probability α/(α + m − 1), and uses the prior transferred from each of the m − 1 previous tasks with probability 1/(α + m − 1). The α balances the baseline prior and the priors imposed by previous tasks. The role of the baseline prior decreases as m increases, which is in agreement with our intuition, since the information from previous tasks increases with m.
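Read generatively, (5) says how θ_m is drawn given the earlier tasks; a minimal sketch of that draw (names and signature are ours):

```python
import numpy as np

def draw_theta_m(prev_thetas, alpha, sigma, draw_base, rng):
    """Draw theta_m from the conditional prior of Eq. (5).

    prev_thetas : list of parameter vectors theta_1, ..., theta_{m-1}
    draw_base   : callable returning a draw from the base prior p(theta | gamma)
    rng         : numpy Generator, e.g. np.random.default_rng(0)
    """
    m = len(prev_thetas) + 1
    if rng.random() < alpha / (alpha + m - 1):
        return draw_base()                     # baseline-prior branch
    l = rng.integers(len(prev_thetas))         # pick a previous task uniformly
    return prev_thetas[l] + sigma * rng.standard_normal(prev_thetas[l].shape)
```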
The formulation in (5) is suggestive of the Pólya urn representation of a Dirichlet process (DP) [2]. The difference here is that we have used a normal distribution to replace the Dirac delta in Dirichlet processes. Since $N(\theta_m; \theta_l, \sigma^2 I)$ approaches the Dirac delta $\delta(\theta_m - \theta_l)$ as $\sigma^2 \to 0$, we recover the Dirichlet process in the limit $\sigma^2 \to 0$.
The motivation behind the formulation in (5) is twofold. First, a normal distribution can be regarded as a soft version of the Dirac delta. While the Dirac delta requires two tasks to have exactly the same θ when sharing occurs, the soft delta only requires sharing tasks to have similar θ's. The soft sharing may therefore be more consistent with situations in practical applications. Second, the normal distribution is analytically more appealing than the Dirac delta and allows simple maximum a posteriori (MAP) solutions. This is an attractive property considering that most classifiers do not have conjugate priors for their parameters and Bayesian learning cannot be performed exactly.
Under the sharing prior in (4), the current task is equally influenced by each previous task but is influenced unevenly by future tasks: a distant future task has less influence than a near future task. The ordering of the tasks imposed by (4) may in principle affect performance, although we have not found this to be an issue in the experimental results. Alternatively, one may obtain a sharing prior that does not depend on task ordering, by modifying (5) as
$$p(\theta_m|\theta_{-m}) = \frac{1}{\alpha + M - 1}\Big[\alpha\, p(\theta_m|\gamma) + \sum_{l \neq m} N(\theta_m; \theta_l, \sigma^2 I)\Big] \quad (6)$$
where θ_{-m} = {θ_1, ..., θ_M} \ {θ_m}. The prior joint distribution of {θ_1, ..., θ_M} associated with the full conditionals in (6) is not analytically available, and neither is the corresponding posterior joint distribution, which causes technical difficulties in performing MAP estimation.
3.2 Maximum A Posteriori (MAP) Estimation
Assuming that, given {θ_1, ..., θ_M}, the class labels of different tasks are conditionally independent, the joint likelihood function over all tasks can be written as
$$p\big(\{y_i^m, i \in L_m\}_{m=1}^{M} \mid \{N_t(x_i^m) : i \in L_m\}_{m=1}^{M}, \{\theta_m\}_{m=1}^{M}\big) = \prod_{m=1}^{M} \prod_{i \in L_m} \sum_{j=1}^{n_m} b_{ij}^m\, p^0(y_i^m|x_j^m, \theta_m) \quad (7)$$
where the m-th term in the product is taken from (3), with the superscript m indicating the task
index. Note that the neighborhoods are built for each task independently of other tasks, thus a
random walk is always restricted to the same task (the one where the starting data point belongs)
and can never traverse multiple tasks. From (4), (5), and (7), one can write the logarithm of the joint
posterior of {?1 , ? ? ? , ?M }, up to a constant translation that does not depend on {?1 , ? ? ? , ?M },
?
?
m
M
m
M
`MAP (?1 , ? ? ? , ?M ) = ln p {?m }M
m=1 |{yi , i ? Lm }m=1 , {Nt (xi ) : i ? Lm }m=1
?
? P
Pnm m ? m m
Pm?1
PM ? ?
bij p (yi |xj , ?m ) (8)
= m=1 ln ?p(?m |?) + l=1 N (?m ; ?l , ? 2 I) + i?Lmln j=1
We seek the parameters {θ_1, ..., θ_M} that maximize the log-posterior, which is equivalent to simultaneously maximizing the prior in (4) and the likelihood function in (7). As seen from (5), the prior tends to make the θ's similar across tasks (similar θ's increase the prior); however, sharing between unrelated tasks is discouraged, since each task requires a distinct θ to make its likelihood large. As a result, to make the prior and the likelihood large at the same time, one must let related tasks have similar θ's. Although any optimization technique can be applied to maximize the objective function (8), expectation maximization (EM) is particularly suitable, since the objective function involves summations under the logarithmic operation. To conserve space the algorithmic details are omitted here.
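Although the EM details are omitted, the objective (8) itself is simple to evaluate. The sketch below (our own illustration, taking the base prior to be a spherical Gaussian N(0, γ²I)) computes it and could be handed to any off-the-shelf optimizer or wrapped in an EM loop; in practice one would work in log space to avoid underflow of the densities.

```python
import numpy as np

def normal_pdf(x, mu, var):
    """Density of N(mu, var * I) at x (a sketch; prone to underflow in high d)."""
    d = x.size
    norm = (2.0 * np.pi * var) ** (-0.5 * d)
    return norm * np.exp(-0.5 * ((x - mu) ** 2).sum() / var)

def log_posterior(thetas, tasks, alpha, sigma2, gamma2):
    """The objective of Eq. (8). `tasks` is a list of (B, X, labeled_idx, y)
    tuples, one per task; the base prior is taken to be N(0, gamma2 * I)."""
    total = 0.0
    for m, (B, X, idx, y) in enumerate(tasks):
        th = thetas[m]
        prior = alpha * normal_pdf(th, np.zeros_like(th), gamma2)
        for l in range(m):                       # soft deltas from previous tasks
            prior += normal_pdf(th, thetas[l], sigma2)
        total += np.log(prior)
        scores = X @ th
        base = 1.0 / (1.0 + np.exp(-y[idx, None] * scores[None, :]))
        total += np.log((B[idx] * base).sum(axis=1)).sum()   # likelihood term
    return total
```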
Utilization of the manifold information and the information from related tasks has greatly reduced
the hypothesis space. Therefore, point MAP estimation in semi-supervised MTL will not suffer as
much from overfitting as in supervised STL. This argument will be supported by the experimental results in Section 4.2, where semi-supervised MTL outperforms both supervised MTL and supervised
STL, although the former is based on MAP and the latter two are based on Bayesian learning.
With MAP estimation, one obtains the parameters of the base classifier in (1) for each task, which
can be employed to predict the class label of any data point in the associated task, regardless of
whether the data point has been seen during training. In the special case when predictions are desired
only for the unlabeled data points seen during training (transductive learning), one can alternatively
employ the PNBC classifier in (2) to perform the predictions.
4 Experimental Results
First we consider semi-supervised learning on a single task and establish the competitive performance of the PNBC in comparison with existing semi-supervised algorithms. Then we demonstrate
the performance improvements achieved by semi-supervised MTL, relative to semi-supervised STL
and supervised MTL. Throughout this section, the base classifier in (1) is logistic regression.
4.1 Performance of the PNBC on a Single Task
Figure 1: Transductive results of the PNBC. The horizontal axis is the size of X_L. (Panels show accuracy on unlabeled data vs. number of labeled data for WDBC, PIMA, and Ionosphere; curves compare PNBC, Szummer & Jaakkola, Logistic GRF, GRF, Transductive SVM, and, on Ionosphere, PNBC-II.)

Figure 2: Inductive results of the PNBC on Ionosphere. The horizontal axis is the size of X_U. (Panels show accuracy on separated test data vs. number of unlabeled samples, for 10, 20, 30, and 40 labeled samples; curves compare PNBC and Logistic GRF.)
The PNBC is evaluated on three benchmark data sets: Pima Indians Diabetes Database (PIMA),
Wisconsin Diagnostic Breast Cancer (WDBC) data, and Johns Hopkins University Ionosphere
database (Ionosphere), which are taken from the UCI machine learning repository [11]. The evaluation is performed in comparison to four existing semi-supervised learning algorithms, namely, the
transductive SVM [9], the algorithm of Szummer & Jaakkola [12], GRF [15], and Logistic GRF
[10]. The performance is evaluated in terms of classification accuracy, defined as the ratio of the
number of correctly classified data over the total number of data being tested.
We consider two testing modes: transductive and inductive. In the transductive mode, the test data
are the unlabeled data that are used in training the semi-supervised algorithms; in the inductive
mode, the test data are a set of holdout data unseen during training. We follow the same procedures
as used in [10] to perform the experiments. Denote by X any of the three benchmark data sets and
Y the associated set of class labels. In the transductive mode, we randomly sample X_L ⊂ X and assume the associated class labels Y_L are available; the semi-supervised algorithms are trained on X ∪ Y_L and tested on X \ X_L. In the inductive mode, we randomly sample two disjoint data subsets X_L ⊂ X and X_U ⊂ X, and assume the class labels Y_L associated with X_L are available; the semi-supervised algorithms are trained on X_L ∪ Y_L ∪ X_U and tested on 200 data points randomly sampled from X \ (X_L ∪ X_U).
The comparison results are summarized in Figures 1 and 2, where the results of the PNBC and the
algorithm of Szummer & Jaakkola are calculated by us, and the results of the remaining algorithms are cited from [10]. The algorithm of Szummer & Jaakkola [12] and the PNBC use $\sigma_i = \min_j \|x_i - x_j\|/3$ and t = 100; learning of the PNBC is based on MAP estimation. Each curve in the figures
is a result averaged from T independent trials, with T = 20 for the transductive results and T = 50
for the inductive results. In the inductive case, the comparison is between the proposed algorithm
and the Logistic GRF, as the others are transductive algorithms.
For the PNBC, we can either use the base classifier in (1) or the PNBC classifier in (2) to predict the
labels of unlabeled data seen in training (the transductive mode). In the inductive mode, however,
the {b_ij} are not available for the test data (unseen in training) since they are not in the graph representation; therefore we can only employ the base classifier. In the legends of Figures 1 and 2, a suffix "II" appended to PNBC indicates that the PNBC classifier in (2) is employed in testing; when no suffix
is attached, the base classifier is employed in testing.
Figures 1 and 2 show that the PNBC outperforms all the competing algorithms in general, regardless
of the number of labeled data points. The improvements are particularly significant on PIMA and
Ionosphere. As indicated in Figure 1(c), employing manifold information in testing by using (2)
can improve classification accuracy in the transductive learning case. The margin of improvement achieved by the PNBC in the inductive learning case is striking and encouraging: as indicated by the error bars in Figure 2, the PNBC significantly outperforms Logistic GRF in almost all individual trials. Figure 2 also shows that the advantage of the PNBC becomes more conspicuous with a decreasing amount of labeled data considered during training.
4.2 Performance of the Semi-Supervised MTL Algorithm
We compare the proposed semi-supervised MTL against: (a) semi-supervised single-task learning
(STL), (b) supervised MTL, (c) supervised STL, (d) supervised pooling; STL refers to designing
M classifiers independently, each for the corresponding task, and pooling refers to designing a single classifier based on the data of all tasks. Since we have evaluated the PNBC in Section 4.1 and
established its effectiveness, we will not repeat the evaluation here and employ PNBC as a representative semi-supervised learning algorithm in semi-supervised STL. To replicate the experiments in
[13], we employ AUC as the performance measure, where AUC stands for area under the receiver
operating characteristic (ROC) curve [7].
The basic setup of the semi-supervised MTL algorithm is as follows. The tasks are ordered as
they are when the data are provided to the experimenter (we have randomly permuted the tasks and
found the performance does not change much). A separate t-neighborhood is employed to represent
the manifold information (consisting of labeled and unlabeled data points) for each task, where the
step-size at each data point is one third of the shortest distance to the remaining points and t is set
to half the number of data points. The base prior is $p(\theta_m|\gamma) = N(\theta_m; 0, \gamma^2 I)$ and the soft delta is $N(\theta_m; \theta_l, \sigma^2 I)$, where γ = σ = 1. The α balancing the base prior and the soft deltas is 0.3. These
settings represent the basic intuition of the experimenter; they have not been tuned in any way and
therefore do not necessarily represent the best settings for the semi-supervised MTL algorithm.
Figure 3: (a) Performance of the semi-supervised MTL algorithm on landmine detection, in comparison to the remaining five algorithms (average AUC on 19 tasks vs. number of labeled data in each task; curves: supervised STL, supervised pooling, supervised MTL, semi-supervised STL, semi-supervised MTL). (b) The Hinton diagram of between-task similarity when there are 140 labeled data in each task (axes index the landmine fields).
Landmine Detection First we consider the remote sensing problem studied in [13], based on
data collected from real landmines. In this problem, there are a total of 29 sets of data, collected
from various landmine fields. Each data point is represented by a 9-dimensional feature vector
extracted from radar images. The class label is binary (mine or false mine). The data are available
at http://www.ee.duke.edu/?lcarin/LandmineData.zip.
Each of the 29 data sets defines a task, in which we aim to find landmines with a minimum number
of false alarms. To make the results comparable to those in [13], we follow the authors there and take
data sets 1-10 and 16-24 to form 19 tasks. Of the 19 selected data sets, 1-10 are collected at foliated
regions and 11-19 are collected at regions that are bare earth or desert. Therefore we expect two
dominant clusters of tasks, corresponding to the two different types of ground surface conditions.
To replicate the experiments in [13], we perform 100 independent trials, in each of which we randomly select a subset of data for which labels are assumed available, train the semi-supervised MTL
and semi-supervised STL classifiers, and test the classifiers on the remaining data. The AUC averaged over the 19 tasks is presented in Figure 3(a), as a function of the number of labeled data,
where each curve represents the mean calculated from the 100 independent trials and the error bars
represent the corresponding standard deviations. The results of supervised STL, supervised MTL,
and supervised pooling are cited from [13].
Semi-supervised MTL clearly yields the best results up to 80 labeled data points; after that supervised MTL catches up but semi-supervised MTL still outperforms the remaining three algorithms
by significant margins. In this example semi-supervised MTL seems relatively insensitive to the
amount of labeled data; this may be attributed to the doubly enhanced information provided by the
data manifold plus the related tasks, which significantly augment the information available in the
limited labeled data. The superiority of supervised pooling over supervised STL on this dataset suggests that there are significant benefits offered by sharing across the tasks, which partially explains
why supervised MTL eventually catches up with semi-supervised MTL.
We plot in Figure 3(b) the Hinton diagram [8] of the between-task sharing matrix (an average over
the 100 trials) found by the semi-supervised MTL when there are 140 labeled data in each task.
The (m, l)-th element of the similarity matrix is equal to $\exp(-\|\theta_m - \theta_l\|^2/2)$ (normalized such that the maximum element is one), which is represented by a square in the Hinton diagram, with a larger square indicating a larger value of the corresponding element.
dominant sharing among tasks 1-10 and another dominant sharing among tasks 11-19. Recall from
the beginning of the section that data sets 1-10 are from foliated regions and data sets 11-19 are
from regions that are bare earth or desert. Therefore, the sharing is in agreement with the similarity
between tasks.
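For concreteness, a small sketch of this similarity computation (our own code, not the authors'):

```python
import numpy as np

def task_similarity(thetas):
    """Between-task similarity exp(-||theta_m - theta_l||^2 / 2),
    normalized so that the maximum element is one."""
    T = np.stack(thetas)                                   # (M, d)
    d2 = ((T[:, None, :] - T[None, :, :]) ** 2).sum(-1)    # pairwise squared norms
    S = np.exp(-0.5 * d2)
    return S / S.max()
```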
Art Images Retrieval We now consider the problem of art image retrieval [14, 13], in which we
have a library of 642 art images and want to retrieve the images based on a user's preference. The preference of each user is available on a subset of images; therefore the objective is to learn the preference of each user based on a subset of training examples. Each image is represented by a vector of features and a user's rating is represented by a binary label (like or dislike). The users' preferences are collected in a web-based survey, which can be found at http://honolulu.dbs.informatik.uni-muenchen.de:8080/paintings/index.jsp.
We consider the same 69 users as in [13], who each rated more than 100 images. The
preference prediction for each user is treated as a task, with the associated set of ground truth data
defined by the images rated by the user. These 69 tasks are used in our experiment to evaluate
the performance of semi-supervised MTL. Since two users may give different ratings to exactly the
same image, pooling the tasks together can lead to multiple labels for the same data point. For this
reason, we exclude supervised pooling and semi-supervised pooling in the performance comparison.
Figure 4: Performance of the semi-supervised MTL algorithm on art image retrieval, in comparison to the remaining three algorithms (average AUC across the tasks vs. number of labeled data for each task; curves: supervised STL, supervised MTL, semi-supervised STL, semi-supervised MTL).
Following [13], we perform 50 independent trials, in each of which we randomly select a subset of
images rated by each user, train the semi-supervised MTL and semi-supervised STL classifiers, and
test the classifiers on the remaining images. The AUC averaged over the 69 tasks is presented in
Figure 4, as a function of the number of labeled data (rated images), where each curve represents
the mean calculated from the 50 independent trials and the error bars represent the corresponding
standard deviations. The results of supervised STL and supervised MTL are cited from [13].
Semi-supervised MTL performs very well, improving upon results of the three other algorithms by
significant margins in almost all individual trials (as seen from the error bars). It is noteworthy
that the performance improvement achieved by semi-supervised MTL over semi-supervised STL
is larger than the corresponding improvement achieved by supervised MTL over supervised STL. The
greater improvement demonstrates that unlabeled data can be more valuable when used along with
multitask learning. The additional utility of unlabeled data can be attributed to its role in helping to
find the appropriate sharing between tasks.
5 Conclusions
A framework has been proposed for performing semi-supervised multitask learning (MTL). Recognizing that existing semi-supervised algorithms are not conveniently extended to an MTL setting,
we have introduced a new semi-supervised formulation to allow a direct MTL extension. We have
proposed a soft sharing prior, which allows each task to robustly borrow information from related
tasks and is amenable to simple point estimation based on maximum a posteriori. Experimental
results have demonstrated the superiority of the new semi-supervised formulation as well as the
additional performance improvement offered by semi-supervised MTL. The superior performance
of semi-supervised MTL on art image retrieval and landmine detection shows that manifold information and the information from related tasks can play positive and complementary roles in real
applications, suggesting that significant benefits can be offered in practice by semi-supervised MTL.
References
[1] B. Bakker and T. Heskes. Task clustering and gating for Bayesian multitask learning. Journal of Machine Learning Research, pages 83–99, 2003.
[2] D. Blackwell and J. MacQueen. Ferguson distributions via Pólya urn schemes. Annals of Statistics, 1:353–355, 1973.
[3] R. Caruana. Multitask learning. Machine Learning, 28:41–75, 1997.
[4] F. R. K. Chung. Spectral Graph Theory. American Mathematical Society, 1997.
[5] T. Evgeniou and M. Pontil. Regularized multi-task learning. In Proc. 17th SIGKDD Conf. on Knowledge Discovery and Data Mining, 2004.
[6] T. Ferguson. A Bayesian analysis of some nonparametric problems. Annals of Statistics, 1:209–230, 1973.
[7] J. Hanley and B. McNeil. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology, 143:29–36, 1982.
[8] G. E. Hinton and T. J. Sejnowski. Learning and relearning in Boltzmann machines. In J. L. McClelland, D. E. Rumelhart, and the PDP Research Group, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, volume 1, pages 282–317. MIT Press, Cambridge, MA, 1986.
[9] T. Joachims. Transductive inference for text classification using support vector machines. In Proc. 16th International Conf. on Machine Learning (ICML), pages 200–209. Morgan Kaufmann, San Francisco, CA, 1999.
[10] B. Krishnapuram, D. Williams, Y. Xue, A. Hartemink, L. Carin, and M. Figueiredo. On semi-supervised classification. In Advances in Neural Information Processing Systems (NIPS), 2005.
[11] D. J. Newman, S. Hettich, C. L. Blake, and C. J. Merz. UCI repository of machine learning databases. http://www.ics.uci.edu/~mlearn/MLRepository.html, 1998.
[12] M. Szummer and T. Jaakkola. Partially labeled classification with Markov random walks. In Advances in Neural Information Processing Systems (NIPS), 2002.
[13] Y. Xue, X. Liao, L. Carin, and B. Krishnapuram. Multi-task learning for classification with Dirichlet process priors. Journal of Machine Learning Research (JMLR), 8:35–63, 2007.
[14] K. Yu, A. Schwaighofer, V. Tresp, W.-Y. Ma, and H. J. Zhang. Collaborative ensemble learning: Combining collaborative and content-based information filtering via hierarchical Bayes. In Proceedings of the 19th International Conference on Uncertainty in Artificial Intelligence (UAI 2003), 2003.
[15] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In The Twentieth International Conference on Machine Learning (ICML), 2003.
Scan Strategies for Adaptive Meteorological Radars
Victoria Manfredi, Jim Kurose
Department of Computer Science
University of Massachusetts
Amherst, MA USA
{vmanfred,kurose}@cs.umass.edu
Abstract
We address the problem of adaptive sensor control in dynamic resource-constrained sensor networks. We focus on a meteorological sensing network comprising radars that can perform sector scanning rather than always scanning 360°.
We compare three sector scanning strategies. The sit-and-spin strategy always scans 360°. The limited lookahead strategy additionally uses the expected environmental state K decision epochs in the future, as predicted from Kalman filters,
in its decision-making. The full lookahead strategy uses all expected future states
by casting the problem as a Markov decision process and using reinforcement
learning to estimate the optimal scan strategy. We show that the main benefits of
using a lookahead strategy are when there are multiple meteorological phenomena
in the environment, and when the maximum radius of any phenomenon is sufficiently smaller than the radius of the radars. We also show that there is a trade-off
between the average quality with which a phenomenon is scanned and the number
of decision epochs before which a phenomenon is rescanned.
1 Introduction
Traditionally, meteorological radars, such as the National Weather Service NEXRAD system, are
tasked to always scan 360 degrees. In contrast, the Collaborative Adaptive Sensing of the Atmosphere (CASA) Engineering Research Center [5] is developing a new generation of small, low-power
but agile radars that can perform sector scanning, targeting sensing when and where the user needs
are greatest. Since not all meteorological phenomena can be observed all of the time with
the highest degree of fidelity, the radars must decide how best to perform scanning. While we focus on the problem of how to perform sector scanning in such an adaptive meteorological sensing
network, it is an instance of the larger class of problems of adaptive sensor control in dynamic
resource-constrained sensor networks.
Given the ability of a network of radars to perform sector scanning, how should scanning be adapted
at each decision epoch? Any scan strategy must consider, for each scan action, both the expected
quality with which phenomena would be observed, and the expected number of decision epochs
before which phenomena would be first observed (for new phenomena) or rescanned, since not all
regions are scanned every epoch under sectored scanning. Another consideration is whether to optimize myopically only over current and possibly past environmental state, or whether to additionally
optimize over expected future states. In this work we examine three methods for adapting the radar
scan strategy. The methods differ in the information they use to select a scan configuration at a
particular decision epoch. The sit-and-spin strategy of always scanning 360 degrees is independent of any external information. The limited lookahead strategies additionally use the expected
environmental state K decision epochs in the future in its decision-making. Finally, the full lookahead strategy has an infinite horizon: it uses all expected future states by casting the problem as a
Markov decision process and using reinforcement learning to estimate the optimal scan strategy. All
strategies, excluding sit-and-spin, work by optimizing the overall "quality" (a term we will define
precisely shortly) of the sensed information about phenomena in the environment, while restricting
or penalizing long inter-scan intervals.
Our contributions are two-fold. We first introduce the meteorological radar control problem and
show how to constrain the problem so that it is amenable to reinforcement learning methods. We
then identify conditions under which the computational cost of an infinite horizon radar scan strategy
such as reinforcement learning is necessary. With respect to the radar meteorological application,
we show that the main benefits of considering expected future states are when there are multiple
meteorological phenomena in the environment, and when the maximum radius of any phenomenon
is sufficiently smaller than the radius of the radars. We also show that there is a trade-off between
the average quality with which a phenomenon is scanned and the number of decision epochs before
which a phenomenon is rescanned. Finally, we show that for some environments, a limited lookahead strategy is sufficient. In contrast to other work on radar control (see Section 5), we focus on
tracking meteorological phenomena and the time frame over which to evaluate control decisions.
The rest of this paper is organized as follows. Section 2 defines the radar control problem. Section
3 describes the scan strategies we consider. Section 4 describes our evaluation framework and
presents results. Section 5 reviews related work on control and resource allocation in radar and
sensor networks. Finally, Section 6 summarizes this work and outlines future work.
2 Meteorological Radar Control Problem
Meteorological radar sensing characteristics are such that the smaller the sector that a radar scans
(until a minimum sector size is reached), the higher the quality of the data collected, and thus, the
more likely it is that phenomena located within the sector are correctly identified [2]. The multiradar meteorological control problem is then as follows. We have a set of radars, with fixed locations
and possibly overlapping footprints. Each radar has a set of scan actions from which it chooses. In
the simplest case, a radar scan action determines the size of the sector to scan, the start angle, the
end angle, and the angle of elevation. We will not consider elevation angles here. Our goal is
to determine which scan actions to use and when to use them. An effective scanning strategy must
balance scanning small sectors (thus implicitly not scanning other sectors), to ensure that phenomena
are correctly identified, with scanning a variety of sectors, to ensure that no phenomena are missed.
We will evaluate the performance of different scan strategies based on inter-scan time, quality, and
cost. Inter-scan time is the number of decision epochs before a phenomenon is either first observed
or rescanned; we would like this value to be below some threshold. Quality measures how well a
phenomenon is observed, with quality depending on the amount of time a radar spends sampling
a voxel in space, the degree to which a meteorological phenomena is scanned in its (spatial) entirety, and the number of radars observing a phenomenon; higher quality scans are better. Cost is
a meta-metric that combines inter-scan time and quality, and that additionally considers whether a
phenomenon was never scanned. The radar control problem is that of dynamically choosing the scan
strategy of the radars over time to maximize quality while minimizing inter-scan time.
3 Scan Strategies
We define a radar configuration to be the start and end angles of the sector to be scanned by an
individual radar for a fixed interval of time. We define a scan action to be a set of radar configurations
(one configuration for each radar in the meteorological sensing network). We define a scan strategy
to be an algorithm for choosing scan actions. In Section 3.1 we define the quality function associated
with different radar configurations and in Section 3.2 we define the quality functions associated with
different scan strategies.
3.1 Quality Function
The quality function associated with a given scan action was proposed by radar meteorologists in [5]
and has two components. There is a quality component Up associated with scanning a particular
phenomenon p. There is also a quality component Us associated with scanning a sector, which is
independent of any phenomena in that sector. Let sr be the radar configuration for a single radar r
and let Sr be the scan action under consideration. From [5], we compute the quality Up (p, Sr ) of
Figure 1: Step functions used by the U_p and U_s quality functions, from [9]. (Panels plot F_c vs. c, F_w vs. w/360, and F_d vs. d.)
scanning a phenomenon p using scan action S_r with the following equations:
$$U_p(p, s_r) = F_c(c(p, s_r)) \cdot \big[\beta\, F_d(d(r, p)) + (1 - \beta)\, F_w(w(s_r)/360)\big]$$
$$U_p(p, S_r) = \max_{s_r \in S_r} \left[U_p(p, s_r)\right] \quad (1)$$
where
w(s_r) = size of sector s_r scanned by r
a(r, p) = minimal angle that would allow r to cover p
c(p, s_r) = w(s_r)/a(r, p) = coverage of p by r scanning s_r
h(r, p) = distance from r to geometric center of p
h_max(r) = range of radar r
d(r, p) = h(r, p)/h_max(r) = normalized distance from r to p
β = tunable parameter
Up (p, Sr ) is the maximum quality obtained for scanning phenomenon p over all possible radars and
their associated radar configurations sr . Up (p, sr ) is the quality obtained for scanning phenomenon
p using a specific radar r and radar configuration sr . The functions Fc (?), Fw (?), and Fd (?) from [5]
are plotted in Figure 1. Fc captures the effect on quality due to the percentage of the phenomenon
covered; to usefully scan a phenomenon, at least 95% of the phenomenon must be scanned. Fw
captures the effect of radar rotation speed on quality; as rotation speed is reduced, quality increases.
Fd captures the effects of the distance from the radar to the geometrical center of the phenomenon on
quality; the further away the radar center is from the phenomenon being scanned, the more degraded
will be the scan quality due to attenuation. Due to the Fw function, the quality function Up (p, sr )
outputs the same quality for scan angles of 181? to 360? . The quality Us (ri , sr ) for scanning a
subsector i of radar r scanned using configuration sr is,
w(sr )
Us (ri , sr ) = Fw
(2)
360
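A minimal sketch of the quality computation of Eq. (1) for one scan action (our illustration; the step functions of Figure 1 are supplied as callables, and the bracketing of the weighted sum follows our reconstruction above):

```python
def quality_up(w, a, h, h_max, beta, F_c, F_w, F_d):
    """U_p(p, s_r) of Eq. (1) for one radar configuration.
    w: scanned sector width (degrees); a: minimal angle covering p;
    h: distance from the radar to p's center; h_max: radar range."""
    c = w / a                    # coverage of the phenomenon
    d = h / h_max                # normalized distance
    return F_c(c) * (beta * F_d(d) + (1.0 - beta) * F_w(w / 360.0))

def quality_up_action(configs, beta, F_c, F_w, F_d):
    """U_p(p, S_r): best quality over the configurations in the scan action.
    `configs` is a list of (w, a, h, h_max) tuples, one per radar."""
    return max(quality_up(w, a, h, h_max, beta, F_c, F_w, F_d)
               for (w, a, h, h_max) in configs)
```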
Intuitively, a sector scanning strategy is only preferable when the quality function is such that the
quality gained for scanning a sector is greater than the quality lost for not scanning another sector.
3.2 Scan Strategies
We compare the performance of the following three scan strategies. The strategies differ in whether
they optimize quality over only current or also future expected states. For example, suppose a storm
cell is about to move into a high-quality multi-doppler region (i.e., the area where multiple radar
footprints overlap). By considering future expected states, a lookahead strategy can anticipate this
event and have all radars focused on the storm cell when it enters the multi-doppler region, rather
than expending resources (with little ?reward?) to scan the storm cell just before it enters this region.
(i) Sit-and-spin strategy. All radars always scan 360°.
(ii) Limited "lookahead" strategy. We examine both a 1-step and a 2-step look-ahead scan strategy. Although we do not have an exact model of the dynamics of different phenomena, to perform the look-ahead we estimate the future attributes of each phenomenon using a separate Kalman filter. For each filter, the true state x is a vector comprising the (x, y) location and velocity of the phenomenon, and the measurement y is a vector comprising only the (x, y) location. The Kalman filter assumes that the state at time t is a linear function of the state at time t − 1 plus some Gaussian noise, and that the measurement at time t is a linear function of the state at time t plus some Gaussian noise. In particular, $x_t = A x_{t-1} + N(0, Q)$ and $y_t = B x_t + N(0, R)$.
Following work by [8], we initialize each Kalman filter as follows. The A matrix reflects that storm cells typically move to the north-east. The B matrix, which when multiplied with x_t returns y_t, assumes that the observed state y_t is directly the true state x_t plus some Gaussian noise. The Q matrix assumes that there is little noise in the true-state dynamics. Finally, the measurement error covariance matrix R is a function of the quality U_p with which phenomenon p was scanned at time t. We discuss how to compute the ρ_t's in Section 4. We use the first location measurement of a storm cell, y_0, augmented with the observed velocity, as the initial state x_0. We assume that our estimate of x_0 has little noise and use 0.0001·I for the initial covariance P_0.
$$A = \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}, \quad Q = 0.0001\, I_4, \quad R = \begin{bmatrix} \rho_t & 0 \\ 0 & \rho_t \end{bmatrix}$$
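The limited lookahead strategy only needs the predict step of these filters; a minimal sketch (ours) of the k-step-ahead prediction under the model above:

```python
import numpy as np

def kalman_predict(x, P, A, Q, k=1):
    """k-step-ahead prediction of a storm cell's state under
    x_t = A x_{t-1} + N(0, Q); returns the predicted mean and covariance."""
    for _ in range(k):
        x = A @ x
        P = A @ P @ A.T + Q
    return x, P
```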
We compute the k-step look-ahead quality for different sets of radar configurations S_r with
$$U_K(S_{r,1}|T_r) = \sum_{k=1}^{K} \gamma^{k-1} \sum_{i=1}^{N_p} U_p(p_{i,k}, S_{r,k}|T_r)$$
where N_p is the number of phenomena in the environment in the current decision epoch, p_{i,0} is the current set of observed attributes for phenomenon i, p_{i,k} is the k-step set of predicted attributes for phenomenon i, S_{r,k} is the set of radar configurations for the k-th decision epoch in the future, and γ is a tunable discount factor between 0 and 1. The optimal set of radar configurations is then $S_{r,1}^* = \arg\max_{S_{r,1}} U_K(S_{r,1}|T_r)$. To account for the decay of quality for unscanned sectors and phenomena, and to consider the possibility of new phenomena appearing, we restrict S_r to be those scan actions that ensure that every sector has been scanned at least once in the last T_r decision epochs. T_r is a tunable parameter whose purpose is to satisfy the meteorological dictate found in [5], that all sectors be scanned, for instance by a 360° scan, at most every 5 minutes.
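A simplified sketch of the resulting selection rule (ours; for brevity it holds one joint configuration fixed over the horizon, whereas the text allows S_{r,k} to vary with k):

```python
from itertools import product

def best_scan_action(configs_per_radar, predicted_storms, U_p, gamma, K):
    """Enumerate joint radar configurations and keep the one maximizing the
    discounted predicted quality. predicted_storms[k] holds the
    Kalman-predicted storm attributes for decision epoch k = 1..K."""
    best, best_val = None, float("-inf")
    for S in product(*configs_per_radar):        # one configuration per radar
        val = sum(gamma ** (k - 1) *
                  sum(U_p(p, S) for p in predicted_storms[k])
                  for k in range(1, K + 1))
        if val > best_val:
            best, best_val = S, val
    return best
```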
(iii) Full "lookahead" strategy. We formulate the radar control problem as a Markov decision
process (MDP) and use reinforcement learning to obtain a lookahead scan strategy as follows. While
a POMDP (partially observable MDP) could be used to model the environmental uncertainty, due to
the cost of solving a POMDP with a large state space [9], we choose to formulate the radar control
problem as an MDP with quality (or uncertainty) variables as in an augmented MDP [6].
S is the observed state of the environment. The state is a function of the observed number of storms,
the observed x, y velocity of each storm, and the observed dimensions of each storm cell given by
x, y center of mass and radius. To model the uncertainty in the environment, we additionally define
as part of the state quality variables u_p and u_s based on the U_p and U_s quality functions defined in Equations (1) and (2) in Section 3.1. u_p is the quality U_p(·) with which each storm cell was observed, and u_s is the current quality U_s(·) of each 90° subsector, starting at 0, 90, 180, or 270°.
A is the set of actions available to the radars. This is the set of radar configurations for a given decision epoch. We restrict each radar to scanning subsectors that are a multiple of 90°, starting at 0, 90, 180, or 270°. Thus, with N radars there are 13^N possible actions at each decision epoch.
The transition function $T : S \times A \times S \to [0, 1]$ encodes the observed environment dynamics: specifically the appearance, disappearance, and movement of storm cells and their associated attributes.
For meteorological radar control, the next state really is a function of not just the current state but
also the action executed in the current state. For instance, if a radar scans 180 degrees rather than
360 degrees, then any new storm cells that appear in the unscanned areas will not be observed. Thus,
the new storm cells that will be observed will depend on the scanning action of the radar.
The cost function $C : S \times A \times S \to \mathbb{R}$ encodes the goals of the radar sensing network. C is a function of the error between the true state and the observed state, whether all storms have been observed,
and a penalty term for not rescanning a storm within Tr decision epochs. More precisely,
$$C = \sum_{i=1}^{N_p^o} \sum_{j=1}^{N_d} |d_{ij}^o - d_{ij}| + (N_p - N_p^o)\, P_m + \sum_{i=1}^{N_p} I(t_i)\, P_r \quad (3)$$
where N_p^o is the observed number of storms, N_d is the number of attributes per storm, d_ij^o is the observed value of attribute j of storm i, d_ij is the true value of attribute j of storm i, N_p is the true number of storms, P_m is the penalty for missing a storm, t_i is the number of decision epochs since storm i was last scanned, P_r is the penalty for not scanning a storm at least once within T_r decision epochs, and I(t_i) is an indicator function that equals 1 when t_i ≥ T_r.
storm is observed determines the difference between the observed and true values of its attributes.
We use linear Sarsa(λ) [15] as the reinforcement learning algorithm to solve the MDP for the radar
control problem. To obtain the basis functions, we use tile coding [13, 14]. Rather than defining
tilings over the entire state space, we define a separate set of tilings for each of the state variables.
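A minimal sketch of one linear Sarsa(λ) update with tile-coded binary features (ours; `active` holds the indices of the tiles active for the current state-action pair):

```python
import numpy as np

def sarsa_lambda_update(w, z, active, active_next, r, alpha, gamma, lam):
    """One linear Sarsa(lambda) step. w: weight vector; z: eligibility
    traces; active/active_next: active-tile indices for (s, a) and
    (s', a'); r: reward (here, the negated cost of Eq. (3))."""
    delta = r + gamma * w[active_next].sum() - w[active].sum()  # TD error
    z *= gamma * lam                                            # decay traces
    z[active] += 1.0                                            # accumulate
    w += alpha * delta * z
    return w, z
```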
4 Evaluation
4.1 Simulation Environment
We consider radars with both 10 and 30 km radii as in [5, 17]. Two overlapping radars are placed in a 90 km × 60 km rectangle, one at (30 km, 30 km) and one at (60 km, 30 km). A new storm cell
can appear anywhere within the rectangle and a maximum number of cells can be present on any
decision epoch. When the (x, y) center of a storm cell is no longer within range of any radar, the
cell is removed from the environment. Following [5], we use a 30-second decision epoch.
We derive the maximum storm cell radius from [11], which uses 2.83 km as "the radius from the cell center within which the intensity is greater than $e^{-1}$ of the cell center intensity." We then permit a storm cell's radius to range from 1 to 4 km. To determine the range of storm cell velocities, we use 39 real storm cell tracks obtained from meteorologists. Each track is a series of (latitude, longitude)
coordinates. We first compute the differences in latitude and longitude, and in time, between successive pairs of points. We then fit the differences using Gaussian distributions. We obtain, in units
of km/hour, that the latitude (or x) velocity has mean 9.1 km/hr and std. dev. of 35.6 km/hr and that
the longitude (or y) velocity has mean 16.7 km/hr and std. dev. of 28.8 km/hr. To obtain a storm
cell?s (x, y) velocity, we then sample the appropriate Gaussian distribution.
To simulate the environment transitions we use a stochastic model of rainfall in which storm cell
arrivals are modeled using a spatio-temporal Poisson process, see [11, 1]. To determine the number
of new storm cells to add during a decision epoch, we sample a Poisson random variable with rate
λβΔaΔt with λ = 0.075 storm cells/km² and β = 0.006 storm cells/minute from [11]. From the radar setup we have Δa = 90 × 60 km², and from the 30-second decision epoch we have Δt = 0.5 minutes. New storm cells are uniformly randomly distributed in the 90 km × 60 km region and we uniformly randomly choose new storm cell attributes from their ranges of values. This simulates the
true state of the environment over time. The following simplified radar model determines how well
the radars observe the true environmental state under a given set of radar configurations. If a storm
cell p is scanned using a set of radar configurations Sr , the location, velocity, and radius attributes
are observed as a function of the Up (p, Sr ) quality defined in Section 3.1. Up (p, Sr ) returns a value
u between zero and one. Then the observed value of the attribute is the true value of the attribute
plus some Gaussian noise distributed with mean zero and standard deviation $(1 - u)V^{max}/\eta$, where $V^{max}$ is the largest positive value the attribute can take and η is a scaling term that allows us to adjust the noise variability. Since u depends on the decision epoch t, for the k-step look-ahead scan strategy we also use $\rho_t = (1 - u_t)V^{max}/\eta$ to compute the measurement error covariance matrix, R, in our Kalman filter.
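A minimal sketch of one simulator step under these assumptions (ours; the symbols λ, β, and η follow the reconstruction used in this section):

```python
import numpy as np

rng = np.random.default_rng(0)

def new_storm_cells(lam=0.075, beta=0.006, area=90 * 60, dt=0.5):
    """Sample new storm cells for one 30-second decision epoch, following
    the spatio-temporal Poisson arrival model of [11]."""
    n = rng.poisson(lam * beta * area * dt)
    xy = rng.uniform([0.0, 0.0], [90.0, 60.0], size=(n, 2))  # km
    radius = rng.uniform(1.0, 4.0, size=n)                   # km
    vel = np.column_stack([rng.normal(9.1, 35.6, n),         # x velocity, km/hr
                           rng.normal(16.7, 28.8, n)])       # y velocity, km/hr
    return xy, radius, vel

def observe_attribute(true_value, u, v_max, eta):
    """Noisy observation of one storm attribute: the true value plus Gaussian
    noise with std. dev. (1 - u) * v_max / eta, where u is the scan quality."""
    return true_value + rng.normal(0.0, (1.0 - u) * v_max / eta)
```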
We parameterize the MDP cost function as follows. We assume that any unobserved storm cell has
been observed with quality 0, hence u = 0. Summing $(1 - u)V^{max}/\eta$ over all attributes with u = 0 gives the value P_m = 15.5667, and thus a penalty of 15.5667 is received for each unobserved storm cell. If a storm cell is not seen within T_r = 4 decision epochs, a penalty of P_r = 200 is given. Using the value 200 ensures that if a storm cell has not been rescanned within the appropriate amount of time, this part of the cost function will dominate.
We distinguish the true environmental state known only to the simulator from the observed environmental state used by the scan strategies for several reasons. Although radars provide measurements
about meteorological phenomena, the true attributes of the phenomena are unknown. Poor overlap in a dual-Doppler area, scanning a subsector too quickly or slowly, or being unable to obtain a
sufficient number of elevation scans will degrade the quality of the measurements. Consequently,
models of previously existing phenomena may contain estimation errors such as incorrect velocity,
propagating error into the future predicted locations of the phenomena. Additionally, when a radar
scans a subsector, it obtains more accurate estimates of the phenomena in that subsector than if it
had scanned a full 360? , but less accurate estimates of the phenomena outside the subsector.
4.2 Results
In this section we present experimental results obtained using the simulation model of the previous
section and the scan strategies described in Section 3. For the limited lookahead strategy we use
β = 0.5 in the quality function, weights of 0.25 on the phenomenon and sector quality components, and a discount factor γ = 0.75. For Sarsa(λ), we use a learning rate α = 0.0005, an exploration rate ε = 0.01, a discount factor γ = 0.9, and an eligibility decay λ = 0.3. Additionally, we use a single tiling for each state variable. For the (x, y) location and radius tilings, we use a granularity of 1.0; for the (x, y) velocity, phenomenon confidence, and radar sector confidence tilings, we use a granularity of 0.1. When there are a maximum of four storms, we restrict Sarsa(λ) to scanning only 180 or 360 degree sectors to reduce the time needed for convergence. Finally, all
strategies are always compared over the same true environmental state.
Figure 2(a) shows an example convergence profile of Sarsa(λ) when there are at most four storms in the environment. Figure 2(b) shows the average difference in scan quality between the learned Sarsa(λ) strategy and the sit-and-spin and 2-step strategies. When 1/η = 0.001 (i.e., little measurement noise) Sarsa(λ) has the same or higher relative quality than does sit-and-spin, but significantly lower relative quality (0.05 to 0.15) than does the 2-step. This in part reflects the difficulty of learning to perform as well as or better than Kalman filtering. Examining the learned strategy showed that when there was at most one storm with observation noise 1/η = 0.001, Sarsa(λ) learned to simply sit-and-spin, since sector scanning conferred little benefit. As the observation noise increases, the relative difference increases for sit-and-spin and decreases for the 2-step. Figure 2(c) shows the average difference in cost between the learned Sarsa(λ) scan strategy and the sit-and-spin and 2-step strategies for a 30 km radar radius. Sarsa(λ) has the lowest average cost.
Looking at the Sarsa(λ) inter-scan times, Figure 2(d) shows that, as a consequence of the penalty for not scanning a storm within T_r = 4 time-steps, while Sarsa(λ) may rescan fewer storm cells within 1, 2, or 3 decision epochs than do the other scan strategies, it scans almost all storm cells within 4 epochs. Note that for the sit-and-spin CDF, P[X ≤ 1] is not 1; due to noise, for example, the measured location of a storm cell may lie outside any radar footprint, and consequently the storm cell will not be observed. Thus the 2-step has more inter-scan times greater than T_r = 4 than does Sarsa(λ). Together with Figures 2(b) and (c), this implies that there is a trade-off between inter-scan time and scan quality. We hypothesize that this trade-off occurs because increasing the size of the scan sectors ensures that inter-scan time is minimized, but decreases the scan quality.
Other results (not shown, see [7]) examine the average difference in quality between the 1-step and 2-step strategies for 10 km and 30 km radar radii. With a 10 km radius, the 1-step quality is essentially the same as the 2-step quality. We hypothesize that this is a consequence of the maximum storm cell radius, 4 km, relative to the 10 km radar radius. With a 30 km radius and at most eight storm cells, the 2-step quality is about 0.005 better than the 1-step and about 0.07 better than sit-and-spin (recall that quality is a value between 0 and 1). Now recall that Figure 2(b) shows that with a 30 km radius and at most four storm cells, the 2-step quality is as much as 0.12 better than sit-and-spin. This indicates
that there may be some maximum number of storms above which it is best to sit-and-spin.
Overall, depending on the environment in which the radars are deployed, there are decreasing
marginal returns for considering more than 1 or 2 future expected states. Instead, the primary value
of reinforcement learning for the radar control problem is balancing multiple conflicting goals, i.e.,
maximizing scan quality while minimizing inter-scan time. Implementing the learned reinforcement
learning scan strategy in a real meteorological radar network requires addressing the differences between
the offline environment in which the learned strategy is trained and the online environment
in which the strategy is deployed. Given the slow convergence time for Sarsa(λ) (on the order of
days), training solely online is likely infeasible, although the time complexity could be mitigated
by using hierarchical reinforcement learning methods and semi-Markov decision processes. Some
online training could be achieved by treating 360° scans as the true environment state. Then when
unknown states are entered, learning could be performed, alternating between 360° scans to gauge
the true state of the environment and exploratory scans by the reinforcement learning algorithm.

Figure 2: Comparing the scan strategies based on quality, cost, and inter-scan time. (a) Convergence
profile of Sarsa(λ): average cost per episode of 1000 steps vs. training episode (radar radius = 30 km,
max 4 storms). (b) Average difference in scan quality over 250,000 steps vs. 1/β, for 2-step − Sarsa
and sit-and-spin − Sarsa, with at most 1 and at most 4 storms (radar radius = 30 km). (c) Average
difference in cost over 250,000 steps vs. 1/β for the same strategy pairs. (d) CDF P[X ≤ x] of the
number of decision epochs x between storm scans, for sit-and-spin, 1-step, 2-step, and Sarsa at
1/β = 0.1 (max # of storms = 4, radar radius = 30 km). Recall that β is a scaling term used to
determine measurement noise; see Section 4.1.
5 Related Work
Other reinforcement learning applications in large state spaces include robot soccer [12] and helicopter control [10]. With respect to radar control, [4] examines the problem of using agile radars
on airplanes to detect and track ground targets. They show that lookahead scan strategies for radar
tracking of a ground target outperform myopic strategies. In comparison, we consider the problem of
tracking meteorological phenomena using ground radars. [4] uses an information theoretic measure
to define the reward metric and proposes both an approximate solution to solving the MDP Bellman
equations as well as a Q-learning reinforcement learning-based solution. [16] examines where to
target radar beams and which waveform to use for electronically steered phased array radars. They
maintain a set of error covariance matrices and dynamical models for existing targets, as well as
track existence probability density functions to model the probability that targets appear. They then
choose the scan mode for each target that has both the longest revisit time for scanning a target and
error covariance below a threshold. They do this for control 1 step and 2 steps ahead and show
that considering the environment two decision epochs ahead outperforms a 1-step look-ahead for
tracking of multiple targets.
6 Conclusions and Future Work
In this work we compared the performance of myopic and lookahead scan strategies in the context
of the meteorological radar control problem. We showed that the main benefits of using a lookahead
strategy are when there are multiple meteorological phenomena in the environment, and when the
maximum radius of any phenomenon is sufficiently smaller than the radius of the radars. We also
showed that there is a trade-off between the average quality with which a phenomenon is scanned
and the number of decision epochs before which a phenomenon is rescanned. Overall, considering
only scan quality, a simple lookahead strategy is sufficient. To additionally consider inter-scan time
(or optimize over multiple metrics of interest), a reinforcement learning strategy is useful. For future
work, rather than identifying a policy that chooses the best action to execute in a state for a single
decision epoch, it may be useful to consider actions that cover multiple epochs, as in semi-Markov
decision processes or to use controllers from robotics [3]. We would also like to incorporate more
radar and meteorological information into the transition, measurement, and cost functions.
Acknowledgments
The authors thank Don Towsley for his input. This work was supported in part by the National Science Foundation under the Engineering Research Centers Program, award number EEC-0313747.
Any opinions, findings and conclusions or recommendations expressed in this material are those of
the author(s) and do not necessarily reflect those of the National Science Foundation.
References
[1] D. Cox and V. Isham. A simple spatial-temporal model of rainfall. Proceedings of the Royal Society of London. Series A, Mathematical
and Physical Sciences, 415(1849):317–328, 1988.
[2] B. Donovan and D. J. McLaughlin. Improved radar sensitivity through limited sector scanning: The DCAS approach. In Proceedings of
AMS Radar Meteorology, 2005.
[3] M. Huber and R. Grupen. A feedback control structure for on-line learning tasks. Robotics and Autonomous Systems, 22(3–4):303–315,
1997.
[4] C. Kreucher and A. O. Hero III. Non-myopic approaches to scheduling agile sensors for multistage detection, tracking and identification.
In Proceedings of ICASSP, pages 885–888, 2005.
[5] J. Kurose, E. Lyons, D. McLaughlin, D. Pepyne, B. Phillips, D. Westbrook, and M. Zink. An end-user-responsive sensor network
architecture for hazardous weather detection, prediction and response. AINTEC, 2006.
[6] C. Kwok and D. Fox. Reinforcement learning for sensing strategies. In IROS, 2004.
[7] V. Manfredi and J. Kurose. Comparison of myopic and lookahead scan strategies for meteorological radars. Technical Report 2006-62,
University of Massachusetts Amherst, 2006.
[8] V. Manfredi, S. Mahadevan, and J. Kurose. Switching Kalman filters for prediction and tracking in an adaptive meteorological sensing
network. In IEEE SECON, 2005.
[9] K. Murphy. A survey of POMDP solution techniques. Technical Report U.C. Berkeley, 2000.
[10] A. Ng, A. Coates, M. Diel, V. Ganapathi, J. Schulte, B. Tse, E. Berger, and E. Liang. Inverted autonomous helicopter flight via
reinforcement learning. In International Symposium on Experimental Robotics, 2004.
[11] I. Rodriguez-Iturbe and P. Eagleson. Mathematical models of rainstorm events in space and time. Water Resources Research,
23(1):181–190, 1987.
[12] P. Stone, R. Sutton, and G. Kuhlmann. Reinforcement learning for robocup-soccer keepaway. Adaptive Behavior, 3, 2005.
[13] R. Sutton. Tile coding software. http://rlai.cs.ualberta.ca/RLAI/RLtoolkit/tiles.html.
[14] R. Sutton. Generalization in reinforcement learning: Successful examples using sparse coarse coding. In NIPS, 1996.
[15] R. Sutton and A. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, Massachusetts, 1998.
[16] S. Suvorova, D. Musicki, B. Moran, S. Howard, and B. La Scala. Multi step ahead beam and waveform scheduling for tracking of
manoeuvring targets in clutter. In Proceedings of ICASSP, 2005.
[17] J. M. Trabal, B. C. Donovan, M. Vega, V. Marrero, D. J. McLaughlin, and J. G. Colom. Puerto Rico student test bed applications and
system requirements document development. In Proceedings of the 9th International Conference on Engineering Education, 2006.
SYNCHRONIZATION IN NEURAL NETS
Jacques J. Vidal
University of California Los Angeles, Los Angeles, Ca. 90024
John Haggerty*
ABSTRACT
The paper presents an artificial neural network concept (the
Synchronizable Oscillator Networks) where the instants of individual
firings in the form of point processes constitute the only form of
information transmitted between joining neurons. This type of
communication contrasts with that which is assumed in most other
models which typically are continuous or discrete value-passing
networks. Limiting the messages received by each processing unit to
time markers that signal the firing of other units presents significant
implementation advantages.
In our model, neurons fire spontaneously and regularly in the
absence of perturbation. When interaction is present, the scheduled
firings are advanced or delayed by the firing of neighboring neurons.
Networks of such neurons become global oscillators which exhibit
multiple synchronizing attractors. From arbitrary initial states,
energy minimization learning procedures can make the network
converge to oscillatory modes that satisfy multi-dimensional
constraints. Such networks can directly represent routing and
scheduling problems that consist of ordering sequences of events.
INTRODUCTION
Most neural network models derive from variants of Rosenblatt's
original perceptron and as such are value-passing networks. This is
the case in particular with the networks proposed by Fukushima [1],
Hopfield [2], Rumelhart [3], and many others. In every case, the inputs to
the processing elements are either binary or continuous amplitude
signals which are weighted by synaptic gains and subsequently
summed (integrated). The resulting activation is then passed
through a sigmoid or threshold filter and again produce a continuous
or quantized output which may become the input to other neurons.
The behavior of these models can be related to that of living neurons
even if they fall considerably short of accounting for their complexity.
Indeed, it can be observed with many real neurons that action
potentials (spikes) are fired and propagate down the axonal branches
when the internal activation reaches some threshold and that higher
* John Haggerty is with Interactive Systems, Los Angeles,
3030 W. 6th St., LA, Ca. 90020
© American Institute of Physics 1988
input rate levels result in more rapid firing.
Behind these
traditional models, there is the assumption that the average
frequency of action potentials is the carrier of information between
neurons. Because of integration, the firings of individual neurons are
considered effective only to the extent to which they contribute to
the average intensities. It is therefore assumed that the activity is
simply "frequency coded". The exact timing of individual firing is
ignored.
This view however does not cover some other well known
aspects of neural communication. Indeed, the precise timing of
spike arrivals can make a crucial difference to the outcome of some
neural interactions. One classic example is that of pre-synaptic
inhibition, a widespread mechanism in the brain machinery. Several
studies have also demonstrated the occurrence and functional
importance of precise timing or phase relationship between
cooperating neurons in local networks [4, 5].
The model presented in this paper contrasts with the ones just
mentioned in that in the networks each firing is considered as an
individual output event. On the input side of each node, the firing of
other nodes (the presynaptic neurons) either delay (inhibit) or
advance (excite) the node firing. As seen earlier, this type of
neuronal interaction which would be called phase-modulation in
engineering systems, can also find its rationale in experimental
neurophysiology. Neurophysiological plausibility however is not the
major concern here. Rather, we propose to explore a potentially
useful mechanism for parallel distributed computing. The merit of
this approach for artificial neural networks is that digital pulses are
used for internode communication instead of analog voltages. The
model is particularly well suited to the time-ordering and
sequencing found in a large class of routing and trajectory control
problems.
NEURONS AS SYNCHRONIZABLE OSCILLATORS:
In our model, the processing elements (the "neurons") are
relaxation oscillators with built-in self-inhibition. A relaxation
oscillator is a dynamic system that is capable of accumulating
potential energy until some threshold or breakdown point is
reached. At that point the energy is abruptly released, and a new
cycle begins.
The description above fits the dynamic behavior of neuronal
membranes. A richly structured empirical model of this behavior is
found in the well-established differential formulation of Hodgkin and
Huxley [6] and in a simplified version given by Fitzhugh [7]. These
differential equations account for the foundations of neuronal activity
and are also capable of representing subthreshold behavior and the
refractoriness that follows each firing.
When the membrane
potential enters the critical region, an abrupt depolarization, i.e., a
collapse of the potential difference across the membrane occurs
followed by a somewhat slower recovery. This brief electrical
shorting of the membrane is called the action potential or "spike"
and constitutes the output event for the neuron. If the causes for the
initial depolarization are maintained,
oscillation ( "limit-cycles")
develops, generating multiple firings. Depending on input level and
membrane parameters, the oscillation can be limited to a single
spike, or may produce an oscillatory burst, or even continually
sustained activity.
The present model shares the same general properties but uses
the much simpler description of relaxation oscillator illustrated on
Figure 1.
Figure 1: Relaxation oscillator with perturbation input. (Schematic: activation energy E(t),
excitation threshold, inhibitory and excitatory inputs, and output impulses u(t − t_j).)
Firing occurs when the energy level E(t) reaches some critical
level Ec. Assuming a constant rate of energy influx a, firing will
occur with the natural period T = Ec/a.

When pre-synaptic pulses impinge on the course of energy
accumulation, the firing schedule is disturbed. Letting t0 represent
the instant of the last firing of the cell and tj (j = 1, 2, ..., J) the
instants of impinging arrivals from other cells:

E(t − t0) = a(t − t0) + Σj wj · u0(t − tj),   E ≤ Ec,

where u0(t) represents the unit impulse at t = 0.
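A minimal discrete-time rendering of this accumulate-and-fire rule is sketched below; the step size and parameter values are arbitrary illustrative choices. An inhibitory arrival carries a negative weight and delays the next firing, while an excitatory one advances it.

```python
def simulate_oscillator(a, Ec, weights, arrivals, T, dt=0.01):
    """Integrate E(t) = a*(t - t0) + sum_j w_j * u0(t - t_j); fire when E >= Ec.

    arrivals: list of (t_j, j) presynaptic spike times and source indices.
    Returns the firing times of this unit.
    """
    E, t, firings = 0.0, 0.0, []
    pending = sorted(arrivals)
    while t < T:
        E += a * dt                              # constant energy influx
        while pending and pending[0][0] <= t:    # impulses add w_j instantly
            _, j = pending.pop(0)
            E += weights[j]
        if E >= Ec:                              # threshold crossed: fire
            firings.append(t)
            E = 0.0                              # reset after the spike
        t += dt
    return firings

# With no perturbation the unit fires with natural period T = Ec / a:
print(simulate_oscillator(a=1.0, Ec=2.0, weights={}, arrivals=[], T=10.0)[:3])
```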
The dramatic complexity of synchronization dynamics can be
appreciated by considering the simplest possible case, that of a
master–slave interaction between two regularly firing oscillator units
A and B, with natural periods TA and TB. At the instants of firing,
unit A unidirectionally sends a spike signal to unit B, which is
received at some interval φ measured from the last time B fired.
Upon reception the spike is transformed into a quantum of energy
ΔE which depends upon the post-firing arrival time φ. The
relationship ΔE(φ) can be shaped to represent refractoriness and
other post-spike properties. Here it is assumed to be a simple ramp
function. If the interaction is inhibitory, the consequence of this
arrival is that the next firing of unit B is delayed (with respect to
what its schedule would have been in absence of perturbation) by
some positive interval δ (Figure 2). Because of the shape of ΔE(φ),
the delaying action, nil immediately after firing, becomes longer for
impinging pre-synaptic spikes that arrive later in the interval. If the
interaction is excitatory, the delay is negative, i.e. a shortening of the
natural firing interval. Under very general assumptions regarding the
function ΔE(φ), B will tend to synchronize to A. Within a given
range of coupling gains, the phase φ will self-adjust until
equilibrium is achieved. With a given ΔE(φ), this equilibrium
corresponds to a distribution of maximum entropy, i.e., to the point
where both cells receive the same amount of activation during their
common cycle.
Figure 2: Relationship between phase and delay when input efficiency
increases linearly in the after-spike interval. (a) Inhibition; (b) excitation.
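The master–slave interaction can be rendered directly at the level of firing times: each arrival from A at post-firing phase φ stretches (inhibition) or shrinks (excitation) B's current cycle by a ramp δ(φ) = g·φ. The periods and gain below are arbitrary; this is a sketch of the mechanism, not the authors' simulation.

```python
def phase_lock(TA=1.0, TB=0.9, gain=0.3, n=200, inhibitory=True):
    """Iterate arrival phases of A's spikes within B's cycle.

    Each arrival at phase phi (time since B last fired) delays (inhibition)
    or advances (excitation) B's next firing by gain * phi, a simple ramp.
    Under unilateral coupling the phase converges to a fixed point
    (1:1 locking) for a range of gains.
    """
    tA, tB_last, periodB = 0.0, 0.0, TB
    phases = []
    for _ in range(n):
        while tA >= tB_last + periodB:     # B fires until it catches up to tA
            tB_last += periodB
            periodB = TB                   # the shift applies to one cycle only
        phi = tA - tB_last                 # arrival phase within B's cycle
        phases.append(phi)
        shift = gain * phi
        periodB = TB + (shift if inhibitory else -shift)
        tA += TA                           # A fires strictly periodically
    return phases

print(phase_lock()[-3:])   # phases settle near the fixed point (TA - TB) / gain
```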
The synchronization dynamics presents an attractor for each
rational frequency pair. To each ratio is associated a range of stability,
but only the ratios of lowest cardinality have wide zones of phase-locking (Figure 3). The wider stability zones correspond to a one-to-one
ratio between fA and fB (or between their inverses TA and TB).
Kohn and Segundo have demonstrated that such phase locking
occurs in living invertebrate neurons and pointed out the paradoxical
nature of phase-locked inhibition which, within each stability region,
takes the appearance of excitation since small increases in input
firing rate will locally result in increased output rates [8, 5].
The areas between these ranges of stability have the appearance
of unstable transitions but in fact, as recently pointed out by Bak [9],
form an infinity of locking steps known as the Devil's Staircase,
corresponding to the infinity of intermediate rational pairs (Figure 3).
Bak showed that the staircase is self-similar under scaling and that
the transitions form a fractal Cantor set with a fractal dimension
which is a universal constant of dynamic systems.
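The staircase can be reproduced numerically with the standard sine circle map, a conventional stand-in for a periodically forced oscillator (an illustrative analogue, not the model above): the rotation number develops plateaus at simple rationals, and at critical coupling the locking steps fill the frequency axis.

```python
import math

def rotation_number(omega, K=1.0, n_transient=500, n=2000):
    """Rotation number of the sine circle map
    theta_{n+1} = theta_n + omega - (K / (2*pi)) * sin(2*pi*theta_n).
    Plateaus of the rotation number as omega varies trace the Devil's Staircase."""
    theta = 0.0
    for _ in range(n_transient):
        theta += omega - (K / (2 * math.pi)) * math.sin(2 * math.pi * theta)
    start = theta
    for _ in range(n):
        theta += omega - (K / (2 * math.pi)) * math.sin(2 * math.pi * theta)
    return (theta - start) / n

# Sweep the bare frequency ratio; wide plateaus appear at simple rationals.
for w in (0.30, 0.32, 0.34, 0.48, 0.50, 0.52, 0.66):
    print(f"omega={w:.2f}  rotation number={rotation_number(w):.3f}")
```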
Figure 3: Unilateral synchronization — stability zones of phase-locking around
simple frequency ratios (e.g., 1/2), under excitation and inhibition.
CONSTRAINT SATISFACTION IN OSCILLATOR NETWORKS
The global synchronization of an interconnected network of
mutually phase-locking oscillators is a constraint satisfaction
problem. For each synchronization equilibrium, the nodes fire in
interlocked patterns that organize inter-spike intervals into integer
ratios.
The often cited "Traveling Salesman Problem". the archetype
for a class of important "hard" problems. is a special case when the
ratio must be 1 / 1: all nodes must fire at the same frequency. Here
the equilibrium condition is that every node will accumulate the the
same amount of energy during the global cycle. Furthermore. the
firings must be ordered along a minimal path.
Using stochastic energy minimization and simulated annealing, the
first simulations have demonstrated the feasibility of the approach
with a limited number of nodes. The TSP is isomorphic to many
other sequencing problems which involve distributed constraints, and
these fall into the oscillator array neural net paradigm in a particularly
natural way. Work is being pursued to more rigorously establish the
limits of applicability of the model.
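Read as an optimization problem, the 1/1-locking mode amounts to minimizing total path length over cyclic firing orders. The sketch below is a generic simulated-annealing search over such orders, not the oscillator-network implementation itself.

```python
import math, random

def anneal_firing_order(points, steps=20000, T0=1.0, cooling=0.9995):
    """Minimize cyclic tour length over firing orders by simulated annealing."""
    n = len(points)
    order = list(range(n))
    def length(o):
        return sum(math.dist(points[o[i]], points[o[(i + 1) % n]]) for i in range(n))
    E, T = length(order), T0
    for _ in range(steps):
        i, j = sorted(random.sample(range(n), 2))
        cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]   # 2-opt move
        dE = length(cand) - E
        if dE < 0 or random.random() < math.exp(-dE / T):
            order, E = cand, E + dE                               # accept move
        T *= cooling                                              # cool down
    return order, E

pts = [(random.random(), random.random()) for _ in range(12)]
print(anneal_firing_order(pts))
```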
Figure 4: The Traveling Salesman Problem. In the global
oscillation of minimal energy each node is constrained to fire at
the same rate, in the order corresponding to the minimal path.
ACKNOWLEDGEMENT
Research supported in part by Aerojet Electro-Systems under the Aerojet-UCLA Cooperative
Research Master Agreement No. D8412I1, and by NASA NAG 2-302.
REFERENCES
1. K. Fukushima. Biol. Cybern. 20, 121 (1975).
2. J.J. Hopfield. Proc. Nat. Acad. Sci. 79, 2556 (1982).
3. D.E. Rumelhart, G.E. Hinton, and R.J. Williams. Parallel Distributed Processing: Explorations in the Microstructure of Cognition (MIT Press, Cambridge, MA, 1986) p. 318.
4. J.P. Segundo, G.P. Moore, N.J. Stensaas, and T.H. Bullock. J. Exp. Biol. 40, 643 (1963).
5. J.P. Segundo and A.F. Kohn. Biol. Cybern. 40, 113 (1981).
6. A.L. Hodgkin and A.F. Huxley. J. Physiol. 117, 500 (1952).
7. R. Fitzhugh. Biophysics J. 1, 445 (1961).
8. A.F. Kohn, A. Freitas da Rocha, and J.P. Segundo. Biol. Cybern. 41, 5 (1981).
9. P. Bak. Phys. Today (Dec 1986) p. 38.
10. J. Haggerty and J.J. Vidal. UCLA BCI Report, 1975.
Exploratory Feature Extraction in Speech Signals
Nathan Intrator
Center for Neural Science
Brown University
Providence, RI 02912
Abstract
A novel unsupervised neural network for dimensionality reduction which
seeks directions emphasizing multimodality is presented, and its connection to exploratory projection pursuit methods is discussed. This leads to
a new statistical insight to the synaptic modification equations governing
learning in Bienenstock, Cooper, and Munro (BCM) neurons (1982).
The importance of a dimensionality reduction principle based solely on
distinguishing features, is demonstrated using a linguistically motivated
phoneme recognition experiment, and compared with feature extraction
using back-propagation network.
1 Introduction
Due to the curse of dimensionality (Bellman, 1961) it is desirable to extract features from a high dimensional data space before attempting a classification. How to
perform this feature extraction/dimensionality reduction is not that clear. A first
simplification is to consider only features defined by linear (or semi-linear) projections of high dimensional data. This class of features is used in projection pursuit
methods (see review in Huber, 1985).
Even after this simplification, it is still difficult to characterize what interesting
projections are, although it is easy to point at projections that are uninteresting.
A statement that has recently been made precise by Diaconis and Freedman (1984)
says that for most high-dimensional clouds, most low-dimensional projections are
approximately normal. This finding suggests that the important information in the
data is conveyed in those directions whose single dimensional projected distribution
is far from Gaussian, especially at the center of the distribution. Friedman (1987)
argues that the most computationally attractive measures for deviation from normality (projection indices) are based on polynomial moments. However they very
heavily emphasize departure from normality in the tails of the distribution (Huber,
1985). Second order polynomials (measuring the variance - principal components)
are not sufficient in characterizing the important features of a distribution (see
example in Duda & Hart (1973) p. 212), therefore higher order polynomials are
needed. We shall be using the observation that high dimensional clusters translate to multimodallow dimensional projections, and if we are after such structures
measuring multimodality defines an interesting projection. In some special cases,
where the data is known in advance to be bi-modal, it is relatively straightforward
to define a good projection index (Hinton & Nowlan, 1990). When the structure
is not known in advance, defining a general multimodal measure of the projected
data is not straightforward, and will be discussed in this paper.
There are cases in which it is desirable to make the projection index invariant
under certain transformations, and maybe even remove second order structure (see
Huber, 1985, for desirable invariant properties of projection indices). In such cases
it is possible to make such transformations before hand (Friedman, 1987), and then
assume that the data possesses these invariant properties already.
2 Feature Extraction using ANN
In this section, the intuitive idea presented above is used to form a statistically
plausible objective function whose minimizers will be those projections having a
single dimensional projected distribution that is far from Gaussian. This is done
using a loss function whose expected value leads to the desired projection index.
Mathematical details are given in Intrator (1990).
Before presenting this loss function, let us review some necessary notation and assumptions. Consider a neuron with input vector x = (x1, ..., xN), synaptic weight
vector m = (m1, ..., mN), both in R^N, and activity (in the linear region) c = x · m.
Define the threshold Θm = E[(x · m)²], and the functions φ(c, Θm) = c² − (4/3)cΘm,
φ̂(c, Θm) = c² − (2/3)cΘm. The φ function has been suggested as a biologically plausible
synaptic modification function that explains visual cortical plasticity (Bienenstock,
Cooper and Munro, 1982). Note that at this point c represents the linear projection
of x onto m, and we seek an optimal projection in some sense.
We want to base our projection index on polynomial moments of low order, and
to use the fact that a bimodal distribution is already interesting, and any additional
mode should make the distribution even more interesting. With this in mind, consider the following family of loss functions, which depend on the synaptic weight
vector and on the input x:

Lm(x) = −(μ/3)(x · m)²((x · m) − Θm).

The motivation for this loss function can be seen in the following graph, which
represents the φ function and the associated loss function Lm(x). For simplicity,
the loss for a fixed threshold Θm and synaptic vector m can be written as Lm(c) =
−(μ/3)c²(c − Θm), where c = (x · m).
Exploratory Feature Extraction in Speech Signals
Figure 1: The function φ and the loss function Lm(c) for a fixed m and Θm.
The graph of the loss function shows that for any fixed m and Θm, the loss is
small for a given input x, when either (x · m) is close to zero, or when (x · m) is
larger than Θm. Moreover, the loss function remains negative for (x · m) > Θm;
therefore, any kind of distribution at the right hand side of Θm is possible, and
the preferred ones are those which are concentrated further away from Θm.

We must still show why it is not possible that a minimizer of the average loss will be
such that all the mass of the distribution will be concentrated in one of the regions.
Roughly speaking, this can not happen because the threshold Θm is dynamic and
depends on the projections in a nonlinear way, namely, Θm = E[(x · m)²]. This
implies that Θm will always move itself to a stable point such that the distribution
will not be concentrated at only one of its sides. This yields that the part of the
distribution for c < Θm has a high loss, making those distributions in which the
distribution for c < Θm has its mode at zero more plausible.
The risk (expected value of the loss) is given by:

Rm = −(μ/3) {E[(x · m)³] − E²[(x · m)²]}.

Since the risk is continuously differentiable, its minimization can be achieved via a
gradient descent method with respect to m, namely:

dmi/dt = −∂Rm/∂mi = μ E[φ(x · m, Θm) xi].
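A stochastic rendering of this gradient rule is sketched below, with the threshold Θm tracked by a running average of c² (the averaging constant, step size, and toy data are illustrative assumptions; the 4/3 coefficient follows the reconstruction of φ above).

```python
import numpy as np

def bcm_projection(X, lr=0.01, tau=100.0, epochs=20, rng=None):
    """Stochastic gradient step dm ∝ phi(c, theta) * x, with
    phi(c, theta) = c**2 - (4/3) * c * theta and theta ≈ E[c**2]
    maintained as an exponential running average."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    m = rng.normal(scale=0.1, size=d)
    theta = 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            c = X[i] @ m
            theta += (c * c - theta) / tau          # running estimate of E[c^2]
            phi = c * c - (4.0 / 3.0) * c * theta
            m += lr * phi * X[i]                    # move toward multimodal projections
    return m

# Bimodal toy data: the found direction separates the two clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (200, 5)),
               rng.normal(0, 0.3, (200, 5)) + np.r_[2.0, np.zeros(4)]])
m = bcm_projection(X)
print(m / np.linalg.norm(m))
```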
The resulting differential equations suggest a modified version of the law governing
synaptic weight modification in the BCM theory for learning and memory (Bienenstock, Cooper and Munro, 1982). This theory was presented to account for various
experimental results in visual cortical plasticity. The biological relevance of the
theory has been extensively studied (Saul et al., 1986; Bear et al., 1987; Cooper et
al., 1987; Bear et al., 1988), and it was shown that the theory is in agreement with
the classical deprivation experiments (Clothiaux et al., 1990).
The fact that the distribution has part of its mass on both sides of Θm makes this
loss a plausible projection index that seeks multimodalities. However, we still need
to reduce the sensitivity of the projection index to outliers and, for full generality,
allow any projected distribution to be shifted so that the part of the distribution
that satisfies c < Θm will have its mode at zero. The over-sensitivity to outliers
is addressed by considering a nonlinear neuron in which the neuron's activity is
defined to be c = q(x · m), where q usually represents a smooth sigmoidal function.
A more general definition that allows symmetry breaking of the projected
distributions, provides a solution to the second problem raised above, and is still
consistent with the statistical formulation, is c = q(x · m − a), for an arbitrary
threshold a which can be found by using gradient descent as well. For the nonlinear
neuron, Θm is defined to be Θm = E[q²(x · m)].
Based on this formulation, a network of Q identical nodes may be constructed. All
the neurons in this network receive the same input and inhibit each other, so as
to extract several features in parallel. A similar network has been studied in the
context of mean field theory by Scofield and Cooper (1985). The activity of neuron
k in the network is defined as ck = q(x · mk − ak), where mk is the synaptic weight
vector of neuron k, and ak is its threshold. The inhibited activity and threshold of
the k-th neuron are given by c̃k = ck − η Σ_{j≠k} cj, Θ̃m^k = E[c̃k²].
We omit the derivation of the synaptic modification equations which is similar to
the one for a single neuron, and present only the resulting modification equations
for a synaptic vector mk in a lateral inhibition network of nonlinear neurons:
dmk/dt = μ E{ φ(c̃k, Θ̃m^k) (q′(ck) − η Σ_{j≠k} q′(cj)) x }.
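A minimal sketch of this network update follows; the sigmoid choice of q, the inhibition strength η, and the threshold time constant are illustrative assumptions.

```python
import numpy as np

def bcm_network_step(M, thetas, x, eta=0.2, lr=0.01, tau=100.0):
    """One stochastic update for Q laterally inhibiting BCM units.
    M: (Q, d) synaptic matrix; thetas: (Q,) running thresholds."""
    sig = lambda t: 1.0 / (1.0 + np.exp(-t))          # a smooth sigmoidal q(.)
    c = sig(M @ x)                                     # unit activities c_k
    cbar = c - eta * (c.sum() - c)                     # inhibited activities
    thetas += (cbar ** 2 - thetas) / tau               # track E[cbar_k^2]
    phi = cbar ** 2 - (4.0 / 3.0) * cbar * thetas
    qp = c * (1.0 - c)                                 # sigmoid derivative q'
    gate = qp - eta * (qp.sum() - qp)                  # q'(c_k) - eta * sum_{j!=k} q'(c_j)
    M += lr * np.outer(phi * gate, x)
    return M, thetas
```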
The lateral inhibition network performs a direct search of Q-dimensional projections
together, and therefore may find a richer structure that a stepwise approach may
miss; see, e.g., example 14.1 in Huber (1985).
3 Comparison with other feature extraction methods
When dealing with a classification problem, the interesting features are those that
distinguish between classes. The network presented above has been shown to seek
multimodality in the projected distributions, which translates to clusters in the
original space, and therefore to find those directions that make a distinction between
different sets in the training data.
In this section we compare classification performance of a network that performs
dimensionality reduction (before the classification) based upon multimodality, and
a network that performs dimensionality reduction based upon minimization of misclassification error (using back-propagation with MSE criterion). This is done using
a phoneme classification experiment whose linguistic motivation is described below.
In the latter we regard the hidden units representation as a new reduced feature
representation of the input space. Classification on the new feature space was done
using back-propagation.¹

¹ See Intrator (1990) for comparison with principal components feature extraction and
with k-NN as a classifier.
Exploratory Feature Extraction in Speech Signals
Consider the six stop consonants [p,k,t,b,g,d], which have been a subject of recent
research in evaluating neural networks for phoneme recognition (see review in Lippmann, 1989). According to phonetic feature theory, these stops possess several common features, but only two distinguishing phonetic features, place of articulation
and voicing (see Blumstein & Lieberman 1984, for a review and related references
on phonetic feature theory). This theory suggests an experiment in which features
extracted from unvoiced stops can be used to distinguish place of articulation in
voiced stops as well. It is of interest if these features can be found from a single
speaker, how sensitive they are to voicing and whether they are speaker invariant.
The speech data consists of 20 consecutive time windows of 32msec with 30msec
overlap, aligned to the beginning of the burst. In each time window, a set of 22
energy levels is computed. These energy levels correspond to Zwicker critical band
filters (Zwicker, 1961). The consonant-vowel (CV) pairs were pronounced in isolation by native American speakers (two male, BSS and LTN, and one female, JES).
Additional details on the biological motivation for the preprocessing, and the linguistic motivation related to child language acquisition, can be found in Seebach (1990) and
Seebach and Intrator (1991).
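The 440-dimensional input (20 windows × 22 critical-band energies) can be sketched as follows. The band edges below are approximate Bark-scale boundaries, and the windowing and exact filter shapes used in the paper are not reproduced here.

```python
import numpy as np

# Approximate Bark-scale (Zwicker) critical-band edges in Hz (assumed values).
EDGES = [0, 100, 200, 300, 400, 510, 630, 770, 920, 1080, 1270, 1480,
         1720, 2000, 2320, 2700, 3150, 3700, 4400, 5300, 6400, 7700, 9500]

def burst_features(signal, fs, burst_idx, n_win=20, win_ms=32, hop_ms=2):
    """20 consecutive 32 ms windows (30 ms overlap) aligned to the burst,
    each reduced to 22 critical-band energies -> a 440-dim vector."""
    win, hop = int(fs * win_ms / 1000), int(fs * hop_ms / 1000)
    freqs = np.fft.rfftfreq(win, 1.0 / fs)
    taper = np.hanning(win)
    feats = []
    for k in range(n_win):
        seg = signal[burst_idx + k * hop : burst_idx + k * hop + win]
        spec = np.abs(np.fft.rfft(seg * taper)) ** 2
        for lo, hi in zip(EDGES[:-1], EDGES[1:]):
            feats.append(spec[(freqs >= lo) & (freqs < hi)].sum())
    return np.asarray(feats)          # shape (440,)

x = burst_features(np.random.randn(16000), fs=16000, burst_idx=1000)
print(x.shape)
```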
An average (over 25 tokens) of the six stop consonants followed by the vowel [a] is presented in Figure 2. All the images are smoothed
using a moving average. One can see some similarities between the voiced and
unvoiced stops, especially in the upper left corner of the image (high frequencies, beginning of the burst), and the radical difference between them in the low frequencies.
Figure 2: An average of the six stop consonants followed by the vowel [a].
Their order from left to right: [pa] [ba] [ka] [ga] [ta] [da]. Time increases
from the burst release on the X axis, and frequency increases on the Y axis.
In the experiments reported here, 5 features were extracted from the 440-dimensional original space. Although the dimensionality reduction methods were trained
only with the unvoiced tokens of a single speaker, the classifier was trained on (5
dimensional) voiced and unvoiced data from the other speakers as well.
The classification results, which are summarized in Table 1, show that the back-propagation network does well in finding structure useful for classification of the
trained data, but this structure is more sensitive to voicing. Classification results
using a BCM network suggest that, for this specific task, structure that is less
sensitive to voicing can be extracted, even though voicing has significant effects
on the speech signal itself. The results also suggest that these features are more
speaker invariant.
Table 1: Percentage of correct classification of place of articulation in voiced
and unvoiced stops, for features extracted by BCM and by back-propagation (B-P).

                 BCM     B-P
BSS /p,k,t/      100     100
BSS /b,g,d/      94.7    83.4
LTN /p,k,t/      95.6    97.7
LTN /b,g,d/      78.3    93.2
JES (both)       99.4    88.0
Figure 3: Synaptic weight images of the 5 hidden units of back-propagation
(top), and of the 5 BCM neurons (bottom).
The difference in performance between the two feature extractors may be partially
explained by looking at the synaptic weight vectors (images) extracted by both
methods: For the back-propagation feature extraction it can be seen that although
5 units were used, a smaller number of features was extracted. One of the main
distinction between the unvoiced stops in the training set is the high frequency burst
at the beginning of the consonant (the upper left corner). The back-propagation
method concentrated mainly on this feature, probably because it is sufficient to base
the recognition of the training set on this feature, and the fact that training stops
when misclassification error falls to zero. On the other hand, the BCM method does
not try to reduce the misclassification error and is able to find a richer, linguistically
meaningful structure, containing burst locations and formant tracking of the three
different stops that allowed a better generalization to other speakers and to voiced
stops.
The network and its training paradigm present a different approach to speaker
independent speech recognition. In this approach the speaker variability problem
is addressed by training a network that concentrates mainly on the distinguishing
features of a single speaker, as opposed to training a network that concentrates on
both the distinguishing and common features, on multi-speaker data.
Acknowledgements
I wish to thank Leon N Cooper for suggesting the problem and for providing many
helpful hints and insights. Geoff Hinton made invaluable comments. The application of BCM to speech is discussed in more detail in Seebach (1990) and in a
Exploratory Feature Extraction in Speech Signals
forthcoming article (Seebach and Intrator, 1991). Research was supported by the
National Science Foundation, the Army Research Office, and the Office of Naval
Research.
References
Bellman, R. E. (1961) Adaptive Control Processes, Princeton, NJ, Princeton University Press.
Bienenstock, E. L., L. N Cooper, and P.W. Munro (1982) Theory for the development of neuron selectivity: orientation specificity and binocular interaction in
visual cortex. J.Neurosci. 2:32-48
Bear, M. F., L. N Cooper, and F. F. Ebner (1987) A Physiological Basis for a
Theory of Synapse Modification. Science 237:42-48
Diaconis, P, and D. Freedman (1984) Asymptotics of Graphical Projection Pursuit.
The Annals of Statistics, 12 793-815.
Friedman, J. H. (1987) Exploratory Projection Pursuit. Journal of the American
Statistical Association 82-397:249-266
Hinton, G. E. and S. J. Nowlan (1990) The bootstrap Widrow-Hoffrule as a clusterformation algorithm. Neural Computation.
Huber P. J. (1985) Projection Pursuit. The Annal. of Sta.t. 13:435-475
Intrator N. (1990) A Neural Network For Feature Extraction. In D. S. Touretzky (ed.), Advances in Neural Information Processing System,s 2. San Mateo, CA:
Morgan Kaufmann.
Lippmann, R. P. (1989) Review of Neural Networks for Speech Recognition. Neural
Computation 1, 1-38.
Reilly, D. L., C.L. Scofield, L. N Cooper and C. Elbaum (1988) GENSEP: a multiple
neural network with modifiable network topology. INNS Conference on Neural
Networks.
Saul, A. and E. E. Clothiaux, 1986) Modeling and Simulation II: Simulation of
a Model for Development of Visual Cortical specificity. J. of Electrophysiological
Techniques, 13:279-306
Scofield, C. L. and L. N Cooper (1985) Development and properties of neural networks. Contemp. Phys. 26:125-145
Seebach, B. S. (1990) Evidence for the Development of Phonetic Property Detectors in a Neural Net without Innate Knowledge of Linguistic Structure. Ph.D.
Dissertation Brown University.
Duda R. O. and P. E. Hart (19;3) Pattern classification and scene analysis John
Wiley, New York
Zwicker E. (1961) Subdivision of the audible frequency range into critical bands
(Frequenzgruppen) Journal of the Acoustical Society of America 33:248
Fixing Max-Product: Convergent Message Passing
Algorithms for MAP LP-Relaxations
Amir Globerson Tommi Jaakkola
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139
{gamir,tommi}@csail.mit.edu
Abstract
We present a novel message passing algorithm for approximating the MAP problem in graphical models. The algorithm is similar in structure to max-product but
unlike max-product it always converges, and can be proven to find the exact MAP
solution in various settings. The algorithm is derived via block coordinate descent
in a dual of the LP relaxation of MAP, but does not require any tunable parameters
such as step size or tree weights. We also describe a generalization of the method
to cluster based potentials. The new method is tested on synthetic and real-world
problems, and compares favorably with previous approaches.
Graphical models are an effective approach for modeling complex objects via local interactions. In
such models, a distribution over a set of variables is assumed to factor according to cliques of a graph
with potentials assigned to each clique. Finding the assignment with highest probability in these
models is key to using them in practice, and is often referred to as the MAP (maximum aposteriori)
assignment problem. In the general case the problem is NP hard, with complexity exponential in the
tree-width of the underlying graph.
Linear programming (LP) relaxations have proven very useful in approximating the MAP problem,
and often yield satisfactory empirical results. These approaches relax the constraint that the solution
is integral, and generally yield non-integral solutions. However, when the LP solution is integral,
it is guaranteed to be the exact MAP. For some classes of problems the LP relaxation is provably
correct. These include the minimum cut problem and maximum weight matching in bi-partite graphs
[8]. Although LP relaxations can be solved using standard LP solvers, this may be computationally
intensive for large problems [13]. The key problem with generic LP solvers is that they do not use
the graph structure explicitly and thus may be sub-optimal in terms of computational efficiency.
The max-product method [7] is a message passing algorithm that is often used to approximate the
MAP problem. In contrast to generic LP solvers, it makes direct use of the graph structure in
constructing and passing messages, and is also very simple to implement. The relation between
max-product and the LP relaxation has remained largely elusive, although there are some notable
exceptions: For tree-structured graphs, max-product and LP both yield the exact MAP. A recent
result [1] showed that for maximum weight matching on bi-partite graphs max-product and LP also
yield the exact MAP [1]. Finally, Tree-Reweighted max-product (TRMP) algorithms [5, 10] were
shown to converge to the LP solution for binary xi variables, as shown in [6].
In this work, we propose the Max Product Linear Programming algorithm (MPLP) - a very simple
variation on max-product that is guaranteed to converge, and has several advantageous properties.
MPLP is derived from the dual of the LP relaxation, and is equivalent to block coordinate descent in
the dual. Although this results in monotone improvement of the dual objective, global convergence
is not always guaranteed since coordinate descent may get stuck in suboptimal points. This can
be remedied using various approaches, but in practice we have found MPLP to converge to the LP
solution in a majority of the cases we studied. To derive MPLP we use a special form of the dual
LP, which involves the introduction of redundant primal variables and constraints. We show how
the dual variables corresponding to these constraints turn out to be the messages in the algorithm.
We evaluate the method on Potts models and protein design problems, and show that it compares
favorably with max-product (which often does not converge for these problems) and TRMP.
1 The Max-Product and MPLP Algorithms
The max-product algorithm [7] is one of the most often used methods for solving MAP problems.
Although it is neither guaranteed to converge to the correct solution, or in fact converge at all, it
provides satisfactory results in some cases. Here we present two algorithms: EMPLP (edge based
MPLP) and NMPLP (node based MPLP), which are structurally very similar to max-product, but
have several key advantages:
- After each iteration, the messages yield an upper bound on the MAP value, and the sequence of bounds is monotone decreasing and convergent. The messages also have a limit point that is a fixed point of the update rule.
- No additional parameters (e.g., tree weights as in [6]) are required.
- If the fixed point beliefs have a unique maximizer then they correspond to the exact MAP.
- For binary variables, MPLP can be used to obtain the solution to an LP relaxation of the MAP problem. Thus, when this LP relaxation is exact and variables are binary, MPLP will find the MAP solution. Moreover, for any variable whose beliefs are not tied, the MAP assignment can be found (i.e., the solution is partially decodable).
Pseudo code for the algorithms (and for max-product) is given in Fig. 1. As we show in the next
sections, MPLP is essentially a block coordinate descent algorithm in the dual of a MAP LP relaxation. Every update of the MPLP messages corresponds to exact minimization of a set of dual
variables. For EMPLP minimization is over the set of variables corresponding to an edge, and for
NMPLP it is over the set of variables corresponding to all the edges a given node appears in (i.e., a
star). The properties of MPLP result from its relation to the LP dual. In what follows we describe
the derivation of the MPLP algorithms and prove their properties.
2 The MAP Problem and its LP Relaxation
We consider functions over n variables x = \{x_1, ..., x_n\} defined as follows. Given a graph G = (V, E) with n vertices, and potentials \theta_{ij}(x_i, x_j) for all edges ij \in E, define the function^1

    f(x; \theta) = \sum_{ij \in E} \theta_{ij}(x_i, x_j) .    (1)
The MAP problem is defined as finding an assignment x^M that maximizes the function f(x; \theta). Below we describe the standard LP relaxation for this problem. Denote by \{\mu_{ij}(x_i, x_j)\}_{ij \in E} distributions over variables corresponding to edges ij \in E and \{\mu_i(x_i)\}_{i \in V} distributions corresponding to nodes i \in V. We will use \mu to denote a given set of distributions over all edges and nodes. The set M_L(G) is defined as the set of \mu where pairwise and singleton distributions are consistent:

    M_L(G) = \{ \mu \ge 0 :  \sum_{\hat{x}_i} \mu_{ij}(\hat{x}_i, x_j) = \mu_j(x_j),  \sum_{\hat{x}_j} \mu_{ij}(x_i, \hat{x}_j) = \mu_i(x_i)  \forall ij \in E, x_i, x_j ;
                             \sum_{x_i} \mu_i(x_i) = 1  \forall i \in V \}

Now consider the following linear program:

    MAPLPR :   \mu^{L*} = \arg\max_{\mu \in M_L(G)} \mu \cdot \theta .    (2)

where \mu \cdot \theta is shorthand for \mu \cdot \theta = \sum_{ij \in E} \sum_{x_i, x_j} \mu_{ij}(x_i, x_j) \theta_{ij}(x_i, x_j). It is easy to show (see e.g., [10]) that the optimum of MAPLPR yields an upper bound on the MAP value, i.e. \mu^{L*} \cdot \theta \ge f(x^M). Furthermore, when the optimal \mu_i(x_i) have only integral values, the assignment that maximizes \mu_i(x_i) yields the correct MAP assignment. In what follows we show how the MPLP algorithms can be derived from the dual of MAPLPR.
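As a concrete illustration of Eq. 1 (this sketch and all names in it are our own additions, not part of the paper), the MAP problem can be solved by brute force on very small models; the exponential cost of this enumeration is exactly what motivates the LP relaxation:

```python
# Illustrative sketch (ours, with made-up names): brute-force MAP for the
# pairwise objective of Eq. 1. Exponential in n, so usable only on tiny
# graphs; MPLP and the LP relaxation exist to avoid this enumeration.
import itertools
import numpy as np

def f(x, theta):
    # theta: dict mapping edge (i, j) -> table theta_ij of shape (|X_i|, |X_j|)
    return sum(t[x[i], x[j]] for (i, j), t in theta.items())

def brute_force_map(n_vars, n_states, theta):
    assignments = itertools.product(range(n_states), repeat=n_vars)
    x_map = max(assignments, key=lambda x: f(x, theta))
    return x_map, f(x_map, theta)

rng = np.random.default_rng(0)
theta = {(0, 1): rng.normal(size=(3, 3)), (1, 2): rng.normal(size=(3, 3))}
print(brute_force_map(n_vars=3, n_states=3, theta=theta))
```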
^1 We note that some authors also add a term \sum_{i \in V} \theta_i(x_i) to f(x; \theta). However, these terms can be included in the pairwise functions \theta_{ij}(x_i, x_j), so we ignore them for simplicity.
3 The LP Relaxation Dual
Since MAPLPR is an LP, it has an equivalent convex dual. In App. A we derive a special dual of MAPLPR using a different representation of M_L(G) with redundant variables. The advantage of this dual is that it allows the derivation of simple message passing algorithms. The dual is described in the following proposition.

Proposition 1 The following optimization problem is a convex dual of MAPLPR

    DMAPLPR :   \min \sum_i \max_{x_i} \sum_{k \in N(i)} \max_{x_k} \beta_{ki}(x_k, x_i)    (3)
                s.t.  \beta_{ji}(x_j, x_i) + \beta_{ij}(x_i, x_j) = \theta_{ij}(x_i, x_j) ,

where the dual variables are \beta_{ij}(x_i, x_j) for all ij, ji \in E and values of x_i and x_j.
The dual has an intuitive interpretation in terms of re-parameterizations. Consider the star shaped graph G_i consisting of node i and all its neighbors N(i). Assume the potential on edge ki (for k \in N(i)) is \beta_{ki}(x_k, x_i). The value of the MAP assignment for this model is \max_{x_i} \sum_{k \in N(i)} \max_{x_k} \beta_{ki}(x_k, x_i). This is exactly the term in the objective of DMAPLPR. Thus the dual corresponds to individually decoding star graphs around all nodes i \in V where the potentials on the graph edges should sum to the original potential. It is easy to see that this will always result in an upper bound on the MAP value. The somewhat surprising result of the duality is that there exists a \beta assignment such that star decoding yields the optimal value of MAPLPR.
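To make the star-decoding view concrete, the following sketch (ours; the data layout is an assumption for illustration) evaluates the dual objective of Eq. 3 for a given set of \beta messages; any \beta satisfying the reparameterization constraint yields an upper bound on the MAP value:

```python
# Sketch (ours): evaluate the DMAPLPR objective of Eq. 3 by decoding the
# star around every node. beta[(k, i)] is a table beta_ki(x_k, x_i).
import numpy as np

def dual_objective(beta, neighbors):
    total = 0.0
    for i, nbrs in neighbors.items():
        # lambda_ki(x_i) = max_{x_k} beta_ki(x_k, x_i), summed over k in N(i)
        lam = sum(beta[(k, i)].max(axis=0) for k in nbrs)
        total += lam.max()                      # outer max over x_i
    return total
```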
4 Block Coordinate Descent in the Dual
To obtain a convergent algorithm we use a simple block coordinate descent strategy. At every iteration, fix all variables except a subset, and optimize over this subset. It turns out that this can be done in closed form for the cases we consider. We begin by deriving the EMPLP algorithm. Consider fixing all the \beta variables except those corresponding to some edge ij \in E (i.e., \beta_{ij} and \beta_{ji}), and minimizing DMAPLPR over the non-fixed variables. Only two terms in the DMAPLPR objective depend on \beta_{ij} and \beta_{ji}. We can write those as

    f(\beta_{ij}, \beta_{ji}) = \max_{x_i} [ \lambda_i^{-j}(x_i) + \max_{x_j} \beta_{ji}(x_j, x_i) ] + \max_{x_j} [ \lambda_j^{-i}(x_j) + \max_{x_i} \beta_{ij}(x_i, x_j) ]    (4)

where we defined \lambda_i^{-j}(x_i) = \sum_{k \in N(i) \setminus j} \lambda_{ki}(x_i) and \lambda_{ki}(x_i) = \max_{x_k} \beta_{ki}(x_k, x_i) as in App. A. Note that the function f(\beta_{ij}, \beta_{ji}) depends on the other \beta values only through \lambda_j^{-i}(x_j) and \lambda_i^{-j}(x_i). This implies that the optimization can be done solely in terms of the \lambda_{ij}(x_j) and there is no need to store the \beta values explicitly. The optimal \beta_{ij}, \beta_{ji} are obtained by minimizing f(\beta_{ij}, \beta_{ji}) subject to the re-parameterization constraint \beta_{ji}(x_j, x_i) + \beta_{ij}(x_i, x_j) = \theta_{ij}(x_i, x_j). The following proposition characterizes the minimum of f(\beta_{ij}, \beta_{ji}). In fact, as mentioned above, we do not need to characterize the optimal \beta_{ij}(x_i, x_j) itself, but only the new \lambda values.

Proposition 2 Minimizing the function f(\beta_{ij}, \beta_{ji}) yields the following \lambda_{ji}(x_i) (and the equivalent expression for \lambda_{ij}(x_j))

    \lambda_{ji}(x_i) = -\frac{1}{2} \lambda_i^{-j}(x_i) + \frac{1}{2} \max_{x_j} [ \lambda_j^{-i}(x_j) + \theta_{ij}(x_i, x_j) ]
The proposition is proved in App. B. The \lambda updates above result in the EMPLP algorithm, described in Fig. 1. Note that since the \beta optimization affects both \lambda_{ji}(x_i) and \lambda_{ij}(x_j), both these messages need to be updated simultaneously.
We proceed to derive the NMPLP algorithm. For a given node i \in V, we consider all its neighbors j \in N(i), and wish to optimize over the variables \beta_{ji}(x_j, x_i) for ji, ij \in E (i.e., all the edges in a star centered on i), while the other variables are fixed. One way of doing so is to use the EMPLP algorithm for the edges in the star, and iterate it until convergence. We now show that the result of
Inputs: A graph G = (V, E), potential functions \theta_{ij}(x_i, x_j) for each edge ij \in E.
Initialization: Initialize messages to any value.
Algorithm:
- Iterate until a stopping criterion is satisfied:
    - Max-product: Iterate over messages and update (c_{ji} shifts the max to zero)
          m_{ji}(x_i) \leftarrow \max_{x_j} [ m_j^{-i}(x_j) + \theta_{ij}(x_i, x_j) ] - c_{ji}
    - EMPLP: For each ij \in E, update \lambda_{ji}(x_i) and \lambda_{ij}(x_j) simultaneously (the update for \lambda_{ij}(x_j) is the same with i and j exchanged)
          \lambda_{ji}(x_i) \leftarrow -\frac{1}{2} \lambda_i^{-j}(x_i) + \frac{1}{2} \max_{x_j} [ \lambda_j^{-i}(x_j) + \theta_{ij}(x_i, x_j) ]
    - NMPLP: Iterate over nodes i \in V and update all \lambda_{ij}(x_j) where j \in N(i)
          \lambda_{ij}(x_j) \leftarrow \max_{x_i} [ \theta_{ij}(x_i, x_j) - \gamma_{ji}(x_i) + \frac{2}{|N(i)| + 1} \sum_{k \in N(i)} \gamma_{ki}(x_i) ]
- Calculate node "beliefs": Set b_i(x_i) to be the sum of incoming messages into node i \in V (e.g., for NMPLP set b_i(x_i) = \sum_{k \in N(i)} \gamma_{ki}(x_i)).
Output: Return assignment x defined as x_i = \arg\max_{\hat{x}_i} b(\hat{x}_i).

Figure 1: The max-product, EMPLP and NMPLP algorithms. Max-product, EMPLP and NMPLP use messages m_{ij}, \lambda_{ij} and \gamma_{ij} respectively. We use the notation m_j^{-i}(x_j) = \sum_{k \in N(j) \setminus i} m_{kj}(x_j).
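The following sketch (our own reading of Fig. 1, not the authors' code and untested against it) implements the EMPLP update with numpy; the decoded assignment is obtained from the node beliefs as in the Output step above:

```python
# Sketch (ours, untested against the authors' code) of the EMPLP update in
# Fig. 1. theta[(i, j)] holds theta_ij(x_i, x_j), each undirected edge
# stored once; lam[(i, j)] holds the message lambda_ij(x_j).
import numpy as np

def edge_table(theta, i, j):
    return theta[(i, j)] if (i, j) in theta else theta[(j, i)].T

def emplp(theta, nstates, n_iters=200):
    nodes = sorted({v for e in theta for v in e})
    directed = [e for edge in theta for e in (edge, edge[::-1])]
    nbrs = {i: [b for (a, b) in directed if a == i] for i in nodes}
    lam = {(i, j): np.zeros(nstates[j]) for i in nodes for j in nbrs[i]}
    for _ in range(n_iters):
        for (i, j) in list(theta):
            th = edge_table(theta, i, j)                   # (|X_i|, |X_j|)
            lam_i = sum(lam[(k, i)] for k in nbrs[i] if k != j)  # lambda_i^{-j}
            lam_j = sum(lam[(k, j)] for k in nbrs[j] if k != i)  # lambda_j^{-i}
            # simultaneous update of lambda_ji(x_i) and lambda_ij(x_j)
            lam[(j, i)] = -0.5 * lam_i + 0.5 * (th + lam_j).max(axis=1)
            lam[(i, j)] = -0.5 * lam_j + 0.5 * (th.T + lam_i).max(axis=1)
    beliefs = {i: sum(lam[(k, i)] for k in nbrs[i]) for i in nodes}
    return {i: int(np.argmax(beliefs[i])) for i in nodes}
```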
this optimization can be found in closed form. The assumption about \beta being fixed outside the star implies that \lambda_j^{-i}(x_j) is fixed. Define: \gamma_{ji}(x_i) = \max_{x_j} [ \theta_{ij}(x_i, x_j) + \lambda_j^{-i}(x_j) ]. Simple algebra yields the following relation between \lambda_i^{-j}(x_i) and \gamma_{ki}(x_i) for k \in N(i)

    \lambda_i^{-j}(x_i) = -\gamma_{ji}(x_i) + \frac{2}{|N(i)| + 1} \sum_{k \in N(i)} \gamma_{ki}(x_i)    (5)

Plugging this into the definition of \lambda_{ji}(x_i) we obtain the NMPLP update in Fig. 1. The messages for both algorithms can be initialized to any value since it can be shown that after one iteration they will correspond to valid \lambda values.
5 Convergence Properties
The MPLP algorithm decreases the dual objective (i.e., an upper bound on the MAP value) at every
iteration, and thus its dual objective values form a convergent sequence. Using arguments similar to
[5] it can be shown that MPLP has a limit point that is a fixed point of its updates. This in itself does
not guarantee convergence to the dual optimum since coordinate descent algorithms may get stuck
at a point that is not a global optimum. There are ways of overcoming this difficulty, for example by
smoothing the objective [4] or using techniques as in [2] (see p. 636). We leave such extensions for
further work. In this section we provide several results about the properties of the MPLP fixed points
and their relation to the corresponding LP. First, we claim that if all beliefs have unique maxima then
the exact MAP assignment is obtained.
Proposition 3 If the fixed point of MPLP has b_i(x_i) such that for all i the function b_i(x_i) has a unique maximizer x_i^*, then x^* is the solution to the MAP problem and the LP relaxation is exact.
Since the dual objective is always greater than or equal to the MAP value, it suffices to show that there exists a dual feasible point whose objective value is f(x^*). Denote by \beta^*, \lambda^* the value of the corresponding dual parameters at the fixed point of MPLP. Then the dual objective satisfies

    \sum_i \max_{x_i} \sum_{k \in N(i)} \lambda^*_{ki}(x_i) = \sum_i \sum_{k \in N(i)} \max_{x_k} \beta^*_{ki}(x_k, x_i^*) = \sum_i \sum_{k \in N(i)} \beta^*_{ki}(x_k^*, x_i^*) = f(x^*)

To see why the second equality holds, note that b_i(x_i^*) = \max_{x_i, x_j} [ \lambda_i^{-j}(x_i) + \beta_{ji}(x_j, x_i) ] and b_j(x_j^*) = \max_{x_i, x_j} [ \lambda_j^{-i}(x_j) + \beta_{ij}(x_i, x_j) ]. By the equalization property in Eq. 9 the arguments of the two max operations are equal. From the unique maximum assumption it follows that x_i^*, x_j^* are the unique maximizers of the above. It follows that \beta_{ji}, \beta_{ij} are also maximized by x_i^*, x_j^*.
In the general case, the MPLP fixed point may not correspond to a primal optimum because of the local optima problem with coordinate descent. However, when the variables are binary, fixed points do correspond to primal solutions, as the following proposition states.

Proposition 4 When x_i are binary, the MPLP fixed point can be used to obtain the primal optimum.

The claim can be shown by constructing a primal optimal solution \mu^*. For tied b_i, set \mu_i^*(x_i) to 0.5 and for untied b_i, set \mu_i^*(x_i^*) to 1. If b_i, b_j are not tied we set \mu_{ij}^*(x_i^*, x_j^*) = 1. If b_i is not tied but b_j is, we set \mu_{ij}^*(x_i^*, x_j) = 0.5. If b_i, b_j are tied then \beta_{ji}, \beta_{ij} can be shown to be maximized at either x_i^*, x_j^* = (0, 0), (1, 1) or x_i^*, x_j^* = (0, 1), (1, 0). We then set \mu_{ij}^* to be 0.5 at one of these assignment pairs. The resulting \mu^* is clearly primal feasible. Setting \delta_i^* = \max_{x_i} b_i(x_i) we obtain that the dual variables (\beta^*, \lambda^*, \delta^*) and primal \mu^* satisfy complementary slackness for the LP in Eq. 7 and therefore \mu^* is primal optimal. The binary optimality result implies partial decodability, since [6] shows that the LP is partially decodable for binary variables.
6 Beyond pairwise potentials: Generalized MPLP

In the previous sections we considered maximizing functions which factor according to the edges of the graph. A more general setting considers clusters c_1, ..., c_k \subseteq \{1, ..., n\} (the set of clusters is denoted by C), and a function f(x; \theta) = \sum_c \theta_c(x_c) defined via potentials over clusters \theta_c(x_c). The MAP problem in this case also has an LP relaxation (see e.g. [11]). To define the LP we introduce the following definitions: S = \{c \cap \hat{c} : c, \hat{c} \in C, c \cap \hat{c} \neq \emptyset\} is the set of intersections between clusters and S(c) = \{s \in S : s \subseteq c\} is the set of overlap sets for cluster c. We now consider marginals over the variables in c \in C and s \in S and require that cluster marginals agree on their overlap. Denote this set by M_L(C). The LP relaxation is then to maximize \mu \cdot \theta subject to \mu \in M_L(C).

As in Sec. 4, we can derive message passing updates that result in monotone decrease of the dual LP of the above relaxation. The derivation is similar and we omit the details. The key observation is that one needs to introduce |S(c)| copies of each marginal \mu_c(x_c) (instead of the two copies in the pairwise case). Next, as in the EMPLP derivation we assume all \lambda are fixed except those corresponding to some cluster c. The resulting messages are \lambda_{c \to s}(x_s) from a cluster c to all of its intersection sets s \in S(c). The update on these messages turns out to be:

    \lambda_{c \to s}(x_s) = -\left(1 - \frac{1}{|S(c)|}\right) \lambda_s^{-c}(x_s) + \frac{1}{|S(c)|} \max_{x_{c \setminus s}} \left[ \sum_{\hat{s} \in S(c) \setminus s} \lambda_{\hat{s}}^{-c}(x_{\hat{s}}) + \theta_c(x_c) \right]

where for a given c \in C all \lambda_{c \to s} should be updated simultaneously for s \in S(c), and \lambda_s^{-c}(x_s) is defined as the sum of messages into s that are not from c. We refer to this algorithm as Generalized EMPLP (GEMPLP). It is possible to derive an algorithm similar to NMPLP that updates several clusters simultaneously, but its structure is more involved and we do not address it here.
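As a small illustration of the bookkeeping GEMPLP requires (our own sketch; the cluster contents are invented for the example), the intersection sets S and S(c) can be computed directly from the cluster list:

```python
# Sketch (ours): the index sets used by GEMPLP. Given clusters as
# frozensets of variable indices, build S (pairwise intersections)
# and S(c) (the overlap sets contained in each cluster c).
from itertools import combinations

def overlap_sets(clusters):
    S = {c1 & c2 for c1, c2 in combinations(clusters, 2) if c1 & c2}
    S_of = {c: [s for s in S if s <= c] for c in clusters}
    return S, S_of

clusters = [frozenset({1, 2, 3}), frozenset({3, 4}), frozenset({2, 3, 5})]
S, S_of = overlap_sets(clusters)   # here S = { {3}, {2, 3} }
```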
7 Related Work
Weiss et al. [11] recently studied the fixed points of a class of max-product like algorithms. Their
analysis focused on properties of fixed points rather than convergence guarantees. Specifically, they
showed that if the counting numbers used in a generalized max-product algorithm satisfy certain
properties, then its fixed points will be the exact MAP if the beliefs have unique maxima, and for
binary variables the solution can be partially decodable. Both these properties are obtained for the
MPLP fixed points, and in fact we can show that MPLP satisfies the conditions in [11], so that
we obtain these properties as corollaries of [11]. We stress however, that [11] does not address
convergence of algorithms, but rather properties of their fixed points, if they converge.
MPLP is similar in some aspects to Kolmogorov's TRW-S algorithm [5]. TRW-S is also a monotone
coordinate descent method in a dual of the LP relaxation and its fixed points also have similar
guarantees to those of MPLP [6]. Furthermore, convergence to a local optimum may occur, as it
does for MPLP. One advantage of MPLP lies in the simplicity of its updates and the fact that it is
parameter free. The other is its simple generalization to potentials over clusters of nodes (Sec. 6).
Recently, several new dual LP algorithms have been introduced, which are more closely related to
our formalism. Werner [12] presented a class of algorithms which also improve the dual LP at every
iteration. The simplest of those is the max-sum-diffusion algorithm, which is similar to our EMPLP
algorithm, although the updates are different from ours. Independently, Johnson et al. [4] presented
a class of algorithms that improve duals of the MAP-LP using coordinate descent. They decompose
the model into tractable parts by replicating variables and enforce replication constraints within the
Lagrangian dual. Our basic formulation in Eq. 3 could be derived from their perspective. However,
the updates in the algorithm and the analysis differ. Johnson et al. also presented a method for
overcoming the local optimum problem, by smoothing the objective so that it is strictly convex.
Such an approach could also be used within our algorithms. Vontobel and Koetter [9] recently
introduced a coordinate descent algorithm for decoding LDPC codes. Their method is specifically
tailored for this case, and uses updates that are similar to our edge based updates.
Finally, the concept of dual coordinate descent may be used in approximating marginals as well. In
[3] we use such an approach to optimize a variational bound on the partition function. The derivation
uses some of the ideas used in the MPLP dual, but importantly does not find the minimum for each
coordinate. Instead, a gradient like step is taken at every iteration to decrease the dual objective.
8 Experiments
We compared NMPLP to three other message passing algorithms:^2 Tree-Reweighted max-product (TRMP) [10],^3 standard max-product (MP), and GEMPLP. For MP and TRMP we used the standard approach of damping messages with a factor of 0.5. We ran all algorithms for a maximum of 2000 iterations, and used the hit-time measure to compare their speed of convergence. This measure is defined as follows: At every iteration the beliefs can be used to obtain an assignment x with value f(x). We define the hit-time as the first iteration at which the maximum value of f(x) is achieved.^4
We first experimented with a 10 x 10 grid graph, with 5 values per state. The function f(x) was a Potts model: f(x) = \sum_{ij \in E} \lambda_{ij} I(x_i = x_j) + \sum_{i \in V} \lambda_i(x_i).^5 The values for \lambda_{ij} and \lambda_i(x_i) were randomly drawn from [-c_I, c_I] and [-c_F, c_F] respectively, and we used values of c_I and c_F in the range [0.1, 2.35] (with intervals of 0.25), resulting in 100 different models. The clusters for GEMPLP were the faces of the graph [14]. To see if NMPLP converges to the LP solution we also used an LP solver to solve the LP relaxation. We found that the normalized difference between the NMPLP and LP objectives was at most 10^{-3} (median 10^{-7}), suggesting that NMPLP typically converged to the LP solution. Fig. 2 (top row) shows the results for the three algorithms. It can be seen that while all non-cluster based algorithms obtain similar f(x) values, NMPLP has better hit-time (in the median) than TRMP and MP, and MP does not converge in many cases (see caption). GEMPLP converges more slowly than NMPLP, but obtains much better f(x) values. In fact, in 99% of the cases the normalized difference between the GEMPLP objective and the f(x) value was less than 10^{-5}, suggesting that the exact MAP solution was found.
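For reproducibility, here is a sketch (ours; the sampler details are assumptions consistent with the description above) of the random Potts instances:

```python
# Sketch (ours; sampler details assumed) of the random Potts instances:
# 10x10 grid, q = 5 states, pairwise terms lam_ij * I(x_i = x_j) with
# lam_ij ~ U[-c_I, c_I] and fields lam_i(x_i) ~ U[-c_F, c_F].
import numpy as np

def potts_instance(side=10, q=5, c_i=1.0, c_f=1.0, seed=0):
    rng = np.random.default_rng(seed)
    nodes = [(r, c) for r in range(side) for c in range(side)]
    on_grid = set(nodes)
    edges = [(u, v) for u in nodes
             for v in ((u[0] + 1, u[1]), (u[0], u[1] + 1)) if v in on_grid]
    theta = {e: rng.uniform(-c_i, c_i) * np.eye(q) for e in edges}
    field = {u: rng.uniform(-c_f, c_f, size=q) for u in nodes}
    return theta, field    # field can be folded into theta (footnote 5)
```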
We next applied the algorithms to the real world problems of protein design. In [13], Yanover et al. show how these problems can be formalized in terms of finding a MAP in an appropriately constructed graphical model.^6 We used all algorithms except GEMPLP (since there is no natural choice for clusters in this case) to approximate the MAP solution on the 97 models used in [13]. In these models the number of states per variable is 2 - 158, and there are up to 180 variables per model. Fig. 2 (bottom) shows results for all the design problems. In this case only 11% of the MP runs converged, and NMPLP was better than TRMP in terms of hit-time and comparable in f(x) value. The performance of MP was good on the runs where it converged.
^2 As expected, NMPLP was faster than EMPLP so only NMPLP results are given.
^3 The edge weights for TRMP corresponded to a uniform distribution over all spanning trees.
^4 This is clearly a post-hoc measure since it can only be obtained after the algorithm has exceeded its maximum number of iterations. However, it is a reasonable algorithm-independent measure of convergence.
^5 The potential \lambda_i(x_i) may be folded into the pairwise potential to yield a model as in Eq. 1.
^6 Data available from http://jmlr.csail.mit.edu/papers/volume7/yanover06a/Rosetta Design Dataset.tgz
[Figure 2: four panels (a)-(d) of box plots comparing MP, TRMP and GMPLP against NMPLP, showing \Delta(Hit Time) and \Delta(Value); see caption below.]
Figure 2: Evaluation of message passing algorithms on Potts models and protein design problems. (a,c):
Convergence time results for the Potts models (a) and protein design problems (c). The box-plots (horiz. red
line indicates median) show the difference between the hit-time for the other algorithms and NMPLP. (b,d):
Value of integer solutions for the Potts models (b) and protein design problems (d). The box-plots show the
normalized difference between the value of f (x) for NMPLP and the other algorithms. All figures are such
that better MPLP performance yields positive Y axis values. Max-product converged on 58% of the cases for
the Potts models, and on 11% of the protein problems. Only convergent max-product runs are shown.
9 Conclusion
We have presented a convergent algorithm for MAP approximation that is based on block coordinate descent of the MAP-LP relaxation dual. The algorithm can also be extended to cluster based
functions, which result empirically in improved MAP estimates. This is in line with the observations in [14] that generalized belief propagation algorithms can result in significant performance
improvements. However generalized max-product algorithms [14] are not guaranteed to converge
whereas GMPLP is. Furthermore, the GMPLP algorithm does not require a region graph and only
involves intersection between pairs of clusters. In conclusion, MPLP has the advantage of resolving
the convergence problems of max-product while retaining its simplicity, and offering the theoretical
guarantees of LP relaxations. We thus believe it should be useful in a wide array of applications.
A Derivation of the dual

Before deriving the dual, we first express the constraint set M_L(G) in a slightly different way. The definition of M_L(G) in Sec. 2 uses a single distribution \mu_{ij}(x_i, x_j) for every ij \in E. In what follows, we use two copies of this pairwise distribution for every edge, which we denote \bar{\mu}_{ij}(x_i, x_j) and \bar{\mu}_{ji}(x_j, x_i), and we add the constraint that these two copies both equal the original \mu_{ij}(x_i, x_j). For this extended set of pairwise marginals, we consider the following set of constraints which is clearly equivalent to M_L(G). On the rightmost column we give the dual variables that will correspond to each constraint (we omit non-negativity constraints).

    \bar{\mu}_{ij}(x_i, x_j) = \mu_{ij}(x_i, x_j)                  \forall ij \in E, x_i, x_j      [\beta_{ij}(x_i, x_j)]
    \bar{\mu}_{ji}(x_j, x_i) = \mu_{ij}(x_i, x_j)                  \forall ij \in E, x_i, x_j      [\beta_{ji}(x_j, x_i)]
    \sum_{\hat{x}_i} \bar{\mu}_{ij}(\hat{x}_i, x_j) = \mu_j(x_j)   \forall ij \in E, x_j           [\lambda_{ij}(x_j)]
    \sum_{\hat{x}_j} \bar{\mu}_{ji}(\hat{x}_j, x_i) = \mu_i(x_i)   \forall ji \in E, x_i           [\lambda_{ji}(x_i)]
    \sum_{x_i} \mu_i(x_i) = 1                                      \forall i \in V                 [\delta_i]
                                                                                                      (6)

We denote the set of (\mu, \bar{\mu}) satisfying these constraints by \hat{M}_L(G). We can now state an LP that is equivalent to MAPLPR, only with an extended set of variables and constraints. The equivalent problem is to maximize \mu \cdot \theta subject to (\mu, \bar{\mu}) \in \hat{M}_L(G) (note that the objective uses the original \mu copy). LP duality transformation of the extended problem yields the following LP

    min   \sum_i \delta_i
    s.t.  \lambda_{ij}(x_j) - \beta_{ij}(x_i, x_j) \ge 0                            \forall ij, ji \in E, x_i, x_j
          \beta_{ij}(x_i, x_j) + \beta_{ji}(x_j, x_i) = \theta_{ij}(x_i, x_j)       \forall ij \in E, x_i, x_j       (7)
          -\sum_{k \in N(i)} \lambda_{ki}(x_i) + \delta_i \ge 0                     \forall i \in V, x_i

We next simplify the above LP by eliminating some of its constraints and variables. Since each variable \delta_i appears in only one constraint, and the objective minimizes \delta_i, it follows that \delta_i = \max_{x_i} \sum_{k \in N(i)} \lambda_{ki}(x_i) and the constraints with \delta_i can be discarded. Similarly, since \lambda_{ij}(x_j) appears in a single constraint, we have that for all ij \in E, ji \in E, x_i, x_j: \lambda_{ij}(x_j) = \max_{x_i} \beta_{ij}(x_i, x_j) and the constraints with \lambda_{ij}(x_j), \lambda_{ji}(x_i) can also be discarded. Using the eliminated \delta_i and \lambda_{ji}(x_i) variables, we obtain that the LP in Eq. 7 is equivalent to that in Eq. 3. Note that the objective in Eq. 3 is convex since it is a sum of point-wise maxima of convex functions.
B Proof of Proposition 2

We wish to minimize f in Eq. 4 subject to the constraint that \beta_{ij} + \beta_{ji} = \theta_{ij}. Rewrite f as

    f(\beta_{ij}, \beta_{ji}) = \max_{x_i, x_j} [ \lambda_i^{-j}(x_i) + \beta_{ji}(x_j, x_i) ] + \max_{x_i, x_j} [ \lambda_j^{-i}(x_j) + \beta_{ij}(x_i, x_j) ]    (8)

The sum of the two arguments in the max is \lambda_i^{-j}(x_i) + \lambda_j^{-i}(x_j) + \theta_{ij}(x_i, x_j) (because of the constraints on \beta). Thus the minimum must be greater than \max_{x_i, x_j} \frac{1}{2} [ \lambda_i^{-j}(x_i) + \lambda_j^{-i}(x_j) + \theta_{ij}(x_i, x_j) ]. One assignment to \beta that achieves this minimum is obtained by requiring an equalization condition:^7

    \lambda_j^{-i}(x_j) + \beta_{ij}(x_i, x_j) = \lambda_i^{-j}(x_i) + \beta_{ji}(x_j, x_i) = \frac{1}{2} [ \theta_{ij}(x_i, x_j) + \lambda_i^{-j}(x_i) + \lambda_j^{-i}(x_j) ]    (9)

which implies \beta_{ij}(x_i, x_j) = \frac{1}{2} [ \theta_{ij}(x_i, x_j) + \lambda_i^{-j}(x_i) - \lambda_j^{-i}(x_j) ] and a similar expression for \beta_{ji}. The resulting \lambda_{ij}(x_j) = \max_{x_i} \beta_{ij}(x_i, x_j) are then the ones in Prop. 2.
Acknowledgments
The authors acknowledge support from the Defense Advanced Research Projects Agency (Transfer
Learning program). Amir Globerson was also supported by the Rothschild Yad-Hanadiv fellowship.
References
[1] M. Bayati, D. Shah, and M. Sharma. Maximum weight matching via max-product belief propagation.
IEEE Trans. on Information Theory (to appear), 2007.
[2] D. P. Bertsekas, editor. Nonlinear Programming. Athena Scientific, Belmont, MA, 1995.
[3] A. Globerson and T. Jaakkola. Convergent propagation algorithms via oriented trees. In UAI. 2007.
[4] J.K. Johnson, D.M. Malioutov, and A.S. Willsky. Lagrangian relaxation for map estimation in graphical
models. In Allerton Conf. Communication, Control and Computing, 2007.
[5] V. Kolmogorov. Convergent tree-reweighted message passing for energy minimization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(10):1568-1583, 2006.
[6] V. Kolmogorov and M. Wainwright. On the optimality of tree-reweighted max-product message passing. In 21st Conference on Uncertainty in Artificial Intelligence (UAI). 2005.
[7] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988.
[8] B. Taskar, S. Lacoste-Julien, and M. Jordan. Structured prediction, dual extragradient and Bregman projections. Journal of Machine Learning Research, pages 1627-1653, 2006.
[9] P.O. Vontobel and R. Koetter. Towards low-complexity linear-programming decoding. In Proc. 4th Int. Symposium on Turbo Codes and Related Topics, 2006.
[10] M. J. Wainwright, T. Jaakkola, and A. S. Willsky. MAP estimation via agreement on trees: message-passing and linear programming. IEEE Trans. on Information Theory, 51(11):1120-1146, 2005.
[11] Y. Weiss, C. Yanover, and T. Meltzer. MAP estimation, linear programming and belief propagation with convex free energies. In UAI. 2007.
[12] T. Werner. A linear programming approach to max-sum, a review. IEEE Trans. on PAMI, 2007.
[13] C. Yanover, T. Meltzer, and Y. Weiss. Linear programming relaxations and belief propagation - an empirical study. Journal of Machine Learning Research, 7:1887-1907, 2006.
[14] J.S. Yedidia, W.T. Freeman, and Y. Weiss. Constructing free-energy approximations and generalized belief propagation algorithms. IEEE Trans. on Information Theory, 51(7):2282-2312, 2005.

^7 Other solutions are possible but may not yield some of the properties of MPLP.
A Kernel Statistical Test of Independence
Arthur Gretton
MPI for Biological Cybernetics
Tübingen, Germany
[email protected]
Le Song
NICTA, ANU
and University of Sydney
[email protected]
Kenji Fukumizu
Inst. of Statistical Mathematics
Tokyo Japan
[email protected]
Bernhard Schölkopf
MPI for Biological Cybernetics
Tübingen, Germany
[email protected]
Choon Hui Teo
NICTA, ANU
Canberra, Australia
[email protected]
Alexander J. Smola
NICTA, ANU
Canberra, Australia
[email protected]
Abstract
Although kernel measures of independence have been widely applied in machine
learning (notably in kernel ICA), there is as yet no method to determine whether
they have detected statistically significant dependence. We provide a novel test of
the independence hypothesis for one particular kernel independence measure, the
Hilbert-Schmidt independence criterion (HSIC). The resulting test costs O(m^2),
where m is the sample size. We demonstrate that this test outperforms established
contingency table and functional correlation-based tests, and that this advantage
is greater for multivariate data. Finally, we show the HSIC test also applies to
text (and to structured data more generally), for which no other independence test
presently exists.
1 Introduction
Kernel independence measures have been widely applied in recent machine learning literature, most
commonly in independent component analysis (ICA) [2, 11], but also in fitting graphical models [1]
and in feature selection [22]. One reason for their success is that these criteria have a zero expected
value if and only if the associated random variables are independent, when the kernels are universal
(in the sense of [23]). There is presently no way to tell whether the empirical estimates of these
dependence measures indicate a statistically significant dependence, however. In other words, we
are interested in the threshold an empirical kernel dependence estimate must exceed, before we can
dismiss with high probability the hypothesis that the underlying variables are independent.
Statistical tests of independence have been associated with a broad variety of dependence measures.
Classical tests such as Spearman's \rho and Kendall's \tau are widely applied, however they are not guaranteed to detect all modes of dependence between the random variables. Contingency table-based methods, and in particular the power-divergence family of test statistics [17], are the best known general purpose tests of independence, but are limited to relatively low dimensions, since they require a partitioning of the space in which each random variable resides. Characteristic function-based tests [6, 13] have also been proposed, which are more general than kernel density-based tests
[19], although to our knowledge they have been used only to compare univariate random variables.
In this paper we present three main results: first, and most importantly, we show how to test whether
statistically significant dependence is detected by a particular kernel independence measure, the
Hilbert Schmidt independence criterion (HSIC, from [9]). That is, we provide a fast (O(m^2) for
sample size m) and accurate means of obtaining a threshold which HSIC will only exceed with
small probability, when the underlying variables are independent. Second, we show the distribution
of our empirical test statistic in the large sample limit can be straightforwardly parameterised in
terms of kernels on the data. Third, we apply our test to structured data (in this case, by establishing
the statistical dependence between a text and its translation). To our knowledge, ours is the first
independence test for structured data.
We begin our presentation in Section 2, with a short overview of cross-covariance operators between RKHSs and their Hilbert-Schmidt norms: the latter are used to define the Hilbert Schmidt
Independence Criterion (HSIC). In Section 3, we describe how to determine whether the dependence returned via HSIC is statistically significant, by proposing a hypothesis test with HSIC as its
statistic. In particular, we show that this test can be parameterised using a combination of covariance
operator norms and norms of mean elements of the random variables in feature space. Finally, in
Section 4, we give our experimental results, both for testing dependence between random vectors
(which could be used for instance to verify convergence in independent subspace analysis [25]),
and for testing dependence between text and its translation. Software to implement the test may be
downloaded from http://www.kyb.mpg.de/bs/people/arthur/indep.htm
2 Definitions and description of HSIC
Our problem setting is as follows:
Problem 1 Let P_{xy} be a Borel probability measure defined on a domain X \times Y, and let P_x and P_y be the respective marginal distributions on X and Y. Given an i.i.d sample Z := (X, Y) = \{(x_1, y_1), ..., (x_m, y_m)\} of size m drawn independently and identically distributed according to P_{xy}, does P_{xy} factorise as P_x P_y (equivalently, we may write x \perp y)?
We begin with a description of our kernel dependence criterion, leaving to the following section the question of whether this dependence is significant. This presentation is largely a review of material from [9, 11, 22], the main difference being that we establish links to the characteristic function-based independence criteria in [6, 13]. Let F be an RKHS, with the continuous feature mapping \phi(x) \in F from each x \in X, such that the inner product between the features is given by the kernel function k(x, x') := \langle \phi(x), \phi(x') \rangle. Likewise, let G be a second RKHS on Y with kernel l(\cdot, \cdot) and feature map \psi(y). Following [7], the cross-covariance operator C_{xy} : G \to F is defined such that for all f \in F and g \in G,

    \langle f, C_{xy} g \rangle_F = E_{xy} ( [f(x) - E_x(f(x))] [g(y) - E_y(g(y))] ) .

The cross-covariance operator itself can then be written

    C_{xy} := E_{xy} [ (\phi(x) - \mu_x) \otimes (\psi(y) - \mu_y) ],    (1)

where \mu_x := E_x \phi(x), \mu_y := E_y \psi(y), and \otimes is the tensor product [9, Eq. 6]: this is a generalisation of the cross-covariance matrix between random vectors. When F and G are universal reproducing kernel Hilbert spaces (that is, dense in the space of bounded continuous functions [23]) on the compact domains X and Y, then the largest singular value of this operator, \|C_{xy}\|, is zero if and only if x \perp y [11, Theorem 6]: the operator therefore induces an independence criterion, and can be used to solve Problem 1. The maximum singular value gives a criterion similar to that originally proposed in [18], but with more restrictive function classes (rather than functions of bounded variance). Rather than the maximum singular value, we may use the squared Hilbert-Schmidt norm (the sum of the squared singular values), which has a population expression

    HSIC(P_{xy}, F, G) = E_{xx'yy'} [k(x, x') l(y, y')] + E_{xx'} [k(x, x')] E_{yy'} [l(y, y')]
                         - 2 E_{xy} [ E_{x'} [k(x, x')] E_{y'} [l(y, y')] ]    (2)

(assuming the expectations exist), where x' denotes an independent copy of x [9, Lemma 1]: we call this the Hilbert-Schmidt independence criterion (HSIC).
We now address the problem of estimating HSIC(P_{xy}, F, G) on the basis of the sample Z. An unbiased estimator of (2) is a sum of three U-statistics [21, 22],

    HSIC(Z) = \frac{1}{(m)_2} \sum_{(i,j) \in i_2^m} k_{ij} l_{ij} + \frac{1}{(m)_4} \sum_{(i,j,q,r) \in i_4^m} k_{ij} l_{qr} - \frac{2}{(m)_3} \sum_{(i,j,q) \in i_3^m} k_{ij} l_{iq} ,    (3)

where (m)_n := \frac{m!}{(m-n)!}, the index set i_r^m denotes the set of all r-tuples drawn without replacement from the set \{1, ..., m\}, k_{ij} := k(x_i, x_j), and l_{ij} := l(y_i, y_j). For the purpose of testing independence, however, we will find it easier to use an alternative, biased empirical estimate [9, Definition 2], obtained by replacing the U-statistics with V-statistics^1
    HSIC_b(Z) = \frac{1}{m^2} \sum_{i,j}^m k_{ij} l_{ij} + \frac{1}{m^4} \sum_{i,j,q,r}^m k_{ij} l_{qr} - \frac{2}{m^3} \sum_{i,j,q}^m k_{ij} l_{iq} = \frac{1}{m^2} \mathrm{trace}(KHLH),    (4)

where the summation indices now denote all r-tuples drawn with replacement from \{1, ..., m\} (r being the number of indices below the sum), K is the m \times m matrix with entries k_{ij}, H = I - \frac{1}{m} 1 1^\top, and 1 is an m \times 1 vector of ones (the cost of computing this statistic is O(m^2)). When a Gaussian kernel k_{ij} := \exp(-\sigma^{-2} \|x_i - x_j\|^2) is used (or a kernel deriving from [6, Eq. 4.10]), the latter statistic is equivalent to the characteristic function-based statistic [6, Eq. 4.11] and the T_n^2 statistic of [13, p. 54]: details are reproduced in [10] for comparison. Our setting allows for more general kernels, however, such as kernels on strings (as in our experiments in Section 4) and graphs (see [20] for further details of kernels on structures): this is not possible under the characteristic function framework, which is restricted to Euclidean spaces (R^d in the case of [6, 13]). As pointed out in [6, Section 5], the statistic in (4) can also be linked to the original quadratic test of Rosenblatt [19] given an appropriate kernel choice; the main differences being that characteristic function-based tests (and RKHS-based tests) are not restricted to using kernel densities, nor should they reduce their kernel width with increasing sample size. Another related test described in [4] is based on the functional canonical correlation between F and G, rather than the covariance: in this sense the test statistic resembles those in [2]. The approach in [4] differs with both the present work and [2], however, in that the function spaces F and G are represented by finite sets of basis functions (specifically B-spline kernels) when computing the empirical test statistic.
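A minimal sketch (ours; the bandwidth rule is one common variant of the median heuristic mentioned in Section 4) of the biased statistic in Eq. 4:

```python
# Minimal sketch (ours) of Eq. 4: HSIC_b(Z) = trace(KHLH) / m^2, with a
# Gaussian kernel whose width follows one variant of the median heuristic.
import numpy as np
from scipy.spatial.distance import cdist

def gaussian_gram(x):
    d2 = cdist(x, x, "sqeuclidean")
    sigma = np.sqrt(np.median(d2[d2 > 0]))     # median inter-point distance
    return np.exp(-d2 / (2 * sigma ** 2))

def hsic_b(x, y):
    m = x.shape[0]
    K, L = gaussian_gram(x), gaussian_gram(y)
    H = np.eye(m) - np.ones((m, m)) / m        # centering matrix
    return np.trace(K @ H @ L @ H) / m ** 2
```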
3 Test description

We now describe a statistical test of independence for two random variables, based on the test statistic HSIC_b(Z). We begin with a more formal introduction to the framework and terminology of statistical hypothesis testing. Given the i.i.d. sample Z defined earlier, the statistical test, T(Z) : (X \times Y)^m \to \{0, 1\} is used to distinguish between the null hypothesis H_0 : P_{xy} = P_x P_y and the alternative hypothesis H_1 : P_{xy} \neq P_x P_y. This is achieved by comparing the test statistic, in our case HSIC_b(Z), with a particular threshold: if the threshold is exceeded, then the test rejects the null hypothesis (bearing in mind that a zero population HSIC indicates P_{xy} = P_x P_y). The acceptance region of the test is thus defined as any real number below the threshold. Since the test is based on a finite sample, it is possible that an incorrect answer will be returned: the Type I error is defined as the probability of rejecting H_0 based on the observed sample, despite x and y being independent. Conversely, the Type II error is the probability of accepting P_{xy} = P_x P_y when the underlying variables are dependent. The level \alpha of a test is an upper bound on the Type I error, and is a design parameter of the test, used to set the test threshold. A consistent test achieves a level \alpha, and a Type II error of zero, in the large sample limit.

How, then, do we set the threshold of the test given \alpha? The approach we adopt here is to derive the asymptotic distribution of the empirical estimate HSIC_b(Z) of HSIC(P_{xy}, F, G) under H_0. We then use the 1 - \alpha quantile of this distribution as the test threshold.^2 Our presentation in this section is therefore divided into two parts. First, we obtain the distribution of HSIC_b(Z) under both H_0 and H_1; the latter distribution is also needed to ensure consistency of the test. We shall see, however, that the null distribution has a complex form, and cannot be evaluated directly. Thus, in the second part of this section, we describe ways to accurately approximate the 1 - \alpha quantile of this distribution.

Asymptotic distribution of HSIC_b(Z)  We now describe the distribution of the test statistic in (4). The first theorem holds under H_1.

^1 The U- and V-statistics differ in that the latter allow indices of different sums to be equal.
^2 An alternative would be to use a large deviation bound, as provided for instance by [9] based on Hoeffding's inequality. It has been reported in [8], however, that such bounds are generally too loose for hypothesis testing.
Theorem 1 Let

    h_{ijqr} = \frac{1}{4!} \sum_{(t,u,v,w)}^{(i,j,q,r)} k_{tu} l_{tu} + k_{tu} l_{vw} - 2 k_{tu} l_{tv} ,    (5)

where the sum represents all ordered quadruples (t, u, v, w) drawn without replacement from (i, j, q, r), and assume E h^2 < \infty. Under H_1, HSIC_b(Z) converges in distribution as m \to \infty to a Gaussian according to

    m^{1/2} ( HSIC_b(Z) - HSIC(P_{xy}, F, G) ) \xrightarrow{D} N(0, \sigma_u^2) .    (6)

The variance is \sigma_u^2 = 16 ( E_i [ (E_{j,q,r} h_{ijqr})^2 ] - HSIC(P_{xy}, F, G)^2 ), where E_{j,q,r} := E_{z_j, z_q, z_r}.

Proof We first rewrite (4) as a single V-statistic,

    HSIC_b(Z) = \frac{1}{m^4} \sum_{i,j,q,r}^m h_{ijqr} ,    (7)

where we note that h_{ijqr} defined in (5) does not change with permutation of its indices. The associated U-statistic HSIC_s(Z) converges in distribution as (6) with variance \sigma_u^2 [21, Theorem 5.5.1(A)]: see [22]. Since the difference between HSIC_b(Z) and HSIC_s(Z) drops as 1/m (see [9], or Theorem 3 below), HSIC_b(Z) converges asymptotically to the same distribution.
The second theorem applies under H_0.

Theorem 2 Under H_0, the U-statistic HSIC_s(Z) corresponding to the V-statistic in (7) is degenerate, meaning E_i h_{ijqr} = 0. In this case, HSIC_b(Z) converges in distribution according to [21, Section 5.5.2]

    m \, HSIC_b(Z) \xrightarrow{D} \sum_{l=1}^{\infty} \lambda_l z_l^2 ,    (8)

where z_l \sim N(0, 1) i.i.d., and \lambda_l are the solutions to the eigenvalue problem

    \lambda_l \psi_l(z_j) = \int h_{ijqr} \psi_l(z_i) \, dF_{i,q,r} ,

where the integral is over the distribution of variables z_i, z_q, and z_r.

Proof This follows from the discussion of [21, Section 5.5.2], making appropriate allowance for the fact that we are dealing with a V-statistic (which is why the terms in (8) are not centred: in the case of a U-statistic, the sum would be over terms \lambda_l (z_l^2 - 1)).
Approximating the 1 - \alpha quantile of the null distribution  A hypothesis test using HSIC_b(Z) could be derived from Theorem 2 above by computing the (1 - \alpha)th quantile of the distribution (8), where consistency of the test (that is, the convergence to zero of the Type II error for m \to \infty) is guaranteed by the decay as m^{-1} of the variance of HSIC_b(Z) under H_1. The distribution under H_0 is complex, however: the question then becomes how to accurately approximate its quantiles.

One approach, taken by [6], is to use a Monte Carlo resampling technique: the ordering of the Y sample is permuted repeatedly while that of X is kept fixed, and the 1 - \alpha quantile is obtained from the resulting distribution of HSIC_b values. This can be very expensive, however. A second approach, suggested in [13, p. 34], is to approximate the null distribution as a two-parameter Gamma distribution [12, p. 343, p. 359]: this is one of the more straightforward approximations of an infinite sum of \chi^2 variables (see [12, Chapter 18.8] for further ways to approximate such distributions; in particular, we wish to avoid using moments of order greater than two, since these can become expensive to compute). Specifically, we make the approximation

    m \, HSIC_b(Z) \approx \frac{x^{\alpha - 1} e^{-x/\beta}}{\beta^\alpha \Gamma(\alpha)}   where   \alpha = \frac{(E(HSIC_b(Z)))^2}{\mathrm{var}(HSIC_b(Z))} ,   \beta = \frac{m \, \mathrm{var}(HSIC_b(Z))}{E(HSIC_b(Z))} .    (9)
[Figure 1: m HSIC_b cumulative distribution function (Emp) under H_0 for m = 200, obtained empirically using 5000 independent draws of m HSIC_b. The two-parameter Gamma distribution (Gamma) is fit using \alpha = 1.17 and \beta = 8.3 x 10^{-4} in (9), with mean and variance computed via Theorems 3 and 4.]

An illustration of the cumulative distribution function (CDF) obtained via the Gamma approximation is given in Figure 1, along with an empirical CDF obtained by repeated draws of HSIC_b. We note the Gamma approximation is quite accurate, especially in areas of high probability (which we use to compute the test quantile). The accuracy of this approximation will be further evaluated experimentally in Section 4.

To obtain the Gamma distribution from our observations, we need empirical estimates for E(HSIC_b(Z)) and var(HSIC_b(Z)) under the null hypothesis. Expressions for these quantities are given in [13, pp. 26-27], however these are in terms of the joint and marginal characteristic functions, and not in our more general kernel setting (see also [14, p. 313]). In the following two theorems, we provide much simpler expressions for both quantities, in terms of norms of mean elements \mu_x and \mu_y, and the covariance operators C_{xx} := E_x [ (\phi(x) - \mu_x) \otimes (\phi(x) - \mu_x) ] and C_{yy}, in feature space. The main advantage of our new expressions is that they are computed entirely in terms of kernels, which makes possible the application of the test to any domains on which kernels can be defined, and not only R^d.
Theorem 3 Under H_0,

    E(HSIC_b(Z)) = \frac{1}{m} \mathrm{Tr} C_{xx} \, \mathrm{Tr} C_{yy} = \frac{1}{m} ( 1 + \|\mu_x\|^2 \|\mu_y\|^2 - \|\mu_x\|^2 - \|\mu_y\|^2 ) ,    (10)

where the second equality assumes k_{ii} = l_{ii} = 1. An empirical estimate of this statistic is obtained by replacing the norms above with \|\hat{\mu}_x\|^2 = (m)_2^{-1} \sum_{(i,j) \in i_2^m} k_{ij}, bearing in mind that this results in a (generally negligible) bias of O(m^{-1}) in the estimate of \|\mu_x\|^2 \|\mu_y\|^2.
Theorem 4 Under H_0,

    \mathrm{var}(HSIC_b(Z)) = \frac{2(m-4)(m-5)}{(m)_4} \|C_{xx}\|_{HS}^2 \|C_{yy}\|_{HS}^2 + O(m^{-3}) .

Denoting by \odot the entrywise matrix product, A^{\odot 2} the entrywise matrix power, and B = ((HKH) \odot (HLH))^{\odot 2}, an empirical estimate with negligible bias may be found by replacing the product of covariance operator norms with 1^\top (B - \mathrm{diag}(B)) 1: this is slightly more efficient than taking the product of the empirical operator norms (although the scaling with m is unchanged).

Proofs of both theorems may be found in [10], where we also compare with the original characteristic function-based expressions in [13]. We remark that these parameters, like the original test statistic in (4), may be computed in O(m^2).
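Putting the pieces together, a sketch (ours, not the authors' released code) of the Gamma-approximation test HSICg follows: the null mean follows Theorem 3, the null variance uses the product of empirical operator norms permitted by Theorem 4 (the slightly less efficient variant), and the threshold is the 1 - \alpha Gamma quantile from Eq. 9; `gaussian_gram` is the function from the earlier sketch.

```python
# Sketch (ours) of the HSICg test. Assumes k_ii = l_ii = 1 (true for the
# Gaussian kernel), as required by the second equality in Theorem 3.
import numpy as np
from scipy.stats import gamma

def hsic_gamma_test(x, y, level=0.05):
    m = x.shape[0]
    K, L = gaussian_gram(x), gaussian_gram(y)     # from the earlier sketch
    H = np.eye(m) - np.ones((m, m)) / m
    Kc, Lc = H @ K @ H, H @ L @ H
    test_stat = np.trace(Kc @ Lc) / m             # m * HSIC_b(Z)
    # Theorem 3: null mean of HSIC_b via ||mu_x||^2, ||mu_y||^2 estimates
    mux2 = (K.sum() - np.trace(K)) / (m * (m - 1))
    muy2 = (L.sum() - np.trace(L)) / (m * (m - 1))
    mean0 = (1.0 + mux2 * muy2 - mux2 - muy2) / m
    # Theorem 4: null variance, with empirical HS norms ||HKH||_F^2 / m^2
    cxx, cyy = (Kc ** 2).sum() / m ** 2, (Lc ** 2).sum() / m ** 2
    m4 = m * (m - 1) * (m - 2) * (m - 3)
    var0 = 2.0 * (m - 4) * (m - 5) / m4 * cxx * cyy
    # Eq. 9: two-parameter Gamma fit to the null distribution of m * HSIC_b
    alpha, beta = mean0 ** 2 / var0, m * var0 / mean0
    threshold = gamma.ppf(1 - level, alpha, scale=beta)
    return test_stat > threshold, test_stat, threshold
```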
4 Experiments
General tests of statistical independence are most useful for data having complex interactions that
simple correlation does not detect. We investigate two cases where this situation arises: first, we
test vectors in Rd which have a dependence relation but no correlation, as occurs in independent
subspace analysis; and second, we study the statistical dependence between a text and its translation.
Independence of subspaces One area where independence tests have been applied is in determining the convergence of algorithms for independent component analysis (ICA), which involves
separating random variables that have been linearly mixed, using only their mutual independence.
ICA generally entails optimisation over a non-convex function (including when HSIC is itself the
optimisation criterion [9]), and is susceptible to local minima, hence the need for these tests (in fact,
for classical approaches to ICA, the global minimum of the optimisation might not correspond to
independence for certain source distributions). Contingency table-based tests have been applied [15]
in this context, while the test of [13] has been used in [14] for verifying ICA outcomes when the
data are stationary random processes (through using a subset of samples with a sufficiently large
delay between them). Contingency table-based tests may be less useful in the case of independent
subspace analysis (ISA, see e.g. [25] and its bibliography), where higher dimensional independent
random vectors are to be separated. Thus, characteristic function-based tests [6, 13] and kernel
independence measures might work better for this problem.
In our experiments, we tested the independence of random vectors, as a way of verifying the solutions of independent subspace analysis. We assumed for ease of presentation that our subspaces
have respective dimension d_x = d_y = d, but this is not required. The data were constructed as follows. First, we generated m samples of two univariate random variables, each drawn at random from the ICA benchmark densities in [11, Table 3]: these include super-Gaussian, sub-Gaussian, multimodal, and unimodal distributions. Second, we mixed these random variables using a rotation matrix parameterised by an angle \theta, varying from 0 to \pi/4 (a zero angle means the data are independent, while dependence becomes easier to detect as the angle increases to \pi/4: see the two plots in Figure 2, top left). Third, we appended d - 1 dimensional Gaussian noise of zero mean and unit standard deviation to each of the mixtures. Finally, we multiplied each resulting vector by an independent random d-dimensional orthogonal matrix, to obtain vectors dependent across all observed dimensions. We emphasise that classical approaches (such as Spearman's \rho or Kendall's \tau) are completely unable to find this dependence, since the variables are uncorrelated; nor can we recover the subspace in which the variables are dependent using PCA, since this subspace has the same second order properties as the noise. We investigated sample sizes m = 128, 512, 1024, 2048, and d = 1, 2, 4.
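A sketch of this construction (ours; we substitute a simple two-Gaussian mixture for the benchmark densities of [11, Table 3], so the source law is an assumption):

```python
# Sketch (ours) of the dependent-subspace construction; a two-component
# Gaussian mixture stands in for the ICA benchmark densities of [11].
import numpy as np

def isa_pair(m, d, angle, rng):
    s = rng.choice([-1.0, 1.0], size=(m, 2)) + 0.3 * rng.standard_normal((m, 2))
    c, sn = np.cos(angle), np.sin(angle)
    mixed = s @ np.array([[c, -sn], [sn, c]]).T        # rotate by `angle`
    x = np.column_stack([mixed[:, :1], rng.standard_normal((m, d - 1))])
    y = np.column_stack([mixed[:, 1:], rng.standard_normal((m, d - 1))])
    qx, _ = np.linalg.qr(rng.standard_normal((d, d)))  # random orthogonal
    qy, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return x @ qx, y @ qy
```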
We compared two different methods for computing the 1 ? ? quantile of the HSIC null distribution:
repeated random permutation of the Y sample ordering as in [6] (HSICp), where we used 200 permutations; and Gamma approximation (HSICg) as in [13], based on (9). We used a Gaussian kernel,
with kernel size set to the median distance between points in input space. We also compared with
two alternative tests, the first based on a discretisation of the variables, and the second on functional
canonical correlation. The discretisation based test was a power-divergence contingency table test
from [17] (PD), which consisted in partitioning the space, counting the number of samples falling
in each partition, and comparing this with the number of samples that would be expected under the
null hypothesis (the test we used, described in [15], is more refined than this short description would
suggest). Rather than a uniform space partitioning, we divided our space into roughly equiprobable
bins as in [15], using a Gessaman partition for higher dimensions [5, Figure 21.4] (Ku and Fine did
not specify a space partitioning strategy for higher dimensions, since they dealt only with univariate
random variables). All remaining parameters were set according to [15]. The functional correlationbased test (fCorr) is described in [4]: the main differences with respect to our test are that it uses
the spectrum of the functional correlation operator, rather than the covariance operator; and that it
approximates the RKHSs F and G by finite sets of basis functions. Parameter settings were as in
[4, Table 1], with the second order B-spline kernel and a twofold dyadic partitioning. Note that
fCorr applies only in the univariate case. Results are plotted in Figure 2 (average over 500 repetitions). The y-intercept on these plots corresponds to the acceptance rate of H0 at independence, or
1 ? (Type I error), and should be close to the design parameter of 1 ? ? = 0.95. Elsewhere, the
plots indicate acceptance of H0 where the underlying variables are dependent, i.e. the Type II error.
As expected, we observe that dependence becomes easier to detect as ? increases from 0 to ?/4,
when m increases, and when d decreases. The PD and fCorr tests perform poorly at m = 128,
but approach the performance of HSIC-based tests for increasing m (although PD remains slightly
worse than HSIC at m = 512 and d = 1, while fCorr becomes slightly worse again than PD). PD
also scales very badly with d, and never rejects the null hypothesis when d = 4, even for m = 2048.
Although HSIC-based tests are unreliable for small θ, they generally do well as θ approaches π/4
(besides m = 128, d = 2). We also emphasise that HSICp and HSICg perform identically, although
HSICp is far more costly (by a factor of around 100, given the number of permutations used).
Dependence and independence between text In this section, we demonstrate independence testing on text.
Our data are taken from the Canadian Hansard corpus
(http://www.isi.edu/natural-language/download/hansard/). These consist of the official records of the 36th Canadian parliament, in English and French. We used debate transcripts
on the three topics of Agriculture, Fisheries, and Immigration, due to the relatively large volume of
data in these categories. Our goal was to test whether there exists a statistical dependence between
[Figure 2 appears here. Top left: example datasets for rotations θ = π/8 and θ = π/4. Remaining panels: rate of acceptance of H0 versus angle (θ · 4/π) for the PD, fCorr, HSICp, and HSICg tests, at sample sizes m = 128, 512, 1024, 2048 and dimensions d = 1, 2, 4.]
Figure 2: Top left plots: Example dataset for d = 1, m = 200, and rotation angles θ = π/8 (left) and θ = π/4
(right). In this case, both sources are mixtures of two Gaussians (source (g) in [11, Table 3]). We remark that
the random variables appear "more dependent" as the angle θ increases, although their correlation is always
zero. Remaining plots: Rate of acceptance of H0 for the PD, fCorr, HSICp, and HSICg tests. "Samp" is the
number m of samples, and "dim" is the dimension d of x and y.
English text and its French translation. Our dependent data consisted of a set of paragraph-long
(5 line) English extracts and their French translations. For our independent data, the English paragraphs were matched to random French paragraphs on the same topic: for instance, an English
paragraph on fisheries would always be matched with a French paragraph on fisheries. This was
designed to prevent a simple vocabulary check from being used to tell when text was mismatched.
We also ignored lines shorter than five words long, since these were not always part of the text (e.g.
identification of the person speaking). We used the k-spectrum kernel of [16], computed according
to the method of [24]. We set k = 10 for both languages, where this was chosen by cross-validating
on an SVM classifier for Fisheries vs National Defense, separately for each language (performance
was not especially sensitive to choice of k; k = 5 also worked well). We compared this kernel with
a simple kernel between bags of words [3, pp. 186–189]. Results are in Table 1.
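For illustration, a k-spectrum kernel between two strings can be computed by counting shared k-mers. The sketch below is a naive version of that idea, not the suffix-array method of [24], and the function name and example strings are our own.

```python
from collections import Counter

def k_spectrum_kernel(s: str, t: str, k: int = 10) -> int:
    # k(s, t) = sum over all k-mers u of count_u(s) * count_u(t).
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    # Iterate over the smaller count map for efficiency.
    if len(ct) < len(cs):
        cs, ct = ct, cs
    return sum(c * ct[u] for u, c in cs.items() if u in ct)

# Related strings share many k-mers; unrelated ones share few.
print(k_spectrum_kernel("the fisheries committee", "the fisheries minister", k=5))
```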
Our results demonstrate the excellent performance of the HSICp test on this task: even for small
sample sizes, HSICp with a spectral kernel always achieves zero Type II error, and a Type I error
close to the design value (0.95). We further observe for m = 10 that HSICp with the spectral kernel
always has better Type II error than the bag-of words kernel. This suggests that a kernel with a more
sophisticated encoding of text structure induces a more sensitive test, although for larger sample
sizes, the advantage vanishes. The HSICg test does less well on this data, always accepting H0 for
m = 10, and returning a Type I error of zero, rather than the design value of 5%, when m = 50. It
appears that this is due to a very low variance estimate returned by the Theorem 4 expression, which
could be caused by the high diagonal dominance of kernels on strings. Thus, while the test threshold
for HSICg at m = 50 still fell between the dependent and independent values of HSICb , this was
not the result of an accurate modelling of the null distribution. We would therefore recommend the
permutation approach for this problem. Finally, we also tried testing with 2-line extracts and 10-line
extracts, which yielded similar results.
5 Conclusion
We have introduced a test of whether significant statistical dependence is obtained by a kernel dependence measure, the Hilbert-Schmidt independence criterion (HSIC). Our test costs O(m²) for sample size m. In our experiments, HSIC-based tests always outperformed the contingency table [17]
and functional correlation [4] approaches, for both univariate random variables and higher dimensional vectors which were dependent but uncorrelated. We would therefore recommend HSIC-based
tests for checking the convergence of independent component analysis and independent subspace
analysis. Finally, our test also applies on structured domains, being able to detect the dependence
Table 1: Independence tests for cross-language dependence detection. Topics are in the first column, where the
total number of 5-line extracts for each dataset is in parentheses. BOW(10) denotes a bag of words kernel and
m = 10 sample size, Spec(50) is a k-spectrum kernel with m = 50. The first entry in each cell is the null
acceptance rate of the test under H0 (i.e. 1 − (Type I error); should be near 0.95); the second entry is the null
acceptance rate under H1 (the Type II error, small is better). Each entry is an average over 300 repetitions.
Topic         BOW(10)         Spec(10)        BOW(50)         Spec(50)
              HSICg   HSICp   HSICg   HSICp   HSICg   HSICp   HSICg   HSICp
Agriculture   1.00,   0.94,   1.00,   0.95,   1.00,   0.93,   1.00,   0.95,
(555)         0.99    0.18    1.00    0.00    0.00    0.00    0.00    0.00
Fisheries     1.00,   0.94,   1.00,   0.94,   1.00,   0.93,   1.00,   0.95,
(408)         1.00    0.20    1.00    0.00    0.00    0.00    0.00    0.00
Immigration   1.00,   0.96,   1.00,   0.91,   0.99,   0.94,   1.00,   0.95,
(289)         1.00    0.09    1.00    0.00    0.00    0.00    0.00    0.00
of passages of text and their translation. Another application along these lines might be in testing
dependence between data of completely different types, such as images and captions.
Acknowledgements: NICTA is funded through the Australian Government's Backing Australia's
Ability initiative, in part through the ARC. This work was supported in part by the IST Programme
of the European Community, under the PASCAL Network of Excellence, IST-2002-506778.
References
[1] F. Bach and M. Jordan. Tree-dependent component analysis. In UAI 18, 2002.
[2] F. R. Bach and M. I. Jordan. Kernel independent component analysis. J. Mach. Learn. Res., 3:1–48, 2002.
[3] I. Calvino. If on a winter's night a traveler. Harvest Books, Florida, 1982.
[4] J. Dauxois and G. M. Nkiet. Nonlinear canonical analysis and independence tests. Ann. Statist., 26(4):1254–1278, 1998.
[5] L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Number 31 in Applications of Mathematics. Springer, New York, 1996.
[6] Andrey Feuerverger. A consistent test for bivariate dependence. International Statistical Review, 61(3):419–433, 1993.
[7] K. Fukumizu, F. R. Bach, and M. I. Jordan. Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. Journal of Machine Learning Research, 5:73–99, 2004.
[8] A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola. A kernel method for the two-sample-problem. In NIPS 19, pages 513–520, Cambridge, MA, 2007. MIT Press.
[9] A. Gretton, O. Bousquet, A. J. Smola, and B. Schölkopf. Measuring statistical dependence with Hilbert-Schmidt norms. In ALT, pages 63–77, 2005.
[10] A. Gretton, K. Fukumizu, C.-H. Teo, L. Song, B. Schölkopf, and A. Smola. A kernel statistical test of independence. Technical Report 168, MPI for Biological Cybernetics, 2008.
[11] A. Gretton, R. Herbrich, A. Smola, O. Bousquet, and B. Schölkopf. Kernel methods for measuring independence. J. Mach. Learn. Res., 6:2075–2129, 2005.
[12] N. L. Johnson, S. Kotz, and N. Balakrishnan. Continuous Univariate Distributions. Volume 1 (Second Edition). John Wiley and Sons, 1994.
[13] A. Kankainen. Consistent Testing of Total Independence Based on the Empirical Characteristic Function. PhD thesis, University of Jyväskylä, 1995.
[14] Juha Karvanen. A resampling test for the total independence of stationary time series: Application to the performance evaluation of ICA algorithms. Neural Processing Letters, 22(3):311–324, 2005.
[15] C.-J. Ku and T. Fine. Testing for stochastic independence: application to blind source separation. IEEE Transactions on Signal Processing, 53(5):1815–1826, 2005.
[16] C. Leslie, E. Eskin, and W. S. Noble. The spectrum kernel: A string kernel for SVM protein classification. In Pacific Symposium on Biocomputing, pages 564–575, 2002.
[17] T. Read and N. Cressie. Goodness-Of-Fit Statistics for Discrete Multivariate Analysis. Springer-Verlag, New York, 1988.
[18] A. Rényi. On measures of dependence. Acta Math. Acad. Sci. Hungar., 10:441–451, 1959.
[19] M. Rosenblatt. A quadratic measure of deviation of two-dimensional density estimates and a test of independence. The Annals of Statistics, 3(1):1–14, 1975.
[20] B. Schölkopf, K. Tsuda, and J.-P. Vert. Kernel Methods in Computational Biology. MIT Press, 2004.
[21] R. Serfling. Approximation Theorems of Mathematical Statistics. Wiley, New York, 1980.
[22] L. Song, A. Smola, A. Gretton, K. Borgwardt, and J. Bedo. Supervised feature selection via dependence estimation. In Proc. Intl. Conf. Machine Learning, pages 823–830. Omnipress, 2007.
[23] I. Steinwart. The influence of the kernel on the consistency of support vector machines. Journal of Machine Learning Research, 2, 2002.
[24] C. H. Teo and S. V. N. Vishwanathan. Fast and space efficient string kernels using suffix arrays. In ICML, pages 929–936, 2006.
[25] F. J. Theis. Towards a general independent subspace analysis. In NIPS 19, 2007.
2,428 | 3,202 | PSVM: Parallelizing Support Vector Machines
on Distributed Computers
Edward Y. Chang?, Kaihua Zhu, Hao Wang, Hongjie Bai,
Jian Li, Zhihuan Qiu, & Hang Cui
Google Research, Beijing, China
Abstract
Support Vector Machines (SVMs) suffer from a widely recognized scalability
problem in both memory use and computational time. To improve scalability,
we have developed a parallel SVM algorithm (PSVM), which reduces memory
use through performing a row-based, approximate matrix factorization, and which
loads only essential data to each machine to perform parallel computation. Let n
denote the number of training instances, p the reduced matrix dimension after
factorization (p is significantly smaller than n), and m the number of machines.
PSVM reduces the memory requirement from O(n²) to O(np/m), and improves
computation time to O(np²/m). Empirical study shows PSVM to be effective.
PSVM Open Source is available for download at http://code.google.com/p/psvm/.
1 Introduction
Let us examine the resource bottlenecks of SVMs in a binary classification setting to explain our
proposed solution. Given a set of training data X = {(xi, yi) | xi ∈ R^d, i = 1, ..., n}, where xi is an observation vector, yi ∈ {−1, 1} is the class label of xi, and n is the size of X, we apply SVMs on X to
train a binary classifier. SVMs aim to find a hyperplane in the Reproducing Kernel Hilbert Space
(RKHS) that maximizes the margin between the two classes of data in X with the smallest training error (Vapnik, 1995). This problem can be formulated as the following quadratic optimization
problem:
$$\min \; P(w, b, \xi) = \frac{1}{2}\|w\|_2^2 + C \sum_{i=1}^{n} \xi_i \qquad (1)$$
$$\text{s.t.}\quad 1 - y_i (w^T \phi(x_i) + b) \le \xi_i, \quad \xi_i > 0,$$
where w is a weighting vector, b is a threshold, C a regularization hyperparameter, and φ(·) a basis
function which maps xi to an RKHS space. The decision function of SVMs is f(x) = w^T φ(x) + b,
where the w and b are attained by solving P in (1). The optimization problem in (1) is the primal
formulation of SVMs. It is hard to solve P directly, partly because the explicit mapping via φ(·)
can make the problem intractable and partly because the mapping function φ(·) is often unknown.
The method of Lagrangian multipliers is thus introduced to transform the primal formulation into
the dual one
$$\min \; D(\alpha) = \frac{1}{2}\alpha^T Q \alpha - \alpha^T \mathbf{1} \qquad (2)$$
$$\text{s.t.}\quad 0 \le \alpha \le C, \quad y^T \alpha = 0,$$
where $[Q]_{ij} = y_i y_j \phi^T(x_i)\phi(x_j)$, and $\alpha \in \mathbb{R}^n$ is the Lagrangian multiplier variable (or dual
variable). The weighting vector w is related to $\alpha$ by $w = \sum_{i=1}^{n} \alpha_i \phi(x_i)$.
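To make the dual concrete, the sketch below builds Q from a Gaussian kernel and solves (2) with a generic constrained optimizer. This is purely illustrative of the formulation, assuming small n; it is not how PSVM (or any production SVM solver) works, and the kernel parameter and function name are our own.

```python
import numpy as np
from scipy.optimize import minimize

def solve_dual(X, y, C=1.0, gamma=0.5):
    # Q_ij = y_i y_j K(x_i, x_j) with a Gaussian kernel K.
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    Q = np.outer(y, y) * np.exp(-gamma * sq)
    n = len(y)
    obj = lambda a: 0.5 * a @ Q @ a - a.sum()
    grad = lambda a: Q @ a - 1.0
    res = minimize(obj, np.zeros(n), jac=grad, method='SLSQP',
                   bounds=[(0.0, C)] * n,
                   constraints=[{'type': 'eq', 'fun': lambda a: y @ a}])
    return res.x  # dual variables alpha
```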
* This work was initiated in 2005 when the author was a professor at UCSB.
The dual formulation D(α) requires an inner product of φ(xi) and φ(xj). SVMs utilize the kernel
trick by specifying a kernel function to define the inner product K(xi, xj) = φ^T(xi)φ(xj). We
thus can rewrite [Q]ij as yi yj K(xi, xj). When the given kernel function K is psd (positive semidefinite), the dual problem D(α) is a convex Quadratic Programming (QP) problem with linear
constraints, which can be solved via the Interior-Point method (IPM) (Mehrotra, 1992). Both the
computational and memory bottlenecks of SVM training are the IPM solver applied to the dual formulation of SVMs in (2).
Currently, the most effective IPM algorithm is the primal-dual IPM (Mehrotra, 1992). The principal
idea of the primal-dual IPM is to remove inequality constraints using a barrier function and then
resort to the iterative Newton's method to solve the KKT linear system related to the Hessian matrix
Q in D(α). The computational cost is O(n³) and the memory usage O(n²).
In this work, we propose a parallel SVM algorithm (PSVM) to reduce memory use and to parallelize
both data loading and computation. Given n training instances each with d dimensions, PSVM first
loads the training data in a round-robin fashion onto m machines. The memory requirement per
machine is O(nd/m). Next, PSVM performs a parallel row-based Incomplete Cholesky Factorization (ICF) on the loaded data. At the end of parallel ICF, each machine stores only a fraction
of the factorized matrix, which takes up space of O(np/m), where p is the column dimension of
the factorized matrix. (Typically, p can be set to about √n without noticeably degrading training accuracy.) PSVM reduces memory use of IPM from O(n²) to O(np/m), where p/m is much
smaller than n. PSVM then performs parallel IPM to solve the quadratic optimization problem
in (2). The computation time is improved from about O(n²) of a decomposition-based algorithm
(e.g., SVMLight (Joachims, 1998), LIBSVM (Chang & Lin, 2001), SMO (Platt, 1998), and SimpleSVM (Vishwanathan et al., 2003)) to O(np²/m). This work's main contributions are: (1) PSVM
achieves memory reduction and computation speedup via a parallel ICF algorithm and parallel IPM.
(2) PSVM handles kernels (in contrast to other algorithmic approaches (Joachims, 2006; Chu et al.,
2006)). (3) We have implemented PSVM on our parallel computing infrastructures. PSVM effectively speeds up training time for large-scale tasks while maintaining high training accuracy.
PSVM is a practical, parallel approximate implementation to speed up SVM training on today's
distributed computing infrastructures for dealing with Web-scale problems. What we do not claim
are as follows: (1) We make no claim that PSVM is the sole solution to speed up SVMs. Algorithmic
approaches such as (Lee & Mangasarian, 2001; Tsang et al., 2005; Joachims, 2006; Chu et al.,
2006) can be more effective when memory is not a constraint or kernels are not used. (2) We do not
claim that the algorithmic approach is the only avenue to speed up SVM training. Data-processing
approaches such as (Graf et al., 2005) can divide a serial algorithm (e.g., LIBSVM) into subtasks
on subsets of training data to achieve good speedup. (Data-processing and algorithmic approaches
complement each other, and can be used together to handle large-scale training.)
2 PSVM Algorithm
The key step of PSVM is parallel ICF (PICF). Traditional column-based ICF (Fine & Scheinberg,
2001; Bach & Jordan, 2005) can reduce computational cost, but the initial memory requirement
is O(np), and hence not practical for very large data set. PSVM devises parallel row-based ICF
(PICF) as its initial step, which loads training instances onto parallel machines and performs factorization simultaneously on these machines. Once PICF has loaded n training data distributedly on m
machines, and reduced the size of the kernel matrix through factorization, IPM can be solved on parallel machines simultaneously. We present PICF first, and then describe how IPM takes advantage
of PICF.
2.1 Parallel ICF
ICF can approximate Q (Q ∈ R^{n×n}) by a smaller matrix H (H ∈ R^{n×p}, p ≪ n), i.e., Q ≈
HH^T. ICF, together with SMW (the Sherman-Morrison-Woodbury formula), can greatly reduce
the computational complexity in solving an n × n linear system. The work of (Fine & Scheinberg,
2001) provides a theoretical analysis of how ICF influences the optimization problem in Eq. (2). The
authors proved that the error of the optimal objective value introduced by ICF is bounded by C²lε/2,
where C is the hyperparameter of SVM, l is the number of support vectors, and ε is the bound of
Algorithm 1 Row-based PICF
Input: n training instances; p: rank of ICF matrix H; m: number of machines
Output: H distributed on m machines
Variables:
v: fraction of the diagonal vector of Q that resides in the local machine
k: iteration number
xi: the i-th training instance
M: machine index set, M = {0, 1, ..., m − 1}
Ic: row-index set on machine c (c ∈ M), Ic = {c, c + m, c + 2m, ...}
 1: for i = 0 to n − 1 do
 2:   Load xi into machine (i mod m).
 3: end for
 4: k ← 0; H ← 0; v ← the fraction of the diagonal vector of Q that resides in the local machine (v(i) for i ∈ Ic can be obtained from xi).
 5: Initialize the master to be machine 0.
 6: while k < p do
 7:   Each machine c ∈ M selects its local pivot value, which is the largest element in v:
        lpv_{k,c} = max_{i ∈ Ic} v(i),
      and records the local pivot index, the row index corresponding to lpv_{k,c}:
        lpi_{k,c} = arg max_{i ∈ Ic} v(i).
 8:   Gather the lpv_{k,c}'s and lpi_{k,c}'s (c ∈ M) to the master.
 9:   The master selects the largest local pivot value as the global pivot value gpv_k and records in i_k the row index corresponding to the global pivot value:
        gpv_k = max_{c ∈ M} lpv_{k,c}.
10:   The master broadcasts gpv_k and i_k.
11:   Change the master to machine (i_k mod m).
12:   Calculate H(i_k, k) according to (3) on the master.
13:   The master broadcasts the pivot instance x_{i_k} and the pivot row H(i_k, :). (Only the first k + 1 values of the pivot row need to be broadcast, since the remainder are zeros.)
14:   Each machine c ∈ M calculates its part of the k-th column of H according to (4).
15:   Each machine c ∈ M updates v according to (5).
16:   k ← k + 1
17: end while
ICF approximation (i.e., tr(Q − HH^T) < ε). Experimental results in Section 3 show that when p is
set to √n, the error can be negligible.
Our row-based parallel ICF (PICF) works as follows: Let vector v be the diagonal of Q and suppose
the pivots (the largest diagonal values) are {i1, i2, ..., ik}; the k-th iteration of ICF computes three
equations:
$$H(i_k, k) = \sqrt{v(i_k)} \qquad (3)$$
$$H(J_k, k) = \Big(Q(J_k, k) - \sum_{j=1}^{k-1} H(J_k, j)H(i_k, j)\Big) \big/ H(i_k, k) \qquad (4)$$
$$v(J_k) = v(J_k) - H(J_k, k)^2, \qquad (5)$$
where Jk denotes the complement of {i1, i2, ..., ik}. The algorithm iterates until the approximation
of Q by Hk Hk^T (measured by trace(Q − Hk Hk^T)) is satisfactory, or the predefined maximum
number of iterations (i.e., the desired rank p of the ICF matrix) is reached.
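A serial sketch of the per-iteration updates (3)-(5) is below; in PICF the rows of H, v, and the training data are partitioned across machines, with only the pivot row and pivot instance broadcast each iteration. The kernel choice and function names here are our own illustration, not part of the paper.

```python
import numpy as np

def icf(X, p, kernel):
    # Serial incomplete Cholesky: Q ~= H H^T, computing only the entries
    # of Q that are actually needed (one column per iteration).
    n = X.shape[0]
    H = np.zeros((n, p))
    v = np.array([kernel(X[i], X[i]) for i in range(n)])  # diagonal of Q
    for k in range(p):
        ik = int(np.argmax(v))                 # pivot: steps 7-9 of Alg. 1
        q_col = np.array([kernel(X[j], X[ik]) for j in range(n)])
        # eq. (4); for the pivot row itself this reduces to eq. (3)
        H[:, k] = (q_col - H[:, :k] @ H[ik, :k]) / np.sqrt(v[ik])
        v = v - H[:, k] ** 2                   # eq. (5)
        v[ik] = 0.0                            # guard against re-selection
    return H

rbf = lambda a, b, gamma=0.5: np.exp(-gamma * np.sum((a - b) ** 2))
```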
As suggested by G. Golub, a parallelized ICF algorithm can be obtained by constraining the parallelized Cholesky Factorization algorithm, iterating at most p times. However, in the proposed
algorithm (Golub & Loan, 1996), matrix H is distributed by columns in a round-robin way on m
machines (hence we call it column-based parallelized ICF). Such a column-based approach is optimal for the single-machine setting, but cannot gain full benefit from parallelization for two major
reasons:
1. Large memory requirement. All training data are needed for each machine to calculate Q(Jk , k).
Therefore, each machine must be able to store a local copy of the training data.
2. Limited parallelizable computation. Only the inner-product calculation
(∑_{j=1}^{k−1} H(Jk, j)H(ik, j)) in (4) can be parallelized. The calculation of pivot selection, the
summation of local inner-product results, the column calculation in (4), and the vector update in (5)
must be performed on one single machine.
To remedy these shortcomings of the column-based approach, we propose a row-based approach to
parallelize ICF, which we summarize in Algorithm 1. Our row-based approach starts by initializing
variables and loading training data onto m machines in a round-robin fashion (Steps 1 to 5). The
algorithm then performs the ICF main loop until the termination criteria are satisfied (e.g., the rank
of matrix H reaches p). In the main loop, PICF performs five tasks in each iteration k:
? Distributedly find a pivot, which is the largest value in the diagonal v of matrix Q (steps 7 to 10).
Notice that PICF computes only needed elements in Q from training data, and it does not store Q.
? Set the machine where the pivot resides as the master (step 11).
? On the master, PICF calculates H(ik , k) according to (3) (step 12).
? The master then broadcasts the pivot instance xik and the pivot row H(ik , :) (step 13).
? Distributedly compute (4) and (5) (steps 14 and 15).
At the end of the algorithm, H is stored distributedly on m machines, ready for parallel IPM (presented in the next section). PICF enjoys three advantages: parallel memory use (O(np/m)), parallel
computation (O(p2 n/m)), and low communication overhead (O(p2 log(m))). Particularly on the
communication overhead, its fraction of the entire computation time shrinks as the problem size
grows. We will verify this in the experimental section. This pattern permits a larger problem to be
solved on more machines to take advantage of parallel memory use and computation.
2.2 Parallel IPM
As mentioned in Section 1, the most effective algorithm to solve a constrained QP problem is the
primal-dual IPM. For detailed description and notations of IPM, please consult (Boyd, 2004; Mehrotra, 1992). For the purpose of SVM training, IPM boils down to solving the following equations in
the Newton step iteratively.
$$\Delta\lambda = -\lambda + \mathrm{vec}\!\left(\frac{1}{t(C - \alpha_i)}\right) + \mathrm{diag}\!\left(\frac{\lambda_i}{C - \alpha_i}\right)\Delta x \qquad (6)$$
$$\Delta\xi = -\xi + \mathrm{vec}\!\left(\frac{1}{t\,\alpha_i}\right) - \mathrm{diag}\!\left(\frac{\xi_i}{\alpha_i}\right)\Delta x \qquad (7)$$
$$\Delta\nu = \frac{y^T \Sigma^{-1} z + y^T \alpha}{y^T \Sigma^{-1} y} \qquad (8)$$
$$D = \mathrm{diag}\!\left(\frac{\xi_i}{\alpha_i} + \frac{\lambda_i}{C - \alpha_i}\right) \qquad (9)$$
$$\Delta x = \Sigma^{-1}(z - y\,\Delta\nu), \qquad (10)$$
where Σ and z depend only on [α, λ, ξ, ν] from the last iteration as follows:
$$\Sigma = Q + \mathrm{diag}\!\left(\frac{\xi_i}{\alpha_i} + \frac{\lambda_i}{C - \alpha_i}\right) \qquad (11)$$
$$z = -Q\alpha + \mathbf{1}_n - \nu y + \mathrm{vec}\!\left(\frac{1}{t\,\alpha_i} - \frac{1}{t(C - \alpha_i)}\right). \qquad (12)$$
The computation bottleneck is the matrix inverse, which takes place on Σ for solving Δν in (8)
and Δx in (10). Equation (11) shows that Σ depends on Q, and we have shown that Q can be
approximated through PICF by HH^T. Therefore, the bottleneck of the Newton step can be sped up
from O(n³) to O(p²n), and be parallelized to O(p²n/m).
Distributed Data Loading
To minimize both storage and communication cost, PIPM stores data distributedly as follows:
• Distribute matrix data. H is distributedly stored at the end of PICF.
• Distribute n × 1 vector data. All n × 1 vectors are distributed in a round-robin fashion on m
machines. These vectors are z, α, ξ, λ, Δz, Δα, Δξ, and Δλ.
• Replicate global scalar data. Every machine caches a copy of global data including ν, t, n, and
Δν. Whenever a scalar is changed, a broadcast is required to maintain global consistency.
Parallel Computation of Δν
Rather than walking through all equations, we describe how PIPM solves (8), where Σ^{-1} appears
twice. An interesting observation is that parallelizing Σ^{-1}z (or Σ^{-1}y) is simpler than parallelizing
Σ^{-1}. Let us explain how parallelizing Σ^{-1}z works, and parallelizing Σ^{-1}y can follow suit.
According to SMW (the Sherman-Morrison-Woodbury formula), we can write Σ^{-1}z as
$$\Sigma^{-1} z = (D + Q)^{-1} z \approx (D + HH^T)^{-1} z = D^{-1} z - D^{-1} H (I + H^T D^{-1} H)^{-1} H^T D^{-1} z = D^{-1} z - D^{-1} H (GG^T)^{-1} H^T D^{-1} z.$$
Σ^{-1}z can be computed in four steps:
1. Compute D^{-1}z. D can be derived from locally stored vectors, following (9). D^{-1}z is an n × 1
vector, and can be computed locally on each of the m machines.
2. Compute t1 = H^T D^{-1}z. Every machine stores some rows of H and their corresponding part
of D^{-1}z. This step can be computed locally on each machine. The results are sent to the master
(which can be a randomly picked machine for all PIPM iterations) to aggregate into t1 for the next
step.
3. Compute (GG^T)^{-1} t1. This step is completed on the master, since it has all the required
data. G can be obtained from H in a straightforward manner as shown in SMW. Computing
t2 = (GG^T)^{-1} t1 is equivalent to solving the linear equation system t1 = (GG^T) t2. PIPM first
solves G y0 = t1 for y0, and then solves G^T t2 = y0 to obtain t2. The master then broadcasts t2 to all machines.
4. Compute D^{-1}H t2. All machines have a copy of t2, and can compute D^{-1}H t2 locally to solve
for Σ^{-1}z.
Similarly, Σ^{-1}y can be computed at the same time. Once we have obtained both, we can solve Δν
according to (8).
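The sketch below mirrors these four steps serially and checks the SMW result against a direct solve. It is a single-machine illustration under the assumption Q ≈ HH^T; in PIPM, steps 1, 2, and 4 run locally per machine and step 3 runs on the master.

```python
import numpy as np

def smw_solve(D_diag, H, z):
    # Solve (D + H H^T) x = z via Sherman-Morrison-Woodbury.
    Dinv_z = z / D_diag                          # step 1 (elementwise, local)
    t1 = H.T @ Dinv_z                            # step 2 (local + reduce)
    G = np.linalg.cholesky(np.eye(H.shape[1]) + H.T @ (H / D_diag[:, None]))
    y0 = np.linalg.solve(G, t1)                  # step 3: G y0 = t1
    t2 = np.linalg.solve(G.T, y0)                #          G^T t2 = y0
    return Dinv_z - (H @ t2) / D_diag            # step 4 (local)

rng = np.random.default_rng(0)
n, p = 200, 15
H = rng.standard_normal((n, p))
D_diag = 1.0 + rng.random(n)
z = rng.standard_normal(n)
x = smw_solve(D_diag, H, z)
assert np.allclose((np.diag(D_diag) + H @ H.T) @ x, z)
```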
2.3 Computing b and Writing Back
When the IPM iteration stops, we have the value of α and hence the classification function
$$f(x) = \sum_{i=1}^{N_s} \alpha_i y_i k(s_i, x) + b.$$
Here Ns is the number of support vectors and si are the support vectors. In order to complete this
classification function, b must be computed. According to the SVM model, given a support vector s,
we obtain one of two results for f(s): f(s) = +1 if ys = +1, or f(s) = −1 if ys = −1.
In practice, we can select M, say 1,000, support vectors and compute the average of the resulting b values in
parallel using MapReduce (Dean & Ghemawat, 2004).
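A serial sketch of this averaging is below. It assumes α, the label vector, the precomputed kernel matrix, and C are available; the on-margin selection rule 0 < α_s < C and the tolerances are assumptions on our part, as the paper does not spell out the selection criterion.

```python
import numpy as np

def estimate_b(alpha, y, K, C, M=1000, rng=np.random.default_rng(0)):
    # On-margin support vectors satisfy y_s * f(s) = 1 exactly.
    sv = np.flatnonzero((alpha > 1e-8) & (alpha < C - 1e-8))
    pick = rng.choice(sv, size=min(M, len(sv)), replace=False)
    # b = y_s - sum_i alpha_i y_i k(x_i, s), averaged over the picked set.
    b_vals = y[pick] - K[:, pick].T @ (alpha * y)
    return b_vals.mean()
```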
3 Experiments
We conducted experiments on PSVM to evaluate its 1) class-prediction accuracy, 2) scalability on
large datasets, and 3) overheads. The experiments were conducted on up to 500 machines in our
data center. Not all machines are identically configured; however, each machine is configured with
a CPU faster than 2GHz and memory larger than 4GBytes.
Table 1: Class-prediction Accuracy with Different p Settings.

dataset     samples (train/test)  LIBSVM   p = n^0.1  p = n^0.2  p = n^0.3  p = n^0.4  p = n^0.5
svmguide1   3,089/4,000           0.9608   0.6563     0.9        0.917      0.9495     0.9593
mushrooms   7,500/624             1        0.9904     0.9920     1          1          1
news20      18,000/1,996          0.7835   0.6949     0.6949     0.6969     0.7806     0.7811
Image       199,957/84,507        0.849    0.7293     0.7210     0.8041     0.8121     0.8258
CoverType   522,910/58,102        0.9769   0.9764     0.9762     0.9766     0.9761     0.9766
RCV         781,265/23,149        0.9575   0.8527     0.8586     0.8616     0.9065     0.9264

3.1 Class-prediction Accuracy
PSVM employs PICF to approximate an n × n kernel matrix Q with an n × p matrix H. This
experiment evaluated how the choice of p affects class-prediction accuracy. We set p of PSVM to n^t,
where t ranges from 0.1 to 0.5 incremented by 0.1, and compared its class-prediction accuracy with
that achieved by LIBSVM. The first two columns of Table 1 enumerate the datasets and their sizes
with which we experimented. We use a Gaussian kernel, and select the best C and γ for LIBSVM and
PSVM, respectively. For CoverType and RCV, we loosened the termination condition (set -e 1, default
0.001) and used shrink heuristics (set -h 0) to make LIBSVM terminate within several days. The
table shows that when t is set to 0.5 (or p = √n), the class-prediction accuracy of PSVM approaches
that of LIBSVM.
We compared only with LIBSVM because it is arguably the best open-source SVM implementation in both accuracy and speed. Another possible candidate is CVM (Tsang et al., 2005). Our
experimental result on the CoverType dataset outperforms the result reported by CVM on the same
dataset in both accuracy and speed. Moreover, CVM's training time has been shown to be unpredictable
by (Loosli & Canu, 2006), since the training time is sensitive to the selection of stop criteria and
hyper-parameters. For how we position PSVM with respect to other related work, please refer to
our disclaimer in the end of Section 1.
3.2 Scalability
For scalability experiments, we used three large datasets. Table 2 reports the speedup of PSVM
on up to m = 500 machines. Since a single machine cannot store the factorized matrix H in its
local memory when the dataset is large, we cannot obtain the running time of PSVM on one
machine. We thus used 10 machines as the baseline to measure the speedup of using more than
10 machines. To quantify speedup, we made an assumption that the speedup of using 10 machines
is 10, compared to using one machine. This assumption is reasonable for our experiments, since
PSVM does enjoy linear speedup when the number of machines is up to 30.
Table 2: Speedup (p is set to √n); LIBSVM training time is reported on the last row for reference.

            Image (200k)          CoverType (500k)        RCV (800k)
Machines    Time (s)     Speedup  Time (s)      Speedup   Time (s)       Speedup
10          1,958 (9)    10*      16,818 (442)  10*       45,135 (1373)  10*
30          572 (8)      34.2     5,591 (10)    30.1      12,289 (98)    36.7
50          473 (14)     41.4     3,598 (60)    46.8      7,695 (92)     58.7
100         330 (47)     59.4     2,082 (29)    80.8      4,992 (34)     90.4
150         274 (40)     71.4     1,865 (93)    90.2      3,313 (59)     136.3
200         294 (41)     66.7     1,416 (24)    118.7     3,163 (69)     142.7
250         397 (78)     49.4     1,405 (115)   119.7     2,719 (203)    166.0
500         814 (123)    24.1     1,655 (34)    101.6     2,671 (193)    169.0
LIBSVM      4,334        NA       28,149        NA        184,199        NA

(* the speedup of 10 machines is assumed to be 10; see the text below.)
We trained PSVM three times for each dataset-m combination. The speedup reported in the table
is the average of three runs with standard deviation provided in brackets. The observed variance in
speedup was caused by the variance of machine loads, as all machines were shared with other tasks
6
running on our data centers. We can observe in Table 2 that the larger the dataset, the better the
speedup. Figures 1(a), (b) and (c) plot the speedup of Image, CoverType, and RCV, respectively.
All datasets enjoy a linear speedup when the number of machines is moderate. For instance, PSVM
achieves linear speedup on RCV when running on up to around 100 machines. PSVM scales well till
around 250 machines. After that, adding more machines receives diminishing returns. This result
led to our examination on the overheads of PSVM, presented next.
[Figure 1: Speedup and Overheads of Three Datasets. Panels: (a) Image (200k) speedup; (b) CoverType (500k) speedup; (c) RCV (800k) speedup; (d) Image (200k) overhead; (e) CoverType (500k) overhead; (f) RCV (800k) overhead; (g) Image (200k) fraction; (h) CoverType (500k) fraction; (i) RCV (800k) fraction.]
3.3 Overheads
PSVM cannot achieve linear speedup when the number of machines continues to increase beyond
a data-size-dependent threshold. This is expected due to communication and synchronization overheads. Communication time is incurred when message passing takes place between machines. Synchronization overhead is incurred when the master machine waits for task completion on the slowest
machine. (The master could wait forever if a child machine fails. We have implemented a checkpoint scheme to deal with this issue.)
The running time consists of three parts: computation (Comp), communication (Comm), and synchronization (Sync). Figures 1(d), (e) and (f) show how Comm and Sync overheads influence the
speedup curves. In the figures, we draw on the top the computation-only line (Comp), which approaches the linear speedup line. Computation speedup can become sublinear when adding machines beyond a threshold. This is because of the computation bottleneck of the unparallelizable step
12 in Algorithm 1 (whose computation time is O(p²)). When m is small, this bottleneck is insignificant in the total computation time. According to Amdahl's law, however, even a small fraction
of unparallelizable computation can cap speedup. Fortunately, the larger the dataset is, the smaller
is this unparallelizable fraction, which is O(m/n). Therefore, more machines (larger m) can be
employed for larger datasets (larger n) to gain speedup.
When communication overhead or synchronization overhead is accounted for (the Comp + Comm
line and the Comp + Comm + Sync line), the speedup deteriorates. Between the two overheads, the
synchronization overhead does not impact speedup as much as the communication overhead does.
Figures 1(g), (h), and (i) present the percentage of Comp, Comm, and Sync in total running time.
The synchronization overhead maintains about the same percentage when m increases, whereas the
percentage of communication overhead grows with m. As mentioned in Section 2.1, the communication overhead is O(p² log(m)), growing sub-linearly with m. But since the computation time per
node decreases as m increases, the fraction of the communication overhead grows with m. Therefore, PSVM must select a proper m for a training task to maximize the benefit of parallelization.
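For intuition, Amdahl's law can be evaluated directly. The sketch below shows how even a 0.5% serial fraction (a made-up value, not measured from PSVM) caps speedup at 200 no matter how many machines are added; since PSVM's serial fraction behaves like O(m/n), larger datasets tolerate more machines.

```python
def amdahl_speedup(m, s):
    # Amdahl's law: with serial fraction s, speedup = 1 / (s + (1 - s) / m).
    return 1.0 / (s + (1.0 - s) / m)

for m in (10, 100, 500, 10**6):
    print(m, round(amdahl_speedup(m, s=0.005), 1))  # caps at 1/s = 200
```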
4 Conclusion
In this paper, we have shown how SVMs can be parallelized to achieve scalable performance. PSVM
distributedly loads training data on parallel machines, reducing memory requirement through approximate factorization on the kernel matrix. PSVM solves IPM in parallel by cleverly arranging
computation order. We have made PSVM open source at http://code.google.com/p/psvm/.
Acknowledgement
The first author is partially supported by NSF under Grant Number IIS-0535085.
References
Bach, F. R., & Jordan, M. I. (2005). Predictive low-rank decomposition for kernel methods. Proceedings of the 22nd International Conference on Machine Learning.
Boyd, S. (2004). Convex optimization. Cambridge University Press.
Chang, C.-C., & Lin, C.-J. (2001). LIBSVM: a library for support vector machines. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
Chu, C.-T., Kim, S. K., Lin, Y.-A., Yu, Y., Bradski, G., Ng, A. Y., & Olukotun, K. (2006). Map
reduce for machine learning on multicore. NIPS.
Dean, J., & Ghemawat, S. (2004). Mapreduce: Simplified data processing on large clusters.
OSDI?04: Symposium on Operating System Design and Implementation.
Fine, S., & Scheinberg, K. (2001). Efficient svm training using low-rank kernel representations.
Journal of Machine Learning Research, 2, 243?264.
Ghemawat, S., Gobioff, H., & Leung, S.-T. (2003). The google file system. 19th ACM Symposium
on Operating Systems Principles.
Golub, G. H., & Loan, C. F. V. (1996). Matrix computations. Johns Hopkins University Press.
Graf, H. P., Cosatto, E., Bottou, L., Dourdanovic, I., & Vapnik, V. (2005). Parallel support vector
machines: The cascade svm. In Advances in neural information processing systems 17, 521?528.
Joachims, T. (1998). Making large-scale svm learning practical. Advances in Kernel Methods - Support Vector Learning.
Joachims, T. (2006). Training linear svms in linear time. ACM KDD, 217?226.
Lee, Y.-J., & Mangasarian, O. L. (2001). Rsvm: Reduced support vector machines. First SIAM
International Conference on Data Mining. Chicago.
Loosli, G., & Canu, S. (2006). Comments on the core vector machines: Fast svm training on very
large data sets (Technical Report).
Mehrotra, S. (1992). On the implementation of a primal-dual interior point method. SIAM J. Optimization, 2.
Platt, J. (1998). Sequential minimal optimization: A fast algorithm for training support vector
machines (Technical Report MSR-TR-98-14). Microsoft Research.
Tsang, I. W., Kwok, J. T., & Cheung, P.-M. (2005). Core vector machines: Fast svm training on
very large data sets. Journal of Machine Learning Research, 6, 363?392.
Vapnik, V. (1995). The nature of statistical learning theory. New York: Springer.
Vishwanathan, S., Smola, A. J., & Murty, M. N. (2003). Simplesvm. ICML.
2,429 | 3,203 | Predictive Matrix-Variate t Models
Shenghuo Zhu
Kai Yu
Yihong Gong
NEC Labs America, Inc.
10080 N. Wolfe Rd. SW3-350
Cupertino, CA 95014
{zsh,kyu,ygong}@sv.nec-labs.com
Abstract
It is becoming increasingly important to learn from a partially-observed random
matrix and predict its missing elements. We assume that the entire matrix is a
single sample drawn from a matrix-variate t distribution and suggest a matrix-variate t model (MVTM) to predict those missing elements. We show that MVTM
generalizes a range of known probabilistic models, and automatically performs
model selection to encourage sparse predictive models. Due to the non-conjugacy
of its prior, it is difficult to make predictions by computing the mode or mean of
the posterior distribution. We suggest an optimization method that sequentially
minimizes a convex upper-bound of the log-likelihood, which is very efficient and
scalable. The experiments on a toy data and EachMovie dataset show a good
predictive accuracy of the model.
1 Introduction
Matrix analysis techniques, e.g., singular value decomposition (SVD), have been widely used in
various data analysis applications. An important class of applications is to predict missing elements
given a partially observed random matrix. For example, putting ratings of users into a matrix form,
the goal of collaborative filtering is to predict those unseen ratings in the matrix.
To predict unobserved elements in matrices, the structures of the matrices play an important role,
for example, the similarity between columns and between rows. Such structures imply that elements
in a random matrix are no longer independent and identically-distributed (i.i.d.). Without the i.i.d.
assumption, many machine learning models are not applicable.
In this paper, we model the random matrix of interest as a single sample drawn from a matrixvariate t distribution, which is a generalization of Student-t distribution. We call the predictive
model under such a prior by matrix-variate t model (MVTM). Our study shows several interesting
properties of the model. First, it continues the line of gradual generalizations across several known
probabilistic models on random matrices, namely, from probabilistic principal component analysis
(PPCA) [11], to Gaussian process latent-variable models (GPLVMs)[7], and to multi-task Gaussian
processes (MTGPs) [13]. MVTMs can be further derived by analytically marginalizing out the
hyper-parameters of these models. From a Bayesian modeling point of view, the marginalization of
hyper-parameters means an automatic model selection and usually leads to a better generalization
performance [8]; Second, the model selection by MVTMs explicitly encourages simpler predictive
models that have lower ranks. Unlike direct rank minimization, the log-determinant terms in the
form of the matrix-variate t prior offer a continuous optimization surface (though non-convex) for the rank
constraint; Third, like multivariate Gaussian distributions, a matrix-variate t prior is consistent under
marginalization, that means, if a matrix follows a matrix-variate t distribution, its any sub-matrix
follows a matrix-variate t distribution as well. This property allows to generalize distributions for
finite matrices to infinite stochastic processes.
[Figure 1 appears here: four graphical models, panels (a)-(d), with nodes ν, S, R, T, and Y; see the caption below.]
Figure 1: Models for matrix prediction. (a) MVTM. (b) and (c) are two normal-inverse-Wishart
models, equivalent to MVTM when the covariance variable S (or R) is marginalized. (d) MTGP,
which requires optimizing the covariance variable S. Circle nodes represent random variables,
shaded nodes for (partially) observable variables, text nodes for given parameters.
Under a Gaussian noise model, the matrix-variate t distribution is not a conjugate prior. It is thus difficult to make predictions by computing the mode or mean of the posterior distribution. We suggest
an optimization method that sequentially minimizes a convex upper-bound of the log-likelihood,
which is highly efficient and scalable. In the experiments, the algorithm shows very good efficiency
and excellent prediction accuracy.
This paper is organized as follows. We review three existing models and introduce the matrix-variate
t models in Section 2. The prediction methods are proposed in Section 3. In Section 4, the MVTM is
compared with some other models. We illustrate the MVTM with the experiments on a toy example
and on the movie-rating data in Section 5. We conclude in Section 6.
2 Predictive Matrix-Variate t Models
2.1 A Family of Probabilistic Models for Matrix Data
In this section we introduce three probabilistic models in the literature. Let Y be a p × m
observational matrix and T be the underlying p × m noise-free random matrix. We assume
Yi,j = Ti,j + εi,j, εi,j ~ N(0, σ²), where Yi,j denotes the (i, j)-th element of Y. If Y is
partially observed, then YI denotes the set of observed elements and I is the corresponding index
set.
Probabilistic Principal Component Analysis (PPCA) [11] assumes that yj, the j-th column vector
of Y, can be generated from a latent vector vj in a k-dimensional linear space (k < p). The model
is defined as yj = W vj + μ + εj and vj ~ Nk(vj; 0, Ik), where εj ~ Np(εj; 0, σ²Ip), and
W is a p × k loading matrix. By integrating out vj, we obtain the marginal distribution yj ~
Np(yj; μ, WW^T + σ²Ip). Since the columns of Y are conditionally independent, letting S take
the place of WW^T, PPCA is similar¹ to
Yi,j = Ti,j + εi,j,    T ~ Np,m(T; 0, S, Im),
where Np,m(·; 0, S, Im) is a matrix-variate normal prior with zero mean, covariance S between
rows, and identity covariance Im between columns. PPCA aims to estimate the parameter W by
maximum likelihood.
Gaussian Process Latent-Variable Model (GPLVM) [7] formulates a latent-variable model in a
slightly unconventional way. It considers the same linear relationship from latent representation vj
to observations yj. Instead of treating vj as random variables, GPLVM assigns a prior on W and
sees {vj} as parameters: yj = W vj + εj, and W ~ Np,k(W; 0, Ip, Ik), where the elements of W
are independent Gaussian random variables. By marginalizing out W, we obtain a distribution in which
each row of Y is an i.i.d. sample from a Gaussian process prior with the covariance VV^T + σ²Im
and V = [v1, ..., vm]^T. Letting R take the place of VV^T, we rewrite a similar model as
Yi,j = Ti,j + εi,j,    T ~ Np,m(T; 0, Ip, R).
¹ Because it requires S to be positive definite and W is usually low rank, they are not equivalent.
From a matrix modeling point of view, GPLVM estimates the covariance between the rows and
assumes the columns to be conditionally independent.
Multi-task Gaussian Process (MTGP) [13] is a multi-task learning model where each column of
Y is a predictive function of one task, sampled from a Gaussian process prior, yj = tj + εj, and
tj ~ Np(0, S), where εj ~ Np(0, σ²Ip). It introduces a hierarchical model where an inverse-Wishart prior is added for the covariance,
Yi,j = Ti,j + εi,j,    T ~ Np,m(T; 0, S, Im),    S ~ IWp(S; ν, Ip).
MTGP utilizes the inverse-Wishart prior as the regularization and obtains a maximum a posteriori
(MAP) estimate of S.
2.2 Matrix-Variate t Models
The models introduced in the previous section are closely related to each other. PPCA models the
row covariance of Y, GPLVM models the column covariance, and MTGP assigns a hyper prior to
prevent over-fitting when estimating the (row) covariance. From a matrix modeling point of view,
capturing the dependence structure of Y by its row or column covariance is a matter of choice;
the two are not fundamentally different.² There is no reason to favor one choice over the other. By
introducing the matrix-variate t models (MVTMs), they can be unified into the same model.
From a Bayesian modeling viewpoint, one should marginalize out as many variables as possible
[8]. We thus extend the MTGP model in two directions: (1) assume T ∼ N_{p,m}(T; 0, S, I_m) with
covariances on both sides of the matrix; (2) marginalize the covariance S on one side (see
Figure 1(b)). Then we have a marginal distribution of T

    Pr(T) = ∫ N_{p,m}(T; 0, S, I_m) IW_p(S; ν, I_p) dS = t_{p,m}(T; ν, 0, I_p, I_m),    (1)

which is a matrix-variate t distribution. Because the inverse-Wishart distribution may have different
degree-of-freedom definitions in the literature, we use the definition in [5].
Following the definition in [6], the matrix-variate t distribution of a p × m matrix T is given by

    t_{p,m}(T; ν, M, Σ, Ω) := (1/Z) |Σ|^{-m/2} |Ω|^{-p/2} |I_p + Σ^{-1}(T − M) Ω^{-1} (T − M)^T|^{-(ν+m+p−1)/2},

where ν is the degree of freedom; M is a p × m matrix; Σ and Ω are positive definite matrices of
size p × p and m × m, respectively; Z = π^{mp/2} Γ_p((ν+p−1)/2) / Γ_p((ν+m+p−1)/2); Γ_p(·) is a multivariate
gamma function, and |·| stands for determinant.
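To make the definition concrete, the following is a minimal numerical sketch of this log-density (our own illustration, not code from the paper). It relies on scipy's multigammaln for the multivariate gamma function Γ_p; the function name and argument order are ours.

```python
import numpy as np
from scipy.special import multigammaln

def matrix_t_logpdf(T, nu, M, Sigma, Omega):
    """Log-density of t_{p,m}(T; nu, M, Sigma, Omega) as defined above."""
    p, m = T.shape
    D = T - M
    # I_p + Sigma^{-1} D Omega^{-1} D^T, computed with solves instead of explicit inverses
    K = np.eye(p) + np.linalg.solve(Sigma, D @ np.linalg.solve(Omega, D.T))
    # log Z = (mp/2) log(pi) + log Gamma_p((nu+p-1)/2) - log Gamma_p((nu+m+p-1)/2)
    log_Z = (0.5 * m * p * np.log(np.pi)
             + multigammaln(0.5 * (nu + p - 1), p)
             - multigammaln(0.5 * (nu + m + p - 1), p))
    return (-0.5 * m * np.linalg.slogdet(Sigma)[1]
            - 0.5 * p * np.linalg.slogdet(Omega)[1]
            - 0.5 * (nu + m + p - 1) * np.linalg.slogdet(K)[1]
            - log_Z)
```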
The model can be depicted as in Figure 1(a). One important property of the matrix-variate t distribution
is that the marginal distribution of its sub-matrix still follows a matrix-variate t distribution with the
same degree of freedom (see Section 3.1). Therefore, we can expand it to an infinite-dimensional
stochastic process. By Eq. (1), we can see that Figure 1(a) and Figure 1(b) describe two equivalent
models. Comparing them with the MTGP model represented in Figure 1(d), we can see that the
difference lies in whether S is point estimated or integrated out.
Interestingly, the same matrix-variate t distribution can be equivalently derived by putting another
hierarchical generative process on the covariance R, as described in Figure 1(c), where R follows
an inverse-Wishart distribution. In other words, integrating out the covariance on either side, we obtain
the same model. This implies that the model controls the complexity of the covariances on both
sides of the matrix. Neither PPCA nor GPLVM has such a property.

The matrix-variate t distribution involves a determinant term of T, which becomes a log-determinant
term in the log-likelihood or KL-divergence. The log-determinant term encourages sparsity of the
matrix T, i.e., lower rank. This property has been used as a heuristic for minimizing the rank of a
matrix in [3]. Student's t priors have also been applied to enforce sparsity in kernel machines [10].
Here we say a few words about the given parameters. Though we can use the evidence framework [8]
or other methods to estimate ν, the results are not good in many cases (see [4]). Usually we just set
ν to a small number. Similarly, the estimated σ² does not give us a good result either, but cross-validation
is a good choice. For the mean matrix M, in our experiments, we just use the sample average
of all observed elements. For some tasks, when we have prior knowledge about the covariance
between columns or between rows, we can use those covariance matrices in the places of I_m or I_p.

² GPLVM offers an advantage of using a nonlinear covariance function based on attributes.
3 Prediction Methods
When the evaluation of the prediction is the sum of individual losses, the optimal prediction is to find
the individual mode of the marginal posterior distribution, i.e., arg max_{T_ij} Pr(T_ij | Y_I). However,
there is no exact solution for the marginal posterior. We have two ways to approximate the optimal
prediction.

One way to make a prediction is to compute the mode of the joint posterior distribution of T, i.e. the
prediction problem is

    T̂ = arg max_T { ln Pr(Y_I | T) + ln Pr(T) }.    (2)
The computation of this estimation is usually easy. We discuss it in Section 3.3.
An alternative way is to use the individual means of the posterior distribution to approximate the
individual modes. Since the joint of the individual means happens to be the mean of the joint distribution,
we only need to compute the joint posterior distribution. The problem of prediction by means is
written as

    T̄ = E(T | Y_I).    (3)
However, it is usually difficult to compute the exact mean. One estimation method is the Monte
Carlo method, which is computationally intensive. In Section 3.4, we discuss an approximation
to compute the mean. From our experiments, the prediction by means usually outperforms the
prediction by modes.
Before discussing the prediction methods, we introduce a few useful properties in Section 3.1 and
suggest an optimization method as the efficient tool for prediction in Section 3.2.
3.1 Properties
The MVTM has a rich set of properties. We list a few in the following Theorem.
Theorem 1. If

    [Θ  Φ]
    [Ψ  T]  ∼  t_{p+q, m+n}( · ; ν, 0, [I_q 0; 0 I_p], [I_n 0; 0 I_m]),    (4)

where Θ is q × n, Φ is q × m, Ψ is p × n and T is p × m, then

    Pr(T) = t_{p,m}(T; ν, 0, I_p, I_m),    (5)
    Pr(T | Θ, Φ, Ψ) = t_{p,m}(T; ν+q+n, M, (I_p + Ψ B Ψ^T), (I_m + Φ^T A Φ)),    (6)
    Pr(Θ) = t_{q,n}(Θ; ν, 0, I_q, I_n),    (7)
    Pr(Φ | Θ) = t_{q,m}(Φ; ν+n, 0, A^{-1}, I_m),    (8)
    Pr(Ψ | Θ, Φ) = t_{p,n}(Ψ; ν+q, 0, I_p, B^{-1}) = Pr(Ψ | Θ),    (9)
    E(T | Θ, Φ, Ψ) = M,    (10)
    Cov(vec(T^T) | Θ, Φ, Ψ) = (ν+q+n−2)^{-1} (I_p + Ψ B Ψ^T) ⊗ (I_m + Φ^T A Φ),    (11)

where A := (Θ Θ^T + I_q)^{-1}, B := (Θ^T Θ + I_n)^{-1}, and M := Ψ Θ^T A Φ = Ψ B Θ^T Φ.
This theorem can be directly derived from Theorems 4.3.1 and 4.3.9 in [6] with a little calculus. It
provides some insight into MVTMs. The marginal distribution in Eq. (5) has the same form as the
joint distribution, therefore the matrix-variate t distribution is extensible to an infinite-dimensional
stochastic process. Since the conditional distribution in Eq. (6) is still a matrix-variate t distribution, we
can use it to approximate the posterior distribution, as we do in Section 3.4.
We encounter log-determinant terms when computing the mode or mean estimates. The following
theorem provides a quadratic upper bound for the log-determinant terms, which makes it possible
to apply the optimization method of Section 3.2.
Lemma 1. If X is a p × p positive definite matrix, it holds that ln |X| ≤ tr(X) − p. The equality
holds when X is an orthonormal matrix.

Proof. Let {λ_1, · · · , λ_p} be the eigenvalues of X. We have ln |X| = Σ_i ln λ_i and tr(X) = Σ_i λ_i.
Since ln λ_i ≤ λ_i − 1, we have the inequality. The equality holds when λ_i = 1. Therefore, when X
is an orthonormal matrix (especially X = I_p), the equality holds.
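As a quick numerical sanity check of Lemma 1 (our own illustration, not part of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
X = A @ A.T + np.eye(4)            # a random positive definite matrix, p = 4
lhs = np.linalg.slogdet(X)[1]      # ln|X|
rhs = np.trace(X) - 4              # tr(X) - p
assert lhs <= rhs + 1e-9           # Lemma 1 inequality
# equality at X = I_p: both sides are zero
assert abs(np.linalg.slogdet(np.eye(4))[1] - (np.trace(np.eye(4)) - 4)) < 1e-12
```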
Theorem 2. If Σ is a p × p positive definite matrix, Ω is an m × m positive definite matrix, and T
and T_0 are p × m matrices, it holds that

    ln |Σ + T Ω^{-1} T^T| ≤ h(T; T_0, Σ, Ω) + h_0(T_0, Σ, Ω),

where

    h(T; T_0, Σ, Ω) := tr( (Σ + T_0 Ω^{-1} T_0^T)^{-1} T Ω^{-1} T^T ),
    h_0(T_0, Σ, Ω) := ln |Σ + T_0 Ω^{-1} T_0^T| + tr( (Σ + T_0 Ω^{-1} T_0^T)^{-1} Σ ) − p.

The equality holds when T = T_0. Also it holds that

    ∂/∂T ln |Σ + T Ω^{-1} T^T| |_{T=T_0} = 2 (Σ + T_0 Ω^{-1} T_0^T)^{-1} T_0 Ω^{-1} = ∂/∂T h(T; T_0, Σ, Ω) |_{T=T_0}.

Proof. Applying Lemma 1 with X = (Σ + T_0 Ω^{-1} T_0^T)^{-1} (Σ + T Ω^{-1} T^T), we obtain the inequality. By
some calculus we have the equality of the first-order derivatives. Actually h(·) is a quadratic convex
function with respect to T, as (Σ + T_0 Ω^{-1} T_0^T)^{-1} and Ω^{-1} are positive definite matrices.
3.2 Optimization Method
Once the objective is given, the prediction becomes an optimization problem. We use an EM-style
optimization method to make the prediction. Suppose J(T) is the objective function to be
minimized. If we can find an auxiliary function Q(T; T_0) with the following properties, we can
apply this method:

1. J(T) ≤ Q(T; T_0) and J(T_0) = Q(T_0; T_0),
2. ∂J(T)/∂T |_{T=T_0} = ∂Q(T; T_0)/∂T |_{T=T_0},
3. For a fixed T_0, Q(T; T_0) is quadratic and convex with respect to T.
Starting from any T_0, as long as we can find a T_1 such that Q(T_1; T_0) < Q(T_0; T_0), we have
J(T_0) = Q(T_0; T_0) > Q(T_1; T_0) ≥ J(T_1). If there exists a global minimum point of J(T),
there exists a global minimum point of Q(T; T_0) as well, because Q(T; T_0) is an upper bound of
J(T). Since Q(T; T_0) is quadratic with respect to T, we can apply the Newton-Raphson
method to minimize Q(T; T_0). As long as T_0 is not a local minimum, maximum or saddle point of
J, we can find a T to reduce Q(T; T_0), because Q(T; T_0) has the same derivative as J(T) at T_0.
A random starting point T_0 is unlikely to be a local maximum, and then T_1 cannot be a local
maximum either; if T_0 happens to be a local maximum, we can reselect a point which is not. After we find a T_i, we
repeat the procedure to find a T_{i+1} so that J(T_{i+1}) < J(T_i), unless T_i is a local minimum or
saddle point of J. Repeating this procedure, T_i converges to a local minimum or saddle point of J,
as long as T_0 is not a local maximum.
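The procedure can be summarised in a short sketch (our own illustration; in the paper the surrogate minimiser would be a Newton-Raphson step on the convex quadratic Q):

```python
def mm_minimize(J, argmin_Q, T0, max_iter=100, tol=1e-8):
    """Majorize-minimize loop of Section 3.2.

    J:        objective to minimize.
    argmin_Q: function mapping T0 to a (near-)minimizer of Q(.; T0).
    """
    T = T0
    for _ in range(max_iter):
        T_new = argmin_Q(T)      # J(T_new) <= Q(T_new; T) <= Q(T; T) = J(T)
        if J(T) - J(T_new) < tol:
            return T_new
        T = T_new
    return T
```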
3.3 Mode Prediction
Following Eq. (2), the goal is to minimize the objective function

    Ĵ(T) := ℓ(T) + ((ν+m+p−1)/2) ln |I_p + T T^T|,    (12)

where ℓ(T) := −ln Pr(Y_I | T) = (1/(2σ²)) Σ_{(i,j)∈I} (T_ij − Y_ij)² + const.
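For reference, Eq. (12) can be evaluated directly; the following is our own minimal sketch (a boolean mask stands in for the index set I):

```python
import numpy as np

def J_mode(T, Y, mask, sigma2, nu):
    """Objective of Eq. (12), dropping the additive constant."""
    p, m = T.shape
    loss = np.sum((T[mask] - Y[mask]) ** 2) / (2.0 * sigma2)
    logdet = np.linalg.slogdet(np.eye(p) + T @ T.T)[1]
    return loss + 0.5 * (nu + m + p - 1) * logdet
```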
As Ĵ contains a log-determinant term, minimizing Ĵ by nonlinear optimization is slow. Here, we
introduce an auxiliary function,

    Q(T; T_0) := ℓ(T) + h(T; T_0, I_p, I_m) + h_0(T_0, I_p, I_m).    (13)

By Theorem 2, we have that Ĵ(T) ≤ Q(T; T_0), Ĵ(T_0) = Q(T_0; T_0), and Q(T; T_0) has the same
first-order derivative as Ĵ(T) at T_0. Because ℓ and h are quadratic and convex, Q is quadratic and
convex as well. Therefore, we can apply the optimization method in Section 3.2 to minimize Ĵ.
However, when the size of T is large, finding T̂ is still time consuming and requires a very large
amount of space. In many tasks, we only need to infer a small portion of T̂. Therefore, we consider a low
rank approximation, using U V^T to approximate T, where U is a p × k matrix and V is an m × k
matrix. The problem of Eq. (2) is then approximated by arg min_{U,V} Ĵ(U V^T). We can minimize Ĵ by
alternately optimizing U and V. We can put the final result in a canonical format as T̂ ≈ U S V^T,
where U and V are semi-orthonormal and S is a k × k diagonal matrix. This result can be considered
as the SVD of an incomplete matrix under matrix-variate t regularization. The details are skipped
because of the limited space.
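As a rough illustration of low-rank mode prediction (our own sketch, not the paper's procedure: the paper alternates exact minimisations of the quadratic bound, whereas for brevity we take plain gradient steps on Ĵ(U V^T), whose gradient follows from Theorem 2):

```python
import numpy as np

def mode_predict_lowrank(Y, mask, sigma2, nu, k, n_iter=500, lr=1e-3, seed=0):
    """Approximate arg min over U, V of J_hat(U V^T); all names are ours."""
    rng = np.random.default_rng(seed)
    p, m = Y.shape
    U = 0.01 * rng.standard_normal((p, k))
    V = 0.01 * rng.standard_normal((m, k))
    c = 0.5 * (nu + m + p - 1)
    for _ in range(n_iter):
        T = U @ V.T
        G = np.where(mask, T - Y, 0.0) / sigma2                   # gradient of the loss term
        G += 2.0 * c * np.linalg.solve(np.eye(p) + T @ T.T, T)    # gradient of the log-det term
        gU, gV = G @ V, G.T @ U                                   # chain rule to U and V
        U -= lr * gU
        V -= lr * gV
    return U @ V.T
```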
3.4 Variational Mean Prediction
Due to the difficulty of explicitly computing the posterior distribution of T, we take a variational
approach and approximate the posterior by a matrix-variate t distribution via an expanded
model. We expand the model by adding matrix variates Θ, Φ and Ψ with the distribution of Eq. (4).
Since the marginal distribution, Eq. (5), is the same as the prior of T, we can recover the original
model by marginalizing out Θ, Φ and Ψ. However, instead of integrating out Θ, Φ and Ψ, we use
them as parameters to approximate T's posterior distribution. Therefore, the estimation of the
parameters is to minimize

    −ln Pr(Y_I, Θ, Φ, Ψ) = −ln Pr(Θ, Φ, Ψ) − ln ∫ Pr(T | Θ, Φ, Ψ) Pr(Y_I | T) dT    (14)
over Θ, Φ and Ψ. The first term in the RHS of Eq. (14) can be written as

    −ln Pr(Θ, Φ, Ψ) = −ln Pr(Θ) − ln Pr(Φ | Θ) − ln Pr(Ψ | Θ, Φ)
    = ((ν+q+n+p+m−1)/2) ln |I_q + Θ Θ^T| + ((ν+q+n+m−1)/2) ln |I_m + Φ^T A Φ|
      + ((ν+q+n+p−1)/2) ln |I_p + Ψ B Ψ^T| + const.    (15)
Due to the convexity of the negative logarithm, the second term in the RHS of Eq. (14) is bounded by

    ℓ(Ψ B^{1/2} Θ^T A^{1/2} Φ) + (1/(2σ²(ν+q+n−2))) Σ_{(i,j)∈I} (1 + [Ψ B Ψ^T]_ii)(1 + [Φ^T A Φ]_jj) + const,    (16)

because −ln Pr(Y_I | T) is quadratic with respect to T; thus we only need integration using the mean
and variance of T_ij under Pr(T | Θ, Φ, Ψ), which are given by Eq. (10) and (11). The parameter
estimation not only reduces the loss (the ℓ(·) term), but also reduces the variance. Because of this, the
prediction by means usually outperforms the prediction by modes.
Let J be the sum of the right-hand sides of Eq. (15) and (16), which can be considered as an upper
bound of Eq. (14) (ignoring constants). Here, we estimate the parameters by minimizing J. Because
A and B involve the inverses of quadratic terms of Θ, it is awkward to directly optimize Θ, Φ, Ψ.
We reparameterize J by U := Ψ B^{1/2}, V := Φ^T A^{1/2}, and S := Θ. We can easily apply the
optimization method in Section 3.2 to find the optimal U, V and S. After estimating U, V and S, by
Theorem 1, we can compute T̄ = M = U S V^T. The details are skipped because of the limited space.
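To spell out the objective, a direct (unoptimised) evaluation of the bound, i.e. the sum of Eq. (15) and Eq. (16) up to constants, could look as follows. This is our own sketch in the original Θ, Φ, Ψ parameterisation, not the authors' code:

```python
import numpy as np

def variational_bound(Theta, Phi, Psi, Y, mask, sigma2, nu):
    """Sum of the RHS of Eq. (15) and Eq. (16), dropping constants.

    Theta: q x n,  Phi: q x m,  Psi: p x n,  Y: p x m partially observed matrix."""
    q, n = Theta.shape
    m = Phi.shape[1]
    p = Psi.shape[0]
    A = np.linalg.inv(Theta @ Theta.T + np.eye(q))
    B = np.linalg.inv(Theta.T @ Theta + np.eye(n))
    ld = lambda X: np.linalg.slogdet(X)[1]
    term15 = (0.5 * (nu + q + n + p + m - 1) * ld(np.eye(q) + Theta @ Theta.T)
              + 0.5 * (nu + q + n + m - 1) * ld(np.eye(m) + Phi.T @ A @ Phi)
              + 0.5 * (nu + q + n + p - 1) * ld(np.eye(p) + Psi @ B @ Psi.T))
    M = Psi @ Theta.T @ A @ Phi               # posterior mean of T (Theorem 1)
    loss = np.sum((M[mask] - Y[mask]) ** 2) / (2.0 * sigma2)
    row = 1.0 + np.diag(Psi @ B @ Psi.T)      # (1 + [Psi B Psi^T]_ii)
    col = 1.0 + np.diag(Phi.T @ A @ Phi)      # (1 + [Phi^T A Phi]_jj)
    var_term = (np.outer(row, col)[mask]).sum() / (2.0 * sigma2 * (nu + q + n - 2))
    return term15 + loss + var_term
```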
4 Related work
Maximum Margin Matrix Factorization (MMMF) [9] is not in the framework of stochastic matrix
analysis, but there are some similarities between MMMF and our mode estimation in Section 3.3.
Using the trace norm on the matrix as regularization, MMMF overcomes the over-fitting problem in
factorizing a matrix with missing values. From the regularization viewpoint, the prediction by mode
of MVTM uses log-determinants as the regularization term in Eq. (12). The log-determinants
encourage sparse predictive models.
Stochastic Relational Models (SRMs) [12] extend MTGPs by estimating the covariance matrices
for each side. The covariance functions are required to be estimated from observations. By maximizing
the marginalized likelihood, the estimated S and R reflect the information of the dependency
structure. Then the relationship can be predicted with S and R. During the estimation of S and R,
inverse-Wishart priors (with their respective parameters) are imposed on S and R. MVTM differs from
SRM in integrating out the hyper-parameters rather than maximizing them out. As MacKay suggests [8], "one
should integrate over as many variables as possible".
Robust Probabilistic Projections (RPP) [1] uses the Student-t distribution to extend PPCA by scaling
each feature vector by an independent random variable. Written in a matrix format, RPP is

    T ∼ N_{p,m}(T; μ 1^T, W W^T, U),    U = diag{u_i},    u_i ∼ IG(u_i | ν/2, ν/2),

where IG is the inverse Gamma distribution. Though RPP unties the scale factors between feature
vectors, which could make the estimation more robust, it does not integrate out the covariance matrix,
which we do in MVTM. Moreover, inherited from PPCA, RPP implicitly uses an independence
assumption between feature vectors. Also, RPP results in different models depending on which side we
assume to be independent; therefore it is not suitable for matrix prediction.
5 Experiments
Figure 2: Experiments on synthetic data. RMSEs are shown in parentheses. Panels: (a) Original Matrix; (b) With Noise (0.32); (c) MMMF (0.27); (d) PPCA (0.26); (e) SRM (0.22); (f) MVTM mode (0.20); (g) MVTM mean (0.192); (h) MCMC (0.185).
Synthetic data: We generate a 30 × 20 matrix (Figure 2(a)), then add noise with σ² = 0.1 (Figure 2(b)). The
root mean squared noise is 0.32. We select 70% of the elements as the observed data and the remaining
elements are for prediction. We apply MMMF [9], PPCA [11], MTGP [13], SRM [12], and our MVTM
prediction-by-means and prediction-by-modes methods. The number of dimensions for the low rank
approximation is 10. We also apply an MCMC method to infer the matrix. The reconstructed matrices and root
mean squared errors of prediction on the unobserved elements (compared to the original matrix) are shown in
Figure 2(c)-2(g), respectively. MTGP has similar results to PPCA, so we do not show them.

Figure 3: Singular values of the recovered matrices in descending order, for MMMF, MVTM-mode and MVTM-mean.
MVTM is in favor of sparse predictive models. To verify this, we depict the singular values of
the MMMF method and the two MVTM prediction methods in Figure 3. Only two singular
values of the MVTM prediction-by-means method are non-zero. The singular values of the mode
estimation decrease faster than the MMMF ones at the beginning, but decrease more slowly after a threshold.
This confirms that the log-determinants automatically determine the intrinsic rank of the matrices.

Table 1: RMSE (root mean squared error) and MAE (mean absolute error) of experiments on Eachmovie data. All standard errors are 0.001 or less.

                 RMSE    MAE
  user mean      1.425   1.141
  movie mean     1.387   1.103
  MMMF           1.186   0.943
  PPCA           1.165   0.915
  MVTM (mode)    1.162   0.898
  MVTM (mean)    1.151   0.887
Eachmovie data: We test our algorithms on Eachmovie data from [2]. The dataset contains 74,424
users' 2,811,718 ratings on 1,648 movies, i.e. about 2.29% of the entries are rated, with zero-to-five stars. We put
all ratings into a matrix, and randomly select 80% as observed data to predict the remaining ratings.
The random selection was carried out 10 times independently. We compare our approach with four
other approaches: 1) USER MEAN, predicting a rating by the sample mean of the same user's ratings;
2) MOVIE MEAN, predicting a rating by the sample mean of users' ratings of the same movie; 3)
MMMF [9]; 4) PPCA [11]. We do not have a scalable implementation of the other approaches compared
in the previous experiment. The number of dimensions is 10. The results are shown in Table 1. The two
MVTM prediction methods outperform the other methods.
6 Conclusions

In this paper we introduce matrix-variate t models for matrix prediction. The entire matrix is modeled
as a sample drawn from a matrix-variate t distribution. An MVTM does not require the independence
assumption over elements. The implicit model selection of the MVTM encourages sparse
models with lower ranks. To minimize the negative log-likelihood with log-determinant terms, we propose an
optimization method that sequentially minimizes a convex quadratic upper bound. The experiments
show that the approach is accurate, efficient and scalable.
References
[1] C. Archambeau, N. Delannay, and M. Verleysen. Robust probabilistic projections. In ICML, 2006.
[2] J. Breese, D. Heckerman, and C. Kadie. Empirical analysis of predictive algorithms for collaborative filtering. In UAI-98, pages 43-52, 1998.
[3] M. Fazel, H. Haitham, and S. P. Boyd. Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices. In Proceedings of the American Control Conference, 2003.
[4] C. Fernandez and M. F. J. Steel. Multivariate Student-t regression models: Pitfalls and inference. Biometrika, 86(1):153-167, 1999.
[5] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin. Bayesian Data Analysis. Chapman & Hall/CRC, New York, 2nd edition, 2004.
[6] A. K. Gupta and D. K. Nagar. Matrix Variate Distributions. Chapman & Hall/CRC, 2000.
[7] N. Lawrence. Probabilistic non-linear principal component analysis with Gaussian process latent variable models. J. Mach. Learn. Res., 6:1783-1816, 2005.
[8] D. J. C. MacKay. Comparison of approximate methods for handling hyperparameters. Neural Comput., 11(5):1035-1068, 1999.
[9] J. D. M. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative prediction. In ICML, 2005.
[10] M. E. Tipping. Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1:211-244, 2001.
[11] M. E. Tipping and C. M. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society, B(61):611-622, 1999.
[12] K. Yu, W. Chu, S. Yu, V. Tresp, and Z. Xu. Stochastic relational models for discriminative link prediction. In Advances in Neural Information Processing Systems 19 (NIPS), 2006.
[13] K. Yu, V. Tresp, and A. Schwaighofer. Learning Gaussian processes from multiple tasks. In ICML, 2005.
| 3203 |@word determinant:12 loading:1 norm:1 nd:1 calculus:2 confirms:1 gradual:1 decomposition:1 covariance:22 tr:4 contains:2 interestingly:1 outperforms:2 existing:1 recovered:1 com:1 comparing:2 chu:1 written:3 j1:1 treating:1 depict:1 generative:1 beginning:1 provides:2 node:3 simpler:1 five:1 direct:1 ik:2 fitting:2 introduce:5 nor:1 multi:3 pitfall:1 automatically:2 little:1 becomes:2 estimating:3 underlying:1 bounded:1 moreover:1 minimizes:2 unified:1 unobserved:2 ti:10 biometrika:1 control:2 positive:6 before:1 t1:5 local:7 limit:2 mach:1 ure:1 becoming:1 shenghuo:1 suggests:1 shaded:1 archambeau:1 factorization:2 range:1 fazel:1 sw3:1 yj:7 definite:6 differs:1 procedure:2 empirical:1 reselect:1 projection:2 word:2 integrating:4 boyd:1 suggest:4 marginalize:2 selection:5 gelman:1 put:2 prediction1:1 applying:1 optimize:2 equivalent:3 map:1 imposed:1 missing:4 maximizing:2 starting:2 independently:1 convex:8 assigns:2 insight:1 orthonormal:3 play:1 suppose:1 user:6 exact:2 us:3 wolfe:1 element:13 approximated:1 continues:1 observed:7 role:1 decrease:2 convexity:1 complexity:1 ui:3 rewrite:1 predictive:10 efficiency:1 easily:1 joint:5 various:1 america:1 represented:1 fast:1 describe:1 monte:1 hyper:4 h0:3 heuristic:2 kai:1 widely:1 say:1 rennie:1 favor:2 cov:1 unseen:1 ip:20 final:1 advantage:1 eigenvalue:1 reconstruction:1 propose:1 crossvalidation:1 converges:1 illustrate:1 iq:4 derive:1 gong:1 depending:1 eq:13 auxiliary:2 predicted:1 involves:1 implies:1 direction:1 closely:1 attribute:1 stochastic:6 observational:1 sult:1 crc:2 require:1 generalization:3 im:17 yij:1 hold:7 considered:1 hall:2 normal:2 minu:1 lawrence:1 predict:6 estimation:9 applicable:1 iw:2 tool:1 minimization:2 gaussian:11 aim:1 corollary:1 derived:3 rank:11 likelihood:6 skipped:2 rpp:5 posteriori:1 inference:1 entire:2 integrated:1 unlikely:1 expand:2 arg:3 verleysen:1 integration:1 mackay:2 marginal:7 once:1 inversewishart:2 having:1 chapman:2 yu:4 icml:3 minimized:1 np:12 jb:8 fundamentally:1 few:3 randomly:1 gamma:2 divergence:1 individual:5 tq:2 freedom:3 interest:1 highly:1 evaluation:1 introduces:1 tj:2 accurate:1 encourage:2 respective:1 unless:1 incomplete:1 euclidean:1 logarithm:1 circle:1 re:2 column:9 modeling:4 tp:6 formulates:1 extensible:1 introducing:1 srm:3 dependency:1 sv:1 mmmf:10 synthetic:2 probabilistic:10 vm:1 squared:2 reflect:1 wishart:4 mtgp:8 american:1 derivative:3 toy:2 star:1 student:4 kadie:1 inc:1 matter:1 explicitly:2 mp:1 fernandez:1 tion:1 view:3 root:3 lab:2 portion:1 inherited:1 rmse:2 collaborative:3 minimize:6 accuracy:2 variance:2 generalize:1 bayesian:4 carlo:1 definition:3 matrixvariate:2 proof:1 sampled:1 ppca:13 dataset:2 knowledge:1 organized:1 actually:1 dt:1 tipping:2 awkward:1 though:3 just:2 implicit:1 d:1 hand:1 nonlinear:2 mode:16 verify:1 analytically:1 regularization:5 equality:5 conditionally:2 during:1 encourages:3 tt:1 performs:1 usv:2 variational:2 srms:1 cupertino:1 extend:2 mae:2 vec:1 rd:1 automatic:1 uv:2 similarity:2 longer:1 surface:1 add:1 posterior:11 multivariate:3 nagar:1 optimizing:1 inequality:2 discussing:1 yi:13 minimum:5 determine:1 semi:1 ii:1 multiple:1 infer:2 eachmovie:4 faster:1 offer:2 long:3 raphson:1 a1:1 parenthesis:1 prediction:32 scalable:4 regression:1 represent:1 kernel:1 singular:6 rest:1 unlike:1 call:1 identically:1 easy:1 marginalization:2 variate:27 independence:2 carlin:1 reduce:3 intensive:1 det:1 yihong:1 t0:44 whether:1 york:1 jj:1 tij:3 useful:1 involve:1 repeating:1 generate:1 outperform:1 canonical:1 
estimated:4 putting:2 threshold:1 drawn:3 prevent:1 neither:1 v1:1 sum:2 fig4:1 inverse:6 hankel:1 place:3 family:1 extends:1 utilizes:1 scaling:1 capturing:1 bound:6 def:12 quadratic:9 constraint:1 reparameterize:1 expanded:1 format:2 conjugate:1 across:1 slightly:1 increasingly:1 heckerman:1 happens:1 pr:21 ln:22 computationally:1 conjugacy:1 ygong:1 discus:2 letting:2 unconventional:1 generalizes:1 apply:7 hierarchical:2 enforce:1 alternative:1 encounter:1 slower:1 original:3 denotes:2 assumes:1 remaining:1 marginalized:2 newton:1 const:3 kyu:1 especially:1 objective:3 added:1 dependence:1 diagonal:1 distance:1 link:1 considers:1 reason:1 index:2 relationship:2 modeled:1 minimizing:4 equivalently:1 difficult:3 trace:1 negative:1 steel:1 implementation:1 stern:1 upper:6 observation:2 finite:1 descent:1 gplvm:6 relational:2 ww:3 rating:10 introduced:1 namely:1 required:1 kl:1 nip:1 usually:8 sparsity:2 max:1 royal:1 suitable:1 difficulty:1 predicting:2 zhu:1 movie:5 rated:1 imply:1 carried:1 tresp:2 text:1 prior:16 review:1 literature:2 wvj:2 marginalizing:3 loss:2 interesting:1 filtering:2 srebro:1 rmses:1 integrate:2 degree:3 consistent:1 rubin:1 principle:1 viewpoint:2 haitham:1 row:8 repeat:1 free:1 side:7 vv:2 absolute:1 sparse:5 distributed:1 dimension:2 stand:1 rich:1 ig:2 approximate:7 observable:1 obtains:1 implicitly:1 overcomes:1 global:2 sequentially:3 uai:1 b1:1 conclude:1 consuming:1 discriminative:1 alternatively:1 factorizing:1 continuous:1 latent:6 table:2 learn:2 robust:3 ca:1 ignoring:1 excellent:1 vj:7 diag:1 did:1 rh:2 noise:5 hyperparameters:1 edition:1 xu:1 zsh:1 slow:1 sub:2 comput:1 lie:1 third:1 theorem:7 bishop:1 list:1 gupta:1 evidence:1 exists:2 intrinsic:1 adding:1 importance:1 nec:2 margin:2 nk:1 depicted:1 saddle:3 schwaighofer:1 partially:4 conditional:1 goal:2 identity:1 gplvms:1 infinite:3 principal:3 lemma:2 breese:1 svd:2 select:2 relevance:1 mcmc:2 handling:1 |
2,430 | 3,204 | Modelling motion primitives and their timing in biologically executed movements
Ben H Williams
School of Informatics
University of Edinburgh
5 Forrest Hill, EH1 2QL, UK
[email protected]
Marc Toussaint
TU Berlin
Franklinstr. 28/29, FR 6-9
10587 Berlin, Germany
[email protected]
Amos J Storkey
School of Informatics
University of Edinburgh
5 Forrest Hill, EH1 2QL, UK
[email protected]
Abstract
Biological movement is built up of sub-blocks or motion primitives. Such
primitives provide a compact representation of movement which is also
desirable in robotic control applications. We analyse handwriting data to
gain a better understanding of primitives and their timings in biological
movements. Inference of the shape and the timing of primitives can be
done using a factorial HMM based model, allowing the handwriting to
be represented in primitive timing space. This representation provides a
distribution of spikes corresponding to the primitive activations, which can
also be modelled using HMM architectures. We show how the coupling of
the low level primitive model, and the higher level timing model during
inference can produce good reconstructions of handwriting, with shared
primitives for all characters modelled. This coupled model also captures
the variance profile of the dataset which is accounted for by spike timing
jitter. The timing code provides a compact representation of the movement
while generating a movement without an explicit timing model produces a
scribbling style of output.
1 Introduction
Movement planning and control is a very difficult problem in real-world applications. Current robots have very good sensors and actuators, allowing accurate movement execution;
however, the ability to organise complex sequences of movement is still far superior in biological organisms, despite their being encumbered with noisy sensory feedback and requiring
control of many non-linear and variable muscles. The underlying question is that of the
representation used to generate biological movement. There is much evidence to suggest
that biological movement generation is based upon motor primitives, with discrete muscle
synergies found in frog spines (Bizzi et al., 1995; d'Avella & Bizzi, 2005; d'Avella et al.,
2003; Bizzi et al., 2002), evidence of primitives being locally fixed (Kargo & Giszter, 2000),
and modularity in human motor learning and adaption (Wolpert et al., 2001; Wolpert &
Kawato, 1998). Compact forms of representation for any biologically produced data should
therefore also be based upon primitive sub-blocks.
Figure 1: (A) A factorial HMM of a handwriting trajectory Y_t. The parameters λ̄^m_t indicate
the probability of triggering a primitive in the mth factor at time t and are learnt for one specific
character. (B) A hierarchical generative model of handwriting where the random variable c indicates
the currently written character and defines a distribution over random variables λ^m_t via a Markov
model over G^m.
There are several approaches to use this idea of motion primitives for more efficient robotic
movement control. (Ijspeert et al., 2003; Schaal et al., 2004) use non-linear attractor dynamics as a motion primitive and train them to generate motion that solves a specific task.
(Amit & Matari?c, 2002) use a single attractor system and generate non-linear motion by
modulating the attractor point. These approaches define a primitive as a segment of movement rather than understanding movement as a superposition of concurrent primitives. The
goal of analysing and better understanding biological data is to extract a generative model of
complex movement based on concurrent primitives which may serve as an efficient representation for robotic movement control. This is in contrast to previous studies of handwriting
which usually focus on the problem of character classification rather than generation (Singer
& Tishby, 1994; Hinton & Nair, 2005).
We investigate handwriting data and analyse whether it can be modelled as a superposition
of sparsely activated motion primitives. The approach we take can intuitively be compared
to a Piano Model (also called Piano roll model (Cemgil et al., 2006)). Just as piano music
can (approximately) be modelled as a superposition of the sounds emitted by each key we
follow the idea that biological movement is a superposition of pre-learnt motion primitives.
This implies that the whole movement can be compactly represented by the timing of each
primitive in analogy to a score of music. We formulate a probabilistic generative model that
reflects these assumptions. On the lower level a factorial Hidden Markov Model (fHMM,
Ghahramani & Jordan, 1997) is used to model the output as a combination of signals emitted
from independent primitives (each primitives corresponds to a factor in the fHMM). On the
higher level we formulate a model for the primitive timing dependent upon character class.
The same motion primitives are shared across characters, only their timings differ. We train
this model on handwriting data using an EM-algorithm and thereby infer the primitives and
the primitive timings inherent in this data. We find that the inferred timing posterior for a
specific character is indeed a compact representation for the specific character which allows
for a good reproduction of this character using the learnt primitives. Further, using the
timing model learnt on the higher level we can generate new movement ? new samples of
characters (in the same writing style as the data), and also scribblings that exhibit local
similarity to written characters when the higher level timing control is omitted.
Section 2 will introduce the probabilistic generative model we propose. Section 3 briefly
describes the learning procedures which are variants of the EM-algorithm adapted to our
model. Finally in section 4 we present results on handwriting data recorded with a digitisation tablet, show the primitives and timing code we extract, and demonstrate how the
learnt model can be used to generate new samples of characters.
2
2 Model
Our analysis of primitives and primitive timings in handwriting is based on formulating a
corresponding probabilistic generative model. This model can be described on two levels.
On the lower level (Figure 1(A)) we consider a factorial Hidden Markov Model (fHMM)
where each factor produces the signal of a single primitive and the linear combination of
factors generates the observed movement Yt . This level is introduced in the next section
and was already considered in (Williams et al., 2006; Williams et al., 2007). It allows the
learning and identification of primitives in the data but does not include a model of their
timing. In this paper we introduce the full generative model (Figure 1(B)) which includes
a generative model for the primitive timing conditioned on the current character.
2.1 Modelling primitives in data
Let M be the number of primitives we allow for. We describe a primitive as a strongly
constrained Markov process which remains in a zero state most of the time but with some
probability λ̄ ∈ [0, 1] enters the 1 state and then rigorously runs through all states 2, .., K
before it enters the zero state again. While running through its states, this process emits a
fixed temporal signal. More rigorously, we have a fHMM composed of M factors. The state
of the mth factor at time t is S^m_t ∈ {0, .., K_m}, and the transition probabilities are

    P(S^m_t = b | S^m_{t-1} = a, λ̄^m_t) =
        { λ̄^m_t        for a = 0 and b = 1
        { 1 − λ̄^m_t    for a = 0 and b = 0
        { 1            for a ≠ 0 and b = (a + 1) mod K_m
        { 0            otherwise.    (1)

This process is parameterised by the onset probability λ̄^m_t of the mth primitive at time t.
The M factors emit signals which are combined to produce the observed motion trajectory
Y_t according to

    P(Y_t | S^{1:M}_t) = N(Y_t, Σ_{m=1}^M W^m_{S^m_t}, C),    (2)

where N(x, a, A) is the Gaussian density function over x with mean a and covariance matrix
A. This emission is parameterised by W^m_s, which is constrained to W^m_0 = 0 (the zero state
does not contribute to the observed signal), and C is a stationary output covariance.

The vector W^m_{1:K_m} = (W^m_1, .., W^m_{K_m}) is what we call a primitive and, to stay in the analogy,
can be compared to the sound of a piano key. The parameters λ̄^m_t ∈ [0, 1] could be
compared to the score of the music. We will describe below how we learn the primitives
W^m_s and also adapt the primitive lengths K_m using an EM-algorithm.
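As an illustration of Eqs. (1) and (2), a minimal generative sampler might look as follows (our own sketch, not the authors' code; state 0 of each factor is the silent state and W[m] has a zero row for it):

```python
import numpy as np

def sample_fhmm(W, lam_bar, C, T_steps, seed=0):
    """Sample a trajectory Y from the primitive fHMM of Eqs. (1)-(2).

    W:       list of M arrays; W[m][s] is the emission of state s, with W[m][0] = 0.
    lam_bar: (M, T_steps) array of onset probabilities.
    C:       output covariance of the Gaussian emission."""
    rng = np.random.default_rng(seed)
    M = len(W)
    d = W[0].shape[1]
    S = np.zeros(M, dtype=int)
    Y = np.zeros((T_steps, d))
    for t in range(T_steps):
        for m in range(M):
            K_m = W[m].shape[0]
            if S[m] == 0:
                S[m] = 1 if rng.random() < lam_bar[m, t] else 0  # onset of primitive m
            else:
                S[m] = (S[m] + 1) % K_m     # rigid sweep back to the zero state
        mean = sum(W[m][S[m]] for m in range(M))  # superposition of active primitives
        Y[t] = rng.multivariate_normal(mean, C)
    return Y
```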
2.2 A timing model
Considering the λ̄'s to be fixed parameters is not a suitable model of biological movement.
The usage and timing of primitives depends on the character that is written and the timing
varies from character to character. Also, the λ̄'s actually provide a rather high-dimensional
representation for the movement. Our model takes a different approach to parameterise
the primitive activations. For instance, if a primitive is activated twice in the course of the
movement we assume that there have been two signals ("spikes") emitted from a higher
level process which encode the activation times. More formally, let c be a discrete random
variable indicating the character to be written, see Figure 1(B). We assume that for each
primitive we have another Markovian process which generates a length-L sequence of states
G^m_l ∈ {1, .., R, 0},

    P(G^m_{1:L} | c) = P(G^m_1 | c) Π_{l=2}^L P(G^m_l | G^m_{l-1}, c).    (3)

The states G^m_l encode which primitives are activated and how they are timed, as seen in
Figure 2(b).

Figure 2: (a) Illustration of equation (4): the Markov process on the states G^m_l emits Gaussian
components to the onset probabilities P(λ^m_t = 1). (b) Scatter plot of the MAP onsets of a single
primitive for different samples of the same character "p". Gaussian components can be fit to each
cluster.

We now define λ^m_t to be a binary random variable that indicates the activation
of a primitive at time t, which we call a "spike". For a zero-state G^m_l = 0 no spike is
emitted and thus the probability of λ^m = 1 is not increased. A non-zero state G^m_l = r adds
a Gaussian component to the probabilities of λ^m_t = 1, centred around a typical spike time
μ^m_r and with variance σ^m_r,

    P(λ^m_t = 1 | G^m_{1:L}, c) = Σ_{l=1}^L δ_{G^m_l > 0} ∫_{t−0.5}^{t+0.5} N(t', μ^m_{G^m_l}, σ^m_{G^m_l}) dt'.    (4)

Here, δ_{G^m_l > 0} is zero for G^m_l = 0 and 1 otherwise, and the integral essentially discretises the
Gaussian density. Additionally, we restrict the Markovian process such that each Gaussian
component can emit at most one spike, i.e., we constrain P(G^m_l | G^m_{l-1}, c) to be a lower
triangular matrix. Given the λ's, the state transitions in the fHMM factors are as in equation
(1), replacing λ̄ by λ.
To summarise, the spike probabilities of λ^m_t = 1 are a sum of at most L Gaussian components
centred around the means μ^m_l and with variances σ^m_l. Whether or not such a Gaussian
component is present is itself randomised and depends on the states G^m_l. We can observe at
most L spikes in one primitive; the spike times between different primitives are independent,
but we have a Markovian dependency between the presence and timing of spikes within a
primitive. The whole process is parameterised by the initial state distribution P(G^m_1 | c),
the transition probabilities P(G^m_l | G^m_{l-1}, c), the spike means μ^m_r and the variances σ^m_r. All
these parameters will be learnt using an EM-algorithm.
This timing model is motivated from results with the fHMM-only model: when training
the fHMM on data of a single character and then computing the MAP spike times using
a Viterbi alignment for each data sample, we find that the MAP spike times are roughly
Gaussian distributed around a number of means (see Figure 2(b)). This is why we used a
sum of Gaussian components to define the onset probabilities P(λ = 1). However, the data
is more complicated than provided for by a simple Mixture of Gaussians. Not every sample
includes an activation for each cluster (which is a source of variation in the handwriting)
and there cannot be more than one spike in each cluster. Therefore we introduced the
constrained Markov process on the states G^m_l which may skip the emission of some spikes.
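For one primitive, the onset probabilities of Eq. (4) can be computed by discretising each Gaussian component over unit time bins. A minimal sketch (ours, with σ interpreted as a standard deviation, and a clip added since the sum is treated as a probability):

```python
import numpy as np
from scipy.stats import norm

def onset_probabilities(G, mu, sigma, T_steps):
    """P(lambda_t = 1 | G, c) of Eq. (4) for one primitive.

    G: length-L sequence of timing states (0 = no spike emitted).
    mu, sigma: spike-time means and standard deviations, indexed by state."""
    t = np.arange(T_steps)
    p = np.zeros(T_steps)
    for g in G:
        if g > 0:
            # integral of N(., mu[g], sigma[g]) over [t - 0.5, t + 0.5]
            p += norm.cdf(t + 0.5, mu[g], sigma[g]) - norm.cdf(t - 0.5, mu[g], sigma[g])
    return np.clip(p, 0.0, 1.0)
```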
3 Inference and learning
In the experiments we will compare both the fHMM without the timing model (Figure 1(A))
and the full model including the timing model (Figure 1(B)).
In the fHMM-only model, inference in the fHMM is done using variational inference as
described in (Ghahramani & Jordan, 1997). Using a standard EM-algorithm we can train
? To prevent overfitting we assume the spike probabilities
the parameters W , C and ?.
4
10
2
6
5
4
?2
?4
?10
?9
?6
?3
?9
?5
?0.5
?10
0 0.20.4
1
(a)
?4 ?2 0 2
Distance /mm
0
0
0 1 2 3
?4.5 ?1
0.25
0
?0.25
?0.5
0.25
?0.25
0.25 0.5 0.75
Time /s
?0.5
2.5
?0.25 0
0.5
0
?14
1
0
0
?5
?8
?12
2
?8
?6
?10
3
?2
?4
Distance /mm
?2
7
0
?0.25
?0.5
4
0 0.2 0.4
0
?0.25
?0.5
?0.75
0 0.20.4
?0.2 0
?3
?7 ?1
?8
?2
?3
?0.1
?0.5
?0.2
Distance /mm
?0.2?0.1
?4
?3
?8
?7 ?4
?7
?4
?5
?4
?10
?5
?12.5
?15
?8
?3
?8
?3
?17.5
?0.3
?0.2 0
?3
?4
?2
?8?7
?4
?9
?3
?3
?7
?7.5
?4
?5
0
?0.25
?8
?4
?3
?7
?1
?5
?5
?8
?1
?7
?1
?5
?3
?4
?2.5
0
?0.25
?0.5
?0.75
?10
Distance /mm
Primitive number
8
?2
?7
?5
?4
?9
?2
?1
?8
5
0.5
?5
Distance /mm
?7
?6
?8
?1
0
?7
?2?8
?1
?5
0
0
9
?7
0
(b)
?5
?2.5
0
2.5
5
Distance /mm
(c)
Figure 3: (a) Reconstruction of a character from a training dataset, using a subset of the primitives.
The thickness of the reconstruction represents the pressure of the pen tip, and the different colours
represent the activity of the different primitives, the onsets of which are labelled with an arrow.
The posterior probability of primitive onset is shown on the left, highlighting why a spike timing
representation is appropriate. (b) Plots of the 10 extracted primitives, as drawn on paper. (c)
Generative samples using a flat primitive onset prior, showing scribbling behaviour of uncoupled
model.
? m for each
are stationary (?m
t constant over t) and learn only a single mean parameter ?
primitive.
In the full model, inference is an iterative process of inference in the timing model and
inference in the fHMM. Note that variational inference in the fHMM is itself an iterative
process which recomputes the posteriors over S^m_t after adapting the variational parameters.
We couple this iteration to inference in the timing model in both directions: in each iteration,
the posterior over S^m_t defines observation likelihoods for inference in the Markov models G^m_l.
Inversely, the resulting posterior over G^m_l defines a new prior over λ's (a message from G^m_l to
λ^m_t) which enters the fHMM inference in the next iteration. Standard M-steps are then used
to train all parameters of the fHMM and the timing model. In addition, we use heuristics to
adapt the length K_m of each primitive: we increase or decrease K_m depending on whether
the learnt primitive is significantly different from zero in the last time steps. The number of
parameters used in the model therefore varies during learning, as the size of W depends
upon K_m, and the size of G depends upon the number of inferred spikes.
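Schematically, the coupled procedure can be written as follows (a pseudocode-level sketch of our own; all routine names are placeholders, not the authors' implementation):

```python
def coupled_em(data, model, n_iter=50):
    """Coupled inference and learning loop of Section 3 (schematic only)."""
    for _ in range(n_iter):
        for sample in data:
            q_S = model.fhmm_posterior(sample)            # variational inference over S
            q_G = model.timing_posterior(q_S, sample.c)   # forward-backward over G
            model.set_spike_prior(sample, q_G)            # message from G to lambda
        model.m_step(data)                 # update W, C and the timing parameters
        model.adapt_primitive_lengths()    # heuristic growth/shrinkage of K_m
    return model
```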
In the experiments we will also investigate the reconstruction of data. By this we mean
that we take a trained model, use inference to compute the MAP spikes λ for a specific
data sample, then we use these λ's and the definition of our generative model (including the
learnt primitives W) to generate a trajectory which can be compared to the original data
sample. Such a reconstruction can be computed using both the fHMM-only model and the
full model.
4 Results

4.1 Primitive and timing analysis using the fHMM-only
We first consider a data set of 300 handwritten "p"s recorded using an INTUOS 3 WACOM
digitisation tablet (http://www.wacom.com/productinfo/9x12.cfm), providing trajectory data at 200Hz. The trajectory Y_t we model is the normalised first differential of the
data, so that the data mean was close to zero, providing the requirements for the zero
state assumption in the model constraints. Three dimensional data was used: x-position,
y-position, and pressure. The data collected were separated into samples, or characters,
allowing each sample to be separately normalised.

Our choice of parameter was M = 10 primitives and we initialised all K_m = 20 and constrained them to be smaller than 100 throughout learning.

We trained the fHMM-only model on this dataset. Figure 3(a) shows the reconstruction of a
specific sample of this data set and the corresponding posterior over λ's. This clean posterior
is the motivation for introducing a model of the spike timings as a compact representation
of the data. Equally, the reconstruction (using the Viterbi-aligned MAP spikes) shows the
sufficiency of the spike code to generate the character. Figure 3(b) shows the primitives W^m
(translated back into pen-space) that were learnt and implicitly used for the reconstruction
of the "p". These primitives can be seen to represent typical parts of the "p" character; the
arrows in the reconstruction indicate when they are activated.

Figure 4: (a) Reconstructions of "p"s using the full model. (b) Histogram of the reconstruction
error, in 3-dimensional pen movement velocity space. These errors were produced using over
300 samples of a single character. (c) Generative samples using the full generative model (Figure
1(B)).
The fHMM-only model can be used to reconstruct a specific data sample using the MAP λ's
of that sample, but it cannot "autonomously" produce characters since it lacks a model of
the timing. To show the importance of this spike timing information, we can demonstrate
the effects of removing it. When using the fHMM-only model as a generative model with
the learnt stationary spike probabilities λ̄^m, the result is a form of primitive babbling, as can
be seen in Figure 3(c). Since these scribblings are generated by random expression of the
learnt primitives they locally resemble parts of the "p" character.

The primitives generalise to other characters if the training dataset contained sufficient
variation. Further investigation has shown that 20 primitives learnt from 12 character types
are sufficiently generalised to represent all remaining novel character types without further
learning, by using a single E-step to fit the pre-learnt parameters to a novel dataset.
4.2 Generating new characters using the full generative model
Next we trained the full model on the same "p"-dataset. Figure 4(a) shows the reconstructions of some samples of the data set. To the right we see the reconstruction errors in
velocity space, showing that at many time points a perfect reconstruction was attained. Since
the full model includes a timing model it can also be run autonomously as a generative
model for new character samples. Figure 4(c) displays such new samples of the character
"p" generated by the learnt model.

As a more challenging problem we collected a data set of over 450 character samples of
the letters a, b and c. The full model includes the written character class as a random
variable and can thus be trained on multi-character data sets. Note that we restrict the
total number of primitives to M = 10, which will require a sharing of primitives across
characters. Figure 5(a) shows samples of the training data set while Figure 5(b) shows
reconstructions of the same samples using the MAP λ's in the full model. Generally, the
reconstructions using the full model are better than those using the fHMM-only model. This can
be understood by investigating the distribution of the MAP λ's across different samples under
the fHMM-only and the full model, see Figure 6. Coupling the timing and the primitive
model during learning has the effect of trying to learn primitives from data that are usually in
the same place. Thus, using the full model the inferred spikes are more compactly clustered
at the Gaussian components due to the prior imposed from the timing model (the thick
black lines correspond to Equation (4)).
Figure 5: (a) Training dataset, showing 3 character types, and variation. (b) Reconstruction of
the dataset using 10 primitives learnt from the dataset in (a). (c) Generative samples using the full
generative model (Figure 1(B)).
Figure 6: (a) Scatter plot of primitive onset spikes for a single character type across all samples
and primitives, showing the clustering of certain primitives in particular parts of a character. The
horizontal bars separate the results for different primitives. (b) Scatter plot of spikes from the same
dataset, with the coupled model, showing suppression of outlying spikes and tightening of clusters.
The thick black lines display the prior over λ's imposed from the timing model via Equation (4).
Finally, we run the full model autonomously to generate new character samples, see Figure
5(c). Here the character class c is first sampled uniformly at random and then all learnt
parameters are used to eventually sample a trajectory Y_t. The generative samples show
interesting variation while still being readable as a character.
5 Conclusions
In this paper we have shown that it is possible to represent handwriting using a primitive
based model. The model consists of a superposition of several arbitrary fixed functions.
These functions are time-extended, of variable length (during learning), and are superimposed with learnt offsets. The timing of activations is crucial to the accurate reproduction of
the character. With a small amount of timing variation, a distorted version of the original
character is reproduced, whilst large (and coordinated) differences in the timing pattern
produce different character types.
The spike code provides a compact representation of movement, unlike that which has previously been explored in the domain of robotic control. We have proposed to use Markov
processes conditioned on the character as a model for these spike emissions. Besides contributing to a better understanding of biological movement, we hope that such models will
inspire applications also in robotic control, e.g., for movement optimisation based on spike
codings.
An assumption made in this work is that the primitives are learnt velocity profiles. We have
not included any feedback control systems in the primitive production; however, the presence
of low-level feedback, such as in a spring system (Hinton & Nair, 2005) or dynamic motor
primitives (Ijspeert et al., 2003; Schaal et al., 2004), would be interesting to incorporate into
the model, and could perhaps be done by changing the outputs of the fHMM to parameterise
the spring systems rather than be Gaussian distributions of velocities.
We make no assumptions about how the primitives are learnt in biology. It would be
interesting to study the evolution of the primitives during human learning of a new character
set. As humans become more confident at writing a character, the reproduction becomes
faster, and more repeatable. This could be related to a more accurate and efficient use
of primitives already available. However, it might also be the case that new primitives
are learnt, or old ones adapted. More research needs to be done to examine these various
possibilities of how humans learn new motor skills.
Acknowledgements
Marc Toussaint was supported by the German Research Foundation (DFG), Emmy Noether fellowship TO 409/1-3.
References
Amit, R., & Matarić, M. (2002). Parametric primitives for motor representation and control. Proc. of the Int. Conf. on Robotics and Automation (ICRA) (pp. 863-868).
Bizzi, E., d'Avella, A., Saltiel, P., & Trensch, M. (2002). Modular organization of spinal motor systems. The Neuroscientist, 8, 437-442.
Bizzi, E., Giszter, S., Loeb, E., Mussa-Ivaldi, F., & Saltiel, P. (1995). Modular organization of motor behavior in the frog's spinal cord. Trends in Neurosciences, 18, 442-446.
Cemgil, A., Kappen, B., & Barber, D. (2006). A generative model for music transcription. IEEE Transactions on Speech and Audio Processing, 14, 679-694.
d'Avella, A., & Bizzi, E. (2005). Shared and specific muscle synergies in natural motor behaviors. PNAS, 102, 3076-3081.
d'Avella, A., Saltiel, P., & Bizzi, E. (2003). Combinations of muscle synergies in the construction of a natural motor behavior. Nature Neuroscience, 6, 300-308.
Ghahramani, Z., & Jordan, M. (1997). Factorial hidden Markov models. Machine Learning, 29, 245-275.
Hinton, G. E., & Nair, V. (2005). Inferring motor programs from images of handwritten digits. Advances in Neural Information Processing Systems 18 (NIPS 2005) (pp. 515-522).
Ijspeert, A. J., Nakanishi, J., & Schaal, S. (2003). Learning attractor landscapes for learning motor primitives. Advances in Neural Information Processing Systems 15 (NIPS 2003) (pp. 1523-1530). MIT Press, Cambridge.
Kargo, W., & Giszter, S. (2000). Rapid corrections of aimed movements by combination of force-field primitives. J. Neurosci., 20, 409-426.
Schaal, S., Peters, J., Nakanishi, J., & Ijspeert, A. (2004). Learning movement primitives. ISRR2003.
Singer, Y., & Tishby, N. (1994). Dynamical encoding of cursive handwriting. Biol. Cybern., 71, 227-237.
Williams, B., Toussaint, M., & Storkey, A. (2006). Extracting motion primitives from natural handwriting data. Int. Conf. on Artificial Neural Networks (ICANN) (pp. 634-643).
Williams, B., Toussaint, M., & Storkey, A. (2007). A primitive based generative model to infer timing information in unpartitioned handwriting data. Int. Jnt. Conf. on Artificial Intelligence (IJCAI) (pp. 1119-1124).
Wolpert, D. M., Ghahramani, Z., & Flanagan, J. R. (2001). Perspectives and problems in motor learning. TRENDS in Cog. Sci., 5, 487-494.
Wolpert, D. M., & Kawato, M. (1998). Multiple paired forward and inverse models for motor control. Neural Networks, 11, 1317-1329.
| 3204 |@word version:1 briefly:1 km:8 covariance:2 pressure:3 thereby:1 kappen:1 ivaldi:1 initial:1 score:2 current:2 com:1 activation:6 scatter:3 written:5 shape:1 motor:13 plot:4 stationary:3 generative:19 intelligence:1 provides:3 contribute:1 differential:1 become:1 consists:1 introduce:2 indeed:1 rapid:1 behavior:3 spine:1 planning:1 examine:1 multi:1 roughly:1 considering:1 becomes:1 stm:4 provided:1 underlying:1 what:1 whilst:1 temporal:1 every:1 uk:4 control:11 before:1 generalised:1 understood:1 timing:42 local:1 cemgil:2 despite:1 encoding:1 approximately:1 black:2 might:1 twice:1 frog:2 challenging:1 flanagan:1 block:2 digit:1 procedure:1 adapting:1 significantly:1 pre:2 suggest:1 cannot:1 close:1 cybern:1 writing:2 www:1 map:8 imposed:2 yt:7 primitive:94 williams:6 formulate:2 variation:5 construction:1 gm:18 tablet:2 storkey:4 velocity:5 trend:2 sparsely:1 observed:3 enters:2 capture:1 cord:1 autonomously:3 movement:29 decrease:1 rigorously:2 dynamic:2 trained:4 segment:1 serve:1 upon:5 compactly:2 translated:1 represented:2 various:1 train:4 separated:1 recomputes:1 describe:2 artificial:2 emmy:1 heuristic:1 modular:2 otherwise:2 reconstruct:1 triangular:1 ability:1 g1:1 analyse:2 noisy:1 itself:2 reproduced:1 sequence:2 w1m:1 reconstruction:17 propose:1 fr:1 tu:2 aligned:1 ijcai:1 cluster:4 requirement:1 produce:6 generating:2 perfect:1 ben:2 coupling:2 depending:1 ac:2 school:2 solves:1 c:1 skip:1 indicate:3 implies:1 resemble:1 differ:1 direction:1 thick:2 human:4 require:1 behaviour:1 st1:1 clustered:1 investigation:1 biological:9 correction:1 mm:16 around:3 considered:1 avella:5 sufficiently:1 viterbi:2 cfm:1 bizzi:7 omitted:1 proc:1 currently:1 superposition:5 modulating:1 concurrent:2 amos:1 reflects:1 hope:1 mit:1 sensor:1 gaussian:12 rather:4 encode:2 jnt:1 focus:1 emission:3 schaal:4 modelling:2 indicates:1 likelihood:1 superimposed:1 contrast:1 suppression:1 inference:13 dependent:2 mth:3 hidden:3 germany:1 pixel:1 classification:1 constrained:4 biology:1 represents:1 kargo:2 summarise:1 inherent:1 randomly:1 composed:1 loeb:1 dfg:1 mussa:1 attractor:4 organization:2 neuroscientist:1 message:1 investigate:2 possibility:1 alignment:1 mixture:1 activated:4 accurate:3 emit:2 integral:1 old:1 timed:1 instance:1 increased:1 markovian:3 introducing:1 subset:1 uniform:1 tishby:2 dependency:1 thickness:1 varies:2 learnt:20 combined:1 confident:1 st:1 density:2 stay:1 probabilistic:3 informatics:2 tip:1 w1:1 again:1 recorded:2 conf:3 style:2 de:1 centred:2 sec:1 wk:1 includes:4 coding:1 int:3 automation:1 coordinated:1 onset:8 depends:4 complicated:1 roll:1 variance:4 correspond:1 landscape:1 fhmm:22 modelled:4 identification:1 handwritten:2 produced:2 trajectory:6 sharing:1 ed:2 definition:1 initialised:1 pp:5 handwriting:15 couple:1 gain:1 emits:2 dataset:10 sampled:1 actually:1 back:1 higher:5 dt:1 attained:1 follow:1 inspire:1 sufficiency:1 done:4 though:1 strongly:1 just:1 parameterised:3 horizontal:1 replacing:1 lack:1 defines:3 perhaps:1 usage:1 effect:2 requiring:1 evolution:1 during:5 wacom:2 discretises:1 m:3 trying:1 hill:2 demonstrate:2 motion:11 matari:2 image:1 variational:3 novel:2 superior:1 kawato:2 spinal:2 organism:1 cambridge:1 enter:1 robot:1 similarity:1 add:1 posterior:7 perspective:1 certain:1 binary:1 muscle:4 seen:3 signal:6 babbling:1 multiple:1 full:16 desirable:1 sound:2 infer:2 pnas:1 faster:1 adapt:2 nakanishi:2 equally:1 paired:1 variant:1 essentially:1 optimisation:1 iteration:3 represent:4 histogram:1 robotics:1 addition:1 fellowship:1 
separately:1 source:1 crucial:1 unlike:1 hz:1 mod:1 jordan:3 emitted:4 call:2 extracting:1 presence:2 fit:2 architecture:1 restrict:2 triggering:1 idea:2 whether:3 motivated:1 expression:1 colour:1 peter:1 speech:1 generally:1 aimed:1 cursive:1 factorial:5 amount:1 locally:2 generate:8 http:1 neuroscience:2 discrete:2 key:2 drawn:1 changing:1 prevent:1 clean:1 sum:2 run:3 inverse:1 letter:1 jitter:1 franklinstr:1 distorted:1 place:1 throughout:1 forrest:2 display:2 activity:1 adapted:2 constraint:1 constrain:1 flat:1 generates:2 formulating:1 spring:2 x12:1 according:1 combination:4 across:4 describes:1 em:5 character:48 smaller:1 biologically:2 intuitively:1 equation:4 remains:1 randomised:1 previously:1 eventually:1 german:1 singer:2 noether:1 available:1 gaussians:1 actuator:1 hierarchical:1 observe:1 appropriate:1 original:2 running:1 include:1 remaining:1 clustering:1 music:4 ghahramani:4 amit:2 icra:1 eh1:2 question:1 spike:32 already:2 parametric:1 exhibit:1 distance:16 separate:1 berlin:3 sci:1 hmm:3 barber:1 collected:2 code:4 length:4 index:2 besides:1 illustration:1 providing:2 ql:2 executed:1 difficult:1 tightening:1 allowing:3 observation:1 markov:9 saltiel:3 hinton:3 extended:1 arbitrary:1 inferred:3 introduced:2 uncoupled:1 nip:2 bar:1 usually:2 below:1 pattern:1 dynamical:1 program:1 built:1 including:2 suitable:1 natural:3 inversely:1 coupled:2 extract:2 prior:4 understanding:4 piano:4 acknowledgement:1 contributing:1 generation:2 parameterise:2 interesting:3 analogy:2 toussaint:4 foundation:1 sufficient:1 production:1 course:1 accounted:1 gl:4 last:1 supported:1 organise:1 allow:1 normalised:2 generalise:1 edinburgh:2 distributed:1 feedback:3 world:1 transition:3 sensory:1 forward:1 made:1 outlying:1 far:1 transaction:1 compact:6 skill:1 implicitly:1 transcription:1 synergy:3 robotic:5 overfitting:1 investigating:1 pen:3 iterative:2 modularity:1 why:2 additionally:1 learn:4 nature:1 complex:2 marc:2 domain:1 icann:1 neurosci:1 arrow:2 whole:2 motivation:1 profile:2 sub:2 position:4 inferring:1 explicit:1 removing:1 cog:1 specific:8 repeatable:1 showing:5 offset:1 explored:1 evidence:2 reproduction:3 importance:1 execution:1 conditioned:2 forcefield:1 wolpert:4 highlighting:1 contained:1 corresponds:1 adaption:1 extracted:1 nair:3 goal:1 labelled:1 shared:3 analysing:1 included:1 typical:2 giszter:3 called:1 ijspeert:4 total:1 indicating:1 formally:1 incorporate:1 audio:1 biol:1 |
2,431 | 3,205 | The pigeon as particle filter
Nathaniel D. Daw
Center for Neural Science
and Department of Psychology
New York University
[email protected]
Aaron C. Courville
D?partement d?Informatique
et de recherche op?rationnelle
Universit? de Montr?al
[email protected]
Abstract
Although theorists have interpreted classical conditioning as a laboratory model
of Bayesian belief updating, a recent reanalysis showed that the key features that
theoretical models capture about learning are artifacts of averaging over subjects.
Rather than learning smoothly to asymptote (reflecting, according to Bayesian
models, the gradual tradeoff from prior to posterior as data accumulate), subjects
learn suddenly and their predictions fluctuate perpetually. We suggest that abrupt
and unstable learning can be modeled by assuming subjects are conducting inference using sequential Monte Carlo sampling with a small number of samples
? one, in our simulations. Ensemble behavior resembles exact Bayesian models
since, as in particle filters, it averages over many samples. Further, the model is
capable of exhibiting sophisticated behaviors like retrospective revaluation at the
ensemble level, even given minimally sophisticated individuals that do not track
uncertainty in their beliefs over trials.
1 Introduction
A central tenet of the Bayesian program is the representation of beliefs by distributions, which assign probability to each of a set of hypotheses. The prominent theoretical status accorded to such
ambiguity seems rather puzzlingly at odds with the all-or-nothing nature of our everyday perceptual
lives. For instance, subjects observing ambiguous or rivalrous visual displays famously report experiencing either percept alternately and exclusively; for even the most fervent Bayesian, it seems
impossible simultaneously to interpret the Necker cube as potentially facing either direction [1].
A longstanding laboratory model for the formation of beliefs and their update in light of experience
is Pavlovian conditioning in animals, and analogously structured prediction tasks in humans. There
is a rich program of reinterpreting data from such experiments in terms of statistical inference [2,
3, 4, 5, 6]. The data do appear in a number of respects to reflect key features of the Bayesian ideal;
specifically, that subjects represent beliefs as distributions with uncertainty and appropriately
employ it in updating them in light of new evidence. Most notable in this respect are retrospective
revaluation phenomena (e.g., [7]), which demonstrate that subjects are able to revise previously
favored beliefs in a way suggesting that they had entertained alternative hypotheses all along [6].
However, the data addressed by such models are, in almost all cases, averages over large numbers of
subjects. This raises the question whether individuals really exhibit the sophistication attributed to
them, or if it instead somehow emerges from the ensemble. Recent work by Gallistel and colleagues
[8] frames the problem particularly sharply. Whereas subject-averaged responses exhibit smooth
learning curves approaching asymptote (interpreted by Bayesian modelers as reflecting the gradual
tradeoff from prior to posterior as data accumulate), individual records exhibit neither smooth learning nor steady asymptote. Instead responding emerges abruptly and fluctuates perpetually. These
analyses soundly refute all previous quantitative theories of learning in these tasks: both Bayesian
and traditional associative learning.
Here we suggest that individuals' behavior in conditioning might be understood in terms of Monte
Carlo methods for sequentially sampling different hypotheses (e.g., [9]). Such a model preserves
the insights of a statistical framing while accounting for the characteristics of individual records.
Through the metaphor of particle filtering, it also explains why exact Bayesian reasoning is a good
account of the ensemble. Finally, it addresses another common criticism of Bayesian models: that
they attribute wildly intractable computations to the individual. A similar framework has also recently been used to characterize human categorization learning [10].
To make our point in the most extreme way, and to explore the most novel corner of the model space,
we here develop as proof of concept the idea that (as with percepts in the Necker cube) subjects
sample only a single hypothesis at a time. That is, we treat them as particle filters employing only
one particle. We show that even given individuals of such minimal capacity, sophisticated effects
like retrospective revaluation can emerge in the ensemble. Clearly intermediate models are possible,
either employing more samples or mixtures of sampling and exact methods within the individual,
and the insights developed here will extend to those cases. We therefore do not mean to defend
the extreme claim that subjects never track or employ uncertainty (we think this would be highly
maladaptive), but instead intend to explore the role of sampling and also point out how poor is
the evidentiary record supporting more sophisticated accounts, and how great is the need for better
experimental and analytical methods to test them.
2 Model
2.1 Conditioning as exact filtering
In conditioning experiments, a subject (say, a dog) experiences outcomes ("reinforcers," say, food)
paired with stimuli (say, a bell). That subjects learn thereby to predict outcomes on the basis of
antecedent stimuli is demonstrated by the finding that they emit anticipatory behaviors (such as
salivation to the bell) which are taken directly to reflect the expectation of the outcome. Human experiments are analogously structured, but using various cover stories (such as disease diagnosis) and
with subjects typically simply asked to state their beliefs about how much they expect the outcome.
A standard statistical framing for such a problem [5], which we will adopt here, is to assume that
subjects are trying to learn the conditional probability P (r | x) of (real-valued) outcomes r given
(vector-valued) stimuli x. One simple generative model is to assume that each stimulus xi (bells,
lights, tones) produces reinforcement according to some unknown parameter wi ; that the contributions of multiple stimuli sum; and that the actual reward is Gaussian in the the aggregate. That is,
P (r | x) = N (x ? w, ?o2 ), where we take the variance parameter as known. The goal of the subject
is then to infer the unknown weights in order to predict reinforcement. If we further assume the
weights w can change with time, and take that change as Gaussian diffusion,
$$P(w_{t+1} \mid w_t) = N(w_t, \sigma_d^2 I) \qquad (1)$$
then we complete the well-known generative model for which Bayesian inference about the weights
can be accomplished using the Kalman filter algorithm [5]. Given a Gaussian prior on $w_0$, the
posterior distribution $P(w_t \mid x_{1..t}, r_{1..t})$ also takes a Gaussian form, $N(\hat{w}_t, \Sigma_t)$, with the mean
and covariance given by the recursive Kalman filter update equations.
Returning to conditioning, a subject's anticipatory responding to test stimulus $x_t$ is taken to be
proportional to her expectation about $r_t$ conditional on $x_t$, marginalizing out uncertainty over the
weights: $E(r_t \mid x_t, \hat{w}_t, \Sigma_t) = x_t \cdot \hat{w}_t$.
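For concreteness, a minimal NumPy sketch of this exact-inference step follows (our illustration, not code from any published implementation; the default noise parameters are placeholders):

import numpy as np

def kalman_step(w_hat, Sigma, x, r, sigma_d=0.1, sigma_o=0.5):
    # Prediction: diffusion P(w_{t+1} | w_t) = N(w_t, sigma_d^2 I)
    # inflates the posterior covariance.
    Sigma_pred = Sigma + sigma_d ** 2 * np.eye(len(w_hat))
    # Correction for the scalar observation r ~ N(x . w, sigma_o^2).
    innovation = r - x @ w_hat
    s = x @ Sigma_pred @ x + sigma_o ** 2   # innovation variance
    gain = Sigma_pred @ x / s               # Kalman gain (a vector)
    w_new = w_hat + gain * innovation
    Sigma_new = Sigma_pred - np.outer(gain, x) @ Sigma_pred
    return w_new, Sigma_new

The anticipatory response to a test stimulus x is then simply x @ w_hat.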
2.2 Conditioning as particle filtering
Here we assume instead that subjects do not maintain uncertainty in their posterior beliefs, via
covariance $\Sigma_t$, but instead that subject $L$ maintains a point estimate $\tilde{w}_t^L$ and treats it as true with
certainty. Even given such certainty, because of diffusion intervening between $t$ and $t+1$, $\tilde{w}_{t+1}^L$
will be uncertain; let us assume that she recursively samples her new point estimate $\tilde{w}_{t+1}^L$ from the
posterior given this diffusion and the new observation $x_{t+1}, r_{t+1}$:
$$\tilde{w}_{t+1}^L \sim P(w_{t+1}^L \mid w_t = \tilde{w}_t^L, x_{t+1}, r_{t+1}) \qquad (2)$$
This is simply a Gaussian given by the standard Kalman filter equations. In particular, the mean of
the sampling distribution is $\tilde{w}_t^L + x_{t+1}\,\kappa\,(r_{t+1} - x_{t+1} \cdot \tilde{w}_t^L)$. Here the Kalman gain $\kappa = \sigma_d^2/(\sigma_d^2 + \sigma_o^2)$
is constant; the expected update in $\tilde{w}$, then, is just that given by the Rescorla-Wagner [11] model.
Such seemingly peculiar behavior may be motivated by the observation that, assuming that the initial
$\tilde{w}_0^L$ is sampled according to the prior, this process also describes the evolution of a single sample
in particle filtering by sequential importance sampling, with Equation 2 as the optimal proposal
distribution [9]. (In this algorithm, particles evolve independently by sequential sampling, and do
not interact except for resampling.)
Of course, the idea of such sampling algorithms is that one can estimate the true posterior over $w_t$
by averaging over particles. In importance sampling, the average must be weighted according to
importance weights. These (here, the product of $P(r_{t+1} \mid x_{t+1}, w_t = \tilde{w}_t^L)$ over each $t$) serve to
squelch the contribution of particles whose trajectories turn out to be conditionally more unlikely
given subsequent observations. If subjects were to behave in accord with this model, then this would
give us some insight into the ensemble average behavior, though if computed without importance
reweighting, the ensemble average will appear to learn more slowly than the true posterior.
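A correspondingly minimal sketch of the one-particle update of Equation 2 is given below; it is our reading of the model rather than the authors' code, and it generalizes the scalar gain to arbitrary stimulus vectors:

import numpy as np

rng = np.random.default_rng(0)

def particle_step(w_tilde, x, r, sigma_d=0.1, sigma_o=0.5):
    # Treat the previous point estimate as certain: after diffusion the
    # prior over w_{t+1} is N(w_tilde, sigma_d^2 I). Apply one Kalman
    # correction for r ~ N(x . w, sigma_o^2) and sample from the result.
    d = len(w_tilde)
    Sigma = sigma_d ** 2 * np.eye(d)
    s = x @ Sigma @ x + sigma_o ** 2
    gain = Sigma @ x / s
    mean = w_tilde + gain * (r - x @ w_tilde)
    cov = Sigma - np.outer(gain, x) @ Sigma
    return rng.multivariate_normal(mean, cov)

For a single stimulus of unit intensity this gain reduces to the constant $\kappa = \sigma_d^2/(\sigma_d^2 + \sigma_o^2)$ given above.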
2.3 Resampling and jumps
One reason why subjects might employ sampling is that, in generative models more interesting than
the toy linear, Gaussian one used here, Bayesian reasoning is notoriously intractable. However,
the approximation from a small number of samples (or in the extreme case considered here, one
sample) would be noisy and poor. As we can see by comparing the particle filter update rule of
Equation 2 to the Kalman filter, because the subject-as-single-sample does not carry uncertainty
from trial to trial, she is systematically overconfident in her beliefs and therefore tends to be more
reluctant than optimal in updating them in light of new evidence (that is, the Kalman gain is low).
This is the individual counterpart to the slowness at the ensemble level, and at the ensemble level, it
can be compensated for by importance reweighting and also by resampling (for instance, standard
sequential importance resampling; [12, 9]). Resampling kills off conditionally unlikely particles
and keeps most samples in conditionally likely parts of the space, with similar and high importance
weights. Since optimal reweighting and resampling both involve normalizing importance weights
over the ensemble, they are not available to our subject-as-sample.
However, there are some generative models that are more forgiving of these problems. In particular,
consider Yu and Dayan's [13] diffusion-jump model, which replaces Equation 1 with
$$P(w_{t+1} \mid w_t) = (1-\alpha)\,N(w_t, \sigma_d^2 I) + \alpha\,N(0, \sigma_j^2 I) \qquad (3)$$
with $\sigma_j \gg \sigma_d$. Here, the weights usually diffuse as before, but occasionally (with probability $\alpha$)
are regenerated anew. (We refer to these events as "jumps" and the previous model of Equation 1
as a "no-jump" model, even though, strictly speaking, diffusion is accomplished by smaller jumps.)
Since optimal inference in this model is intractable (the number of modes in the posterior grows
exponentially) Yu and Dayan [13] propose maintaining a simplified posterior: they make a sort of
maximum likelihood determination whether a jump occurred or not; conditional on this the posterior
is again Gaussian and inference proceeds as in the Kalman filter.
If we use Equation 3 together with the one-sample particle filtering scheme of Equation 2, then we
simplify the posterior still further by not carrying over uncertainty from trial to trial, but instead
only a point estimate. As before, at each step, we sample from the posterior $P(w_{t+1}^L \mid w_t = \tilde{w}_t^L, x_{t+1}, r_{t+1})$ given total confidence in our previous estimate. This distribution now has two
modes, one representing the posterior given that a jump occurred, the other representing the posterior
given no jump.
Importantly, we are more likely to infer a jump, and resample from scratch, if the observation $r_{t+1}$ is
far from that expected under the hypothesis of no jump, $x_{t+1} \cdot \tilde{w}_t^L$. Specifically, the probability that
no jump occurred (and that we therefore resample according to the posterior distribution given drift,
effectively the chance that the sample "survives" as it would have in the no-jump Kalman filter)
is proportional to $P(r_{t+1} \mid x_{t+1}, w_t = \tilde{w}_t^L, \text{no jump})$. This is also the factor that the trial would
contribute to the importance weight in the no-jump Kalman filter model of the previous section. The
importance weight, in turn, is also the factor that would determine the chance that a particle would
be selected during an exact resampling step [12, 9].
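The resulting two-mode sampling step can be sketched as follows (again our own hedged reconstruction, not the authors' code; the mode probabilities are the per-mode evidences weighted by the prior jump probability, and the parameter defaults follow Figure 2):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def jump_particle_step(w_tilde, x, r, alpha=0.075,
                       sigma_d=0.1, sigma_j=1.0, sigma_o=0.5):
    d = len(w_tilde)

    def mode_posterior(mu0, tau2):
        # Kalman correction of the prior N(mu0, tau2 I) for the scalar
        # observation r ~ N(x . w, sigma_o^2); also return the evidence.
        s = tau2 * (x @ x) + sigma_o ** 2
        gain = tau2 * x / s
        mean = mu0 + gain * (r - x @ mu0)
        cov = tau2 * np.eye(d) - tau2 * np.outer(gain, x)
        return mean, cov, norm.pdf(r, loc=x @ mu0, scale=np.sqrt(s))

    m_stay, C_stay, ev_stay = mode_posterior(w_tilde, sigma_d ** 2)
    m_jump, C_jump, ev_jump = mode_posterior(np.zeros(d), sigma_j ** 2)
    p_stay = (1 - alpha) * ev_stay / ((1 - alpha) * ev_stay + alpha * ev_jump)
    if rng.random() < p_stay:            # the sample "survives"
        return rng.multivariate_normal(m_stay, C_stay)
    return rng.multivariate_normal(m_jump, C_jump)   # resample from scratch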
Figure 1: Aggregate versus individual behavior in conditioning, figures adapted with permission
from [8], copyright 2004 by The National Academy of Sciences of the USA. (a) Mean over subjects
reveals smooth, slow acquisition curve (timebase is in sessions). (b) Individual records are noisier
and with more abrupt changes (timebase is in trials). (c) Examples of fits to individual records
assuming the behavior is piecewise Poisson with abrupt rate shifts.
[Figure 2 plot data omitted; panels (a)-(d) plot average P(r) and per-subject probabilities against trials 0-100 for the jump, no-jump, and exact Kalman models, plus a histogram over the "dynamic interval".]
Figure 2: Simple acquisition in conditioning, simulations using particle filter models. (a) Mean
behavior over samples for jump ($\alpha = 0.075$; $\sigma_j = 1$; $\sigma_d = 0.1$; $\sigma_o = 0.5$) and no-jump ($\alpha = 0$)
particle filter models of conditioning, plotted against exact Kalman filter for same parameters (and
$\alpha = 0$). (b) Two examples of individual subject traces for the no-jump particle filter model. (c)
Two examples of individual subject traces for the particle filter model incorporating jumps. (d)
Distribution over individuals using the jump model of the "dynamic interval" of acquisition, that is
the number of trials over which responding grows from negligible to near-asymptotic levels.
There is therefore an analogy between sampling in this model and sampling with resampling in
the simpler generative model of Equation 1. Of course, this cannot exactly accomplish optimal
resampling, both because the chance that a particle survives should be normalized with respect to
the population, and because the distribution from which a non-surviving particle resamples should
also depend on the ensemble distribution. However, it has a similar qualitative effect of suppressing
conditionally unlikely samples and replacing them ultimately with conditionally more likely ones.
We can therefore view the jumps of Equation 3 in two ways. First, they could correctly model
a jumpy world; by periodically resetting itself, such a world would be relatively forgiving of the
tendency for particles in sequential importance sampling to turn out conditionally unlikely. Alternatively, the jumps can be viewed as a fiction effectively encouraging a sort of resampling to improve
the performance of low-sample particle filtering in the non-jumpy world of Equation 1. Whatever
their interpretation, as we will show, they are critical to explaining subject behavior in conditioning.
3 Acquisition
In this and the following section, we illustrate the behavior of individuals and of the ensemble in
some simple conditioning tasks, comparing particle filter models with and without jumps (Equations
1 and 3).
Figure 1 reproduces some data reanalyzed by Gallistel and colleagues [8], who quantify across a
number of experiments what had long been anecdotally known about conditioning: that individual
records look nothing like the averages over subjects that have been the focus of much theorizing.
Consider the simplest possible experiment, in which a stimulus A is paired repeatedly with food.
(We write this as A+.) Averaged learning curves slowly and smoothly climb toward asymptote
(Figure 1a, here the anticipatory behavior measured is pigeons pecking), just as does the estimate of
the mean, w
?A , in the Kalman filter models.
Viewed in individual records (Figure 1b), the onset of responding is much more abrupt (often it
occurred in a single trial), and the subsequent behavior much more variable. The apparently slow
learning results from the average over abrupt transitions occurring at a range of latencies. Gallistel et
al. [8] characterized the behavior as piecewise Poisson with instantaneous rate changes (Figure 1c).
These results present a challenge to the bulk of models of conditioning, not just Bayesian ones:
associative learning theories like the seminal model of Rescorla & Wagner [11] also ubiquitously
produce smooth, asymptoting learning curves of a sort that these data reveal to be essentially an
artifact of averaging.
One further anomaly with Bayesian models, even as accounts of the average curves, is that acquisition is absurdly slow from a normative perspective: it emerges long after subjects using reasonable
priors would be highly certain to expect reward. This was pointed out by Kakade and Dayan [5],
who also suggested an account for why the slow acquisition might actually be normative due to
unaccounted priors caused by pretraining procedures known as hopper training. However, Balsam
and colleagues later found that manipulating the hopper pretraining did not speed learning [14].
Figure 2 illustrates individual and group behavior for the two particle filter models. As expected,
at the ensemble level (Figure 2a), particle filtering without jumps learns slowly, when averaged
without importance weighting or resampling and compared to the optimal Kalman filter for the
same parameters. As shown, the inclusion of jumps can speed this up.
In individual traces using the jumps model (Figure 2c) frequent sampled jumps both at and after
acquisition of responding capture the key qualitative features of the individual records: the abrupt
onset and ongoing instability. The inclusion of jumps in the generative model is key to this account:
as shown in Figure 2b, without these, behavior changes more smoothly. In the jump model, when
a jump is sampled, the posterior distribution conditional on the jump having occurred is centered
near the observed rt , meaning that the sampled weight will most likely arrive immediately near its
asymptotic level. Figure 2d shows that such an abrupt onset of responding is the modal behavior of
individuals. Here (after [8]), we have fit each individual run from the jump-model simulations with
a sigmoidal Weibull function, and defined the ?dynamic interval? over which acquisition occurs
as the number of trials during which this fit function rises from 10% to 90% of its asymptotic
level. Of course, the monotonic Weibull curve is not a great characterization of the individual?s
noisy predictions, and this mismatch accounts for the long tail of the distribution. Nevertheless, the
cumulative distribution from our simulations closely matches the proportions of animals reported as
achieving various dynamic intervals when the same analysis was performed on the pigeon data [8].
These simulations demonstrate, first, how sequential sampling using a very low number of samples
is a good model of the puzzling features of individual behavior in acquisition, and at the same
time clarify why subject-averaged records resemble the results of exact inference. Depending on
the presumed frequency of jumps (which help to compensate for this problem) the fact that these
averages are of course computed without importance weighting may also help to explain the apparent
slowness of acquisition. This could be true regardless of whether other factors, such as those posited
by Kakade and Dayan [5], also contribute.
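One plausible reading of the dynamic-interval analysis described above is sketched below; the Weibull parameterization and the starting values are our assumptions rather than details taken from [8]:

import numpy as np
from scipy.optimize import curve_fit

def weibull(t, A, lam, k):
    # Sigmoidal rise toward asymptote A.
    return A * (1.0 - np.exp(-(t / lam) ** k))

def dynamic_interval(responses):
    # Fit the Weibull curve to a per-trial response record and return the
    # number of trials over which it rises from 10% to 90% of asymptote.
    responses = np.asarray(responses, dtype=float)
    t = np.arange(1, len(responses) + 1, dtype=float)
    (A, lam, k), _ = curve_fit(
        weibull, t, responses,
        p0=[max(responses.max(), 1e-6), len(responses) / 4.0, 2.0],
        maxfev=10000)
    t10 = lam * (-np.log(0.9)) ** (1.0 / k)   # reaches 10% of asymptote
    t90 = lam * (-np.log(0.1)) ** (1.0 / k)   # reaches 90% of asymptote
    return t90 - t10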
4 Retrospective revaluation
So far, we have shown that sequential sampling provides a good qualitative characterization of individual behavior in the simplest conditioning experiments. But the best support for sophisticated
Bayesian models of learning comes from more demanding tasks such as retrospective revaluation.
These tasks give the best indication that subjects maintain something more than a point estimate
of the weights, and instead strongly suggest that they maintain a full joint distribution over them.
However, as we will show here, this effect can actually emerge due to covariance information being implicitly represented in the ensemble of beliefs over subjects, even if all the individuals are
one-particle samplers.
[Figure 3 plot data omitted; panels show joint distributions over weight A and weight B after AB+ and after B+ training, plus expected and average P(r) for the AB+ and B+ probes over 100 trials.]
Figure 3: Simulations of backward blocking effect, using exact Kalman filter (a) and particle filter
model with jumps (b). Left, middle: Joint distributions over wA and wB following first-phase AB+
training (left) and second phase B+ training (middle). For the particle filter, these are derived from
the histogram of individual particles? joint point beliefs about the weights. Right: Mean beliefs
about wA and wB , showing development of backward blocking. Parameters as in Figure 2.
Retrospective revaluation refers to how the interpretation of previous experience can be changed by
subsequent experience. A typical task, called backward blocking [7], has two phases. First, two
stimuli, A and B, are paired with each other and reward (AB+), so that both develop a moderate
level of responding. In the second phase, B alone is paired with reward (B+), and then the prediction to A alone is probed. The typical finding is that responding to A is attenuated; the intuition is
that the B+ trials suggested that B alone was responsible for the reward received in the AB+ trials,
so the association of A with reward is retrospectively discounted. Such retrospective revaluation
phenomena are hard to demonstrate in animals (though see [15]) but robust in humans [7].
Kakade and Dayan [6] gave a more formal analysis of the task in terms of the Kalman filter model.
In particular they point out that conditional on the initial AB+ trials, the model will infer an anticorrelated joint distribution over $w_A$ and $w_B$, i.e., that they together add up to about one. This is
represented in the covariance $\Sigma$; the joint distribution is illustrated in Figure 3a (left). Subsequent
B+ training indicates that wB is high, which means, given its posterior anticorrelation with wA ,
that the latter is likely low. Note that this explanation seems to turn crucially on the representation
of the full joint distribution over the weights, rather than just a point estimate.
Contrary to this intuition, Figure 3b demonstrates the same thing in the particle filter model with
jumps. At the end of AB+ training, the subjects as an ensemble represent the anti-correlated joint
distribution over the weights, even though each individual maintains only a particular point belief.
Moreover, B+ training causes an aggregate backward blocking effect. This is because individuals
who believe that wA is high tend also to believe that wB is low, which makes them most likely to
sample that a jump has occurred during subsequent B+ training. The samples most likely to stay
in place already have $\tilde{w}_A$ low and $\tilde{w}_B$ high; beliefs about $w_A$ are, on average, thereby reduced,
producing the backward blocking effect in the ensemble.
Note that this effect depends on the subjects sampling using a generative model that admits of jumps
(Equation 3). Although the population implicitly represents the posterior covariance between wA
and wB even using the diffusion model with no jumps (Equation 1; simulations not illustrated), sub6
sequent B+ training has no tendency to suppress the relevant part of the posterior, and no backward
blocking effect is seen. Again, this traces to the lack of a mechanism for downweighting samples
that turn out to be conditionally unlikely.
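To make the ensemble argument concrete, the following hedged end-to-end simulation runs a population of one-particle subjects through the two phases; the phase lengths, prior, and population size are our choices, made to mirror the setting of Figure 3:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def sample_step(w, x, r, alpha=0.075, sd=0.1, sj=1.0, so=0.5):
    # One-particle update under the jump model (Section 2.3).
    def mode(mu0, tau2):
        s = tau2 * (x @ x) + so ** 2
        gain = tau2 * x / s
        mean = mu0 + gain * (r - x @ mu0)
        cov = tau2 * np.eye(len(w)) - tau2 * np.outer(gain, x)
        return mean, cov, norm.pdf(r, loc=x @ mu0, scale=np.sqrt(s))

    m0, C0, e0 = mode(w, sd ** 2)                  # no-jump mode
    m1, C1, e1 = mode(np.zeros(len(w)), sj ** 2)   # jump mode
    p_stay = (1 - alpha) * e0 / ((1 - alpha) * e0 + alpha * e1)
    m, C = (m0, C0) if rng.random() < p_stay else (m1, C1)
    return rng.multivariate_normal(m, C)

n_subjects = 500
W = rng.normal(0.0, 1.0, size=(n_subjects, 2))     # point beliefs (w_A, w_B)
AB, B = np.array([1.0, 1.0]), np.array([0.0, 1.0])
for _ in range(50):                                 # phase 1: AB+ trials
    W = np.array([sample_step(w, AB, 1.0) for w in W])
wA_after_AB = W[:, 0].mean()
for _ in range(50):                                 # phase 2: B+ trials
    W = np.array([sample_step(w, B, 1.0) for w in W])
wA_after_B = W[:, 0].mean()
print(wA_after_AB, wA_after_B)   # backward blocking: the second is lower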
5 Discussion
We have suggested that individual subjects in conditioning experiments behave as though they are
sequentially sampling hypotheses about the underlying weights: like particle filters using a single sample. This model reproduces key and hitherto theoretically troubling features of individual
records, and also, rather more surprisingly, has the ability to reproduce more sophisticated behaviors that had previously been thought to demonstrate that subjects represented distributions in a fully
Bayesian fashion. One practical problem with particle filtering using a single sample is the lack of
distributional information to allow resampling or reweighting; we have shown that use of a particular
generative model previously proposed by Yu and Dayan [13] (involving sudden shocks that effectively accomplish resampling) helps to compensate qualitatively if not quantitatively for this failing.
This mechanism is key to all of our results.
The present work echoes and formalizes a long history of ideas in psychology about hypothesis
testing and sudden insight in learning, going back to Thorndike's puzzle boxes. It also complements
a recent model of human categorization learning [10], which used particle filters to sample (sparsely
or even with a single sample) over possible clusterings of stimuli. That work concentrated on trial
ordering effects arising from the sparsely represented posterior (see also [16]); here we concentrate
on a different set of phenomena related to individual versus ensemble behavior.
Gallistel and colleagues? [8] demonstration that individual learning curves exhibit none of the features of the ensemble average curves that had previously been modeled poses rather a serious challenge for theorists: After all, what does it mean to model only the ensemble? Surely the individual
subject is the appropriate focus of theory ? particularly given the evolutionary rationale often advanced for Bayesian modeling, that individuals who behave rationally will have higher fitness. The
present work aims to refocus theorizing on the individual, while at the same time clarifying why the
ensemble may be of interest. (At the group level, there may also be a fitness advantage to spreading
different beliefs ? say, about productive foraging locations ? across subjects rather than having
the entire population gravitate toward the ?best? belief. This is similar to the phenomenon of mixed
strategy equilibrium in multiplayer games, and may provide an additional motivation for sampling.)
Previous models fail to predict any intersubject variability because they incorporate no variation
in either the subjects' beliefs or in their responses given their beliefs. We have suggested that the
structure in response timeseries suggests a prominent role for intersubject variability in the beliefs,
due to sampling. There is surely also noise in the responding, which we do not model, but for
this alone to rescue previous models, one would have to devise some other explanation for the
noise's structure. (For instance, if learning is monotonic, simple IID output noise would not predict
sustained excursions away from asymptote as in Fig 1c.) Similarly, nonlinearity in the performance
function relating beliefs to response rates might help to account for the sudden onset of responding
even if learning is smooth, but would not address the other features of the data.
In addition to addressing the empirical problem of fit to the individual, sampling also answers an
additional problem with Bayesian models: that they attribute to subjects the capacity for radically
intractable calculations. While the simple Kalman filter used here is tractable, there has been a
trend in modeling human and animal learning toward assuming subjects perform inference about
model structure (e.g., recovering structural variables describing how different latent causes interact
to produce observations; [4, 3, 2]). Such inference cannot be accomplished exactly using simple
recursive filtering like the Kalman filter. Indeed, it is hard to imagine any approach other than
sequentially sampling one or a small number of hypothetical model structures, since even with
the structure known, there typically remains a difficult parametric inference problem. The present
modeling is therefore motivated, in part, toward this setting.
While in our model, subjects do not explicitly carry uncertainty about their beliefs from trial to trial,
they do maintain hyperparameters (controlling the speed of diffusion, the noise of observations, and
the probability of jumps) that serve as a sort of constant proxy for uncertainty. We might expect them
to adjust these so as to achieve the best performance; because the inference is anyway approximate,
the veridical, generative settings of these parameters will not necessarily perform the best.
Of course, the present model is only the simplest possible sketch, and there is much work to do in
developing it. In particular, it would be useful to develop less extreme models in which subjects either rely on sampling with more particles, or on some combination of sampling and exact inference.
We posit that many of the insights developed here will extend to such models, which seem more realistic since exclusive use of low-sample particle filtering would be extremely brittle and unreliable.
(The example of the Necker cube also invites consideration of Markov Chain Monte Carlo sampling
for exploration of multimodal posteriors even in nonsequential inference [1]; such methods are
clearly complementary.) However, there is very little information available about individual-level
behavior to constrain the details of approximate inference. The present results on backward blocking
stress again the perils of averaging and suggest that data must be analyzed much more delicately if
they are ever to bear on issues of distributions and uncertainty. In the case of backward blocking, if
our account is correct, there should be a correlation, over individuals, between the degree to which
they initially exhibited a low $\tilde{w}_B$ and the degree to which they subsequently exhibited a backward
blocking effect. This would be straightforward to test. More generally, there has been a recent trend
[17] toward comparing models against raw trial-by-trial data sets according to the cumulative loglikelihood of the data. Although this measure aggregates over trials and subjects, it measures the
average goodness of fit, not the goodness of fit to the average, making it much more sensitive for
purposes of studying the issues discussed in this article.
References
[1] P Schrater and R Sundareswara. Theory and dynamics of perceptual bistability. In NIPS 19, 2006.
[2] TL Griffiths and JB Tenenbaum. Structure and strength in causal induction. Cognit Psychol, 51:334-384, 2005.
[3] AC Courville, ND Daw, and DS Touretzky. Similarity and discrimination in classical conditioning: A latent variable account. In NIPS 17, 2004.
[4] AC Courville, ND Daw, GJ Gordon, and DS Touretzky. Model uncertainty in classical conditioning. In NIPS 16, 2003.
[5] S Kakade and P Dayan. Acquisition and extinction in autoshaping. Psychol Rev, 109:533-544, 2002.
[6] S Kakade and P Dayan. Explaining away in weight space. In NIPS 13, 2001.
[7] DR Shanks. Forward and backward blocking in human contingency judgement. Q J Exp Psychol B, 37:1-21, 1985.
[8] CR Gallistel, S Fairhurst, and P Balsam. The learning curve: Implications of a quantitative analysis. Proc Natl Acad Sci USA, 101:13124-13131, 2004.
[9] A Doucet, S Godsill, and C Andrieu. On sequential Monte Carlo sampling methods for Bayesian filtering. Stat Comput, 10:197-208, 2000.
[10] AN Sanborn, TL Griffiths, and DJ Navarro. A more rational model of categorization. In CogSci 28, 2006.
[11] RA Rescorla and AR Wagner. A theory of Pavlovian conditioning: The effectiveness of reinforcement and non-reinforcement. In AH Black and WF Prokasy, editors, Classical Conditioning, 2: Current Research and Theory, pages 64-69. 1972.
[12] DB Rubin. Using the SIR algorithm to simulate posterior distributions. In JM Bernardo, MH DeGroot, DV Lindley, and AFM Smith, editors, Bayesian Statistics, Vol. 3, pages 395-402. 1988.
[13] AJ Yu and P Dayan. Expected and unexpected uncertainty: ACh and NE in the neocortex. In NIPS 15, 2003.
[14] PD Balsam, S Fairhurst, and CR Gallistel. Pavlovian Contingencies and Temporal Information. J Exp Psychol Anim Behav Process, 32:284-295, 2006.
[15] RR Miller and H Matute. Biological significance in forward and backward blocking: Resolution of a discrepancy between animal conditioning and human causal judgment. J Exp Psychol Gen, 125:370-386, 1996.
[16] ND Daw, AC Courville, and P Dayan. Semi-rational models of cognition: The case of trial order. In N Chater and M Oaksford, editors, The Probabilistic Mind. 2008. (in press).
[17] ND Daw and K Doya. The computational neurobiology of learning and reward. Curr Opin Neurobiol, 16:199-204, 2006.
2,432 | 3,206 | Learning the 2-D Topology of Images
Yoshua Bengio
University of Montreal
[email protected]
Nicolas Le Roux
University of Montreal
[email protected]
Marc Joliveau
École Centrale Paris
[email protected]
Pascal Lamblin
University of Montreal
[email protected]
Balázs Kégl
LAL/LRI, University of Paris-Sud, CNRS
91898 Orsay, France
[email protected]
Abstract
We study the following question: is the two-dimensional structure of images a
very strong prior or is it something that can be learned with a few examples of
natural images? If someone gave us a learning task involving images for which
the two-dimensional topology of pixels was not known, could we discover it automatically and exploit it? For example suppose that the pixels had been permuted
in a fixed but unknown way, could we recover the relative two-dimensional location of pixels on images? The surprising result presented here is that not only the
answer is yes, but that about as few as a thousand images are enough to approximately recover the relative locations of about a thousand pixels. This is achieved
using a manifold learning algorithm applied to pixels associated with a measure of
distributional similarity between pixel intensities. We compare different topologyextraction approaches and show how having the two-dimensional topology can be
exploited.
1 Introduction
Machine learning has been applied to a number of tasks involving an input domain with a special topology: one-dimensional for sequences, two-dimensional for images, three-dimensional for
videos and for 3-D capture. Some learning algorithms are generic, e.g., working on arbitrary unstructured vectors in $\mathbb{R}^d$, such as ordinary SVMs, decision trees, neural networks, and boosting
applied to generic learning algorithms. On the other hand, other learning algorithms successfully
exploit the specific topology of their input, e.g., SIFT-based machine vision [10], convolutional
neural networks [6, 7], time-delay neural networks [5, 16].
It has been conjectured [8, 2] that the two-dimensional structure of natural images is a very strong
prior that would require a huge number of bits to specify, if starting from the completely uniform
prior over all possible permutations.
The question studied here is the following: is the two-dimensional structure of natural images a
very strong prior or is it something that can be learned with a few examples? If a small number of
examples is enough to discover that structure, then the conjecture in [8] about the image topology
was probably incorrect. To answer that question we consider a hypothetical learning task involving images whose pixels have been permuted in a fixed but unknown way. Could we recover the
two-dimensional relations between pixels automatically? Could we exploit it to obtain better generalization? A related study performed in the context of ICA can be found in [1].
The basic idea of the paper is that the two-dimensional topology of pixels can be recovered by
looking for a two-dimensional manifold embedding pixels (each pixel is a point in that space), such
that nearby pixels have similar distributions of intensity (and possibly color) values.
We explore a number of manifold techniques with this goal in mind, and explain how we have
adapted these techniques in order to obtain the positive and surprising result: the two-dimensional
structure of pixels can be recovered from a rather small number of training images. On images we
find that the first 2 dimensions are dominant, meaning that even the knowledge that 2 dimensions
are most appropriate could probably be inferred from the data.
2 Manifold Learning Techniques Used
In this paper we have explored the question raised in the introduction for the particular case of
images, i.e., with 2-dimensional structures, and our experiments have been performed with images
of size 27 ? 27 to 30 ? 30, i.e., with about a thousand pixels. It means that we have to look
for the embedding of about a thousand points (the pixels) on a two-dimensional manifold. Metric
Multi-Dimensional Scaling MDS is a linear embedding technique (analogous to PCA but starting
from distances and yielding coordinates on the principal directions, of maximum variance). Nonparametric techniques such as Isomap [13], Local Linear Embedding (LLE) [12], or Semidefinite
Embedding (SDE, also known as MVU for Maximum Variance Unfolding) [17] have computation
time that scale polynomially in the number of examples n. With n around a thousand, all of these
are feasible, and we experimented with MDS, Isomap, LLE, and MVU.
Since we found Isomap to work best to recover the pixel topology even on small sets of images,
we review the basic elements of Isomap. It applies the metric multidimensional scaling (MDS)
algorithm to geodesic distances in the neighborhood graph. The neighborhood graph is obtained
by connecting the k nearest neighbors of each point. Each arc of the graph is associated with a
distance (the user-provided distance between points), and is used to compute an approximation of
the geodesic distance on the manifold with the length of the shortest path between two points. The
metric MDS algorithm then transforms these distances into d-dimensional coordinates as follows.
It first computes the $n \times n$ dot-product (or Gram) matrix $M$ using the "double-centering" formula,
yielding entries
$$M_{ij} = -\tfrac{1}{2}\Big(D_{ij}^2 - \tfrac{1}{n}\sum_i D_{ij}^2 - \tfrac{1}{n}\sum_j D_{ij}^2 + \tfrac{1}{n^2}\sum_{i,j} D_{ij}^2\Big).$$
The $d$ principal eigenvectors $v_k$ and eigenvalues $\lambda_k$ ($k = 1, \ldots, d$) of $M$ are then computed. This yields the coordinates:
$x_{ik} = v_{ki}\sqrt{\lambda_k}$ is the $k$-th embedding coordinate of point $i$.
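The final metric-MDS step can be written compactly with the centering matrix $J = I - \frac{1}{n}\mathbf{1}\mathbf{1}^\top$, which is equivalent to the entrywise double-centering formula above; the sketch below is an illustration, not the implementation used for the experiments:

import numpy as np

def classical_mds(D, d=2):
    # Turn an n x n matrix of (geodesic) distances into d-dimensional
    # coordinates: double-center the squared distances, then scale the top
    # eigenvectors by the square roots of their eigenvalues.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    M = -0.5 * J @ (D ** 2) @ J
    vals, vecs = np.linalg.eigh(M)          # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:d]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

For Isomap proper, D holds the shortest-path distances in the k-nearest-neighbor graph rather than the raw pairwise distances.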
3 Topology-Discovery Algorithms
In order to apply a manifold learning algorithm, we must generally have a notion of similarity or
distance between the points to embed. Here each point corresponds to a pixel, and the data we have
about the pixels provide an empirical distribution of intensities for all pixels. Therefore we want to
estimate the statistical dependency between two pixels, in order to determine if they
should be ?neighbors? on the manifold. A simple and natural dependency statistic is the correlation
between pixel intensities, and it works very well.
The empirical correlation $\rho_{ij}$ between the intensity of pixel $i$ and pixel $j$ is in the interval $[-1, 1]$.
However, two pixels highly anti-correlated are much more likely to be close than pixels not correlated (think of edges in an image). We should thus consider the absolute value of the correlations. If
we assume them to be the value of a Gaussian kernel
$$|\rho_{ij}| = K(x_i, x_j) = e^{-\frac{1}{2}\|x_i - x_j\|^2},$$
then by defining $D_{ij} = \|x_i - x_j\|$ and solving the above for $D_{ij}$ we obtain a "distance" formula
that can be used with the manifold learning algorithms:
$$D_{ij} = \sqrt{-\log |\rho_{ij}|}\,. \qquad (1)$$
Note that scaling the distances in the Gaussian kernel by a variance parameter would only scale the
resulting embedding, so it is unnecessary.
Many other measures of distance would probably work as well. However, we found the absolute
correlation to be simple and easy to understand while yielding nice embeddings.
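A minimal sketch of this distance construction (ours; it also applies the low-variance filter of Section 3.1 below, and clips correlations away from zero to guard the logarithm):

import numpy as np

def pixel_distances(X, min_rel_std=0.15):
    # X: n x N matrix of images (rows) by pixels (columns).
    std = X.std(axis=0)
    keep = std >= min_rel_std * std.max()       # drop near-constant pixels
    C = np.corrcoef(X[:, keep], rowvar=False)   # pairwise pixel correlations
    a = np.clip(np.abs(C), 1e-12, 1.0)          # guard against log(0)
    return np.sqrt(-np.log(a)), keep            # Equation 1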
3.1 Dealing With Low-Variance Pixels
A difficulty we observed in experimenting with different manifold learning algorithms on data sets
such as MNIST is the influence of low-variance pixels. On MNIST digit images the border pixels
may have 0 or very small variance. This makes them all want to be close to each other, which tends
to fold the manifold on itself.
To handle this problem we have simply ignored pixels with very low variance. When these represent
a fixed background (as in MNIST images), this strategy works fine. In the experiments with MNIST
we removed pixels with standard deviation less than 15% of the maximum standard deviation (maximum over all pixels). On the NORB dataset, which has varied backgrounds, this step does not
remove any of the pixels (so it is unnecessary).
4 Converting Back to a Grid Image
Once we have obtained an embedding for the pixels, the next thing we would like to do is to transform the data vectors back into images. For this purpose we have performed the following two
steps:
1. Choosing horizontal and vertical axes (since the coordinates on the manifold can be arbitrarily rotated), and rotating the embedding coordinates accordingly, and
2. Transforming the input vector of intensity values (along with the pixel coordinates) into an
ordinary discrete image on a grid. This should be done so that the resulting intensity at
position (i, j) is close to the intensity values associated with input pixels whose embedding
coordinates are (i, j).
Such a mapping of pixels to a grid has already been done in [4], where a grid topology is defined
by the connections in a graphical model, which is then trained by maximizing the approximate
likelihood. However, they are not starting from a continuous embedding, but from the original data.
Let $p_k$ ($k = 1 \ldots N$) be the embedding coordinates found by the dimensionality reduction algorithm
for the k-th input variable. We select the horizontal axis as the direction of smaller spread, the
vertical axis being in the orthogonal direction, and perform the appropriate rotation.
Once we have a coordinate system that assigns a 2-dimensional position p k to the k-th input pixel,
placed at irregular locations inside a rectangular grid, we can map the input intensities x k into
intensities Mi,j , so as to obtain a regular image that can be processed by standard image-processing
and machine vision learning algorithms. The output image pixel intensity M i,j at coordinates (i, j)
is obtained through a convex average
$$M_{i,j} = \sum_k w_{i,j,k}\, x_k \qquad (2)$$
where the weights are non-negative and sum to one, and are chosen as follows.
$$w_{i,j,k} = \frac{v_{i,j,k}}{\sum_k v_{i,j,k}}$$
with an exponential of the L1 distance to give less weight to farther points:
$$v_{i,j,k} = \exp\big({-\beta\,\|(i,j) - p_k\|_1}\big)\,\mathbf{1}_{N(i,j,k)} \qquad (3)$$
where $N(i,j,k)$ is true if $\|(i,j) - p_k\|_1 < 2$ (or inferior to a larger radius to make sure that at least
one input pixel $k$ is associated with output grid position $(i,j)$). We used $\beta = 3$ in the experiments,
after trying only 1, 3 and 10. Large values of $\beta$ correspond to using only the nearest neighbor of
$(i,j)$ among the $p_k$s. Smaller values smooth the intensities and make the output look better if the
embedding is not perfect. Too small values result in a loss of effective resolution.
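A hedged sketch of this reconstruction step (Equations 2-3), including the growing-radius fallback described above:

import numpy as np

def grid_weights(p, L, W, beta=3.0):
    # p: N x 2 embedding coordinates; returns L x W x N convolution weights.
    w = np.zeros((L, W, len(p)))
    for i in range(L):
        for j in range(W):
            dist = np.abs(p - np.array([i, j])).sum(axis=1)   # L1 distances
            r = 2.0
            while not np.any(dist < r):    # grow the radius until non-empty
                r += 1.0
            v = np.where(dist < r, np.exp(-beta * dist), 0.0)
            w[i, j] = v / v.sum()
    return w

def to_image(x, w):
    # Convex average of the raw intensities x (Equation 2).
    return np.einsum('ijk,k->ij', w, x)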
Algorithm 1 Pseudo-code of the topology-learning algorithm that recovers the 2-D structure of inputs
provided in an arbitrary but fixed order.
Input: X {Raw input n × N data matrix, one row per example, with elements in fixed but arbitrary order}
Input: δ = 0.15 (default value) {Minimum relative standard deviation threshold, to remove too low-variance pixels}
Input: k = 4 (default value) {Number of neighbors used to build the Isomap neighborhood graph}
Input: L = √N, W = √N (default values) {Dimensions (length L, width W) of output image}
Input: β = 3 (default value) {Smoothing coefficient to recover images}
Output: p {N × 2 matrix of embedding coordinates (one per row) for each input variable}
Output: w {Convolution weights to recover an image from a raw input vector}
n = number of examples (rows of X)
for all columns X.i do
  µ_i ← (1/n) Σ_t X_ti {Compute means}
  σ_i² ← (1/n) Σ_t (X_ti − µ_i)² {Compute variances}
end for
Remove columns of X for which σ_i < δ max_j σ_j
for all columns X.i do
  for all columns X.j do
    empirical correlation ρ_ij ← (X.i − µ_i)ᵀ(X.j − µ_j) / (n σ_i σ_j) {Compute all pair-wise empirical correlations}
    pseudo-distance D_ij ← √(−log |ρ_ij|)
  end for
end for
{Compute the 2-D embeddings (p_k1, p_k2) of each input variable k through Isomap}
p = Isomap(D, k, 2)
{Rotate the coordinates p to try to align them to a vertical-horizontal grid (see text)}
{Invert the axes if L < W}
{Compute the convolution weights that will map raw values to output image pixel intensities}
for all grid positions (i, j) in the output image (i in 1…L, j in 1…W) do
  r ← 1
  repeat
    neighbors ← {k : ‖p_k − (i, j)‖₁ < r}
    r ← r + 1
  until neighbors not empty
  for all k in neighbors do
    v_k ← e^(−β‖p_k − (i,j)‖₁)
  end for
  w_i,j,· ← 0
  for all k in neighbors do
    w_i,j,k ← v_k / Σ_{k′∈neighbors} v_k′ {Compute convolution weights}
  end for
end for
Algorithm 2 Convolve a raw input vector into a regular grid image, using the already discovered
embedding for each input variable.
Input: x {Raw input N-vector (in the same format as a row of X above)}
Input: p {N × 2 matrix of embedding coordinates (one per row) for each input variable}
Input: w {Convolution weights to recover an image from a raw input vector}
Output: Y {L × W output image}
for all grid positions (i, j) in the output image (i in 1…L, j in 1…W) do
  Y_i,j ← Σ_k w_i,j,k x_k {Perform the convolution}
end for
5 Experimental Results
We performed experiments on two sets of images: the MNIST digits dataset and the NORB object classification dataset¹. We used the "jittered objects and cluttered background" image set from NORB.
The MNIST images are particular in that they have a white background, whereas the NORB images
have more varying backgrounds. The NORB images are originally of dimension 108 × 108; we
subsampled them by 4 × 4 averaging into 27 × 27 images. The experiments have been performed
with k = 4 neighbors for the Isomap embedding. Smaller values of k often led to unconnected
neighborhood graphs, which Isomap cannot deal with.
(a) Isomap embedding
(b) LLE embedding
(c) MDS embedding
(d) MVU embedding
Figure 1: Examples of embeddings discovered by Isomap, LLE, MDS and MVU with 250 training
images from NORB. Each of the original pixel is placed at the location discovered by the algorithm.
Size of the circle and gray level indicate the original true location of the pixel. Manifold learning
produces coordinates with an arbitrary rotation. Isomap appears most robust, and MDS the worst
method, for this task.
In Figure 1 we compare four different manifold learning algorithms on the NORB images: Isomap,
LLE, MDS and MVU. Figure 2 explains why Isomap is giving good results, especially in comparison
with MDS. On the one hand, MDS is using the pseudo-distance defined in Equation 1, whose
relationship with the real distance between two pixels in the original image is linear only in a small
neighborhood. On the other hand, Isomap uses the geodesic distances in the neighborhood graph,
whose relationship with the real distance is really close to linear.
Figure 2: (a) and (c): Pseudo-distance Dij (using formula 1) vs. the true distance on the grid.
(b) and (d): Geodesic distance in neighborhood graph vs. the true distance on the grid.
The true distance is on the horizontal axis for all figures.
(a) and (b) are for a point in the upper-left corner, (c) and (d) for a point in the center.
Figure 3 shows the embeddings obtained on the NORB data using different numbers of examples. In order to quantitatively evaluate the reconstruction, we applied to each embedding the similarity transformation that minimizes the root mean squared error (RMSE) between the coordinates of each pixel on the embedding and their coordinates on the original grid, before measuring the residual error. This minimization is justified because the discovered embedding could be arbitrarily rotated, isotropically scaled, and mirrored. 100 examples are enough to get a reasonable embedding, and with 2000 or more a very good embedding is obtained: the RMSE for 2000 examples is 1.13, meaning that in expectation each pixel is off by slightly more than one grid unit.
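The similarity transformation used for this evaluation can be computed in closed form. A sketch, assuming RMSE here means the root mean squared Euclidean residual per pixel (the optimal rotation and isotropic scale follow the standard Procrustes/Umeyama solution):

```python
import numpy as np

def aligned_rmse(p, grid):
    """RMSE between embedding p (N x 2) and true grid coordinates (N x 2)
    after the best similarity transform: translation, isotropic scale, and
    rotation/reflection."""
    p0 = p - p.mean(axis=0)
    g0 = grid - grid.mean(axis=0)
    U, S, Vt = np.linalg.svd(p0.T @ g0)     # SVD of the cross-covariance
    R = U @ Vt                              # optimal orthogonal map (may reflect)
    s = S.sum() / (p0 ** 2).sum()           # optimal isotropic scale
    residual = s * (p0 @ R) - g0
    return np.sqrt((residual ** 2).sum(axis=1).mean())
```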
¹ Both can be obtained from Yann Le Cun's web site: http://yann.lecun.com/.
[Figure 3 top-row RMSE values after alignment: 10 examples: 9.25; 50 examples: 2.43; 100 examples: 1.68; 1000 examples: 1.21; 2000 examples: 1.13]
Figure 3: Embedding discovered by Isomap on the NORB dataset, with different numbers of training
samples (top row). Second row shows the same embeddings aligned (by a similarity transformation)
on the original grid, third row shows the residual error (RMSE) after the alignment.
Figure 4 shows the whole process of transforming an original image (with pixels possibly permuted) into an embedded image and finally into a reconstructed image as per Algorithms 1 and 2.
Figure 4: Example of the process of transforming an MNIST image (top) from which the pixel order is unknown (second row) into its embedding (third row) and finally reconstructed as an image after rotation and convolution (bottom). In the third row, we show the intensity associated with each original pixel by the grey level in a circle located at the pixel coordinates discovered by Isomap.
We also performed experiments with acoustic spectral data to see if the time-frequency topology can be recovered. The acoustic data come from the first 100 blues pieces of a publicly available genre classification dataset [14]. The FFT is computed for each frame and there are 86 frames per second. The first 30 frequency bands are kept, each covering 21.51 Hz. We used examples formed by 30-frame spectrograms, i.e., just like images of size 30 × 30. Using the first 600,000 audio samples from each recording yielded 2600 30-frame images, on which we applied our technique. Figure 5 shows the resulting embedding when we removed the 30 coordinates of lowest standard deviation (ε = .15).
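A sketch of the preprocessing described here; the exact window and hop sizes are not given in the text, so the values below are assumptions chosen to reproduce roughly 86 frames per second and 21.5 Hz bands at a 22,050 Hz sampling rate:

```python
import numpy as np

def spectrogram_examples(signal, sr=22050, n_bands=30, n_frames=30,
                         win=1024, hop=256):
    """Cut an audio signal into 30 x 30 spectrogram 'images'. With sr=22050,
    hop=256 gives ~86 frames/s and win=1024 gives ~21.5 Hz per band."""
    window = np.hanning(win)
    spec = np.array([np.abs(np.fft.rfft(signal[s:s + win] * window))[:n_bands]
                     for s in range(0, len(signal) - win, hop)])
    m = len(spec) // n_frames
    # Each example is n_frames consecutive spectral slices, flattened to a row of X
    return spec[: m * n_frames].reshape(m, n_frames * n_bands)
```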
[Figure 5 panels: (a) Blues embedding; (b) Spectrum, showing the eigenvalues and the ratio of consecutive eigenvalues for the first 10 components.]
Figure 5: Embedding and spectrum decay for sequences of blues music.
6 Discussion
Although [8] argue that learning the right permutation of pixels with a flat prior might be too difficult
(either in a lifetime or through evolution), our results suggest otherwise.
How do we interpret that apparent contradiction?
The main element of explanation that we see is that the space of permutations of d numbers is not such a large class of functions. There are approximately $N = \sqrt{2\pi d}\,(d/e)^d$ permutations of d numbers (Stirling approximation). Since this is a finite class of functions, its VC-dimension [15] is $h = \log N \approx d \log d - d$. Hence if we had a bounded criterion (say taking values in [0, 1]) to compare different permutations and we used n examples (i.e., n images, here), we would expect the difference between generalization error and test error to be bounded [15] by $\frac{1}{2}\sqrt{\frac{2\log(N/\delta)}{n}}$ with probability $1-\delta$. Hence, with n a multiple of $d \log d$, we would expect that one could approximately learn a good permutation. When d = 400 (the number of pixels with non-negligible variance in MNIST images), $d \log d - d \approx 2000$.
This is more than what we have found necessary to recover a 'good' representation of the images, but on the other hand there are equivalence classes within the set of permutations that give as good results as far as our objective and subjective criteria are concerned: we do not care about image symmetries, rotations, and small errors in pixel placement.
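The arithmetic in this paragraph is easy to check; the snippet below also evaluates the (reconstructed) bound for a couple of sample sizes, with δ = 0.05 as an assumed confidence level:

```python
import math

d = 400                                  # pixels with non-negligible variance (MNIST)
log_N = d * math.log(d) - d              # Stirling: log(d!) ~ d log d - d
print(round(log_N))                      # -> ~1997, i.e. roughly 2000

def gap_bound(n, delta=0.05):
    """Reconstructed bound on the generalization gap for a finite class."""
    return 0.5 * math.sqrt(2 * (log_N + math.log(1 / delta)) / n)

print(gap_bound(2000), gap_bound(20000))  # the gap shrinks once n exceeds d log d
```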
What is the selection criterion that we have used to recover the image structure? Mainly we have used an additional prior which gives a preference to an order in which nearby pixels have similar distributions. How specific to natural images and how strong is that prior? It may be an application of a more general principle that could be advantageous to learning algorithms as well as to brains. When we are trying to compute useful functions from raw data, it is important to discover dependencies between the input random variables. If we are going to perform computations on subsets of variables at a time (which seems necessary when the number of inputs is very large, to reduce the amount of connecting hardware), it would seem wiser that these computations combine variables that have dependencies with each other. That directly gives rise to the notion of local connectivity between neurons associated with nearby spatial locations, in the case of brains, the same notion that is exploited in convolutional neural networks.
The fact that nearby pixels are more correlated is true at many scales in natural images. This is well
known and explains why Gabor-like filters often emerge when trying to learn good filters for images,
e.g., by ICA [9] or Products of Experts [3, 11].
In addition to the above arguments, there is another important consideration to keep in mind. The
way in which we score permutations is not the way that one would score functions in an ordinary
learning experiment. Indeed, by using the distributional similarity between pairs of pixels, we get
not just a scalar score but d(d−1)/2 scores. Since our 'scoring function' is much more informative,
it is not surprising that it allows us to generalize from many fewer examples.
7
7 Conclusion and Future Work
We showed here that, even with a small number of examples, we are able to recover almost perfectly the 2-D topology of images. This allows us to use image-specific learning algorithms without specifying any prior other than the dimensionality of the coordinates. We also showed that this algorithm performed well on sound data, even though the topology might be less obvious in that case.
However, in this paper we only considered the simple case where we knew in advance the dimensionality of the coordinates. One could also apply this algorithm to data whose intrinsic dimensionality is unknown. In that case, one would not convert the embedding to a grid image but rather keep it and connect only the inputs associated with close coordinates (performing a k-nearest-neighbor search, for instance). It is not known whether such an embedding might be useful for other types of data than the ones discussed above.
Acknowledgements
The authors would like to thank James Bergstra for helping with the audio data. They also want to
acknowledge the support from several funding agencies: NSERC, the Canada Research Chairs, and
the MITACS network.
References
[1] S. Abdallah and M. Plumbley. Geometry dependency analysis. Technical Report C4DM-TR06-05, Center for Digital Music, Queen Mary, University of London, 2006.
[2] Y. Bengio and Y. Le Cun. Scaling learning algorithms towards AI. In L. Bottou, O. Chapelle, D. DeCoste, and J. Weston, editors, Large Scale Kernel Machines. MIT Press, 2007.
[3] G. Hinton, M. Welling, Y. Teh, and S. Osindero. A new view of ICA. In Proceedings of ICA-2001, San Diego, CA, 2001.
[4] A. Hyvärinen, P. O. Hoyer, and M. Inki. Topographic independent component analysis. Neural Computation, 13(7):1527–1558, 2001.
[5] K. J. Lang and G. E. Hinton. The development of the time-delay neural network architecture for speech recognition. Technical Report CMU-CS-88-152, Carnegie-Mellon University, 1988.
[6] Y. LeCun, B. Boser, J. Denker, D. Henderson, R. Howard, W. Hubbard, and L. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541–551, 1989.
[7] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, November 1998.
[8] Y. LeCun and J. S. Denker. Natural versus universal probability, complexity, and entropy. In IEEE Workshop on the Physics of Computation, pages 122–127. IEEE, 1992.
[9] T.-W. Lee and M. S. Lewicki. Unsupervised classification, segmentation and enhancement of images using ICA mixture models. IEEE Trans. Image Proc., 11(3):270–279, 2002.
[10] D. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91–110, 2004.
[11] S. Osindero, M. Welling, and G. Hinton. Topographic product models applied to natural scene statistics. Neural Computation, 18:381–414, 2006.
[12] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, Dec. 2000.
[13] J. Tenenbaum, V. de Silva, and J. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, Dec. 2000.
[14] G. Tzanetakis and P. Cook. Musical genre classification of audio signals. IEEE Transactions on Speech and Audio Processing, 10(5):293–302, Jul 2002.
[15] V. Vapnik. Estimation of Dependences Based on Empirical Data. Springer-Verlag, Berlin, 1982.
[16] A. Waibel. Modular construction of time-delay neural networks for speech recognition. Neural Computation, 1:39–46, 1989.
[17] K. Q. Weinberger and L. K. Saul. An introduction to nonlinear dimensionality reduction by maximum variance unfolding. In Proceedings of the National Conference on Artificial Intelligence (AAAI), Boston, MA, 2006.
| 3206 |@word advantageous:1 hyv:1 grey:1 reduction:4 score:4 ecole:1 document:1 subjective:1 recovered:3 com:1 surprising:3 lang:1 must:1 informative:1 remove:3 v:2 intelligence:1 fewer:1 cook:1 accordingly:1 xk:2 farther:1 boosting:1 location:6 preference:1 plumbley:1 along:1 incorrect:1 combine:1 inside:1 indeed:1 ica:5 multi:1 sud:1 brain:2 automatically:2 xti:2 decoste:1 provided:2 discover:3 bounded:2 lowest:1 what:2 sde:1 minimizes:1 transformation:2 pseudo:4 multidimensional:1 hypothetical:1 scaled:1 positive:1 before:1 negligible:1 local:2 tends:1 path:1 approximately:3 might:3 studied:1 specifying:1 someone:1 lecun:4 backpropagation:1 digit:2 universal:1 empirical:5 gabor:1 regular:2 suggest:1 get:2 cannot:1 close:5 selection:1 mvu:5 context:1 influence:1 equivalent:1 map:2 center:2 maximizing:1 starting:3 cluttered:1 convex:1 rectangular:1 resolution:1 roux:2 unstructured:1 assigns:1 contradiction:1 lamblin:1 pk2:1 embedding:34 handle:1 n12:1 notion:3 coordinate:23 analogous:1 diego:1 suppose:1 construction:1 user:1 us:1 element:3 recognition:4 located:1 distributional:2 observed:1 bottom:1 capture:1 worst:1 thousand:5 removed:2 transforming:3 agency:1 complexity:1 geodesic:4 trained:1 solving:1 distinctive:1 completely:1 easily:1 genre:2 effective:1 london:1 artificial:1 neighborhood:7 choosing:1 whose:5 apparent:1 larger:1 modular:1 say:1 otherwise:1 statistic:2 topographic:2 think:1 transform:1 itself:1 sequence:2 eigenvalue:3 reconstruction:1 product:3 fr:2 aligned:1 pthe:1 roweis:1 az:1 double:1 empty:1 enhancement:1 produce:1 perfect:1 rotated:2 object:2 tions:1 montreal:3 nearest:3 ij:5 strong:4 c:1 indicate:1 come:1 direction:3 radius:1 filter:2 vc:1 explains:2 require:1 arinen:1 generalization:1 really:1 tzanetakis:1 helping:1 around:1 considered:1 exp:1 mapping:1 consecutive:1 vki:1 purpose:1 estimation:1 proc:1 jackel:1 hubbard:1 successfully:1 unfolding:2 minimization:1 mit:1 gaussian:2 rather:2 varying:1 ax:2 vk:2 likelihood:1 experimenting:1 mainly:1 lri:1 cnrs:1 publically:1 relation:1 going:1 france:1 pixel:56 among:1 classification:4 pascal:1 development:1 raised:1 special:1 smoothing:1 spatial:1 once:2 having:1 look:2 unsupervised:1 future:1 yoshua:2 report:2 quantitatively:1 few:3 national:1 subsampled:1 geometry:1 n1:4 huge:1 highly:1 alignment:1 joliveau:2 henderson:1 mixture:1 yielding:3 semidefinite:1 edge:1 necessary:2 orthogonal:1 tree:1 rotating:1 circle:2 instance:1 column:4 measuring:1 stirling:1 queen:1 ordinary:3 deviation:4 entry:1 subset:1 uniform:1 delay:3 dij:9 osindero:2 too:3 connect:1 answer:2 dependency:5 jittered:1 kxi:2 international:1 lee:1 off:1 physic:1 connecting:2 connectivity:1 squared:1 aaai:1 possibly:2 corner:1 expert:1 de:2 bergstra:1 coefficient:1 vi:4 piece:1 performed:7 try:1 root:1 tion:1 view:1 lowe:1 recover:10 jul:1 rmse:3 formed:1 convolutional:2 variance:11 musical:1 yield:1 correspond:1 yes:1 generalize:1 raw:7 handwritten:1 explain:1 centering:1 frequency:2 james:1 obvious:1 associated:7 mi:2 recovers:1 dataset:5 proved:1 color:1 knowledge:1 dimensionality:5 segmentation:1 back:2 appears:1 originally:1 specify:1 done:2 though:1 lifetime:1 just:2 mitacs:1 pk1:1 langford:1 correlation:5 until:1 working:1 hand:4 horizontal:4 web:1 nonlinear:3 gray:1 mary:1 true:6 isomap:17 evolution:1 hence:2 i2:1 white:1 deal:1 width:1 inferior:1 covering:1 bal:1 criterion:3 trying:3 l1:1 silva:1 image:64 meaning:2 wise:1 consideration:1 funding:1 umontreal:3 inki:1 rotation:4 permuted:3 ji:1 discussed:1 interpret:1 mellon:1 ai:1 grid:14 
had:2 dot:1 chapelle:1 similarity:5 align:1 something:2 dominant:1 showed:1 conjectured:1 verlag:1 arbitrarily:2 yi:1 exploited:2 scoring:1 minimum:1 additional:1 care:1 spectrogram:1 zip:1 converting:1 determine:1 shortest:1 signal:1 multiple:1 sound:1 keypoints:1 smooth:1 technical:2 involving:3 basic:2 vision:3 metric:3 expectation:1 cmu:1 kernel:3 represent:1 achieved:1 dec:2 invert:1 irregular:1 justified:1 background:5 want:3 fine:1 whereas:1 interval:1 addition:1 abdallah:1 probably:3 sure:1 hz:1 recording:1 thing:1 seem:2 orsay:1 bengio:4 enough:3 easy:1 embeddings:5 fft:1 xj:3 concerned:1 gave:1 architecture:1 topology:14 perfectly:1 reduce:1 idea:1 haffner:1 pca:1 speech:3 ignored:1 generally:1 useful:2 eigenvectors:1 transforms:1 nonparametric:1 amount:1 band:1 locally:1 hardware:1 svms:1 processed:1 tenenbaum:1 http:1 mirrored:1 per:5 blue:3 discrete:1 carnegie:1 four:1 threshold:1 kept:1 graph:7 sum:1 convert:1 almost:1 reasonable:1 yann:2 ecp:1 decision:1 scaling:4 bit:1 fold:1 yielded:1 adapted:1 placement:1 scene:1 flat:1 nearby:4 argument:1 chair:1 performing:1 format:1 conjecture:1 waibel:1 centrale:1 smaller:3 slightly:1 wi:5 cun:2 invariant:1 equation:1 mind:2 end:7 available:1 apply:2 denker:2 generic:2 appropriate:2 spectral:1 weinberger:1 original:8 convolve:1 top:2 graphical:1 music:2 exploit:3 giving:1 k1:2 build:1 especially:1 objective:1 question:4 already:2 strategy:1 dependence:1 md:10 hoyer:1 gradient:1 distance:21 thank:1 berlin:1 manifold:14 argue:1 length:2 code:2 relationship:2 ratio:1 difficult:1 xik:1 wiser:1 negative:1 rise:1 unknown:4 perform:3 teh:1 upper:1 vertical:3 convolution:6 neuron:1 dimensionnality:2 arc:1 finite:1 acknowledge:1 howard:1 anti:1 november:1 kegl:1 defining:1 unconnected:1 looking:1 hinton:3 frame:4 discovered:6 varied:1 arbitrary:4 intensity:14 canada:1 inferred:1 pair:2 paris:2 connection:1 lal:2 acoustic:2 learned:2 boser:1 trans:1 able:1 max:1 video:1 explanation:1 natural:8 difficulty:1 residual:2 axis:3 text:1 prior:8 review:1 discovery:1 nice:1 acknowledgement:1 geometric:1 relative:3 embedded:1 loss:1 expect:2 permutation:8 versus:1 digital:1 principle:1 editor:1 row:11 placed:2 repeat:1 lle:5 understand:1 neighbor:10 saul:2 taking:1 emerge:1 absolute:2 dimension:5 default:4 gram:1 computes:1 author:1 san:1 far:1 polynomially:1 welling:2 transaction:1 reconstructed:2 approximate:1 keep:2 dealing:1 global:1 unnecessary:2 norb:9 xi:1 knew:1 spectrum:2 continuous:1 why:2 learn:2 robust:1 nicolas:2 ca:4 symmetry:1 bottou:2 marc:2 domain:1 pk:6 spread:1 main:1 border:1 whole:1 site:1 position:4 exponential:1 third:3 formula:3 embed:1 specific:3 sift:1 explored:1 experimented:1 decay:1 intrinsic:1 workshop:1 mnist:8 vapnik:1 egl:1 boston:1 entropy:1 led:1 simply:1 explore:1 likely:1 nserc:1 scalar:1 isotropically:1 lewicki:1 applies:1 tr06:1 mij:1 corresponds:1 springer:1 ma:1 weston:1 goal:1 towards:1 feasible:1 averaging:1 principal:2 experimental:1 select:1 support:1 rotate:1 evaluate:1 audio:4 correlated:3 |
2,433 | 3,207 | Comparing Bayesian models for multisensory cue
combination without mandatory integration
Konrad P. Körding
Rehabilitation Institute of Chicago
Northwestern University, Dept. PM&R
Chicago, IL 60611
[email protected]
Ulrik R. Beierholm
Computation and Neural Systems
California Institute of Technology
Pasadena, CA 91025
[email protected]
Ladan Shams
Department of Psychology
University of California, Los Angeles
Los Angeles, CA 90095
[email protected]
Wei Ji Ma
Department of Brain and Cognitive Sciences
University of Rochester
Rochester, NY 14620
[email protected]
Abstract
Bayesian models of multisensory perception traditionally address the problem of
estimating an underlying variable that is assumed to be the cause of the two sensory signals. The brain, however, has to solve a more general problem: it also has
to establish which signals come from the same source and should be integrated,
and which ones do not and should be segregated. In the last couple of years, a
few models have been proposed to solve this problem in a Bayesian fashion. One
of these has the strength that it formalizes the causal structure of sensory signals.
We first compare these models on a formal level. Furthermore, we conduct a psychophysics experiment to test human performance in an auditory-visual spatial
localization task in which integration is not mandatory. We find that the causal
Bayesian inference model accounts for the data better than other models.
Keywords: causal inference, Bayesian methods, visual perception.
1 Multisensory perception
In the ventriloquist illusion, a performer speaks without moving his/her mouth while moving a
puppet's mouth in synchrony with his/her speech. This makes the puppet appear to be speaking.
This illusion was first conceptualized as 'visual capture', occurring when visual and auditory stimuli
exhibit a small conflict ([1, 2]). Only recently has it been demonstrated that the phenomenon may be
seen as a byproduct of a much more flexible and nearly Bayes-optimal strategy ([3]), and therefore
is part of a large collection of cue combination experiments showing such statistical near-optimality
[4, 5]. In fact, cue combination has become the poster child for Bayesian inference in the nervous
system.
In previous studies of multisensory integration, two sensory stimuli are presented which act as cues
about a single underlying source. For instance, in the auditory-visual localization experiment by
Alais and Burr [3], observers were asked to envisage each presentation of a light blob and a sound
click as a single event, like a ball hitting the screen. In many cases, however, the brain is not only
posed with the problem of identifying the position of a common source, but also of determining
whether there was a common source at all. In the on-stage ventriloquist illusion, it is indeed primarily the causal inference process that is being fooled, because veridical perception would attribute
independent causes to the auditory and the visual stimulus.
1
To extend our understanding of multisensory perception to this more general problem, it is necessary
to manipulate the degree of belief assigned to there being a common cause within a multisensory
task. Intuitively, we expect that when two signals are very different, they are less likely to be perceived as having a common source. It is well-known that increasing the discrepancy or inconsistency
between stimuli reduces the influence that they have on each other [6, 7, 8, 9, 10, 11]. In auditoryvisual spatial localization, one variable that controls stimulus similarity is spatial disparity (another
would be temporal disparity). Indeed, it has been reported that increasing spatial disparity leads to a
decrease in auditory localization bias [1, 12, 13, 14, 15, 16, 17, 2, 18, 19, 20, 21]. This decrease also
correlates with a decrease in the reports of unity [19, 21]. Despite the abundance of experimental
data on this issue, no general theory exists that can explain multisensory perception across a wide
range of cue conflicts.
2 Models
The success of Bayesian models for cue integration has motivated attempts to extend them to situations of large sensory conflict and a consequently low degree of integration. In one recent study
taking this approach, subjects were presented with concurrent visual flashes and auditory beeps and
asked to count both the number of flashes and the number of beeps [11]. The advantage of the
experimental paradigm adopted here was that it probed the joint response distribution by requiring
a dual report. Human data were accounted for well by a Bayesian model in which the joint prior
distribution over visual and auditory number was approximated from the data. In a similar study,
subjects were presented with concurrent flashes and taps and asked to count either the flashes or
the taps [9, 22]. The Bayesian model proposed by these authors assumed a joint prior distribution
with a near-diagonal form. The corresponding generative model assumes that the sensory sources
somehow interact with one another. A third experiment modulated the rates of flashes and beeps.
The task was to judge either the visual or the auditory modulation rate relative to a standard [23].
The data from this experiment were modeled using a joint prior distribution which is the sum of a
near-diagonal prior and a flat background.
While all these models are Bayesian in a formal sense, their underlying generative model does
not formalize the model selection process that underlies the combination of cues. This makes it
necessary to either estimate an empirical prior [11] by fitting it to human behavior or to assume an
ad hoc form [22, 23]. However, we believe that such assumptions are not needed. It was shown
recently that human judgments of spatial unity in an auditory-visual spatial localization task can be
described using a Bayesian inference model that infers causal structure [24, 25]. In this model, the
brain does not only estimate a stimulus variable, but also infers the probability that the two stimuli
have a common cause. In this paper we compare these different models on a large data set of human
position estimates in an auditory-visual task.
In this section we first describe the traditional cue integration model, then the recent models based
on joint stimulus priors, and finally the causal inference model. To relate to the experiment in the
next section, we will use the terminology of auditory-visual spatial localization, but the formalism
is very general.
2.1 Traditional cue integration
The traditional generative model of cue integration [26] has a single source location s which produces on each trial an internal representation (cue) of visual location, $x_V$, and one of auditory location, $x_A$. We assume that the noise processes by which these internal representations are generated are conditionally independent of each other and follow Gaussian distributions. That is, $p(x_V|s) = \mathcal{N}(x_V; s, \sigma_V)$ and $p(x_A|s) = \mathcal{N}(x_A; s, \sigma_A)$, where $\mathcal{N}(x; \mu, \sigma)$ stands for the normal distribution over x with mean $\mu$ and standard deviation $\sigma$. If on a given trial the internal representations are $x_V$ and $x_A$, the probability that their source was s is given by Bayes' rule,
$$p(s|x_V, x_A) \propto p(x_V|s)\, p(x_A|s).$$
If a subject performs maximum-likelihood estimation, then the estimate will be $\hat{s} = \frac{w_V x_V + w_A x_A}{w_V + w_A}$, where $w_V = \sigma_V^{-2}$ and $w_A = \sigma_A^{-2}$. It is important to keep in mind that this is the estimate on a single trial. A psychophysical experimenter can never have access to $x_V$ and $x_A$, which
are the noisy internal representations. Instead, an experimenter will want to collect estimates over many trials and is interested in the distribution of $\hat{s}$ given $s_V$ and $s_A$, which are the sources generated by the experimenter. In a typical cue combination experiment, $x_V$ and $x_A$ are not actually generated by the same source, but by different sources, a visual one $s_V$ and an auditory one $s_A$. These sources are chosen close to each other so that the subject can imagine that the resulting cues originate from a single source and thus implicitly have a common cause. The experimentally observed distribution is then
$$p(\hat{s}|s_V, s_A) = \iint p(\hat{s}|x_V, x_A)\, p(x_V|s_V)\, p(x_A|s_A)\, dx_V\, dx_A$$
Given that $\hat{s}$ is a linear combination of two normally distributed variables, it will itself follow a normal distribution, with mean $\langle\hat{s}\rangle = \frac{w_V s_V + w_A s_A}{w_V + w_A}$ and variance $\sigma_{\hat{s}}^2 = \frac{1}{w_V + w_A}$. The reason that we emphasize this point is that many authors identify the estimate distribution $p(\hat{s}|s_V, s_A)$ with the posterior distribution $p(s|x_V, x_A)$. This is justified in this case because all distributions are Gaussian and the estimate is a linear combination of cues. However, in the case of causal inference, these conditions are violated and the estimate distribution will in general not be the same as the posterior distribution.
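To make the distinction between the single-trial posterior and the experimentally observed estimate distribution concrete, the following sketch simulates $p(\hat{s}|s_V, s_A)$ for the mandatory-integration model. It is written in Python with NumPy, and the noise parameters are the values fitted later in Section 3.1, used here only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def forced_fusion_estimates(s_V, s_A, sigma_V=2.14, sigma_A=9.2, n_trials=100_000):
    """Monte-Carlo estimate distribution p(s_hat | s_V, s_A) under
    mandatory integration (traditional cue combination)."""
    x_V = rng.normal(s_V, sigma_V, n_trials)   # noisy internal representations
    x_A = rng.normal(s_A, sigma_A, n_trials)
    w_V, w_A = sigma_V ** -2, sigma_A ** -2
    return (w_V * x_V + w_A * x_A) / (w_V + w_A)

s_hat = forced_fusion_estimates(s_V=5.0, s_A=-5.0)
# mean -> (w_V s_V + w_A s_A)/(w_V + w_A); std -> 1/sqrt(w_V + w_A)
print(s_hat.mean(), s_hat.std())
```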
2.2 Models with bisensory stimulus priors
Models with bisensory stimulus priors propose the posterior over source positions to be proportional to the product of unimodal likelihoods and a two-dimensional prior:
$$p(s_V, s_A|x_V, x_A) \propto p(s_V, s_A)\, p(x_V|s_V)\, p(x_A|s_A)$$
The traditional cue combination model has $p(s_V, s_A) = p(s_V)\,\delta(s_V - s_A)$, usually (as above) even with $p(s_V)$ uniform. The question arises what bisensory stimulus prior is appropriate. In [11], the prior is estimated from data, has a large number of parameters, and is therefore limited in its predictive power. In [23], it has the form
$$p(s_V, s_A) \propto \omega + e^{-\frac{(s_V - s_A)^2}{2\sigma_{\text{coupling}}^2}}$$
while in [22] the additional assumption $\omega = 0$ is made¹. In all three models, the response distribution $p(\hat{s}_V, \hat{s}_A|s_V, s_A)$ is obtained by identifying it with the posterior distribution $p(s_V, s_A|x_V, x_A)$. This procedure thus implicitly assumes that marginalizing over the latent variables $x_V$ and $x_A$ is not necessary, which leads to a significant error for non-Gaussian priors. In this paper we correctly deal with these issues and in all cases marginalize over the latent variables. The parametric models used for the coupling between the cues lead to an elegant low-dimensional model of cue integration that allows for estimates of single cues that differ from one another.
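As a concrete illustration of such a joint-prior model, the sketch below evaluates the single-trial posterior over $(s_V, s_A)$ on a grid for the prior of [23]; the parameter values $\omega$ and $\sigma_c$ are illustrative assumptions, and, as noted above, turning this into a response distribution still requires marginalizing over $x_V$ and $x_A$ rather than reading responses off this posterior:

```python
import numpy as np

def roach_posterior(x_V, x_A, sigma_V=2.14, sigma_A=9.2,
                    omega=0.1, sigma_c=5.0):
    """Single-trial posterior over (s_V, s_A) for the joint-prior model
    of [23], evaluated on a grid; omega and sigma_c are illustrative."""
    grid = np.linspace(-15, 15, 121)
    sV, sA = np.meshgrid(grid, grid, indexing="ij")
    prior = omega + np.exp(-(sV - sA) ** 2 / (2 * sigma_c ** 2))
    lik = (np.exp(-(x_V - sV) ** 2 / (2 * sigma_V ** 2)) *
           np.exp(-(x_A - sA) ** 2 / (2 * sigma_A ** 2)))
    post = prior * lik
    return grid, post / post.sum()
```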
2.3 Causal inference model
In the causal inference model [24, 25], we start from the traditional cue integration model but remove the assumption that two signals are caused by the same source. Instead, the number of sources can be one or two and is itself a variable that needs to be inferred from the cues.
Figure 1: Generative model of causal inference. [Graphical model: if C = 1, a single source s generates both x_A and x_V; if C = 2, independent sources s_A and s_V generate x_A and x_V, respectively.]
¹ This family of Bayesian posterior distributions also includes one used to successfully model cue combination in depth perception [27, 28]. In depth perception, however, there is no notion of segregation, as a single surface is always assumed.
If there are two sources, they are assumed to be independent. Thus, we use the graphical model depicted in Fig. 1. We denote the number of sources by C. The probability distribution over C given internal representations $x_V$ and $x_A$ is given by Bayes' rule:
$$p(C|x_V, x_A) \propto p(x_V, x_A|C)\, p(C).$$
In this equation, $p(C)$ is the a priori probability of C. We will denote the probability of a common cause by $p_{\text{common}}$, so that $p(C = 1) = p_{\text{common}}$ and $p(C = 2) = 1 - p_{\text{common}}$. The probability of generating $x_V$ and $x_A$ given C is obtained by marginalizing over the sources:
$$p(x_V, x_A|C=1) = \int p(x_V, x_A|s)\, p(s)\, ds = \int p(x_V|s)\, p(x_A|s)\, p(s)\, ds$$
Here $p(s)$ is a prior for spatial location, which we assume to be distributed as $\mathcal{N}(s; 0, \sigma_P)$. Then all three factors in this integral are Gaussians, allowing for an analytic solution:
$$p(x_V, x_A|C=1) = \frac{1}{2\pi\sqrt{\sigma_V^2\sigma_A^2 + \sigma_V^2\sigma_P^2 + \sigma_A^2\sigma_P^2}}\,\exp\left[-\frac{1}{2}\,\frac{(x_V - x_A)^2\sigma_P^2 + x_V^2\sigma_A^2 + x_A^2\sigma_V^2}{\sigma_V^2\sigma_A^2 + \sigma_V^2\sigma_P^2 + \sigma_A^2\sigma_P^2}\right].$$
For $p(x_V, x_A|C=2)$ we realize that $x_V$ and $x_A$ are independent of each other and thus obtain
$$p(x_V, x_A|C=2) = \int p(x_V|s_V)\, p(s_V)\, ds_V \int p(x_A|s_A)\, p(s_A)\, ds_A$$
Again, as all these distributions are assumed to be Gaussian, we obtain an analytic solution,
$$p(x_V, x_A|C=2) = \frac{1}{2\pi\sqrt{(\sigma_V^2+\sigma_P^2)(\sigma_A^2+\sigma_P^2)}}\,\exp\left[-\frac{1}{2}\left(\frac{x_V^2}{\sigma_V^2+\sigma_P^2} + \frac{x_A^2}{\sigma_A^2+\sigma_P^2}\right)\right].$$
Now that we have computed $p(C|x_V, x_A)$, the posterior distribution over sources is given by
$$p(s_i|x_V, x_A) = \sum_{C=1,2} p(s_i|x_V, x_A, C)\, p(C|x_V, x_A)$$
where i can be V or A and the posteriors conditioned on C are well-known:
$$p(s_i|x_A, x_V, C=1) = \frac{p(x_A|s_i)\, p(x_V|s_i)\, p(s_i)}{\int p(x_A|s)\, p(x_V|s)\, p(s)\, ds}, \qquad p(s_i|x_A, x_V, C=2) = \frac{p(x_i|s_i)\, p(s_i)}{\int p(x_i|s_i)\, p(s_i)\, ds_i}$$
The former is the same as in the case of mandatory integration with a prior, the latter is simply the unimodal posterior in the presence of a prior. Based on the posterior distribution on a given trial, $p(s_i|x_V, x_A)$, an estimate has to be created. For this, we use a sum-squared-error cost function,
$$\text{Cost} = p(C=1|x_V, x_A)\,\left\langle(\hat{s} - s)^2\right\rangle + p(C=2|x_V, x_A)\,\left\langle(\hat{s} - s_{V \text{ or } A})^2\right\rangle.$$
Then the best estimate is the mean of the posterior distribution, for instance for the visual estimation:
$$\hat{s}_V = p(C=1|x_A, x_V)\,\hat{s}_{V,C=1} + p(C=2|x_A, x_V)\,\hat{s}_{V,C=2}$$
where
$$\hat{s}_{V,C=1} = \frac{x_V\sigma_V^{-2} + x_A\sigma_A^{-2} + x_P\sigma_P^{-2}}{\sigma_V^{-2} + \sigma_A^{-2} + \sigma_P^{-2}} \qquad \text{and} \qquad \hat{s}_{V,C=2} = \frac{x_V\sigma_V^{-2} + x_P\sigma_P^{-2}}{\sigma_V^{-2} + \sigma_P^{-2}}.$$
If $p_{\text{common}}$ equals 0 or 1, this estimate reduces to one of the conditioned estimates and is linear in $x_V$ and $x_A$. If $0 < p_{\text{common}} < 1$, the estimate is a nonlinear combination of $x_V$ and $x_A$, because of the functional form of $p(C|x_V, x_A)$. The response distributions, that is, the distributions of $\hat{s}_V$ and $\hat{s}_A$ given $s_V$ and $s_A$ over many trials, now cannot be identified with the posterior distribution on a single trial and cannot be computed analytically either. The correct way to obtain the response distribution is to simulate an experiment numerically.
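A minimal numerical simulation of this kind might look as follows. This is a sketch, not the authors' code; it assumes the prior mean term $x_P = 0$ and uses the parameter values reported in Section 3.1:

```python
import numpy as np

rng = np.random.default_rng(0)

def causal_inference_estimates(s_V, s_A, p_common=0.28, sigma_V=2.14,
                               sigma_A=9.2, sigma_P=12.3, n_trials=100_000):
    """Simulated visual estimates of the causal inference model for one
    stimulus condition (fitted parameter values from Section 3.1)."""
    x_V = rng.normal(s_V, sigma_V, n_trials)
    x_A = rng.normal(s_A, sigma_A, n_trials)
    vV, vA, vP = sigma_V**2, sigma_A**2, sigma_P**2
    # Likelihoods of the internal signals under one vs. two causes
    z1 = vV*vA + vV*vP + vA*vP
    like1 = np.exp(-0.5 * ((x_V - x_A)**2*vP + x_V**2*vA + x_A**2*vV) / z1) \
            / (2*np.pi*np.sqrt(z1))
    z2 = (vV + vP) * (vA + vP)
    like2 = np.exp(-0.5 * (x_V**2/(vV + vP) + x_A**2/(vA + vP))) \
            / (2*np.pi*np.sqrt(z2))
    post1 = p_common*like1 / (p_common*like1 + (1 - p_common)*like2)
    # Conditioned estimates (the prior mean x_P = 0 drops out of the numerators)
    s1 = (x_V/vV + x_A/vA) / (1/vV + 1/vA + 1/vP)
    s2 = (x_V/vV) / (1/vV + 1/vP)
    return post1*s1 + (1 - post1)*s2

est = causal_inference_estimates(5.0, -5.0)
print(est.mean(), est.std())   # response distribution statistics for this condition
```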
Note that the causal inference model above can also be cast in the form of a bisensory stimulus prior by integrating out the latent variable C, with:
$$p(s_A, s_V) = p(C = 1)\,\delta(s_A - s_V)\, p(s_A) + p(C = 2)\, p(s_A)\, p(s_V)$$
However, in addition to justifying the form of the interaction between the cues, the causal inference model has the advantage of being based on a generative model that formalizes salient properties of the world well, and it thereby also allows one to predict judgments of unity.
3 Model performance and comparison
To examine the performance of the causal inference model and to compare it to previous models, we performed a human psychophysics experiment in which we adopted the same dual-report paradigm as was used in [11]. Observers were simultaneously presented with a brief visual and also an auditory stimulus, each of which could originate from one of five locations on an imaginary horizontal line (-10°, -5°, 0°, 5°, or 10° with respect to the fixation point). Auditory stimuli were 32 ms of white noise filtered through an individually calibrated head-related transfer function (HRTF) and presented through a pair of headphones, whereas the visual stimuli were high-contrast Gabors on a noisy background presented on a 21-inch CRT monitor. Observers had to report by means of a key press (1-5) the perceived positions of both the visual and the auditory stimulus. Each combination of locations was presented with the same frequency over the course of the experiment. In this way, for each condition, visual and auditory response histograms were obtained.
We obtained response distributions for each of the three models described above by numerical simulation. On each trial, estimation is followed by a step in which the key is selected that corresponds to the position closest to the best estimate. The simulated histograms obtained in this way were compared to the measured response frequencies of all subjects by computing the R² statistic.
The parameters in the causal inference model were optimized using fminsearch in MATLAB to maximize R². The best combination of parameters yielded an R² of 0.97. The response frequencies are depicted in Fig. 2. The bisensory prior models also explain most of the variance, with R² = 0.96 for the Roach model and R² = 0.91 for the Bresciani model. This shows that it is possible to model cue combination for large disparities well using such models.
Figure 2: A comparison between subjects' performance and the causal inference model. The blue line indicates the frequency of subjects' responses to visual stimuli; the red line, the responses to auditory stimuli. Each set of lines is one set of audio-visual stimulus conditions. Rows of conditions indicate constant visual stimulus, columns constant auditory stimulus. Model predictions are indicated by the red and blue dotted lines.
3.1 Model comparison
To facilitate quantitative comparison with other models, we now fit the parameters of each model² to individual subject data, maximizing the likelihood of the model, i.e., the probability of the response frequencies under the model. The causal inference model fits human data better than the other models. Compared to the best fit of the causal inference model, the Bresciani model has a maximal log likelihood ratio (base e) of the data of −22 ± 6 (mean ± s.e.m. over subjects), and the Roach model has a maximal log likelihood ratio of the data of −18 ± 6. A causal inference model that maximizes the probability of being correct instead of minimizing the mean squared error has a maximal log likelihood ratio of −18 ± 3. These values are considered decisive evidence in favor of the causal inference model that minimizes the mean squared error (for details, see [25]).
The parameter values found in the likelihood optimization of the causal model are as follows: $p_{\text{common}} = 0.28 \pm 0.05$, $\sigma_V = 2.14 \pm 0.22°$, $\sigma_A = 9.2 \pm 1.1°$, $\sigma_P = 12.3 \pm 1.1°$ (mean ±
s.e.m. over subjects). We see that there is a relatively low prior probability of a common cause. In
this paradigm, auditory localization is considerably less precise than visual localization. Also, there
is a weak prior for central locations.
3.2 Localization bias
We used the individual subject fittings from above and averaged the auditory bias values obtained from those fits (i.e., we did not fit the bias data themselves). Fits are shown in Fig. 3 (dashed lines). We applied a paired t-test to the differences between the 5° and 20° disparity conditions (model-subject comparison). Using a double-sided test, the null hypothesis that the difference between the bias in the 5° and 20° conditions is correctly predicted by each model is rejected for the Bresciani model (p < 0.002) and the Roach model (p < 0.042) and accepted for the causal inference model (p > 0.17). Alternatively, with a single-sided test, the hypothesis is rejected for the Bresciani model (p < 0.001) and the Roach model (p < 0.021) and accepted for the causal inference model (p > 0.9).
A useful quantity to gain more insight into the structure of multisensory data is the cross-modal
bias. In our experiment, relative auditory bias is defined as the difference between the mean auditory estimate in a given condition and the real auditory position, divided by the difference between the real visual position and the real auditory position in this condition. If the influence
of vision on the auditory estimate is strong, then the relative auditory bias will be high (close
to one). It is well-known that bias decreases with spatial disparity and our experiment is no
exception (solid line in Fig. 3; data were combined between positive and negative disparities).
It can easily be shown that a traditional cue integration model would predict a bias equal to
$\left(1 + \sigma_V^2/\sigma_A^2\right)^{-1}$, which would be close to 1 and independent of disparity, unlike the data. This shows that a mandatory integration model is an insufficient model of multisensory interactions.
Figure 3: Auditory bias as a function of spatial
disparity. Solid blue line: data. Red: Causal inference model. Green: Model by Roach et al. [23].
Purple: Model by Bresciani et al. [22]. Models were optimized on response frequencies (as in
Fig. 2), not on the bias data.
The reason that the Bresciani model fares worst is that its prior distribution does not include a component that corresponds to independent causes. On the contrary, the prior used in the Roach model contains two terms, one term that is independent of the disparity and one term that decreases with increasing disparity. It is thus functionally somewhat similar to the causal inference model.
² The Roach et al. model has four free parameters ($\omega$, $\sigma_V$, $\sigma_A$, $\sigma_{\text{coupling}}$), the Bresciani et al. model has three ($\sigma_V$, $\sigma_A$, $\sigma_{\text{coupling}}$), and the causal inference model has four ($p_{\text{common}}$, $\sigma_V$, $\sigma_A$, $\sigma_P$). We do not consider the Shams et al. model here, since it has many more parameters and it is not immediately clear how in this model the erroneous identification of posterior with response distribution can be corrected.
4 Discussion
We have argued that any model of multisensory perception should account not only for situations
of small, but also of large conflict. In these situations, segregation is more likely, in which the two
stimuli are not perceived to have the same cause. Even when segregation occurs, the two stimuli can
still influence each other.
We compared three Bayesian models designed to account for situations of large conflict by applying them to auditory-visual spatial localization data. We pointed out a common mistake: for non-Gaussian bisensory priors without mandatory integration, the response distribution can no longer
be identified with the posterior distribution. After correct implementation of the three models, we
found that the causal inference model is superior to the models with ad hoc bisensory priors. This is
expected, as the nervous system actually needs to solve the problem of deciding which stimuli have
a common cause and which stimuli are unrelated.
We have seen that multisensory perception is a suitable tool for studying causal inference. However, the causal inference model also has the potential to quantitatively explain a number of other
perceptual phenomena, including perceptual grouping and binding, as well as within-modality cue
combination [27, 28]. Causal inference is a universal problem: whenever the brain has multiple
pieces of information it must decide if they relate to one another or are independent.
As the causal inference model describes how the brain processes probabilistic sensory information,
the question arises about the neural basis of these processes. Neural populations encode probability
distributions over stimuli through Bayes? rule, a type of coding known as probabilistic population
coding. Recent work has shown how the optimal cue combination assuming a common cause can
be implemented in probabilistic population codes through simple linear operations on neural activities [29]. This framework makes essential use of the structure of neural variability and leads to
physiological predictions for activity in areas that combine multisensory input, such as the superior
colliculus. Computational mechanisms for causal inference are expected to have a neural substrate that
generalizes these linear operations on population activities. A neural implementation of the causal
inference model will open the door to a complete neural theory of multisensory perception.
References
[1] H. L. Pick, D. H. Warren, and J. C. Hay. Sensory conflict in judgements of spatial direction. Percept. Psychophys., 6:203–205, 1969.
[2] D. H. Warren, R. B. Welch, and T. J. McCarthy. The role of visual-auditory 'compellingness' in the ventriloquism effect: implications for transitivity among the spatial senses. Percept Psychophys, 30(6):557–564, 1981.
[3] D. Alais and D. Burr. The ventriloquist effect results from near-optimal bimodal integration. Curr Biol, 14(3):257–262, 2004.
[4] R. A. Jacobs. Optimal integration of texture and motion cues to depth. Vision Res, 39(21):3621–3629, 1999.
[5] R. J. van Beers, A. C. Sittig, and J. J. Gon. Integration of proprioceptive and visual position-information: An experimentally supported model. J Neurophysiol, 81(3):1355–1364, 1999.
[6] D. H. Warren and W. T. Cleaves. Visual-proprioceptive interaction under large amounts of conflict. J Exp Psychol, 90(2):206–214, 1971.
[7] C. E. Jack and W. R. Thurlow. Effects of degree of visual association and angle of displacement on the 'ventriloquism' effect. Percept Mot Skills, 37(3):967–979, 1973.
[8] G. H. Recanzone. Auditory influences on visual temporal rate perception. J Neurophysiol, 89(2):1078–1093, 2003.
[9] J. P. Bresciani, M. O. Ernst, K. Drewing, G. Bouyer, V. Maury, and A. Kheddar. Feeling what you hear: auditory signals can modulate tactile tap perception. Exp Brain Res, 162(2):172–180, 2005.
[10] R. Gepshtein, P. Leiderman, L. Genosar, and D. Huppert. Testing the three step excited state proton transfer model by the effect of an excess proton. J Phys Chem A Mol Spectrosc Kinet Environ Gen Theory, 109(42):9674–9684, 2005.
[11] L. Shams, W. J. Ma, and U. Beierholm. Sound-induced flash illusion as an optimal percept. Neuroreport, 16(17):1923–1927, 2005.
[12] G. Thomas. Experimental study of the influence of vision on sound localisation. J Exp Psychol, 28:167–177, 1941.
[13] W. R. Thurlow and C. E. Jack. Certain determinants of the 'ventriloquism effect'. Percept Mot Skills, 36(3):1171–1184, 1973.
[14] C. S. Choe, R. B. Welch, R. M. Gilford, and J. F. Juola. The 'ventriloquist effect': visual dominance or response bias. Perception and Psychophysics, 18:55–60, 1975.
[15] R. I. Bermant and R. B. Welch. Effect of degree of separation of visual-auditory stimulus and eye position upon spatial interaction of vision and audition. Percept Mot Skills, 42(43):487–493, 1976.
[16] R. B. Welch and D. H. Warren. Immediate perceptual response to intersensory discrepancy. Psychol Bull, 88(3):638–667, 1980.
[17] P. Bertelson and M. Radeau. Cross-modal bias and perceptual fusion with auditory-visual spatial discordance. Percept Psychophys, 29(6):578–584, 1981.
[18] P. Bertelson, F. Pavani, E. Ladavas, J. Vroomen, and B. de Gelder. Ventriloquism in patients with unilateral visual neglect. Neuropsychologia, 38(12):1634–1642, 2000.
[19] D. A. Slutsky and G. H. Recanzone. Temporal and spatial dependency of the ventriloquism effect. Neuroreport, 12(1):7–10, 2001.
[20] J. Lewald, W. H. Ehrenstein, and R. Guski. Spatio-temporal constraints for auditory-visual integration. Behav Brain Res, 121(1-2):69–79, 2001.
[21] M. T. Wallace, G. E. Roberson, W. D. Hairston, B. E. Stein, J. W. Vaughan, and J. A. Schirillo. Unifying multisensory signals across time and space. Exp Brain Res, 158(2):252–258, 2004.
[22] J. P. Bresciani, F. Dammeier, and M. O. Ernst. Vision and touch are automatically integrated for the perception of sequences of events. J Vis, 6(5):554–564, 2006.
[23] N. W. Roach, J. Heron, and P. V. McGraw. Resolving multisensory conflict: a strategy for balancing the costs and benefits of audio-visual integration. Proc Biol Sci, 273(1598):2159–2168, 2006.
[24] K. P. Kording and D. M. Wolpert. Bayesian decision theory in sensorimotor control. Trends Cogn Sci, 2006.
[25] K. P. Kording, U. Beierholm, W. J. Ma, S. Quartz, J. Tenenbaum, and L. Shams. Causal inference in multisensory perception. PLoS ONE, 2(9):e943, 2007.
[26] Z. Ghahramani. Computation and psychophysics of sensorimotor integration. PhD thesis, Massachusetts Institute of Technology, 1995.
[27] D. C. Knill. Mixture models and the probabilistic structure of depth cues. Vision Res, 43(7):831–854, 2003.
[28] D. C. Knill. Robust cue integration: A Bayesian model and evidence from cue conflict studies with stereoscopic and figure cues to slant. Journal of Vision, 7(7):2–24.
[29] W. J. Ma, J. M. Beck, P. E. Latham, and A. Pouget. Bayesian inference with probabilistic population codes. Nat Neurosci, 9(11):1432–1438, 2006.
| 3207 |@word beep:3 determinant:1 trial:8 judgement:1 open:1 simulation:1 jacob:1 excited:1 pick:1 thereby:1 solid:2 contains:1 disparity:12 ording:1 imaginary:1 comparing:1 com:2 si:13 gmail:1 must:1 realize:1 chicago:2 analytic:1 remove:1 designed:1 cue:31 generative:5 selected:1 nervous:2 filtered:1 location:7 five:1 become:1 fixation:1 fitting:2 combine:1 burr:2 speaks:1 expected:2 indeed:2 behavior:1 themselves:1 examine:1 wallace:1 brain:9 automatically:1 increasing:3 schirillo:1 estimating:1 underlying:3 unrelated:1 maximizes:1 null:1 what:2 psych:1 minimizes:1 gelder:1 formalizes:2 temporal:4 quantitative:1 act:1 puppet:2 control:2 normally:1 appear:1 veridical:1 positive:1 xv:50 mistake:1 despite:1 modulation:1 collect:1 limited:1 range:1 averaged:1 testing:1 illusion:4 cogn:1 procedure:1 displacement:1 area:1 universal:1 empirical:1 gabor:1 poster:1 integrating:1 cannot:2 close:3 selection:1 marginalize:1 influence:5 applying:1 vaughan:1 demonstrated:1 conceptualized:1 maximizing:1 welch:4 identifying:2 immediately:1 pouget:1 rule:3 insight:1 his:2 population:5 notion:1 traditionally:1 imagine:1 beierholm:3 substrate:1 hypothesis:2 roberson:1 trend:1 approximated:1 gon:1 observed:1 role:1 capture:1 worst:1 plo:1 decrease:5 asked:3 koerding:1 predictive:1 localization:10 upon:1 basis:1 neurophysiol:2 model2:1 easily:1 joint:5 describe:1 posed:1 solve:3 favor:1 statistic:1 noisy:2 envisage:1 itself:2 hoc:2 blob:1 advantage:2 sequence:1 propose:1 interaction:4 product:1 maximal:3 inserting:1 gen:1 ernst:2 los:2 double:1 produce:1 generating:1 coupling:4 measured:1 dxv:1 keywords:1 sa:23 heron:1 strong:1 implemented:1 predicted:1 come:1 judge:1 indicate:1 differ:1 direction:1 correct:3 attribute:1 human:7 crt:1 numeral:1 argued:1 summation:1 considered:1 normal:2 exp:6 deciding:1 predict:2 perceived:3 estimation:3 proc:1 individually:1 concurrent:2 successfully:1 tool:1 intersensory:1 gaussian:4 always:1 dsv:1 encode:1 likelihood:7 fooled:1 indicates:1 contrast:1 sense:1 inference:33 integrated:2 pasadena:1 her:2 interested:1 alais:2 issue:2 dual:2 flexible:1 among:1 priori:1 spatial:17 integration:21 psychophysics:4 equal:2 never:1 having:1 choe:1 nearly:1 discrepancy:2 bertelson:2 report:4 stimulus:27 quantitatively:1 few:1 primarily:1 simultaneously:1 individual:2 beck:1 attempt:1 curr:1 headphone:1 localisation:1 radeau:1 mixture:1 light:1 sens:1 implication:1 integral:1 byproduct:1 necessary:3 conduct:1 re:5 causal:33 instance:2 formalism:1 column:1 bull:1 cost:3 deviation:1 uniform:1 reported:1 mot:3 dependency:1 sv:25 considerably:1 calibrated:1 combined:1 probabilistic:5 nongaussian:1 again:1 squared:3 central:1 thesis:1 cognitive:1 audition:1 account:3 potential:1 de:1 coding:2 includes:1 caused:1 vi:1 ad:2 decisive:1 tion:1 performed:1 observer:3 closed:1 piece:1 red:3 start:1 bayes:4 vroomen:1 rochester:2 synchrony:1 il:1 purple:1 variance:2 percept:7 judgment:2 identify:1 inch:1 weak:1 bayesian:16 identification:1 bresciani:9 recanzone:2 comp:1 explain:3 phys:1 whenever:1 sensorimotor:2 hare:1 frequency:6 couple:1 gain:1 auditory:37 experimenter:3 massachusetts:1 infers:2 formalize:1 actually:2 follow:2 response:18 wei:1 modal:2 furthermore:1 xa:48 stage:1 rejected:2 d:3 horizontal:1 touch:1 nonlinear:1 somehow:1 indicated:1 puted:1 believe:1 facilitate:1 effect:9 requiring:1 former:1 analytically:1 assigned:1 proprioceptive:2 white:1 deal:1 conditionally:1 konrad:2 transitivity:1 m:1 complete:1 latham:1 performs:1 motion:1 jack:2 recently:2 common:11 superior:2 
functional:1 ji:1 extend:2 fare:1 association:1 environ:1 numerically:1 functionally:1 significant:1 slant:1 pm:1 pointed:1 had:1 moving:2 access:1 similarity:1 surface:1 longer:1 base:1 posterior:13 mccarthy:1 recent:3 mandatory:5 hay:1 certain:1 wv:3 success:1 inconsistency:1 caltech:1 seen:2 additional:1 somewhat:1 ventriloquist:4 performer:1 paradigm:3 maximize:1 signal:7 dashed:1 resolving:1 multiple:1 sound:3 sham:4 reduces:2 unimodal:2 cross:2 justifying:1 divided:1 manipulate:1 paired:1 prediction:2 underlies:1 vision:8 patient:1 histogram:2 bimodal:1 justified:1 background:2 want:1 addition:1 whereas:1 source:20 modality:1 unlike:1 subject:12 induced:1 elegant:1 contrary:1 neuropsychologia:1 near:4 presence:1 door:1 fit:6 psychology:1 audio:4 identified:2 click:1 angeles:2 whether:1 motivated:1 tactile:1 unilateral:1 speech:1 speaking:1 cause:11 behav:1 matlab:1 useful:1 clear:1 amount:1 stein:1 tenenbaum:1 dotted:1 stereoscopic:1 estimated:1 correctly:2 ladan:2 blue:3 probed:1 dominance:1 key:2 salient:1 terminology:1 four:2 monitor:1 fminsearch:1 year:1 sum:2 colliculus:1 angle:1 you:1 family:1 decide:1 separation:1 decision:1 followed:1 slutsky:1 yielded:1 activity:3 strength:1 constraint:1 flat:1 ucla:1 simulate:1 optimality:1 relatively:1 department:2 combination:15 ball:1 across:2 describes:1 sittig:1 unity:3 dxa:2 rehabilitation:1 intuitively:1 sided:2 segregation:3 equation:1 count:2 mechanism:1 needed:1 mind:1 hto:1 adopted:2 studying:1 gaussians:1 operation:2 generalizes:1 v2:1 appropriate:1 thomas:1 assumes:2 include:1 graphical:1 unifying:1 neglect:1 ghahramani:1 establish:1 psychophysical:1 question:2 quantity:1 occurs:1 print:1 strategy:2 parametric:1 diagonal:2 traditional:6 exhibit:1 simulated:1 sci:2 ulrik:1 originate:2 reason:2 assuming:1 code:2 modeled:1 insufficient:1 ratio:3 minimizing:1 relate:2 negative:1 implementation:2 allowing:1 roach:8 immediate:1 situation:4 variability:1 head:1 precise:1 inferred:1 cast:1 pair:1 optimized:2 conflict:9 tap:3 california:2 proton:2 address:1 usually:1 perception:16 psychophys:3 hear:1 green:1 including:1 belief:1 mouth:2 power:1 event:2 suitable:1 technology:2 brief:1 eye:1 created:1 hrtf:1 psychol:3 prior:23 understanding:1 segregated:1 determining:1 relative:3 marginalizing:1 expect:1 dsi:1 northwestern:1 proportional:1 degree:4 xp:2 beer:1 article:1 balancing:1 row:1 course:1 accounted:1 supported:1 last:1 free:1 formal:2 bias:15 warren:4 institute:3 wide:1 taking:1 distributed:2 van:1 benefit:1 depth:4 stand:1 world:1 sensory:7 author:2 collection:1 feeling:1 correlate:1 kording:2 excess:1 emphasize:1 skill:3 implicitly:2 mcgraw:1 keep:1 deg:1 assumed:5 spatio:1 xi:2 alternatively:1 latent:3 hairston:1 transfer:2 robust:1 ca:2 mol:1 interact:1 did:1 neurosci:1 noise:2 knill:2 weijima:1 child:1 fig:5 screen:1 fashion:1 ny:1 position:10 perceptual:4 third:1 abundance:1 erroneous:1 showing:1 quartz:1 r2:5 dsa:1 consequent:1 physiological:1 evidence:2 grouping:1 exists:1 essential:1 fusion:1 texture:1 phd:1 nat:1 conditioned:2 occurring:1 wolpert:1 depicted:2 simply:1 likely:2 visual:38 hitting:1 binding:1 corresponds:2 ma:4 modulate:1 presentation:1 flash:6 x2a:1 experimentally:2 typical:1 corrected:1 accepted:2 experimental:3 multisensory:16 exception:1 ventriloquism:5 internal:5 latter:1 modulated:1 arises:2 chem:1 violated:1 phenomenon:2 dept:1 neuroreport:2 biol:2 |
2,434 | 3,208 | Probabilistic Matrix Factorization
Ruslan Salakhutdinov and Andriy Mnih
Department of Computer Science, University of Toronto
6 King?s College Rd, M5S 3G4, Canada
{rsalakhu,amnih}@cs.toronto.edu
Abstract
Many existing approaches to collaborative filtering can neither handle very large
datasets nor easily deal with users who have very few ratings. In this paper we
present the Probabilistic Matrix Factorization (PMF) model which scales linearly
with the number of observations and, more importantly, performs well on the
large, sparse, and very imbalanced Netflix dataset. We further extend the PMF
model to include an adaptive prior on the model parameters and show how the
model capacity can be controlled automatically. Finally, we introduce a constrained version of the PMF model that is based on the assumption that users who
have rated similar sets of movies are likely to have similar preferences. The resulting model is able to generalize considerably better for users with very few ratings.
When the predictions of multiple PMF models are linearly combined with the
predictions of Restricted Boltzmann Machines models, we achieve an error rate
of 0.8861, which is nearly 7% better than the score of Netflix's own system.
1 Introduction
One of the most popular approaches to collaborative filtering is based on low-dimensional factor
models. The idea behind such models is that attitudes or preferences of a user are determined by
a small number of unobserved factors. In a linear factor model, a user's preferences are modeled
by linearly combining item factor vectors using user-specific coefficients. For example, for N users
and M movies, the N ? M preference matrix R is given by the product of an N ? D user coefficient
matrix U T and a D ? M factor matrix V [7]. Training such a model amounts to finding the best
rank-D approximation to the observed N ? M target matrix R under the given loss function.
A variety of probabilistic factor-based models has been proposed recently [2, 3, 4]. All these models
can be viewed as graphical models in which hidden factor variables have directed connections to
variables that represent user ratings. The major drawback of such models is that exact inference is
intractable [12], which means that potentially slow or inaccurate approximations are required for
computing the posterior distribution over hidden factors in such models.
Low-rank approximations based on minimizing the sum-squared distance can be found using Singular Value Decomposition (SVD). SVD finds the matrix $\hat{R} = U^T V$ of the given rank which minimizes the sum-squared distance to the target matrix R. Since most real-world datasets are sparse,
most entries in R will be missing. In those cases, the sum-squared distance is computed only for
the observed entries of the target matrix R. As shown by [9], this seemingly minor modification
results in a difficult non-convex optimization problem which cannot be solved using standard SVD
implementations.
Instead of constraining the rank of the approximation matrix $\hat{R} = U^T V$, i.e. the number of factors,
[10] proposed penalizing the norms of U and V . Learning in this model, however, requires solving a sparse semi-definite program (SDP), making this approach infeasible for datasets containing
millions of observations.
[Figure 1 diagrams: left, PMF, with hyperparameters σ_U, σ_V over feature vectors U_i (i = 1, ..., N) and V_j (j = 1, ..., M) generating ratings R_ij with observation noise σ; right, constrained PMF, with additional variables W_k (k = 1, ..., M), Y_i, and indicator I_i, and hyperparameter σ_W.]
Figure 1: The left panel shows the graphical model for Probabilistic Matrix Factorization (PMF). The right
panel shows the graphical model for constrained PMF.
Many of the collaborative filtering algorithms mentioned above have been applied to modelling
user ratings on the Netflix Prize dataset that contains 480,189 users, 17,770 movies, and over 100
million observations (user/movie/rating triples). However, none of these methods have proved to
be particularly successful for two reasons. First, none of the above-mentioned approaches, except
for the matrix-factorization-based ones, scale well to large datasets. Second, most of the existing
algorithms have trouble making accurate predictions for users who have very few ratings. A common
practice in the collaborative filtering community is to remove all users with fewer than some minimal
number of ratings. Consequently, the results reported on the standard datasets, such as MovieLens
and EachMovie, then seem impressive because the most difficult cases have been removed. For
example, the Netflix dataset is very imbalanced, with 'infrequent' users rating fewer than 5 movies, while 'frequent' users rate over 10,000 movies. However, since the standardized test set includes
the complete range of users, the Netflix dataset provides a much more realistic and useful benchmark
for collaborative filtering algorithms.
The goal of this paper is to present probabilistic algorithms that scale linearly with the number of
observations and perform well on very sparse and imbalanced datasets, such as the Netflix dataset.
In Section 2 we present the Probabilistic Matrix Factorization (PMF) model that models the user
preference matrix as a product of two lower-rank user and movie matrices. In Section 3, we extend
the PMF model to include adaptive priors over the movie and user feature vectors and show how
these priors can be used to control model complexity automatically. In Section 4 we introduce a
constrained version of the PMF model that is based on the assumption that users who rate similar
sets of movies have similar preferences. In Section 5 we report the experimental results that show
that PMF considerably outperforms standard SVD models. We also show that constrained PMF and
PMF with learnable priors improve model performance significantly. Our results demonstrate that
constrained PMF is especially effective at making better predictions for users with few ratings.
2 Probabilistic Matrix Factorization (PMF)
Suppose we have $M$ movies, $N$ users, and integer rating values from 1 to $K$.¹ Let $R_{ij}$ represent
the rating of user $i$ for movie $j$, and let $U \in \mathbb{R}^{D \times N}$ and $V \in \mathbb{R}^{D \times M}$ be latent user and movie feature
matrices, with column vectors $U_i$ and $V_j$ representing user-specific and movie-specific latent feature
vectors respectively. Since model performance is measured by computing the root mean squared
error (RMSE) on the test set, we first adopt a probabilistic linear model with Gaussian observation
noise (see fig. 1, left panel). We define the conditional distribution over the observed ratings as

$$ p(R \mid U, V, \sigma^2) = \prod_{i=1}^{N} \prod_{j=1}^{M} \big[ \mathcal{N}(R_{ij} \mid U_i^\top V_j, \sigma^2) \big]^{I_{ij}}, \qquad (1) $$

where $\mathcal{N}(x \mid \mu, \sigma^2)$ is the probability density function of the Gaussian distribution with mean $\mu$ and
variance $\sigma^2$, and $I_{ij}$ is the indicator function that is equal to 1 if user $i$ rated movie $j$ and equal to
0 otherwise. We also place zero-mean spherical Gaussian priors [1, 11] on user and movie feature
vectors:

$$ p(U \mid \sigma_U^2) = \prod_{i=1}^{N} \mathcal{N}(U_i \mid 0, \sigma_U^2 I), \qquad p(V \mid \sigma_V^2) = \prod_{j=1}^{M} \mathcal{N}(V_j \mid 0, \sigma_V^2 I). \qquad (2) $$

¹ Real-valued ratings can be handled just as easily by the models described in this paper.
The log of the posterior distribution over the user and movie features is given by

$$ \ln p(U, V \mid R, \sigma^2, \sigma_V^2, \sigma_U^2) = -\frac{1}{2\sigma^2} \sum_{i=1}^{N} \sum_{j=1}^{M} I_{ij} \big( R_{ij} - U_i^\top V_j \big)^2 - \frac{1}{2\sigma_U^2} \sum_{i=1}^{N} U_i^\top U_i - \frac{1}{2\sigma_V^2} \sum_{j=1}^{M} V_j^\top V_j - \frac{1}{2} \Big( \Big( \sum_{i=1}^{N} \sum_{j=1}^{M} I_{ij} \Big) \ln \sigma^2 + ND \ln \sigma_U^2 + MD \ln \sigma_V^2 \Big) + C, \qquad (3) $$
where C is a constant that does not depend on the parameters. Maximizing the log-posterior over
movie and user features with hyperparameters (i.e. the observation noise variance and prior variances) kept fixed is equivalent to minimizing the sum-of-squared-errors objective function with
quadratic regularization terms:
$$ E = \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{M} I_{ij} \big( R_{ij} - U_i^\top V_j \big)^2 + \frac{\lambda_U}{2} \sum_{i=1}^{N} \|U_i\|_{Fro}^2 + \frac{\lambda_V}{2} \sum_{j=1}^{M} \|V_j\|_{Fro}^2, \qquad (4) $$

where $\lambda_U = \sigma^2/\sigma_U^2$, $\lambda_V = \sigma^2/\sigma_V^2$, and $\|\cdot\|_{Fro}$ denotes the Frobenius norm. A local minimum
of the objective function given by Eq. 4 can be found by performing gradient descent in U and V .
Note that this model can be viewed as a probabilistic extension of the SVD model, since if all ratings
have been observed, the objective given by Eq. 4 reduces to the SVD objective in the limit of prior
variances going to infinity.
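As an illustration, the following is a minimal NumPy sketch of one gradient-descent step on the objective of Eq. 4; the variable names (U, V, lam_u, lam_v, lr) are ours, not the authors', and a real implementation would iterate this update to convergence.

```python
import numpy as np

def pmf_gradient_step(R, I, U, V, lam_u=0.002, lam_v=0.002, lr=0.005):
    """One batch gradient step on the PMF objective (Eq. 4).

    R : (N, M) rating matrix; I : (N, M) 0/1 indicator of observed entries;
    U : (D, N) user factors;  V : (D, M) movie factors.
    """
    err = I * (R - U.T @ V)            # residuals on observed entries only
    grad_U = -(V @ err.T) + lam_u * U  # dE/dU
    grad_V = -(U @ err) + lam_v * V    # dE/dV
    U -= lr * grad_U
    V -= lr * grad_V
    return U, V
```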
In our experiments, instead of using a simple linear-Gaussian model, which can make predictions
outside of the range of valid rating values, the dot product between user- and movie-specific feature
vectors is passed through the logistic function $g(x) = 1/(1 + \exp(-x))$, which bounds the range of
predictions:

$$ p(R \mid U, V, \sigma^2) = \prod_{i=1}^{N} \prod_{j=1}^{M} \big[ \mathcal{N}(R_{ij} \mid g(U_i^\top V_j), \sigma^2) \big]^{I_{ij}}. \qquad (5) $$

We map the ratings 1, ..., K to the interval [0, 1] using the function $t(x) = (x - 1)/(K - 1)$, so
that the range of valid rating values matches the range of predictions our model makes. Minimizing
the objective function given above using steepest descent takes time linear in the number of observations. A simple implementation of this algorithm in Matlab allows us to make one sweep through
the entire Netflix dataset in less than an hour when the model being trained has 30 factors.
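For concreteness, a sketch of how a prediction would be produced under this parameterization; the inverse map back to the 1..K scale is our own addition for readability:

```python
import numpy as np

def predict_rating(U_i, V_j, K=5):
    """Map the dot product through the logistic g and back to the 1..K scale."""
    g = 1.0 / (1.0 + np.exp(-(U_i @ V_j)))  # prediction in [0, 1]
    return 1.0 + (K - 1.0) * g              # inverse of t(x) = (x - 1)/(K - 1)
```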
3 Automatic Complexity Control for PMF Models
Capacity control is essential to making PMF models generalize well. Given sufficiently many factors, a PMF model can approximate any given matrix arbitrarily well. The simplest way to control
the capacity of a PMF model is by changing the dimensionality of feature vectors. However, when
the dataset is unbalanced, i.e. the number of observations differs significantly among different rows
or columns, this approach fails, since any single number of feature dimensions will be too high for
some feature vectors and too low for others. Regularization parameters such as $\lambda_U$ and $\lambda_V$ defined
above provide a more flexible approach to regularization. Perhaps the simplest way to find suitable
values for these parameters is to consider a set of reasonable parameter values, train a model for each
setting of the parameters in the set, and choose the model that performs best on the validation set.
The main drawback of this approach is that it is computationally expensive, since instead of training
a single model we have to train a multitude of models. We will show that the method proposed by
[6], originally applied to neural networks, can be used to determine suitable values for the regularization parameters of a PMF model automatically without significantly affecting the time needed to
train the model.
3
As shown above, the problem of approximating a matrix in the L2 sense by a product of two low-rank
matrices that are regularized by penalizing their Frobenius norm can be viewed as MAP estimation
in a probabilistic model with spherical Gaussian priors on the rows of the low-rank matrices. The
complexity of the model is controlled by the hyperparameters: the noise variance $\sigma^2$ and the
parameters of the priors ($\sigma_U^2$ and $\sigma_V^2$ above). Introducing priors for the hyperparameters and maximizing the log-posterior of the model over both parameters and hyperparameters as suggested in [6]
allows model complexity to be controlled automatically based on the training data. Using spherical
priors for user and movie feature vectors in this framework leads to the standard form of PMF with
$\lambda_U$ and $\lambda_V$ chosen automatically. This approach to regularization allows us to use methods that
are more sophisticated than the simple penalization of the Frobenius norm of the feature matrices.
For example, we can use priors with diagonal or even full covariance matrices as well as adjustable
means for the feature vectors. Mixture of Gaussians priors can also be handled quite easily.
In summary, we find a point estimate of parameters and hyperparameters by maximizing the log-posterior given by

$$ \ln p(U, V, \sigma^2, \Theta_U, \Theta_V \mid R) = \ln p(R \mid U, V, \sigma^2) + \ln p(U \mid \Theta_U) + \ln p(V \mid \Theta_V) + \ln p(\Theta_U) + \ln p(\Theta_V) + C, \qquad (6) $$

where $\Theta_U$ and $\Theta_V$ are the hyperparameters for the priors over user and movie feature vectors respectively and $C$ is a constant that does not depend on the parameters or hyperparameters.
When the prior is Gaussian, the optimal hyperparameters can be found in closed form if the movie
and user feature vectors are kept fixed. Thus to simplify learning we alternate between optimizing
the hyperparameters and updating the feature vectors using steepest ascent with the values of hyperparameters fixed. When the prior is a mixture of Gaussians, the hyperparameters can be updated
by performing a single step of EM. In all of our experiments we used improper priors for the hyperparameters, but it is easy to extend the closed form updates to handle conjugate priors for the
hyperparameters.
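Under a zero-mean spherical Gaussian prior with an improper (flat) hyperprior, the closed-form update is just the maximum-likelihood variance of the current feature vectors. A sketch of the alternating scheme, with our own naming, might look as follows:

```python
import numpy as np

def update_spherical_prior_variance(U):
    """Closed-form ML update for sigma_U^2 given fixed features U (D x N)."""
    D, N = U.shape
    return np.sum(U * U) / (N * D)

# Alternation sketch: every few feature updates, refresh the hyperparameters.
# sigma2_U = update_spherical_prior_variance(U)
# sigma2_V = update_spherical_prior_variance(V)
# lam_u, lam_v = sigma2 / sigma2_U, sigma2 / sigma2_V  # plug back into Eq. 4
```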
4 Constrained PMF
Once a PMF model has been fitted, users with very few ratings will have feature vectors that are close
to the prior mean, or the average user, so the predicted ratings for those users will be close to the
movie average ratings. In this section we introduce an additional way of constraining user-specific
feature vectors that has a strong effect on infrequent users.
Let $W \in \mathbb{R}^{D \times M}$ be a latent similarity constraint matrix. We define the feature vector for user $i$ as:

$$ U_i = Y_i + \frac{\sum_{k=1}^{M} I_{ik} W_k}{\sum_{k=1}^{M} I_{ik}}, \qquad (7) $$
where $I$ is the observed indicator matrix, with $I_{ij}$ taking on value 1 if user $i$ rated movie $j$ and 0
otherwise.² Intuitively, the $i$-th column of the $W$ matrix captures the effect that a user having rated a
particular movie has on the prior mean of the user's feature vector. As a result, users that have seen
the same (or similar) movies will have similar prior distributions for their feature vectors. Note that
Yi can be seen as the offset added to the mean of the prior distribution to get the feature vector Ui
for the user i. In the unconstrained PMF model Ui and Yi are equal because the prior mean is fixed
at zero (see fig. 1). We now define the conditional distribution over the observed ratings as

$$ p(R \mid Y, V, W, \sigma^2) = \prod_{i=1}^{N} \prod_{j=1}^{M} \Big[ \mathcal{N}\Big( R_{ij} \,\Big|\, g\Big( \Big[ Y_i + \tfrac{\sum_{k=1}^{M} I_{ik} W_k}{\sum_{k=1}^{M} I_{ik}} \Big]^\top V_j \Big), \sigma^2 \Big) \Big]^{I_{ij}}. \qquad (8) $$
We regularize the latent similarity constraint matrix $W$ by placing a zero-mean spherical Gaussian
prior on it:

$$ p(W \mid \sigma_W^2) = \prod_{k=1}^{M} \mathcal{N}(W_k \mid 0, \sigma_W^2 I). \qquad (9) $$

² If no rating information is available about some user $i$, i.e. all entries of the $I_i$ vector are zero, the value of the ratio in Eq. 7 is set to zero.
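A minimal sketch of computing the constrained user features of Eq. 7, vectorized over all users (the names are ours):

```python
import numpy as np

def constrained_user_features(Y, W, I):
    """Eq. 7: U_i = Y_i + (sum_k I_ik W_k) / (sum_k I_ik).

    Y : (D, N) user offsets; W : (D, M) similarity constraints;
    I : (N, M) 0/1 indicator matrix of rated movies.
    """
    counts = I.sum(axis=1)          # number of movies each user rated
    safe = np.maximum(counts, 1)    # avoid division by zero
    offsets = (W @ I.T) / safe      # (D, N): average of W_k over rated movies
    offsets[:, counts == 0] = 0.0   # users with no ratings: ratio set to zero
    return Y + offsets
```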
[Figure 2 appears here: two RMSE-vs-epochs panels on the full Netflix validation data. Left (10D models): curves for SVD, PMF1, PMF2, PMFA1, and the Netflix baseline score. Right (30D models): curves for SVD, PMF, constrained PMF, and the Netflix baseline score.]
Figure 2: Left panel: Performance of SVD, PMF and PMF with adaptive priors, using 10D feature vectors, on
the full Netflix validation data. Right panel: Performance of SVD, Probabilistic Matrix Factorization (PMF)
and constrained PMF, using 30D feature vectors, on the validation data. The y-axis displays RMSE (root mean
squared error), and the x-axis shows the number of epochs, or passes, through the entire training dataset.
As with the PMF model, maximizing the log-posterior is equivalent to minimizing the sum-of-squared-errors function with quadratic regularization terms:

$$ E = \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{M} I_{ij} \Big( R_{ij} - g\Big( \Big[ Y_i + \tfrac{\sum_{k=1}^{M} I_{ik} W_k}{\sum_{k=1}^{M} I_{ik}} \Big]^\top V_j \Big) \Big)^2 + \frac{\lambda_Y}{2} \sum_{i=1}^{N} \|Y_i\|_{Fro}^2 + \frac{\lambda_V}{2} \sum_{j=1}^{M} \|V_j\|_{Fro}^2 + \frac{\lambda_W}{2} \sum_{k=1}^{M} \|W_k\|_{Fro}^2, \qquad (10) $$

with $\lambda_Y = \sigma^2/\sigma_Y^2$, $\lambda_V = \sigma^2/\sigma_V^2$, and $\lambda_W = \sigma^2/\sigma_W^2$. We can then perform gradient descent in $Y$,
V , and W to minimize the objective function given by Eq. 10. The training time for the constrained
PMF model scales linearly with the number of observations, which allows for a fast and simple
implementation. As we show in our experimental results section, this model performs considerably
better than a simple unconstrained PMF model, especially on infrequent users.
5 Experimental Results
5.1 Description of the Netflix Data
According to Netflix, the data were collected between October 1998 and December 2005 and represent the distribution of all ratings Netflix obtained during this period. The training dataset consists
of 100,480,507 ratings from 480,189 randomly-chosen, anonymous users on 17,770 movie titles.
As part of the training data, Netflix also provides validation data, containing 1,408,395 ratings. In
addition to the training and validation data, Netflix also provides a test set containing 2,817,131
user/movie pairs with the ratings withheld. The pairs were selected from the most recent ratings for
a subset of the users in the training dataset. To reduce the unintentional overfitting to the test set that
plagues many empirical comparisons in the machine learning literature, performance is assessed by
submitting predicted ratings to Netflix who then post the root mean squared error (RMSE) on an
unknown half of the test set. As a baseline, Netflix provided the test score of its own system trained
on the same data, which is 0.9514.
To provide additional insight into the performance of different algorithms we created a smaller and
much more difficult dataset from the Netflix data by randomly selecting 50,000 users and 1850
movies. The toy dataset contains 1,082,982 training and 2,462 validation user/movie pairs. Over
50% of the users in the training dataset have fewer than 10 ratings.
5.2 Details of Training
To speed up training, instead of performing batch learning, we subdivided the Netflix data into
mini-batches of size 100,000 (user/movie/rating triples), and updated the feature vectors after each
mini-batch. After trying various values for the learning rate and momentum and experimenting with
various values of D, we chose to use a learning rate of 0.005, and a momentum of 0.9, as this setting
of parameters worked well for all values of D we have tried.
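The training loop described here amounts to momentum SGD over mini-batches of observed triples. A compact sketch under our own naming conventions (the inner loop would be vectorized in practice, and the logistic mapping of Eq. 5 is omitted for brevity):

```python
import numpy as np

def minibatch_momentum_updates(triples, U, V, lr=0.005, mom=0.9,
                               lam=0.002, batch=100_000):
    """Momentum SGD over (user, movie, rating) triples for the PMF objective."""
    dU, dV = np.zeros_like(U), np.zeros_like(V)
    for start in range(0, len(triples), batch):
        gU, gV = np.zeros_like(U), np.zeros_like(V)
        for i, j, r in triples[start:start + batch]:
            err = r - U[:, i] @ V[:, j]
            gU[:, i] += -err * V[:, j] + lam * U[:, i]
            gV[:, j] += -err * U[:, i] + lam * V[:, j]
        dU = mom * dU - lr * gU   # momentum accumulation
        dV = mom * dV - lr * gV
        U += dU
        V += dV
    return U, V
```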
5.3 Results for PMF with Adaptive Priors
To evaluate the performance of PMF models with adaptive priors we used models with 10D features.
This dimensionality was chosen in order to demonstrate that even when the dimensionality of features is relatively low, SVD-like models can still overfit and that there are some performance gains
to be had by regularizing such models automatically. We compared an SVD model, two fixed-prior
PMF models, and two PMF models with adaptive priors. The SVD model was trained to minimize
the sum-squared distance only to the observed entries of the target matrix. The feature vectors of
the SVD model were not regularized in any way. The two fixed-prior PMF models differed in their
regularization parameters: one (PMF1) had $\lambda_U = 0.01$ and $\lambda_V = 0.001$, while the other (PMF2)
had $\lambda_U = 0.001$ and $\lambda_V = 0.0001$. The first PMF model with adaptive priors (PMFA1) had Gaussian priors with spherical covariance matrices on user and movie feature vectors, while the second
model (PMFA2) had diagonal covariance matrices. In both cases, the adaptive priors had adjustable
means. Prior parameters and noise covariances were updated after every 10 and 100 feature matrix
updates respectively. The models were compared based on the RMSE on the validation set.
The results of the comparison are shown on Figure 2 (left panel). Note that the curve for the PMF
model with spherical covariances is not shown since it is virtually identical to the curve for the model
with diagonal covariances. Comparing models based on the lowest RMSE achieved over the time of
training, we see that the SVD model does almost as well as the moderately regularized PMF model
(PMF2) (0.9258 vs. 0.9253) before overfitting badly towards the end of training. While PMF1
does not overfit, it clearly underfits since it reaches the RMSE of only 0.9430. The models with
adaptive priors clearly outperform the competing models, achieving the RMSE of 0.9197 (diagonal
covariances) and 0.9204 (spherical covariances). These results suggest that automatic regularization
through adaptive priors works well in practice. Moreover, our preliminary results for models with
higher-dimensional feature vectors suggest that the gap in performance due to the use of adaptive
priors is likely to grow as the dimensionality of feature vectors increases. While the use of diagonal
covariance matrices did not lead to a significant improvement over the spherical covariance matrices,
diagonal covariances might be well-suited for automatically regularizing the greedy version of the
PMF training algorithm, where feature vectors are learned one dimension at a time.
5.4 Results for Constrained PMF
For experiments involving constrained PMF models, we used 30D features (D = 30), since this
choice resulted in the best model performance on the validation set. Values of D in the range of
[20, 60] produce similar results. Performance results of SVD, PMF, and constrained PMF on the
toy dataset are shown on Figure 3. The feature vectors were initialized to the same values in all
three models. For both PMF and constrained PMF models the regularization parameters were set to
$\lambda_U = \lambda_Y = \lambda_V = \lambda_W = 0.002$. It is clear that the simple SVD model overfits heavily. The constrained PMF model performs much better and converges considerably faster than the unconstrained
PMF model. Figure 3 (right panel) shows the effect of constraining user-specific features on the
predictions for infrequent users. Performance of the PMF model for a group of users that have fewer
than 5 ratings in the training dataset is virtually identical to that of the movie average algorithm that
always predicts the average rating of each movie. The constrained PMF model, however, performs
considerably better on users with few ratings. As the number of ratings increases, both PMF and
constrained PMF exhibit similar performance.
One other interesting aspect of the constrained PMF model is that even if we know only what movies
the user has rated, but do not know the values of the ratings, the model can make better predictions
than the movie average model. For the toy dataset, we randomly sampled an additional 50,000 users,
and for each of the users compiled a list of movies the user has rated and then discarded the actual
ratings. The constrained PMF model achieved a RMSE of 1.0510 on the validation set compared
to a RMSE of 1.0726 for the simple movie average model. This experiment strongly suggests that
knowing only which movies a user rated, but not the actual ratings, can still help us to model that
user?s preferences better.
[Figure 3 appears here: two panels on the toy dataset. Left: RMSE vs. training epochs for SVD, PMF, and constrained PMF. Right: RMSE for the movie average algorithm, PMF, and constrained PMF, with users grouped by number of observed ratings (1-5, 6-10, 11-20, 21-40, 41-80, 81-160, >161).]
Figure 3: Left panel: Performance of SVD, Probabilistic Matrix Factorization (PMF) and constrained PMF on
the validation data. The y-axis displays RMSE (root mean squared error), and the x-axis shows the number of
epochs, or passes, through the entire training dataset. Right panel: Performance of constrained PMF, PMF, and
the movie average algorithm that always predicts the average rating of each movie. The users were grouped by
the number of observed ratings in the training data.
[Figure 4 appears here: three panels. Left: RMSE on the full Netflix validation data for the movie average algorithm, PMF, and constrained PMF, with users grouped by number of observed ratings (1-5 through >641). Middle: distribution of users (%) in the training dataset across the same groups. Right: RMSE vs. training epochs for constrained PMF with and without the additional rated/unrated information from the test set.]
Figure 4: Left panel: Performance of constrained PMF, PMF, and the movie average algorithm that always
predicts the average rating of each movie. The users were grouped by the number of observed rating in the training data, with the x-axis showing those groups, and the y-axis displaying RMSE on the full Netflix validation
data for each such group. Middle panel: Distribution of users in the training dataset. Right panel: Performance
of constrained PMF and constrained PMF that makes use of an additional rated/unrated information obtained
from the test dataset.
Performance results on the full Netflix dataset are similar to the results on the toy dataset. For both
the PMF and constrained PMF models the regularization parameters were set to $\lambda_U = \lambda_Y = \lambda_V = \lambda_W = 0.001$. Figure 2 (right panel) shows that constrained PMF significantly outperforms the
unconstrained PMF model, achieving a RMSE of 0.9016. A simple SVD achieves a RMSE of about
0.9280 and after about 10 epochs begins to overfit. Figure 4 (left panel) shows that the constrained
PMF model is able to generalize considerably better for users with very few ratings. Note that over
10% of users in the training dataset have fewer than 20 ratings. As the number of ratings increases,
the effect from the offset in Eq. 7 diminishes, and both PMF and constrained PMF achieve similar
performance.
There is a more subtle source of information in the Netflix dataset. Netflix tells us in advance which
user/movie pairs occur in the test set, so we have an additional category: movies that were viewed
but for which the rating is unknown. This is a valuable source of information about users who occur
several times in the test set, especially if they have only a small number of ratings in the training set.
The constrained PMF model can easily take this information into account. Figure 4 (right panel)
shows that this additional source of information further improves model performance.
When we linearly combine the predictions of PMF, PMF with a learnable prior, and constrained
PMF, we achieve an error rate of 0.8970 on the test set. When the predictions of multiple PMF
models are linearly combined with the predictions of multiple RBM models, recently introduced
by [8], we achieve an error rate of 0.8861, which is nearly 7% better than the score of Netflix's own
system.
6 Summary and Discussion
In this paper we presented Probabilistic Matrix Factorization (PMF) and its two derivatives: PMF
with a learnable prior and constrained PMF. We also demonstrated that these models can be efficiently trained and successfully applied to a large dataset containing over 100 million movie ratings.
Efficiency in training PMF models comes from finding only point estimates of model parameters
and hyperparameters, instead of inferring the full posterior distribution over them. If we were to
take a fully Bayesian approach, we would put hyperpriors over the hyperparameters and resort to
MCMC methods [5] to perform inference. While this approach is computationally more expensive,
preliminary results strongly suggest that a fully Bayesian treatment of the presented PMF models
would lead to a significant increase in predictive accuracy.
Acknowledgments
We thank Vinod Nair and Geoffrey Hinton for many helpful discussions. This research was supported by NSERC.
References
[1] Delbert Dueck and Brendan Frey. Probabilistic sparse matrix factorization. Technical Report PSI TR 2004-023, Dept. of Computer Science, University of Toronto, 2004.
[2] Thomas Hofmann. Probabilistic latent semantic analysis. In Proceedings of the 15th Conference on Uncertainty in AI, pages 289-296, San Francisco, California, 1999. Morgan Kaufmann.
[3] Benjamin Marlin. Modeling user rating profiles for collaborative filtering. In Sebastian Thrun, Lawrence K. Saul, and Bernhard Schölkopf, editors, NIPS. MIT Press, 2003.
[4] Benjamin Marlin and Richard S. Zemel. The multiple multiplicative factor model for collaborative filtering. In Machine Learning, Proceedings of the Twenty-first International Conference (ICML 2004), Banff, Alberta, Canada, July 4-8, 2004. ACM, 2004.
[5] Radford M. Neal. Probabilistic inference using Markov chain Monte Carlo methods. Technical Report CRG-TR-93-1, Department of Computer Science, University of Toronto, September 1993.
[6] S. J. Nowlan and G. E. Hinton. Simplifying neural networks by soft weight-sharing. Neural Computation, 4:473-493, 1992.
[7] Jason D. M. Rennie and Nathan Srebro. Fast maximum margin matrix factorization for collaborative prediction. In Luc De Raedt and Stefan Wrobel, editors, Machine Learning, Proceedings of the Twenty-Second International Conference (ICML 2005), Bonn, Germany, August 7-11, 2005, pages 713-719. ACM, 2005.
[8] Ruslan Salakhutdinov, Andriy Mnih, and Geoffrey Hinton. Restricted Boltzmann machines for collaborative filtering. In Machine Learning, Proceedings of the Twenty-fourth International Conference (ICML 2007). ACM, 2007.
[9] Nathan Srebro and Tommi Jaakkola. Weighted low-rank approximations. In Tom Fawcett and Nina Mishra, editors, Machine Learning, Proceedings of the Twentieth International Conference (ICML 2003), August 21-24, 2003, Washington, DC, USA, pages 720-727. AAAI Press, 2003.
[10] Nathan Srebro, Jason D. M. Rennie, and Tommi Jaakkola. Maximum-margin matrix factorization. In Advances in Neural Information Processing Systems, 2004.
[11] Michael E. Tipping and Christopher M. Bishop. Probabilistic principal component analysis. Technical Report NCRG/97/010, Neural Computing Research Group, Aston University, September 1997.
[12] Max Welling, Michal Rosen-Zvi, and Geoffrey Hinton. Exponential family harmoniums with an application to information retrieval. In NIPS 17, pages 1481-1488, Cambridge, MA, 2005. MIT Press.
2,435 | 3,209 |

On Higher-Order Perceptron Algorithms*
Cristian Brotto
DICOM, Università dell'Insubria
[email protected]
Claudio Gentile
DICOM, Università dell'Insubria
[email protected]
Fabio Vitale
DICOM, Università dell'Insubria
[email protected]
Abstract
A new algorithm for on-line learning linear-threshold functions is proposed which
efficiently combines second-order statistics about the data with the ?logarithmic
behavior? of multiplicative/dual-norm algorithms. An initial theoretical analysis is
provided suggesting that our algorithm might be viewed as a standard Perceptron
algorithm operating on a transformed sequence of examples with improved margin properties. We also report on experiments carried out on datasets from diverse
domains, with the goal of comparing to known Perceptron algorithms (first-order,
second-order, additive, multiplicative). Our learning procedure seems to generalize quite well, and converges faster than the corresponding multiplicative baseline
algorithms.
1 Introduction and preliminaries
The problem of on-line learning linear-threshold functions from labeled data is one which has
spurred a substantial amount of research in Machine Learning. The relevance of this task from
both the theoretical and the practical point of view is widely recognized: On the one hand, linear
functions combine flexibility with analytical and computational tractability; on the other hand, online algorithms provide efficient methods for processing massive amounts of data. Moreover, the
widespread use of kernel methods in Machine Learning (e.g., [24]) have greatly improved the scope
of this learning technology, thereby increasing even further the general attention towards the specific
task of incremental learning (generalized) linear functions. Many models/algorithms have been
proposed in the literature (stochastic, adversarial, noisy, etc.) : Any list of references would not do
justice to the existing work on this subject. In this paper, we are interested in the problem of online learning linear-threshold functions from adversarially generated examples. We introduce a new
family of algorithms, collectively called the Higher-order Perceptron algorithm (where "higher"
means here "higher than one", i.e., "higher than first-order" descent algorithms such as gradient descent or standard Perceptron-like algorithms). Contrary to other higher-order algorithms, such
as the ridge regression-like algorithms considered in, e.g., [4, 7], Higher-order Perceptron has the
ability to put together in a principled and flexible manner second-order statistics about the data with
the "logarithmic behavior" of multiplicative/dual-norm algorithms (e.g., [18, 19, 6, 13, 15, 20]). Our
algorithm exploits a simplified form of the inverse data matrix, lending itself to be easily combined
with the dual norms machinery introduced by [13] (see also [12, 23]). As we will see, this has also
computational advantages, allowing us to formulate an efficient (subquadratic) implementation.
Our contribution is twofold. First, we provide an initial theoretical analysis suggesting that our
algorithm might be seen as a standard Perceptron algorithm [21] operating on a transformed sequence of examples with improved margin properties. The same analysis also suggests a simple
(but principled) way of switching on the fly between higher-order and first-order updates. This is
* The authors gratefully acknowledge partial support by the PASCAL Network of Excellence under EC grant n. 506778. This publication only reflects the authors' views.
especially convenient when we deal with kernel functions, a major concern being the sparsity of the
computed solution. The second contribution of this paper is an experimental investigation of our
algorithm on artificial and real-world datasets from various domains: We compared Higher-order
Perceptron to baseline Perceptron algorithms, like the Second-order Perceptron algorithm defined in
[7] and the standard (p-norm) Perceptron algorithm, as in [13, 12]. We found in our experiments that
Higher-order Perceptron generalizes quite well. Among our experimental findings are the following: 1) Higher-order Perceptron is always outperforming the corresponding multiplicative (p-norm)
baseline (thus the stored data matrix is always beneficial in terms of convergence speed); 2) When
dealing with Euclidean norms (p = 2), the comparison to Second-order Perceptron is less clear and
depends on the specific task at hand.
Learning protocol and notation. Our algorithm works in the well-known mistake bound model
of on-line learning, as introduced in [18, 2], and further investigated by many authors (e.g., [19, 6,
13, 15, 7, 20, 23] and references therein). Prediction proceeds in a sequence of trials. In each trial
$t = 1, 2, \ldots$ the prediction algorithm is given an instance vector in $\mathbb{R}^n$ (for simplicity, all vectors are
normalized, i.e., $\|x_t\| = 1$, where $\|\cdot\|$ is the Euclidean norm unless otherwise specified), and then
guesses the binary label $y_t \in \{-1, 1\}$ associated with $x_t$. We denote the algorithm's prediction by
$\hat{y}_t \in \{-1, 1\}$. Then the true label $y_t$ is disclosed. In the case when $\hat{y}_t \neq y_t$ we say that the algorithm
has made a prediction mistake. We call an example a pair $(x_t, y_t)$, and a sequence of examples $S$
any sequence $S = (x_1, y_1), (x_2, y_2), \ldots, (x_T, y_T)$. In this paper, we are competing against the
class of linear-threshold predictors, parametrized by normal vectors $u \in \{v \in \mathbb{R}^n : \|v\| = 1\}$. In
this case, a common way of measuring the (relative) prediction performance of an algorithm A is
to compare the total number of mistakes of A on $S$ to some measure of the linear separability of $S$.
One such measure (e.g., [24]) is the cumulative hinge loss (or soft margin) $D_\gamma(u; S)$ of $S$ w.r.t. a
linear classifier $u$ at a given margin value $\gamma > 0$: $D_\gamma(u; S) = \sum_{t=1}^{T} \max\{0, \gamma - y_t u^\top x_t\}$ (observe
that $D_\gamma(u; S)$ vanishes if and only if $u$ separates $S$ with margin at least $\gamma$).
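As a quick illustration (our own code, not part of the original), the cumulative hinge loss of a fixed linear classifier over a sequence can be computed as:

```python
import numpy as np

def cumulative_hinge_loss(u, X, y, gamma):
    """D_gamma(u; S) = sum_t max(0, gamma - y_t * u.x_t) over the sequence S."""
    margins = y * (X @ u)   # X: (T, n) rows x_t; y: (T,) +/-1 labels
    return np.sum(np.maximum(0.0, gamma - margins))
```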
A mistake-driven algorithm A is one which updates its internal state only upon mistakes. One
can therefore associate with the run of A on $S$ a subsequence $\mathcal{M} = \mathcal{M}(S, A) \subseteq \{1, \ldots, T\}$ of
mistaken trials. Now, the standard analysis of these algorithms allows us to restrict the behavior
of the comparison class to mistaken trials only and, as a consequence, to refine $D_\gamma(u; S)$ so as to
include only trials in $\mathcal{M}$: $D_\gamma(u; S) = \sum_{t \in \mathcal{M}} \max\{0, \gamma - y_t u^\top x_t\}$. This gives bounds on A's
performance relative to the best $u$ over a sequence of examples produced (or, actually, selected)
by A during its on-line functioning. Our analysis in Section 3 goes one step further: the number
of mistakes of A on $S$ is contrasted to the cumulative hinge loss of the best $u$ on a transformed
sequence $\tilde{S} = ((\tilde{x}_{i_1}, y_{i_1}), (\tilde{x}_{i_2}, y_{i_2}), \ldots, (\tilde{x}_{i_m}, y_{i_m}))$, where each instance $x_{i_k}$ gets transformed
into $\tilde{x}_{i_k}$ through a mapping depending only on the past behavior of the algorithm (i.e., only on
examples up to trial $t = i_{k-1}$). As we will see in Section 3, this new sequence $\tilde{S}$ tends to be "more
separable" than the original sequence, in the sense that if $S$ is linearly separable with some margin,
then the transformed sequence $\tilde{S}$ is likely to be separable with a larger margin.
2 The Higher-order Perceptron algorithm
The algorithm (described in Figure 1) takes as input a sequence of nonnegative parameters $\alpha_1, \alpha_2, \ldots$,
and maintains a product matrix $B_k$ (initialized to the identity matrix $I$) and a sum vector $v_k$ (initialized to $0$). Both of them are indexed by $k$, a counter storing the current number of mistakes
(plus one). Upon receiving the $t$-th normalized instance vector $x_t \in \mathbb{R}^n$, the algorithm computes
its binary prediction value $\hat{y}_t$ as the sign of the inner product between vector $B_{k-1} v_{k-1}$ and vector
$B_{k-1} x_t$. If $\hat{y}_t \neq y_t$ then matrix $B_{k-1}$ is updated multiplicatively as $B_k = B_{k-1}(I - \alpha_k x_t x_t^\top)$
while vector $v_{k-1}$ is updated additively through the standard Perceptron rule $v_k = v_{k-1} + y_t x_t$.
The new matrix $B_k$ and the new vector $v_k$ will be used in the next trial. If $\hat{y}_t = y_t$ no update is
performed (hence the algorithm is mistake driven). Observe that $\alpha_k = 0$ for any $k$ makes this algorithm degenerate into the standard Perceptron algorithm [21]. Moreover, one can easily see that, in
order to let this algorithm exploit the information collected in the matrix $B$ (and let the algorithm's
behavior be substantially different from Perceptron's) we need to ensure $\sum_{k=1}^{\infty} \alpha_k = \infty$. In the
sequel, our standard choice will be $\alpha_k = c/k$, with $c \in (0, 1)$. See Sections 3 and 4.
Implementing Higher-Order Perceptron can be done in many ways. Below, we quickly describe
three of them, each one having its own merits.
Parameters: $\alpha_1, \alpha_2, \ldots \in [0, 1)$.
Initialization: $B_0 = I$; $v_0 = 0$; $k = 1$.
Repeat for $t = 1, 2, \ldots, T$:
1. Get instance $x_t \in \mathbb{R}^n$, $\|x_t\| = 1$;
2. Predict $\hat{y}_t = \mathrm{SGN}(w_{k-1}^\top x_t) \in \{-1, +1\}$, where $w_{k-1} = B_{k-1}^\top B_{k-1} v_{k-1}$;
3. Get label $y_t \in \{-1, +1\}$;
4. If $\hat{y}_t \neq y_t$ then: $v_k = v_{k-1} + y_t x_t$, $B_k = B_{k-1}(I - \alpha_k x_t x_t^\top)$, $k \leftarrow k + 1$.
Figure 1: The Higher-order Perceptron algorithm (for p = 2).
1) Primal version. We store and update an $n \times n$ matrix $A_k = B_k^\top B_k$ and an $n$-dimensional column
vector $v_k$. Matrix $A_k$ is updated as $A_k = A_{k-1} - \alpha A_{k-1} x x^\top - \alpha x x^\top A_{k-1} + \alpha^2 (x^\top A_{k-1} x)\, x x^\top$,
taking $O(n^2)$ operations, while $v_k$ is updated as in Figure 1. Computing the algorithm's margin
$v^\top A x$ can then be carried out in time quadratic in the dimension $n$ of the input space.
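A direct NumPy transcription of Figure 1 might look as follows (a sketch under our own naming; for simplicity it maintains B explicitly, at O(n^2) cost per mistake):

```python
import numpy as np

def higher_order_perceptron(X, y, c=0.4):
    """Higher-order Perceptron (p = 2) of Figure 1, with alpha_k = c / k.

    X : (T, n) array of unit-norm instance vectors; y : (T,) array of +/-1 labels.
    """
    T, n = X.shape
    B = np.eye(n)
    v = np.zeros(n)
    k, mistakes = 1, 0
    for t in range(T):
        x = X[t]
        margin = (B @ v) @ (B @ x)        # w_{k-1}^T x_t with w = B^T B v
        y_hat = 1.0 if margin > 0 else -1.0
        if y_hat != y[t]:
            mistakes += 1
            alpha = c / k                  # the paper's standard schedule
            v = v + y[t] * x
            B = B @ (np.eye(n) - alpha * np.outer(x, x))
            k += 1
    return v, B, mistakes
```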
2) Dual version. This implementation allows us to use kernel functions (e.g., [24]). Let us
denote by $X_k$ the $n \times k$ matrix whose columns are the $n$-dimensional instance vectors $x_1, \ldots, x_k$
where a mistake occurred so far, and by $y_k$ the $k$-dimensional column vector of the corresponding
labels. We store and update the $k \times k$ matrix $D_k = [d_{i,j}^{(k)}]_{i,j=1}^{k}$, the $k \times k$ diagonal matrix $H_k = \mathrm{DIAG}\{h_k\}$, with $h_k = (h_1^{(k)}, \ldots, h_k^{(k)})^\top = X_k^\top X_k\, y_k$, and the $k$-dimensional column vector $g_k = y_k + D_k H_k 1_k$, where $1_k$ is a vector of $k$ ones. If we interpret the primal matrix $A_k$ above as $A_k = I + \sum_{i,j=1}^{k} d_{i,j}^{(k)} x_i x_j^\top$, it is not hard to show that the margin value $w_{k-1}^\top x$ is equal to $g_{k-1}^\top X_{k-1}^\top x$,
and can be computed through $O(k)$ extra inner products. Now, on the $k$-th mistake, vector $g$ can
be updated with $O(k^2)$ extra inner products by updating $D$ and $H$ in the following way. We let
$D_0$ and $H_0$ be empty matrices.¹ Then, given $D_{k-1}$ and $H_{k-1} = \mathrm{DIAG}\{h_{k-1}\}$, we have

$$ D_k = \begin{pmatrix} D_{k-1} & -\alpha_k b_k \\ -\alpha_k b_k^\top & d_{k,k}^{(k)} \end{pmatrix}, \qquad b_k = D_{k-1} X_{k-1}^\top x_k, \qquad d_{k,k}^{(k)} = \alpha_k^2\, x_k^\top X_{k-1} b_k - 2\alpha_k + \alpha_k^2. $$

On the other hand, $H_k = \mathrm{DIAG}\{h_k\}$, with $h_k = \big(h_{k-1} + y_k X_{k-1}^\top x_k,\; h_k^{(k)}\big)$ and $h_k^{(k)} = y_{k-1}^\top X_{k-1}^\top x_k + y_k$.
Observe that on trials when $\alpha_k = 0$ matrix $D_{k-1}$ is padded with a zero row and a zero column.
This amounts to saying that matrix $A_k = I + \sum_{i,j=1}^{k} d_{i,j}^{(k)} x_i x_j^\top$ is not updated, i.e., $A_k = A_{k-1}$. A
closer look at the above update mechanism allows us to conclude that the overall number of extra inner products needed to compute $g_k$ is actually quadratic only in the number of past mistaken trials having
$\alpha_k > 0$. This turns out to be especially important when using a sparse version of our algorithm
which, on a mistaken trial, decides whether to update both $B$ and $v$ or just $v$ (see Section 4).
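To make the bookkeeping concrete, here is a rough NumPy sketch of the dual-side margin and update (our own code; `K_past_x` stands for the vector of inner products $X_{k-1}^\top x_k$, which is where a kernel function would enter):

```python
import numpy as np

def dual_margin(g, K_past_x):
    """Margin g_{k-1}^T X_{k-1}^T x, using O(k) inner products."""
    return g @ K_past_x

def dual_update(D, h, y_past, K_past_x, y_new, alpha):
    """Grow D and h on the k-th mistake (the new instance has unit norm).

    D : (k-1, k-1); h : (k-1,) vector X^T X y; y_past : (k-1,) labels;
    K_past_x : (k-1,) inner products of past mistaken instances with x_k.
    """
    b = D @ K_past_x
    d_kk = alpha**2 * (K_past_x @ b) - 2 * alpha + alpha**2
    D_new = np.block([[D, -alpha * b[:, None]],
                      [-alpha * b[None, :], np.array([[d_kk]])]])
    h_new = np.concatenate([h + y_new * K_past_x,
                            [y_past @ K_past_x + y_new]])
    y_all = np.concatenate([y_past, [y_new]])
    g_new = y_all + D_new @ h_new   # g_k = y_k + D_k H_k 1_k = y_k + D_k h_k
    return D_new, h_new, g_new
```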
3) Implicit primal version and the dual norms algorithm. This is based on the simple observation
that for any vector $z$ we can compute $B_k z$ by unwrapping $B_k$ as in $B_k z = B_{k-1}(I - \alpha x x^\top) z = B_{k-1} z'$, where vector $z' = z - \alpha x (x^\top z)$ can be calculated in time $O(n)$. Thus computing
the margin $v^\top B_{k-1}^\top B_{k-1} x$ actually takes $O(nk)$. Maintaining this implicit representation for the
product matrix $B$ can be convenient when an efficient dual version is likely to be unavailable,
as is the case for the multiplicative (or, more generally, dual norms) extension of our algorithm.
We recall that a multiplicative algorithm is useful when learning sparse target hyperplanes (e.g.,
[18, 15, 3, 12, 11, 20]). We obtain a dual norms algorithm by introducing a norm parameter $p \geq 2$,
and the associated gradient mapping² $g : \theta \in \mathbb{R}^n \to \nabla \|\theta\|_p^2 / 2 \in \mathbb{R}^n$. Then, in Figure 1, we
normalize instance vectors $x_t$ w.r.t. the $p$-norm, we define $w_{k-1} = B_{k-1}^\top g(B_{k-1} v_{k-1})$, and generalize the matrix update as $B_k = B_{k-1}(I - \alpha_k x_t\, g(x_t)^\top)$. As we will see, the resulting algorithm
combines the multiplicative behavior of the $p$-norm algorithms with the "second-order" information
contained in the matrix $B_k$. One can easily see that the above-mentioned argument for computing
the margin $g(B_{k-1} v_{k-1})^\top B_{k-1} x$ in time $O(nk)$ still holds.
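Both ingredients admit short sketches (our code; `mistakes` is a hypothetical list of stored update factors, one per mistaken trial with a nonzero alpha):

```python
import numpy as np

def p_norm_gradient(theta, p):
    """g(theta) = gradient of ||theta||_p^2 / 2; reduces to the identity for p = 2."""
    norm = np.linalg.norm(theta, ord=p)
    if norm == 0.0:
        return np.zeros_like(theta)
    return np.sign(theta) * np.abs(theta) ** (p - 1) / norm ** (p - 2)

def apply_B(z, mistakes):
    """Compute B_k z implicitly; mistakes holds (alpha_i, x_i, g_x_i) factors."""
    # B_k z = (I - a_1 x_1 g_1^T) ... (I - a_k x_k g_k^T) z, applied right to left
    for alpha, x, g_x in reversed(mistakes):
        z = z - alpha * x * (g_x @ z)   # each factor costs O(n)
    return z
```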
¹ Observe that, by construction, $D_k$ is a symmetric matrix.
² This mapping has also been used in [12, 11]. Recall that setting $p = O(\log n)$ yields an algorithm similar to Winnow [18]. Also, notice that $p = 2$ yields $g = \mathrm{identity}$.
3 Analysis
We express the performance of the Higher-order Perceptron algorithm in terms of the hinge-loss
behavior of the best linear classifier over the transformed sequence

$$ \tilde{S} = (B_0 x_{t(1)}, y_{t(1)}),\; (B_1 x_{t(2)}, y_{t(2)}),\; (B_2 x_{t(3)}, y_{t(3)}),\; \ldots, \qquad (1) $$

where $t(k)$ is the trial where the $k$-th mistake occurs, and $B_k$ is the $k$-th matrix produced by the algorithm.
Observe that each feature vector $x_{t(k)}$ gets transformed by a matrix $B_{k-1}$ depending on past examples
only. This is relevant to the argument that $\tilde{S}$ tends to have a larger margin than the original sequence
(see the discussion at the end of this section). This neat "on-line structure" does not seem to be
shared by other competing higher-order algorithms, such as the "ridge regression-like" algorithms
considered, e.g., in [25, 4, 7, 23]. For the sake of simplicity, we state the theorem below only in the
case $p = 2$. A more general statement holds when $p \geq 2$.
Theorem 1 Let the Higher-order Perceptron algorithm in Figure 1 be run on a sequence of examples $S = (x_1, y_1), (x_2, y_2), \ldots, (x_T, y_T)$. Let the sequence of parameters $\alpha_k$ satisfy $0 \leq \alpha_k \leq \frac{1-c}{1 + |v_{k-1}^\top x_t|}$, where $x_t$ is the $k$-th mistaken instance vector, and $c \in (0, 1]$. Then the total number $m$
of mistakes satisfies³

$$ m \;\leq\; \alpha\, \frac{D_\gamma(u; \tilde{S}_c)}{\gamma} + \frac{\alpha^2}{2\gamma^2} + \frac{\alpha}{\gamma} \sqrt{ \alpha\, \frac{D_\gamma(u; \tilde{S}_c)}{\gamma} + \frac{\alpha^2}{4\gamma^2} }, \qquad (2) $$

holding for any $\gamma > 0$ and any unit norm vector $u \in \mathbb{R}^n$, where $\alpha = \alpha(c) = (2 - c)/c$.
Proof. The analysis deliberately mimics the standard Perceptron convergence analysis [21]. We fix
an arbitrary sequence $S = (x_1, y_1), (x_2, y_2), \ldots, (x_T, y_T)$ and let $\mathcal{M} \subseteq \{1, 2, \ldots, T\}$ be the set
of trials where the algorithm in Figure 1 made a mistake. Let $t = t(k)$ be the trial where the $k$-th
mistake occurred. We study the evolution of $\|B_k v_k\|^2$ over mistaken trials. Notice that the matrix
$B_k^\top B_k$ is positive semidefinite for any $k$. We can write

$$ \|B_k v_k\|^2 = \|B_{k-1}(I - \alpha_k x_t x_t^\top)(v_{k-1} + y_t x_t)\|^2 $$

(from the update rules $v_k = v_{k-1} + y_t x_t$ and $B_k = B_{k-1}(I - \alpha_k x_t x_t^\top)$)

$$ = \|B_{k-1} v_{k-1} + y_t (1 - \alpha_k y_t v_{k-1}^\top x_t - \alpha_k)\, B_{k-1} x_t\|^2 \qquad \text{(using } \|x_t\| = 1\text{)} $$
$$ = \|B_{k-1} v_{k-1}\|^2 + 2\, y_t\, r_k\, v_{k-1}^\top B_{k-1}^\top B_{k-1} x_t + r_k^2\, \|B_{k-1} x_t\|^2, $$

where we set for brevity $r_k = 1 - \alpha_k y_t v_{k-1}^\top x_t - \alpha_k$. We proceed by upper and lower bounding the
above chain of equalities. To this end, we need to ensure $r_k \geq 0$. Observe that $y_t v_{k-1}^\top x_t \geq 0$ implies
$r_k \geq 0$ if and only if $\alpha_k \leq 1/(1 + y_t v_{k-1}^\top x_t)$. On the other hand, if $y_t v_{k-1}^\top x_t < 0$ then, in order for
$r_k$ to be nonnegative, it suffices to pick $\alpha_k \leq 1$. In both cases $\alpha_k \leq (1-c)/(1 + |v_{k-1}^\top x_t|)$ implies
$r_k \geq c > 0$, and also $r_k^2 \leq (1 + \alpha_k |v_{k-1}^\top x_t| - \alpha_k)^2 \leq (2-c)^2$. Now, using $y_t v_{k-1}^\top B_{k-1}^\top B_{k-1} x_t \leq 0$
(combined with $r_k \geq 0$), we conclude that $\|B_k v_k\|^2 - \|B_{k-1} v_{k-1}\|^2 \leq (2-c)^2 \|B_{k-1} x_t\|^2 = (2-c)^2\, x_t^\top A_{k-1} x_t$, where we set $A_k = B_k^\top B_k$.⁴ A simple (and crude) upper bound on the last
term follows by observing that $\|x_t\| = 1$ implies $x_t^\top A_{k-1} x_t \leq \|A_{k-1}\|$, the spectral norm (largest
eigenvalue) of $A_{k-1}$. Since a factor matrix of the form $(I - \alpha\, x x^\top)$ with $\alpha \leq 1$ and $\|x\| = 1$ has
spectral norm one, we have $x_t^\top A_{k-1} x_t \leq \|A_{k-1}\| \leq \prod_{i=1}^{k-1} \|I - \alpha_i x_{t(i)} x_{t(i)}^\top\|^2 \leq 1$. Therefore,
summing over $k = 1, \ldots, m = |\mathcal{M}|$ (or, equivalently, over $t \in \mathcal{M}$) and using $v_0 = 0$ yields the
upper bound

$$ \|B_m v_m\|^2 \leq (2-c)^2\, m. \qquad (3) $$

To find a lower bound on the left-hand side of (3), we first pick any unit norm vector $u \in \mathbb{R}^n$, and
apply the standard Cauchy-Schwarz inequality: $\|B_m v_m\| \geq u^\top B_m v_m$. Then, we observe that for
a generic trial $t = t(k)$ the update rule of our algorithm allows us to write

$$ u^\top B_k v_k - u^\top B_{k-1} v_{k-1} = r_k\, y_t\, u^\top B_{k-1} x_t \geq r_k \big( \gamma - \max\{0, \gamma - y_t u^\top B_{k-1} x_t\} \big), $$

where the last inequality follows from $r_k \geq 0$ and holds for any margin value $\gamma > 0$. We sum
the above over $k = 1, \ldots, m$ and exploit $c \leq r_k \leq 2-c$ after rearranging terms. This gets
$\|B_m v_m\| \geq u^\top B_m v_m \geq c\, \gamma\, m - (2-c)\, D_\gamma(u; \tilde{S}_c)$. Combining with (3) and solving for $m$ gives
the claimed bound.

³ The subscript $c$ in $\tilde{S}_c$ emphasizes the dependence of the transformed sequence on the choice of $c$. Note that in the special case $c = 1$ we have $\alpha_k = 0$ for any $k$ and $\alpha = 1$, thereby recovering the standard Perceptron bound for nonseparable sequences (see, e.g., [12]).
⁴ A slightly more refined bound can be derived which depends on the trace of the matrices $I - A_k$. Details will be given in the full version of this paper.
From the above result one can see that our algorithm might be viewed as a standard Perceptron
algorithm operating on the transformed sequence $\tilde{S}_c$ in (1). We now give a qualitative argument,
which is suggestive of the improved margin properties of $\tilde{S}_c$. Assume for simplicity that all examples
$(x_t, y_t)$ in the original sequence are correctly classified by hyperplane $u$ with the same margin
$\gamma = y_t u^\top x_t > 0$, where $t = t(k)$. According to Theorem 1, the parameters $\alpha_1, \alpha_2, \ldots$ should be
small positive numbers. Assume, again for simplicity, that all $\alpha_k$ are set to the same small enough
value $\alpha > 0$. Then, up to first order, matrix $B_k = \prod_{i=1}^{k} (I - \alpha\, x_{t(i)} x_{t(i)}^\top)$ can be approximated as
$B_k \simeq I - \alpha \sum_{i=1}^{k} x_{t(i)} x_{t(i)}^\top$. Then, to the extent that the above approximation holds, we can write:⁵

$$ y_t u^\top B_{k-1} x_t = y_t u^\top \Big( I - \alpha \sum_{i=1}^{k-1} x_{t(i)} x_{t(i)}^\top \Big) x_t = y_t u^\top \Big( I - \alpha \sum_{i=1}^{k-1} y_{t(i)} x_{t(i)}\; y_{t(i)} x_{t(i)}^\top \Big) x_t $$
$$ = y_t u^\top x_t - \alpha\, y_t \sum_{i=1}^{k-1} y_{t(i)} u^\top x_{t(i)}\; y_{t(i)} x_{t(i)}^\top x_t \;=\; \gamma - \alpha\, \gamma\, y_t v_{k-1}^\top x_t. $$

Now, $y_t v_{k-1}^\top x_t$ is the margin of the (first-order) Perceptron vector $v_{k-1}$ over a mistaken trial for
the Higher-order Perceptron vector $w_{k-1}$. Since the two vectors $v_{k-1}$ and $w_{k-1}$ are correlated
(recall that $v_{k-1}^\top w_{k-1} = v_{k-1}^\top B_{k-1}^\top B_{k-1} v_{k-1} = \|B_{k-1} v_{k-1}\|^2 \geq 0$) the mistaken condition
$y_t w_{k-1}^\top x_t \leq 0$ is more likely to imply $y_t v_{k-1}^\top x_t \leq 0$ than the opposite. This tends to yield a
margin larger than the original margin $\gamma$. As we mentioned in Section 2, this is also advantageous
from a computational standpoint, since in those cases the matrix update $B_{k-1} \to B_k$ might be
skipped (this is equivalent to setting $\alpha_k = 0$), and Theorem 1 would still hold.
Though the above might be the starting point of a more thorough theoretical understanding of the
margin properties of our algorithm, in this paper we prefer to stop early and leave any further investigation to collecting experimental evidence.
4 Experiments
We tested the empirical performance of our algorithm by conducting a number of experiments on a
collection of datasets, both artificial and real-world from diverse domains (Optical Character Recognition, text categorization, DNA microarrays). The main goal of these experiments was to compare
Higher-order Perceptron (with both p = 2 and p > 2) to known Perceptron-like algorithms, such
as first-order [21] and second-order Perceptron [7], in terms of training accuracy (i.e., convergence
speed) and test set accuracy. The results are contained in Tables 1, 2, 3, and in Figure 2.
Task 1: DNA microarrays and artificial data. The goal here was to test the convergence properties of our algorithms on sparse target learning tasks. We first tested on a couple of well-known DNA
microarray datasets. For each dataset, we first generated a number of random training/test splits (our
random splits also included random permutations of the training set). The reported results are averaged over these random splits. The two DNA datasets are: i. The ER+/ER− dataset from [14]. Here
the task is to analyze expression profiles of breast cancer and classify breast tumors according to ER
(Estrogen Receptor) status. This dataset (which we call the "Breast" dataset) contains 58 expression
profiles concerning 3389 genes. We randomly split 1000 times into a training set of size 47 and a
test set of size 11. ii. The "Lymphoma" dataset [1]. Here the goal is to separate cancerous and
normal tissues in a large B-Cell lymphoma problem. The dataset contains 96 expression profiles
concerning 4026 genes. We randomly split the dataset into a training set of size 60 and a test set of
size 36. Again, the random split was performed 1000 times. On both datasets, the tested algorithms
have been run by cycling 5 times over the current training set. No kernel functions have been used.
We also artificially generated two (moderately) sparse learning problems with margin $\gamma \geq 0.005$ at
labeling noise levels 0.0 (linearly separable) and 0.1, respectively. The datasets have been
generated at random by first generating two (normalized) target vectors $u \in \{-1, 0, +1\}^{500}$, where
the first 50 components are selected independently at random in $\{-1, +1\}$ and the remaining 450
components are 0. Then we use noise level 0.0 for the first target and 0.1 for the second one and,
corresponding to each of the two settings, we randomly generated 1000 training examples and 1000
test examples. The instance vectors are chosen at random from $[-1, +1]^{500}$ and then normalized. If
$u \cdot x_t \geq \gamma$ then a +1 label is associated with $x_t$. If $u \cdot x_t \leq -\gamma$ then a $-1$ label is associated with
$x_t$. The labels so obtained are flipped with probability equal to the noise level. If $|u \cdot x_t| < \gamma$ then $x_t$ is rejected and
a new vector $x_t$ is drawn. We call the two datasets "Artificial0.0" and "Artificial0.1". We tested
our algorithms by training over an increasing number of epochs and checking the evolution of the
corresponding test set accuracy. Again, no kernel functions have been used.
⁵ Again, a similar argument holds in the more general setting $p \geq 2$. The reader should notice how important the dependence of $B_k$ on the past is to this argument.
Task 2: Text categorization. The text categorization datasets are derived from the first 20,000
newswire stories in the Reuters Corpus Volume 1 (RCV1, [22]). A standard TF-IDF bag-of-words
encoding was used to transform each news story into a normalized vector of real attributes. We
built four binary classification problems by ?binarizing? consecutive news stories against the four
target categories 70, 101, 4, and 59. These are the 2nd, 3rd, 4th, and 5th most frequent⁶ categories,
respectively, within the first 20,000 news stories of RCV1. We call these datasets RCV1x , where
x = 70, 101, 4, 59. Each dataset was split into a training set of size 10,000 and a test set of the same
size. All algorithms have been trained for a single epoch. We initially tried polynomial kernels,
then realized that kernel functions did not significantly alter our conclusions on this task. Thus the
reported results refer to algorithms with no kernel functions.
Task 3: Optical character recognition (OCR). We used two well-known OCR benchmarks: the
USPS dataset and the MNIST dataset [16] and followed standard experimental setups, such as the
one in [9], including the one-versus-rest scheme for reducing a multiclass problem to a set of binary
tasks. We used for each algorithm the standard Gaussian and polynomial kernels, with parameters
chosen via 5-fold cross validation on the training set across standard ranges. Again, all algorithms
have been trained for a single epoch over the training set. The results in Table 3 only refer to the
best parameter settings for each kernel.
Algorithms. We implemented the standard Perceptron algorithm (with and without kernels), the
Second-order Perceptron algorithm, as described in [7] (with and without kernels), and our Higher-order Perceptron algorithm. The implementation of the latter algorithm (for both p = 2 and p > 2)
was "implicit primal" when tested on the sparse learning tasks, and in dual variables for the other two
tasks. When using Second-order Perceptron, we set its parameter a (see [7] for details) by testing
on a generous range of values. For brevity, only the settings achieving the best results are reported.
On the sparse learning tasks we tried Higher-order Perceptron with norm p = 2, 4, 7, 10, while on
the other two tasks we set p = 2. In any case, for each value of p, we set⁷ $\alpha_k = c/k$, with c =
0, 0.2, 0.4, 0.6, 0.8. Since c = 0 corresponds to a standard p-norm Perceptron algorithm [13, 12] we
tried to emphasize the comparison c = 0 vs. c > 0. Finally, when using kernels on the OCR tasks,
we also compared to a sparse dual version of Higher-order Perceptron. On a mistaken round t = t(k), this algorithm sets $\alpha_k = c/k$ if $y_t v_{k-1}^\top x_t \geq 0$, and $\alpha_k = 0$ otherwise (thus, when $y_t v_{k-1}^\top x_t < 0$ the matrix $B_{k-1}$ is not updated). For the sake of brevity, the standard Perceptron algorithm is
called FO ("First Order"), the Second-order algorithm is denoted by SO ("Second Order"), while the
Higher-order algorithm with norm parameter p and $\alpha_k = c/k$ is abbreviated as HOp(c). Thus, for
instance, FO = HO2(0).
Results and conclusions. Our Higher-order Perceptron algorithm seems to deliver interesting
results. In all our experiments HOp (c) with c > 0 outperforms HOp (0). On the other hand, the
comparison HOp (c) vs. SO depends on the specific task. On the DNA datasets, HOp (c) with c > 0 is
clearly superior in Breast. On Lymphoma, HOp (c) gets worse as p increases. This is a good indication that, in general, a multiplicative algorithm is not suitable for this dataset. In any case, HO2 turns
out to be only slightly worse than SO. On the artificial datasets HOp (c) with c > 0 is always better
than the corresponding p-norm Perceptron algorithm. On the text categorization tasks, HO2 tends to
perform better than SO. On USPS, HO2 is superior to the other competitors, while on MNIST it performs similarly when combined with Gaussian kernels (though it turns out to be relatively sparser),
while it is slightly inferior to SO when using polynomial kernels. The sparse version of HO2 cuts
the matrix updates roughly by half, still maintaining a good performance. In all cases HO2 (either
sparse or not) significantly outperforms FO.
⁶ We did not use the most frequent category because of its significant overlap with the other ones.
⁷ Notice that this setting fulfills the condition on $\alpha_k$ stated in Theorem 1.
Table 1: Training and test error on the two datasets "Breast" and "Lymphoma". Training error is
the average total number of updates over 5 training epochs, while test error is the average fraction
of misclassified patterns in the test set. The results refer to the same training/test splits. For each
algorithm, only the best setting is shown (best training and best test setting coincided in these experiments). Thus, for instance, HO2 differs from FO because of the c parameter. We emphasized
the comparison HO7(0) vs. HO7(c) with best c among the tested values. According to a Wilcoxon
signed rank test, an error difference of 0.5% or larger might be considered significant. In bold are
the smallest figures achieved on each row of the table.
                 FO       HO2      HO4      HO7(0)   HO7      HO10     SO
LYMPHOMA  TRAIN  45.2     22.1     21.7     19.6     24.5     18.9     47.4
          TEST   23.4%    11.8%    16.4%    10.0%    13.3%    10.0%    15.7%
BREAST    TRAIN  23.0     24.5     20.0     32.4     23.1     29.6     19.3
          TEST   11.5%    12.0%    11.5%    13.5%    11.9%    15.0%    9.6%
[Figure 2 plots: four panels, "Training updates vs training epochs" (top) and "Test error rates vs training epochs" (bottom), on Artificial0.0 (left) and Artificial0.1 (right); curves for FO = HO2(0.0), HO2(0.4), HO4(0.4), HO7(0.0), HO7(0.4), and SO (a = 0.2); x-axis: # of training epochs.]
Figure 2: Experiments on the two artificial datasets (Artificial0.0, on the left, and Artificial0.1, on
the right). The plots give training and test behavior as a function of the number of training epochs.
Notice that the test set in Artificial0.1 is affected by labelling noise of rate 10%. Hence, a visual
comparison between the two plots at the bottom can only be made once we shift down the y-axis of
the noisy plot by 10%. On the other hand, the two training plots (top) are not readily comparable.
The reader might have difficulty telling apart the two kinds of algorithms HOp(0.0) and HOp(c) with
c > 0. In practice, the latter turned out to be always slightly superior in performance to the former.
fication, having the ability to combine multiplicative (or nonadditive) and second-order behavior
into a single inference procedure. Like other algorithms, HOp can be extended (details omitted due
to space limitations) in several ways through known worst-case learning technologies, such as large
margin (e.g., [17, 11]), label-efficient/active learning (e.g., [5, 8]), and bounded memory (e.g., [10]).
References
[1] A. Alizadeh, et al. (2000). Distinct types of diffuse large B-cell lymphoma identified by gene expression profiling. Nature, 403, 503–511.
[2] D. Angluin (1988). Queries and concept learning. Machine Learning, 2(4), 319–342.
[3] P. Auer & M.K. Warmuth (1998). Tracking the best disjunction. Machine Learning, 32(2), 127–150.
[4] K.S. Azoury & M.K. Warmuth (2001). Relative loss bounds for on-line density estimation with the exponential family of distributions. Machine Learning, 43(3), 211–246.
[5] A. Bordes, S. Ertekin, J. Weston, & L. Bottou (2005). Fast kernel classifiers with on-line and active learning. JMLR, 6, 1579–1619.
[6] N. Cesa-Bianchi, Y. Freund, D. Haussler, D.P. Helmbold, R.E. Schapire, & M.K. Warmuth (1997). How to use expert advice. J. ACM, 44(3), 427–485.
Table 2: Experimental results on the four binary classification tasks derived from RCV1. "Train"
denotes the number of training corrections, while "Test" gives the fraction of misclassified patterns
in the test set. Only the results corresponding to the best test set accuracy are shown. In bold are the
smallest figures achieved for each of the 8 combinations of dataset (RCV1_x, x = 70, 101, 4, 59) and
phase (training or test).
            FO               HO2              SO
            TRAIN   TEST     TRAIN   TEST     TRAIN   TEST
RCV1_70     993     7.20%    941     6.83%    880     6.95%
RCV1_101    673     6.39%    665     5.81%    677     5.48%
RCV1_4      803     6.14%    783     5.94%    819     6.05%
RCV1_59     767     6.45%    762     6.04%    760     6.84%
Table 3: Experimental results on the OCR tasks. "Train" denotes the total number of training corrections, summed over the 10 categories, while "Test" denotes the fraction of misclassified patterns
in the test set. Only the results corresponding to the best test set accuracy are shown. For the sparse
version of HO2 we also reported (in parentheses) the number of matrix updates during training. In
bold are the smallest figures achieved for each of the 8 combinations of dataset (USPS or MNIST),
kernel type (Gaussian or Polynomial), and phase (training or test).
                    FO               HO2              Sparse HO2              SO
                    TRAIN   TEST     TRAIN   TEST     TRAIN         TEST      TRAIN   TEST
USPS    Gauss       1385    6.53%    945     4.76%    965  (440)    5.13%     1003    5.05%
USPS    Poly        1609    7.37%    1090    5.71%    1081 (551)    5.52%     1054    5.53%
MNIST   Gauss       5834    2.10%    5351    1.79%    5363 (2596)   1.81%     5684    1.82%
MNIST   Poly        8148    3.04%    6404    2.27%    6476 (3311)   2.28%     6440    2.03%
[7] N. Cesa-Bianchi, A. Conconi & C. Gentile (2005). A second-order perceptron algorithm. SIAM Journal of Computing, 34(3), 640–668.
[8] N. Cesa-Bianchi, C. Gentile, & L. Zaniboni (2006). Worst-case analysis of selective sampling for linear-threshold algorithms. JMLR, 7, 1205–1230.
[9] C. Cortes & V. Vapnik (1995). Support-vector networks. Machine Learning, 20(3), 273–297.
[10] O. Dekel, S. Shalev-Shwartz, & Y. Singer (2006). The Forgetron: a kernel-based Perceptron on a fixed budget. NIPS 18, MIT Press, pp. 259–266.
[11] C. Gentile (2001). A new approximate maximal margin classification algorithm. JMLR, 2, 213–242.
[12] C. Gentile (2003). The robustness of the p-norm algorithms. Machine Learning, 53(3), pp. 265–299.
[13] A.J. Grove, N. Littlestone & D. Schuurmans (2001). General convergence results for linear discriminant updates. Machine Learning Journal, 43(3), 173–210.
[14] S. Gruvberger, et al. (2001). Estrogen receptor status in breast cancer is associated with remarkably distinct gene expression patterns. Cancer Res., 61, 5979–5984.
[15] J. Kivinen, M.K. Warmuth, & P. Auer (1997). The perceptron algorithm vs. winnow: linear vs. logarithmic mistake bounds when few input variables are relevant. Artificial Intelligence, 97, 325–343.
[16] Y. Le Cun, et al. (1995). Comparison of learning algorithms for handwritten digit recognition. ICANN 1995, pp. 53–60.
[17] Y. Li & P. Long (2002). The relaxed online maximum margin algorithm. Machine Learning, 46(1-3), 361–387.
[18] N. Littlestone (1988). Learning quickly when irrelevant attributes abound: a new linear-threshold algorithm. Machine Learning, 2(4), 285–318.
[19] N. Littlestone & M.K. Warmuth (1994). The weighted majority algorithm. Information and Computation, 108(2), 212–261.
[20] P. Long & X. Wu (2004). Mistake bounds for maximum entropy discrimination. NIPS 2004.
[21] A.B.J. Novikov (1962). On convergence proofs on perceptrons. Proc. of the Symposium on the Mathematical Theory of Automata, vol. XII, pp. 615–622.
[22] Reuters: 2000. http://about.reuters.com/researchandstandards/corpus/.
[23] S. Shalev-Shwartz & Y. Singer (2006). Online Learning Meets Optimization in the Dual. COLT 2006, pp. 423–437.
[24] B. Schoelkopf & A. Smola (2002). Learning with Kernels. MIT Press.
[25] V. Vovk (2001). Competitive on-line statistics. International Statistical Review, 69, 213–248.
Bruce E. Rosen, James M. Goodwin, and Jacques J. Vidal
Distributed Machine Intelligence Laboratory
Computer Science Department
University of California, Los Angeles
Los Angeles, CA 90024
Abstract
This paper examines a class of neuron-based learning systems for dynamic control that rely on adaptive range coding of sensor inputs. Sensors are assumed to provide binary coded range vectors that coarsely describe the system state. These vectors are input to neuron-like processing elements. Output decisions generated by these "neurons" in turn affect the system state, subsequently producing new inputs. Reinforcement signals from the environment are received at various intervals and evaluated. The neural weights as well as the range boundaries determining the output decisions are then altered with the goal of maximizing future reinforcement from the environment. Preliminary experiments show the promise of adapting "neural receptive fields" when learning dynamical control. The observed performance with this method exceeds that of earlier approaches.
The observed performance with this method exceeds
that of earlier approaches.
486
Adaptive Range Coding
1 INTRODUCTION
A major criticism of unsupervised learning and control techniques such as those used by Barto et al. (Barto, 1983) and by Albus (Albus, 1981) is the need for a priori selection of region sizes for range coding. Range coding in principle generalizes inputs and reduces computational and storage overhead, but the boundary partitioning, determined a priori, is often non-optimal (for example, the ranges described in (Barto, 1983) differ from those used in (Barto, 1982) for the same control task). Determination of nearly optimal, or at least adequate, regions is left as an additional task that would require that the system dynamics be analyzed, which is not always possible.
To address this problem, we move region boundaries adaptively, progressively altering the initial partitioning to a more appropriate representation with no need for a priori knowledge.
Unlike previous work (Michie, 1968), (Barto, 1983), (Anderson, 1982) which used fixed coders, this approach produces adaptive coders that contract and expand regions/ranges. During adaptation, frequently active regions/ranges contract, reducing the number of situations in which they will be activated, and increasing the chances that neighboring regions will receive input instead. This class of self-organization is discussed in Kohonen (Kohonen, 1984), (Ritter, 1986, 1988). The resulting self-organizing mapping will tend to track the environmental input probability density function. Adaptive range coding creates a focusing mechanism. Resources are distributed according to regional activity level. More resources can be allocated to critical areas of the state space. Concentrated activity is more finely discriminated and corresponding control decisions are more finely tuned.
Dynamic shaping of the region boundaries can be achieved without sacrificing memory or learning speed. Also, since the region boundaries are finally determined solely by the environmental dynamics, optimal a priori ranges and region specifications are not necessary.
As an example, consider a one dimensional state space, as shown in figures 1a and 1b. It is partitioned into three regions by the vertical lines shown. The heavy curve indicates a theoretical optimal control surface (unknown a priori) of a state space which the weight in each region should approximate. The dashed horizontal lines show the best learned weight values for the respective partitionings. Weight values approximate the mean value of the true control surface weight in each of the regions.
[Figure 1a (weight vs. state space): Even Region Partition. Figure 1b (weight vs. state space): Adapted Region Partition.]
An evenly partitioned space produces the weights shown in figure 1a. Figure 1b shows the regions after the boundaries have been adjusted, and the final weight values. Although the weights in both 1a and 1b reflect the mean of the true control surface (in their respective regions), adaptive partitioning is able to represent the ideal surface with a smaller mean squared error.
2 ADAPTIVE RANGE CODING RULE
For the more general n dimensional control problem using adaptive range boundaries, the shape of each region can change from an initial n dimensional prism to an n dimensional polytope. The polytope shape is determined by the current activation state and its average activity. The heuristic for our adaptive range coding is to move each region vertex towards or away from the current activation state according to the reinforcement. The equation which adjusts each region boundary is adapted in part from the weight alteration formula used by Kohonen's topological mapping (Kohonen, 1984). Each region (i) consists of 2n vertices ($V_{ij}(t)$, $1 \le j \le 2n$) describing that region's boundaries that move toward or away from the current state activity ($A(t)$) depending on the reinforcement r:

$$V_{ij}(t+1) = V_{ij}(t) + K\, r\, h(V_{ij}(t) - A(t)) \qquad [1]$$

where K is the gain, r is the reinforcement (or error) used to alter the weight in the region, and $h(\cdot)$ is a Gaussian or a difference-of-Gaussians function.
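A minimal sketch of update [1] follows, under one common reading in which the Gaussian attenuates the difference vector by its magnitude; the defaults K = 1 and $\sigma$ = 10.0 are the values used in the experiments below, while the vectorized form of h is an assumption.

```python
import numpy as np

def update_vertex(v, a, r, K=1.0, sigma=10.0):
    """Equation [1]: V_ij(t+1) = V_ij(t) + K * r * h(V_ij(t) - A(t)).

    v : vertex V_ij(t), an n-dimensional point
    a : current state activity A(t)
    r : reinforcement (or error) signal
    The Gaussian h is assumed here to scale the difference vector, so
    its effect attenuates as ||v - a|| grows.
    """
    d = v - a
    h = np.exp(-np.dot(d, d) / (2.0 * sigma ** 2))
    return v + K * r * h * d
```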
3 SIMULATION RESULTS
In our experiments, the expected reinforcement of the ASE/ACE system, $\hat{r}$ (described in (Barto, 1983)), was also used as r in [1]. Simple pole balancing (see figure 2) was chosen, rather than the cart-pole balancing task in (Barto, 1983). The time step was chosen to be large (0.05 seconds) and initial region boundaries of $\theta$ and $\dot{\theta}$ were chosen as (−12, −6, 0, 1, 6, 12) and (−∞, −10, 10, ∞). All other parameters were identical to those described in (Barto, 1983).
[Diagram: a pole that can be given an impulse to the left or to the right.]
Figure 2: The Pole Balancing Task
The standard ASE, ASE/ACE, and adaptive range coding algorithms were compared on this task. One hundred runs of each algorithm were performed. Each run consisted of a sequence of trials, and each trial counted the number of time steps until the pole fell. If the pole had not fallen after 20,000 time steps, the trial was considered to be successful and it was terminated. Each run was terminated either after 100 trials, or after the pole was successfully balanced in five successive trials. (We assumed that five successive trials indicated that the system's weights and regions had stabilized.) All region weights were initialized to zero at the start of each run.
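A sketch of this run/trial protocol; make_learner and simulate_trial are hypothetical placeholders for the learning system and the pole simulation.

```python
def count_successful_runs(make_learner, simulate_trial, n_runs=100):
    """A trial succeeds if the pole survives 20,000 time steps; a run
    succeeds (and stops) after five successive successful trials, and
    is otherwise terminated after 100 trials."""
    successes = 0
    for _ in range(n_runs):
        learner = make_learner()        # region weights start at zero
        streak = 0
        for _trial in range(100):
            steps = simulate_trial(learner, max_steps=20000)
            streak = streak + 1 if steps >= 20000 else 0
            if streak == 5:
                successes += 1
                break
    return successes
```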
In the adaptive range coding runs, the updated vertex state positions were determined by 3 factors: the difference between the vertex and the current state, the expected reinforcement, and the gain. A Gaussian served as an appropriate decay function to modulate vertex movements. Current-state-to-vertex differences served as function input parameters. Outputs attenuated with increasing inputs, and the standard deviation $\sigma$ of the Gaussian shaped the decay function.
The magnitude and position of each vertex movement were also modulated by the reinforcement $\hat{r}(t)$, which moves the vertex towards or away from the current state, and by K, a gain parameter. The user-definable parameter values of K and $\sigma$ were initially chosen (arbitrarily) as K = 1 and $\sigma$ = 10.0, and were used in the following experiments. Parameters were not fine tuned or optimized.
Figure 3 shows the results of the ASE, ASE/ACE, and adaptive range coding experiments. The various runs and trials differed only in the random number generator seed. Corresponding runs and trials using the standard ASE, ASE/ACE and the adaptive range coding algorithm used the same random number seed. All other parameters were identical between the two systems. However, in adaptive range coding, region boundaries were shifted in accordance with [1] during each run.
[Bar chart: percentage of successful runs for the Associative Search Element (ASE), the Critic Element (ACE), and Adaptive Range Coding; y-axis: % success, 0 to 100.]
Figure 3: Comparison of the ASE, ASE/ACE, and the Adaptive Range Coding Algorithm.
We simulated 100 runs of the ASE algorithm with zero successful runs. Using the ASE/ACE algorithm, 54 runs were successful. With the adaptive range coding algorithm, 84 of the 100 runs were successful. With $\sigma_{ase/ace}$ = 4.98 and $\sigma_{adapt\_range\_code}$ = 3.66, a $\chi^2$ test showed the two performance sets to be statistically different (p > 0.95).
Figure 4 shows a comparison of the average performance values of the 100 ASE/ACE and Adaptive Range Coding (ARC) runs. Pole balancing time is shown as a function of the number of learning trials experienced.
[Figure 4 plot: "Pole Balancing Average Performances"; x-axis: Trial Number (0 to 100); y-axis: Run Time (2000 to 20000); curves for ASE/ACE and ARC.]
Figure 4: Comparison of the ASE/ACE and Adaptive Range Coding learning rates on the cart pole task. Pole balancing time is shown as a function of learning trials. Results are averaged over 100 runs.
The disparity between the run times of the two different algorithms is due to the comparatively large number of failures of the ASE/ACE system. Statistical analysis indicates no significant difference in the learning rates or performance levels of the successful runs between categories, leading us to believe that adaptive range coding may lead to an "all or none" behavior, and that there is a minimum area of the state space that the system must explore to succeed.
4 CONCLUSION
The research has shown that neuron-like elements with adjustable regions can dynamically create topological cause and effect maps reflecting the control laws of dynamic systems. It is anticipated, from the results of the examples presented above, that adaptive range coding will be more effective than earlier static region approaches in the control of complex systems with unknown dynamics.
References
J. S. Albus. (1981) Brains, Behavior, and Robotics. Peterborough, NH: McGraw-Hill Byte Books.
C. W. Anderson. (1982) Feature Generation and Selection by a Layered Network of Reinforcement Learning Elements: Some Initial Experiments. Technical Report COINS 82-12. Amherst, MA: University of Massachusetts, Department of Computer and Information Science.
A. Barto, R. Sutton, and C. Anderson. (1982) Neuron-like elements that can solve difficult learning control problems. COINS Tech. Rept. No. 82-20. Amherst, MA: University of Massachusetts, Department of Computer and Information Science.
A. G. Barto, R. S. Sutton, and C. W. Anderson. (1983) Neuron-like elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man, and Cybernetics, 13(5): 834–846.
T. Kohonen. (1984) Self-Organization and Associative Memory. New York: Springer-Verlag.
D. Michie and R. Chambers. (1968) Machine Intelligence. Edinburgh: Oliver and Boyd.
H. Ritter and K. Schulten. (1986) Topology Conserving Mappings for Learning Motor Tasks. In J. S. Denker (ed.), Neural Networks for Computing. Snowbird, Utah: AIP.
H. Ritter and K. Schulten. (1988) Extending Kohonen's Self-Organizing Mapping Algorithm to Learn Ballistic Movements. In R. Eckmiller (ed.), Neural Computers. Springer-Verlag.
Duan Tran*
U.Illinois at Urbana-Champaign
Urbana, IL 61801 USA
[email protected]
D.A. Forsyth
U.Illinois at Urbana-Champaign
Urbana, IL 61801 USA
[email protected]
Abstract
Fair discriminative pedestrian finders are now available. In fact, these pedestrian
finders make most errors on pedestrians in configurations that are uncommon in
the training data, for example, mounting a bicycle. This is undesirable. However,
the human configuration can itself be estimated discriminatively using structure
learning. We demonstrate a pedestrian finder which first finds the most likely human pose in the window using a discriminative procedure trained with structure
learning on a small dataset. We then present features (local histogram of oriented
gradient and local PCA of gradient) based on that configuration to an SVM classifier. We show, using the INRIA Person dataset, that estimates of configuration
significantly improve the accuracy of a discriminative pedestrian finder.
1 Introduction
Very accurate pedestrian detectors are an important technical goal; approximately half-a-million
pedestrians are killed by cars each year (1997 figures, in [1]). At relatively low resolution, pedestrians tend to have a characteristic appearance. Generally, one must cope with lateral or frontal views
of a walk. In these cases, one will see either a "lollipop" shape (the torso is wider than the legs, which are together in the stance phase of the walk) or a "scissor" shape (where the legs are swinging in the walk). This encourages the use of template matching. Early template matchers include: support vector machines applied to a wavelet expansion ([2], and variants described in [3]); a neural network applied to stereoscopic reconstructions [4]; chamfer matching to a hierarchy of contour templates [5]; a likelihood threshold applied to a random field model [6]; an SVM applied to spatial wavelets stacked over four frames to give dynamical cues [3]; a cascade architecture applied to spatial averages of temporal differences [7]; and a temporal version of chamfer matching to a hierarchy of contour templates [8].
By far one of the most successful static template matchers is due to Dalal and Triggs [9]. Their method is based on a comprehensive study of features and their effects on performance for the pedestrian detection problem. The method that performs best involves a histogram of oriented gradient responses (a HOG descriptor). This is a variant of Lowe's SIFT feature [10]. Each window is decomposed into overlapping blocks (large spatial domains) of cells (smaller spatial domains). In each block, a histogram of gradient directions (or edge orientations) is computed for each cell with a measure of histogram "energy". These cell histograms are concatenated into block histograms followed by normalization which obtains a modicum of illumination invariance. The detection window is tiled with an overlapping grid. Within each block HOG descriptors are computed, and the
* We would like to thank Alexander Sorokin for providing the annotation software and Pietro Perona for insightful comments. This work was supported by the Vietnam Education Foundation as well as in part by the National Science Foundation under IIS-0534837 and in part by the Office of Naval Research under N00014-01-1-0890 as part of the MURI program. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect those of the National Science Foundation or the Office of Naval Research.
resulting feature vector is presented to an SVM. Dalal and Triggs show this method produces no
errors on the 709 image MIT dataset of [2]; they describe an expanded dataset of 1805 images. Furthermore, they compare HOG descriptors with the original method of Papageorgiou and Poggio [2];
with an extended version of the Haar wavelets of Mohan et al. [11]; with the PCA-Sift of Ke and
Sukthankar ([12]; see also [13]); and with the shape contexts of Belongie et al. [14]. The HOG
descriptors outperform all other methods. Recently, Sabzmeydani and Mori [15] reported improved
results by using AdaBoost to select shapelet features (triplets of location, direction and strength of
local average gradient responses in different directions).
A key difficulty with pedestrian detection is that detectors must work on human configurations not
often seen in datasets. For systems to be useful, they cannot fail even on configurations that are very
uncommon; it is not acceptable to run people over when they stand on their hands. There is some evidence (figure 1) that less common configurations present real difficulties for very good current pedestrian detectors (our reimplementation of Dalal and Triggs' work [9]).
Figure 1. Configuration estimates result in our method producing fewer false negatives than our
implementation of Dalal and Triggs does. The figure shows typical images which are incorrectly
classified by our implementation of Dalal and Triggs, but correctly classified when a configuration
estimate is attached. We conjecture that a configuration estimate can avoid problems with occlusion
or contrast failure because the configuration estimate reduces noise and the detector can use lower
detection thresholds.
1.1 Configuration and Parts
Detecting pedestrians with templates most likely works because pedestrians appear in a relatively
limited range of configurations and views (e.g. "Our HOG detectors cue mainly on silhouette contours (especially the head, shoulders and feet)" [9], p.893). It appears certain that using the architecture of constructing features for whole image windows and then throwing the result into a classifier
could be used to build a person-finder for arbitrary configurations and arbitrary views only with a
major engineering effort. The set of examples required would be spectacularly large, for example.
This is unattractive, because this set of examples implicitly encodes a set of facts that are relatively
easy to make explicit. In particular, people are made of body segments which individually have a
quite simple structure, and these segments are connected into a kinematic structure which is quite
well understood.
All this suggests finding people by finding the parts and then reasoning about their layout; essentially, building templates with complex internal kinematics. The core idea is very old (see the review
in [16]) but the details are hard to get right and important novel formulations are a regular feature of
the current research literature.
Simply identifying the body parts can be hard. Discriminative approaches use classifiers to detect
parts, then reason about configuration [11]. Generative approaches compare predictions of part
appearance with the image; one can use a tree structured configuration model [17], or an arbitrary
graph [18]. If one has a video sequence, part appearance can itself be learned [19, 20]; more recently,
Ramanan has shown knowledge of articulation properties gives an appearance model in a single
image [21]. Mixed approaches use a discriminative model to identify parts, then a generative
model to construct and evaluate assemblies [22, 23, 24]. Codebook approaches avoid explicitly
modelling body segments, and instead use unsupervised methods to find part decompositions that
are good for recognition (rather than disarticulation) [25].
Our pedestrian detection strategy consists of two steps: first, for each window, we estimate the
configuration of the best person available in that window; second, we extract features for that window conditioned on the configuration estimate, and pass these features to a support vector machine
classifier, which makes the final decision on the window.
Figure 2. This figure is best viewed in color. Our model of human layout is parametrized by seven
vertices, shown on an example on the far left. The root is at the hip; the arrows give the direction of conditional dependence. Given a set of features, the extremal model can be identified by
dynamic programming on point locations. We compute segment features by placing a box around
some vertices (as in the head), or pairs of vertices (as in the torso and leg). Histogram features are
then computed for base points referred to the box coordinate frame; the histogram is shifted by the
orientation of the box axis (section 3) within the rectified box. On the far right, a window showing
the color key for our structure learning points; dark green is a foot, green a knee, dark purple the
other foot, purple the other knee, etc. Note that structure learning is capable of finding distinction of
left legs (green points) and right legs (pink points). On the center right, examples of configurations
estimated by our configuration estimator after 20 rounds of structure learning to estimate W.
2 Configuration Estimation and Structure Learning
We are presented with a window within which may lie a pedestrian. We would like to be able
to estimate the most likely configuration for any pedestrian present. Our research hypothesis is
that this estimate will improve pedestrian detector performance by reducing the amount of noise the final detector must cope with; essentially, the segmentation of the pedestrian is improved
from a window to a (rectified) figure. We follow convention (established by [26]) and model the
configuration of a person as a tree model of segments (figure 2), with a score of segment quality and
a score of segment-segment configuration. We ignore arms because they are small and difficult to
localize. Our configuration estimation procedure will use dynamic programming to extract the best
configuration estimate from a set of scores depending on the location of vertices on the body model.
However, we do not know which features are most effective at estimating segment location; this is a
well established difficulty in the literature [16]. Structure learning is a method that uses a series of
correct examples to estimate appropriate weightings of features relative to one another to produce a
score that is effective at estimating configuration [27, 28]. We will write the image as I; coordinates
in the image as x; the coordinates of an estimated configuration as y (which is a stack of 7 point
coordinates); the score for this configuration as $W^\top f(I, x; y)$ (which is a linear combination of a
collection of scores, each of which depends on the configuration and the image).
For a given image I0 and known W and f , the best configuration estimate is
$$\arg\max_{y \in y(I_0)} W^\top f(I_0, x; y)$$
and this can be found with dynamic programming for appropriate choice of f and y(I0 ). There is a
variety of sensible choices of features for identifying body segments, but there is little evidence that
a particular choice of features is best; different choices of W may lead to quite different behaviours.
In particular, we will collect a wide range of features likely to identify segments well in f, and wish
to learn a choice of W that will give good configuration estimates.
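To make the maximization concrete, here is a sketch of max-sum dynamic programming on the tree of body points; the unary and pairwise score tables stand in for the components of $W^\top f$, and the candidate-location grids are illustrative rather than the paper's actual feature computation.

```python
import numpy as np

def best_tree_config(children, unary, pairwise, root=0):
    """Max-sum dynamic programming over candidate point locations on a tree.

    children: dict node -> list of child nodes (root at the hip)
    unary[v]: 1-D array of scores, one per candidate location of point v
    pairwise[(u, v)]: 2-D array of scores over (location of u, location of v)
    Returns a dict giving the best location index for every point.
    """
    msg, best_child_loc = {}, {}

    def up(v):
        score = unary[v].astype(float).copy()
        for c in children.get(v, []):
            up(c)
            tab = pairwise[(v, c)] + msg[c][None, :]   # shape (n_v, n_c)
            best_child_loc[(v, c)] = tab.argmax(axis=1)
            score += tab.max(axis=1)
        msg[v] = score

    up(root)
    locs = {root: int(msg[root].argmax())}
    stack = [root]
    while stack:
        v = stack.pop()
        for c in children.get(v, []):
            locs[c] = int(best_child_loc[(v, c)][locs[v]])
            stack.append(c)
    return locs
```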
We choose a loss function $L(y_t, y_p)$ that gives the cost of predicting $y_p$ when the correct answer is $y_t$. Write the set of n examples as E, and $y_{p,i}$ as the prediction for the i-th example. Structure learning must now estimate a W to minimize the hinge loss as in [29]

$$\frac{1}{2}\|W\|^2 + \frac{1}{n}\sum_{i \in \text{examples}} \xi_i$$

subject to the constraints

$$\forall i \in E,\quad W^\top f(I_i, x; y_{t,i}) + \xi_i \ge \max_{y_{p,i} \in y(I_i)} \left( W^\top f(I_i, x; y_{p,i}) + L(y_{t,i}, y_{p,i}) \right)$$
At the minimum, the slack variables $\xi_i$ achieve equality in the constraints. Therefore, we can move the constraints into the objective function, which becomes:

$$\frac{1}{2}\|W\|^2 + \frac{1}{n}\sum_{i \in \text{examples}} \left( \max_{y_{p,i} \in y(I_i)} \left( W^\top f(I_i, x; y_{p,i}) + L(y_{t,i}, y_{p,i}) \right) - W^\top f(I_i, x; y_{t,i}) \right)$$
Notice that this function is convex, but not differentiable. We follow Ratliff et al. [29], and use
the subgradient method (see [30]) to minimize. In this case, the derivative of the cost function at
an extremal $y_{p,i}$ is a subgradient (but not a gradient, because the cost function is not differentiable
everywhere).
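A sketch of one subgradient step on the objective above; loss_augmented_argmax stands in for the dynamic program with L folded into the scores, and the step size and regularization weight shown here are illustrative, not values from the paper.

```python
def subgradient_step(W, examples, feat, loss_augmented_argmax,
                     lam=1.0, eta=0.01):
    """One subgradient step for the max-margin structure learning objective.

    examples: list of (image I, true configuration y_t)
    feat(I, y): feature vector f(I, x; y) as a numpy array
    loss_augmented_argmax(W, I, y_t): argmax_y W.f(I, x; y) + L(y_t, y)
    """
    g = lam * W                               # gradient of (lam/2)||W||^2
    for I, y_t in examples:
        y_p = loss_augmented_argmax(W, I, y_t)
        # subgradient of the max term at the extremal y_p
        g = g + (feat(I, y_p) - feat(I, y_t)) / len(examples)
    return W - eta * g
```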
3 Features
There are two sets of features: first, those used for estimating configuration of a person from a
window; and second, those used to determine whether a person is present conditioned on the best
estimate of configuration.
3.1 Features for Estimating Configuration
We use a tree structured model, given in figure 2. The tree is given by the position of seven points,
and encodes the head, torso and legs; arms are excluded because they are small and difficult to
identify, and pedestrians can be identified without localizing arms. The tree is rooted at hips, and
the arrows give the direction of conditional dependence. We assume that torso, lef tleg, rightleg
are conditionally independent given the root (at the hip).
The feature vector f (I, x; y) contains two types of feature: appearance features encode the appearance of putative segments; and geometric features encode relative and absolute configuration of the
body segments.
Each geometric feature depends on at most three point positions. We use three types of feature.
First, the length of a segment, represented as a 15-dimensional binary vector whose elements encode
whether the segment is longer than each of a set of test segments. Second, the cosine of the angle
between a segment and the vertical axis. Third, the cosine of the angle between pairs of adjoining segments (except at the lower torso, for complexity reasons); this allows the structure learning
method to prefer straight backs, and reasonable knees.
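A sketch of the three geometric feature types; the set of test segment lengths and the image coordinate convention (y axis taken as vertical) are assumptions made for illustration.

```python
import numpy as np

def geometric_features(p_a, p_b, p_c=None,
                       test_lengths=tuple(range(5, 80, 5))):
    """Geometric features for the segment from p_a to p_b.

    Returns the 15-dim binary length code, the cosine of the angle with
    the vertical axis, and (if an adjoining point p_c is given) the
    cosine of the angle between the two segments at the shared joint.
    """
    seg = np.asarray(p_b, float) - np.asarray(p_a, float)
    n = np.linalg.norm(seg) + 1e-8
    code = [float(n > t) for t in test_lengths]   # 15 binary entries
    feats = code + [seg[1] / n]                   # cosine with vertical axis
    if p_c is not None:
        adj = np.asarray(p_c, float) - np.asarray(p_b, float)
        feats.append(float(np.dot(seg, adj)) /
                     (n * (np.linalg.norm(adj) + 1e-8)))
    return np.array(feats)
```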
Appearance features are computed for rectangles constructed from pairs of points adjacent in the
tree. For each rectangle, we compute Histogram of Oriented Gradient (HOG) features, after [9].
These features have a strong record in pedestrian detection, because they can detect the patterns
of orientation associated with characteristic segment outlines (typically, strong vertical orientations
in the frame of the segment for torso and legs; strong horizontal orientations at the shoulders and
head). However, histograms involve spatial pooling; this means that one can have many strong
vertical orientations that do not join up to form a segment boundary. This effect means that HOG
features alone are not particularly effective at estimating configuration.
To counter this effect, we use the local gradient features described by Ke and Sukthankar [12].
To form these features, we concatenate the horizontal and vertical gradients of the patches in the
segment coordinate frame, then normalize and apply PCA to reduce the number of dimensions.
Since we want to model the appearance, we do not align the orientation to a canonical orientation as
in PCA-SIFT. This feature reveals whether the pattern of a body part appears at that location. The
PCA space for each body part is constructed from 500 annotated positive examples.
3.2 Features for Detection
Once the best configuration has been obtained for a window, we must determine whether a person
is present or not. We do this with a support vector machine. Generally, the features that determine
configuration should also be good for determining whether a person is present or not. However, a set
of HOG features for the whole image window has been shown to be good at pedestrian detection [9].
The support vector machine should be able to distinguish between good and bad features, so it is
natural to concatenate the configuration features described above with a set of HOG features. We
find it helpful to reduce the dimension of the set of HOG features to 500, using principal components.
We find that these whole window features help recover from incorrect structure predictions. These
combined features are used in training the SVM classifier and in detection as well.
4 Results
Dataset: We use INRIA Person, consisting of 2416 pedestrian images (1208 images with their left-right reflections) and 1218 background images for training. For testing, there are 1126 pedestrian
images (563 images with their left-right reflections) and 453 background images.
Training structure learning: we manually annotate 500 selected pedestrian images in the training set. We use all 500 annotated examples to build the PCA spaces for each body segment. In training, each example is used to update the weight vector. The order in which examples are selected in each round is drawn randomly, based on the differences between their scores on the predictions and their scores on the true targets. For each round, we draw 300 examples (since structure learning is expensive). We have trained the structure learning for 10 rounds and 20 rounds for comparison.
Quality of configuration estimates: Configuration estimates look good (figure 2). A persistent
nuisance associated with pictorial structure models of people is the tendency of such models to
place legs on top of one another. This occurs if one uses only appearance and relative geometric
features. However, our results suggest that if one uses absolute configuration features as well as
different appearance features for left and right legs (implicit in the structure learning procedure), the
left and right legs are identified correctly. The conditional independence assumption (which means
we cannot use the angle between the legs as a feature) does not appear to cause problems, perhaps
because absolute configuration features are sufficient.
Bootstrapping the SVM: The final SVM is bootstrapped, as in [9]. We use 2416 pedestrian images with 2756 window images extracted from 1218 background images. We apply the learned structure model to these 2416 positive examples and 2756 negative examples to train the initial SVM classifier. We then use this classifier to scan over the 1218 background images with a step size of 32 pixels and find hard examples (including false positives and true negatives of low confidence, using LibSVM [31] with the probability option). These negatives yield a bootstrap training set for the final SVM classifier. This bootstrap learning helps to reduce false alarms significantly.
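A sketch of this bootstrapping loop; train_svm and scan_windows are placeholders, and the probability threshold used to keep hard examples is illustrative, not a value from the paper.

```python
def bootstrap_svm(train_svm, scan_windows, positives, negatives,
                  background_images, step=32, keep_above=0.3):
    """Train an initial SVM, mine hard examples from background images,
    then retrain on the augmented negative set.

    keep_above: illustrative probability threshold; windows scored above
    it are false positives or low-confidence true negatives.
    """
    clf = train_svm(positives, negatives)
    hard = []
    for img in background_images:
        for window, p in scan_windows(clf, img, step=step):
            if p >= keep_above:
                hard.append(window)   # keep as a hard negative
    return train_svm(positives, negatives + hard)
```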
Testing: We test on 1126 positive images and scan 64x128 image windows over 453 negative test
images, stepping by 16 pixels, a total of 182,934 negative windows.
Scanning rate and comparison: Pedestrian detection systems work by scanning image windows,
and presenting each window to a detector. Dalal and Triggs established a methodology for evaluating
pedestrian detectors, which is now quite widely used. Their dataset offers a set of positive windows
(where pedestrians are centered), and a set of negative images. The negative images produce a
pool of negative windows, and the detector is evaluated on detect rate on the positive windows
and the false positive per window (FPPW) rate on the negative windows. This strategy (which evaluates the detector, rather than the combination of detection and scanning) is appropriate for
comparing systems that scan image windows at approximately the same high rate. Current systems
do so, because the detectors require nearly centered pedestrians. However, the important practical
parameter for evaluating a system is the false positive per image (FPPI) rate. If one has a detector
that does not require a pedestrian to be centered in the image window, then one can obtain the same
detect rate while scanning fewer image windows. In turn, the FPPI rate will go down even if the
FPPW rate is fixed. To date, this issue has not arisen, because pedestrian detectors have required
pedestrians to be centered.
Figure 3. Left: a comparison of our method with the best detector of Dalal and Triggs, and the detector of Sabzmeydani and Mori, on the basis of FPPW rate. This comparison ignores the fact that we can look at fewer image windows without loss of system sensitivity. We show ROCs for a configuration estimator trained on 10 (blue) and 20 (red) rounds of structure learning. With 20 rounds of structure learning, our detector easily outperforms that of Dalal and Triggs. Note that at high specificity, our detector is slightly more sensitive than that of Sabzmeydani and Mori, too. Right: a comparison of our method with the best detector of Dalal and Triggs, and the detector of Sabzmeydani and Mori, on the basis of FPPI rate. This comparison takes into account the fact that we can look at fewer image windows (by a factor of four). However, scanning by larger steps might cause a loss of sensitivity. We test this with a procedure of replicating positive examples, described in the text, and show the results of four runs. The low variance in the detect rate under this procedure shows that our detector is highly insensitive to the configuration of the pedestrian within a window. If one evaluates on the basis of false positives per image (which is likely the most important practical parameter), our system easily outperforms the state of the art.
4.1 The Effect of Configuration Estimates
Figure 3 compares our detector with that of Dalal and Triggs, and of Sabzmeydani and Mori on the
basis of detect and FPPW rates. We plot detect rate against FPPW rate for the three detectors. For
this plot, note that at low FPPW rate our method is somewhat more sensitive than that of Sabzmeydani and Mori, but has no advantage at higher FPPW rates.
However, this does not tell the whole story. We scan images at steps of 16 pixels (rather than 8
pixels for Dalal and Triggs and Sabzmeydani and Mori). This means that we scan four times fewer
windows than they do. If we can establish that the detect rate is not significantly affected by big
offsets in pedestrian position, then we expect a large advantage in FPPI rate.
We evaluate the effect on the detect rate of scanning by large steps by a process of sampling. Each
positive example is replaced by a total of 256 replicates, obtained by offsetting the image window by
steps in the range -7 to 8 in x and y (figure 4). We now conduct multiple evaluation runs. For each,
we select one replicate of each positive example uniformly at random. For each run, we evaluate
the detect rate. A tendency of the detector to require centered pedestrians would appear as variance
in the reported detect rate. The FPPI rate of the detector is not affected by this procedure, which
evaluates only the spatial tuning of the detector.
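A sketch of the replicate-sampling evaluation; offsets of -7 to 8 pixels in x and y give the 256 replicates per positive example, and detector/crop are hypothetical placeholders.

```python
import itertools
import random

def evaluation_runs(detector, positives, crop, n_runs=4, seed=0):
    """Estimate spatial tuning: each run picks one random offset
    replicate per positive example and measures the detect rate."""
    rng = random.Random(seed)
    offsets = list(itertools.product(range(-7, 9), repeat=2))  # 256 offsets
    rates = []
    for _ in range(n_runs):
        hits = sum(bool(detector(crop(img, rng.choice(offsets))))
                   for img in positives)
        rates.append(hits / len(positives))
    return rates   # variance here reflects sensitivity to placement
```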
Figure 4. In color, original positive examples from the INRIA test set; next to each are three of the
replicates we use to determine the effect on our detection system of scanning relatively few windows,
or, equivalently, the effect on our detector of not having a pedestrian centered in the window. See
section 4.1, and figure 3.
Figure 3 compares system performance, combining detect and scanning rates, by plotting detect rate
against FPPI rate. We show four evaluation runs for our system; there is no evidence of substantial
variance in detect rate. Our system shows a very substantial increase in detect rate at fixed FPPI rate.
5 Discussion
There is a difficulty with the evaluation methodology for pedestrian detection established by Dalal
and Triggs (and widely followed). A pedestrian detector that tests windows cannot find more pedestrians than there are windows. This does not usually affect the interpretation of precision and recall
statistics because the windows are closely packed. However, in our method, because a pedestrian
need not be centered in the window to be detected, the windows need not be closely packed, and
there is a possibility of undercounting pedestrians who stand too close together. We believe that this
does not occur in our current method, because our window spacing is narrow relative to the width of
a pedestrian.
Part representations appear to be a natural approach to identifying people. However, to our knowledge, there is no clear evidence to date that shows compelling advantages to using such an approach
(e.g. the review in [16]). We believe our method does so. Configuration estimates appear to have two
important advantages. First, they result in a detector that is relatively insensitive to the placement of
a pedestrian in an image window, meaning one can look at fewer image windows to obtain the same
detect rate, with consequent advantages to the rate at which the system produces false positives. This
is probably the dominant advantage. Second, configuration estimates appear to be a significant help
at high specificity settings (notice that our method beats all others on the FPPW criterion at very
low FPPW rates). This is most likely because the process of estimating configurations focuses the
detector on important image features (rather than pooling information over space). The result would
be that, when there is low contrast or a strange body configuration, the detector can use a somewhat
lower detection threshold for the same FPPW rate. Figure 1 shows human configurations detected
by our method but not by our implementation of Dalal and Triggs; notice the predominance of either
strange body configurations or low contrast. Structure learning is an attractive method to determine
which features are discriminative in configuration estimation, and it produces good configuration
estimates in complex images. Future work will include: tying W components for legs; evaluating
arm detection; and formulating strategies to employ structure learning for detecting other objects.
References
[1] D.M. Gavrila. Sensor-based pedestrian protection. Intelligent Transportation Systems, pages 77–81, 2001.
[2] C. Papageorgiou and T. Poggio. A trainable system for object detection. Int. J. Computer Vision, 38(1):15–33, June 2000.
[3] C.P. Papageorgiou and T. Poggio. A pattern classification approach to dynamical object detection. In Int. Conf. on Computer Vision, pages 1223–1228, 1999.
[4] L. Zhao and C.E. Thorpe. Stereo- and neural network-based pedestrian detection. Intelligent Transportation Systems, 1(3):148–154, September 2000.
[5] D. Gavrila. Pedestrian detection from a moving vehicle. In European Conference on Computer Vision, pages II: 37–49, 2000.
[6] Y. Wu, T. Yu, and G. Hua. A statistical field model for pedestrian detection. In IEEE Conf. on Computer Vision and Pattern Recognition, pages I: 1023–1030, 2005.
[7] P. Viola, M.J. Jones, and D. Snow. Detecting pedestrians using patterns of motion and appearance. Int. J. Computer Vision, 63(2):153–161, July 2005.
[8] M. Dimitrijevic, V. Lepetit, and P. Fua. Human body pose recognition using spatio-temporal templates. In ICCV workshop on Modeling People and Human Interaction, 2005.
[9] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In IEEE Conf. on Computer Vision and Pattern Recognition, pages I: 886–893, 2005.
[10] D.G. Lowe. Distinctive image features from scale-invariant keypoints. Int. J. Computer Vision, 60(2):91–110, November 2004.
[11] A. Mohan, C.P. Papageorgiou, and T. Poggio. Example-based object detection in images by components. IEEE T. Pattern Analysis and Machine Intelligence, 23(4):349–361, April 2001.
[12] Y. Ke and R. Sukthankar. PCA-SIFT: a more distinctive representation for local image descriptors. In IEEE Conf. on Computer Vision and Pattern Recognition, pages II: 506–513, 2004.
[13] K. Mikolajczyk and C. Schmid. A performance evaluation of local descriptors. IEEE T. Pattern Analysis and Machine Intelligence, 2004. Accepted.
[14] Serge Belongie, Jitendra Malik, and Jan Puzicha. Shape matching and object recognition using shape contexts. IEEE T. Pattern Analysis and Machine Intelligence, 24(4):509–522, 2002.
[15] P. Sabzmeydani and G. Mori. Detecting pedestrians by learning shapelet features. In CVPR, 2007.
[16] D.A. Forsyth, O. Arikan, L. Ikemoto, J. O'Brien, and D. Ramanan. Computational studies in human motion 1: Tracking and animation. Foundations and Trends in Computer Vision, 2006. In press.
[17] P.F. Felzenszwalb and D.P. Huttenlocher. Pictorial structures for object recognition. Int. J. Computer Vision, 61(1):55–79, January 2005.
[18] M. P. Kumar, P. H. S. Torr, and A. Zisserman. Extending pictorial structures for object recognition. In Proceedings of the British Machine Vision Conference, 2004.
[19] Deva Ramanan, D.A. Forsyth, and A. Zisserman. Strike a pose: Tracking people by finding stylized poses. In IEEE Conf. on Computer Vision and Pattern Recognition, 2005.
[20] D. Ramanan and D.A. Forsyth. Using temporal coherence to build models of animals. In Proc. ICCV, 2003.
[21] D. Ramanan. Learning to parse images of articulated objects. In Proc. NIPS, 2006.
[22] R. Ronfard, C. Schmid, and B. Triggs. Learning to parse pictures of people. In European Conference on Computer Vision, page IV: 700 ff., 2002.
[23] K. Mikolajczyk, C. Schmid, and A. Zisserman. Human detection based on a probabilistic assembly of robust part detectors. In European Conference on Computer Vision, pages Vol I: 69–82, 2004.
[24] A. Micilotta, E. Ong, and R. Bowden. Detection and tracking of humans by probabilistic body part assembly. In British Machine Vision Conference, volume 1, pages 429–438, 2005.
[25] B. Leibe, E. Seemann, and B. Schiele. Pedestrian detection in crowded scenes. In IEEE Conf. on Computer Vision and Pattern Recognition, pages I: 878–885, 2005.
[26] Pedro F. Felzenszwalb and Daniel P. Huttenlocher. Efficient matching of pictorial structures. In IEEE Conf. on Computer Vision and Pattern Recognition, 2000.
[27] B. Taskar. Learning Structured Prediction Models: A Large Margin Approach. PhD thesis, Stanford University, 2004.
[28] B. Taskar, S. Lacoste-Julien, and M. Jordan. Structured prediction via the extragradient method. In Neural Information Processing Systems Conference, 2005.
[29] N. Ratliff, J. A. Bagnell, and M. Zinkevich. Subgradient methods for maximum margin structured learning. In ICML 2006 Workshop on Learning in Structured Output Spaces, 2006.
[30] N.Z. Shor. Minimization Methods for Non-Differentiable Functions and Applications. 1985.
[31] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: a library for support vector machines, 2001.
2,438 | 3,211 | Using Deep Belief Nets to Learn Covariance Kernels
for Gaussian Processes
Ruslan Salakhutdinov and Geoffrey Hinton
Department of Computer Science, University of Toronto
6 King's College Rd, M5S 3G4, Canada
{rsalakhu,hinton}@cs.toronto.edu
Abstract
We show how to use unlabeled data and a deep belief net (DBN) to learn a good
covariance kernel for a Gaussian process. We first learn a deep generative model
of the unlabeled data using the fast, greedy algorithm introduced by [7]. If the
data is high-dimensional and highly-structured, a Gaussian kernel applied to the
top layer of features in the DBN works much better than a similar kernel applied
to the raw input. Performance at both regression and classification can then be
further improved by using backpropagation through the DBN to discriminatively
fine-tune the covariance kernel.
1 Introduction
Gaussian processes (GP's) are a widely used method for Bayesian non-linear non-parametric regression and classification [13, 16]. GP's are based on defining a similarity or kernel function that
encodes prior knowledge of the smoothness of the underlying process that is being modeled. Because of their flexibility and computational simplicity, GP's have been successfully used in many
areas of machine learning.
Many real-world applications are characterized by high-dimensional, highly-structured data with a
large supply of unlabeled data but a very limited supply of labeled data. Applications such as information retrieval and machine vision are examples where unlabeled data is readily available. GP's
are discriminative models by nature and within the standard regression or classification scenario,
unlabeled data is of no use. Given a set of i.i.d. labeled input vectors X_l = \{x_n\}_{n=1}^N and their
associated target labels \{y_n\}_{n=1}^N in R or \{y_n\}_{n=1}^N in \{-1, 1\} for regression/classification, GP's
model p(y_n | x_n) directly. Unless some assumptions are made about the underlying distribution of
the input data X = [X_l, X_u], unlabeled data, X_u, cannot be used. Many researchers have tried to
use unlabeled data by incorporating a model of p(X). For classification tasks, [11] model p(X) as
a mixture \sum_{y_n} p(x_n|y_n) p(y_n) and then infer p(y_n|x_n), [15] attempts to learn covariance kernels
based on p(X), and [10] assumes that the decision boundaries should occur in regions where the
data density, p(X), is low. When faced with high-dimensional, highly-structured data, however,
In this paper we exploit two properties of DBN?s. First, they can be learned efficiently from unlabeled data and the top-level features generally capture significant, high-order correlations in the data.
Second, they can be discriminatively fine-tuned using backpropagation. We first learn a DBN model
of p(X) in an entirely unsupervised way using the fast, greedy learning algorithm introduced by [7]
and further investigated in [2, 14, 6]. We then use this generative model to initialize a multi-layer,
non-linear mapping F (x|W ), parameterized by W , with F : X ? Z mapping the input vectors in
X into a feature space Z. Typically the mapping F (x|W ) will contain millions of parameters. The
top-level features produced by this mapping allow fairly accurate reconstruction of the input, so they
must contain most of the information in the input vector, but they express this information in a way
that makes explicit a lot of the higher-order structure in the input data.
After learning F(x|W), a natural way to define a kernel function is to set K(x_i, x_j) =
\exp( -||F(x_i|W) - F(x_j|W)||^2 ). Note that the kernel is initialized in an entirely unsupervised
way. The parameters W of the covariance kernel can then be fine-tuned using the labeled data by
maximizing the log probability of the labels with respect to W . In the final model most of the information for learning a covariance kernel will have come from modeling the input data. The very
limited information in the labels will be used only to slightly adjust the layers of features already
discovered by the DBN.
2 Gaussian Processes for Regression and Binary Classification
For a regression task, we are given a data set D of i.i.d. labeled input vectors X_l = \{x_n\}_{n=1}^N and
their corresponding target labels \{y_n\}_{n=1}^N in R. We are interested in the following probabilistic
regression model:
    y_n = f(x_n) + \epsilon,    \epsilon ~ N(\epsilon | 0, \sigma^2)    (1)
A Gaussian process regression places a zero-mean GP prior over the underlying latent function f
we are modeling, so that a priori p(f | X_l) = N(f | 0, K), where f = [f(x_1), ..., f(x_N)]^T and K is the
covariance matrix, whose entries are specified by the covariance function K_{ij} = K(x_i, x_j). The
covariance function encodes our prior notion of the smoothness of f, or the prior assumption that
if two input vectors are similar according to some distance measure, their labels should be highly
correlated. In this paper we will use the spherical Gaussian kernel, parameterized by \theta = \{\alpha, \beta\}:
    K_{ij} = \alpha \exp( -(1/(2\beta)) (x_i - x_j)^T (x_i - x_j) )    (2)
Integrating out the function values f, the marginal log-likelihood takes the form:
    L = \log p(y | X_l) = -(N/2) \log 2\pi - (1/2) \log|K + \sigma^2 I| - (1/2) y^T (K + \sigma^2 I)^{-1} y    (3)
which can then be maximized with respect to the parameters \theta and \sigma. Given a new test point x_*, a
prediction is obtained by conditioning on the observed data and \theta. The distribution of the predicted
value y_* at x_* takes the form:
    p(y_* | x_*, D, \theta, \sigma^2) = N( y_* | k_*^T (K + \sigma^2 I)^{-1} y,  k_{**} - k_*^T (K + \sigma^2 I)^{-1} k_* + \sigma^2 )    (4)
where k_* = K(x_*, X_l) and k_{**} = K(x_*, x_*).
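As a concrete (if unofficial) illustration, Eqs. 2-4 translate into a few lines of NumPy; the function names, the Cholesky-based solve, and the test-input handling below are our own choices rather than anything prescribed by the text:

import numpy as np

def sq_exp_kernel(A, B, alpha, beta):
    # Spherical Gaussian kernel of Eq. 2: alpha * exp(-||a - b||^2 / (2 * beta)).
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return alpha * np.exp(-sq / (2.0 * beta))

def gp_predict(X, y, X_star, alpha, beta, noise_var):
    # Predictive mean and variance of Eq. 4 at the test inputs X_star.
    K = sq_exp_kernel(X, X, alpha, beta) + noise_var * np.eye(len(X))
    k_star = sq_exp_kernel(X, X_star, alpha, beta)        # N x M cross-covariances
    L = np.linalg.cholesky(K)
    a = np.linalg.solve(L.T, np.linalg.solve(L, y))       # (K + sigma^2 I)^{-1} y
    mean = k_star.T @ a
    v = np.linalg.solve(L, k_star)
    var = alpha - (v ** 2).sum(axis=0) + noise_var        # k_** = alpha for this kernel
    return mean, var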
For a binary classification task, we similarly place a zero-mean GP prior over the underlying latent
function f, which is then passed through the logistic function g(x) = 1/(1 + \exp(-x)) to define a
prior p(y_n = 1 | x_n) = g(f(x_n)). Given a new test point x_*, inference is done by first obtaining the
distribution over the latent function f_* = f(x_*):
    p(f_* | x_*, D) = \int p(f_* | x_*, X_l, f) p(f | X_l, y) df    (5)
which is then used to produce a probabilistic prediction:
    p(y_* = 1 | x_*, D) = \int g(f_*) p(f_* | x_*, D) df_*    (6)
The non-Gaussian likelihood makes the integral in Eq. 5 analytically intractable. In our experiments,
we approximate the non-Gaussian posterior p(f | X_l, y) with a Gaussian one using expectation propagation [12]. For more thorough reviews and implementation details refer to [13, 16].
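The EP details are beyond this summary, but once EP has produced a Gaussian approximation N(f_* | mu_*, sigma_*^2) to the latent posterior at a test point, Eq. 6 reduces to a one-dimensional integral that can be approximated with simple quadrature; the grid-based scheme below is our own shortcut, not part of the paper:

import numpy as np

def predict_class_prob(mu_star, var_star, n_grid=201):
    # Approximate Eq. 6 by integrating the logistic link against the
    # Gaussian approximation to p(f_* | x_*, D) on a finite grid.
    sd = np.sqrt(var_star)
    f = np.linspace(mu_star - 6 * sd, mu_star + 6 * sd, n_grid)
    dens = np.exp(-0.5 * ((f - mu_star) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    return np.trapz(dens / (1.0 + np.exp(-f)), f)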
3 Learning Deep Belief Networks (DBN's)
In this section we describe an unsupervised way of learning a DBN model of the input data X =
[X_l, X_u] that contains both labeled and unlabeled data sets. A DBN can be trained efficiently by
using a Restricted Boltzmann Machine (RBM) to learn one layer of hidden features at a time [7].
Welling et al. [18] introduced a class of two-layer undirected graphical models that generalize
RBM's to exponential family distributions. This framework will allow us to model real-valued
images of face patches and word-count vectors of documents.
3.1 Modeling Real-valued Data
We use a conditional Gaussian distribution for modeling observed "visible" pixel values x (e.g.
images of faces) and a conditional Bernoulli distribution for modeling "hidden" features h (Fig. 1):
    p(x_i = x | h) = (1/(\sqrt{2\pi}\sigma_i)) \exp( -(x - b_i - \sigma_i \sum_j h_j w_{ij})^2 / (2\sigma_i^2) )    (7)
    p(h_j = 1 | x) = g( b_j + \sum_i w_{ij} x_i/\sigma_i )    (8)
[Figure 1: diagram of the generalized RBM, the pretraining stack of RBMs with weight matrices W1, W2, W3 and layers of 1000 binary hidden features over Gaussian visible units x, and the resulting feature representation F(X|W) feeding a GP that predicts the target y; see caption below.]
Figure 1: Left panel: Markov random field of the generalized RBM. The top layer represents stochastic binary
hidden features h and and the bottom layer is composed of linear visible units x with Gaussian noise. When
using a Constrained Poisson Model, the top layer represents stochastic binary latent topic features h and the
bottom layer represents the Poisson visible word-count vector x. Middle panel: Pretraining consists of learning
a stack of RBM's. Right panel: After pretraining, the RBM's are used to initialize a covariance function of the
Gaussian process, which is then fine-tuned by backpropagation.
where g(x) = 1/(1 + \exp(-x)) is the logistic function, w_{ij} is a symmetric interaction term between
input i and feature j, \sigma_i^2 is the variance of input i, and b_i, b_j are biases. The marginal distribution
over the visible vector x is:
    p(x) = \sum_h \exp(-E(x, h)) / \int_u \sum_g \exp(-E(u, g)) du    (9)
where E(x, h) is an energy term: E(x, h) = \sum_i (x_i - b_i)^2/(2\sigma_i^2) - \sum_j b_j h_j - \sum_{i,j} h_j w_{ij} x_i/\sigma_i. The parameter updates required to perform gradient ascent in the log-likelihood are obtained from Eq. 9:
    \Delta w_{ij} = \eta \partial \log p(x)/\partial w_{ij} = \eta ( <z_i h_j>_{data} - <z_i h_j>_{model} )    (10)
where \eta is the learning rate, z_i = x_i/\sigma_i, <.>_{data} denotes an expectation with respect to the data
distribution and <.>_{model} is an expectation with respect to the distribution defined by the model.
To circumvent the difficulty of computing <.>_{model}, we use 1-step Contrastive Divergence [5]:
    \Delta w_{ij} = \eta ( <z_i h_j>_{data} - <z_i h_j>_{recon} )    (11)
The expectation <z_i h_j>_{data} defines the expected sufficient statistics of the data distribution and
is computed as z_i p(h_j = 1 | x) when the features are being driven by the observed data from the
training set using Eq. 8. After stochastically activating the features, Eq. 7 is used to "reconstruct"
real-valued data. Then Eq. 8 is used again to activate the features and compute <z_i h_j>_{recon} when
the features are being driven by the reconstructed data. Throughout our experiments we set the variances
\sigma_i^2 = 1 for all visible units i, which facilitates learning. The learning rule for the biases is just a
simplified version of Eq. 11.
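A minimal sketch of one CD-1 update for this Gaussian-Bernoulli RBM, fixing \sigma_i = 1 so that z_i = x_i as in the experiments; the batching and learning-rate choices below are ours, not the paper's:

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def cd1_step(X, W, b_vis, b_hid, lr=1e-3):
    # One CD-1 update (Eq. 11). X: (n, d) real-valued visibles; W: (d, n_hid).
    p_h = sigmoid(X @ W + b_hid)                       # drive hiddens by the data (Eq. 8)
    h = (rng.random(p_h.shape) < p_h).astype(X.dtype)  # stochastic binary hiddens
    X_recon = h @ W.T + b_vis                          # Gaussian means of Eq. 7 "reconstruct" x
    p_h_recon = sigmoid(X_recon @ W + b_hid)           # drive hiddens by the reconstruction
    W += lr * (X.T @ p_h - X_recon.T @ p_h_recon) / len(X)
    b_vis += lr * (X - X_recon).mean(axis=0)
    b_hid += lr * (p_h - p_h_recon).mean(axis=0)
    return W, b_vis, b_hid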
3.2 Modeling Count Data with the Constrained Poisson Model
We use a conditional "constrained" Poisson distribution for modeling observed "visible" word-count
data x and a conditional Bernoulli distribution for modeling "hidden" topic features h:
    p(x_i = n | h) = Pois( n, N \exp(\lambda_i + \sum_j h_j w_{ij}) / \sum_k \exp(\lambda_k + \sum_j h_j w_{kj}) ),
    p(h_j = 1 | x) = g( b_j + \sum_i w_{ij} x_i )    (12)
where Pois(n, \lambda) = e^{-\lambda} \lambda^n / n!, w_{ij} is a symmetric interaction term between word i and feature
j, N = \sum_i x_i is the total length of the document, \lambda_i is the bias of the conditional Poisson model
for word i, and b_j is the bias of feature j. The Poisson rate, whose log is shifted by the weighted
combination of the feature activations, is normalized and scaled up by N. We call this the "Constrained Poisson Model" since it ensures that the mean Poisson rates across all words sum up to the
length of the document. This normalization is significant because it makes learning stable and it
deals appropriately with documents of different lengths.
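The normalization is easy to state in code: the rates are a softmax over words scaled by the document length N. A sketch, with variable names of our choosing:

import numpy as np

def constrained_poisson_rates(x, h, W, lam):
    # Mean Poisson rates of Eq. 12 for a word-count vector x (one entry per word).
    # W: (n_words, n_hid) weights; lam: (n_words,) word biases; h: (n_hid,) features.
    N = x.sum()
    logits = lam + h @ W.T
    p = np.exp(logits - logits.max())   # numerically stable softmax over words
    p /= p.sum()
    return N * p                        # rates sum to N by construction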
The marginal distribution over visible count vectors x is given in Eq. 9 with an "energy" given by
    E(x, h) = -\sum_i \lambda_i x_i + \sum_i \log(x_i!) - \sum_j b_j h_j - \sum_{i,j} x_i h_j w_{ij}    (13)
The gradient of the log-likelihood function is:
    \Delta w_{ij} = \eta \partial \log p(v)/\partial w_{ij} = \eta ( <x_i h_j>_{data} - <x_i h_j>_{model} )    (14)
3.3 Greedy Recursive Learning of Deep Belief Nets
A single layer of binary features is not the best way to capture the structure in the input data. We
now describe an efficient way to learn additional layers of binary features.
After learning the first layer of hidden features we have an undirected model that defines p(v, h)
by defining a consistent pair of conditional probabilities, p(h|v) and p(v|h) which can be used to
sample from the model distribution. A different way to express what has been learned is p(v|h) and
p(h). Unlike a standard, directed model, this p(h) does not have its own separate parameters. It is a
complicated, non-factorial prior on h that is defined implicitly by p(h|v) and p(v|h). This peculiar
decomposition into p(h) and p(v|h) suggests a recursive algorithm: keep the learned p(v|h) but
replace p(h) by a better prior over h, i.e. a prior that is closer to the average, over all the data
vectors, of the conditional posterior over h. So after learning an undirected model, the part we keep
is part of a multilayer directed model.
We can sample from this average conditional posterior by simply using p(h|v) on the training data
and these samples are then the ?data? that is used for training the next layer of features. The only
difference from learning the first layer of features is that the "visible" units of the second-level RBM
are also binary [6, 3]. The learning rule provided in the previous section remains the same [5].
We could initialize the new RBM model by simply using the existing learned model but with the
roles of the hidden and visible units reversed. This ensures that p(v) in our new model starts out
being exactly the same as p(h) in our old one. Provided the number of features per layer does not
decrease, [7] show that each extra layer increases a variational lower bound on the log probability
of data. To suppress noise in the learning signal, we use the real-valued activation probabilities for
the visible units of every RBM, but to prevent hidden units from transmitting more than one bit of
information from the data to its reconstruction, the pretraining always uses stochastic binary values
for the hidden units.
The greedy, layer-by-layer training can be repeated several times to learn a deep, hierarchical model
in which each layer of features captures strong high-order correlations between the activities of
features in the layer below.
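The recursion is mechanical enough to sketch. The train_rbm helper below is hypothetical (it could be built on the CD-1 step sketched in Section 3.1), and passing the real-valued activation probabilities up as the next layer's data follows the treatment of visible units described above:

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def pretrain_dbn(X, layer_sizes, train_rbm, n_epochs=10):
    # Greedy layer-by-layer pretraining: each trained RBM's hidden
    # probabilities become the "data" for the next RBM in the stack.
    data, stack = X, []
    for n_hid in layer_sizes:
        W, b_hid = train_rbm(data, n_hid, n_epochs)  # assumed helper
        stack.append((W, b_hid))
        data = sigmoid(data @ W + b_hid)             # p(h = 1 | v)
    return stack

def dbn_features(X, stack):
    # The deterministic multi-layer mapping F(x|W) used later for the kernel.
    for W, b_hid in stack:
        X = sigmoid(X @ W + b_hid)
    return X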
4 Learning the Covariance Kernel for a Gaussian Process
After pretraining, the stochastic activities of the binary features in each layer are replaced by deterministic, real-valued probabilities and the DBN is used to initialize a multi-layer, non-linear mapping F(x|W) as shown in Figure 1. We define a Gaussian covariance function, parameterized by
\theta = \{\alpha, \beta\} and W, as:
    K_{ij} = \alpha \exp( -(1/(2\beta)) ||F(x_i|W) - F(x_j|W)||^2 )    (15)
Note that this covariance function is initialized in an entirely unsupervised way. We can now maximize the log-likelihood of Eq. 3 with respect to the parameters of the covariance function using the
labeled training data [9]. The derivative of the log-likelihood with respect to the kernel function is:
    \partial L/\partial K_y = (1/2) ( K_y^{-1} y y^T K_y^{-1} - K_y^{-1} )    (16)
where K_y = K + \sigma^2 I is the covariance matrix. Using the chain rule we readily obtain the necessary
gradients:
    \partial L/\partial \theta = (\partial L/\partial K_y)(\partial K_y/\partial \theta)  and  \partial L/\partial W = (\partial L/\partial K_y)(\partial K_y/\partial F(x|W))(\partial F(x|W)/\partial W)    (17)
(17)
[Figure 2: panels A and B showing labeled training patches (example orientation labels: -22.07, 32.99, -41.15, 66.38, 27.49), unlabeled patches, and test patches; see caption below.]
Figure 2: Top panel A: Randomly sampled examples of the training and test data. Bottom panel B: The same
sample of the training and test images but with rectangular occlusions.
                      GPstandard        GP-DBNgreedy      GP-DBNfine        GPpca
    Training labels   Sph.     ARD      Sph.     ARD      Sph.    ARD       Sph.         ARD
  A 100               22.24    28.57    17.94    18.37    15.28   15.01     18.13 (10)   16.47 (10)
  A 500               17.25    18.16    12.71     8.96     7.25    6.84     14.75 (20)   10.53 (80)
  A 1000              16.33    16.36    11.22     8.77     6.42    6.31     14.86 (20)   10.00 (160)
  B 100               26.94    28.32    23.15    19.42    19.75   18.59     25.91 (10)   19.27 (20)
  B 500               20.20    21.06    15.16    11.01    10.56   10.12     17.67 (10)   14.11 (20)
  B 1000              19.20    17.98    14.15    10.43     9.13    9.23     16.26 (10)   11.55 (80)
Table 1: Performance results on the face-orientation regression task. The root mean squared error (RMSE) on
the test set is shown for each method using spherical Gaussian kernel and Gaussian kernel with ARD hyperparameters. By row: A) Non-occluded face data, B) Occluded face data. For the GPpca model, the number of
principal components that performs best on the test data is shown in parenthesis.
where \partial F(x|W)/\partial W is computed using standard backpropagation. We also optimize the observation noise \sigma^2. It is necessary to compute the inverse of K_y, so each gradient evaluation has O(N^3)
complexity, where N is the number of labeled training cases. When learning the restricted Boltzmann machines that are composed to form the initial DBN, however, each gradient evaluation scales
linearly in time and space with the number of unlabeled training cases. So the pretraining stage
can make efficient use of very large sets of unlabeled data to create sensible, high-level features even
when the amount of labeled data is small. Then the very limited amount of information in the labels
can be used to slightly refine those features rather than to create them.
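Eqs. 15 and 16 translate directly into code; the chain-rule step of Eq. 17, which backpropagates \partial L/\partial W through the DBN, is omitted here. A sketch:

import numpy as np

def kernel_from_features(F, alpha, beta):
    # Eq. 15 on the top-level DBN feature vectors F (N x k).
    sq = ((F[:, None, :] - F[None, :, :]) ** 2).sum(-1)
    return alpha * np.exp(-sq / (2.0 * beta))

def dL_dKy(Ky, y):
    # Eq. 16: gradient of the log marginal likelihood w.r.t. K_y = K + sigma^2 I.
    Ky_inv = np.linalg.inv(Ky)   # the O(N^3) step noted in the text
    a = Ky_inv @ y
    return 0.5 * (np.outer(a, a) - Ky_inv)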
5 Experimental Results
In this section we present experimental results for several regression and classification tasks that
involve high-dimensional, highly-structured data. The first regression task is to extract the orientation of a face from a gray-level image of a large patch of the face. The second regression task is
to map images of handwritten digits to a single real-value that is as close as possible to the integer
represented by the digit in the image. The first classification task is to discriminate between images
of odd digits and images of even digits. The second classification task is to discriminate between
two different classes of news story based on the vector of word counts in each story.
5.1 Extracting the Orientation of a Face Patch
The Olivetti face data set contains ten 64x64 images of each of forty different people. We constructed a data set of 13,000 28x28 images by randomly rotating (-90 to +90 degrees), cropping, and
subsampling the original 400 images. The data set was then subdivided into 12,000 training images,
which contained the first 30 people, and 1,000 test images, which contained the remaining 10 people. 1,000 randomly sampled face patches from the training set were assigned an orientation label.
The remaining 11,000 training images were used as unlabeled data. We also made a more difficult
version of the task by occluding part of each face patch with randomly chosen rectangles. Panel A
of figure 2 shows randomly sampled examples from the training and test data.
For training on the Olivetti face patches we used the 784-1000-1000-1000 architecture shown in
figure 1. The entire training set of 12,000 unlabeled images was used for greedy, layer-by-layer
training of a DBN model. The 2.8 million parameters of the DBN model may seem excessive for
12,000 training cases, but each training case involves modeling 784 real values (the 28x28 pixels) rather than just a
single real-valued label. Also, we only train each layer of features for a few passes through the
training data and we penalize the squared weights.
[Figure 3: left, a scatter plot of test images in the space of Feature 992 (x-axis) vs. Feature 312 (y-axis); right, histograms of log beta for the input pixel space (top) and the learned feature space (bottom), where smaller log beta marks more relevant dimensions; see caption below.]
Figure 3: Left panel shows a scatter plot of the two most relevant features, with each point replaced by the
corresponding input test image. For better visualization, overlapped images are not shown. Right panel displays
the histogram plots of the learned ARD hyper-parameters log \beta.
After the DBN has been pretrained on the unlabeled data, a GP model was fitted to the labeled
data using the top-level features of the DBN model as inputs. We call this model GP-DBNgreedy.
GP-DBNgreedy can be significantly improved by slightly altering the weights in the DBN. The
GP model gives error derivatives for its input vectors which are the top-level features of the DBN.
These derivatives can be backpropagated through the DBN to allow discriminative fine-tuning of
the weights. Each time the weights in the DBN are updated, the GP model is also refitted. We call
this model GP-DBNfine. For comparison, we fitted a GP model that used the pixel intensities of
the labeled images as its inputs. We call this model GPstandard. We also used PCA to reduce the
dimensionality of the labeled images and fitted several different GP models using the projections
onto the first m principal components as the input. Since we only want a lower bound on the error
of this model, we simply use the value of m that performs best on the test data. We call this model
GPpca. Table 1 shows the root mean squared error (RMSE) of the predicted face orientations using
all four types of GP model on varying amounts of labeled data. The results show that both GP-DBNgreedy and GP-DBNfine significantly outperform a regular GP model. Indeed, GP-DBNfine
with only 100 labeled training cases outperforms GPstandard with 1000.
To test the robustness of our approach to noise in the input we took the same data set and created
artificial rectangular occlusions (see Fig. 2, panel B). The number of rectangles per image was
drawn from a Poisson with \lambda = 2. The top-left location, length, and width of each rectangle were
sampled from a uniform [0, 25]. The pixel intensity of each occluding rectangle was set to the mean
pixel intensity of the entire image. Table 1 shows that the performance of all models degrades, but
their relative performances remain the same and GP-DBNfine on occluded data is still much better
than GPstandard on non-occluded data.
We have also experimented with using a Gaussian kernel with ARD hyper-parameters, which is a
common practice when the input vectors are high-dimensional:
    K_{ij} = \alpha \exp( -(1/2) (x_i - x_j)^T D (x_i - x_j) )    (18)
where D is the diagonal matrix with D_{ii} = 1/\beta_i, so that the covariance function has a separate
length-scale parameter for each dimension. ARD hyper-parameters were optimized by maximizing
the marginal log-likelihood of Eq. 3. Table 1 shows that ARD hyper-parameters do not improve
GPstandard, but they do slightly improve GP-DBNfine and they strongly improve GP-DBNgreedy
and GPpca when there are 500 or 1000 labeled training cases.
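For concreteness, Eq. 18 with a per-dimension length-scale \beta_i can be sketched as follows (the vectorized formulation is our own):

import numpy as np

def ard_kernel(A, B, alpha, beta):
    # Eq. 18 with D = diag(1 / beta_i): beta is a vector of per-dimension length-scales.
    diff = A[:, None, :] - B[None, :, :]
    sq = (diff ** 2 / beta).sum(-1)
    return alpha * np.exp(-0.5 * sq)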
The histogram plot of log \beta in Figure 3 reveals that there are a few extracted features that are very
relevant (small \beta) to our prediction task. The same figure (left panel) shows a scatter plot of the two
most relevant features of the GP-DBNgreedy model, with each point replaced by the corresponding input test image. Clearly, these two features carry a lot of information about the orientation of the face.
                    GPstandard          GP-DBNgreedy        GP-DBNfine          GPpca
    Train labels    Sph.      ARD       Sph.      ARD       Sph.      ARD       Sph.          ARD
  A 100             1.86      2.27      1.68      1.61      1.63      1.58      1.73 (20)     2.00 (20)
  A 500             1.42      1.62      1.19      1.27      1.16      1.22      1.32 (40)     1.36 (20)
  A 1000            1.25      1.36      1.07      1.14      1.03      1.10      1.19 (40)     1.22 (80)
  B 100             0.0884    0.1087    0.0528    0.0597    0.0501    0.0599    0.0785 (10)   0.0920 (10)
  B 500             0.0222    0.0541    0.0100    0.0161    0.0055    0.0104    0.0160 (40)   0.0235 (20)
  B 1000            0.0129    0.0385    0.0058    0.0059    0.0050    0.0100    0.0091 (40)   0.0127 (40)
Table 2: Performance results on the digit magnitude regression task (A) and the odd vs. even digit classification task (B). The root mean squared error for the regression task on the test set is shown for each
method. For the classification task the area under the ROC (AUROC) metric is used; for each method we show
1-AUROC on the test set. All methods were tried using both a spherical Gaussian kernel and a Gaussian kernel
with ARD hyper-parameters. For the GPpca model, the number of principal components that performs best on
the test data is shown in parentheses.
  Number of labeled cases
  (50% in each class)        GPstandard   GP-DBNgreedy   GP-DBNfine
  100                        0.1295       0.1180         0.0995
  500                        0.0875       0.0793         0.0609
  1000                       0.0645       0.0580         0.0458
Table 3: Performance results using the area under the ROC (AUROC) metric on the text classification task.
For each method we show 1-AUROC on the test set.
We suspect that the GP-DBNfine model does not benefit as much from the ARD hyper-parameters
because the fine-tuning stage is already capable of turning down the activities of irrelevant top-level
features.
5.2 Extracting the Magnitude Represented by a Handwritten Digit and Discriminating
between Images of Odd and Even Digits
The MNIST digit data set contains 60,000 training and 10,000 test 28x28 images of ten handwritten
digits (0 to 9). 100 randomly sampled training images of each class were assigned a magnitude label.
The remaining 59,000 training images were used as unlabeled data. As in the previous experiment,
we used the 784-1000-1000-1000 architecture with the entire training set of 60,000 unlabeled digits
being used for greedily pretraining the DBN model. Table 2, panel A, shows that GP-DBNfine and
GP-DBNgreedy perform considerably better than GPstandard both with and without ARD hyperparameters. The same table, panel B, shows results for the classification task of discriminating between images of odd and images of even digits. We used the same labeled training set, but with each
digit categorized into an even or an odd class. The same DBN model was used, so the Gaussian covariance function was initialized in exactly the same way for both regression and classification tasks.
The performance of GP-DBNgreedy demonstrates that the greedily learned feature representation
captures a lot of structure in the unlabeled input data which is useful for subsequent discrimination
tasks, even though these tasks are unknown when the DBN is being trained.
5.3 Classifying News Stories
The Reuters Corpus Volume II is an archive of 804,414 newswire stories. The corpus covers four
major groups: Corporate/Industrial, Economics, Government/Social, and Markets. The data was
randomly split into 802,414 training and 2000 test articles. The test set contains 500 articles of each
major group. The available data was already in a convenient, preprocessed format, where common
stopwords were removed and all the remaining words were stemmed. We only made use of the 2000
most frequently used word stems in the training data. As a result, each document was represented
as a vector containing 2000 word counts. No other preprocessing was done.
For the text classification task we used a 2000-1000-1000-1000 architecture. The entire unlabeled
training set of 802,414 articles was used for learning a multilayer generative model of the text documents. The bottom layer of the DBN was trained using a Constrained Poisson Model. Table 3 shows
the area under the ROC curve for classifying documents belonging to the Corporate/Industrial vs.
Economics groups. As expected, GP-DBNfine and GP-DBNgreedy work better than GPstandard.
The results of binary discrimination between other pairs of document classes are very similar to the
results presented in Table 3. Our experiments using a Gaussian kernel with ARD hyper-parameters
did not show any significant improvements. Examining the histograms of the length-scale parameters \beta, we found that most of the input word-counts as well as most of the extracted features were
relevant to the classification task.
6 Conclusions and Future Research
In this paper we have shown how to use Deep Belief Networks to greedily pretrain and discriminatively fine-tune a covariance kernel for a Gaussian Process. The discriminative fine-tuning produces
an additional improvement in performance that is comparable in magnitude to the improvement produced by using the greedily pretrained DBN. For high-dimensional, highly-structured data, this is
an effective way to make use of large unlabeled data sets, especially when labeled training data is
scarce. Greedily pretrained DBN's can also be used to provide input vectors for other kernel-based
methods, including SVMs [17, 8] and kernel regression [1], and our future research will concentrate
on comparing our method to other kernel-based semi-supervised learning algorithms [4, 19].
Acknowledgments
We thank Radford Neal for many helpful suggestions. This research was supported by NSERC, CFI
and OTI. GEH is a fellow of CIAR and holds a CRC chair.
References
[1] J. K. Benedetti. On the nonparametric estimation of regression functions. Journal of the Royal Statistical Society, Series B, 39:248–253, 1977.
[2] Y. Bengio and Y. Le Cun. Scaling learning algorithms towards AI. In L. Bottou, O. Chapelle, D. DeCoste, and J. Weston, editors, Large-Scale Kernel Machines. MIT Press, 2007.
[3] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. In Advances in Neural Information Processing Systems, 2006.
[4] O. Chapelle, B. Schölkopf, and A. Zien. Semi-Supervised Learning. MIT Press, 2006.
[5] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771–1800, 2002.
[6] G. E. Hinton and R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313, 2006.
[7] Geoffrey E. Hinton, Simon Osindero, and Yee Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
[8] F. Lauer, C. Y. Suen, and G. Bloch. A trainable feature extractor for handwritten digit recognition. Pattern Recognition, 40(6):1816–1824, 2007.
[9] N. D. Lawrence and J. Quiñonero Candela. Local distance preservation in the GP-LVM through back constraints. In William W. Cohen and Andrew Moore, editors, ICML, volume 148, pages 513–520. ACM, 2006.
[10] N. D. Lawrence and M. I. Jordan. Semi-supervised learning via Gaussian processes. In NIPS, 2004.
[11] N. D. Lawrence and B. Schölkopf. Estimating a kernel Fisher discriminant in the presence of label noise. In Proc. 18th International Conf. on Machine Learning, pages 306–313. Morgan Kaufmann, San Francisco, CA, 2001.
[12] T. P. Minka. Expectation propagation for approximate Bayesian inference. In Jack Breese and Daphne Koller, editors, UAI, pages 362–369, San Francisco, CA, 2001. Morgan Kaufmann Publishers.
[13] C. E. Rasmussen and C. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2006.
[14] R. Salakhutdinov and G. E. Hinton. Learning a nonlinear embedding by preserving class neighbourhood structure. In AI and Statistics, 2007.
[15] M. Seeger. Covariance kernels from Bayesian generative models. In Thomas G. Dietterich, Suzanna Becker, and Zoubin Ghahramani, editors, NIPS, pages 905–912. MIT Press, 2001.
[16] M. Seeger. Gaussian processes for machine learning. Int. J. Neural Syst., 14(2):69–106, 2004.
[17] V. Vapnik. Statistical Learning Theory. Wiley, 1998.
[18] M. Welling, M. Rosen-Zvi, and G. Hinton. Exponential family harmoniums with an application to information retrieval. In NIPS 17, pages 1481–1488, Cambridge, MA, 2005. MIT Press.
[19] Xiaojin Zhu, Jaz S. Kandola, Zoubin Ghahramani, and John D. Lafferty. Nonparametric transforms of graph kernels for semi-supervised learning. In NIPS, 2004.
2,439 | 3,212 | Learning Bounds for Domain Adaptation
John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman
Department of Computer and Information Science
University of Pennsylvania, Philadelphia, PA 19146
{blitzer,crammer,kulesza,pereira,wortmanj}@cis.upenn.edu
Abstract
Empirical risk minimization offers well-known learning guarantees when training
and test data come from the same domain. In the real world, though, we often
wish to adapt a classifier from a source domain with a large amount of training
data to different target domain with very little training data. In this work we give
uniform convergence bounds for algorithms that minimize a convex combination
of source and target empirical risk. The bounds explicitly model the inherent
trade-off between training on a large but inaccurate source data set and a small but
accurate target training set. Our theory also gives results when we have multiple
source domains, each of which may have a different number of instances, and we
exhibit cases in which minimizing a non-uniform combination of source risks can
achieve much lower target error than standard empirical risk minimization.
1 Introduction
Domain adaptation addresses a common situation that arises when applying machine learning to diverse data. We have ample data drawn from a source domain to train a model, but little or no training
data from the target domain where we wish to use the model [17, 3, 10, 5, 9]. Domain adaptation
questions arise in nearly every application of machine learning. In face recognition systems, training
images are obtained under one set of lighting or occlusion conditions while the recognizer will be
used under different conditions [14]. In speech recognition, acoustic models trained by one speaker
need to be used by another [12]. In natural language processing, part-of-speech taggers, parsers,
and document classifiers are trained on carefully annotated training sets, but applied to texts from
different genres or styles [7, 6].
While many domain-adaptation algorithms have been proposed, there are only a few theoretical
studies of the problem [3, 10]. Those studies focus on the case where training data is drawn from a
source domain and test data is drawn from a different target domain. We generalize this approach
to the case where we have some labeled data from the target domain in addition to a large amount
of labeled source data. Our main result is a uniform convergence bound on the true target risk
of a model trained to minimize a convex combination of empirical source and target risks. The
bound describes an intuitive tradeoff between the quantity of the source data and the accuracy of
the target data, and under relatively weak assumptions we can compute it from finite labeled and
unlabeled samples of the source and target distributions. We use the task of sentiment classification
to demonstrate that our bound makes correct predictions about model error with respect to a distance
measure between source and target domains and the number of training instances.
Finally, we extend our theory to the case in which we have multiple sources of training data, each
of which may be drawn according to a different distribution and may contain a different number
of instances. Several authors have empirically studied a special case of this in which each instance
is weighted separately in the loss function, and instance weights are set to approximate the target
domain distribution [10, 5, 9, 11]. We give a uniform convergence bound for algorithms that minimize a convex combination of multiple empirical source risks and we show that these algorithms
can outperform standard empirical risk minimization.
2 A Rigorous Model of Domain Adaptation
We formalize domain adaptation for binary classification as follows. A domain is a pair consisting
of a distribution D on X and a labeling function f : X -> [0, 1].^1 Initially we consider two domains,
a source domain <D_S, f_S> and a target domain <D_T, f_T>.
A hypothesis is a function h : X -> {0, 1}. The probability according to the distribution D_S that a
hypothesis h disagrees with a labeling function f (which can also be a hypothesis) is defined as
    \epsilon_S(h, f) = E_{x ~ D_S} [ |h(x) - f(x)| ].
When we want to refer to the risk of a hypothesis, we use the shorthand \epsilon_S(h) = \epsilon_S(h, f_S). We
write the empirical risk of a hypothesis on the source domain as \hat{\epsilon}_S(h). We use the parallel notation
\epsilon_T(h, f), \epsilon_T(h), and \hat{\epsilon}_T(h) for the target domain.
We measure the distance between two distributions D and D' using a hypothesis class-specific distance measure. Let H be a hypothesis class for instance space X, and A_H be the set of subsets
of X that are the support of some hypothesis in H. In other words, for every hypothesis h in H,
{x : x in X, h(x) = 1} in A_H. We define the distance between two distributions as:
    d_H(D, D') = 2 sup_{A in A_H} | Pr_D[A] - Pr_{D'}[A] |.
For our purposes, the distance d_H has an important advantage over more common means for comparing distributions such as L1 distance or the KL divergence: we can compute d_H from finite
unlabeled samples of the distributions D and D' when H has finite VC dimension [4]. Furthermore,
we can compute a finite-sample approximation to d_H by finding a classifier h in H that maximally
discriminates between (unlabeled) instances from D and D' [3].
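A sketch of that finite-sample approximation: train a classifier to separate the two unlabeled samples and convert its error into a distance. The choice of logistic regression, and of training error rather than held-out error, is ours, standing in for "the best discriminator in H":

import numpy as np
from sklearn.linear_model import LogisticRegression

def proxy_distance(U_source, U_target):
    # Empirical proxy for d_H: a domain discriminator with error err
    # yields the estimate 2 * (1 - 2 * err), following [3].
    X = np.vstack([U_source, U_target])
    d = np.concatenate([np.zeros(len(U_source)), np.ones(len(U_target))])
    clf = LogisticRegression(max_iter=1000).fit(X, d)
    err = 1.0 - clf.score(X, d)
    return 2.0 * (1.0 - 2.0 * err)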
For a hypothesis space H, we define the symmetric difference hypothesis space H \Delta H as
    H \Delta H = { h(x) XOR h'(x) : h, h' in H },
where XOR is the exclusive-or operator. Each hypothesis g in H \Delta H labels as positive all points x on which a
given pair of hypotheses in H disagree. We can then define A_{H \Delta H} in the natural way as the set of
all sets A such that A = {x : x in X, h(x) != h'(x)} for some h, h' in H. This allows us to define as
above a distance d_{H \Delta H} that satisfies the following useful inequality for any hypotheses h, h' in H,
which is straightforward to prove:
    | \epsilon_S(h, h') - \epsilon_T(h, h') | <= (1/2) d_{H \Delta H}(D_S, D_T).
We formalize the difference between labeling functions by measuring error relative to other hypotheses in our class. The ideal hypothesis minimizes combined source and target risk:
    h* = argmin_{h in H} [ \epsilon_S(h) + \epsilon_T(h) ].
We denote the combined risk of the ideal hypothesis by \lambda = \epsilon_S(h*) + \epsilon_T(h*). The ideal hypothesis
explicitly embodies our notion of adaptability. When the ideal hypothesis performs poorly, we
cannot expect to learn a good target classifier by minimizing source error.^2 On the other hand, for
the kinds of tasks mentioned in Section 1, we expect \lambda to be small. If this is the case, we can
reasonably approximate target risk using source risk and the distance between D_S and D_T.
We illustrate the kind of result available in this setting with the following bound on the target risk
in terms of the source risk, the difference between labeling functions f_S and f_T, and the distance
between the distributions D_S and D_T. This bound is essentially a restatement of the main theorem
of Ben-David et al. [3], with a small correction to the statement of their theorem.
^1 This notion of domain is not the domain of a function. To avoid confusion, we will always mean a specific
distribution and function pair when we say domain.
^2 Of course it is still possible that the source data contains relevant information about the target function even
when the ideal hypothesis performs poorly (suppose, for example, that f_S(x) = 1 if and only if f_T(x) = 0),
but a classifier trained using source data will perform poorly on data from the target domain in this case.
Theorem 1 Let H be a hypothesis space of VC-dimension d and U_S, U_T be unlabeled samples of
size m' each, drawn from D_S and D_T, respectively. Let \hat{d}_{H \Delta H} be the empirical distance on U_S,
U_T, induced by the symmetric difference hypothesis space. With probability at least 1 - \delta (over the
choice of the samples), for every h in H,
    \epsilon_T(h) <= \epsilon_S(h) + (1/2) \hat{d}_{H \Delta H}(U_S, U_T) + 4 sqrt( (2d log(2m') + log(4/\delta)) / m' ) + \lambda.
The corrected proof of this result can be found in Appendix A.^3 The main step in the proof is a variant
of the triangle inequality in which the sides of the triangle represent errors between different decision
rules [3, 8]. The bound is relative to \lambda. When the combined error of the ideal hypothesis is large,
there is no classifier that performs well on both the source and target domains, so we cannot hope
to find a good target hypothesis by training only on the source domain. On the other hand, for small
\lambda (the most relevant case for domain adaptation), Theorem 1 shows that source error and unlabeled
H \Delta H-distance are important quantities for computing target error.
3 A Learning Bound Combining Source and Target Data
Theorem 1 shows how to relate source and target risk. We now proceed to give a learning bound for
empirical risk minimization using combined source and target training data. In order to simplify the
presentation of the trade-offs that arise in this scenario, we state the bound in terms of VC dimension.
Similar, tighter bounds could be derived using more sophisticated measures of complexity such as
PAC-Bayes [15] or Rademacher complexity [2] in an analogous way.
At train time a learner receives a sample S = (S_T, S_S) of m instances, where S_T consists of \beta m
instances drawn independently from D_T and S_S consists of (1 - \beta) m instances drawn independently
from D_S. The goal of a learner is to find a hypothesis that minimizes target risk \epsilon_T(h). When \beta
is small, as in domain adaptation, minimizing empirical target risk may not be the best choice. We
analyze learners that instead minimize a convex combination of empirical source and target risk:
    \hat{\epsilon}_\alpha(h) = \alpha \hat{\epsilon}_T(h) + (1 - \alpha) \hat{\epsilon}_S(h).
We denote by \epsilon_\alpha(h) the corresponding weighted combination of true source and target risks, measured with respect to D_S and D_T.
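The empirical objective itself is a one-liner; a sketch for 0-1 loss, with names of our choosing:

import numpy as np

def alpha_risk(h, X_T, y_T, X_S, y_S, alpha):
    # Empirical convex combination alpha * err_T + (1 - alpha) * err_S
    # for a predictor h mapping instances to {0, 1}.
    err_T = np.mean(h(X_T) != y_T)
    err_S = np.mean(h(X_S) != y_S)
    return alpha * err_T + (1.0 - alpha) * err_S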
We bound the target risk of a domain adaptation algorithm that minimizes ε̂_α(h). The proof of the
bound has two main components, which we state as lemmas below. First we bound the difference
between the target risk ε_T(h) and the weighted risk ε_α(h). Then we bound the difference between the
true and empirical weighted risks ε_α(h) and ε̂_α(h). The proofs of these lemmas, as well as the proof
of Theorem 2, are in Appendix B.
Lemma 1 Let h be a hypothesis in class H. Then

$$|\varepsilon_\alpha(h) - \varepsilon_T(h)| \le (1-\alpha)\left(\frac{1}{2}\,d_{H\Delta H}(D_S, D_T) + \lambda\right).$$
The lemma shows that as α approaches 1, we rely increasingly on the target data, and the distance
between domains matters less and less. The proof uses a similar technique to that of Theorem 1.
Lemma 2 Let H be a hypothesis space of VC-dimension d. If a random labeled sample of size
m is generated by drawing βm points from D_T and (1−β)m points from D_S, and labeling them
according to f_S and f_T respectively, then with probability at least 1 − δ (over the choice of the
samples), for every h ∈ H

$$|\hat{\varepsilon}_\alpha(h) - \varepsilon_\alpha(h)| < \sqrt{\frac{\alpha^2}{\beta} + \frac{(1-\alpha)^2}{1-\beta}}\;\sqrt{\frac{d\log(2m) - \log\delta}{2m}}.$$
³ A longer version of this paper that includes the omitted appendix can be found on the authors' websites.
The proof is similar to standard uniform convergence proofs [16, 1], but it uses Hoeffding's inequality in a different way because the bound on the range of the random variables underlying the
inequality varies with α and β. The lemma shows that as α moves away from β (where each instance
is weighted equally), our finite sample approximation to ε_α(h) becomes less reliable.
Theorem 2 Let H be a hypothesis space of VC-dimension d. Let U_S and U_T be unlabeled samples
of size m' each, drawn from D_S and D_T respectively. Let S be a labeled sample of size m generated
by drawing βm points from D_T and (1−β)m points from D_S, labeling them according to f_S and
f_T, respectively. If ĥ ∈ H is the empirical minimizer of ε̂_α(h) on S and h*_T = min_{h∈H} ε_T(h) is the
target risk minimizer, then with probability at least 1 − δ (over the choice of the samples),

$$\varepsilon_T(\hat{h}) \le \varepsilon_T(h_T^*) + 2\sqrt{\frac{\alpha^2}{\beta} + \frac{(1-\alpha)^2}{1-\beta}}\;\sqrt{\frac{d\log(2m) - \log\delta}{2m}} + 2(1-\alpha)\left(\frac{1}{2}\hat{d}_{H\Delta H}(U_S, U_T) + 4\sqrt{\frac{2d\log(2m') + \log(4/\delta)}{m'}} + \lambda\right).$$
When α = 0 (that is, we ignore target data), the bound is identical to that of Theorem 1, but with an
empirical estimate for the source error. Similarly, when α = 1 (that is, we use only target data), the
bound is the standard learning bound using only target data. At the optimal α (which minimizes the
right hand side), the bound is always at least as tight as either of these two settings. Finally, note that
by choosing different values of α, the bound allows us to effectively trade off the small amount of
target data against the large amount of less relevant source data.
We remark that when it is known that λ = 0, the dependence on m in Theorem 2 can be improved;
this corresponds to the restricted or realizable setting.
4 Experimental Results
We evaluate our theory by comparing its predictions to empirical results. While ideally Theorem 2
could be directly compared with test error, this is not practical because λ is unknown, d_{HΔH} is
computationally intractable [3], and the VC dimension d is too large to be a useful measure of
complexity. Instead, we develop a simple approximation of Theorem 2 that we can compute from
unlabeled data. For many adaptation tasks, λ is small (there exists a classifier which is simultaneously good for both domains), so we ignore it here. We approximate d_{HΔH} by training a linear
classifier to discriminate between the two domains. We use a standard hinge loss (normalized by
dividing by the number of instances) and apply the quantity 1 − (hinge loss) in place of the actual
d_{HΔH}. Let ζ(U_S, U_T) be our approximation to d_{HΔH}, computed from source and target unlabeled
data. For domains that can be perfectly separated with margin, ζ(U_S, U_T) = 1. For domains that
are indistinguishable, ζ(U_S, U_T) = 0. Finally, we replace the VC dimension sample complexity term
with a tighter constant C. The resulting approximation to the bound of Theorem 2 is

$$f(\alpha) = \sqrt{\frac{C}{m}\left(\frac{\alpha^2}{\beta} + \frac{(1-\alpha)^2}{1-\beta}\right)} + (1-\alpha)\,\zeta(U_S, U_T). \qquad (1)$$
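To make this concrete, the following sketch computes ζ(U_S, U_T) with a linear hinge-loss classifier and minimizes f(α) over a grid. It is our illustration, not the authors' code; the use of scikit-learn's LinearSVC as the linear classifier, and all names here, are assumptions:

```python
import numpy as np
from sklearn.svm import LinearSVC  # assumed stand-in for "a linear classifier"

def proxy_distance(U_S, U_T):
    """zeta(U_S, U_T): 1 minus the normalized hinge loss of a linear
    classifier trained to separate source from target unlabeled data."""
    X = np.vstack([U_S, U_T])
    y = np.concatenate([-np.ones(len(U_S)), np.ones(len(U_T))])
    clf = LinearSVC().fit(X, y)
    margins = y * clf.decision_function(X)
    hinge = np.mean(np.maximum(0.0, 1.0 - margins))  # normalized hinge loss
    return 1.0 - hinge

def f_bound(alpha, beta, m, zeta, C=1600.0):
    """Approximation (1) to the bound of Theorem 2, for beta in (0, 1)."""
    complexity = np.sqrt((C / m) * (alpha**2 / beta + (1 - alpha)**2 / (1 - beta)))
    return complexity + (1 - alpha) * zeta

def best_alpha(beta, m, zeta, C=1600.0):
    """Pick the alpha that minimizes the approximate bound on a grid."""
    grid = np.linspace(0.0, 1.0, 101)
    return min(grid, key=lambda a: f_bound(a, beta, m, zeta, C))
```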
Our experimental results are for the task of sentiment classification. Sentiment classification systems
have recently gained popularity because of their potential applicability to a wide range of documents
in many genres, from congressional records to financial news. Because of the large number of
potential genres, sentiment classification is an ideal area for domain adaptation. We use the data
provided by Blitzer et al. [6], which consists of reviews of eight types of products from Amazon.com:
apparel, books, DVDs, electronics, kitchen appliances, music, video, and a catchall category 'other'.
The task is binary classification: given a review, predict whether it is positive (4 or 5 out of 5 stars)
or negative (1 or 2 stars). We chose the 'apparel' domain as our target domain, and all of the plots
on the right-hand side of Figure 1 are for this domain. We obtain empirical curves for the error
as a function of α by training a classifier using a weighted hinge loss. Suppose the target domain
has weight α and there are βm target training instances. Then we scale the loss of a target training
instance by α/β and the loss of a source training instance by (1−α)/(1−β).
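The per-instance scaling just described amounts to assigning sample weights before minimizing the hinge loss. A sketch of one possible implementation, again using scikit-learn's sample_weight argument as a stand-in for the authors' training code:

```python
import numpy as np
from sklearn.svm import LinearSVC  # hinge-loss linear classifier

def fit_alpha_weighted(X_T, y_T, X_S, y_S, alpha):
    """Train with target losses scaled by alpha/beta and source losses
    scaled by (1 - alpha)/(1 - beta), where beta = m_T / (m_T + m_S)."""
    m_T, m_S = len(X_T), len(X_S)
    beta = m_T / (m_T + m_S)
    w = np.concatenate([np.full(m_T, alpha / beta),
                        np.full(m_S, (1 - alpha) / (1 - beta))])
    X = np.vstack([X_T, X_S])
    y = np.concatenate([y_T, y_S])
    return LinearSVC().fit(X, y, sample_weight=w)
```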
[Figure 1: six panels of curves over α ∈ [0, 1]. Top row (the bound): (a) vary distance, m_S = 2500, m_T = 1000; (c) ζ(U_S, U_T) = 0.715, m_S = 2500, vary m_T; (e) ζ(U_S, U_T) = 0.715, vary m_S, m_T = 2500. Bottom row (test error): (b) vary sources, m_S = 2500, m_T = 1000; (d) source = dvd, m_S = 2500, vary m_T; (f) source = dvd, vary m_S, m_T = 2500. Legend values: distances books 0.780, dvd 0.715, electronics 0.447, kitchen 0.336; m_T ∈ {250, 500, 1000, 2000}; m_S ∈ {250, 500, 1000, 2500}.]
Figure 1: Comparing the bound with test error for sentiment classification. The x-axis of each figure
shows α. The y-axis shows the value of the bound or test set error. (a), (c), and (e) depict the bound;
(b), (d), and (f) the test error. Each curve in (a) and (b) represents a different distance. Curves in
(c) and (d) represent different numbers of target instances. Curves in (e) and (f) represent different
numbers of source instances.
Figure 1 shows a series of plots of equation (1) (on the top) coupled with corresponding plots of test
error (on the bottom) as a function of α for different amounts of source and target data and different
distances between domains. In each pair of plots, a single parameter (distance, number of target
instances m_T, or number of source instances m_S) is varied while the other two are held constant.
Note that β = m_T/(m_T + m_S). The plots on the top part of Figure 1 are not meant to be numerical
proxies for the true error (for the source domains 'books' and 'dvd', the distance alone is well
above 1/2). Instead, they are scaled to illustrate that the bound is similar in shape to the true error
curve and that relative relationships are preserved. By choosing a different C in equation (1) for each
curve, one can achieve complete control over their minima. In order to avoid this, we use only a
single value of C = 1600 for all 12 curves on the top part of Figure 1.
First note that in every pair of plots, the empirical error curves have a roughly convex shape that
mimics the shape of the bounds. Furthermore, the value of α which minimizes the bound also has
a low empirical error for each corresponding curve. This suggests that choosing α to minimize the
bound of Theorem 2 and subsequently training a classifier to minimize the empirical error ε̂_α(h) can
work well in practice, provided we have a reasonable measure of complexity.⁴ Figures 1a and 1b
show that more distant source domains result in higher target error. Figures 1c and 1d illustrate that
for more target data, we have not only lower error in general, but also a higher minimizing α. Finally,
figures 1e and 1f depict the limitation of distant source data. With enough target data, no matter how
much source data we include, we always prefer to use only the target data. This is reflected in our
bound as a phase transition in the value of the optimal α (governing the tradeoff between source and
target data). The phase transition occurs when m_T = C/ζ(U_S, U_T)² (see Figure 2).
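For the values used in our sentiment experiments, the threshold is easy to evaluate (a two-line check, ours):

```python
C, zeta = 1600.0, 0.715    # values from the sentiment experiments
print(round(C / zeta**2))  # phase transition at ~3130 target instances
```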
⁴ Although Theorem 2 does not hold uniformly for all α as stated, this is easily remedied via an application of the union bound. The resulting bound will contain an additional logarithmic factor in the complexity term.
[Figure 2: heatmap of the optimal α. y-axis: number of target instances (×10², ticks 24 to 32); x-axis: number of source instances (5,000; 50,000; 722,000; 11 million; 167 million; log scale); intensity scale from 0 to 1.]
Figure 2: An example of the phase transition in the optimal α. The value of α which minimizes
the bound is indicated by the intensity, where black means α = 1 (corresponding to ignoring source
data and learning only from target data). We fix C = 1600 and ζ(U_S, U_T) = 0.715, as in our sentiment
results. The x-axis shows the number of source instances (log-scale). The y-axis shows the number
of target instances. A phase transition occurs at 3,130 target instances. With more target instances
than this, it is more effective to ignore even an infinite amount of source data.
5 Learning from Multiple Sources
We now explore an extension of our theory to the case of multiple source domains. We are presented with data from N distinct sources. Each source S_j is associated with an unknown underlying
distribution D_j over input points and an unknown labeling function f_j. From each source S_j, we
are given m_j labeled training instances, and our goal is to use these instances to train a model to
perform well on a target domain ⟨D_T, f_T⟩, which may or may not be one of the sources. This setting
is motivated by several new domain adaptation algorithms [10, 5, 11, 9] that weigh the loss from
training instances depending on how 'far' they are from the target domain. That is, each training
instance is its own source domain.
As in the previous sections, we will examine algorithms that minimize convex combinations of
training errors over the labeled examples from each source domain. As before, we let m_j = β_j m
with Σ_{j=1}^N β_j = 1. Given a vector α = (α_1, …, α_N) of domain weights with Σ_j α_j = 1, we
define the empirical α-weighted error of function h as

$$\hat{\varepsilon}_\alpha(h) = \sum_{j=1}^{N} \alpha_j\,\hat{\varepsilon}_j(h) = \sum_{j=1}^{N} \frac{\alpha_j}{m_j} \sum_{x \in S_j} |h(x) - f_j(x)|.$$
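Computationally, the empirical α-weighted error is just a weighted average of per-source error rates; a short sketch (names ours):

```python
import numpy as np

def alpha_weighted_error(h, sources, alpha):
    """Empirical alpha-weighted error over N labeled sources.

    sources: list of (X_j, y_j) pairs, one per source S_j.
    alpha:   length-N array of nonnegative weights summing to 1.
    h:       classifier exposing a .predict method.
    """
    errs = [np.mean(h.predict(X_j) != y_j) for X_j, y_j in sources]
    return float(np.dot(alpha, errs))
```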
The true α-weighted error ε_α(h) is defined analogously. Let D_α be a mixture of the N source
distributions with mixing weights equal to the components of α. Finally, analogous to λ in the
single-source setting, we define the error of the multi-source ideal hypothesis for a weighting α as

$$\gamma_\alpha = \min_h \{\varepsilon_T(h) + \varepsilon_\alpha(h)\} = \min_h \Big\{\varepsilon_T(h) + \sum_{j=1}^{N} \alpha_j\,\varepsilon_j(h)\Big\}.$$
The following theorem gives a learning bound for empirical risk minimization using the empirical
α-weighted error.

Theorem 3 Suppose we are given m_j labeled instances from source S_j for j = 1 … N. For a fixed
vector of weights α, let ĥ = argmin_{h∈H} ε̂_α(h), and let h*_T = argmin_{h∈H} ε_T(h). Then for any
δ ∈ (0, 1), with probability at least 1 − δ (over the choice of samples from each source),

$$\varepsilon_T(\hat{h}) \le \varepsilon_T(h_T^*) + 2\sqrt{\sum_{j=1}^{N}\frac{\alpha_j^2}{\beta_j}}\;\sqrt{\frac{d\log(2m) - \log\delta}{2m}} + 2\left(\gamma_\alpha + \frac{1}{2}\,d_{H\Delta H}(D_\alpha, D_T)\right).$$
[Figure 3: three 1-dimensional panels showing female and male height distributions with learned and optimal separators and error regions: (a) Source. More girls than boys. (b) Target. Separator from uniform mixture is suboptimal. (c) Weighting sources to match target is optimal.]
Figure 3: A 1-dimensional example illustrating how non-uniform mixture weighting can result in
optimal error. We observe one feature, which we use to predict gender. (a) At train time we observe
more females than males. (b) Learning by uniformly weighting the training data causes us to learn a
suboptimal decision boundary, (c) but by weighting the males more highly, we can match the target
data and learn an optimal classifier.
The full proof is in Appendix C. Like the proof of Theorem 2, it is split into two parts. The first part
bounds the difference between the α-weighted error and the target error, similar to Lemma 1. The
second is a uniform convergence bound for ε̂_α(h), similar to Lemma 2.
Theorem 3 reduces to Theorem 2 when we have only two sources, one of which is the target domain
(that is, we have some small number of target instances). It is more general, though, because by
manipulating α we can effectively change the source domain. This has two consequences. First,
we demand that there exists a hypothesis h* which has low error on both the α-weighted convex
combination of sources and the target domain. Second, we measure distance between the target and
a mixture of sources, rather than between the target and a single source.
One question we might ask is whether there exist settings where a non-uniform weighting can lead
to a significantly lower value of the bound than a uniform weighting. This can happen if some
non-uniform weighting of sources accurately approximates the target domain. As a hypothetical
example, suppose we are trying to predict gender from height (Figure 3). Each instance is drawn
from a gender-specific Gaussian. In this example, we can find the optimal classifier by weighting
the 'males' and 'females' components of the source to match the target.
6 Related Work
Domain adaptation is a widely-studied area, and we cannot hope to cover every aspect and application of it here.⁵ Instead, in this section we focus on other theoretical approaches to domain
adaptation. While we do not explicitly address the relationship in this paper, we note that domain
adaptation is closely related to the setting of covariate shift, which has been studied in statistics. In
addition to the work of Huang et al. [10], several other authors have considered learning by assigning
separate weights to the components of the loss function corresponding to separate instances. Bickel
at al. [5] and Jiang and Zhai [11] suggest promising empirical algorithms that in part inspire our
Theorem 3. We hope that our work can help to explain when these algorithms are effective. Dai et
al. [9] considered weighting instances using a transfer-aware variant of boosting, but the learning
bounds they give are no stronger than bounds which completely ignore the source data.
Crammer et al. [8] consider learning when the marginal distribution on instances is the same across
sources but the labeling function may change. This corresponds in our theory to cases where
dH?H = 0 but ? is large. Like us they consider multiple sources, but their notion of weighting
is less general. They consider only including or discarding a source entirely.
Li and Bilmes [13] give PAC-Bayesian learning bounds for adaptation using 'divergence priors'.
They place a source-centered prior on the parameters of a model learned in the target domain. Like
⁵ The NIPS 2006 Workshop on Learning When Test and Training Inputs have Different Distributions (http://ida.first.fraunhofer.de/projects/different06/) contains a good set of references on domain adaptation and related topics.
our model, the divergence prior also emphasizes the tradeoff between source and target. In our
model, though, we measure the divergence (and consequently the bias) of the source domain from
unlabeled data. This allows us to choose the best tradeoff between source and target labeled data.
7 Conclusion
In this work we investigate the task of domain adaptation when we have a large amount of training data from a source domain but wish to apply a model in a target domain with a much smaller
amount of training data. Our main result is a uniform convergence learning bound for algorithms
which minimize convex combinations of source and target empirical risk. Our bound reflects the
trade-off between the size of the source data and the accuracy of the target data, and we give a
simple approximation to it that is computable from finite labeled and unlabeled samples. This approximation makes correct predictions about model test error for a sentiment classification task. Our
theory also extends in a straightforward manner to a multi-source setting, which we believe helps to
explain the success of recent empirical work in domain adaptation.
Our future work has two related directions. First, we wish to tighten our bounds, both by considering
more sophisticated measures of complexity [15, 2] and by focusing our distance measure on the most
relevant features, rather than all the features. We also plan to investigate algorithms that choose a
convex combination of multiple sources to minimize the bound in Theorem 3.
8 Acknowledgements
This material is based upon work partially supported by the Defense Advanced Research Projects
Agency (DARPA) under Contract No. NBCHD030010. Any opinions, findings, and conclusions or
recommendations expressed in this material are those of the authors and do not necessarily reflect
the views of the DARPA or Department of Interior-National Business Center (DOI-NBC).
References
[1] M. Anthony and P. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University
Press, Cambridge, 1999.
[2] P. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. JMLR, 3:463–482, 2002.
[3] S. Ben-David, J. Blitzer, K. Crammer, and F. Pereira. Analysis of representations for domain adaptation.
In NIPS, 2007.
[4] S. Ben-David, J. Gehrke, and D. Kifer. Detecting change in data streams. In VLDB, 2004.
[5] S. Bickel, M. Brückner, and T. Scheffer. Discriminative learning for differing training and test distributions. In ICML, 2007.
[6] J. Blitzer, M. Dredze, and F. Pereira. Biographies, bollywood, boomboxes and blenders: Domain adaptation for sentiment classification. In ACL, 2007.
[7] C. Chelba and A. Acero. Empirical methods in natural language processing. In EMNLP, 2004.
[8] K. Crammer, M. Kearns, and J. Wortman. Learning from multiple sources. In NIPS, 2007.
[9] W. Dai, Q. Yang, G. Xue, and Y. Yu. Boosting for transfer learning. In ICML, 2007.
[10] J. Huang, A. Smola, A. Gretton, K. Borgwardt, and B. Schoelkopf. Correcting sample selection bias by
unlabeled data. In NIPS, 2007.
[11] J. Jiang and C. Zhai. Instance weighting for domain adaptation. In ACL, 2007.
[12] C. Leggetter and P. Woodland. Maximum likelihood linear regression for speaker adaptation of continuous density hidden Markov models. Computer Speech and Language, 9:171–185, 1995.
[13] X. Li and J. Bilmes. A bayesian divergence prior for classification adaptation. In AISTATS, 2007.
[14] A. Martinez. Recognition of partially occluded and/or imprecisely localized faces using a probabilistic
approach. In CVPR, 2007.
[15] D. McAllester. Simplified PAC-Bayesian margin bounds. In COLT, 2003.
[16] V. Vapnik. Statistical Learning Theory. John Wiley, New York, 1998.
[17] P. Wu and T. Dietterich. Improving svm accuracy by training on auxiliary data sources. In ICML, 2004.
Unconstrained Online Handwriting Recognition with Recurrent Neural Networks
Alex Graves
TUM, Germany
[email protected]
Santiago Fernández
IDSIA, Switzerland
[email protected]
Horst Bunke
University of Bern, Switzerland
[email protected]
Marcus Liwicki
University of Bern, Switzerland
[email protected]
Jürgen Schmidhuber
IDSIA, Switzerland and TUM, Germany
[email protected]
Abstract
In online handwriting recognition the trajectory of the pen is recorded during writing. Although the trajectory provides a compact and complete representation of
the written output, it is hard to transcribe directly, because each letter is spread
over many pen locations. Most recognition systems therefore employ sophisticated preprocessing techniques to put the inputs into a more localised form. However these techniques require considerable human effort, and are specific to particular languages and alphabets. This paper describes a system capable of directly
transcribing raw online handwriting data. The system consists of an advanced recurrent neural network with an output layer designed for sequence labelling, combined with a probabilistic language model. In experiments on an unconstrained
online database, we record excellent results using either raw or preprocessed data,
well outperforming a state-of-the-art HMM based system in both cases.
1 Introduction
Handwriting recognition is traditionally divided into offline and online recognition. Offline recognition is performed on images of handwritten text. In online handwriting the location of the pen-tip on
a surface is recorded at regular intervals, and the task is to map from the sequence of pen positions
to the sequence of words.
At first sight, it would seem straightforward to label raw online inputs directly. However, the fact that
each letter or word is distributed over many pen positions poses a problem for conventional sequence
labelling algorithms, which have difficulty processing data with long-range interdependencies. The
problem is especially acute for unconstrained handwriting, where the writing style may be cursive,
printed or a mix of the two, and the degree of interdependency is therefore difficult to determine
in advance. The standard solution is to preprocess the data into a set of localised features. These
features typically include geometric properties of the trajectory in the vicinity of every data point,
pseudo-offline information from a generated image, and character level shape characteristics [6, 7].
Delayed strokes (such as the crossing of a 't' or the dot of an 'i') require special treatment because
they split up the characters and therefore interfere with localisation. HMMs [6] and hybrid systems
incorporating time-delay neural networks and HMMs [7] are commonly trained with such features.
The issue of classifying preprocessed versus raw data has broad relevance to machine learning, and
merits further discussion. Using hand crafted features often yields superior results, and in some
cases can render classification essentially trivial. However, there are three points to consider in
favour of raw data. Firstly, designing an effective preprocessor requires considerable time and expertise. Secondly, hand coded features tend to be more task specific. For example, features designed
for English handwriting could not be applied to languages with substantially different alphabets,
such as Arabic or Chinese. In contrast, a system trained directly on pen movements could be applied to any alphabet. Thirdly, using raw data allows feature extraction to be built into the classifier,
and the whole system to be trained together. For example, convolutional neural networks [10], in
which a globally trained hierarchy of network layers is used to extract progressively higher level
features, have proved effective at classifying raw images, such as objects in cluttered scenes or isolated handwritten characters [15, 11]. (Note that convolutional nets are less suitable for unconstrained
handwriting, because they require the text images to be presegmented into characters [10]).
In this paper, we apply a recurrent neural network (RNN) to online handwriting recognition. The
RNN architecture is bidirectional Long Short-Term Memory [3], chosen for its ability to process data
with long time dependencies. The RNN uses the recently introduced connectionist temporal classification output layer [2], which was specifically designed for labelling unsegmented sequence data.
An algorithm is introduced for applying grammatical constraints to the network outputs, thereby
providing word level transcriptions. Experiments are carried out on the IAM online database [12]
which contains forms of unconstrained English text acquired from a whiteboard. The performance
of the RNN system using both raw and preprocessed input data is compared to that of an HMM
based system using preprocessed data only [13]. To the best of our knowledge, this is the first time
whole sentences of unconstrained handwriting have been directly transcribed from raw online data.
Section 2 describes the network architecture, the output layer and the algorithm for applying grammatical constraints. Section 3 provides experimental results, and conclusions are given in Section 4.
2 Method

2.1 Bidirectional Long Short-Term Memory
One of the key benefits of RNNs is their ability to make use of previous context. However, for
standard RNN architectures, the range of context that can in practice be accessed is limited. The
problem is that the influence of a given input on the hidden layer, and therefore on the network
output, either decays or blows up exponentially as it cycles around the recurrent connections. This
is often referred to as the vanishing gradient problem [4].
Long Short-Term Memory (LSTM; [5]) is an RNN architecture designed to address the vanishing
gradient problem. An LSTM layer consists of multiple recurrently connected subnets, known as
memory blocks. Each block contains a set of internal units, known as cells, whose activation is
controlled by three multiplicative 'gate' units. The effect of the gates is to allow the cells to store
and access information over long periods of time.
For many tasks it is useful to have access to future as well past context. Bidirectional RNNs [14]
achieve this by presenting the input data forwards and backwards to two separate hidden layers, both
of which are connected to the same output layer. Bidirectional LSTM (BLSTM) [3] combines the
above architectures to provide access to long-range, bidirectional context.
2.2 Connectionist Temporal Classification
Connectionist temporal classification (CTC) [2] is an objective function designed for sequence labelling with RNNs. Unlike previous objective functions it does not require pre-segmented training
data, or postprocessing to transform the network outputs into labellings. Instead, it trains the network
to map directly from input sequences to the conditional probabilities of the possible labellings.
A CTC output layer contains one more unit than there are elements in the alphabet L of labels for
the task. The output activations are normalised with the softmax activation function [1]. At each
time step, the first |L| outputs are used to estimate the probabilities of observing the corresponding
labels. The extra output estimates the probability of observing a 'blank', or no label. The combined
output sequence estimates the joint probability of all possible alignments of the input sequence with
all possible labellings. The probability of a particular labelling can then be estimated by summing
over the probabilities of all the alignments that correspond to it.
More precisely, for an input sequence x of length T , choosing a label (or blank) at every time
step according to the probabilities implied by the network outputs defines a probability distribution
over the set of length T sequences of labels and blanks. We denote this set L'^T, where L' = L ∪ {blank}. To distinguish them from labellings, we refer to the elements of L'^T as paths. Assuming
that the label probabilities at each time step are conditionally independent given x, the conditional
probability of a path π ∈ L'^T is given by

$$p(\pi|x) = \prod_{t=1}^{T} y^t_{\pi_t}, \qquad (1)$$
where y^t_k is the activation of output unit k at time t. Denote the set of sequences of length less than
or equal to T on the alphabet L as L^{≤T}. Paths are mapped onto labellings l ∈ L^{≤T} by an
operator B that removes first the repeated labels, then the blanks. For example, both B(a, −, a, b, −)
and B(−, a, a, −, −, a, b, b) yield the labelling (a, a, b). Since the paths are mutually exclusive, the
conditional probability of a given labelling l ∈ L^{≤T} is the sum of the probabilities of all paths
corresponding to it:

$$p(l|x) = \sum_{\pi \in B^{-1}(l)} p(\pi|x). \qquad (2)$$
Although a naive calculation of the above sum would be unfeasible, it can be efficiently evaluated
with a graph-based algorithm [2], similar to the forward-backward algorithm for HMMs.
To allow for blanks in the output paths, for each label sequence l ∈ L^{≤T} consider a modified label
sequence l' ∈ L'^{≤T}, with blanks added to the beginning and the end and inserted between every
pair of labels. The length of l' is therefore |l'| = 2|l| + 1.
For a labelling l, define the forward variable α_t(s) as the summed probability of all paths whose
length-t prefixes are mapped by B onto the length-s/2 prefix of l, i.e.

$$\alpha_t(s) = P(\pi_{1:t} : B(\pi_{1:t}) = l_{1:s/2},\ \pi_t = l'_s \mid x) = \sum_{\substack{\pi:\\ B(\pi_{1:t}) = l_{1:s/2}}} \prod_{t'=1}^{t} y^{t'}_{\pi_{t'}}, \qquad (3)$$

where, for some sequence s, s_{a:b} is the subsequence (s_a, s_{a+1}, …, s_{b−1}, s_b), and s/2 is rounded
down to an integer value.
The backward variables β_t(s) are defined as the summed probability of all paths whose suffixes
starting at t map onto the suffix of l starting at label s/2:

$$\beta_t(s) = P(\pi_{t+1:T} : B(\pi_{t:T}) = l_{s/2:|l|},\ \pi_t = l'_s \mid x) = \sum_{\substack{\pi:\\ B(\pi_{t:T}) = l_{s/2:|l|}}} \prod_{t'=t+1}^{T} y^{t'}_{\pi_{t'}}. \qquad (4)$$
Both the forward and backward variables are calculated recursively [2]. The label sequence probability is given by the sum of the products of the forward and backward variables at any time step:

$$p(l|x) = \sum_{s=1}^{|l'|} \alpha_t(s)\,\beta_t(s). \qquad (5)$$
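To illustrate the recursions behind equations (3)-(5), here is a minimal numpy sketch of the forward-backward computation of p(l|x). It is our exposition of the algorithm referenced in [2], not the authors' code, and it omits the log-space rescaling a practical implementation needs to avoid underflow:

```python
import numpy as np

def make_lprime(l, blank=0):
    """l' = l with blanks at the ends and between every pair of labels."""
    lp = [blank]
    for k in l:
        lp += [k, blank]
    return lp

def ctc_forward_backward(y, l, blank=0):
    """Forward/backward variables of equations (3)-(4) and p(l|x) via (5).

    y: T x (|L|+1) array of softmax outputs, y[t, k] = y_k^t.
    l: nonempty target labelling (list of label indices, no blanks).
    """
    lp = make_lprime(l, blank)
    T, S = y.shape[0], len(lp)
    alpha = np.zeros((T, S))
    alpha[0, 0] = y[0, lp[0]]            # start on the initial blank
    alpha[0, 1] = y[0, lp[1]]            # ... or on the first label
    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1, s]
            if s >= 1:
                a += alpha[t - 1, s - 1]
            # skip transition allowed unless l'_s is blank or equals l'_{s-2}
            if s >= 2 and lp[s] != blank and lp[s] != lp[s - 2]:
                a += alpha[t - 1, s - 2]
            alpha[t, s] = a * y[t, lp[s]]
    beta = np.zeros((T, S))
    beta[T - 1, S - 1] = 1.0             # paths may end on the final blank
    beta[T - 1, S - 2] = 1.0             # ... or on the final label
    for t in range(T - 2, -1, -1):
        for s in range(S):
            b = beta[t + 1, s] * y[t + 1, lp[s]]
            if s + 1 < S:
                b += beta[t + 1, s + 1] * y[t + 1, lp[s + 1]]
            if s + 2 < S and lp[s + 2] != blank and lp[s + 2] != lp[s]:
                b += beta[t + 1, s + 2] * y[t + 1, lp[s + 2]]
            beta[t, s] = b
    p = alpha[T - 1, S - 1] + alpha[T - 1, S - 2]
    # sanity check: equation (5) holds at every t,
    # i.e. (alpha * beta).sum(axis=1) equals p for all t
    return alpha, beta, p
```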
The objective function for CTC is the negative log probability of the network correctly labelling the
entire training set. Let S be a training set, consisting of pairs of input and target sequences (x, z),
where target sequence z is at most as long as input sequence x. Then the objective function is:

$$O^{CTC} = -\sum_{(x,z) \in S} \ln(p(z|x)). \qquad (6)$$
The network can be trained with gradient descent by differentiating O^{CTC} with respect to the outputs, then using backpropagation through time to differentiate with respect to the network weights.
Noting that the same label (or blank) may be repeated several times for a single labelling l, we define
the set of positions where label k occurs as lab(l, k) = {s : l'_s = k}, which may be empty. We
then set l = z and differentiate (5) with respect to the unnormalised network outputs a^t_k to obtain:

$$\frac{\partial O^{CTC}}{\partial a^t_k} = -\frac{\partial \ln(p(z|x))}{\partial a^t_k} = y^t_k - \frac{1}{p(z|x)} \sum_{s \in lab(z,k)} \alpha_t(s)\,\beta_t(s). \qquad (7)$$
Once the network is trained, we would ideally label some unknown input sequence x by choosing
the most probable labelling l*:

$$l^* = \arg\max_{l}\, p(l|x). \qquad (8)$$

Using the terminology of HMMs, we refer to the task of finding this labelling as decoding. Unfortunately, we do not know of a tractable decoding algorithm that is guaranteed to give optimal results.
However a simple and effective approximation is given by assuming that the most probable path
corresponds to the most probable labelling, i.e.

$$l^* \approx B\Big(\arg\max_{\pi}\, p(\pi|x)\Big). \qquad (9)$$
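The approximation in equation (9) is simple to implement: pick the most active output at every time step, then apply the operator B. A short sketch (ours):

```python
import numpy as np

def best_path_decode(y, blank=0):
    """Approximate decoding (equation (9)): take the most probable output
    at each time step, then collapse repeats and remove blanks."""
    path = np.argmax(y, axis=1)     # most probable path pi
    labelling, prev = [], blank
    for k in path:                  # apply the operator B
        if k != blank and k != prev:
            labelling.append(int(k))
        prev = k
    return labelling
```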
2.3 Integration with an External Grammar
For some tasks we want to constrain the output labellings according to a predefined grammar. For
example, in speech and handwriting recognition, the final transcriptions are usually required to form
sequences of dictionary words. In addition it is common practice to use a language model to weight
the probabilities of particular sequences of words.
We can express these constraints by altering the probabilities in (8) to be conditioned on some
probabilistic grammar G, as well as the input sequence x:
$$l^* = \arg\max_{l}\, p(l|x, G). \qquad (10)$$
Absolute requirements, for example that l contains only dictionary words, can be incorporated by
setting the probability of all sequences that fail to meet them to 0.
At first sight, conditioning on G seems to contradict a basic assumption of CTC: that the labels
are conditionally independent given the input sequences (see Eqn. (1)). Since the network attempts
to model the probability of the whole labelling at once, there is nothing to stop it from learning
inter-label transitions direct from the data, which would then be skewed by the external grammar.
However, CTC networks are typically only able to learn local relationships such as commonly occurring pairs or triples of labels. Therefore as long as G focuses on long range label interactions (such
as the probability of one word following another when the outputs are letters) it doesn't interfere
with the dependencies modelled by CTC.
The basic rules of probability tell us that p(l|x, G) = p(l|x) p(l|G) p(x) / (p(x|G) p(l)), where we have used the fact
that x is conditionally independent of G given l. If we assume x is independent of G, this reduces
to p(l|x, G) = p(l|x) p(l|G) / p(l). That assumption is in general false, since both the input sequences and
the grammar depend on the underlying generator of the data, for example the language being spoken. However it is a reasonable first approximation, and is particularly justifiable in cases where
the grammar depend on the underlying generator of the data, for example the language being spoken. However it is a reasonable first approximation, and is particularly justifiable in cases where
the grammar is created using data other than that from which x was drawn (as is common practice in speech and handwriting recognition, where independent textual corpora are used to generate
language models).
Finally, if we assume that all label sequences are equally probable prior to any knowledge about the
input or the grammar, we can drop the p(l) term in the denominator to get
$$l^* = \arg\max_{l}\, p(l|x)\,p(l|G). \qquad (11)$$
Note that, since the number of possible label sequences is finite (because both L and |l| are finite),
assigning equal prior probabilities does not lead to an improper prior.
We now describe an algorithm, based on the token passing algorithm for HMMs [16], that allows us
to find an approximate solution to (11) for a simple grammar.
Let G consist of a dictionary D containing W words, and a set of W² bigrams p(w|ŵ) that define
the probability of making a transition from word ŵ to word w. The probability of any labelling that
does not form a sequence of dictionary words is 0.
For each word w, define the modified word w' as w with blanks added at the beginning and end and
between each pair of labels. Therefore |w'| = 2|w| + 1. Define a token tok = (score, history)
to be a pair consisting of a real valued score and a history of previously visited words. In fact,
each token corresponds to a particular path through the network outputs, and its score is the log
probability of that path. The basic idea of the token passing algorithm is to pass along the highest
scoring tokens at every word state, then maximise over these to find the highest scoring tokens at
the next state. The transition probabilities are used when a token is passed from the last state in one
word to the first state in another. The output word sequence is given by the history of the highest
scoring end-of-word token at the final time step.
At every time step t of the length T output sequence, each segment s of each modified word w' holds
a single token tok(w, s, t). This is the highest scoring token reaching that segment at that time. In
addition we define the input token tok(w, 0, t) to be the highest scoring token arriving at word w at
time t, and the output token tok(w, −1, t) to be the highest scoring token leaving word w at time t.
1: Initialisation:
2: for all words w ∈ D do
3:   tok(w, 1, 1) = (ln(y^1_b), (w))
4:   tok(w, 2, 1) = (ln(y^1_{w_1}), (w))
5:   if |w| = 1 then
6:     tok(w, −1, 1) = tok(w, 2, 1)
7:   else
8:     tok(w, −1, 1) = (−∞, ())
9:   tok(w, s, 1) = (−∞, ()) for all other s
10: Algorithm:
11: for t = 2 to T do
12:   sort output tokens tok(w, −1, t − 1) by ascending score
13:   for all words w ∈ D do
14:     w* = arg max_{ŵ∈D} tok(ŵ, −1, t − 1).score + ln(p(w|ŵ))
15:     tok(w, 0, t).score = tok(w*, −1, t − 1).score + ln(p(w|w*))
16:     tok(w, 0, t).history = tok(w*, −1, t − 1).history + w
17:     for segment s = 1 to |w'| do
18:       P = {tok(w, s, t − 1), tok(w, s − 1, t − 1)}
19:       if w'_s ≠ blank and s > 2 and w'_{s−2} ≠ w'_s then
20:         add tok(w, s − 2, t − 1) to P
21:       tok(w, s, t) = token in P with highest score
22:       tok(w, s, t).score += ln(y^t_{w'_s})
23:     tok(w, −1, t) = highest scoring of {tok(w, |w'|, t), tok(w, |w'| − 1, t)}
24: Termination:
25: find output token tok*(w, −1, T) with highest score at time T
26: output tok*(w, −1, T).history

Algorithm 1: CTC Token Passing Algorithm
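As a concrete companion to the pseudocode, here is a compact Python transcription of Algorithm 1. It is our sketch, not the authors' implementation: it performs the exhaustive bigram search of line 14 and omits the sorted-token early termination discussed below, and all names are hypothetical:

```python
import math
import numpy as np

NEG_INF = -math.inf

def token_passing(y, words, bigram_logp, char_index, blank=0):
    """Direct transcription of Algorithm 1 (no early termination).

    y:           T x (|L|+1) array of softmax outputs, y[t, k] = y_k^t
    words:       the dictionary D, as a list of strings
    bigram_logp: function (w_prev, w) -> ln p(w | w_prev)
    char_index:  dict mapping each character to its output unit index
    """
    logy = np.log(y)
    T = len(logy)
    dead = (NEG_INF, ())
    # modified word w': blank, label, blank, ..., label, blank (0-indexed)
    mod = {w: [blank] + [u for c in w for u in (char_index[c], blank)]
           for w in words}
    tok = {}
    for w in words:                          # initialisation (lines 1-9)
        n = len(mod[w])
        tk = {s: dead for s in range(-1, n + 1)}
        tk[1] = (logy[0, blank], (w,))
        tk[2] = (logy[0, char_index[w[0]]], (w,))
        tk[-1] = tk[2] if len(w) == 1 else dead
        tok[w] = tk
    for t in range(1, T):                    # main loop (lines 11-23)
        new = {}
        for w in words:
            wp, old = mod[w], tok[w]
            tk = {}
            # lines 14-16: best word-to-word transition (exhaustive search)
            tk[0] = max(((tok[u][-1][0] + bigram_logp(u, w),
                          tok[u][-1][1] + (w,)) for u in words),
                        key=lambda p: p[0])
            for s in range(1, len(wp) + 1):  # lines 17-22
                P = [old[s], old[s - 1]]
                # skip: w'_s is a label and differs from w'_{s-2}
                if s > 2 and wp[s - 1] != blank and wp[s - 3] != wp[s - 1]:
                    P.append(old[s - 2])
                score, hist = max(P, key=lambda p: p[0])
                tk[s] = (score + logy[t, wp[s - 1]], hist)
            n = len(wp)                      # line 23: output token
            tk[-1] = max(tk[n], tk[n - 1], key=lambda p: p[0])
            new[w] = tk
        tok = new
    # termination (lines 24-26)
    best = max((tok[w][-1] for w in words), key=lambda p: p[0])
    return best[1]
```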
The algorithm's worst case complexity is O(TW²), since line 14 requires a potential search through
all W words. However, because the output tokens tok(w, −1, T) are sorted in order of score, the
search can be terminated when a token is reached whose score is less than the current best score
with the transition included. The typical complexity is therefore considerably lower, with a lower
bound of O(TW log W) to account for the sort. If no bigrams are used, lines 14-16 can be replaced
by a simple search for the highest scoring output token, and the complexity reduces to O(TW).
Note that this is the same as the complexity of HMM decoding, if the search through bigrams is
exhaustive. Much work has gone into developing more efficient decoding techniques (see e.g. [9]),
typically by pruning improbable branches from the tree of labellings. Such methods are essential
for applications where a rapid response is required, such as real time transcription. In addition,
many decoders use more sophisticated language models than simple bigrams. Any HMM decoding
algorithm could be applied to CTC outputs in the same way as token passing. However, we have
stuck with a relatively basic algorithm since our focus here is on recognition rather than decoding.
3 Experiments
The experimental task was online handwriting recognition, using the IAM-OnDB handwriting
database [12], which is available for public download from http://www.iam.unibe.ch/fki/iamondb/
For CTC, we record both the character error rate, and the word error rate using Algorithm 1 with
a language model and a dictionary. For the HMM system, the word error rate is quoted from the
literature [13]. Both the character and word error rate are defined as the total number of insertions,
deletions and substitutions in the algorithm?s transcription of test set, divided by the combined length
of the target transcriptions in the test set.
We compare results using both raw inputs direct from the pen sensor, and a preprocessed input
representation designed for HMMs.
3.1 Data and Preprocessing
IAM-OnDB consists of pen trajectories collected from 221 different writers using a 'smart whiteboard' [12]. The writers were asked to write forms from the LOB text corpus [8], and the position of
their pen was tracked using an infra-red device in the corner of the board. The input data consisted
of the x and y pen coordinates, the points in the sequence when individual strokes (i.e. periods when
the pen is pressed against the board) end, and the times when successive position measurements
were made. Recording errors in the x, y data were corrected by interpolating to fill in for missing
readings, and removing steps whose length exceeded a certain threshold.
IAM-OnDB is divided into a training set, two validation sets, and a test set, containing respectively
5364, 1438, 1518 and 3859 written lines taken from 775, 192, 216 and 544 forms. The data sets
contained a total of 3,298,424, 885,964, 1,036,803 and 2,425,242 pen coordinates respectively. For
our experiments, each line was used as a separate sequence (meaning that possible dependencies
between successive lines were ignored).
The character level transcriptions contain 80 distinct target labels (capital letters, lower case letters,
numbers, and punctuation). A dictionary consisting of the 20, 000 most frequently occurring words
in the LOB corpus was used for decoding, along with a bigram language model optimised on the
training and validation sets [13]. 5.6% of the words in the test set were not in the dictionary.
Two input representations were used. The first contained only the offset of the x, y coordinates
from the top left of the line, the time from the beginning of the line, and the marker for the ends of
strokes. We refer to this as the raw input representation. The second representation used state-of-the-art preprocessing and feature extraction techniques [13]. We refer to this as the preprocessed input
representation. Briefly, in order to account for the variance in writing styles, the pen trajectories
were normalised with respect to such properties as the slant, skew and width of the letters, and the
slope of the line as a whole. Two sets of input features were then extracted, the first consisting of
?online? features, such as pen position, pen speed, line curvature etc., and the second consisting of
?offline? features created from a two dimensional window of the image created by the pen.
3.2 Experimental Setup
The CTC network used the BLSTM architecture, as described in Section 2.1. The forward and
backward hidden layers each contained 100 single cell memory blocks. The input layer was fully
connected to the hidden layers, which were fully connected to themselves and the output layer. The
output layer contained 81 units (80 characters plus the blank label). For the raw input representation,
there were 4 input units and a total of 100,881 weights. For the preprocessed representation, there
were 25 inputs and 117,681 weights. tanh was used for the cell activation functions and logistic
sigmoid in the range [0, 1] was used for the gates. For both input representations, the data was
normalised so that each input had mean 0 and standard deviation 1 on the training set. The network
was trained with online gradient descent, using a learning rate of 10⁻⁴ and a momentum of 0.9.
Training was stopped after no improvement was recorded on the validation set for 50 training epochs.
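For orientation, a comparable setup can be sketched today with PyTorch's bidirectional LSTM and built-in CTC loss. This is our approximation, not the original software: PyTorch's nn.LSTM uses standard LSTM cells rather than the memory-block variant described above, and the exact training details differ:

```python
import torch
import torch.nn as nn

class BLSTMCTC(nn.Module):
    """Raw-input configuration: 4 inputs, 100 hidden units per direction,
    81 outputs (80 character labels plus the CTC blank)."""
    def __init__(self, n_in=4, n_hidden=100, n_out=81):
        super().__init__()
        self.rnn = nn.LSTM(n_in, n_hidden, bidirectional=True)
        self.out = nn.Linear(2 * n_hidden, n_out)

    def forward(self, x):                   # x: (T, batch, n_in)
        h, _ = self.rnn(x)                  # h: (T, batch, 2 * n_hidden)
        return self.out(h).log_softmax(-1)  # log-probs for nn.CTCLoss

model = BLSTMCTC()
criterion = nn.CTCLoss(blank=80)            # blank as the last output unit
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
```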
The HMM setup [13] contained a separate, left-to-right HMM with 8 states for each character (8 × 81 = 648 states in total). Diagonal mixtures of 32 Gaussians were used to estimate the observation
Table 1: Word Error Rate (WER) on IAM-OnDB. LM = language model. CTC results are a mean
over 4 runs, ± standard error. All differences were significant (p < 0.01).

System   Input          LM    WER
HMM      preprocessed   yes   35.5% [13]
CTC      raw            no    30.1 ± 0.5%
CTC      preprocessed   no    26.0 ± 0.3%
CTC      raw            yes   22.8 ± 0.2%
CTC      preprocessed   yes   20.4 ± 0.3%
probabilities. All parameters, including the word insertion penalty and the grammar scale factor,
were optimised on the validation set.
3.3 Results
The character error rate for the CTC network with the preprocessed inputs was 11.5 ± 0.05%.
From Table 1 we can see that with a dictionary and a language model this translates into a mean
word error rate of 20.4%, which is a relative error reduction of 42.5% compared to the HMM.
Without the language model, the error reduction was 26.8%. With the raw input data CTC achieved
a character error rate of 13.9 ± 0.1%, and word error rates that were close to those recorded with
the preprocessed data, particularly when the language model was present.
The key difference between the input representations is that the raw data is less localised, and therefore requires more use of context. A useful indication of the network's sensitivity to context is
provided by the derivatives of the output y^t_k at a particular point t in the data sequence with respect
to the inputs x^{t'} at all points 1 ≤ t' ≤ T. We refer to these derivatives as the sequential Jacobian.
Looking at the relative magnitude of the sequential Jacobian over time gives an idea of the range of
context used, as illustrated in Figure 1.
4 Conclusion
We have combined a BLSTM CTC network with a probabilistic language model. We have applied
this system to an online handwriting database and obtained results that substantially improve on a
state-of-the-art HMM based system. We have also shown that the network?s performance with raw
sensor inputs is comparable to that with sophisticated preprocessing. As far as we are aware, our
system is the first to successfully recognise unconstrained online handwriting using raw inputs only.
Acknowledgments
This research was funded by EC Sixth Framework project ?NanoBioTact?, SNF grant 200021111968/1, and the SNF program ?Interactive Multimodal Information Management (IM)2?.
References
[1] J. S. Bridle. Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition. In F. Fogleman-Soulie and J. Herault, editors, Neurocomputing: Algorithms, Architectures and Applications, pages 227–236. Springer-Verlag, 1990.
[2] A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber. Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks. In Proc. 23rd Int. Conf. on Machine Learning, Pittsburgh, USA, 2006.
[3] A. Graves and J. Schmidhuber. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18(5-6):602–610, June/July 2005.
[4] S. Hochreiter, Y. Bengio, P. Frasconi, and J. Schmidhuber. Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. In S. C. Kremer and J. F. Kolen, editors, A Field Guide to Dynamical Recurrent Neural Networks. IEEE Press, 2001.
[5] S. Hochreiter and J. Schmidhuber. Long Short-Term Memory. Neural Comp., 9(8):1735–1780, 1997.
[6] J. Hu, S. G. Lim, and M. K. Brown. Writer independent on-line handwriting recognition using an HMM approach. Pattern Recognition, 33:133–147, 2000.
Figure 1: Sequential Jacobian for an excerpt from the IAM-OnDB, with raw inputs (left) and preprocessed inputs (right). For ease of visualisation, only the derivative with highest absolute value
is plotted at each time step. The reconstructed image was created by plotting the pen coordinates
recorded by the sensor. The individual strokes are alternately coloured red and black. For both representations, the Jacobian is plotted for the output corresponding to the label 'i' at the point when
'i' is emitted (indicated by the vertical dashed lines). Because bidirectional networks were used, the
range of sensitivity extends in both directions from the dashed line. For the preprocessed data, the
Jacobian is sharply peaked around the time when the output is emitted. For the raw data it is more
spread out, suggesting that the network makes more use of long-range context. Note the spike in
sensitivity to the very end of the raw input sequence: this corresponds to the delayed dot of the 'i'.
[7] S. Jaeger, S. Manke, J. Reichert, and A. Waibel. On-line handwriting recognition: the NPen++ recognizer. Int. Journal on Document Analysis and Recognition, 3:169–180, 2001.
[8] S. Johansson, R. Atwell, R. Garside, and G. Leech. The tagged LOB corpus user's manual; Norwegian Computing Centre for the Humanities, 1986.
[9] P. Lamere, P. Kwok, W. Walker, E. Gouvea, R. Singh, B. Raj, and P. Wolf. Design of the CMU Sphinx-4 decoder. In Proc. 8th European Conf. on Speech Communication and Technology, Aug. 2003.
[10] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proc. IEEE, 86(11):2278–2324, Nov. 1998.
[11] Y. LeCun, F. Huang, and L. Bottou. Learning methods for generic object recognition with invariance to pose and lighting. In Proc. of CVPR'04. IEEE Press, 2004.
[12] M. Liwicki and H. Bunke. IAM-OnDB - an on-line English sentence database acquired from handwritten text on a whiteboard. In Proc. 8th Int. Conf. on Document Analysis and Recognition, volume 2, pages 956–961, 2005.
[13] M. Liwicki, A. Graves, S. Fernández, H. Bunke, and J. Schmidhuber. A novel approach to on-line handwriting recognition based on bidirectional long short-term memory networks. In Proc. 9th Int. Conf. on Document Analysis and Recognition, Curitiba, Brazil, Sep. 2007.
[14] M. Schuster and K. K. Paliwal. Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45:2673–2681, Nov. 1997.
[15] P. Y. Simard, D. Steinkraus, and J. C. Platt. Best practices for convolutional neural networks applied to visual document analysis. In Proc. 7th Int. Conf. on Document Analysis and Recognition, page 958, Washington, DC, USA, 2003. IEEE Computer Society.
[16] S. Young, N. Russell, and J. Thornton. Token passing: A simple conceptual model for connected speech recognition systems. Technical Report CUED/F-INFENG/TR38, Cambridge University Eng. Dept., 1989.
2,441 | 3,214 | Markov Chain Monte Carlo with People
Adam N. Sanborn
Psychological and Brain Sciences
Indiana University
Bloomington, IN 47045
[email protected]
Thomas L. Griffiths
Department of Psychology
University of California
Berkeley, CA 94720
tom [email protected]
Abstract
Many formal models of cognition implicitly use subjective probability distributions to capture the assumptions of human learners. Most applications of these
models determine these distributions indirectly. We propose a method for directly
determining the assumptions of human learners by sampling from subjective probability distributions. Using a correspondence between a model of human choice
and Markov chain Monte Carlo (MCMC), we describe a method for sampling
from the distributions over objects that people associate with different categories.
In our task, subjects choose whether to accept or reject a proposed change to an
object. The task is constructed so that these decisions follow an MCMC acceptance rule, defining a Markov chain for which the stationary distribution is the
category distribution. We test this procedure for both artificial categories acquired
in the laboratory, and natural categories acquired from experience.
1 Introduction
Determining the assumptions that guide human learning and inference is one of the central goals
of cognitive science. Subjective probability distributions are used to model the degrees of belief
that learners assign to hypotheses in many domains, including categorization, decision making, and
memory [1, 2, 3, 4]. If the knowledge of learners can be modeled in this way, then exploring this
knowledge becomes a matter of asking questions about the nature of their associated probability
distributions. A common way to learn about a probability distribution is to draw samples from it.
In the machine learning and statistics literature, drawing samples from probability distributions is
a major area of research, and is often done using Markov chain Monte Carlo (MCMC) algorithms.
In this paper, we describe a method for directly obtaining information about subjective probability
distributions, by having people act as elements of an MCMC algorithm.
Our approach is to design a task that will allow us to sample from a particular subjective probability
distribution. Much research has been devoted to relating the magnitude of psychological responses
to choice probabilities, resulting in mathematical models of these tasks. We point out an equivalence between a model of human choice behavior and an MCMC acceptance function, and use this
equivalence to develop a method for obtaining samples from a subjective distribution. In this way
we can use the power of MCMC algorithms to explore the knowledge of human learners.
The plan of the paper is as follows. In Section 2, we describe MCMC in general and the Metropolis
method and Barker acceptance function in particular. Section 3 describes the experimental task we
use to connect human judgments to MCMC. In Section 4, we present an experiment showing that
this method can be used to recover trained category distributions from human judgments. Section 5
gives a demonstration of our MCMC method applied to recovering natural categories of animal
shape. Section 6 summarizes the results and discusses some implications.
2 Markov chain Monte Carlo
Models of physical phenomena used by scientists are often expressed in terms of complex probability distributions over different events. Generating samples from these distributions can be an
efficient way to determine their properties, indicating which events are assigned high probabilities
and providing a way to approximate various statistics of interest. Often, the distributions used in
these models are difficult to sample from, being defined over large state spaces or having unknown
normalization constants. Consequently, a great deal of research has been devoted to developing sophisticated Monte Carlo algorithms that can be used to generate samples from complex probability
distributions. One of the most successful methods of this kind is Markov chain Monte Carlo. An
MCMC algorithm constructs a Markov chain that has the target distribution, from which we want
to sample, as its stationary distribution. This Markov chain can be initialized with any state, being
guaranteed to converge to its stationary distribution after many iterations of stochastic transitions
between states. After convergence, the states visited by the Markov chain can be used similarly to
samples from the target distribution (see [5] for details).
The canonical MCMC algorithm is the Metropolis method [6], in which transitions between states
have two parts: a proposal distribution and an acceptance function. Based on the current state, a
candidate for the next state is sampled from the proposal distribution. The acceptance function gives
the probability of accepting this proposal. If the proposal is rejected, then the current state is taken
as the next state. A variety of acceptance functions guarantee that the stationary distribution of the
resulting Markov chain is the target distribution [7]. If we assume that the proposal distribution is
symmetric, with the probability of proposing a new state x′ from the current state x being the same
as the probability of proposing x from x′, we can use the Barker acceptance function [8], giving

    A(x′; x) = π(x′) / (π(x′) + π(x))    (1)

for the acceptance probability, where π(x) is the probability of x under the target distribution.
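For concreteness, the following minimal sketch (ours, not part of the source text) implements the Metropolis method with the Barker acceptance function of Equation 1; the one-dimensional Gaussian target and the proposal scale are illustrative assumptions.

```python
import numpy as np

def barker_mcmc(log_target, x0, n_steps, proposal_sd=1.0, seed=0):
    """Metropolis-style sampler with the Barker acceptance function,
    A(x'; x) = pi(x') / (pi(x') + pi(x))  (Equation 1)."""
    rng = np.random.default_rng(seed)
    x, chain = x0, np.empty(n_steps)
    for t in range(n_steps):
        x_new = x + proposal_sd * rng.standard_normal()  # symmetric proposal
        # pi(x') / (pi(x') + pi(x)), computed stably from log densities
        accept_prob = 1.0 / (1.0 + np.exp(log_target(x) - log_target(x_new)))
        if rng.random() < accept_prob:
            x = x_new
        chain[t] = x  # state after this transition
    return chain

# Example: a Gaussian target with mean 3.0 and standard deviation 0.5.
samples = barker_mcmc(lambda x: -0.5 * ((x - 3.0) / 0.5) ** 2, x0=0.0, n_steps=5000)
print(samples[1000:].mean(), samples[1000:].std())  # roughly 3.0 and 0.5
```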
3 An acceptance function from human behavior
While our approach can be applied to any subjective probability distribution, our experiments focused on sampling from the distributions over objects associated with different categories. Categories are central to cognition, reflecting our knowledge of the structure of the world, supporting
inferences, and serving as the basic units of thought. The way people group objects into categories has been studied extensively, producing a number of formal models of human categorization
[3, 4, 9, 10, 11], almost all of which can be interpreted as defining a category as a probability distribution over objects [4]. In this section, we consider how to lead people to choose between two
objects in a way that would correspond to a valid acceptance function for an MCMC algorithm with
the distribution over objects associated with a category as its target distribution.
3.1 A Bayesian analysis of a choice task
Consider the following task. You are shown two objects, x1 and x2 , and told that one of those
objects comes from a particular category, c. You have to choose which object you think comes from
that category. How should you make this decision?
We can analyze this choice task from the perspective of a rational Bayesian learner. The choice
between the objects is a choice between two hypotheses: The first hypothesis, h1 , is that x1 is
drawn from the category distribution p(x|c) and x2 is drawn from g(x), an alternative distribution
that governs the probability of other objects appearing on the screen. The second hypothesis, h2 , is
that x1 is from the alternative distribution and x2 is from the category distribution. The posterior
probability of the first hypothesis given the data is determined via Bayes' rule,

    p(h1|x1, x2) = p(x1, x2|h1) p(h1) / [p(x1, x2|h1) p(h1) + p(x1, x2|h2) p(h2)]
                 = p(x1|c) g(x2) p(h1) / [p(x1|c) g(x2) p(h1) + p(x2|c) g(x1) p(h2)]    (2)
We will now make two assumptions. The first assumption is that the prior probabilities of the
hypotheses are the same. Since there is no a priori reason to favor one of the objects over the other,
this assumption seems reasonable. The second assumption is that the probabilities of the two stimuli
under the alternative distribution are approximately equal, with g(x1) ≈ g(x2). If people assume
that the alternative distribution is uniform, then the probabilities of the two stimuli will be exactly
equal. However, the probabilities will still be roughly equal under the weaker assumption that the
alternative distribution is fairly smooth and x1 and x2 differ by only a small amount relative to the
support of that distribution. With these assumptions Equation 2 becomes
    p(h1|x1, x2) ≈ p(x1|c) / [p(x1|c) + p(x2|c)]    (3)
with the posterior probability of h1 being set by the probabilities of x1 and x2 in that category.
3.2 From a task to an acceptance function
The Bayesian analysis of the task described above results in a posterior probability of h1 (Equation
3) which has a similar form to the Barker acceptance function (Equation 1). If we return to the
context of MCMC, and assume that x1 is the proposal x′ and x2 the current state x, and that people
choose x1 with probability equal to the posterior probability of h1, then x′ is chosen with probability

    A(x′; x) = p(x′|c) / [p(x′|c) + p(x|c)]    (4)
being the Barker acceptance function for the target distribution π(x) = p(x|c). This equation has a
long history as a model of human choice probabilities, where it is known as the Luce choice rule or
the ratio rule [12, 13]. This rule has been shown to provide a good fit to human data when people
choose between two stimuli based on a particular property [14, 15, 16]. It corresponds to a situation
in which people choose alternatives based on their relative probabilities, a common behavior known
as probability matching [17]. The Luce choice rule has also been used to convert psychological
response magnitudes into response probabilities in many models of cognition [11, 18, 19, 20, 21].
3.3 A more flexible response rule
Probability matching can be a good description of the data, but subjects have been shown to produce
behavior that is more deterministic [17]. Several models of categorization have been extended in
order to account for this behavior [22] by using an exponentiated version of Equation 4 to map
category probabilities onto response probabilities,
    A(x′; x) = p(x′|c)^γ / [p(x′|c)^γ + p(x|c)^γ]    (5)

where the exponent γ raises each term on the right side of Equation 4 to a constant power. This
response rule can be derived by applying a soft threshold to the log odds of the two hypotheses (a
sigmoid function with a gain of γ). As γ increases, the hypothesis with higher posterior probability
will be chosen more often. By equivalence to the Barker acceptance function, this response rule
defines a Markov chain with stationary distribution

    π(x) ∝ p(x|c)^γ.    (6)
Thus, using the weaker assumptions of Equation 5 as a model of human behavior, we can estimate
the category distribution p(x|c) up to a constant exponent. This estimate will have the same modes
and ordering of variances on the variables, but the actual values of the variances will differ.
3.4 Summary
Based on the results in this section, we can define a simple method for drawing samples from a
category distribution using MCMC. On each trial, a proposal is drawn from a symmetric distribution.
A person chooses between the current state and the proposal to select the new state. Assuming that
people's choice behavior follows the Luce choice rule, the stationary distribution of the Markov
chain is the category distribution. The states of the chain are samples from the category distribution,
which provide information about the mental representation of that category.
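The procedure is easy to sanity-check in simulation before involving people. The sketch below (our illustration, not the authors' code) replaces the person with an idealised subject who answers according to Equation 5; with γ = 1 the visited states are distributed approximately as p(x|c), while larger γ concentrates them as in Equation 6.

```python
import numpy as np

def luce_choice(p, x_new, x_cur, gamma, rng):
    """Choice rule of Equation 5: pick the proposal with probability
    p(x_new)^gamma / (p(x_new)^gamma + p(x_cur)^gamma)."""
    a, b = p(x_new) ** gamma, p(x_cur) ** gamma
    return rng.random() < a / (a + b)

def mcmc_with_subject(p_category, x0, n_trials, proposal_sd=0.2, gamma=1.0, seed=1):
    rng = np.random.default_rng(seed)
    x, states = x0, np.empty(n_trials)
    for t in range(n_trials):
        x_new = x + proposal_sd * rng.standard_normal()  # symmetric proposal
        if luce_choice(p_category, x_new, x, gamma, rng):
            x = x_new
        states[t] = x
    return states

# Category distribution over fish height: Gaussian with mean 4.72 cm, sd 0.31 cm.
p_cat = lambda x: np.exp(-0.5 * ((x - 4.72) / 0.31) ** 2)
chain = mcmc_with_subject(p_cat, x0=3.0, n_trials=3000)
print(chain[500:].mean(), chain[500:].std())  # near 4.72 and 0.31 when gamma = 1
```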
4 Testing the MCMC algorithm with known categories
To test whether the procedure outlined in the previous section will produce samples that accurately
reflect people's mental representations, we trained people on a variety of category distributions and
attempted to recover those distributions using MCMC. A simple one-dimensional categorization
task was used, with the height of schematic fish (see Figure 1) being the dimension along which
category distributions were defined. Subjects were trained on two categories of fish height ? a
uniform distribution and a Gaussian distribution ? being told that they were learning to judge whether
a fish came from the ocean (the uniform distribution) or a fish farm (the Gaussian distribution).
Four between-subject conditions tested different means and variances for the Gaussian distributions.
Once subjects were trained, we collected MCMC samples for the Gaussian distributions by asking
subjects to judge which of two fish came from the fish farm.
4.1 Method
Fifty subjects were recruited from the university community via a newspaper advertisement. Data
from one subject was discarded for not finishing the experiment, data from another was discarded
because the chains reached a boundary, and the data of eight others were discarded because their
chains did not cross (more detail below). There were ten observers in each between-subject condition. Each subject was paid $4 for a 35 minute session. The experiment was presented on a
Apple iMac G5 controlled by a script running in Matlab using PsychToolbox extensions [23, 24].
Observers were seated approximately 44 cm away from the display.
Each subject was trained to discriminate between two categories of fish: ocean fish and fish farm
fish. Subjects were instructed, "Fish from the ocean have to fend for themselves and as a result they
have an equal probability of being any size. In contrast, fish from the fish farm are all fed the same
amount of food, so their sizes are similar and only determined by genetics." These instructions were
meant to suggest that the ocean fish were drawn from a uniform distribution and the fish farm fish
were drawn from a Gaussian distribution. The mean and the standard deviation of the Gaussian were
varied in four between-subject conditions, resulting from crossing two levels of the mean, μ = 3.66
cm and μ = 4.72 cm, with two levels of the standard deviation, σ = 3.1 mm and σ = 1.3 mm.
The uniform distribution was the same across training distributions and was bounded at 2.63 cm and
5.76 cm.
The stimuli were a modified version of the fish used in [25]. The fish were constructed from three
ovals, two gray and one black, and a circle on a black background. Fish were all 9.1 cm long with
heights drawn from the Gaussian and uniform distributions in training. Examples of the smallest and
largest fish are shown in Figure 1. During the MCMC trials, the range of possible fish heights
was expanded to be from 0.3 mm to 8.35 cm.
Subjects saw two types of trials. In a training trial, either the uniform or Gaussian distribution was
selected with equal probability, and a single sample was drawn from the selected distribution. The
sampled fish was shown to the subject, who chose which distribution produced the fish. Feedback
was then provided on the accuracy of this choice. In an MCMC trial, two fish were presented on
the screen. Subjects chose which of the two fish came from the Gaussian distribution. Neither fish
had been sampled from the Gaussian distribution. Instead, one fish was the state of a Markov chain
and the other fish was the proposal. The state and proposal were unlabeled and they were randomly
assigned to either the left or right side of the screen. Three MCMC chains were interleaved during
the MCMC trials. The start states of the chains were chosen to be 2.63 cm, 4.20 cm, and 5.76 cm.
Relative to the training distributions, the start states were overdispersed, facilitating assessment of convergence.
Figure 1: Examples of the largest and smallest fish stimuli presented to subjects during training. The
relative size of the fish stimuli are shown here; true display sizes are given in the text.
[Figure 2 plots: fish width (cm) over 80 trials for Subjects 44, 30, 37 and 19, with the training distribution, kernel density estimate and Gaussian fit shown alongside; see the caption below.]
Figure 2: The four rows are subjects from each of the between-subject conditions. The panels in the
first column show the behavior of the three Markov chains per subject. The black lines represent the
states of the Markov chains, the dashed line is the mean of the Gaussian training distribution, and the
dot-dashed lines are two standard deviations from the mean. The second column shows the densities
of the training distributions. These training densities can be compared to the MCMC samples, which
are described by their kernel density estimates and Gaussian fits in the last two columns.
The proposal was chosen from a symmetric discretized pseudo-Gaussian distribution
with a mean equal to the current state. The probability of proposing the current state was set to zero.
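One way to draw such a proposal is sketched below; the grid resolution and proposal width are our illustrative guesses, since the exact values used in the experiment are not stated here.

```python
import numpy as np

def discretized_proposal(x_cur, grid, sd, rng):
    """Sample from a discretized Gaussian centred on the current state, with
    zero probability of re-proposing it. On a uniform grid, away from the
    boundaries, this is symmetric: q(a -> b) = q(b -> a)."""
    w = np.exp(-0.5 * ((grid - x_cur) / sd) ** 2)
    w[np.isclose(grid, x_cur)] = 0.0  # never propose the current state
    return rng.choice(grid, p=w / w.sum())

rng = np.random.default_rng(0)
grid = np.linspace(0.03, 8.35, 200)  # fish heights in cm (0.3 mm to 8.35 cm)
x_cur = grid[100]                    # a state lying on the grid
x_prop = discretized_proposal(x_cur, grid, sd=0.3, rng=rng)
```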
The experiment was broken up into blocks of training and MCMC trials, beginning with 120 training
trials, followed by alternating blocks of 60 MCMC trials and 60 training trials. Training and MCMC
trials were interleaved to keep subjects from forgetting the training distributions. A block of 60 test
trials, identical to the training trials but without feedback, ended the experiment.
4.2 Results
Subjects were excluded if their chains did not converge to the stationary distribution or if the state of
any chain reached the edge of the parameter range. We used a heuristic for determining convergence:
every chain had to cross another chain.1 Figure 2 shows the chains from four subjects, one from each
of the between-subject conditions. Most subjects took approximately 20 trials to produce the first
crossing in their chains, so these trials were discarded and the remaining 60 trials from each chain
were pooled and used in further analyses.
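A sketch of this heuristic, under the assumption that a "cross" means the relative ordering of two chains' states swaps (or the chains touch) between consecutive trials:

```python
import numpy as np

def burn_in_by_crossing(chains):
    """chains: (n_chains, n_trials) array of states. Return the first trial
    by which every chain has crossed some other chain, or None."""
    n_chains, n_trials = chains.shape
    crossed = np.zeros(n_chains, dtype=bool)
    for t in range(1, n_trials):
        for i in range(n_chains):
            for j in range(i + 1, n_chains):
                before = chains[i, t - 1] - chains[j, t - 1]
                after = chains[i, t] - chains[j, t]
                if before * after <= 0:  # ordering swapped, or chains touched
                    crossed[i] = crossed[j] = True
        if crossed.all():
            return t
    return None

# Usage: discard everything before the crossing point, then pool the chains.
# t0 = burn_in_by_crossing(chains); samples = chains[:, t0:].ravel()
```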
The distributions on the right hand side of Figure 2 show the training distribution, best fit Gaussian
to the MCMC samples, and kernel density estimate based on the MCMC samples. The distributions
estimated for the subjects shown in this figure match well with the training distribution. The mean, μ,
and standard deviation, σ, were computed from the MCMC samples produced by each subject. The
average of these estimates for each condition is shown in Figure 3. As predicted, μ was higher for
subjects trained on Gaussians with higher means, and σ was higher for subjects trained on Gaussians
with higher standard deviations. These differences were statistically significant, with a one-tailed
Student's t-test for independent samples giving t(38) = 7.36, p < 0.001 and t(38) = 2.01, p < 0.05
for μ and σ respectively. The figure also shows that the means of the MCMC samples corresponded
well with the actual means of the training distributions. The standard deviations of the samples
tended to be higher than the training distributions, which could be a consequence of either perceptual
noise (increasing the effective variation in stimuli associated with a category) or choices being made
in a way consistent with the exponentiated choice rule with γ < 1.
1. Many heuristics have been proposed for assessing convergence. The heuristic we used is simple to apply in a one-dimensional state space. It is a necessary, but not sufficient, condition for convergence.
[Figure 3 plots: mean estimates of μ and σ from the MCMC samples for the four trained Gaussians (μ = 3.66 or 4.72 cm; σ = 0.13 or 0.31 cm); see the caption below.]
Figure 3: The bar plots show the mean of μ and σ across the MCMC samples produced by subjects
in all four training conditions. Error bars are one standard error. The black dot indicates the actual
value of μ and σ for each condition.
5 Investigating the structure of natural categories
The previous experiment provided evidence that the assumptions underlying the MCMC method are
approximately correct, as the samples recovered by the method matched the training distribution.
Now we will demonstrate this method in a much more interesting case: sampling from subjective
probability distributions that have been built up from real-world experience. The natural categories
of the shapes of giraffes, horses, cats, and dogs were explored in a nine-dimensional stick figure
space [26]. The responses of a single subject are shown in Figure 4. For each animal, three Markov
chains were started from different states. The three starting states were the same between animal
conditions. Figure 4B shows the chains converging for the giraffe condition. The different animal
conditions converged to different areas of the parameter space (Figure 4C) and the means across
samples produced stick figures that correspond well to the tested categories (Figure 4D).
6 Summary and conclusion
We have developed a Markov chain Monte Carlo method for sampling from a subjective probability distribution. This method allows a person to act as an element of an MCMC algorithm by
constructing a task for which choice probabilities follow a valid acceptance function. By choosing
between the current state and a proposal, people produce a Markov chain with a stationary distribution matching their mental representation of a category. The results of our experiment indicate
that this method accurately uncovers differences in mental representations that result from training
people on categories with different structures. In addition, we explored the subjective probability
distributions of natural animal shapes in a multidimensional parameter space.
This method is a complement to established methods such as classification images [27]. Our method
estimates the subjective probability distribution, while classification images estimate the decision
boundary between two classes. Both methods can contribute to the complete picture of how people
make categorization decisions. The MCMC method corresponds most closely to procedures for
gathering typicality ratings in categorization research. Typicality ratings are used to determine which
objects are better examples of a category than other objects. Our MCMC method yields the same
information, but provides a way to efficiently do so when the category distribution is concentrated
in a small region of a large parameter space. Testing a random subset of objects from this type of
space will result in many uninformative trials. MCMC's use of previous responses to select new test
trials is theoretically more efficient, but future work is needed to empirically validate this claim.
Our MCMC method provides a way to explore the subjective probability distributions that people
associate with categories. Similar tasks could be used to investigate subjective probability distributions in other settings, providing a valuable tool for testing probabilistic models of cognition. The
general principle of identifying connections between models of human performance and machine
learning algorithms can teach us a great deal about cognition. For instance, Gibbs sampling could be used to generate samples from a distribution, if a clever method for inducing people to sample from conditional distributions could be found.
Figure 4: Task and results for an experiment exploring natural categories of animals using stick
figure stimuli. (A) Screen capture from the experiment, where people make a choice between the
current state of the Markov chain and a proposed state. (B) States of the Markov chain for the subject
when estimating the distribution for giraffes. The nine-dimensional space characterizing the stick
figures is projected onto the two dimensions that best discriminate the different animal distributions
using linear discriminant analysis. Each chain is a different color and the start states of the chains
are indicated by the filled circle. The dotted lines are samples that were discarded to ensure that the
Markov chains had converged, and the solid lines are the samples that were retained. (C) Samples
from distributions associated with all four animals for the subject, projected onto the same plane
used in B. Two samples from each distribution are displayed in the bubbles. The samples capture
the similarities and differences between the four categories of animals, and reveal the variation in
the members of those categories. (D) Mean of the samples for each animal condition.
Using people as the elements of a machine learning
algorithm is a virtually unexplored area that should be exploited in order to more efficiently test
hypotheses about the knowledge that guides human learning and inference.
References
[1] M. Oaksford and N. Chater, editors. Rational models of cognition. Oxford University Press, 1998.
[2] N. Chater, J. B. Tenenbaum, and A. Yuille. Special issue on "Probabilistic models of cognition". Trends in Cognitive Sciences, 10(7), 2006.
[3] J. R. Anderson. The adaptive character of thought. Erlbaum, Hillsdale, NJ, 1990.
[4] F. G. Ashby and L. A. Alfonso-Reese. Categorization as probability density estimation. Journal of Mathematical Psychology, 39:216–233, 1995.
[5] W. R. Gilks, S. Richardson, and D. J. Spiegelhalter, editors. Markov Chain Monte Carlo in Practice. Chapman and Hall, Suffolk, 1996.
[6] A. W. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller. Equations of state calculations by fast computing machines. Journal of Chemical Physics, 21:1087–1092, 1953.
[7] W. K. Hastings. Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57:97–109, 1970.
[8] A. A. Barker. Monte Carlo calculations of the radial distribution functions for a proton-electron plasma. Australian Journal of Physics, 18:119–133, 1965.
[9] S. K. Reed. Pattern recognition and categorization. Cognitive Psychology, 3:393–407, 1972.
[10] D. L. Medin and M. M. Schaffer. Context theory of classification learning. Psychological Review, 85:207–238, 1978.
[11] R. M. Nosofsky. Attention, similarity, and the identification-categorization relationship. Journal of Experimental Psychology: General, 115:39–57, 1986.
[12] R. D. Luce. Detection and recognition. In R. D. Luce, R. R. Bush, and E. Galanter, editors, Handbook of Mathematical Psychology, Volume 1, pages 103–190. John Wiley and Sons, Inc., New York and London, 1963.
[13] R. N. Shepard. Stimulus and response generalization: A stochastic model relating generalization to distance in psychological space. Psychometrika, 22:325–345, 1957.
[14] R. A. Bradley. Incomplete block rank analysis: On the appropriateness of the model of a method of paired comparisons. Biometrics, 10:375–390, 1954.
[15] F. R. Clarke. Constant-ratio rule for confusion matrices in speech communication. The Journal of the Acoustical Society of America, 29:715–720, 1957.
[16] J. W. Hopkins. Incomplete block rank analysis: Some taste test results. Biometrics, 10:391–399, 1954.
[17] N. Vulkan. An economist's perspective on probability matching. Journal of Economic Surveys, 14:101–118, 2000.
[18] F. G. Ashby. Multidimensional models of perception and cognition. Erlbaum, Hillsdale, NJ, 1992.
[19] R. M. Nosofsky. Attention and learning processes in the identification and categorization of integral stimuli. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13:87–108, 1987.
[20] G. C. Oden and D. W. Massaro. Integration of featural information in speech perception. Psychological Review, 85:172–191, 1978.
[21] J. L. McClelland and J. L. Elman. The TRACE model of speech perception. Cognitive Psychology, 18:1–86, 1986.
[22] F. G. Ashby and W. T. Maddox. Relations between prototype, exemplar, and decision bound models of categorization. Journal of Mathematical Psychology, 37:372–400, 1993.
[23] D. H. Brainard. The psychophysics toolbox. Spatial Vision, 10:433–436, 1997.
[24] D. G. Pelli. The VideoToolbox software for visual psychophysics: Transforming numbers into movies. Spatial Vision, 10:437–442, 1997.
[25] J. Huttenlocher, L. V. Hedges, and J. L. Vevea. Why do categories affect stimulus judgment? Journal of Experimental Psychology: General, 129:220–241, 2000.
[26] C. Olman and D. Kersten. Classification objects, ideal observers, and generative models. Cognitive Science, 28:227–239, 2004.
[27] A. J. Ahumada and J. Lovell. Stimulus features in signal detection. Journal of the Acoustical Society of America, 49:1751–1756, 1971.
2,442 | 3,215 | Learning with Transformation Invariant Kernels
Christian Walder
Max Planck Institute for Biological Cybernetics
72076 Tübingen, Germany
[email protected]
Olivier Chapelle
Yahoo! Research
Santa Clara, CA
[email protected]
Abstract
This paper considers kernels invariant to translation, rotation and dilation. We
show that no non-trivial positive definite (p.d.) kernels exist which are radial and
dilation invariant, only conditionally positive definite (c.p.d.) ones. Accordingly,
we discuss the c.p.d. case and provide some novel analysis, including an elementary derivation of a c.p.d. representer theorem. On the practical side, we give a
support vector machine (s.v.m.) algorithm for arbitrary c.p.d. kernels. For the thin-plate kernel this leads to a classifier with only one parameter (the amount of regularisation), which we demonstrate to be as effective as an s.v.m. with the Gaussian
kernel, even though the Gaussian involves a second parameter (the length scale).
1 Introduction
Recent years have seen widespread application of reproducing kernel Hilbert space (r.k.h.s.) based
methods to machine learning problems (Schölkopf & Smola, 2002). As a result, kernel methods
have been analysed to considerable depth. In spite of this, the aspects which we presently investigate
seem to have received insufficient attention, at least within the machine learning community.
The first is transformation invariance of the kernel, a topic touched on in (Fleuret & Sahbi, 2003).
Note we do not mean by this the local invariance (or insensitivity) of an algorithm to application
specific transformations which should not affect the class label, such as one pixel image translations
(see e.g. (Chapelle & Schölkopf, 2001)). Rather we are referring to global invariance to transformations, in the way that radial kernels (i.e. those of the form k(x, y) = φ(‖x − y‖)) are invariant to
translations. In Sections 2 and 3 we introduce the more general concept of transformation scaledness, focusing on translation, dilation and orthonormal transformations. An interesting result is that
there exist no non-trivial p.d. kernel functions which are radial and dilation scaled.
There do exist non-trivial c.p.d. kernels with the stated invariances however. Motivated by this,
we analyse the c.p.d. case in Section 4, giving novel elementary derivations of some key results,
most notably a c.p.d. representer theorem. We then give in Section 6.1 an algorithm for applying
the s.v.m. with arbitrary c.p.d. kernel functions. It turns out that this is rather useful in practice,
for the following reason. Due to its invariances, the c.p.d. thin-plate kernel which we discuss in
Section 5, is not only richly non-linear, but enjoys a duality between the length-scale parameter
and the regularisation parameter of Tikhonov regularised solutions such as the s.v.m. In Section
7 we compare the resulting classifier (which has only a regularisation parameter), to that of the
s.v.m. with Gaussian kernel (which has an additional length scale parameter). The results show that
the two algorithms perform roughly as well as one another on a wide range of standard machine
learning problems, notwithstanding the new method?s advantage in having only one free parameter.
In Section 8 we make some concluding remarks.
2 Transformation Scaled Spaces and Tikhonov Regularisation
Definition 2.1. Let T be a bijection on X and F a Hilbert space of functions on some non-empty
set X such that f ↦ f ∘ T is a bijection on F. F is T-scaled if

    ⟨f, g⟩_F = g_T(F) ⟨f ∘ T, g ∘ T⟩_F    (1)

for all f, g ∈ F, where g_T(F) ∈ R⁺ is the norm scaling function associated with the operation of T
on F. If g_T(F) = 1 we say that F is T-invariant.
The following clarifies the behaviour of Tikhonov regularised solutions in such spaces.
Lemma 2.2. For any Ω : F → R̄ and T such that f ↦ f ∘ T is a bijection of F, if the left hand
side is unique then

    argmin_{f ∈ F} Ω(f) = (argmin_{f_T ∈ F} Ω(f_T ∘ T)) ∘ T.

Proof. Let f* = argmin_{f ∈ F} Ω(f) and f*_T = argmin_{f_T ∈ F} Ω(f_T ∘ T). By definition we have
that ∀g ∈ F, Ω(f*_T ∘ T) ≤ Ω(g ∘ T). But since f ↦ f ∘ T is a bijection on F, we also have
∀g ∈ F, Ω(f*_T ∘ T) ≤ Ω(g). Hence, given the uniqueness, this implies f* = f*_T ∘ T.
The following Corollary follows immediately from Lemma 2.2 and Definition 2.1.
Corollary 2.3. Let L_i be any loss function. If F is T-scaled and the left hand side is unique then

    argmin_{f ∈ F} ‖f‖²_F + Σ_i L_i(f(x_i)) = (argmin_{f ∈ F} ‖f‖²_F / g_T(F) + Σ_i L_i(f(T x_i))) ∘ T.
Corollary 2.3 includes various learning algorithms for various choices of L_i: for example the
s.v.m. with linear hinge loss for L_i(t) = max(0, 1 − y_i t), and kernel ridge regression for
L_i(t) = (y_i − t)². Let us now introduce the specific transformations we will be considering.
Definition 2.4. Let W_s, T_a and O_A be the dilation, translation and orthonormal transformations
R^d → R^d defined for s ∈ R \ {0}, a ∈ R^d and orthonormal A : R^d → R^d by W_s x = sx,
T_a x = x + a and O_A x = Ax respectively.
Hence, for an r.k.h.s. which is W_s-scaled for arbitrary s ≠ 0, training an s.v.m. and dilating the
resultant decision function by some amount is equivalent to training the s.v.m. on similarly dilated
input patterns but with a regularisation parameter adjusted according to Corollary 2.3.
While (Fleuret & Sahbi, 2003) demonstrated this phenomenon for the s.v.m. with a particular kernel,
as we have just seen it is easy to demonstrate for the more general Tikhonov regularisation setting
with any function norm satisfying our definition of transformation scaledness.
3 Transformation Scaled Reproducing Kernel Hilbert Spaces
We now derive the necessary and sufficient conditions for a reproducing kernel (r.k.) to correspond
to an r.k.h.s. which is T-scaled. The relationship between T-scaled r.k.h.s.'s and their r.k.'s is easy
to derive given the uniqueness of the r.k. (Wendland, 2004). It is given by the following novel

Lemma 3.1 (Transformation scaled r.k.h.s.). The r.k.h.s. H with r.k. k : X × X → R, i.e. with k
satisfying

    ⟨k(·, x), f(·)⟩_H = f(x),    (2)

is T-scaled iff

    k(x, y) = g_T(H) k(T x, T y).    (3)
Which we prove in the accompanying technical report (Walder & Chapelle, 2007). It is now easy
to see that, for example, the homogeneous polynomial kernel k(x, y) = ⟨x, y⟩^p corresponds to a
W_s-scaled r.k.h.s. H with g_{W_s}(H) = ⟨x, y⟩^p / ⟨sx, sy⟩^p = s^{−2p}. Hence when the homogeneous
polynomial kernel is used with the hard-margin s.v.m. algorithm, the result is invariant to multiplicative scaling of the training and test data. If the soft-margin s.v.m. is used however, then the invariance
holds only under appropriate scaling (as per Corollary 2.3) of the margin softness parameter (i.e. λ
of the later equation (14)).
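To see Corollary 2.3 at work for this kernel, the following numerical sketch (ours) checks, for kernel ridge regression with the homogeneous polynomial kernel, that training on dilated inputs with the regulariser rescaled by 1/g_{W_s}(H) = s^{2p} and composing the result with W_s recovers the original solution.

```python
import numpy as np

p, s, lam = 3, 2.5, 0.1                    # degree, dilation factor, regulariser
rng = np.random.default_rng(0)
X, y = rng.standard_normal((20, 4)), rng.standard_normal(20)
k = lambda A, B: (A @ B.T) ** p            # homogeneous polynomial kernel

# f1 = argmin_f lam*||f||^2 + sum_i (f(x_i) - y_i)^2, via alpha = (K + lam*I)^-1 y.
a1 = np.linalg.solve(k(X, X) + lam * np.eye(20), y)

# The same problem on dilated inputs s*X, with lam rescaled to lam * s^(2p).
a2 = np.linalg.solve(k(s * X, s * X) + lam * s ** (2 * p) * np.eye(20), y)

x = rng.standard_normal(4)
f1_at_x = k(X, x[None, :]).ravel() @ a1           # f1(x)
f2_at_sx = k(s * X, s * x[None, :]).ravel() @ a2  # f2(W_s x)
print(np.allclose(f1_at_x, f2_at_sx))             # True: f1 = f2 o W_s
```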
We can now show that there exist no non-trivial r.k.h.s.'s with radial kernels that are also W_s-scaled
for all s ≠ 0. First however we need the following standard result on homogeneous functions:

Lemma 3.2. If φ : [0, ∞) → R and g : (0, ∞) → R satisfy φ(r) = g(s) φ(rs) for all r ≥ 0 and
s > 0, then φ(r) = a δ(r) + b r^p and g(s) = s^{−p}, where a, b, p ∈ R, p ≠ 0, and δ is Dirac's function.
Which we prove in the accompanying technical report (Walder & Chapelle, 2007). Now, suppose
that H is an r.k.h.s. with r.k. k on R^d × R^d. If H is T_a-invariant for all a ∈ R^d then

    k(x, y) = k(T_{−y} x, T_{−y} y) = k(x − y, 0) ≜ φ_T(x − y).

If in addition to this H is O_A-invariant for all orthogonal A, then by choosing A such that
A(x − y) = ‖x − y‖ ê, where ê is an arbitrary unit vector in R^d, we have

    k(x, y) = k(O_A x, O_A y) = φ_T(O_A(x − y)) = φ_T(‖x − y‖ ê) ≜ φ_OT(‖x − y‖),

i.e. k is radial. All of this is straightforward, and a similar analysis can be found in (Wendland,
2004). Indeed the widely used Gaussian kernel satisfies both of the above invariances. But if we
now also assume that H is W_s-scaled for all s ≠ 0 (this time with arbitrary g_{W_s}(H)), then

    k(x, y) = g_{W_s}(H) k(W_s x, W_s y) = g_{W_{|s|}}(H) φ_OT(|s| ‖x − y‖),

so that letting r = ‖x − y‖ we have that φ_OT(r) = g_{W_{|s|}}(H) φ_OT(|s| r), and hence by Lemma 3.2
that φ_OT(r) = a δ(r) + b r^p where a, b, p ∈ R. This is positive semi-definite for the trivial case
p = 0, but there are various ways of showing this cannot be non-trivially positive semi-definite for
p ≠ 0. One simple way is to consider two arbitrary vectors x1 and x2 such that ‖x1 − x2‖ = d > 0.
For the corresponding Gram matrix

    K ≜ [ a      b d^p ]
        [ b d^p  a     ]

to be positive semi-definite we require 0 ≤ det(K) = a² − b² d^{2p}, but for arbitrary d > 0 and
a < ∞, this implies b = 0. This may seem disappointing, but fortunately there do exist c.p.d. kernel
functions with the stated properties, such as the thin-plate kernel. We discuss this case in detail in
Section 5, after the following particularly elementary and in part novel introduction to c.p.d. kernels.
4 Conditionally Positive Definite Kernels

In the last Section we alluded to c.p.d. kernel functions; these are given by the following

Definition 4.1. A continuous function k : X × X → R is conditionally positive definite with
respect to (w.r.t.) the linear space of functions P if, for all m ∈ N, all {x_i}_{i=1...m} ⊂ X, and all
α ∈ R^m \ {0} satisfying Σ_{j=1}^m α_j p(x_j) = 0 for all p ∈ P, the following holds

    Σ_{j,k=1}^m α_j α_k k(x_j, x_k) > 0.    (4)

Due to the positivity condition (4), as opposed to one of non-negativity, we are referring to c.p.d.
rather than conditionally positive semi-definite kernels. The c.p.d. case is more technical than the
p.d. case. We provide a minimalistic discussion here; for more details we recommend e.g. (Wendland, 2004). To avoid confusion, let us note in passing that while the above definition is quite standard (see e.g. (Wendland, 2004; Wahba, 1990)), many authors in the machine learning community
use a definition of c.p.d. kernels which corresponds to our definition when P = {1} (e.g. (Schölkopf
& Smola, 2002)) or when P is taken to be the space of polynomials of some fixed maximum degree
(e.g. (Smola et al., 1998)). Let us now adopt the notation P^⊥(x_1, . . . , x_m) for the set

    {α ∈ R^m : Σ_{i=1}^m α_i p(x_i) = 0 for all p ∈ P}.
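The defining condition (4) can also be probed numerically. The sketch below (ours) uses the second order thin-plate kernel of Section 5 in d = 2, which is c.p.d. w.r.t. P = π_1(R²), and projects a random coefficient vector onto P^⊥(x_1, . . . , x_m) before evaluating the quadratic form.

```python
import numpy as np

def thin_plate_gram(X, m=2):
    """Gram matrix of the order-m thin-plate kernel, even dimension (Def. 5.1)."""
    d = X.shape[1]
    R = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    K = np.zeros_like(R)
    nz = R > 0
    K[nz] = (-1.0) ** (m - (d - 2) // 2) * R[nz] ** (2 * m - d) * np.log(R[nz])
    return K

rng = np.random.default_rng(1)
X = rng.standard_normal((15, 2))                       # d = 2
P = np.hstack([np.ones((15, 1)), X])                   # basis of pi_1(R^2) at the x_i
alpha = rng.standard_normal(15)
alpha -= P @ np.linalg.lstsq(P, alpha, rcond=None)[0]  # project onto P-perp
print(alpha @ thin_plate_gram(X) @ alpha > 0)          # True, as (4) requires
```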
The c.p.d. kernels of Definition 4.1 naturally define a Hilbert space of functions as per

Definition 4.2. Let k : X × X → R be a c.p.d. kernel w.r.t. P. We define F_k(X) to be the Hilbert
space of functions which is the completion of the set

    { Σ_{j=1}^m α_j k(·, x_j) : m ∈ N, x_1, . . . , x_m ∈ X, α ∈ P^⊥(x_1, . . . , x_m) },

which due to the definition of k we may endow with the inner product

    ⟨ Σ_{j=1}^m α_j k(·, x_j), Σ_{k=1}^n β_k k(·, y_k) ⟩_{F_k(X)} = Σ_{j=1}^m Σ_{k=1}^n α_j β_k k(x_j, y_k).    (5)

Note that k is not the r.k. of F_k(X); in general k(x, ·) does not even lie in F_k(X). For the
remainder of this Section we develop a c.p.d. analog of the representer theorem. We begin with
Lemma 4.3. Let k : X × X → R be a c.p.d. kernel w.r.t. P and p_1, . . . , p_r a basis for P.
For any {(x_1, y_1), . . . , (x_m, y_m)} ⊂ X × R, there exists an s = s_{F_k(X)} + s_P where s_{F_k(X)} =
Σ_{j=1}^m α_j k(·, x_j) ∈ F_k(X) and s_P = Σ_{k=1}^r β_k p_k ∈ P, such that s(x_i) = y_i, i = 1 . . . m.

A simple and elementary proof (which shows (17) is solvable when λ = 0) is given in (Wendland,
2004) and reproduced in the accompanying technical report (Walder & Chapelle, 2007). Note that
although such an interpolating function s always exists, it need not be unique. The distinguishing
property of the interpolating function is that the norm of the part which lies in F_k(X) is minimum.
Definition 4.4. Let k : X × X → R be a c.p.d. kernel w.r.t. P. We use the notation P_⊥(P) to denote
the projection F_k(X) ⊕ P → F_k(X).

Note that F_k(X) ⊕ P is a direct sum since p = Σ_{j=1}^n α_j k(z_j, ·) ∈ P ∩ F_k(X) implies

    ‖p‖²_{F_k(X)} = ⟨p, p⟩_{F_k(X)} = Σ_{i=1}^n Σ_{j=1}^n α_i α_j k(z_i, z_j) = Σ_{j=1}^n α_j p(z_j) = 0.
Hence, returning to the main thread, we have the following lemma, our proof of which seems to
be novel and particularly elementary.

Lemma 4.5. Denote by k : X × X → R a c.p.d. kernel w.r.t. P and by p_1, . . . , p_r a basis for
P. Consider an arbitrary function s = s_{F_k(X)} + s_P with s_{F_k(X)} = Σ_{j=1}^m α_j k(·, x_j) ∈ F_k(X)
and s_P = Σ_{k=1}^r β_k p_k ∈ P. ‖P_⊥(P) s‖_{F_k(X)} ≤ ‖P_⊥(P) f‖_{F_k(X)} holds for all f ∈ F_k(X) ⊕ P
satisfying

    f(x_i) = s(x_i), i = 1 . . . m.    (6)
Proof. Let f be an arbitrary element of F_k(X) ⊕ P. We can always write f as

    f = Σ_{j=1}^m (α_j + γ_j) k(·, x_j) + Σ_{l=1}^n b_l k(·, z_l) + Σ_{k=1}^r c_k p_k.

If we define¹ [P_x]_{i,j} = p_j(x_i), [P_z]_{i,j} = p_j(z_i), [K_xx]_{i,j} = k(x_i, x_j), [K_xz]_{i,j} = k(x_i, z_j),
[K_zx]_{i,j} = k(z_i, x_j) and [K_zz]_{i,j} = k(z_i, z_j), then the condition (6) can hence be written

    P_x β = K_xx γ + K_xz b + P_x c,    (7)

and the definition of F_k(X) requires that e.g. α ∈ P^⊥(x_1, . . . , x_m), hence implying the constraints

    P_x^T α = 0 and P_x^T (α + γ) + P_z^T b = 0.    (8)

The inequality to be demonstrated is then

    L ≜ α^T K_xx α ≤ (α + γ; b)^T [K_xx, K_xz; K_zx, K_zz] (α + γ; b) ≜ R,    (9)

where we write K for the block matrix appearing in (9). By expanding,

    R = α^T K_xx α + (γ; b)^T K (γ; b) + 2 (α; 0)^T K (γ; b) = L + Λ_1 + 2 Λ_2,

with Λ_1 ≜ (γ; b)^T K (γ; b) and Λ_2 ≜ (α; 0)^T K (γ; b). It follows from (8) that P_x^T γ + P_z^T b = 0,
so that (γ; b) ∈ P^⊥(x_1, . . . , x_m, z_1, . . . , z_n) and hence Λ_1 ≥ 0 since k is c.p.d. w.r.t. P. But (7)
and (8) imply that L ≤ R, since

    Λ_2 = α^T K_xx γ + α^T K_xz b = α^T P_x (β − c) − α^T K_xz b + α^T K_xz b = 0,

where the first term vanishes because P_x^T α = 0.
1. Square brackets with subscripts denote matrix elements, and colons denote entire rows or columns.
Using these results it is now easy to prove an analog of the representer theorem for the p.d. case.

Theorem 4.6 (Representer theorem for the c.p.d. case). Denote by k : X × X → R a c.p.d. kernel
w.r.t. P, by Ω a strictly monotonic increasing real-valued function on [0, ∞), and by c : R^m →
R ∪ {∞} an arbitrary cost function. There exists a minimiser over F_k(X) ⊕ P of

    W(f) ≜ c(f(x_1), . . . , f(x_m)) + Ω(‖P_⊥(P) f‖²_{F_k(X)})    (10)

which admits the form Σ_{i=1}^m α_i k(·, x_i) + p, where p ∈ P.

Proof. Let f be a minimiser of W. Let s = Σ_{i=1}^m α_i k(·, x_i) + p satisfy s(x_i) = f(x_i), i =
1 . . . m. By Lemma 4.3 we know that such an s exists. But by Lemma 4.5, ‖P_⊥(P) s‖²_{F_k(X)} ≤
‖P_⊥(P) f‖²_{F_k(X)}. As a result, W(s) ≤ W(f) and s is a minimiser of W with the correct form.
5 Thin-Plate Regulariser

Definition 5.1. The m-th order thin-plate kernel k_m : R^d × R^d → R is given by

    k_m(x, y) = (−1)^{m−(d−2)/2} ‖x − y‖^{2m−d} log(‖x − y‖)   if d ∈ 2N,
    k_m(x, y) = (−1)^{m−(d−1)/2} ‖x − y‖^{2m−d}                 if d ∈ 2N − 1,    (11)

for x ≠ y, and zero otherwise. k_m is c.p.d. with respect to π_{m−1}(R^d), the set of d-variate polynomials of degree at most m − 1. The kernel induces the following norm on the space F_{k_m}(R^d) of
Definition 4.2 (this is not obvious; see e.g. (Wendland, 2004; Wahba, 1990))

    ⟨f, g⟩_{F_{k_m}(R^d)} ≜ ⟨Υf, Υg⟩_{L_2(R^d)}
        = Σ_{i_1=1}^d · · · Σ_{i_m=1}^d ∫_{R^d} (∂^m f / ∂x_{i_1} · · · ∂x_{i_m}) (∂^m g / ∂x_{i_1} · · · ∂x_{i_m}) dx_1 . . . dx_d,

where Υ : F_{k_m}(R^d) → L_2(R^d) is a regularisation operator, implicitly defined above.

Clearly g_{O_A}(F_{k_m}(R^d)) = g_{T_a}(F_{k_m}(R^d)) = 1. Moreover, from the chain rule we have

    ∂^m (f ∘ W_s) / ∂x_{i_1} · · · ∂x_{i_m} = s^m (∂^m f / ∂x_{i_1} · · · ∂x_{i_m}) ∘ W_s    (12)

and therefore, since ⟨f, g⟩_{L_2(R^d)} = s^d ⟨f ∘ W_s, g ∘ W_s⟩_{L_2(R^d)}, we can immediately write

    ⟨Υ(f ∘ W_s), Υ(g ∘ W_s)⟩_{L_2(R^d)} = s^{2m} ⟨(Υf) ∘ W_s, (Υg) ∘ W_s⟩_{L_2(R^d)} = s^{2m−d} ⟨Υf, Υg⟩_{L_2(R^d)}    (13)

so that g_{W_s}(F_{k_m}(R^d)) = s^{−(2m−d)}. Note that although it may appear that this can be shown more
easily using (11) and an argument similar to Lemma 3.1, the process is actually more involved due
to the log factor in the first case of (11), and it is necessary to use the fact that the kernel is c.p.d.
w.r.t. π_{m−1}(R^d). Since this is redundant and not central to the paper we omit the details.
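The same scaling can be observed at the level of the quadratic form α^T K α, which by (5) is the squared norm of Σ_j α_j k_m(·, x_j): for α ∈ π_{m−1}(R^d)^⊥, the log terms introduced by dilation in the even-dimensional case of (11) cancel, and the form scales by exactly s^{2m−d}. A small numerical check (ours):

```python
import numpy as np

def thin_plate_gram(X, m=2):
    # order-m thin-plate Gram matrix for even d, as in the earlier sketch
    d = X.shape[1]
    R = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    K = np.zeros_like(R)
    nz = R > 0
    K[nz] = (-1.0) ** (m - (d - 2) // 2) * R[nz] ** (2 * m - d) * np.log(R[nz])
    return K

s, m = 1.7, 2
rng = np.random.default_rng(2)
X = rng.standard_normal((12, 2))               # d = 2, so 2m - d = 2
P = np.hstack([np.ones((12, 1)), X])
a = rng.standard_normal(12)
a -= P @ np.linalg.lstsq(P, a, rcond=None)[0]  # a in pi_1-perp

q = a @ thin_plate_gram(X, m) @ a
q_s = a @ thin_plate_gram(s * X, m) @ a
print(np.isclose(q_s, s ** (2 * m - 2) * q))   # True: the norm scales by s^(2m-d)
```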
6 Conditionally Positive Definite s.v.m.

In Section 3 we showed that non-trivial kernels which are both radial and dilation scaled cannot
be p.d. but rather only c.p.d. It is therefore somewhat surprising that the s.v.m., one of the most
widely used kernel algorithms, has been applied only with p.d. kernels, or kernels which are
c.p.d. with respect only to P = {1} (see e.g. (Boughorbel et al., 2005)). After all, it seems interesting
to construct a classifier independent not only of the absolute positions of the input data, but also of
their absolute multiplicative scale.

Hence we propose using the thin-plate kernel with the s.v.m. by minimising the s.v.m. objective over
the space F_k(X) ⊕ P (or in some cases just over F_k(X), as we shall see in Section 6.1). For this
we require somewhat non-standard s.v.m. optimisation software. The method we propose seems
simpler and more robust than previously mentioned solutions. For example, (Smola et al., 1998)
mentions the numerical instabilities which may arise with the direct application of standard solvers.
Dataset    Gaussian         Thin-Plate       dim/n        Dataset    Gaussian        Thin-Plate      dim/n
banana     10.567 (0.547)   10.667 (0.586)   2/3000*      image      3.210 (0.504)   1.867 (0.338)   18/2086
breast     26.574 (2.259)   28.026 (2.900)   9/263        ringnm     1.533 (0.229)   1.833 (0.200)   20/3000*
diabetes   23.578 (0.989)   23.452 (1.215)   8/768        splice     8.931 (0.640)   8.651 (0.433)   60/2844
flare      36.143 (0.969)   38.190 (2.317)   9/144        thyroid    4.199 (1.087)   3.247 (1.211)   5/215
german     24.700 (1.453)   24.800 (1.373)   20/1000      twonm      1.833 (0.194)   1.867 (0.254)   20/3000*
heart      17.407 (2.142)   17.037 (2.290)   13/270       wavefm     8.333 (0.378)   8.233 (0.484)   21/3000
Table 1: Comparison of Gaussian and thin-plate kernel with the s.v.m. on the UCI data sets. Results
are reported as "mean % classification error (standard error)". dim is the input dimension and n
the total number of data points. A star in the n column means that more examples were available
but we kept only a maximum of 2000 per class in order to reduce the computational burden of the
extensive number of cross validation and model selection training runs (see Section 7). None of the
data sets were linearly separable so we always used the normal (β unconstrained) version of
the optimisation described in Section 6.1.
6.1
Optimising an s.v.m. with c.p.d. Kernel
It is simple to implement an s.v.m. with a kernel ? which is c.p.d. w.r.t. an arbitrary finite dimensional
space of functions P by extending the primal optimisation approach of (Chapelle, 2007) to the c.p.d.
case. The quadratic loss s.v.m. solution can be formulated as arg minf ?F? (X )?P of
n
X
2
? kP? (P)f kF? (X ) +
max(0, 1 ? yi f (xi ))2 ,
(14)
i=1
Note that for the second order thin-plate case we have X = Rd and P = ?1 (Rd ) (the space of
constant and first order polynomials). Hence dim (P) = d + 1 and we can take the basis to be
pj (x) = [x]j for j = 1 . . . d along with pd+1 = 1.
It follows immediately from Theorem 4.6 that, letting $p_1, p_2, \ldots, p_{\dim(\mathcal{P})}$ span $\mathcal{P}$, the solution to (14) is given by $f_{svm}(x) = \sum_{i=1}^{n} \alpha_i \phi(x_i, x) + \sum_{j=1}^{\dim(\mathcal{P})} \beta_j p_j(x)$. Now, if we consider only the margin violators, those vectors which (at a given step of the optimisation process) satisfy $y_i f(x_i) < 1$, we can replace the $\max(0, \cdot)$ in (14) with $(\cdot)$. This is equivalent to making a local second order approximation. Hence by repeatedly solving in this way while updating the set of margin violators, we will have implemented a so-called Newton optimisation. Now, since
$$\| P_{\perp(\mathcal{P})} f_{svm} \|^2_{\mathcal{F}_\phi(\mathcal{X})} = \sum_{i,j=1}^{n} \alpha_i \alpha_j \phi(x_i, x_j), \quad (15)$$
the local approximation of the problem is, in $\alpha$ and $\beta$,
$$\text{minimise } \lambda \alpha^\top \Phi \alpha + \| \Phi \alpha + P \beta - y \|^2, \text{ subject to } P^\top \alpha = 0, \quad (16)$$
where $[\Phi]_{i,j} = \phi(x_i, x_j)$, $[P]_{j,k} = p_k(x_j)$, and we assumed for simplicity that all vectors violate the margin. The solution in this case is given by (Wahba, 1990)
$$\begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \begin{pmatrix} \lambda I + \Phi & P \\ P^\top & 0 \end{pmatrix}^{-1} \begin{pmatrix} y \\ 0 \end{pmatrix} . \quad (17)$$
In practice it is essential that one makes a change of variable for $\beta$ in order to avoid the numerical problems which arise when $P$ is rank deficient or numerically close to it. In particular we make the QR factorisation (Golub & Van Loan, 1996) $P^\top = QR$, where $Q^\top Q = I$ and $R$ is square. We then solve for $\alpha$ and $\gamma = R\beta$. As a final step at the end of the optimisation process, we take the minimum norm solution of the system $\gamma = R\beta$, namely $\beta = R^{\#}\gamma$ where $R^{\#}$ is the pseudo inverse of $R$. Note that although (17) is standard for squared loss regression models with c.p.d. kernels, our use of it in optimising the s.v.m. is new. The precise algorithm is given in (Walder & Chapelle, 2007), where we also detail two efficient factorisation techniques, specific to the new s.v.m. setting. Moreover, the method we present in Section 6.2 deviates considerably further from the existing literature.
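To make the iteration concrete, here is a minimal NumPy sketch of the Newton solver, under our own naming (Phi for the kernel matrix, P for the polynomial evaluations). It restricts the system (17) to the current margin violators and, unlike the actual implementation of (Walder & Chapelle, 2007), uses a plain least-squares solve instead of the QR change of variable, so it is illustrative rather than production code.

```python
import numpy as np

def cpd_svm_newton(Phi, P, y, lam, n_iter=50):
    """Sketch of the primal Newton s.v.m. of Section 6.1 for a c.p.d. kernel.
    Phi[i, j] = phi(x_i, x_j); P[i, j] = p_j(x_i); y in {-1, +1}^n.
    Returns (alpha, beta) of f(x) = sum_i alpha_i phi(x_i, x) + sum_j beta_j p_j(x)."""
    n, q = P.shape
    alpha, beta = np.zeros(n), np.zeros(q)
    for _ in range(n_iter):
        f = Phi @ alpha + P @ beta
        s = np.flatnonzero(y * f < 1)                 # current margin violators
        ns = len(s)
        # eqn. (17) restricted to the violators; non-violators get alpha_i = 0
        A = np.zeros((ns + q, ns + q))
        A[:ns, :ns] = lam * np.eye(ns) + Phi[np.ix_(s, s)]
        A[:ns, ns:] = P[s]
        A[ns:, :ns] = P[s].T
        rhs = np.concatenate([y[s], np.zeros(q)])
        sol = np.linalg.lstsq(A, rhs, rcond=None)[0]  # robust to rank-deficient P
        new_alpha = np.zeros(n)
        new_alpha[s], new_beta = sol[:ns], sol[ns:]
        if np.allclose(new_alpha, alpha) and np.allclose(new_beta, beta):
            break
        alpha, beta = new_alpha, new_beta
    return alpha, beta
```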
6.2 Constraining $\beta = 0$
Previously, if the data can be separated with only the $\mathcal{P}$ part of the function space, i.e. with $\alpha = 0$, then the algorithm will always do so regardless of $\lambda$. This is correct in that, since $\mathcal{P}$ lies in the null space of the regulariser $\| P_{\perp(\mathcal{P})} \cdot \|^2_{\mathcal{F}_\phi(\mathcal{X})}$, such solutions minimise (14), but may be undesirable for various reasons. Firstly, the regularisation cannot be controlled via $\lambda$. Secondly, for the thin-plate, $\mathcal{P} = \pi_1(\mathbb{R}^d)$ and the solutions are simple linear separating hyperplanes. Finally, there may exist infinitely many solutions to (14). It is unclear how to deal with this problem; after all it implies that the regulariser is simply inappropriate for the problem at hand. Nonetheless we still wish to apply a (non-linear) algorithm with the previously discussed invariances of the thin-plate.
To achieve this, we minimise (14) as before, but over the space $\mathcal{F}_\phi(\mathcal{X})$ rather than $\mathcal{F}_\phi(\mathcal{X}) \oplus \mathcal{P}$. It is important to note that by doing so we can no longer invoke Theorem 4.6, the representer theorem for the c.p.d. case. This is because the solvability argument of Lemma 4.3 no longer holds. Hence we do not know the optimal basis for the function, which may involve infinitely many $\phi(\cdot, x)$ terms.
The way we deal with this is simple: instead of minimising over $\mathcal{F}_\phi(\mathcal{X})$ we consider only the finite dimensional subspace given by
$$\left\{ \sum_{j=1}^{n} \gamma_j \, \phi(\cdot, x_j) \; : \; \gamma \in \mathcal{P}^{\perp}(x_1, \ldots, x_n) \right\} ,$$
where $x_1, \ldots, x_n$ are those of the original problem (14). The required update equation can be acquired in a similar manner as before. The closed form solution to the constrained quadratic programme is in this case given by (see (Walder & Chapelle, 2007))
$$\gamma = \left( \lambda \bar{P}^\top \Phi \bar{P} + \bar{P}^\top \Phi_s^\top \Phi_s \bar{P} \right)^{-1} \bar{P}^\top \Phi_s^\top y_s \quad (18)$$
where $\Phi_s = [\Phi]_{s,:}$, $s$ is the current set of margin violators, and $\bar{P}$ is a basis for the null space of $P^\top$, satisfying $P^\top \bar{P} = 0$. The precise algorithm we use to optimise in this manner is given in the accompanying technical report (Walder & Chapelle, 2007), where we also detail efficient factorisation techniques.
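A corresponding sketch for this constrained variant, continuing the code above; it assumes $P$ has full column rank and obtains $\bar{P}$ from an SVD, which is one of several valid choices, not necessarily the one used in the technical report.

```python
def cpd_svm_beta0(Phi, P, y, lam, n_iter=50):
    """Sketch of the Section 6.2 variant: minimise (14) over span{phi(., x_j)}
    with alpha = Pbar @ gamma, Pbar an orthonormal basis of the null space of
    P^T, so that the P-part of the solution is suppressed.  Implements eqn. (18)."""
    n, q = P.shape
    U = np.linalg.svd(P, full_matrices=True)[0]
    Pbar = U[:, q:]                                   # P.T @ Pbar == 0 (P full rank)
    gamma = np.zeros(n - q)
    for _ in range(n_iter):
        f = Phi @ (Pbar @ gamma)
        s = np.flatnonzero(y * f < 1)                 # margin violators
        Phi_s = Phi[s]                                # rows of Phi at the violators
        A = lam * Pbar.T @ Phi @ Pbar + Pbar.T @ Phi_s.T @ Phi_s @ Pbar
        b = Pbar.T @ Phi_s.T @ y[s]
        new_gamma = np.linalg.solve(A, b)             # eqn. (18)
        if np.allclose(new_gamma, gamma):
            break
        gamma = new_gamma
    return Pbar @ gamma                               # the coefficient vector alpha
```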
7 Experiments and Discussion
We now investigate the behaviour of the algorithms which we have just discussed, namely the thin-plate based s.v.m. with 1) the optimisation over $\mathcal{F}_\phi(\mathcal{X}) \oplus \mathcal{P}$ as per Section 6.1, and 2) the optimisation over a subspace of $\mathcal{F}_\phi(\mathcal{X})$ as per Section 6.2. In particular, we use the second method if the data is linearly separable, otherwise we use the first. For a baseline we take the Gaussian kernel $k(x, y) = \exp(-\|x - y\|^2 / (2\sigma^2))$, and compare on real world classification problems.
Binary classification (UCI data sets). Table 1 provides numerical evidence supporting our claim that the thin-plate method is competitive with the Gaussian, in spite of its having one less hyperparameter. The data sets are standard ones from the UCI machine learning repository. The experiments are extensive: the experiments on binary problems alone include all of the data sets used in (Mika et al., 2003) plus two additional ones (twonorm and splice). To compute each error measure, we used five splits of the data and tested on each split after training on the remainder. For parameter selection, we performed five fold cross validation on the four-fifths of the data available for training each split, over an exhaustive search of the algorithm parameter(s) ($\lambda$ and $\sigma$ for the Gaussian and happily just $\lambda$ for the thin-plate). We then take the parameter(s) with lowest mean error and retrain on the entire four fifths. We ensured that the chosen parameters were well within the searched range by visually inspecting the cross validation error as a function of the parameters. Happily, for the thin-plate we needed to cross validate to choose only the regularisation parameter $\lambda$, whereas for the Gaussian we had to choose both $\lambda$ and the scale parameter $\sigma$. The discovery of an equally effective algorithm which has only one parameter is important, since the Gaussian is probably the most popular and effective kernel used with the s.v.m. (Hsu et al., 2003).
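The selection protocol just described can be summarised in a few lines; the sketch below is our own, assuming a generic `train_fn` that returns a decision function, and it uses scikit-learn's stratified folds, neither of which is prescribed by the paper. For the Gaussian an outer loop over the kernel width would be added.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def select_reg_param(X, y, lambdas, train_fn, n_folds=5):
    """Pick the regularisation parameter by n-fold cross validation.
    train_fn(X_tr, y_tr, lam) -> decision function; y in {-1, +1}."""
    mean_err = []
    for lam in lambdas:
        errs = []
        for tr, te in StratifiedKFold(n_splits=n_folds).split(X, y):
            f = train_fn(X[tr], y[tr], lam)
            errs.append(np.mean(np.sign(f(X[te])) != y[te]))
        mean_err.append(np.mean(errs))
    return lambdas[int(np.argmin(mean_err))]
```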
Multi-class classification (USPS data set). We also experimented with the 256 dimensional, ten
class USPS digit recognition problem. For each of the ten one vs. the rest models we used five fold
cross validation on the 7291 training examples to find the parameters, retrained on the full training
set, and labeled the 2007 test examples according to the binary classifier with maximum output. The
Gaussian misclassified 88 digits (4.38%), and the thin-plate 85 (4.25%). Hence the Gaussian did not
perform significantly better, in spite of the extra parameter.
Computational complexity. The normal computational complexity of the c.p.d. s.v.m. algorithm is the usual $O(n_{sv}^3)$, cubic in the number of margin violators. For the $\beta = 0$ variant (necessary only on linearly separable problems, presently only the USPS set) however, the cost is $O(n_b^2 n_{sv} + n_b^3)$, where $n_b$ is the number of basis functions in the expansion. For our USPS experiments we expanded on all $m$ training points, but if $n_{sv} \ll m$ this is inefficient and probably unnecessary. For example the final ten models (those with optimal parameters) of the USPS problem had around 5% margin violators, and so training each Gaussian s.v.m. took only about 40 s in comparison to about 17 minutes (with the use of various efficient factorisation techniques as detailed in the accompanying (Walder & Chapelle, 2007)) for the thin-plate. By expanding on only 1500 randomly chosen points however, the training time was reduced to about 4 minutes while incurring only 88 errors, the same as the Gaussian. Given that for the thin-plate cross validation needs to be performed over one less parameter, even in this most unfavourable scenario of $n_{sv} \ll m$, the overall times of the algorithms are comparable. Moreover, during cross validation one typically encounters larger numbers of violators for some suboptimal parameter configurations, in which cases the Gaussian and thin-plate training times are comparable.
8 Conclusion
We have proven that there exist no non-trivial radial p.d. kernels which are dilation invariant (or more accurately, dilation scaled), but rather only c.p.d. ones. Such kernels have the advantage that, to take the s.v.m. as an example, varying the absolute multiplicative scale (or length scale) of the data has the same effect as changing the regularisation parameter; hence one needs model selection to choose only one of these, in contrast to the widely used Gaussian kernel for example.
Motivated by this advantage we provide a new, efficient and stable algorithm for the s.v.m. with arbitrary c.p.d. kernels. Importantly, our experiments show that the performance of the algorithm nonetheless matches that of the Gaussian on real world problems.
The c.p.d. case has received relatively little attention in machine learning. Our results indicate that it is time to redress the balance. Accordingly we provided a compact introduction to the topic, including some novel analysis which includes a new, elementary and self contained derivation of one particularly important result for the machine learning community, the representer theorem.
References
Boughorbel, S., Tarel, J.-P., & Boujemaa, N. (2005). Conditionally positive definite kernels for svm based image recognition. Proc. of IEEE ICME'05. Amsterdam.
Chapelle, O. (2007). Training a support vector machine in the primal. Neural Computation, 19, 1155-1178.
Chapelle, O., & Schölkopf, B. (2001). Incorporating invariances in nonlinear support vector machines. In T. Dietterich, S. Becker and Z. Ghahramani (Eds.), Advances in neural information processing systems 14, 609-616. Cambridge, MA: MIT Press.
Fleuret, F., & Sahbi, H. (2003). Scale-invariance of support vector machines based on the triangular kernel. Proc. of ICCV SCTV Workshop.
Golub, G. H., & Van Loan, C. F. (1996). Matrix computations. Baltimore, MD: The Johns Hopkins University Press. 2nd edition.
Hsu, C.-W., Chang, C.-C., & Lin, C.-J. (2003). A practical guide to support vector classification (Technical Report). National Taiwan University.
Mika, S., Rätsch, G., Weston, J., Schölkopf, B., Smola, A., & Müller, K.-R. (2003). Constructing descriptive and discriminative non-linear features: Rayleigh coefficients in feature spaces. IEEE PAMI, 25, 623-628.
Schölkopf, B., & Smola, A. J. (2002). Learning with kernels: Support vector machines, regularization, optimization, and beyond. Cambridge, MA: MIT Press.
Smola, A., Schölkopf, B., & Müller, K.-R. (1998). The connection between regularization operators and support vector kernels. Neural Networks, 11, 637-649.
Wahba, G. (1990). Spline models for observational data. Philadelphia: Series in Applied Math., Vol. 59, SIAM.
Walder, C., & Chapelle, O. (2007). Learning with transformation invariant kernels (Technical Report 165). Max Planck Institute for Biological Cybernetics, Department of Empirical Inference, Tübingen, Germany.
Wendland, H. (2004). Scattered data approximation. Monographs on Applied and Computational Mathematics. Cambridge University Press.
Bayesian binning beats approximate alternatives:
estimating peristimulus time histograms
Dominik Endres, Mike Oram, Johannes Schindelin and Peter Földiák
School of Psychology
University of St. Andrews
KY16 9JP, UK
{dme2,mwo,js108,pf2}@st-andrews.ac.uk
Abstract
The peristimulus time histogram (PSTH) and its more continuous cousin, the
spike density function (SDF) are staples in the analytic toolkit of neurophysiologists. The former is usually obtained by binning spike trains, whereas the standard method for the latter is smoothing with a Gaussian kernel. Selection of a bin
width or a kernel size is often done in a relatively arbitrary fashion, even though
there have been recent attempts to remedy this situation [1, 2]. We develop an
exact Bayesian, generative model approach to estimating PSTHs and demonstrate
its superiority to competing methods. Further advantages of our scheme include
automatic complexity control and error bars on its predictions.
1 Introduction
Plotting a peristimulus time histogram (PSTH), or a spike density function (SDF), from spiketrains
evoked by and aligned to a stimulus onset is often one of the first steps in the analysis of neurophysiological data. It is an easy way of visualizing certain characteristics of the neural response, such
as instantaneous firing rates (or firing probabilities), latencies and response offsets. These measures
also implicitly represent a model of the neuron's response as a function of time and are important
parts of their functional description. Yet PSTHs are frequently constructed in an unsystematic manner, e.g. the choice of time bin size is driven by result expectations as much as by the data. Recently,
there have been more principled approaches to the problem of determining the appropriate temporal
resolution [1, 2].
We develop an exact Bayesian solution, apply it to real neural data and demonstrate its superiority
to competing methods. Note that we do in no way claim that a PSTH is a complete generative
description of spiking neurons. We are merely concerned with inferring that part of the generative
process which can be described by a PSTH in a Bayes-optimal way.
2 The model
Suppose we wanted to model a PSTH on $[t_{min}, t_{max}]$, which we discretize into $T$ contiguous intervals of duration $\Delta t = (t_{max} - t_{min})/T$ (see fig. 1, left). We select a discretization fine enough so that we will not observe more than one spike in a $\Delta t$ interval for any given spike train. This can be achieved easily by choosing a $\Delta t$ shorter than the absolute refractory period of the neuron under investigation. Spike train $i$ can then be represented by a binary vector $\vec{z}^i$ of dimensionality $T$. We model the PSTH by $M + 1$ contiguous, non-overlapping bins having inclusive upper boundaries $k_m$, within which the firing probability $P(\text{spike}|t \in (t_{min} + \Delta t(k_{m-1}+1), \, t_{min} + \Delta t(k_m+1)]) = f_m$ is constant. $M$ is the number of bin boundaries inside $[t_{min}, t_{max}]$. The probability of a spike train
[Figure 1 appears here: left, a binarised spike train $\vec{z}^i$ over $[t_{min}, t_{max}]$ and the piecewise constant firing probability $P(\text{spike}|t)$ with bin boundaries $k_0, \ldots, k_3 = T-1$; right, a diagram of the core iteration computing $\text{subE}_m[T-1]$ from the $\text{subE}_{m-1}[k]$ via getIEC.]
Figure 1: Left: Top: A spike train, recorded between times $t_{min}$ and $t_{max}$, is represented by a binary vector $\vec{z}^i$. Bottom: The time span between $t_{min}$ and $t_{max}$ is discretized into $T$ intervals of duration $\Delta t = (t_{max} - t_{min})/T$, such that interval $k$ lasts from $k \cdot \Delta t + t_{min}$ to $(k+1) \cdot \Delta t + t_{min}$. $\Delta t$ is chosen such that at most one spike is observed per $\Delta t$ interval for any given spike train. Then, we model the firing probabilities $P(\text{spike}|t)$ by $M + 1 = 4$ contiguous, non-overlapping bins ($M$ is the number of bin boundaries inside the time span $[t_{min}, t_{max}]$), having inclusive upper boundaries $k_m$ and $P(\text{spike}|t \in (t_{min} + \Delta t(k_{m-1}+1), \, t_{min} + \Delta t(k_m+1)]) = f_m$. Right: The core iteration. To compute the evidence contribution $\text{subE}_m[T-1]$ of a model with a bin boundary at $T-1$ and $m$ bin boundaries prior to $T-1$, we sum over all evidence contributions of models with a bin boundary at $k$ and $m-1$ bin boundaries prior to $k$, where $k \ge m-1$, because $m$ bin boundaries must occupy at least time intervals $0, \ldots, m-1$. This takes $O(T)$ operations. Repeat the procedure to obtain $\text{subE}_m[T-2], \ldots, \text{subE}_m[m]$. Since we expect $T \gg m$, computing all $\text{subE}_m[k]$ given $\text{subE}_{m-1}[k]$ requires $O(T^2)$ operations. For details, see text.
$\vec{z}^i$ of independent spikes/gaps is then
$$P(\vec{z}^i | \{f_m\}, \{k_m\}, M) = \prod_{m=0}^{M} f_m^{s(\vec{z}^i, m)} (1 - f_m)^{g(\vec{z}^i, m)} \quad (1)$$
where $s(\vec{z}^i, m)$ is the number of spikes and $g(\vec{z}^i, m)$ is the number of non-spikes, or gaps, in spiketrain $\vec{z}^i$ in bin $m$, i.e. between intervals $k_{m-1} + 1$ and $k_m$ (both inclusive). In other words, we model the spiketrains by an inhomogeneous Bernoulli process with piecewise constant probabilities. We also define $k_{-1} = -1$ and $k_M = T - 1$. Note that there is no binomial factor associated with the contribution of each bin, because we do not want to ignore the spike timing information within the bins, but rather, we try to build a simplified generative model of the spike train. Therefore, the probability of a (multi)set of spiketrains $\{\vec{z}^i\} = \{z^1, \ldots, z^N\}$, assuming independent generation, is
$$P(\{\vec{z}^i\} | \{f_m\}, \{k_m\}, M) = \prod_{i=1}^{N} \prod_{m=0}^{M} f_m^{s(\vec{z}^i, m)} (1 - f_m)^{g(\vec{z}^i, m)} = \prod_{m=0}^{M} f_m^{s(\{\vec{z}^i\}, m)} (1 - f_m)^{g(\{\vec{z}^i\}, m)} \quad (2)$$
where $s(\{\vec{z}^i\}, m) = \sum_{i=1}^{N} s(\vec{z}^i, m)$ and $g(\{\vec{z}^i\}, m) = \sum_{i=1}^{N} g(\vec{z}^i, m)$.
2.1 The priors
We will make a non-informative prior assumption for $p(\{f_m\}, \{k_m\})$, namely
$$p(\{f_m\}, \{k_m\} | M) = p(\{f_m\} | M) \, P(\{k_m\} | M), \quad (3)$$
i.e. we have no a priori preferences for the firing rates based on the bin boundary positions. Note that the prior of the $f_m$, being continuous model parameters, is a density. Given the form of eqn. (1) and the constraint $f_m \in [0, 1]$, it is natural to choose a conjugate prior
$$p(\{f_m\} | M) = \prod_{m=0}^{M} B(f_m; \alpha_m, \beta_m). \quad (4)$$
The Beta density is defined in the usual way [3]:
$$B(p; \alpha, \beta) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)} \, p^{\alpha - 1} (1 - p)^{\beta - 1}. \quad (5)$$
There are only finitely many configurations of the $k_m$. Assuming we have no preferences for any of them, the prior for the bin boundaries becomes
$$P(\{k_m\} | M) = \frac{1}{\binom{T-1}{M}}, \quad (6)$$
where the denominator is just the number of possibilities in which $M$ ordered bin boundaries can be distributed across $T - 1$ places (bin boundary $M$ always occupies position $T - 1$, see fig. 1, left; hence there are only $T - 1$ positions left).
3 Computing the evidence $P(\{\vec{z}^i\} | M)$
To calculate quantities of interest for a given $M$, e.g. predicted firing probabilities and their variances or expected bin boundary positions, we need to compute averages over the posterior
$$p(\{f_m\}, \{k_m\} | M, \{\vec{z}^i\}) = \frac{p(\{\vec{z}^i\}, \{f_m\}, \{k_m\} | M)}{P(\{\vec{z}^i\} | M)} \quad (7)$$
which requires the evaluation of the evidence, or marginal likelihood, of a model with $M$ bins:
$$P(\{\vec{z}^i\} | M) = \sum_{k_{M-1}=M-1}^{T-2} \; \sum_{k_{M-2}=M-2}^{k_{M-1}-1} \cdots \sum_{k_0=0}^{k_1-1} P(\{\vec{z}^i\} | \{k_m\}, M) \, P(\{k_m\} | M) \quad (8)$$
where the summation boundaries are chosen such that the bins are non-overlapping and contiguous, and
$$P(\{\vec{z}^i\} | \{k_m\}, M) = \int_0^1 df_0 \int_0^1 df_1 \cdots \int_0^1 df_M \; P(\{\vec{z}^i\} | \{f_m\}, \{k_m\}, M) \, p(\{f_m\} | M). \quad (9)$$
By virtue of eqn. (2) and eqn. (4), the integrals can be evaluated:
$$P(\{\vec{z}^i\} | \{k_m\}, M) = \prod_{m=0}^{M} \frac{\Gamma(s(\{\vec{z}^i\}, m) + \alpha_m) \, \Gamma(g(\{\vec{z}^i\}, m) + \beta_m)}{\Gamma(s(\{\vec{z}^i\}, m) + \alpha_m + g(\{\vec{z}^i\}, m) + \beta_m)} \prod_{m=0}^{M} \frac{\Gamma(\alpha_m + \beta_m)}{\Gamma(\alpha_m) \Gamma(\beta_m)}. \quad (10)$$
Computing the sums in eqn. (8) quickly is a little tricky. A naïve approach would suggest that a computational effort of $O(T^M)$ is required. However, because eqn. (10) is a product with one factor per bin, and because each factor depends only on spike/gap counts and prior parameters in that bin, the process can be expedited. We will use an approach very similar to that described in [4, 5] in the context of density estimation and in [6, 7] for Bayesian function approximation: define the function
$$\text{getIEC}(k_s, k_e, m) := \frac{\Gamma(s(\{\vec{z}^i\}, k_s, k_e) + \alpha_m) \, \Gamma(g(\{\vec{z}^i\}, k_s, k_e) + \beta_m)}{\Gamma(s(\{\vec{z}^i\}, k_s, k_e) + \alpha_m + g(\{\vec{z}^i\}, k_s, k_e) + \beta_m)} \quad (11)$$
where $s(\{\vec{z}^i\}, k_s, k_e)$ is the number of spikes and $g(\{\vec{z}^i\}, k_s, k_e)$ is the number of gaps in $\{\vec{z}^i\}$ between the start interval $k_s$ and the end interval $k_e$ (both included). Furthermore, collect all contributions to eqn. (8) that do not depend on the data (i.e. $\{\vec{z}^i\}$) and store them in the array $pr[M]$:
$$pr[M] := \frac{\prod_{m=0}^{M} \frac{\Gamma(\alpha_m + \beta_m)}{\Gamma(\alpha_m) \Gamma(\beta_m)}}{\binom{T-1}{M}}. \quad (12)$$
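In any actual implementation these quantities should be handled in log space, since the ratios of Gamma functions over- and underflow for realistic spike counts. A minimal sketch (function names are ours, not the paper's):

```python
import numpy as np
from scipy.special import gammaln

def log_getIEC(s, g, alpha, beta):
    """Log of eqn. (11): evidence contribution of a single bin with
    s spikes and g gaps under a Beta(alpha, beta) prior on its rate."""
    return gammaln(s + alpha) + gammaln(g + beta) - gammaln(s + alpha + g + beta)

def log_pr(M, T, alpha, beta):
    """Log of eqn. (12): the data-independent factor pr[M], for equal
    prior parameters in all M + 1 bins."""
    log_beta_norm = gammaln(alpha + beta) - gammaln(alpha) - gammaln(beta)
    log_binom = gammaln(T) - gammaln(M + 1) - gammaln(T - M)   # log C(T-1, M)
    return (M + 1) * log_beta_norm - log_binom
```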
Substituting eqn. (10) into eqn. (8) and using the definitions (11) and (12), we obtain
$$P(\{\vec{z}^i\} | M) \propto \sum_{k_{M-1}=M-1}^{T-2} \cdots \sum_{k_0=0}^{k_1-1} \prod_{m=1}^{M} \text{getIEC}(k_{m-1}+1, k_m, m) \, \text{getIEC}(0, k_0, 0) \quad (13)$$
with $k_M = T - 1$ and the constant of proportionality being $pr[M]$. Since the factors on the r.h.s. depend only on two consecutive bin boundaries each, it is possible to apply dynamic programming [8]: rewrite the r.h.s. by 'pushing' the sums as far to the right as possible:
$$P(\{\vec{z}^i\} | M) \propto \sum_{k_{M-1}=M-1}^{T-2} \text{getIEC}(k_{M-1}+1, T-1, M) \sum_{k_{M-2}=M-2}^{k_{M-1}-1} \text{getIEC}(k_{M-2}+1, k_{M-1}, M-1) \times \cdots \times \sum_{k_0=0}^{k_1-1} \text{getIEC}(k_0+1, k_1, 1) \, \text{getIEC}(0, k_0, 0). \quad (14)$$
Evaluating the sum over $k_0$ requires $O(T)$ operations (assuming that $T \gg M$, which is likely to be the case in real-world applications). As the summands depend also on $k_1$, we need to repeat this evaluation $O(T)$ times, i.e. summing out $k_0$ for all possible values of $k_1$ requires $O(T^2)$ operations. This procedure is then repeated for the remaining $M - 1$ sums, yielding a total computational effort of $O(M T^2)$. Thus, initialize the array $\text{subE}_0[k] := \text{getIEC}(0, k, 0)$, and iterate for all $m = 1, \ldots, M$:
$$\text{subE}_m[k] := \sum_{r=m-1}^{k-1} \text{getIEC}(r+1, k, m) \, \text{subE}_{m-1}[r], \quad (15)$$
A close look at eqn. (14) reveals that while we sum over $k_{M-1}$, we need $\text{subE}_{M-1}[k]$ for $k = M-1, \ldots, T-2$ to compute the evidence of a model with its latest boundary at $T-1$. We can, however, compute $\text{subE}_{M-1}[T-1]$ with little extra effort, which is, up to a factor $pr[M-1]$, equal to $P(\{\vec{z}^i\} | M-1)$, i.e. the evidence for a model with $M-1$ bin boundaries. Moreover, having computed $\text{subE}_m[k]$, we do not need $\text{subE}_{m-1}[k-1]$ anymore. Hence, the array $\text{subE}_{m-1}[k]$ can be reused to store $\text{subE}_m[k]$, if overwritten in reverse order. In pseudo-code ($E[m]$ contains the evidence of a model with $m$ bin boundaries inside $[t_{min}, t_{max}]$ after termination):
Table 1: Computing the evidences of models with up to $M$ bin boundaries
1. for $k := 0 \ldots T-1$: $\text{subE}[k] := \text{getIEC}(0, k, 0)$
2. $E[0] := \text{subE}[T-1] \cdot pr[0]$
3. for $m := 1 \ldots M$:
   (a) if $m = M$ then $l := T-1$ else $l := m$
   (b) for $k := T-1 \ldots l$: $\text{subE}[k] := \sum_{r=m-1}^{k-1} \text{subE}[r] \cdot \text{getIEC}(r+1, k, m)$
   (c) $E[m] := \text{subE}[T-1] \cdot pr[m]$
4. return $E[\,]$
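The following NumPy sketch implements Table 1 in log space, building on `log_getIEC` and `log_pr` above; it allocates a fresh array per iteration instead of the in-place reverse-order overwrite, which is equivalent but simpler to read. The optional `khat`/`extra` arguments anticipate the modified getIEC of eqn. (17) in section 4 below.

```python
import numpy as np
from scipy.special import logsumexp

def log_evidences(Z, M_max, alpha=1.0, beta=32.0, khat=None, extra=0):
    """Log-evidences log P({z^i}|M) for M = 0 .. M_max (Table 1, in log space).
    Z: binary (N, T) array, one row per spiketrain.  If khat is given,
    `extra` additional spikes are inserted at interval khat (eqn. (17))."""
    N, T = Z.shape
    cum = np.concatenate(([0], np.cumsum(Z.sum(axis=0))))

    def lg(ks, ke):                      # bin covering intervals ks..ke inclusive
        s = cum[ke + 1] - cum[ks]
        g = N * (ke + 1 - ks) - s
        if khat is not None and ks <= khat <= ke:
            s += extra                   # eqn. (17): indicator term added to s only
        return log_getIEC(s, g, alpha, beta)

    subE = np.array([lg(0, k) for k in range(T)])          # step 1
    logE = [subE[T - 1] + log_pr(0, T, alpha, beta)]       # step 2
    for m in range(1, M_max + 1):                          # step 3
        new = np.full(T, -np.inf)
        for k in range(m, T):
            terms = subE[m - 1:k] + np.array([lg(r + 1, k) for r in range(m - 1, k)])
            new[k] = logsumexp(terms)                      # eqn. (15)
        subE = new
        logE.append(subE[T - 1] + log_pr(m, T, alpha, beta))
    return np.array(logE)                                  # step 4: E[0..M_max]
```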
4 Predictive firing rates and variances
We will now calculate the predictive firing rate $P(\text{spike}|\hat{k}, \{\vec{z}^i\}, M)$. For a given configuration of $\{f_m\}$ and $\{k_m\}$, we can write
$$P(\text{spike}|\hat{k}, \{f_m\}, \{k_m\}, M) = \sum_{m=0}^{M} f_m \, \mathbb{1}(\hat{k} \in \{k_{m-1}+1, \ldots, k_m\}) \quad (16)$$
where the indicator function $\mathbb{1}(x) = 1$ iff $x$ is true and 0 otherwise. Note that the probability of a spike given $\{k_m\}$ and $\{f_m\}$ does not depend on any observed data. Since the bins are non-overlapping, $\hat{k} \in \{k_{m-1}+1, \ldots, k_m\}$ is true for exactly one summand, and $P(\text{spike}|\hat{k}, \{\vec{z}^i\}, \{k_m\})$ evaluates to the corresponding firing rate.
To finish we average eqn. (16) over the posterior eqn. (7). The denominator of eqn. (7) is independent of $\{f_m\}, \{k_m\}$ and is obtained by integrating/summing the numerator via the algorithm in table 1. Thus, we only need to multiply the integrand of eqn. (9) (i.e. the numerator of the posterior) with $P(\text{spike}|\hat{k}, \{f_m\}, \{k_m\}, M)$, thereby replacing eqn. (11) with
$$\text{getIEC}(k_s, k_e, m) := \frac{\Gamma(s(\{\vec{z}^i\}, k_s, k_e) + \mathbb{1}(\hat{k} \in \{k_s, \ldots, k_e\}) + \alpha_m) \, \Gamma(g(\{\vec{z}^i\}, k_s, k_e) + \beta_m)}{\Gamma(s(\{\vec{z}^i\}, k_s, k_e) + \mathbb{1}(\hat{k} \in \{k_s, \ldots, k_e\}) + \alpha_m + g(\{\vec{z}^i\}, k_s, k_e) + \beta_m)} \quad (17)$$
i.e. we are adding an additional spike to the data at $\hat{k}$. Call the array returned by this modified algorithm $E_{\hat{k}}[\,]$. By virtue of eqn. (7) we then find $P(\text{spike}|\hat{k}, \{\vec{z}^i\}, M) = \frac{E_{\hat{k}}[M]}{E[M]}$. To evaluate the variance, we need the posterior expectation of $f_{\hat{m}}^2$. This can be computed by adding two spikes at $\hat{k}$.
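With the `khat`/`extra` hook in the sketch above, the predictive mean and standard deviation follow directly; this is our own wrapper, not code from the paper:

```python
def predictive_rate_and_sd(Z, M, khat, alpha=1.0, beta=32.0):
    """Posterior mean and standard deviation of the firing probability at
    interval khat, for a model with M bin boundaries (section 4)."""
    lE  = log_evidences(Z, M, alpha, beta)[M]                       # E[M]
    lE1 = log_evidences(Z, M, alpha, beta, khat=khat, extra=1)[M]   # one added spike
    lE2 = log_evidences(Z, M, alpha, beta, khat=khat, extra=2)[M]   # two added spikes
    mean = np.exp(lE1 - lE)                                         # E_khat[M] / E[M]
    var = np.exp(lE2 - lE) - mean ** 2                              # E[f^2] - E[f]^2
    return mean, np.sqrt(max(var, 0.0))
```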
5 Model selection vs. model averaging
To choose the best $M$ given $\{\vec{z}^i\}$, or better, a probable range of $M$s, we need to determine the model posterior
$$P(M | \{\vec{z}^i\}) = \frac{P(\{\vec{z}^i\} | M) \, P(M)}{\sum_m P(\{\vec{z}^i\} | m) \, P(m)} \quad (18)$$
where $P(M)$ is the prior over $M$, which we assume to be uniform. The sum in the denominator runs over all values of $m$ which we choose to include, at most $0 \le m \le T-1$.
Once $P(M | \{\vec{z}^i\})$ is evaluated, we could use it to select the most probable $M'$. However, making this decision means 'contriving' information, namely that all of the posterior probability is concentrated at $M'$. Thus we should rather average any predictions over all possible $M$, even if evaluating such an average has a computational cost of $O(T^3)$, since $M \le T-1$. If the structure of the data allows, it is possible, and useful given a large enough $T$, to reduce this cost by finding a range of $M$ such that the risk of excluding a model even though it provides a good description of the data is low. In analogy to the significance levels of orthodox statistics, we shall call this risk $\kappa$. If the posterior of $M$ is unimodal (which it has been in most observed cases, see fig. 3, right, for an example), we can then choose the smallest interval of $M$s around the maximum of $P(M | \{\vec{z}^i\})$ such that
$$P(M_{min} \le M \le M_{max} | \{\vec{z}^i\}) \ge 1 - \kappa \quad (19)$$
and carry out the averages over this range of $M$ after renormalizing the model posterior.
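A sketch of the corresponding model-averaging utilities, again under our own naming; the risk level is exposed as a parameter (here called `kappa`) and the interval search assumes, as the text does, a unimodal posterior:

```python
from scipy.special import logsumexp

def model_posterior(logE):
    """Eqn. (18) with a uniform prior P(M)."""
    return np.exp(logE - logsumexp(logE))

def credible_M_range(post, kappa=0.1):
    """Smallest interval around the posterior mode with mass >= 1 - kappa
    (eqn. (19)); assumes the posterior over M is unimodal."""
    lo = hi = int(np.argmax(post))
    while post[lo:hi + 1].sum() < 1.0 - kappa:
        left = post[lo - 1] if lo > 0 else -1.0
        right = post[hi + 1] if hi + 1 < len(post) else -1.0
        if left >= right:
            lo -= 1                      # grow towards the heavier neighbour
        else:
            hi += 1
    return lo, hi
```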
6 Examples and comparison to other methods
6.1 Data acquisition
We obtained data through [9], where the experimental protocols have been described. Briefly, extracellular single-unit recordings were made using standard techniques from the upper and lower banks
of the anterior part of the superior temporal sulcus (STSa) and the inferior temporal cortex (IT) of
two monkeys (Macaca mulatta) performing a visual fixation task. Stimuli were presented for 333
ms followed by a 333 ms inter-stimulus interval in random order. The anterior-posterior extent of
the recorded cells was from 7mm to 9mm anterior of the interaural plane consistent with previous
studies showing visual responses to static images in this region [10, 11, 12, 13]. The recorded cells
were located in the upper bank (TAa, TPO), lower bank (TEa, TEm) and fundus (PGa, IPa) of STS
and in the anterior areas of TE (AIT of [14]). These areas are rostral to FST and we collectively
call them the anterior STS (STSa), see [15] for further discussion. The recorded firing patters were
turned into distinct samples, each of which contained the spikes from 300 ms before to 600 ms
after the stimulus onset with a temporal resolution of 1 ms.
6.2 Inferring PSTHs
To see the method in action, we used it to infer a PSTH from 32 spiketrains recorded from one of the available STSa neurons (see fig. 2, A). Spike times are relative to the stimulus onset. We discretized the interval from 100 ms pre-stimulus to 500 ms post-stimulus into $\Delta t = 1$ ms time intervals and
[Figure 2 appears here: panel A, a raster of the 32 spiketrains (y-axis: spiketrain number); panels B-D, the estimated firing probability P(spike) for the three methods; x-axis: time, ms after stimulus onset, from -100 to 600.]
Figure 2: Predicting a PSTH/SDF with 3 different methods. A: the dataset used in this comparison consisted of 32 spiketrains recorded from a STSa neuron. Each tick mark represents a spike. B: PSTH inferred with our Bayesian binning method. The thick line represents the predictive firing rate (section 4), the thin lines show the predictive firing rate ±1 standard deviation. Models with $4 \le M \le 13$ were included on a risk level of $\kappa = 0.1$ (see eqn. (19)). C: bar PSTH (solid lines), optimal binsize ≈ 26 ms, and line PSTH (dashed lines), optimal binsize ≈ 78 ms, computed by the methods described in [1, 2]. D: SDF obtained by smoothing the spike trains with a 10 ms Gaussian kernel.
computed the model posterior (eqn. (18)) (see fig. 3, right). The prior parameters were equal for all bins and set to $\alpha_m = 1$ and $\beta_m = 32$. This choice corresponds to a firing probability of ≈ 0.03 in each 1 ms time interval (30 spikes/s), which is typical for the neurons in this study¹. Models with $4 \le M \le 13$ (expected bin sizes between ≈ 23 ms and 148 ms) were included on a $\kappa = 0.1$ risk level (eqn. (19)) in the subsequent calculation of the predictive firing rate (i.e. the expected firing rate, hence the continuous appearance) and standard deviation (fig. 2, B). Fig. 2, C, shows a bar PSTH and a line PSTH computed with the recently developed methods described in [1, 2].
¹ Alternatively, one could search for the $\alpha_m, \beta_m$ which maximize $P(\{\vec{z}^i\} | \alpha_m, \beta_m) = \sum_M P(\{\vec{z}^i\} | M) \, P(M | \alpha_m, \beta_m)$, where $P(\{\vec{z}^i\} | M)$ is given by eqn. (8). Using a uniform $P(M | \alpha_m, \beta_m)$, we found $\alpha_m \approx 2.3$ and $\beta_m \approx 37$ for the data in fig. 2, A.
Roughly speaking,
these methods try to optimize a compromise between minimal within-bin variance and maximal
between-bin variance. In this example, the bar PSTH consists of 26 bins. Graph D in fig.2 depicts a
SDF obtained by smoothing the spiketrains with a 10ms wide Gaussian kernel, which is a standard
way of calculating SDFs in the neurophysiological literature.
All tested methods produce results which are, upon cursory visual inspection, largely consistent
with the spiketrains. However, Bayesian binning is better suited than Gaussian smoothing to model
steep changes, such as the transient response starting at ≈ 100 ms. While the methods from [1, 2]
share this advantage, they suffer from two drawbacks: firstly, the bin boundaries are evenly spaced,
hence the peak of the transient is later than the scatterplots would suggest. Secondly, because the
bin duration is the only parameter of the model, these methods are forced to put many bins even
in intervals that are relatively constant, such as the baselines before and after the stimulus-driven
response. In contrast, Bayesian binning, being able to put bin boundaries anywhere in the time span
of interest, can model the data with less bins ? the model posterior has its maximum at M = 6 (7
bins), whereas the bar PSTH consists of 26 bins.
6.3 Performance comparison
[Figure 3 appears here: left, histograms of relative frequency vs. CV error relative to Bayesian Binning (0 to 0.015) for the 10 ms Gaussian, bar PSTH and line PSTH; right, the model posterior $P(M|\{\vec{z}^i\})$ as a function of $M$ (0 to 30).]
Figure 3: Left: Comparison of Bayesian Binning with competing methods by 5-fold crossvalidation. The CV error is the negative expected log-probability of the test data. The histograms show relative frequencies of CV error differences between 3 competing methods and our Bayesian binning approach. Gaussian: SDFs obtained by Gaussian smoothing of the spiketrains with a 10 ms kernel. Bar PSTH and line PSTH: PSTHs computed by the binning methods described in [1, 2]. Right: Model posterior $P(M | \{\vec{z}^i\})$ (see eqn. (18)) computed from the data shown in fig. 2. The shape is fairly typical for model posteriors computed from the neural data used in this paper: a sharp rise at a moderately low $M$ followed by a maximum (here at $M = 6$) and an approximately exponential decay. Even though a maximum $M$ of 699 would have been possible, $P(M > 23 | \{\vec{z}^i\}) < 0.001$. Thus, we can accelerate the averaging process for quantities of interest (e.g. the predictive firing rate, section 4) by choosing a moderately small maximum $M$.
For a more rigorous method comparison, we split the data into distinct sets, each of which contained the responses of a cell to a different stimulus. This procedure yielded 336 sets from 20 cells with at least 20 spiketrains per set. We then performed 5-fold crossvalidation; the crossvalidation error is given by the negative logarithm of the probability of the data (spike or gap) in the test sets:
$$\text{CV error} = -\langle \log P(\text{spike}|t) \rangle. \quad (20)$$
Thus, we measure how well the PSTHs predict the test data. The Gaussian SDFs were discretized into 1 ms time intervals prior to the procedure. We average the CV error over the 5 estimates to obtain a single estimate for each of the 336 neuron/stimulus combinations. On average, the negative log likelihood of our Bayesian approach predicting the test data (0.04556 ± 0.00029, mean ± SEM) was significantly better than any of the other methods (10 ms Gaussian kernel: 0.04654 ± 0.00028; Bar PSTH: 0.04739 ± 0.00029; Line PSTH: 0.04658 ± 0.00029). To directly compare the performance of different methods we calculate the difference in the CV error for each neuron/stimulus combination. Here a positive value indicates that Bayesian binning predicts the test data more accurately than the alternative method. Fig. 3, left, shows the relative frequencies of CV error differences between the 3 other methods and our approach. Bayesian binning predicted the data better than the three other
methods in at least 295/336 cases, with a minimal difference of ≈ −0.0008, indicating the general utility of this approach.
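For reference, eqn. (20) amounts to the mean negative Bernoulli log-likelihood of the held-out spiketrains; a minimal sketch, with array layouts of our choosing:

```python
def cv_error(p_spike, Z_test):
    """Eqn. (20): negative mean log-probability of the held-out data.
    p_spike: length-T array of predicted firing probabilities;
    Z_test: binary (N, T) array of test spiketrains."""
    p = np.clip(p_spike, 1e-12, 1.0 - 1e-12)       # guard the logarithm
    ll = Z_test * np.log(p) + (1 - Z_test) * np.log(1 - p)
    return -ll.mean()
```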
7 Summary
We have introduced an exact Bayesian binning method for the estimation of PSTHs. Besides treating uncertainty, a real problem with small neurophysiological datasets, in a principled fashion, it also outperforms competing methods on real neural data. It offers automatic complexity control because the model posterior can be evaluated. While its computational cost is significantly higher than that of the methods we compared it to, it is still fast enough to be useful: evaluating the predictive probability takes less than 1 s on a modern PC², with a small memory footprint (< 10 MB for 512 spiketrains).
Moreover, our approach can easily be adapted to extract other characteristics of neural responses in a Bayesian way, e.g. response latencies or expected bin boundary positions. Our method reveals a clear and sharp initial response onset, a distinct transition from the transient to the sustained part of the response and a well-defined offset. An extension towards joint PSTHs from simultaneous multi-cell recordings is currently being implemented.
References
[1] H. Shimazaki and S. Shinomoto. A recipe for optimizing a time-histogram. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 1289-1296. MIT Press, Cambridge, MA, 2007.
[2] H. Shimazaki and S. Shinomoto. A method for selecting the bin size of a time histogram. Neural Computation, 19(6):1503-1527, 2007.
[3] J.O. Berger. Statistical Decision Theory and Bayesian Analysis. Springer, New York, 1985.
[4] D. Endres and P. Földiák. Bayesian bin distribution inference and mutual information. IEEE Transactions on Information Theory, 51(11), 2005.
[5] D. Endres. Bayesian and Information-Theoretic Tools for Neuroscience. PhD thesis, School of Psychology, University of St. Andrews, U.K., 2006. http://hdl.handle.net/10023/162.
[6] M. Hutter. Bayesian regression of piecewise constant functions. Technical Report arXiv:math/0606315v1, IDSIA-14-05, 2006.
[7] M. Hutter. Exact bayesian regression of piecewise constant functions. Journal of Bayesian Analysis, 2(4):635-664, 2007.
[8] D. P. Bertsekas. Dynamic Programming and Optimal Control. Athena Scientific, 2000.
[9] M. W. Oram, D. Xiao, B. Dritschel, and K.R. Payne. The temporal precision of neural signals: A unique role for response latency? Philosophical Transactions of the Royal Society, Series B, 357:987-1001, 2002.
[10] C.J. Bruce, R. Desimone, and C.G. Gross. Visual properties of neurons in a polysensory area in superior temporal sulcus of the macaque. Journal of Neurophysiology, 46:369-384, 1981.
[11] D.I. Perrett, E.T. Rolls, and W. Caan. Visual neurons responsive to faces in the monkey temporal cortex. Expl. Brain Res., 47:329-342, 1982.
[12] G.C. Baylis, E.T. Rolls, and C.M. Leonard. Functional subdivisions of the temporal lobe neocortex. 1987.
[13] M. W. Oram and D. I. Perrett. Time course of neural responses discriminating different views of the face and head. Journal of Neurophysiology, 68(1):70-84, 1992.
[14] K. Tanaka, H. Saito, Y. Fukada, and M. Moriya. Coding visual images of objects in the inferotemporal cortex of the macaque monkey. Journal of Neurophysiology, pages 170-189, 1991.
[15] N.E. Barraclough, D. Xiao, C.I. Baker, M.W. Oram, and D.I. Perrett. Integration of visual and auditory information by superior temporal sulcus neurons responsive to the sight of actions. Journal of Cognitive Neuroscience, 17, 2005.
² 3.2 GHz Intel Xeon™, SuSE Linux 10.0
Learning Visual Attributes
Vittorio Ferrari*
University of Oxford (UK)
Andrew Zisserman
University of Oxford (UK)
Abstract
We present a probabilistic generative model of visual attributes, together with an efficient
learning algorithm. Attributes are visual qualities of objects, such as 'red', 'striped', or
'spotted'. The model sees attributes as patterns of image segments, repeatedly sharing some
characteristic properties. These can be any combination of appearance, shape, or the layout
of segments within the pattern. Moreover, attributes with general appearance are taken
into account, such as the pattern of alternation of any two colors which is characteristic
for stripes. To enable learning from unsegmented training images, the model is learnt
discriminatively, by optimizing a likelihood ratio.
As demonstrated in the experimental evaluation, our model can learn in a weakly supervised
setting and encompasses a broad range of attributes. We show that attributes can be learnt
starting from a text query to Google image search, and can then be used to recognize the
attribute and determine its spatial extent in novel real-world images.
1 Introduction
In recent years, the recognition of object categories has become a major focus of computer vision and
has shown substantial progress, partly thanks to the adoption of techniques from machine learning
and the development of better probabilistic representations [1, 3]. The goal has been to recognize
object categories, such as a 'car', 'cow' or 'shirt'. However, an object also has many other qualities
apart from its category. A car can be red, a shirt striped, a ball round, and a building tall. These visual
attributes are important for understanding object appearance and for describing objects to other
people. Figure 1 shows examples of such attributes. Automatic learning and recognition of attributes
can complement category-level recognition and therefore improve the degree to which machines
perceive visual objects. Attributes also open the door to appealing applications, such as more specific
queries in image search engines (e.g. a spotted skirt, rather than just any skirt). Moreover, as
different object categories often have attributes in common, modeling them explicitly allows part
of the learning task to be shared amongst categories, or allows previously learnt knowledge about
an attribute to be transferred to a novel category. This may reduce the total number of training
images needed and improve robustness. For example, learning the variability of zebra stripes under
non-rigid deformations tells us a lot about the corresponding variability in striped shirts.
In this paper we propose a probabilistic generative model of visual attributes, and a procedure for
learning its parameters from real-world images. When presented with a novel image, our method infers whether it contains the learnt attribute and determines the region it covers. The proposed model
encompasses a broad range of attributes, from simple colors such as 'red' or 'green' to complex patterns such as 'striped' or 'checked'. Both the appearance and the shape of pattern elements (e.g. a
single stripe) are explicitly modeled, along with their layout within the overall pattern (e.g. adjacent
stripes are parallel). This enables our model to cover attributes defined by appearance ('red'), by
shape ('round'), or by both (the black-and-white stripes of zebras). Furthermore, the model takes
into account attributes with general appearance, such as stripes which are characterized by a pattern
of alternation ABAB of any two colors A and B, rather than by a specific combination of colors.
Since appearance, shape, and layout are modeled explicitly, the learning algorithm gains an understanding of the nature of the attribute. As another attractive feature, our method can learn in a
weakly supervised setting, given images labeled only by the presence or absence of the attribute,
* This research was supported by the EU project CLASS. The authors thank Dr. Josef Sivic for fruitful discussions and helpful comments on this paper.
[Figure 1 appears here: example images of unary attributes (red, round) and binary attributes (black/white stripes, generic stripes).]
Figure 1: Examples of different kinds of attributes. On the left we show two simple attributes, whose characteristic properties are captured by individual image segments (appearance for red, shape for round). On the
right we show more complex attributes, whose basic element is a pair of segments.
without indication of the image region it covers. The presence/absence labels can be noisy, as the
training method can tolerate a considerable number of mislabeled images. This enables attributes to
be learnt directly from a text specification by collecting training images using a web image search
engine, such as Google-images, and querying on the name of the attribute.
Our approach is inspired by the ideas of Jojic and Caspi [4], where patterns have constant appearance
within an image, but are free to change to another appearance in other images. We also follow the
generative approach to learning a model from a set of images used by many authors, for example
LOCUS [10]. Our parameter learning is discriminative; the benefits of this have been shown
before, for example for training the constellation model of [3]. In terms of functionality, the closest
works to ours are those on the analysis of regular textures [5, 6]. However, they work with textures
covering the entire image and focus on finding distinctive appearance descriptors. In contrast, here
textures are attributes of objects, and therefore appear in complex images containing many other
elements. Very few previous works appeared in this setting [7, 11]. The approach of [7] focuses
on colors only, while in [11] attributes are limited to individual regions. Our method encompasses
also patterns defined by pairs of regions, allowing to capture more complex attributes. Moreover,
we take up the additional challenge of learning the pattern geometry.
Before describing the generative model in section 3, in the next section we briefly introduce image
segments, the elementary units of measurements observed in the model.
2 Image segments - basic visual representation
The basic units in our attribute model are image segments extracted using the algorithm of [2]. Each
segment has a uniform appearance, which can be either a color or a simple texture (e.g. sand, grain).
Figure 2a shows a few segments from a typical image.
Inspired by the success of simple patches as a basis for appearance descriptors [8, 9], we randomly
sample a large number of 5 × 5 pixel patches from all training images and cluster them using k-means [8]. The resulting cluster centers form a codebook of patch types. Every pixel is soft-assigned
to the patch types. A segment is then represented as a normalized histogram over the patch types
of the pixels it contains. By clustering the segment histograms from the training images we obtain
a codebook A of appearances (figure 2b). Each entry in the codebook is a prototype segment
descriptor, representing the appearance of a subset of the segments from the training set.
Each segment s is then assigned the appearance $a \in A$ with the smallest Bhattacharyya distance to the
histogram of s. In addition to appearance, various geometric properties of a segment are measured,
summarizing its shape. In our current implementation, these are: curvedness, compactness, elongation (figure 2c), fractal dimension and area relative to the image. We also compute two properties of
pairs of segments: relative orientation and relative area (figure 2d).
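To make this pipeline concrete, here is a minimal Python sketch of the segment-appearance construction; the function names, array shapes, soft-assignment temperature, and the use of scikit-learn's KMeans are our own assumptions, not the authors' implementation.

```python
# A minimal sketch (not the authors' code) of the segment-appearance pipeline:
# patch codebook -> per-segment histograms -> appearance codebook -> assignment.
import numpy as np
from sklearn.cluster import KMeans

def build_patch_codebook(patches, n_types=200):
    # patches: (N, 75) array of flattened 5x5x3 pixel patches from training images
    return KMeans(n_clusters=n_types, n_init=4).fit(patches).cluster_centers_

def segment_histogram(seg_patches, patch_types, temp=0.1):
    # Soft-assign every pixel-centred patch of a segment to the patch types,
    # then average into a normalized histogram over patch types.
    d = ((seg_patches[:, None, :] - patch_types[None, :, :]) ** 2).sum(-1)
    soft = np.exp(-d / temp)
    soft /= soft.sum(axis=1, keepdims=True)
    h = soft.mean(axis=0)
    return h / h.sum()

def build_appearance_codebook(seg_hists, n_appearances=32):
    # Cluster segment histograms; each center is a prototype appearance.
    return KMeans(n_clusters=n_appearances, n_init=4).fit(seg_hists).cluster_centers_

def assign_appearance(h, appearances):
    # Smallest Bhattacharyya distance = largest Bhattacharyya coefficient.
    bc = np.sqrt(h[None, :] * appearances).sum(axis=1)
    return int(np.argmax(bc))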
Figure 2: Image segments as visual features. a) An image with a few segments overlaid, including two pairs
of adjacent segments on a striped region. b) Each row is an entry from the appearance codebook A (i.e.
one appearance; only 4 out of 32 are shown). The three most frequent patch types for each appearance are
displayed. Two segments from the stripes are assigned to the white and black appearance respectively (arrows).
c) Geometric properties of a segment: curvedness, which is the ratio between the number of contour points C
with curvature above a threshold and the total perimeter P ; compactness; and elongation, which is the ratio
between the minor and major moments of inertia. d) Relative geometric properties of a pair of segments:
relative area and relative orientation. Notice how these measures are not symmetric (e.g. relative area is the
area of the first segment with respect to the second).
3 Generative models for visual attributes
Figure 1 shows various kinds of attributes. Simple attributes are entirely characterized by properties
of a single segment (unary attributes). Some unary attributes are defined by their appearance, such
as colors (e.g. red, green) and basic textures (e.g. sand, grainy). Other unary attributes are defined by
a segment shape (e.g. round). All red segments have similar appearance, regardless of shape, while
all round segments have similar shape, regardless of appearance. More complex attributes have a
basic element composed of two segments (binary attributes). One example is the black/white stripes
of a zebra, which are composed of pairs of segments sharing similar appearance and shape across
all images. Moreover, the layout of the two segments is characteristic as well: they are adjacent,
nearly parallel, and have comparable area. Going yet further, a general stripe pattern can have any
appearance (e.g. blue/white stripes, red/yellow stripes). However, the pairs of segments forming
a stripe pattern in one particular image must have the same appearance. Hence, a characteristic of
general stripes is a pattern of alternation ABABAB. In this case, appearance is common within an
image, but not across images.
The attribute models we present in this section encompass all aspects discussed above. Essentially,
attributes are found as patterns of repeated segments, or pairs of segments, sharing some properties
(geometric and/or appearance and/or layout).
3.1 Image likelihood.
We start by describing how the model M explains a whole image I. An image I is represented by a
set of segments {s}. A latent variable f is associated with each segment, taking the value f = 1 for
a foreground segment, and f = 0 for a background segment. Foreground segments are those on the
image area covered by the attribute. We collect f for all segments of I into the vector F. An image
has a foreground appearance a, shared by all the foreground segments it contains. The likelihood of
an image is
$$p(I|M; F, a) = \prod_{x \in I} p(x|M; F, a) \qquad (1)$$
where x is a pixel, and M are the model parameters. These include $\lambda \subseteq A$, the set of appearances allowed by the model, from which a is taken. The other parameters are used to explain segments and are discussed below. The probability of pixels is uniform within a segment, and independent across segments:
$$p(x|M; F, a) = p(s^x|M; f, a) \qquad (2)$$
with $s^x$ the segment containing x. Hence, the image likelihood can be expressed as a product over the probability of each segment s, counted by its area $N_s$ (i.e. the number of pixels it contains)
$$p(I|M; F, a) = \prod_{x \in I} p(s^x|M; f, a) = \prod_{s \in I} p(s|M; f, a)^{N_s} \qquad (3)$$
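A small sketch of equation (3) in log form follows; the per-segment probabilities are assumed precomputed from the f = 1 and f = 0 cases discussed below, and the names are hypothetical.

```python
import numpy as np

def image_log_likelihood(p_fg, p_bg, F, areas):
    # Equation (3) in log form: sum_s N_s * log p(s|M; f, a), where each
    # segment uses its f=1 probability (p_fg) or the background value (p_bg).
    # p_fg, p_bg, F, areas are aligned 1-D arrays over the segments of I.
    p = np.where(F == 1, p_fg, p_bg)
    return float(np.dot(areas, np.log(p)))
```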
Figure 3: a) Graphical model for unary attributes. D is the number of images in the dataset, Si is the number of segments in image i, and G is the total number of geometric properties considered (both active and inactive). b) Graphical model for binary attributes. c is a pair of segments. $\Theta_{1,2}$ are the geometric distributions for each segment in a pair. $\Phi$ are relative geometric distributions (i.e. they measure properties between the two segments in a pair, such as relative orientation), and there are R of them in total (active and inactive). $\gamma$ is the adjacency model parameter. It tells whether only adjacent pairs of segments are considered (so $p(c|\gamma = 1)$ is one only iff c is a pair of adjacent segments).
Note that F and a are latent variables associated with a particular image, so there is a different F
and a for each image. In contrast, a single model M is used to explain all images.
3.2 Unary attributes
Segments are the only observed variables in the unary model. A segment $s = (s_a, \{s_g^j\})$ is defined by its appearance $s_a$ and shape, captured by a set of geometric measurements $\{s_g^j\}$, such as elongation and curvedness. The graphical model in figure 3a illustrates the conditional probability of image segments
$$p(s|M; f, a) = \begin{cases} p(s_a|a) \cdot \prod_j p(s_g^j|\theta^j)^{v^j} & \text{if } f = 1 \\ \beta & \text{if } f = 0 \end{cases} \qquad (4)$$
The likelihood for a segment depends on the model parameters $M = (\lambda, \beta, \{\Theta^j\})$, which specify a visual attribute. For each geometric property $\Theta^j = (\theta^j, v^j)$, the model defines its distribution $\theta^j$ over the foreground segments and whether the property is active or not ($v^j = 1$ or $0$). Active properties are relevant for the attribute (e.g. elongation is relevant for stripes, while orientation is not) and contribute substantially to its likelihood in (4). Inactive properties instead have no impact on the likelihood (exponentiation by 0). It is the task of the learning stage to determine which properties are active and their foreground distribution.
The factor $p(s_a|a) = [s_a = a]$ is 1 for segments having the foreground appearance a for this image, and 0 otherwise (thus it acts as a selector). The scalar value $\beta$ represents a simple background model: all segments assigned to the background have likelihood $\beta$. During inference and learning we want to maximize the likelihood of an image given the model over F, which is achieved by setting f to foreground when the f = 1 case of equation (4) is greater than $\beta$.
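A minimal sketch of this inference step, assuming the geometric densities $\theta^j$ are supplied as callables; the data layout and names are our own.

```python
def unary_foreground(segments, a, theta, active, beta):
    # One pass of the inference rule: f = 1 iff the f=1 case of equation (4)
    # exceeds beta. segments: list of (appearance_id, {property: value});
    # theta[j] is a density callable for property j; active holds the j with v^j = 1.
    F = []
    for s_a, s_g in segments:
        fg = 1.0 if s_a == a else 0.0
        for j in active:
            fg *= theta[j](s_g[j])
        F.append(1 if fg > beta else 0)
    return F
```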
As an example, we give the ideal model parameters for the attribute 'red'. $\lambda$ contains the red appearance only. $\beta$ is some low value, corresponding to how likely it is for non-red segments to be assigned the red appearance. No geometric property $\{\Theta^j\}$ is active (i.e. all $v^j = 0$).
3.3 Binary attributes
The basic element of binary attributes is a pair of segments. In this section we extend the unary
model to describe pairs of segments. In addition to duplicating the unary appearance and geometric properties, the extended model includes pairwise properties which do not apply to individual
segments. In the graphical model of figure 3b, these are relative geometric properties $\Phi$ (area, orientation) and adjacency $\gamma$, which together specify the layout of the attribute. For example, the orientation
of a segment with respect to the other can capture the parallelism of subsequent stripe segments.
Adjacency expresses whether the two segments in the pair are adjacent (like in stripes) or not (like
the maple leaf and the stripes in the canadian flag). We consider two segments adjacent if they share
part of the boundary. A pattern characterized by adjacent segments is more distinctive, as it is less
likely to occur accidentally in a negative image.
Segment likelihood. An image is represented by a set of segments {s}, and the set of all possible
pairs of segments {c}. The image likelihood p(I|M; F, a) remains as defined in equation (3), but
now $a = (a_1, a_2)$ specifies two foreground appearances, one for each segment in the pair. The
likelihood of a segment s is now defined as the maximum over all pairs containing it
$$p(s|M; f, a) = \begin{cases} \max_{\{c\,|\,s \in c\}} p(c|M, a) & \text{if } f = 1 \\ \beta & \text{if } f = 0 \end{cases} \qquad (5)$$
Pair likelihood. The observed variables in our model are segments s and pairs of segments c. A pair $c = (s_1, s_2, \{c_r^k\})$ is defined by two segments $s_1, s_2$ and their relative geometric measurements $\{c_r^k\}$ (relative orientation and relative area in our implementation). The likelihood of a pair given the model is
$$p(c|M, a) = \underbrace{p(s_{1,a}, s_{2,a}|a)}_{\text{appearance}} \cdot \underbrace{\prod_j p(s_{1,g}^j|\theta_1^j)^{v_1^j} \cdot \prod_j p(s_{2,g}^j|\theta_2^j)^{v_2^j}}_{\text{shape}} \cdot \underbrace{\prod_k p(c_r^k|\phi^k)^{v_r^k} \cdot p(c|\gamma)}_{\text{layout}} \qquad (6)$$
The binary model parameters $M = (\lambda, \beta, \gamma, \{\Theta_1^j\}, \{\Theta_2^j\}, \{\Phi^k\})$ control the behavior of the pair likelihood. The two sets of $\Theta_i^j = (\theta_i^j, v_i^j)$ are analogous to their counterparts in the unary model, and define the geometric distributions and their associated activation states for each segment in the pair respectively. The layout part of the model captures the interaction between the two segments in the pair. For each relative geometric property $\Phi^k = (\phi^k, v_r^k)$ the model gives its distribution $\phi^k$ over pairs of foreground segments and its activation state $v_r^k$. The model parameter $\gamma$ determines whether the pattern is composed of pairs of adjacent segments ($\gamma = 1$) or just any pair of segments ($\gamma = 0$). The factor $p(c|\gamma)$ is defined as 0 iff $\gamma = 1$ and the segments in c are not adjacent, while it is 1 in all other cases (so, when $\gamma = 1$, $p(c|\gamma)$ acts as a pair selector). The appearance factor $p(s_{1,a}, s_{2,a}|a) = [s_{1,a} = a_1 \wedge s_{2,a} = a_2]$ is 1 when the two segments have the foreground appearances $a = (a_1, a_2)$ for this image.
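The following sketch evaluates equation (6) under these conventions; the data layout (named tuples, property dictionaries) and the callable densities are our own assumptions.

```python
from collections import namedtuple

Seg = namedtuple("Seg", "appearance geom")        # geom: dict property -> value
Pair = namedtuple("Pair", "s1 s2 rel adjacent")   # rel: dict property -> value

def pair_likelihood(c, a, theta1, theta2, phi, act1, act2, act_rel, gamma):
    # Evaluates equation (6); theta1/theta2/phi map property names to density
    # callables, act* are the activated (v = 1) property names.
    if (c.s1.appearance, c.s2.appearance) != a:   # appearance selector
        return 0.0
    if gamma == 1 and not c.adjacent:             # layout selector p(c|gamma)
        return 0.0
    p = 1.0
    for j in act1:
        p *= theta1[j](c.s1.geom[j])              # shape of segment 1
    for j in act2:
        p *= theta2[j](c.s2.geom[j])              # shape of segment 2
    for k in act_rel:
        p *= phi[k](c.rel[k])                     # relative layout terms
    return p
```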
As an example, the model for a general stripe pattern is as follows. $\lambda = (A, A)$ contains all pairs of appearances from A. The geometric properties $\Theta_1^{elong}, \Theta_1^{curv}$ are active ($v_1^j = 1$) and their distributions $\theta_1^j$ peaked at high elongation and low curvedness. The corresponding properties $\{\Theta_2^j\}$ have similar values. The layout parameters are $\gamma = 1$, and $\Phi^{rel\,area}, \Phi^{rel\,orient}$ are active and peaked at 0 (expressing that the two segments are parallel and have the same area). Finally, $\beta$ is a value very close to 0, as the probability of a random segment under this complex model is very low.
4 Learning the model
Image Likelihood. The image likelihood defined in (3) depends on the foreground/background
labels F and on the foreground appearance a. Computing the complete likelihood, given only the
model M, involves maximizing a over the appearances $\lambda$ allowed by the model, and over F:
$$p(I|M) = \max_{a \in \lambda} \max_F p(I|M; F, a) \qquad (7)$$
The maximization over F is easily achieved by setting each f to the greater of the two cases in
equation (4) (equation (5) for a binary model). The maximization over a requires trying out all
allowed appearances $\lambda$. This is computationally inexpensive, as typically there are about 32 entries
in the appearance codebook.
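A sketch of equation (7), assuming a hypothetical helper seg_lik(s, a) that evaluates the f = 1 case of (4) or (5); for binary models the max over pairs is assumed folded into that helper.

```python
import math

def image_likelihood(segments, lam, seg_lik, beta):
    # Equation (7): maximize over the allowed appearances a in lambda; the
    # inner max over F is done per segment by taking the better of the
    # f=1 case (seg_lik(s, a)) and the background value beta.
    best = -float("inf")
    for a in lam:                                 # typically ~32 entries
        ll = sum(area * math.log(max(seg_lik(s, a), beta))
                 for s, area in segments)
        best = max(best, ll)
    return best
```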
Training data. We learn the model parameters in a weakly supervised setting. The training data consists of positive images $I_+ = \{I_+^i\}$ and negative images $I_- = \{I_-^i\}$. While many of the positive images contain examples of the attribute to be learnt (figure 4), a considerable proportion don't.
Conversely, some of the negative images do contain the attribute. Hence, we must operate under a
weak assumption: the attribute occurs more frequently on positive training images than on negative.
Moreover, only the (unreliable) image label is given, not the location of the attribute in the image.
As demonstrated in section 5, our approach is able to learn from this noisy training data.
Although our attribute models are generative, learning them in a discriminative fashion greatly helps
given the challenges posed by the weakly supervised setting. For example, in figure 4 most of the
overall surface for images labeled 'red' is actually white. Hence, a maximum likelihood estimator
over the positive training set alone would learn white, not red. A discriminative approach instead
Figure 4: Advantages of discriminative training (panel labels: positive training images, negative training images). The task is to learn the attribute 'red'. Although the most frequent color in the positive training images is white, white is also common across the negative set.
notices that white occurs frequently also on the negative set, and hence correctly picks up red, as it
is most discriminative for the positive set. Formally, the task of learning is to determine the model
parameters M that maximize the likelihood ratio
$$\frac{p(I_+|M)}{p(I_-|M)} = \frac{\prod_{I_+^i \in I_+} p(I_+^i|M)}{\prod_{I_-^i \in I_-} p(I_-^i|M)} \qquad (8)$$
Learning procedure. The parameters of the binary model are $M = (\lambda, \beta, \gamma, \{\Theta_1^j\}, \{\Theta_2^j\}, \{\Phi^k\})$,
as defined in the previous sections. Since the binary model is a superset of the unary one, we only
explain here how to learn the binary case. The procedure for the unary model is derived analogously.
In our implementation, $\lambda$ can contain either a single appearance, or all appearances in the codebook A. The former case covers attributes such as colors, or patterns with specific colors (such as zebra stripes). The latter case covers generic patterns, as it allows each image to pick a different appearance $a \in \lambda$, while at the same time it properly constrains all segments/pairs within an image to share the same appearance (e.g. subsequent pairs of stripe segments have the same appearance, forming a pattern of alternation ABABAB). Because of this definition, $\lambda$ can take on $(1 + |A|)^2/2$ different values (sets of appearances). As typically a codebook of $|A| \approx 32$ appearances is sufficient to model the data, we can afford exhaustive search over all possible values of $\lambda$. The same goes for $\gamma$, which can only take on two values.
Given a fixed $\lambda$ and $\gamma$, the learning task reduces to estimating the background probability $\beta$, and the geometric properties $\{\Theta_1^j\}, \{\Theta_2^j\}, \{\Phi^k\}$. To achieve this, we need to determine the latent variable F for each training image, as it is necessary for estimating the geometric distributions over the foreground segments. These are in turn necessary for estimating $\beta$. Given $\beta$ and the geometric properties we can estimate F (equation (6)). This particular circular dependence in the structure of our model suggests a relatively simple and computationally cheap approximate optimization algorithm:
1. For each $I \in \{I_+ \cup I_-\}$, estimate an initial F and a via equation (7), using an initial $\beta = 0.01$ and no geometry (i.e. all activation variables set to 0).
2. Estimate all geometric distributions $\theta_1^j, \theta_2^j, \phi^k$ over the foreground segments/pairs from all images, according to the initial estimates {F}.
3. Estimate $\beta$ and the geometric activations v iteratively:
(a) Update $\beta$ as the average probability of segments from $I_-$. This is obtained using the foreground expression of (5) for all segments of $I_-$.
(b) Activate the geometric property which most increases the likelihood-ratio (8) (i.e. set the corresponding v to 1). Stop iterating when no property increases (8).
4. The above steps already yield a reasonable estimate of all model parameters. We use it as initialization for the following EM-like iteration, which refines $\beta$ and $\theta_1^j, \theta_2^j, \phi^k$:
(a) Update {F} given the current $\beta$ and geometric properties (set each f to maximize (5)).
(b) Update $\theta_1^j, \theta_2^j, \phi^k$ given the current {F}.
(c) Update $\beta$ over $I_-$ using the current $\theta_1^j, \theta_2^j, \phi^k$.
The algorithm is repeated over all possible $\lambda$ and $\gamma$, and the model maximizing (8) is selected. Notice how $\beta$ is continuously re-estimated as more geometric properties are added. This implicitly offers to the selector the probability of an average negative segment under the current model as an up-to-date baseline for comparison. It prevents the model from overspecializing, as it pushes it to only pick up properties which distinguish positive segments/pairs from negative ones. A sketch of the greedy activation in step 3b is given below.
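The sketch assumes a hypothetical helper ratio_with(active) that re-estimates $\beta$ and returns the likelihood ratio (8) for a given set of active properties.

```python
def greedy_activation(candidates, ratio_with):
    # Step 3b: repeatedly activate the geometric property that most increases
    # the likelihood ratio (8); stop when no property improves it further.
    active, best = set(), ratio_with(set())
    while True:
        remaining = set(candidates) - active
        if not remaining:
            break
        gains = {j: ratio_with(active | {j}) for j in remaining}
        j_star = max(gains, key=gains.get)
        if gains[j_star] <= best:
            break
        active.add(j_star)
        best = gains[j_star]
    return active
```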
[Figure 5 plots: panels for Segment 1, Segment 2, and Layout show the learnt distributions over elongation, curvedness, area, compactness, relative orientation, and relative area.]
Figure 5: a) color models learnt for red, green, blue, and yellow. For each, the three most frequent patch types are displayed. Notice how each model covers different shades of a color. b+c) geometric properties of the learned models for stripes (b) and dots (c). Both models are binary, have general appearance, i.e. $\lambda = (A, A)$, and adjacent segments, i.e. $\gamma = 1$. The figure shows the geometric distributions for the activated geometric properties. Lower elongation values indicate more elongated segments. A blank slot means the property is not active for that attribute. See main text for discussion.
One last, implicit, parameter is the model complexity: is the attribute unary or binary? This is
tackled through model selection: we learn the best unary and binary models independently, and then
select the one with highest likelihood-ratio. The comparison is meaningful because image likelihood
is measured in the same way in both unary and binary cases (i.e. as the product over the segment
probabilities, equation (3)).
5 Experimental results
Learning. We present results on learning four colors (red, green, blue, and yellow) and three
patterns (stripes, dots, and checkerboard). The positive training set for a color consists of the 14
images in the first page returned by Google-images when queried by the color name. The proportion
of positive images unrelated to the color varies between 21% and 36%, depending on the color (e.g.
figure 4). The negative training set for a color contains all positive images for the other colors. Our
approach delivers an excellent performance. In all cases, the correct model is returned: unary, no
active geometric property, and the correct color as a specific appearance (figure 5a).
Stripes are learnt from 74 images collected from Google-images using 'striped', 'stripe', 'stripes' as queries. 20% of them don't contain stripes. The positive training set for dots contains 35 images, 29% of them without dots, collected from textile vendors' websites and Google-images (keywords 'dots', 'dot', 'polka dots'). For both attributes, the 56 images for colors act as negative training
set. As shown in figure 5, the learnt models capture well the nature of these attributes. Both stripes
and dots are learnt as binary and with general appearance, while they differ substantially in their
geometric properties. Stripes are learnt as elongated, rather straight pairs of segments, with largely
the same properties for the two segments in a pair. Their layout is meaningful as well: adjacent,
nearly parallel, and with similar area. In contrast, dots are learnt as small, unelongated, rather
curved segments, embedded within a much larger segment. This can be seen in the distribution of
the area of the first segment, the dot, relative to the area of the second segment, the 'background'
on which dots lie. The background segments have a very curved, zigzagging outline, because they
circumvent several dots. In contrast to stripes, the two segments that form this dotted pattern are not
symmetric in their properties. This characterisic is modeled well by our approach, confirming its
flexibility. We also train a model from the first 22 Google-images for the query 'checkerboard', 68%
of which show a black/white checkerboard. The learnt model is binary, with one segment for a black
square and the other for an adjacent white square, demonstrating the learning algorithm correctly
infers both models with specific and generic appearance, adapting to the training data.
Recognition. Once a model is learnt, it can be used to recognize whether a novel image contains
the attribute, by computing the likelihood (7). Moreover, the area covered by the attribute is localized by the segments with f = 1 (figure 6). We report results for red, yellow, stripes, and dots. All
test images are downloaded from Yahoo-images, Google-images, and Flickr. There are 45 (red), 39
(yellow), 106 (stripes), 50 (dots) positive test images. In general, the object carrying the attribute
stands against a background, and often there are other objects in the image, making the localization
task non-trivial. Moreover, the images exhibit extreme variability: there are paintings as well as photographs, stripes appear in any orientation, scale, and appearance, and they are often are deformed
Figure 6: Recognition results. Top row: red (left) and yellow (right). Middle rows: stripes. Bottom row:
dots. We give a few example test images and the corresponding localizations produced by the learned models.
Segments are colored according to their foreground likelihood, using matlab's jet colormap (from dark blue to
green to yellow to red to dark red). Segments deemed not to belong to the attribute are not shown (black). In
the case of dots, notice how the pattern is formed by the dots themselves and by the uniform area on which they
lie. The ROC plots shows the image classification performance for each attribute. The two lower curves in
the stripes plot correspond to a model without layout, and without either layout nor any geometry respectively.
Both curves are substantially lower, confirming the usefulness of the layout and shape components of the model.
(human body poses, animals, etc.). The same goes for dots, which can vary in thickness, spacing,
and so on. Each positive set is coupled with a negative one, in which the attribute doesn't appear,
composed of 50 images from the Caltech-101 ?Things? set [12]. Because these negative images are
rich in colors, textures and structure, they pose a considerable challenge for the classification task.
As can be seen in figure 6, our method achieves accurate localizations of the region covered by the
attribute. The behavior on stripe patterns composed of more than two appearances is particularly
interesting (the trousers in the rightmost example). The model explains them as disjoint groups of
binary stripes, with the two appearances which cover the largest image area. In terms of recognizing
whether an image contains the attribute, the method performs very well for red and yellow, with ROC
equal-error rates above 90%. Performance is convincing also for stripes and dots, especially since
these attributes have generic appearance, and hence must be recognized based only on geometry and
layout. In contrast, colors enjoy a very distinctive, specific appearance.
References
[1] N. Dalal and B. Triggs, Histograms of Oriented Gradients for Human Detection, CVPR, 2005.
[2] P. Felzenszwalb and D. Huttenlocher, Efficient Graph-Based Image Segmentation, IJCV, (50):2, 2004.
[3] R. Fergus, P. Perona, and A. Zisserman, Object Class Recognition by Unsupervised Scale-Invariant
Learning, CVPR, 2003.
[4] N. Jojic and Y. Caspi, Capturing image structure with probabilistic index maps, CVPR, 2004
[5] S. Lazebnik, C. Schmid, and J. Ponce, A Sparse Texture Representation Using Local Affine Regions,
PAMI, (27):8, 2005
[6] Y. Liu, Y. Tsin, and W. Lin, The Promise and Perils of Near-Regular Texture, IJCV, (62):1, 2005
[7] J. Van de Weijer, C. Schmid, and J. Verbeek, Learning Color Names from Real-World Images, CVPR,
2007.
[8] M. Varma and A. Zisserman, Texture classification: Are filter banks necessary?, CVPR, 2003.
[9] J. Winn, A. Criminisi, and T. Minka, Object Categorization by Learned Universal Visual Dictionary,
ICCV, 2005.
[10] J. Winn and N. Jojic. LOCUS: Learning Object Classes with Unsupervised Segmentation, ICCV, 2005.
[11] K. Yanai and K. Barnard, Image Region Entropy: A Measure of ?Visualness? of Web Images Associated
with One Concept, ACM Multimedia, 2005.
[12] Caltech 101 dataset: www.vision.caltech.edu/Image Datasets/Caltech101/Caltech101.html
Convex Learning with Invariances
Choon Hui Teo
Australian National University
[email protected]
Amir Globerson
CSAIL, MIT
[email protected]
Sam Roweis
Department of Computer Science
University of Toronto
[email protected]
Alexander J. Smola
NICTA
Canberra, Australia
[email protected]
Abstract
Incorporating invariances into a learning algorithm is a common problem in machine learning. We provide a convex formulation which can deal with arbitrary
loss functions and arbitrary invariances. In addition, it is a drop-in replacement for most
optimization algorithms for kernels, including solvers of the SVMStruct family.
The advantage of our setting is that it relies on column generation instead of modifying the underlying optimization problem directly.
1 Introduction
Invariances are one of the most powerful forms of prior knowledge in machine learning; they have a
long history [9, 1] and their application has been associated with some of the major success stories
in pattern recognition. For instance, the insight that in vision tasks, one should often be designing
detectors that are invariant with respect to translation, small degrees of rotation & scaling, and image
intensity has led to best-in-class algorithms including tangent-distance [13], virtual support vectors
[5] and others [6].
In recent years a number of authors have attempted to put learning with invariances on a solid mathematical footing. For instance, [3] discusses how to extract invariant features for estimation and
learning globally invariant estimators for a known class of invariance transforms (preferably arising
from Lie groups). Another mathematically appealing formulation of the problem of learning with
invariances casts it as a second order cone programming [8]; unfortunately this is neither particularly
efficient to implement (having worse than cubic scaling behavior) nor does it cover a wide range of
invariances in an automatic fashion. A different approach has been to pursue ?robust? estimation
methods which, roughly speaking, aim to find estimators whose performance does not suffer significantly when the observed inputs are degraded in some way. Robust estimation has been applied to
learning problems in the context of missing data [2] and to deal with specific type of data corruption
at test time [7]. The former approach again leads to a second order cone program, limiting its applicability to very small datasets; the latter is also computationally demanding and is limited to only
specific types of data corruption.
Our goal in this work is to develop a computationally scalable and broadly applicable approach to
supervised learning with invariances which is easily adapted to new types of problems and can take
advantage of existing optimization infrastructures. In this paper we propose a method which has
what we believe are many appealing properties:
1. It formulates invariant learning as a convex problem and thus can be implemented directly
using any existing convex solver, requiring minimal additional memory and inheriting the
convergence properties/guarantees of the underlying implementation.
2. It can deal with arbitrary invariances, including gradual degradations, provided that the
user provides a computational recipe to generate invariant equivalents efficiently from a
given data vector.
3. It provides a unifying framework for a number of previous approaches, such as the method
of Virtual Support Vectors [5] and is broadly applicable not just to binary classification but
in fact to any structured estimation problem in the sense of [16].
2 Maximum Margin Loss with Invariances
We begin by describing a maximum margin formulation of supervised learning which naturally
incorporates invariance transformations on the input objects. We assume that we are given input
patterns $x \in X$ from some space X and that we want to estimate outputs $y \in Y$. For instance $Y = \{\pm 1\}$ corresponds to binary classification; $Y = A^n$ corresponds to sequence prediction over the alphabet A.¹ We denote our prediction by $\hat{y}(x)$, which is obtained by maximizing our learned function $f : X \times Y \to \mathbb{R}$, i.e. $\hat{y}(x) := \mathrm{argmax}_{y \in Y} f(x, y)$. For instance, if we are training a (generative or discriminative) probabilistic model, $f(x, y) = \log p(y|x)$, then our prediction is the maximum a-posteriori estimate of the target y given x. In many interesting cases $\hat{y}(x)$ is obtained by solving a nontrivial discrete optimization problem, e.g. by means of dynamic programming. In kernel methods $f(x, y) = \langle \phi(x, y), w \rangle$ for a suitable feature map $\phi$ and weight vector w. For the purpose of our analysis the precise form of f is immaterial, although our experiments focus on kernel machines, due to the availability of scalable optimizers for that class of estimators.
2.1 Invariance Transformations and Invariance Sensitive Cost
The crucial ingredient in formulating invariant learning is to capture the domain knowledge that there exists some class S of invariance transforms s which can act on the input x while leaving the target y essentially unchanged. We denote by $(s(x), y), s \in S$ the set of valid transformations of the pair (x, y). For instance, we might believe that slight rotations (in pixel coordinates) of an input image in a pattern recognition problem do not change the image label. For text classification problems such as spam filtering, we may believe that certain editing operations (such as changes in capitalization or substitutions like Viagra → V1agra, V!agra) should not affect our decision function. Of course, most invariances only apply 'locally', i.e. in the neighborhood of the original input vector. For instance, rotating an image of the digit 6 too far might change its label to 9; applying both a substitution and an insertion can change Viagra → diagram. Furthermore, certain invariances may only hold for certain pairs of input and target. For example, we might believe that horizontal reflection is a valid invariance for images of digits in classes 0 and 8 but not for digits in class 2. The set $\{s(x) : s \in S\}$ incorporates both the locality and applicability constraints. (We have introduced a slight abuse of notation since s may depend on y, but this should always be clear in context.)
To complete the setup, we adopt the standard assumption that the world or task imposes a cost function such that if the true target for an input x is y and our prediction is $\hat{y}(x)$ we suffer a cost $\Delta(y, \hat{y}(x))$.² For learning with invariances, we extend the definition of $\Delta$ to include the invariance function s(x), if any, which was applied to the input object: $\Delta(y, \hat{y}(s(x)), s)$. This allows the cost to depend on the transformation, for instance we might suffer less cost for poor predictions when
to depend on the transformation, for instance we might suffer less cost for poor predictions when
the input has undergone very extreme transformations. In an image labeling problem, for example,
we might believe that a lighting/exposure invariance applies but we might want to charge small
cost for extremely over-exposed or under-exposed images since they are almost impossible to label.
Similarly, we might assert that scale invariance holds but give small cost to severely spatially downsampled images since they contain very little information.
2.2 Max Margin Invariant Loss
Our approach to the invariant learning problem is very natural, yet allows us to make a surprising
amount of analytical and algorithmic progress. A key quantity is the cost under the worst case
transformation for each example, i.e. the transformation under which our predicted target suffers
¹ For more nontrivial examples see, e.g. [16, 14] and the references therein.
² Normally $\Delta = 0$ if $\hat{y}(x) = y$ but this is not strictly necessary.
the maximal cost compared with the true target:
$$C(x, y, f) = \sup_{s \in S} \Delta(y, \hat{y}(s(x)), s) \qquad (1)$$
The objective function (loss) that we advocate minimizing during learning is essentially a convex
upper bound on this worst case cost which incorporates a notion of (scaled) margin:
$$l(x, y, f) := \sup_{y' \in Y,\, s \in S} \Gamma(y, y')\left(f(s(x), y') - f(s(x), y)\right) + \Delta(y, y', s) \qquad (2)$$
This loss function finds the combination of invariance transformation and predicted target for which the sum of (scaled) 'margin violation' plus the cost is maximized. The function $\Gamma(y, y')$ is a nonnegative margin scaling which allows different target/prediction pairs to impose different amounts of loss on the final objective function.³ The numerical scale of $\Gamma$ also sets the regularization tradeoff between margin violations and the prediction cost $\Delta$.
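For finite S and Y the loss can be evaluated by brute force, as in the sketch below; Gamma, Delta, and the transformation callables are placeholders for problem-specific choices, and structured outputs would need a loss-augmented argmax in place of the inner loop.

```python
def invariant_loss(x, y, f, S, Y, Gamma, Delta):
    # Equation (2) for finite S and Y, by enumeration; returns the loss value
    # and the maximizing (transformation, target) pair.
    best, arg = -float("inf"), (None, y)
    for s in S:
        sx = s(x)
        for yp in Y:
            v = Gamma(y, yp) * (f(sx, yp) - f(sx, y)) + Delta(y, yp, s)
            if v > best:
                best, arg = v, (s, yp)
    return best, arg
```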
This loss function has two mathematically important properties which allow us to develop scalable
and convergent algorithms as proposed above.
Lemma 1 The loss l(x, y, f) is convex in f for any choice of $\Gamma$, $\Delta$ and S.
Proof For fixed (y', s) the expression $\Gamma(y, y')(f(s(x), y') - f(s(x), y)) + \Delta(y, y', s)$ is linear in f, hence (weakly) convex. Taking the supremum over a set of convex functions yields a convex function.
This means that we can plug l into any convex solver, in particular whenever f belongs to a linear
function class, as is the case with kernel methods. The primal (sub)gradient of l is easy to write:
$$\partial_f l(x, y, f) = \Gamma(y, y^*)\left(\phi(s^*(x), y^*) - \phi(s^*(x), y)\right) \qquad (3)$$
where $s^*, y^*$ are the values of s, y for which the supremum in Eq. (2) is attained and $\phi$ is the evaluation functional of f, that is $\langle f, \phi(x, y)\rangle = f(x, y)$. In kernel methods $\phi$ is commonly referred to as the feature map with associated kernel
$$k((x, y), (x', y')) = \langle \phi(x, y), \phi(x', y') \rangle. \qquad (4)$$
Note that there is no need to define S formally. All we need is a computational recipe to obtain the worst case $s \in S$ in terms of the scaled margin in Eq. (2). Nor is there any requirement for $\Delta(y, y', s)$ or (s(x), y) to have any particularly appealing mathematical form, such as the polynomial trajectory required by [8], or the ellipsoidal shape described by [2].
Lemma 2 The loss l(x, y, f) provides an upper bound on $C(x, y, f) = \sup_{s \in S} \Delta(y, \hat{y}(s(x)), s)$.
Proof Denote by $(s^*, y^*)$ the values for which the supremum of C(x, y, f) is attained. By construction $f(s^*(x), y^*) \geq f(s^*(x), y)$. Plugging this inequality into Eq. (2) yields
$$l(x, y, f) \geq \Gamma(y, y^*)\left(f(s^*(x), y^*) - f(s^*(x), y)\right) + \Delta(y, y^*, s^*) \geq \Delta(y, y^*, s^*).$$
Here the first inequality follows by substituting $(s^*, y^*)$ into the supremum. The second inequality follows from the fact that $\Gamma \geq 0$ and that $(s^*, y^*)$ are the maximizers of the empirical loss.
This is essentially a direct extension of [16]. The main modifications are the inclusion of a margin scale $\Gamma$ and the use of an invariance transform s(x). In section 4 we clarify how a number of existing methods for dealing with invariances can be viewed as special cases of Eq. (2).
In summary, Eq. (2) penalizes estimation errors not only for the observed pair (x, y) but also for patterns s(x) which are 'near' x in terms of the invariance transform s. Recall, however, that the cost function $\Delta$ may assign quite a small cost to a transformation s which takes x very far away from the original. Furthermore, the transformation class is restricted only by the computational
consideration that we can efficiently find the 'worst case' transformation; S does not have to have
a specific analytic form. Finally, there is no specific restriction on y, thus making the formalism
applicable to any type of structured estimation.
³ Such scaling has been shown to be extremely important and effective in many practical problems, especially in structured prediction tasks. For example, the key difference between the large margin settings of [14] and [16] is the incorporation of a sequence-length dependent margin scaling.
3 Learning Algorithms for Minimizing Invariant Loss
We now turn to the question of learning algorithms for our invariant loss function. We assume
that we are given a training set of input patterns X = {x1 , . . . , xm } and associated labels Y =
{y1 , . . . , ym }. We follow the common approach of minimizing, at training time, our average training
loss plus a penalty for model complexity. In the context of kernel methods this can be viewed as a
regularized empirical risk functional of the form
$$R[f] = \frac{1}{m}\sum_{i=1}^m l(x_i, y_i, f) + \frac{\lambda}{2}\|f\|_{\mathcal{H}}^2 \quad \text{where } f(x, y) = \langle \phi(x, y), w \rangle. \qquad (5)$$
A direct extension of the derivation of [16] yields that the dual of (5) is given by
$$\underset{\alpha}{\text{minimize}} \quad \sum_{i,j=1}^m \sum_{y,y' \in Y} \sum_{s,s' \in S} \alpha_{iys}\,\alpha_{jy's'}\,K_{iys,jy's'} - \sum_{i=1}^m \sum_{y \in Y} \sum_{s \in S} \Delta(y_i, y, s)\,\alpha_{iys} \qquad (6a)$$
$$\text{subject to} \quad \lambda m \sum_{y \in Y} \sum_{s \in S} \alpha_{iys} = 1 \text{ for all } i \quad \text{and} \quad \alpha_{iys} \geq 0. \qquad (6b)$$
Here the entries of the kernel matrix K are given by
$$K_{iys,jy's'} = \Gamma(y_i, y)\,\Gamma(y_j, y')\,\langle \phi(s(x_i), y) - \phi(s(x_i), y_i),\; \phi(s'(x_j), y') - \phi(s'(x_j), y_j) \rangle \qquad (7)$$
This can be expanded into four kernel functions by using Eq. (4). Moreover, the connection between
the dual coefficients $\alpha_{iys}$ and f is given by
$$f(x', y') = \sum_{i=1}^m \sum_{y \in Y} \sum_{s \in S} \alpha_{iys}\left[k((s(x_i), y), (x', y')) - k((s(x_i), y_i), (x', y'))\right]. \qquad (8)$$
There are many strategies for attempting to minimize this regularized loss, either in the primal formulation or the dual, using either batch or online algorithms. In fact, a number of previous heuristics
for dealing with invariances can be viewed as heuristics for approximately minimizing an approximation to an invariant loss similar to l. For this reason we believe a discussion of optimization is
valuable before introducing specific applications of the invariance loss.
Whenever there is an unlimited combination of valid transformations and targets (i.e. the domain $S \times Y$ is infinite), the optimization above is a semi-infinite program, hence exact minimization of R[f] or of its dual is essentially impossible. However, even in such cases it is possible to find approximate solutions efficiently by means of column generation. In the following we describe two
algorithms exploiting this technique, which are valid for both infinite and finite programs. One based
on a batch scenario, inspired by SVMStruct [16], and one based on an online setting, inspired by
BMRM/Pegasos [15, 12].
3.1 A Variant of SVMStruct
The work of [16, 10] on SVMStruct-like optimization methods can be used directly to solve regularized risk minimization problems. The basic idea is to compute gradients of l(xi , yi , f ), either one
observation at a time, or for the entire set of observations simultaneously and to perform updates in
the dual space. While bundle methods work directly with gradients, solvers of the SVMStruct type
are commonly formulated in terms of column generation on individual observations. We give an
instance of SVMStruct for invariances in Algorithm 1. The basic idea is that instead of checking the
constraints arising from the loss functions only for y we check them for (y, s), that is, an invariance
in combination with a corresponding label which violates the margin most.
If we view the tuple (s, y) as a 'label' it is straightforward to see that the convergence results of [16] apply. That is, this algorithm converges to precision $\epsilon$ in $O(\epsilon^{-2})$ time. In fact, one may show, by solving the difference equation in the convergence proof of [16], that the rate can be improved to $O(\epsilon^{-1})$. We omit technical details here.
Algorithm 1 SVMStruct for Invariances
1: Input: data X, labels Y, sample size m, tolerance $\epsilon$
2: Initialize $S_i = \emptyset$ for all i, and w = 0.
3: repeat
4:   for i = 1 to m do
5:     $f(x', y') = \sum_i \sum_{z=(s,y) \in S_i} \alpha_{iz}\left[k((s(x_i), y), (x', y')) - k((s(x_i), y_i), (x', y'))\right]$
6:     $(s^*, y^*) = \mathrm{argmax}_{s \in S, y \in Y}\; \Gamma(y_i, y)\left[f(s(x_i), y) - f(s(x_i), y_i)\right] + \Delta(y_i, y, s)$
7:     $\xi_i = \max\left(0, \max_{(s,y) \in S_i} \Gamma(y_i, y)\left[f(s(x_i), y) - f(s(x_i), y_i)\right] + \Delta(y_i, y, s)\right)$
8:     if $\Gamma(y_i, y^*)\left[f(s^*(x_i), y^*) - f(s^*(x_i), y_i)\right] + \Delta(y_i, y^*, s^*) > \xi_i + \epsilon$ then
9:       Increase constraint set $S_i \leftarrow S_i \cup \{(s^*, y^*)\}$
10:      Optimize (6) using only $\alpha_{iz}$ where $z \in S_i$.
11:    end if
12:   end for
13: until S has not changed in this iteration
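The column-generation structure of Algorithm 1 can be summarized as follows; find_violator and solve_restricted stand in for line 6 and the restricted dual QP of line 10, and are assumptions of this sketch rather than parts of the original implementation.

```python
def svmstruct_invariances(m, find_violator, solve_restricted, eps=1e-3):
    # Working-set outline of Algorithm 1. find_violator(i, f) implements
    # line 6 and returns (s_star, y_star, value); solve_restricted(S)
    # re-solves the dual (6) over the current columns and returns (f, xi).
    S = [set() for _ in range(m)]
    f, xi = solve_restricted(S)
    changed = True
    while changed:
        changed = False
        for i in range(m):
            s_star, y_star, val = find_violator(i, f)
            if val > xi[i] + eps:
                S[i].add((s_star, y_star))
                f, xi = solve_restricted(S)
                changed = True
    return f, S
```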
3.2 An Application of Pegasos
Recently, Shalev-Shwartz et al. [12] proposed an online algorithm for learning optimization problems of the type of Eq. (5). Algorithm 2 is an adaptation of their method to learning with our convex invariance loss. In a nutshell, the algorithm performs stochastic gradient descent on the regularized version of the instantaneous loss while using a learning rate of $\frac{1}{\lambda t}$, and projects the current weight vector back to a feasible region $\|f\| \leq \sqrt{2R[0]/\lambda}$, should it exceed it.
Algorithm 2 Pegasos for Invariances
1: Input: data X, labels Y, sample size m, iterations T
2: Initialize $f_1 = 0$
3: for t = 1 to T do
4:   Pick $(x, y) := (x_{t \bmod m}, y_{t \bmod m})$
5:   Compute constraint violator
     $(s^*, y^*) := \mathrm{argmax}_{\tilde{s} \in S, \tilde{y} \in Y}\; \Gamma(y, \tilde{y})\left[f(\tilde{s}(x), \tilde{y}) - f(\tilde{s}(x), y)\right] + \Delta(y, \tilde{y}, \tilde{s})$
6:   Update $f_{t+1} = \left(1 - \frac{1}{t}\right) f_t + \frac{\Gamma(y, y^*)}{\lambda t}\left[k((s^*(x), y), (\cdot, \cdot)) - k((s^*(x), y^*), (\cdot, \cdot))\right]$
7:   if $\|f_{t+1}\| > \sqrt{2R[0]/\lambda}$ then
8:     Update $f_{t+1} \leftarrow \sqrt{2R[0]/\lambda} \cdot f_{t+1}/\|f_{t+1}\|$
9:   end if
10: end for
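For a linear function class $f(x, y) = \langle w, \phi(x, y)\rangle$ the updates of Algorithm 2 take the following form; phi and violator are hypothetical helpers supplied by the user, not part of the original code.

```python
import numpy as np

def pegasos_invariances(X, Y, phi, violator, lam, T, R0):
    # Primal form of Algorithm 2 for linear f(x, y) = <w, phi(x, y)>.
    # violator(x, y, w) implements line 5 and returns (s_star, y_star, gamma)
    # with gamma = Gamma(y, y_star); phi returns a NumPy feature vector.
    w = np.zeros_like(phi(X[0], Y[0]))
    radius = np.sqrt(2.0 * R0 / lam)
    m = len(X)
    for t in range(1, T + 1):
        x, y = X[(t - 1) % m], Y[(t - 1) % m]
        s_star, y_star, gamma = violator(x, y, w)
        sx = s_star(x)
        # line 6: shrink, then step along the subgradient of the loss
        w = (1.0 - 1.0 / t) * w - (gamma / (lam * t)) * (phi(sx, y_star) - phi(sx, y))
        n = np.linalg.norm(w)
        if n > radius:                 # lines 7-9: project back to the ball
            w *= radius / n
    return w
```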
We can apply the convergence result from [12] directly to Algorithm 2. In this context note that the gradient with respect to l is bounded by twice the norm of $\Gamma(y, y^*)\left[\phi(s(x), y^*) - \phi(s(x), y)\right]$, due to Eq. (3). We assume that the latter is given by R. We can apply [12, Lemma 1] immediately:
Theorem 3 Denote by $R_t[f] := l(x_{t \bmod m}, y_{t \bmod m}, f) + \frac{\lambda}{2}\|f\|^2$ the instantaneous risk at step t. In this case Algorithm 2 satisfies the following bound:
$$\frac{1}{T}\sum_{t=1}^T R_t[\bar{f}_t] \leq \frac{1}{T}\sum_{t=1}^T R_t[f_t] \leq \min_{\|f\| \leq \sqrt{2R[0]/\lambda}} \frac{1}{T}\sum_{t=1}^T R_t[f] + \frac{R^2(1 + \log T)}{2\lambda T}. \qquad (9)$$
In particular, if T is a multiple of m we obtain bounds for the regularized risk $R[\bar{f}]$.
4 Related work and specific invariances
While the previous sections gave a theoretical description of the loss, we now discuss a number of
special cases which can be viewed as instances of a convex invariance loss function presented here.
Virtual Support Vectors (VSVs): The most straightforward approach to incorporate prior knowledge is by adding ?virtual? (data) points generated from existing dataset. An extension of this
approach is to generate virtual points only from the support vectors (SVs) obtained from training on
the original dataset [5]. The advantage of this approach is that it results in far fewer SV than training
on all virtual points. However, it is not clear which objective it optimizes. Our current loss based
approach does optimize an objective, and generates the required support vectors in the process of
the optimization.
Second Order Cone Programming for Missing and Uncertain Data: In [2], the authors consider
the case where the invariance is in the form of ellipsoids around the original point. This is shown
to correspond to a second order cone program (SOCP). Instead of solving SOCP, we can solve an
equivalent but unconstrained convex problem.
Semidefinite Programming for Invariances: Graepel and Herbrich [8] introduce a method for
learning when the invariances are polynomial trajectories. They show that the problem is equivalent to an semidefinite program (SDP). Their formulation is again an instance of our general loss
based approach. Since SDPs are typically hard to solve for large problems, it is likely that the
optimization scheme we suggest will perform considerably faster than standard SDP solvers.
Robust Estimation: Globerson and Roweis [7] address the case where invariances correspond to
deletion of a subset of the features (i.e., setting their values to zero). This results in a quadratic
program (QP) with a variables for each data point and feature in the training set. Solving such
a large QP (e.g., 107 variables for the MNIST dataset) is not practical, and again the algorithm
presented here can be much more efficient. In fact, in the next section we introduce a generalization
of the invariance in [7] and show how it can be optimized efficiently.
5 Experiments
Knowledge about invariances can be useful in a wide array of applications such as image recognition
and document processing. Here we study two specific cases: handwritten digit recognition on the
MNIST data, and spam filtering on the ECML06 dataset. Both examples are standard multiclass
classification tasks, where $\Delta(y, y', s)$ is taken to be the 0/1 loss. Also, we take the margin scale $\Gamma(y, y')$ to be identically one. We used SVMStruct and BMRM as the solvers for the experiments.
5.1 Handwritten Digits Recognition
Humans can recognize handwritten digits even when they are altered in various ways. To test our
invariant SVM (Invar-SVM) in this context, we used handwritten digits from the MNIST dataset [11]
and modeled 20 invariance transformations: 1-pixel and 2-pixel shifts in 4 and 8 directions, rotations by ±10 degrees, scaling by ±0.15 units, and shearing along the vertical or horizontal axis by ±0.15 units.
To test the effect of learning with these invariances we used small training samples of 10, 20, . . . , 50
samples per digit. In this setting invariances are particularly important since they can compensate
for the insufficient training data. We compared Invar-SVM to a related method where all possible
transformations were applied in advance to each data point to create virtual samples. The virtual
and original samples were used to train a multiclass SVM (VIR-SVM). Finally, we also trained a
multiclass SVM that did not use any invariance information (STD-SVM). All of the aforementioned
SVMs were trained using RBF kernel with well-chosen hyperparameters. For evaluation we used
the standard MNIST test set.
Results for the three methods are shown in Figure 1. It can be seen that Invar-SVM and VIR-SVM,
which use invariances, significantly improve the recognition accuracy compared to STD-SVM. This
comes at a certain cost of using more support vectors, but for Invar-SVM the number of support
vectors is roughly half of that in the VIR-SVM.
5.2 SPAM Filtering
The task of detecting spam emails is a challenging machine learning problem. One of the key
difficulties with such data is that it can change over time as a result of attempts of spam authors to
outwit spam filters [4]. In this context, the spam filter should be invariant to the ways in which a
spam authors will change their style. One common mechanism of style alteration is the insertion
of common words, and avoiding using specific keywords consistently over time. If documents are
6
Figure 1: Results for the MNIST handwritten digits recognition task, comparing SVM trained on
original samples (STD-SVM), SVM trained on original and virtual samples (VIR-SVM), and our
convex invariance-loss method (Invar-SVM). Left figure shows the classification error as a function
of the number of original samples per digit used in training. Right figure shows the number of
support vectors corresponding to the optimum of each method.
represented using a bag-of-words, these two strategies correspond to incrementing the counts for
some words, or setting them to zero [7].
Here we consider a somewhat more general invariance class (FSCALE) where word counts may be
scaled by a maximum factor of u (e.g., 1.5) and a minimum factor of l (e.g., 0.5), and the maximum
number of words subject to such perturbation is limited at K. Note that by setting l = 0 and u = 1
we specialize it to the feature deletion case (FDROP) in [7].
The invariances we consider are thus defined by
$$s(x) = \{x \odot \rho : \rho \in [l, u]^d,\; l \leq 1 \leq u,\; \#\{i : \rho_i \neq 1\} \leq K\}, \qquad (10)$$
where $\odot$ denotes the element-wise product, d is the number of features, and $\#\{\cdot\}$ denotes the cardinality of the set. The set S is large, so exhaustive enumeration is intractable. However, the search for the optimal perturbation $s^*$ is a linear program and can be computed efficiently by Algorithm 3 in
O(d log d) time.
We evaluated the performance of our invariance loss FSCALE and its special case FDROP as well as the standard hinge loss on the ECML'06 Discovery Challenge Task A dataset.⁴ This dataset consists of two subsets, namely an evaluation set (ecml06a-eval) and a tuning set (ecml06a-tune). ecml06a-eval has 4000/7500 training/testing emails with dimensionality 206908, and ecml06a-tune has 4000/2500 training/testing emails with dimensionality 169620. We selected the best parameters for each method on ecml06a-tune and used them for the training on ecml06a-eval. Results and parameter sets are shown in Table 1. We also performed McNemar's tests and rejected the null hypothesis that there is no difference between hinge and FSCALE/FDROP with p-value < $10^{-32}$.
Algorithm 3 FSCALE loss
1: Input: datum x, label y, weight vector $w \in \mathbb{R}^d$, invariance-loss parameters (K, l, u)
2: Initialize i := 1, j := d
3: $B := y \cdot w \odot x$
4: I := IndexSort(B), such that B(I) is in ascending order
5: for k = 1 to K do
6:   if $B[I[i]] \cdot (1 - u) > B[I[j]] \cdot (1 - l)$ then
7:     $x[I[i]] := x[I[i]] \cdot u$ and i := i + 1
8:   else
9:     $x[I[j]] := x[I[j]] \cdot l$ and j := j - 1
10:  end if
11: end for
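A Python version of Algorithm 3, under the assumption that x and w are dense NumPy arrays and K ≤ d; it returns the worst-case rescaled input rather than the loss itself, and the function name is our own.

```python
import numpy as np

def fscale_worst_case(x, y, w, K, l, u):
    # Picks the K features whose rescaling hurts the margin most; each is
    # scaled by u or l, whichever causes the larger loss increase.
    B = y * w * x                      # per-feature margin contribution
    I = np.argsort(B)                  # ascending order
    i, j = 0, len(B) - 1
    x = x.copy()
    for _ in range(K):
        # loss increase: B[I[i]]*(1-u) for boosting, B[I[j]]*(1-l) for shrinking
        if B[I[i]] * (1 - u) > B[I[j]] * (1 - l):
            x[I[i]] *= u
            i += 1
        else:
            x[I[j]] *= l
            j -= 1
    return x
```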
⁴ http://www.ecmlpkdd2006.org/challenge.html
Loss     Average Accuracy %   Average AUC %   Parameters ($\lambda$, K, l, u)
Hinge    74.75                83.63           (0.005, -, -, -)
FDROP    81.73                87.79           (0.1, 14, 0, 1)
FSCALE   83.71                89.14           (0.01, 10, 0.5, 8)
Table 1: SPAM filtering results on ecml06a-eval averaged over 3 testing subsets. $\lambda$ is the regularization constant; (K, l, u) are parameters for the invariance-loss methods. The loss FSCALE and its special case FDROP statistically significantly outperform the standard hinge loss (Hinge).
6 Summary
We have presented a general approach for learning using knowledge about invariances. Our cost
function is essentially a worst case margin loss, and thus its optimization only relies on finding
the worst case invariance for a given data point and model. This approach can allow us to solve
invariance problems which previously required solving very large optimization problems (e.g. a
QP in [7]). We thus expect it to extend the scope of learning with invariances both in terms of the
invariances used and efficiency of optimization.
Acknowledgements: We thank Carlos Guestrin and Bob Williamson for fruitful discussions. Part
of the work was done when CHT was visiting NEC Labs America. NICTA is funded through the
Australian Government?s Backing Australia?s Ability initiative, in part through the ARC. This work
was supported in part by the IST Programme of the European Community, under the PASCAL
Network of Excellence, IST-2002-506778.
References
[1] Y. Abu-Mostafa. A method for learning from hints. In S. J. Hanson, J. D. Cowan, and C. L. Giles, editors,
NIPS 5, 1992.
[2] C. Bhattacharyya, K. S. Pannagadatta, and A. J. Smola. A second order cone programming formulation
for classifying missing data. In L. K. Saul, Y. Weiss, and L. Bottou, editors, NIPS 17, 2005.
[3] C. J. C. Burges. Geometry and invariance in kernel based methods. In B. Sch?olkopf, C. J. C. Burges, and
A. J. Smola, editors, Advances in Kernel Methods ? Support Vector Learning, pages 89?116, Cambridge,
MA, 1999. MIT Press.
[4] N. Dalvi, P. Domingos, Mausam, S. Sanghai, and D. Verma. Adversarial classification. In KDD, 2004.
[5] D. DeCoste and B. Sch?olkopf. Training invariant support vector machines. Machine Learning, 46:161?
190, 2002.
[6] M. Ferraro and T. M. Caelli. Lie transformation groups, integral transforms, and invariant pattern recognition. Spatial Vision, 8:33?44, 1994.
[7] A. Globerson and S. Roweis. Nightmare at test time: Robust learning by feature deletion. In ICML, 2006.
[8] T. Graepel and R. Herbrich. Invariant pattern recognition by semidefinite programming machines. In
S. Thrun, L. Saul, and B. Sch?olkopf, editors, NIPS 16, 2004.
[9] G. E. Hinton. Learning translation invariant recognition in massively parallel networks. In Proceedings
Conference on Parallel Architectures and Laguages Europe, pages 1?13. Springer, 1987.
[10] T. Joachims. Training linear SVMs in linear time. In KDD, 2006.
[11] Y. LeCun, L. D. Jackel, L. Bottou, A. Brunot, C. Cortes, J. S. Denker, H. Drucker, I. Guyon, U. A.
M?uller, E. S?ackinger, P. Simard, and V. Vapnik. Comparison of learning algorithms for handwritten digit
recognition. In F. Fogelman-Souli?e and P. Gallinari, editors, ICANN, 1995.
[12] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal estimated sub-gradient solver for SVM. In
ICML, 2007.
[13] P. Simard, Y. LeCun, and J. Denker. Efficient pattern recognition using a new transformation distance. In
S. J. Hanson, J. D. Cowan, and C. L. Giles, editors, NIPS 5, 1993.
[14] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In S. Thrun, L. Saul, and
B. Sch?olkopf, editors, NIPS 16, 2004.
[15] C.H. Teo, Q. Le, A.J. Smola, and S.V.N. Vishwanathan. A scalable modular convex solver for regularized
risk minimization. In KDD, 2007.
[16] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and
interdependent output variables. J. Mach. Learn. Res., 6:1453–1484, 2005.
Active Preference Learning with Discrete Choice Data
Eric Brochu, Nando de Freitas and Abhijeet Ghosh
Department of Computer Science
University of British Columbia
Vancouver, BC, Canada
{ebrochu, nando, ghosh}@cs.ubc.ca
Abstract
We propose an active learning algorithm that learns a continuous valuation model
from discrete preferences. The algorithm automatically decides what items are
best presented to an individual in order to find the item that they value highly in
as few trials as possible, and exploits quirks of human psychology to minimize
time and cognitive burden. To do this, our algorithm maximizes the expected
improvement at each query without accurately modelling the entire valuation surface, which would be needlessly expensive. The problem is particularly difficult
because the space of choices is infinite. We demonstrate the effectiveness of the
new algorithm compared to related active learning methods. We also embed the
algorithm within a decision making tool for assisting digital artists in rendering
materials. The tool finds the best parameters while minimizing the number of
queries.
1 Introduction
A computer graphics artist sits down to use a simple renderer to find appropriate surfaces for a
typical reflectance model. It has a series of parameters that must be set to control the simulation:
"specularity", "Fresnel reflectance coefficient", and other, less-comprehensible ones. The parameters interact in ways difficult to discern. The artist knows in his mind's eye what he wants, but he's not a mathematician or a physicist: no course he took during his MFA covered Fresnel reflectance
models. Even if it had, would it help? He moves the specularity slider and waits for the image
to be generated. The surface is too shiny. He moves the slider back a bit and runs the simulation
again. Better. The surface is now appropriately dull, but too dark. He moves a slider down. Now
it's the right colour, but the specularity doesn't look quite right any more. He repeatedly bumps the
specularity back up, rerunning the renderer at each attempt until it looks right. Good. Now, how to
make it look metallic...?
Problems in simulation, animation, rendering and other areas often take such a form, where the
desired end result is identifiable by the user, but parameters must be tuned in a tedious trial-and-error process. This is particularly apparent in psychoperceptual models, where continual tuning is
required to make something ?look right?. Using the animation of character walking motion as an
example, for decades, animators and scientists have tried to develop objective functions based on
kinematics, dynamics and motion capture data [Cooper et al., 2007]. However, even when expensive mocap is available, we simply have to watch an animated film to be convinced of how far we
still are from solving the gait animation problem. Unfortunately, it is not at all easy to find a mapping
from parameterized animation to psychoperceptual plausibility. The perceptual objective function is
simply unknown. Fortunately, however, it is fairly easy to judge the quality of a walk ? in fact, it is
trivial and almost instantaneous. The application of this principle to animation and other psychoperceptual tools is motivated by the observation that humans often seem to be forming a mental model
of the objective function. This model enables them to exploit feasible regions of the parameter space
where the valuation is predicted to be high and to explore regions of high uncertainty. It is our
Figure 1: An illustrative example of the difference between models learned for regression vesus optimization.
The regression model fits the true function better overall, but doesn't fit at the maximum better than anywhere
else in the function. The optimization model is less accurate overall, but fits the area of the maximum very
well. When resources are limited, such as an active learning environment, it is far more useful to fit the area
of interest well, even at the cost of overall predictive performance. Getting a good fit for the maximum will
require many more samples using conventional regression.
thesis that the process of tweaking parameters to find a result that looks "right" is akin to sampling a
perceptual objective function, and that twiddling the parameters to find the best result is, in essence,
optimization. Our objective function is the psycho-perceptual process underlying judgement ? how
well a realization fits what the user has in mind. Following the econometrics terminology, we refer
to the objective as the valuation. In the case of a human being rating the suitability of a simulation,
however, it is not possible to evaluate this function over the entire domain. In fact, it is in general impossible to even sample the function directly and get a consistent response! While it would
theoretically be possible to ask the user to rate realizations with some numerical scale, such methods often have problems with validity and reliability. Patterns of use and other factors can result
in a drift effect, where the scale varies over time [Siegel and Castellan, 1988]. However, human
beings do excel at comparing options and expressing a preference for one over others [Kingsley,
2006]. This insight allows us to approach the optimization function in another way. By presenting
two or more realizations to a user and requiring only that they indicate preference, we can get far
more robust results with much less cognitive burden on the user [Kendall, 1975]. While this means
we can?t get responses for a valuation function directly, we model the valuation as a latent function, inferred from the preferences, which permits an active learning approach [Cohn et al., 1996;
Tong and Koller, 2000].
This motivates our second major insight: it is not necessary to accurately model the entire objective function. The problem is actually one of optimization, not regression (Figure 1). We can't
directly maximize the valuation function, so we propose to use an expected improvement function
(EIF) [Jones et al., 1998; Sasena, 2002]. The EIF produces an estimate of the utility of knowing the
valuation at any point in the space. The result is a principled way of trading off exploration (showing
the user examples unlike any they have seen) and exploitation (trying to show the user improvements
on examples they have indicated preference for). Of course, regression-based learning can produce
an accurate model of the entire valuation function, which would also allow us to find the best valuation. However, this comes at the cost of asking the user to compare many, many examples that have
no practical relation what she is looking for, as we demonstrate experimentally in Sections 3 and
4. Our method tries instead to make the most efficient possible use of the user?s time and cognitive
effort.
Our goal is to exploit the strengths of human psychology and perception to develop a novel framework of valuation optimization that uses active preference learning to find the point in a parameter
space that approximately maximizes valuation with the least effort to the human user. Our goal is
to offload the cognitive burden of estimating and exploring different sets of parameters, though we
can incorporate "slider twiddling" into the framework easily. In Section 4, we present a simple, but
practical application of our model in a material design gallery that allows artists to find particular
appearance rendering effects. Furthermore, the valuation function can be any psychoperceptual process that lends itself to sliders and preferences: the model can support an animator looking for a
particular "cartoon physics" effect, an artist trying to capture a particular mood in the lighting of a
scene, or an electronic musician looking for a specific sound or rhythm. Though we use animation
and rendering as motivating domains, our work has a broad scope of application in music and other
arts, as well as psychology, marketing and econometrics, and human-computer interfaces.
1.1 Previous Work
Probability models for learning from discrete choices have a long history in psychology and econometrics [Thurstone, 1927; Mosteller, 1951; Stern, 1990; McFadden, 2001]. They have been studied
extensively for use in rating chess players, and the Elo system [Élő, 1978] was adopted by the
World Chess Federation FIDE to model the probability of one player defeating another. Glickman
and Jensen [2005] use Bayesian optimal design for adaptively finding pairs for tournaments. These
methods all differ from our work in that they are intended to predict the probability of a preference outcome over a finite set of possible pairs, whereas we work with infinite sets and are only
incidentally interested in modelling outcomes.
In Section 4, we introduce a novel ?preference gallery? application for designing simulated materials
in graphics and animation to demonstrate the practical utility of our model. In the computer graphics
field, the Design Gallery [Marks et al., 1997] for animation and the gallery navigation interface for
Bidirectional Reflectance Distribution Functions (BRDFs) [Ngan et al., 2006] are artist-assistance
tools most like ours. They both uses non-adaptive heuristics to find the set of input parameters to be
used in the generation of the display. We depart from this heuristic treatment and instead present a
principled probabilistic decision making approach to model the design process.
Parts of our method are based on [Chu and Ghahramani, 2005b], which presents a preference learning method using probit models and Gaussian processes. They use a Thurstone-Mosteller model, but with an innovative nonparametric model of the valuation function. [Chu
and Ghahramani, 2005a] adds active learning to the model, though the method presented there
differs from ours in that realizations are selected from a finite pool to maximize informativeness. More importantly, though, this work, like much other work in the field [Seo et al., 2000;
Guestrin et al., 2005], is concerned with learning the entire latent function. As our experiments
show in Section 3, this is too expensive an approach for our setting, leading us to develop the new
active learning criteria presented here.
2
Active Preference Learning
By querying the user with a paired comparison, one can estimate statistics of the valuation function
at the query point, but only at considerable expense. Thus, we wish to make sure that the samples
we do draw will generate the maximum possible improvement.
Our method for achieving this goal iterates the following steps (a simplified sketch of the full loop is given after the list):
1. Present the user with a new pair and record the choice: Augment the training set of paired choices
with the new user data.
2. Infer the valuation function: Here we use a Thurstone-Mosteller model with Gaussian processes.
See Sections 2.1 and 2.2 for details. Note that we are not interested in predicting the value of the
valuation function over the entire feasible domain, but rather in predicting it well near the optimum.
3. Formulate a statistical measure for exploration-exploitation: We refer to this measure as the
expected improvement function (EIF). Its maximum indicates where to sample next. EI is a function
of the Gaussian process predictions over the feasible domain. See Section 2.3.
4. Optimize the expected improvement function to obtain the next query point: Finding the maximum of the EI corresponds to a constrained nonlinear programming problem. See Section 2.3.
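The runnable sketch below is deliberately simplified: the preference GP and expected-improvement maximisation of steps 2-4 are collapsed into a uniform random challenger (an assumption of this sketch, not the paper's method), while the simulated user implements the random-utility choice model of Section 2.1.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_user(r, c, f, sigma=0.1):
    # Step 1 stand-in: random-utility choice; the "user" prefers r to c
    # when the noisy valuation of r is higher.
    return f(r) + sigma * rng.normal() > f(c) + sigma * rng.normal()

def active_preference_loop(f, n_queries=20, d=2):
    # Steps 2-4 (preference GP + expected improvement) are replaced here
    # by a uniform random proposal, purely to show the query structure.
    data = []                         # ranked pairs (winner, loser)
    best = rng.random(d)              # current incumbent point in [0, 1]^d
    for _ in range(n_queries):
        challenger = rng.random(d)    # the paper would maximise EI here
        if simulated_user(challenger, best, f):
            data.append((challenger, best))
            best = challenger
        else:
            data.append((best, challenger))
    return best, data

# Example: hidden valuation with its maximum at (0.5, 0.5).
x_best, pairs = active_preference_loop(lambda x: -((x - 0.5) ** 2).sum())
```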
2.1 Preference Learning Model
Assume we have shown the user M pairs of items. In each case, the user has chosen which item she
likes best. The dataset therefore consists of the ranked pairs D = {r_k ≻ c_k; k = 1, . . . , M}, where the symbol ≻ indicates that the user prefers r to c. We use x_{1:N} = {x_1, x_2, . . . , x_N}, x_i ∈ X ⊂ R^d, to denote the N elements in the training data. That is, r_k and c_k correspond to two elements of x_{1:N}.
Our goal is to compute the item x (not necessarily in the training data) with the highest user valuation
in as few comparisons as possible. We model the valuation functions u(?) for r and c as follows:
u(rk )
u(ck )
= f (rk ) + erk
= f (ck ) + eck ,
3
(1)
where the noise terms are Gaussian: e_{r_k} ~ N(0, σ^2) and e_{c_k} ~ N(0, σ^2). Following [Chu and Ghahramani, 2005b], we assign a nonparametric Gaussian process prior to the unknown mean valuation: f(·) ~ GP(0, K(·, ·)). That is, at the N training points, p(f) = |2πK|^{-1/2} exp(-(1/2) f^T K^{-1} f), where f = {f(x_1), f(x_2), . . . , f(x_N)} and the symmetric positive definite covariance K has entries (kernels) K_ij = k(x_i, x_j). Initially we learned these parameters via maximum likelihood, but
soon realized that this was unsound due to the scarcity of data. To remedy this, we elected to use
subjective priors using simple heuristics, such as expected dataset spread. Although we use Gaussian processes as a principled method of modelling the valuation, other techniques, such as wavelets
could also be adopted.
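As an illustrative sketch (not the paper's implementation), the Gaussian process prior density above can be evaluated with a standard squared-exponential kernel; the kernel choice and the jitter term are assumptions of this example.

```python
import numpy as np

def rbf_kernel(X, Xp, length_scale=1.0):
    # Squared-exponential kernel k(x, x'); one common choice, since the
    # paper does not fix a specific kernel.
    d2 = ((X[:, None, :] - Xp[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_log_prior(f, X, length_scale=1.0, jitter=1e-8):
    # log p(f) = -1/2 f^T K^{-1} f - 1/2 log|2 pi K| at the training points.
    K = rbf_kernel(X, X, length_scale) + jitter * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, f))
    logdet = 2.0 * np.log(np.diag(L)).sum()
    return -0.5 * (f @ alpha + logdet + len(f) * np.log(2 * np.pi))
```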
Random utility models such as (1) have a long and influential history in psychology and the study
of individual choice behaviour in economic markets. Daniel McFadden?s Nobel Prize speech [McFadden, 2001] provides a glimpse of this history. Many more comprehensive treatments appear in
classical economics books on discrete choice theory.
Under our Gaussian utility models, the probability that item r is preferred to item c is given by:
P(r_k ≻ c_k) = P(u(r_k) > u(c_k)) = P(e_{c_k} − e_{r_k} < f(r_k) − f(c_k)) = Φ((f(r_k) − f(c_k)) / (√2 σ)),
where Φ(d_k) = (1/√(2π)) ∫_{−∞}^{d_k} exp(−a^2/2) da is the cumulative distribution function of the standard Normal distribution. This model, relating binary observations to a continuous latent function, is known as the
Thurstone-Mosteller law of comparative judgement [Thurstone, 1927; Mosteller, 1951]. In statistics
it goes by the name of binomial-probit regression. Note that one could also easily adopt a logistic (sigmoidal) link function Φ(d_k) = (1 + exp(−d_k))^{−1}. In fact, such a choice is known as the Bradley-Terry model [Stern, 1990]. If the user had more than two choices one could adopt a multinomial-probit model. This multi-category extension would, for example, enable the user to
state no preference for any of the two items being presented.
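A small sketch of the two link functions, assuming scalar latent valuations (math.erf gives the standard Normal CDF):

```python
import math

def preference_probability(f_r, f_c, sigma):
    # Thurstone-Mosteller / binomial-probit link: P(r > c) given latent
    # valuations f(r), f(c) and noise scale sigma, as in Section 2.1.
    d = (f_r - f_c) / (math.sqrt(2.0) * sigma)
    return 0.5 * (1.0 + math.erf(d / math.sqrt(2.0)))  # standard Normal CDF

def preference_probability_logit(f_r, f_c):
    # Bradley-Terry alternative with a logistic link.
    return 1.0 / (1.0 + math.exp(-(f_r - f_c)))
```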
2.2 Inference
Our goal is to estimate the posterior distribution of the latent utility function given the discrete data.
That is, we want to compute p(f|D) ∝ p(f) ∏_{k=1}^{M} p(d_k|f), where d_k = (f(r_k) − f(c_k)) / (√2 σ). Although
there exist sophisticated variational and Monte Carlo methods for approximating this distribution,
we favor a simple strategy: Laplace approximation. Our motivation for doing this is the simplicity
and computational efficiency of this technique. Moreover, given the amount of uncertainty in user
valuations, we believe the choice of approximating technique plays a small role and hence we expect
the simple Laplace approximation to perform reasonably in comparison to other techniques. The
application of the Laplace approximation is fairly straightforward, and we refer the reader to [Chu
and Ghahramani, 2005b] for details.
Finally, given an arbitrary test pair, the predicted utility f^* and f are jointly Gaussian. Hence, one can obtain the conditional p(f^*|f) easily. Moreover, the predictive distribution p(f^*|D) follows by straightforward convolution of two Gaussians: p(f^*|D) = ∫ p(f^*|f) p(f|D) df. One of the criticisms
of Gaussian processes, the fact that they are slow with large data sets, is not a problem for us, since
active learning is designed explicitly to minimize the number of training data.
2.3 The Expected Improvement Function
Now that we are armed with an expression for the predictive distribution, we can use it to decide
what the next query should be. In loose terms, the predictive distribution will enable us to balance the
tradeoff of exploiting and exploring. When exploring, we should choose points where the predicted
variance is large. When exploiting, we should choose points where the predicted mean is large (high
valuation).
Let x^* be an arbitrary new instance. Its predictive distribution p(f^*(x^*)|D) has sufficient statistics {μ(x^*) = k^{*T} K^{-1} f_MAP, s^2(x^*) = k^{**} − k^{*T} (K + C_MAP^{-1})^{-1} k^*}, where k^{*T} = [k(x^*, x_1) · · · k(x^*, x_N)] and k^{**} = k(x^*, x^*). Also, let μ_max denote the highest estimate of the predictive distribution thus far. That is, μ_max is the highest valuation for the data provided by the individual.
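These statistics are straightforward to compute; a NumPy sketch follows, in which the inverse Hessian term C_MAP^{-1} from the Laplace approximation is taken as a given input (an assumption of this sketch), and `kernel` is any positive definite kernel function k(x, x').

```python
import numpy as np

def predictive_stats(x_star, X, f_map, K, C_map_inv, kernel):
    # mu(x*) and s^2(x*) from the sufficient statistics above.
    k_star = np.array([kernel(x_star, x) for x in X])   # k*^T
    k_ss = kernel(x_star, x_star)                       # k**
    mu = k_star @ np.linalg.solve(K, f_map)             # k*^T K^{-1} f_MAP
    A = K + C_map_inv
    s2 = k_ss - k_star @ np.linalg.solve(A, k_star)     # k** - k*^T A^{-1} k*
    return mu, s2
```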
Figure 2: The 2D test function (left), and the estimate of the function based on the results of a typical run of 12
preference queries (right). The true function has eight local maxima and one global maximum. The predictor identifies the
region of the global maximum correctly and that of the local maxima less well, but requires far fewer queries
than learning the entire function.
The probability of improvement at a point x^* is simply given by a tail probability:
p(f^*(x^*) ≥ μ_max) = Φ((μ(x^*) − μ_max) / s(x^*)),
where f^*(x^*) ~ N(μ(x^*), s^2(x^*)). This statistical measure of improvement has been widely used in the field of experimental design and goes back many decades [Kushner, 1964]. However, it is known to be sensitive to the value of μ_max. To overcome this problem, [Jones et al., 1998] defined the improvement over the current best point as I(x^*) = max{0, μ(x^*) − μ_max}, which resulted in an expected improvement of
EI(x^*) = (μ(x^*) − μ_max) Φ(d) + s(x^*) φ(d)   if s > 0,
EI(x^*) = 0   if s = 0,
where d = (μ(x^*) − μ_max) / s(x^*).
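In code, with φ and Φ the standard Normal pdf and cdf, this becomes (a sketch for the maximisation convention above):

```python
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, s, mu_max):
    # EI(x*) for maximisation: mu and s are the predictive mean and standard
    # deviation at x*, mu_max the incumbent best estimate.
    if s <= 0.0:
        return 0.0
    d = (mu - mu_max) / s
    return (mu - mu_max) * norm_cdf(d) + s * norm_pdf(d)
```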
To find the point at which to sample, we still need to maximize the constrained objective EI(x^*) over x^*. Unlike the original unknown cost function, EI(·) can be cheaply sampled. Furthermore,
for the purposes of our application, it is not necessary to guarantee that we find the global maximum,
merely that we can quickly locate a point that is likely to be as good as possible. The original EGO
work used a branch-and-bound algorithm, but we found it was very difficult to get good bounds
over large regions. Instead we use DIRECT [Jones et al., 1993], a fast, approximate, derivative-free optimization algorithm, though we conjecture that for larger dimensional spaces, sequential
quadratic programming with interior point methods might be a better alternative.
3 Experiments
The goal of our algorithm is to find a good approximation of the maximum of a latent function using
preference queries. In order to measure our method?s effectiveness in achieving this goal, we create
a function f for which the optimum is known. At each time step, a query is generated in which
two points x_1 and x_2 are adaptively selected, and the preference is found, where f(x_1) > f(x_2) ⇒ x_1 ≻ x_2. After each preference, we measure the error, defined as ε = f_max − f(argmax_x f^*(x)), that is, the difference between the true maximum of f and the value of f at the point predicted to be
the maximum. Note that by design, this does not penalize the algorithm for drawing samples from
X that are far from argmaxx , or for predicting a latent function that differs from the true function.
We are not trying to learn the entire valuation function, which would take many more queries ? we
seek only to maximize the valuation, which involves accurate modelling only in the areas of high
valuation.
We measured the performance of our method on three functions ? 2D, 4D and 6D. By way of demonstration, Figure 2 shows the actual 2D functions and the typical prediction after several queries. The
test functions are defined as:
f_2d = max{0, sin(x_1) + x_1/3 + sin(12x_1) + sin(x_2) + x_2/3 + sin(12x_2) − 1},
f_{4d,6d} = Σ_{i=1}^{d} [sin(x_i) + x_i/3 + sin(12x_i)],
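A direct transcription of the test functions (a sketch; x is a NumPy array with entries in [0, 1]):

```python
import numpy as np

def f_2d(x):
    # 2D test function; x has shape (2,).
    g = lambda t: np.sin(t) + t / 3.0 + np.sin(12.0 * t)
    return max(0.0, g(x[0]) + g(x[1]) - 1.0)

def f_nd(x):
    # 4D/6D test function: the same 1D landscape summed over coordinates.
    x = np.asarray(x, dtype=float)
    return float(np.sum(np.sin(x) + x / 3.0 + np.sin(12.0 * x)))
```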
[Figure 3 plot area: panels labeled 2D function, 4D function and 6D function; horizontal axis: preference queries]
Figure 3: The evolution of error for the estimate of the optimum on the test functions. The plot shows the error
evolution against the number of queries. The solid line is our method; the dashed is a baseline comparison
in which each query point is selected randomly. The performance is averaged over 20 runs, with the error bars
showing the variance of ε.
all defined over the range [0, 1]^d. We selected these equations because they seem both general and
difficult enough that we can safely assume that if our method works well on them, it should work on a
large class of real-world problems: they have multiple local minima to get trapped in and varying
landscapes and dimensionality. Unfortunately, there has been little work in the psychoperception
literature to indicate what a good test function would be for our problem, so we have had to rely to
an extent on our intuition to develop suitable test cases.
The results of the experiments are shown in Figure 3. In all cases, we simulate 50 queries using our
method (here called maxEI ). As a baseline, we compare against 50 queries using the maximum
variance of the model (maxs ), which is a common criterion in active learning for regression [Seo
et al., 2000; Chu and Ghahramani, 2005a]. We repeated each experiment 20 times and measured
the mean and variance of the error evolution. We find that it takes far fewer queries to find a good
result using maxEI in all cases. In the 2D case, for example, after 20 queries, maxEI already
has better average performance than maxs achieves after 50, and in both the 2D and 4D scenarios,
maxEI steadily improves until it finds the optima, while maxs soon reaches a plateau, improving only
slightly, if at all, while it tries to improve the global fit to the latent function. In the 6D scenario,
neither algorithm succeeds well in finding the optimum, though maxEI clearly comes closer. We
believe the problem is that in six dimensions, the space is too large to adequately explore with so few
queries, and variance remains quite high throughout the space. We feels that requiring more than 50
user queries in a real application would be unacceptable, so we are instead currently investigating
extensions that will allow the user to direct the search in higher dimensions.
4 Preference Gallery for Material Design
Properly modeling the appearance of a material is a necessary component of realistic image synthesis. The appearance of a material is formalized by the notion of the Bidirectional Reflectance
Distribution Function (BRDF). In computer graphics, BRDFs are most often specified using various analytical models observing the physical laws of reciprocity and energy conservation while also
exhibiting shadowing, masking and Fresnel reflectance phenomena. Realistic models are therefore
fairly complex with many parameters that need to be adjusted by the designer. Unfortunately these
parameters can interact in non-intuitive ways, and small adjustments to certain settings may result
in non-uniform changes in appearance. This can make the material design process quite difficult for
the end user, who cannot be expected to be an expert in the field of appearance modeling.
Our application is a solution to this problem, using a "preference gallery" approach, in which users
are simply required to view two or more images rendered with different material properties and
indicate which ones they prefer. To maximize the valuation, we use an implementation of the model
described in Section 2. In practice, the first few examples will be points of high variance, since little
of the space is explored (that is, the model of user valuation is very uncertain). Later samples tend
to be in regions of high valuation, as a model of the user?s interest is learned.
We use our active preference learning model on an example gallery application for helping users
find a desired BRDF. For the purposes of this example, we limit ourselves to isotropic materials and
ignore wavelength dependent effects in reflection. The gallery uses the Ashikhmin-Shirley Phong
Table 1: Results of the user study

algorithm          trials   n (mean ± std)
latin hypercubes   50       18.40 ± 7.87
maxs               50       17.87 ± 8.60
maxEI              50       8.56 ± 5.23
model [Ashikhmin and Shirley, 2000] for the BRDFs which was recently validated to be well suited
for representing real materials [Ngan et al., 2005]. The BRDFs are rendered on a sphere under high
frequency natural illumination as this has been shown to be the desired setting for human perception
of reflectance [Fleming et al., 2001]. Our gallery demonstration presents the user with two BRDF
images at a time. We start with four predetermined queries to "seed" the parameter space, and after
that use the learned model to select gallery images. The GP model is updated after each preference
is indicated. We use parameters of real measured materials from the MERL database [Ngan et al.,
2005] for seeding the parameter space, but can draw arbitrary parameters after that.
4.1 User Study
To evaluate the performance of our application, we have run a simple user study in which the generated images are restricted to a subset of 38 materials from the MERL database that we deemed to
be representative of the appearance space of the measured materials. The user is given the task of
finding a single randomly-selected image from that set by indicating preferences. Figure 4 shows a
typical user run, where we ask the user to use the preference gallery to find a provided target image.
At each step, the user need only indicate the image they think looks most like the target. This would,
of course, be an unrealistic scenario if we were to be evaluating the application from an HCI stance,
but here we limit our attention to the model, as we are interested in demonstrating that with
human users maximizing valuation is preferable to learning the entire latent function.
Using five subjects, we compared 50 trials using the EIF to select the images for the gallery (maxEI ),
50 trials using maximum variance (maxs , the same criterion as in the experiments of Section 3), and
50 trials using samples selected using a randomized Latin hypercube algorithm. In each case, one of
the gallery images was the image with the highest predicted valuation and the other was selected by
the algorithm. The algorithm type for each trial was randomly selected by the computer and neither
the experimenter nor the subjects knew which of the three algorithms was selecting the images. The
results are shown in Table 1. n is the number clicks required of the user to find the target image.
Clearly maxEI dominates, with a mean n less than half that of the competing algorithms. Interestingly, selecting images using maximum variance does not perform much better than random. We
suspect that this is because maxs has a tendency to select images from the corners of the parameter space, which adds limited information to the other images, whereas Latin hypercubes at least
guarantees that the selected images fill the space.
Active learning is clearly a powerful tool for situations where human input is required for learning.
With this paper, we have shown that understanding the task, and exploiting the quirks of human cognition, is also essential if we are to deploy real-world active learning applications. As people
machine learning systems that can collaborate with users and take on the tedious parts of users?
cognitive burden has the potential to dramatically affect many creative fields, from business to the
arts to science.
References
[Ashikhmin and Shirley, 2000] M. Ashikhmin and P. Shirley. An anisotropic phong BRDF model. J. Graph.
Tools, 5(2):25–32, 2000.
[Chu and Ghahramani, 2005a] W. Chu and Z. Ghahramani. Extensions of Gaussian processes for ranking:
semi-supervised and active learning. In Learning to Rank workshop at NIPS-18, 2005.
[Chu and Ghahramani, 2005b] W. Chu and Z. Ghahramani. Preference learning with Gaussian processes. In
ICML, 2005.
[Cohn et al., 1996] D. A. Cohn, Z. Ghahramani, and M. I. Jordan. Active learning with statistical models.
Journal of Artificial Intelligence Research, 4:129–145, 1996.
Figure 4: A shorter-than-average but otherwise typical run of the preference gallery tool. At each (numbered)
iteration, the user is provided with two images generated with parameter instances and indicates the one they
think most resembles the target image (top-left) they are looking for. The boxed images are the user's selections
at each iteration.
[Cooper et al., 2007] S. Cooper, A. Hertzmann, and Z. Popović. Active learning for motion controllers. In
SIGGRAPH, 2007.
[Élő, 1978] A. Élő. The Rating of Chess Players: Past and Present. Arco Publishing, New York, 1978.
[Fleming et al., 2001] R. Fleming, R. Dror, and E. Adelson. How do humans determine reflectance properties
under unknown illumination? In CVPR Workshop on Identifying Objects Across Variations in Lighting,
2001.
[Glickman and Jensen, 2005] M. E. Glickman and S. T. Jensen. Adaptive paired comparison design. Journal
of Statistical Planning and Inference, 127:279–293, 2005.
[Guestrin et al., 2005] C. Guestrin, A. Krause, and A. P. Singh. Near-optimal sensor placements in Gaussian
processes. In Proceedings of the 22nd International Conference on Machine Learning (ICML-05), 2005.
[Jones et al., 1993] D. R. Jones, C. D. Perttunen, and B. E. Stuckman. Lipschitzian optimization without the
Lipschitz constant. J. Optimization Theory and Apps, 79(1):157–181, 1993.
[Jones et al., 1998] D. R. Jones, M. Schonlau, and W. J. Welch. Efficient global optimization of expensive
black-box functions. J. Global Optimization, 13(4):455–492, 1998.
[Kendall, 1975] M. Kendall. Rank Correlation Methods. Griffin Ltd, 1975.
[Kingsley, 2006] D. C. Kingsley. Preference uncertainty, preference refinement and paired comparison choice
experiments. Dept. of Economics, University of Colorado, 2006.
[Kushner, 1964] H. J. Kushner. A new method of locating the maximum of an arbitrary multipeak curve in the
presence of noise. Journal of Basic Engineering, 86:97–106, 1964.
[Marks et al., 1997] J. Marks, B. Andalman, P. A. Beardsley, W. Freeman, S. Gibson, J. Hodgins, T. Kang,
B. Mirtich, H. Pfister, W. Ruml, K. Ryall, J. Seims, and S. Shieber. Design galleries: A general approach to
setting parameters for computer graphics and animation. Computer Graphics, 31, 1997.
[McFadden, 2001] D. McFadden. Economic choices. The American Economic Review, 91:351–378, 2001.
[Mosteller, 1951] F. Mosteller. Remarks on the method of paired comparisons: I. the least squares solution
assuming equal standard deviations and equal correlations. Psychometrika, 16:3–9, 1951.
[Ngan et al., 2005] A. Ngan, F. Durand, and W. Matusik. Experimental analysis of BRDF models. In Proceedings of the Eurographics Symposium on Rendering, pages 117–226, 2005.
[Ngan et al., 2006] A. Ngan, F. Durand, and W. Matusik. Image-driven navigation of analytical BRDF models.
In T. Akenine-Möller and W. Heidrich, editors, Eurographics Symposium on Rendering, 2006.
[Sasena, 2002] M. J. Sasena. Flexibility and Efficiency Enhancement for Constrained Global Design Optimization with Kriging Approximations. PhD thesis, University of Michigan, 2002.
[Seo et al., 2000] S. Seo, M. Wallat, T. Graepel, and K. Obermayer. Gaussian process regression: active data
selection and test point rejection. In Proceedings of IJCNN 2000, 2000.
[Siegel and Castellan, 1988] S. Siegel and N. J. Castellan. Nonparametric Statistics for the Behavioral Sciences. McGraw-Hill, 1988.
[Stern, 1990] H. Stern. A continuum of paired comparison models. Biometrika, 77:265–273, 1990.
[Thurstone, 1927] L. Thurstone. A law of comparative judgement. Psychological Review, 34:273–286, 1927.
[Tong and Koller, 2000] S. Tong and D. Koller. Support vector machine active learning with applications to
text classification. In Proc. ICML-00, 2000.
INTERACTION AMONG OCULARITY,
RETINOTOPY AND ON-CENTER/OFFCENTER PATHWAYS DURING
DEVELOPMENT
Shigeru Tanaka
Fundamental Research Laboratories, NEC Corporation,
34 Miyukigaoka, Tsukuba, Ibaraki 305, Japan
ABSTRACT
The development of projections from the retinas to the cortex is
mathematically analyzed according to the previously proposed
thermodynamic formulation of the self-organization of neural networks.
Three types of submodality included in the visual afferent pathways are
assumed in two models: model (A), in which the ocularity and retinotopy
are considered separately, and model (B), in which on-center/off-center
pathways are considered in addition to ocularity and retinotopy. Model (A)
shows striped ocular dominance spatial patterns and, in ocular dominance
histograms, reveals a dip in the binocular bin. Model (B) displays
spatially modulated irregular patterns and shows single-peak behavior in
the histograms. When we compare the simulated results with the observed
results, it is evident that the ocular dominance spatial patterns and
histograms for models (A) and (B) agree very closely with those seen in
monkeys and cats.
1 INTRODUCTION
A recent experimental study has revealed that spatial patterns of ocular dominance columns
(ODCs) observed by autoradiography and profiles of the ocular dominance histogram
(ODH) obtained by electrophysiological experiments differ greatly between monkeys and
cats. ODCs for cats in the tangential section appear as beaded patterns with an irregularly fluctuating bandwidth (Anderson, Olavarria and Van Sluyters, 1988); ODCs for monkeys are
likely to be straight parallel stripes (Hubel, Wiesel and LeVay, 1977). The typical ODH for
cats has a single peak in the middle of the ocular dominance corresponding to balanced
response in ocularity (Wiesel and Hubel, 1974). In contrast to this, the ODH for monkeys
has a dip in the middle of the ocular dominance (Hubel and Wiesel, 1963). Furthermore,
neurons in the input layer of the cat's primary visual cortex exhibit orientation selectivity,
while those of the monkey do not.
Through these comparisons, we can observe distinct differences in the anatomical and
physiological properties of neural projections from the retinas to the visual cortex in
monkeys and cats. To obtain a better understanding of these differences, theoretical analyses
of interactions among ocularity, retinotopy and on-center/off-center pathways during visual
cortical development were performed with computer simulation based on the previously
proposed thermodynamic formulation of the self-organization of neural networks (Tanaka,
1990).
Two models for the development of the visual afferent pathways are assumed: model (A), in
which the development of ocular dominance and retinotopic order is taken into account, and
model (B), in which the development of on-center/off-center pathway terminals is
considered in addition to ocular dominance and retinotopic order.
2 MODEL DESCRIPTION
The synaptic connection density of afferent fibers from the lateral geniculate nucleus (LGN) in a local equilibrium state is represented by the Potts spin variables σ_{j,l,μ} because of their strong winner-take-all process (Tanaka, 1990). The following function P_eq({σ_{j,l,μ}}) gives the distribution of the Potts spins in equilibrium:
P_eq({σ_{j,l,μ}}) = (1/Z) exp(−H({σ_{j,l,μ}}) / T)   (1)

with Z = Σ_{{σ_{j,l,μ} = 1, 0}} exp(−H({σ_{j,l,μ}}) / T).   (2)
The Hamiltonian H in the argument of the exponential function in (1) and (2) determines
the behavior of this spin system at the effective temperature T, where H is given by
(3)
Function V^VC_{j,j'} represents the interaction between synapses at positions j and j' in layer 4 of the primary visual cortex; function Γ^{μ,μ'}_{k,k'} represents the correlation in activity between LGN neurons at positions k and k' of cell types μ and μ'. The set H_j represents a group of LGN neurons which can project their axons to the position j in the visual cortex; therefore, the magnitude of this set is related to the extent of afferent terminal arborization in the cortex, λ_A.
Taking the above formulation into consideration, we have only to discuss the
thermodynamics in the Potts spin system described by the Hamiltonian H at the
temperature T in order to discuss the activity-dependent self-organization of afferent neural
connections during development.
Next, let us discuss more specific descriptions of the modeling of the visual afferent pathways. We will assume that the LGN serves only as a relay nucleus and that the signal is transferred from the retina to the cortex as if they were directly connected. Therefore, the correlation function Γ^{μ,μ'}_{k,k'} can be treated as that in the retinas, Γ^{R;μ,μ'}_{k,k'}. This function is given by using the lateral interaction function in the retina V^R_{k,k'} and the correlation function
of stimuli to RGCs, G^{μ,μ'}_{k,k'}, in the following:
(4)
For simplicity, the stimuli are treated as white noise:
G^{μ,μ'}_{k,k'} = K^{μ,μ'} δ_{k,k'}.   (5)
Now. we can obtain two models for the formation of afferent synaptic connections between
the retinas and the primary visual cortex: model (A), in which ocularity and retinotopy are taken into account:
μ ∈ {left, right},

K = [ 1    r_1
      r_1  1  ],   (6)

where r_1 (0 ≤ r_1 ≤ 1) is the correlation of activity between the left and right retinas; and
model (B), in which on-center/off-center pathways are added to model (A):
μ ∈ {(left, on-center), (left, off-center), (right, on-center), (right, off-center)},

K = [ 1        r_2      r_1      r_1 r_2
      r_2      1        r_1 r_2  r_1
      r_1      r_1 r_2  1        r_2
      r_1 r_2  r_1      r_2      1 ],   (7)

where r_2 (−1 ≤ r_2 ≤ 1) is the correlation of activity between the on-center and off-center RGCs in the same retina when there is no correlation between different retinas. A negative value of r_2 means out-of-phase firings between on-center and off-center neurons.
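A small NumPy sketch of these correlation matrices follows; the factorization r_1 r_2 for the cross-eye, cross-polarity entries of eq. (7) is an assumption of this reconstruction, since those entries are not fully legible in the source.

```python
import numpy as np

def correlation_matrix_A(r1):
    # Model (A): cell types (left, right); eq. (6).
    return np.array([[1.0, r1],
                     [r1, 1.0]])

def correlation_matrix_B(r1, r2):
    # Model (B): cell types ordered (left,on), (left,off), (right,on),
    # (right,off). Cross-eye same-polarity entries are r1, within-eye
    # on/off entries are r2; the cross-eye cross-polarity entries are
    # assumed to factorize as r1 * r2.
    c = r1 * r2
    return np.array([[1.0, r2,  r1,  c ],
                     [r2,  1.0, c,   r1],
                     [r1,  c,   1.0, r2],
                     [c,   r1,  r2,  1.0]])

# Simulation values from Section 3: r1 = 0, r2 = -0.2.
K_B = correlation_matrix_B(0.0, -0.2)
```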
3 COMPUTER SIMULATION
Computer simulations were carried out according to the Metropolis algorithm (Metropolis,
1953; Tanaka, 1991). A square panel consisting of 80×80 grids was assumed to be the
input layer of the primary visual cortex, where the length of one grid is denoted by a. The
Potts spin is assigned to each grid. Free boundary conditions were adopted on the border of
the panel. One square panel of 20×20 grids was assumed to be a retina for each submodality
μ. The length of one grid is given as 4a so that the edges of the square model cortex and
model retinas are of the same length.
The following form was adopted for the interactions V^{v}_{k,k'} (v = VC or R):

V^{v}_{k,k'} = \frac{q^{v}_{ex}}{2π λ^{2}_{v,ex}} \exp\left( -\frac{d^{2}_{k,k'}}{2 λ^{2}_{v,ex}} \right) - \frac{q^{v}_{inh}}{2π λ^{2}_{v,inh}} \exp\left( -\frac{d^{2}_{k,k'}}{2 λ^{2}_{v,inh}} \right),   (8)

where d_{k,k'} is the distance between positions k and k'.
All results reported in this paper were obtained with parameters whose values are as
follows: q^{VC}_{ex} = 1.0, q^{VC}_{inh} = 5.0, λ^{VC}_{ex} = 0.15, λ^{VC}_{inh} = 1.0, q^{R}_{ex} = 1, λ^{R}_{ex} = 0.5,
λ^{R}_{inh} = 1.0, λ_A = 1.6, a = 0.1, T = 0.001, η = 0, and r_2 = -0.2. It is assumed that q^{R}_{inh} = 0
for model (A) while q^{R}_{inh} = 0.5 for model (B). By considering that the receptive field (RF)
of an RGC at position k is represented by V^{R}_{k,k'}, RGCs for models (A) and (B) have low-pass and high-pass filtering properties, respectively. Monte Carlo simulation for model (A)
was carried out for 200,000 steps; that for model (B) was done for 760,000 steps.
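The Metropolis updates can be sketched in a few lines. The following Python fragment is a minimal illustration only: the true Hamiltonian of Eq. (3), which couples spins through the cortical interaction V and the correlation Γ, is not reproduced here, so a simple nearest-neighbor stand-in energy is used, and all names and values other than the grid size, temperature and step count are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    N, n_states, T = 80, 3, 0.001      # 80x80 sheet, Potts states, temperature
    sigma = rng.integers(0, n_states, size=(N, N))

    def local_energy(s, i, j):
        # Stand-in energy: -1 for each of the four nearest neighbors in the
        # same Potts state (the true H is the one of Eq. (3)).
        e = 0.0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < N and 0 <= nj < N:  # free boundary conditions
                e -= float(s[ni, nj] == s[i, j])
        return e

    def metropolis_step(s):
        i, j = rng.integers(0, N, size=2)
        old = s[i, j]
        e_old = local_energy(s, i, j)
        s[i, j] = rng.integers(0, n_states)  # propose a new spin state
        dE = local_energy(s, i, j) - e_old
        if dE > 0 and rng.random() >= np.exp(-dE / T):
            s[i, j] = old                    # reject; otherwise keep the move

    for _ in range(200000):                  # cf. the step counts quoted above
        metropolis_step(sigma)

At T = 0.001 the acceptance of energy-increasing moves is essentially zero, so the dynamics is close to a greedy descent toward a local equilibrium state.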
Fig. 1 Simulated results of synaptic terminal and neuronal distributions and ocular
dominance histograms for models (A) and (B).
4 RESULTS AND DISCUSSIONS
The distributions of synaptic terminals and neurons, and ocular dominance histograms are
shown in Fig. 1, where (a), (b) and (c) were obtained from model (A); (d), (e), (f) and (g)
were obtained from model (B). The spatial distribution of synaptic terminals originating
from the left or right retina (Figs. 1a and 1d) is a counterpart of an autoradiograph obtained
by eye-injection of radiolabeled amino acid. The bandwidth of the simulated pattern
(Fig. 1a) is almost constant, as is the observed bandwidth for monkeys (Hubel and
Wiesel, 1974). The distribution of ocularity in synaptic terminals shown in Fig. 1d is
irregular in that the periodicity seen in Fig. 1a disappears, even though a patchy pattern can
still be seen. This pattern is quite similar to the ODC for cats (Anderson, Olavarria and Van
Sluyters, 1988).
By calculating the convolution of the synaptic connections σ_{j,l,μ} with the cortical
interaction function V^{C}_{j,j'}, the ocular dominance in response of cortical cells to monocular
stimulation and the spatial pattern of the ocular dominance in activity (Figs. 1b and 1e)
were obtained. Neurons specifically responding to stimuli presented in the right and left
eyes are, respectively, in the black and white domains. This pattern is a counterpart of an
electrophysiological pattern of the ODC. The distributions of ocularity in synaptic
terminals correspond to those of ocular dominance in neuronal response to monocular
stimulation (a to b; d to e in Fig. 1). This suggests that the borders of the autoradiographic
ODC pattern coincide with those of the electrophysiological ODC pattern. This
correspondence is not trivial, since strong lateral inhibition is exerted in the cortex.
Reflecting the narrow transition areas between monocular domains in Fig. 1b, a dip appears
in the binocular bin in the corresponding ODH (Fig. 1c). In contrast, the profile of the
ODH (Fig. 1f) has a single peak in the binocular bin, since binocularly responsive neurons
are distributed over the cortex (Fig. 1e).
In model (B), on-center and off-center terminals are also segregated in the cortex,
superposed on the ODC pattern (Fig. 1g). No correlation can be seen between the spatial
distribution of on-center/off-center terminals and the ODC pattern (Fig. 1d).
Fig. 2 A visual stimulation pattern (a) and the distributions of active synaptic
terminals in the cortex [(b) for model (A) and (c) for model (B)].
Figures 2b and 2c visualize spatial patterns of active synaptic terminals in the cortex for
model (A) and model (B), when the light stimulus shown in Fig. 2a is presented to both
retinas. A pattern similar to the stimulus appears in the cortex for model (A) (Fig. 2b).
This supports the observation that retinotopic order is almost achieved. In other
simulations for model (A), the retinotopic order in the final pattern was likely to be
achieved when initial patterns were roughly ordered in retinotopy. In model (B), the
retinotopic order seems to be broken (Fig. 2c), at least at this system size, even though the
initial pattern has a well-ordered retinotopy. There is a tendency for retinotopy to be
harder to preserve in model (B) than in model (A).
Fig. 3 Representative receptive fields obtained from simulations.
Model (A) reproduced only concentric RFs for both eyes. The dominant RFs of monocular
neurons were of the on-center/off-surround type (right in Fig. 3a); the other RFs of the
same neurons were of the low-pass-filter type, which has only the off response (left
in Fig. 3a). In model (B), RFs of cortical neurons generally had complex structures (Fig.
3b). It can barely be recognized that the dominant RFs of monocular neurons showed
simple-cell-like RFs.
To determine why model (B) produced complex structures in RFs, another simulation of
RF formation was carried out based on a model where retinotopy and on-center/off-center
pathways are considered. Various types of RFs emerged in the cortex (bottom row in Fig.
3). The difference in structures between Figs. 3c and 3e shows the difference in the
orientation and the phase (the deviation of the on region from the RF center) of the simple-cell-like RFs. Fig. 3d shows an on-center concentric RF. Such nonoriented RFs were
likely to appear in the vicinity of the singular points around which the orientation rotates
by 180 degrees.
Simulations for model (A) with different values of parameters such as q^{VC}_{inh}, λ_A and q^{R}_{inh}
were also carried out, although the results are not visualized here. When q^{VC}_{inh} takes a small
value, the ODC bandwidth fluctuates. However large the fluctuation may be, the left-eye
or right-eye dominant domains are well connected, and the pattern does not become an
irregular beaded pattern as seen in the cat ODC. When afferent axonal arbors were widely
spread in the cortex (λ_A ≫ 1), segregated ODC stripe patterns had only small fluctuations in
the bandwidth. q^{R}_{inh} = 0 corresponds to a monotonically decreasing function V^{R}_{k,k'} with
respect to the radial distance d_{k,k'}. When q^{R}_{inh} was increased from zero, the number of
monocular neurons decreased. Therefore, the profile of the ODH changes from that in
Fig. 1c.
In model (B), as the value of r_2 became smaller, on-center and off-center terminals were
more sharply segregated, and the average size of the ODC patches became smaller. The
segregation of on-center and off-center terminals seems to interfere strongly with the
development of the ODC and the retinotopic organization. This may be attributed to the
competition between ocularity and on-center/off-center pathways. We have seen that only
concentric or simple-cell-like RFs can be obtained unless both the ocularity and
the on-center/off-center pathways are taken into account in simulations. However, in model
(B), in which the two types of submodality are treated, neurons have complex separated RF
structures (Fig. 3b). This also seems to be due to the competition among the ocularity and
the on-center/off-center pathways. The simulation of model (B) was performed with no
correlation in activity between the left and right eyes (η = 0). This condition can be realized for
binocularly deprived kittens (Tanaka, 1989). By considering this, we may conclude that the
formation of normal RFs needs cooperative binocular input.
In this research, we did not consider the effect of color-related cell types on ODC formation.
Actually, there are varieties of single-opponent cells in the retina and LGN of monkeys,
such as four types of red-green opponent cells: a red on-center cell with a green inhibitory
surround; a green on-center cell with a red inhibitory surround; a red off-center cell with a
green excitatory surround; and a green off-center cell with a red excitatory surround. The
correlation of activity between red on-center and green on-center cells, or between green
off-center and red off-center cells, may be positive in view of the fact that the spectral
response functions of the three photoreceptors overlap on the wavelength axis. However,
the red on-center and green on-center cells antagonize the red off-center and green off-center cells,
respectively. Therefore, the former two and latter two can be looked upon as the on-center
and off-center cells seen in the retina of cats. This implies that the model for monkeys
should be model (B); thereby, the ODC pattern for monkeys should be an irregular beaded
pattern, despite the fact that the ODC and ODH in model (A) resemble those for monkeys.
To avoid this contradiction, the on-center and off-center cells must separately send their
axons into different sublayers within layer 4Cβ, as seen in the visual cortex of the tree shrew
(Fitzpatrick and Raczkowski, 1990).
5 CONCLUSION
In model (A), the ODC showed the striped pattern and the ODH revealed a dip in the
binocular bin. In contrast to this, model (B) reproduced spatially modulated irregular ODC
patterns and the single-peak behavior of the ODH. From comparison of these simulated
results with experimental observations, it is evident that the ODCs and ODHs for models
(A) and (B) agree very closely with those seen in monkeys and cats, respectively. Therefore,
this leads to the conclusion that model (A) describes the development of the afferent fiber
terminals of the primary visual cortex of monkeys, while model (B) describes that of the
cat. In fact, the assumption of a negative correlation (r_2 < 0) between the on-center and
off-center pathways in model (B) is consistent with the experiments on correlated activity
between on-center and off-center RGCs in cats (Mastronarde, 1989).
Finally, we predict the following with regard to afferent projections for cats and monkeys.
[1] In the input layer of the visual cortex of cats, on-center/off-center pathway terminals are
segregated into patches, superposed on the ocular dominance patterns.
[2] In monkeys, the axons from on-center and off-center cells in the LGN terminate in different
sublayers of layer 4Cβ of the primary visual cortex.
Acknowledgment
The author thanks Mr. Miyashita for his help in performing computer simulations of
receptive field formation.
References
P.A. Anderson, J. Olavarria & R.C. Van Sluyters. (1988) The overall pattern of ocular dominance bands in the cat visual cortex. J. Neurosci., 8: 2183-2200.
D.H. Hubel, T.N. Wiesel and S. LeVay. (1977) Plasticity of ocular dominance columns in monkey striate cortex. Philos. Trans. R. Soc. Lond., B278: 377-409.
T.N. Wiesel and D.H. Hubel. (1974) Ordered arrangement of orientation columns in monkeys lacking visual experience. J. Comp. Neurol., 158: 307-318.
D.H. Hubel and T.N. Wiesel. (1963) Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. J. Physiol., 160: 106-154.
S. Tanaka. (1990) Theory of self-organization of cortical maps: Mathematical framework. Neural Networks, 3: 625-640.
N. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller and E. Teller. (1953) Equation of state calculations by fast computing machines. J. Chem. Phys., 21: 1087-1092.
S. Tanaka. (1991) Theory of ocular dominance column formation: Mathematical basis and computer simulation. Biol. Cybern., in press.
S. Tanaka. (1989) Theory of self-organization of cortical maps. In D.S. Touretzky (ed.), Advances in Neural Information Processing Systems 1, 451-458. San Mateo, CA: Morgan Kaufmann.
D. Fitzpatrick and D. Raczkowski. (1990) Innervation patterns of single physiologically identified geniculocortical axons in the striate cortex of the tree shrew. Proc. Natl. Acad. Sci. USA, 87: 449-453.
D.N. Mastronarde. (1989) Correlated firing of retinal ganglion cells. Trends in Neurosci., 12: 75-80.
2,448 | 3,220 | Receptive Fields without Spike-Triggering
Jakob H. Macke
jakob@tuebingen.mpg.de
Max Planck Institute for Biological Cybernetics
Spemannstrasse 41
72076 Tübingen, Germany

Günther Zeck
zeck@neuro.mpg.de
Max Planck Institute of Neurobiology
Am Klopferspitze 18
82152 Martinsried, Germany

Matthias Bethge
mbethge@tuebingen.mpg.de
Max Planck Institute for Biological Cybernetics
Spemannstrasse 41
72076 Tübingen, Germany
Abstract
Stimulus selectivity of sensory neurons is often characterized by estimating their
receptive field properties such as orientation selectivity. Receptive fields are usually derived from the mean (or covariance) of the spike-triggered stimulus ensemble. This approach treats each spike as an independent message but does not take
into account that information might be conveyed through patterns of neural activity that are distributed across space or time. Can we find a concise description for
the processing of a whole population of neurons analogous to the receptive field
for single neurons? Here, we present a generalization of the linear receptive field
which is not bound to be triggered on individual spikes but can be meaningfully
linked to distributed response patterns. More precisely, we seek to identify those
stimulus features and the corresponding patterns of neural activity that are most
reliably coupled. We use an extension of reverse-correlation methods based on
canonical correlation analysis. The resulting population receptive fields span the
subspace of stimuli that is most informative about the population response. We
evaluate our approach using both neuronal models and multi-electrode recordings
from rabbit retinal ganglion cells. We show how the model can be extended to
capture nonlinear stimulus-response relationships using kernel canonical correlation analysis, which makes it possible to test different coding mechanisms. Our
technique can also be used to calculate receptive fields from multi-dimensional
neural measurements such as those obtained from dynamic imaging methods.
1 Introduction
Visual input to the retina consists of complex light intensity patterns. The interpretation of these
patterns constitutes a challenging problem: for computational tasks like object recognition, it is not
clear what information about the image should be extracted and in which format it should be represented. Similarly, it is difficult to assess what information is conveyed by the multitude of neurons
in the visual pathway. Right from the first synapse, the information of an individual photoreceptor
is signaled to many different cells with different temporal filtering properties, each of which is only
a small unit within a complex neural network [20]. Even if we leave the difficulties imposed by
nonlinearities and feedback aside, it is hard to judge what the contribution of any particular neuron
is to the information transmitted.
The prevalent tool for characterizing the behavior of sensory neurons, the spike-triggered average,
is based on a quasi-linear model of neural responses [15]. For the sake of clarity, we consider an
idealized model of the signaling channel

y = Wx + ε,   (1)

where y = (y_1, ..., y_N)^T denotes the vector of neural responses, x the stimulus parameters, W = (w_1, ..., w_N)^T the filter matrix whose k-th row contains the receptive field w_k of neuron k, and ε
is the noise. The spike-triggered average only allows description of the stimulus-response function
(i.e. the w_k) of one single neuron at a time. In order to understand the collective behavior of a
neuronal population, we rather have to understand the behavior of the matrix W and the structure
of the noise correlations Σ_ε: both of them influence the feature selectivity of the population.
Can we find a compact description of the features that a neural ensemble is most sensitive to? In
the case of a single cell, the receptive field provides such a description: it can be interpreted as the
"favorite stimulus" of the neuron, in the sense that the more similar an input is to the receptive field,
the higher is the spiking probability, and thus the firing rate of the neuron. In addition, the receptive
field can easily be estimated using a spike-triggered average, which, under certain assumptions,
yields the optimal estimate of the receptive field in a linear-nonlinear cascade model [11].

If we are considering an ensemble of neurons rather than a single neuron, it is not obvious what to
trigger on: this requires assumptions about what patterns of spikes or modulations in firing rates
across the population carry information about the stimulus. Rather than addressing the question
"what features of the stimulus are correlated with the occurrence of spikes", the question now is:
"what stimulus features are correlated with what patterns of spiking activity?" [14]. Phrased in the
language of information theory, we are searching for the subspace that contains most of the mutual information between sensory inputs and neuronal responses. By this dimensionality reduction
technique, we can find a compact description of the processing of the population.
As an efficient implementation of this strategy, we present an extension of reverse-correlation methods based on canonical correlation analysis. The resulting population receptive fields (PRFs) are not
bound to be triggered on individual spikes but are linked to response patterns that are simultaneously
determined by the algorithm.

We calculate the PRF for a population consisting of uniformly spaced cells with center-surround
receptive fields and noise correlations, and estimate the PRF of a population of rabbit retinal ganglion
cells from multi-electrode recordings. In addition, we show how our method can be extended to
explore different hypotheses about the neural code, such as spike latencies or interval coding, which
require nonlinear read-out mechanisms.
2 From reverse correlation to canonical correlation
We regard the stimulus at time t as a random variable X_t ∈ ℝ^n, and the neural response as Y_t ∈ ℝ^m.
For simplicity, we assume that the stimulus consists of Gaussian white noise, i.e. E(X) = 0 and
Cov(X) = I.

The spike-triggered average a of a neuron can be motivated by the fact that it is the direction in
stimulus space maximizing the correlation coefficient

ρ = \frac{Cov(a^T X, Y_1)}{\sqrt{Var(a^T X) Var(Y_1)}}   (2)
between the filtered stimulus a^T X and a univariate neural response Y_1. In the case of a neural
population, we are not only looking for the stimulus feature a, but also need to determine what
pattern of spiking activity b it is coupled with. The natural extension is to search for those vectors
a_1 and b_1 that maximize

ρ_1 = \frac{Cov(a_1^T X, b_1^T Y)}{\sqrt{Var(a_1^T X) Var(b_1^T Y)}}.   (3)
We interpret a_1 as the stimulus filter whose output is maximally correlated with the output of the
"response filter" b_1. Thus, we are simultaneously searching for features of the stimulus that the
neural system is selective for, and the patterns of activity that it uses to signal the presence or absence
of this feature. We refer to the vector a_1 as the (first) population receptive field of the population,
and b_1 is the response feature corresponding to a_1. If a hypothetical neuron receives input from the
population and wants to decode the presence of the stimulus a_1, the weights of the optimal linear
readout [16] could be derived from b_1.
Canonical Correlation Analysis (CCA) [9] is an algorithm that finds the vectors a_1 and b_1 that
maximize (3). We denote the covariances of X and Y by Σ_x, Σ_y, the cross-covariance by Σ_xy, and
the whitened cross-covariance by

C = Σ_x^{-1/2} Σ_xy Σ_y^{-1/2}.   (4)

Let C = U D V^T denote the singular value decomposition of C, where the entries of the diagonal
matrix D are non-negative and decreasing along the diagonal. Then, the k-th pair of canonical
variables is given by a_k = Σ_x^{-1/2} u_k and b_k = Σ_y^{-1/2} v_k, where u_k and v_k are the k-th column
vectors of U and V, respectively. Furthermore, the k-th singular value of C, i.e. the k-th diagonal
entry of D, is the correlation coefficient ρ_k of a_k^T X and b_k^T Y. The random variables a_i^T X and a_j^T X
are uncorrelated for i ≠ j.
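As a concrete illustration, the construction of Eq. (4) can be carried out directly on sample estimates of the covariances. The following Python sketch is illustrative rather than the exact implementation used here; the small ridge term added for numerical stability is our assumption, not part of the model.

    import numpy as np

    def cca(X, Y, eps=1e-8):
        # X: (samples x n) stimuli; Y: (samples x m) responses.
        X = X - X.mean(0)
        Y = Y - Y.mean(0)
        n = X.shape[0]
        Sx = X.T @ X / n + eps * np.eye(X.shape[1])   # Sigma_x (ridge added)
        Sy = Y.T @ Y / n + eps * np.eye(Y.shape[1])   # Sigma_y
        Sxy = X.T @ Y / n                             # Sigma_xy

        def inv_sqrt(S):
            w, V = np.linalg.eigh(S)
            return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

        Wx, Wy = inv_sqrt(Sx), inv_sqrt(Sy)
        U, d, Vt = np.linalg.svd(Wx @ Sxy @ Wy)       # C = U D V^T, Eq. (4)
        A = Wx @ U                                    # columns a_k: the PRFs
        B = Wy @ Vt.T                                 # columns b_k: response features
        return A, B, d                                # d[k] = rho_k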
Importantly, the solution of the optimization problem in CCA is unique and can be computed efficiently via a single eigenvalue problem. The population receptive fields and the characteristic
patterns are found by a joint optimization in stimulus and response space. Therefore, one does
not need to know (or assume) a priori what features the population is sensitive to, or what spike
patterns convey the information.

The first K PRFs form a basis for the subspace of stimuli that the neural population is most sensitive
to, and the individual basis vectors a_k are sorted according to their "informativeness" [13, 17].
The mutual information between two one-dimensional Gaussian variables with correlation ρ is given
by MI_Gauss = -\frac{1}{2} \log(1 - ρ^2), so maximizing correlation coefficients is equivalent to maximizing
mutual information [3]. Assuming the neural response Y to be Gaussian, the subspace spanned by
the first K vectors B_K = (b_1, ..., b_K) is also the K-dimensional subspace that contains the maximal
amount of mutual information between stimuli and neural response. That is,
B_K = \arg\max_{B ∈ ℝ^{n×K}} \frac{\det(B^T Σ_y B)}{\det(B^T (Σ_y - Σ_xy^T Σ_x^{-1} Σ_xy) B)}.   (5)
Thus, in terms of dimensionality reduction, CCA optimizes the same objective as oriented PCA
[5]. In contrast to oriented PCA, however, CCA does not require one to know explicitly how the
response covariance Σ_y = Σ_s + Σ_ε splits into signal Σ_s and noise Σ_ε covariance. Instead, it uses the
cross-covariance Σ_xy, which is directly available from reverse-correlation experiments. In addition,
CCA not only returns the most predictable response features b_1, ..., b_K but also the most predictive
stimulus components A_K = (a_1, ..., a_K).
For general Y and for stimuli X with an elliptically contoured distribution, MI_Gauss - J(A^T X) provides a lower bound on the mutual information between A^T X and B^T Y, where

J(A^T X) = \frac{1}{2} \log(\det(2πe A^T Σ_x A)) - h(A^T X)   (6)

is the negentropy of A^T X, and h(A^T X) its differential entropy. Since for elliptically contoured
distributions J(A^T X) does not depend on A, the PRFs can be seen as the solution of a variational
approach, maximizing a lower bound on the mutual information. Maximizing mutual information
directly is hard, requires extensive amounts of data, and usually multiple repetitions of the same
stimulus sequence.
3 The receptive field of a population of neurons
3.1 The effect of tuning functions and noise correlations
To illustrate the relationship between the tuning functions of individual neurons and the PRFs [22],
we calculate the first PRF of a simple one-dimensional population model consisting of center-surround neurons. Each tuning function is modeled by a "Difference of Gaussians" (DOG)

f(x) = \frac{1}{\sqrt{2π}σ} \exp\left( -\frac{(x-c)^2}{2σ^2} \right) - A \frac{1}{\sqrt{2π}Σ} \exp\left( -\frac{(x-c)^2}{2Σ^2} \right),   (7)

whose centers c are uniformly distributed over the real axis. The width Σ of the negative Gaussian is
set to be twice as large as the width σ of the positive Gaussian. If the area of both Gaussians is the
same (A = 1), the DC component of the DOG filter is zero, i.e. the neuron is not sensitive to the
mean luminance of the stimulus. If the ratio between the two areas becomes substantially unbalanced,
the DC component will become the largest signal (A → 0).
In addition to the parameter A, we will study the length scale of the noise correlations λ [18]. Specifically, we assume exponentially decaying noise correlations with Σ_ε(s) = \exp(-|s|/λ).
As this model is invariant under spatial shifts, the first PRF can be calculated by finding the spatial
frequency at which the SNR is maximal. That is, the first PRF can be used to estimate the passband
of the population transfer function. The SNR is given by
SNR(ω) = \frac{1 + λ^2 ω^2}{2λ} \left( e^{-ω^2 σ^2} + A^2 e^{-ω^2 Σ^2} - 2A e^{-ω^2 (σ^2 + Σ^2)/2} \right).   (8)
The passband of the first population filter moves as a function of both parameters A and λ. It equals
the DC component for small A (i.e. large imbalance) and small λ (i.e. short correlation length). In
this case, the mean intensity is the stimulus property that is most faithfully signaled by the ensemble.
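Eq. (8) is easy to evaluate numerically; the following Python sketch (with σ = 1, Σ = 2σ, and parameter values chosen purely for illustration) locates the peak of the passband.

    import numpy as np

    def snr(w, A, lam, sigma=1.0):
        Sig2 = (2.0 * sigma) ** 2                     # Sigma = 2 sigma
        signal = (np.exp(-w**2 * sigma**2) + A**2 * np.exp(-w**2 * Sig2)
                  - 2 * A * np.exp(-w**2 * (sigma**2 + Sig2) / 2))
        return (1 + lam**2 * w**2) / (2 * lam) * signal

    w = np.linspace(0.0, 5.0, 2001)
    for A, lam in [(0.2, 0.2), (0.9, 1.5)]:
        w_star = w[np.argmax(snr(w, A, lam))]
        print(f"A={A}, lambda={lam}: peak at omega={w_star:.2f}")
    # small A and short lambda put the peak at omega = 0 (the DC component)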
Figure 1: Spatial frequency of the first PRF for the model described above. λ is the length scale of
the noise correlations; A is the weight of the negative Gaussian in the DOG model. The region in
the bottom left corner (bounded by the white line) is the part of the parameter space in which the
PRF equals the DC component.
3.2 The receptive field of an ensemble of retinal ganglion cells
We mapped the population receptive fields of rabbit retinal ganglion cells recorded with a whole-mount preparation. We are not primarily interested in prediction performance [12], but rather in
dimensionality reduction: we want to characterize the filtering properties of the population.

The neurons were stimulated with a 16 × 16 checkerboard consisting of binary white noise which
was updated every 20 ms. The experimental procedures are described in detail in [21]. After spike-sorting, spike trains from 32 neurons were binned at 20 ms resolution, and the response of a neuron
to a stimulus at time t was defined to consist of the spike counts in the 10 bins between 40 ms
and 240 ms after t. Thus, each population response Y_t is a 320-dimensional vector.
Figure 2A displays the first 6 PRFs, the corresponding patterns of neural activity (Figure 2B) and their
correlation coefficients ρ_k (which were calculated using a cross-validation procedure). It can be seen
that the PRFs look very different from the usual center-surround structure of retinal ganglion cells. However,
one should keep in mind that it is really the space spanned by the PRFs that is relevant, and thus be
careful when interpreting the actual filter shapes [15].

For comparison, we also plotted the single-cell receptive fields in Figure 2C, and their projections
into the space spanned by the first 6 PRFs. These plots suggest that a small number of PRFs might
be sufficient to approximate each of the receptive fields. To determine the dimensionality of the
relevant subspace, we analyzed the correlation coefficients ρ_k. The Gaussian mutual information
MI_Gauss = -\frac{1}{2} \sum_{k=1}^{K} \log(1 - ρ_k^2) is an estimate of the information contained in the subspace
spanned by the first K PRFs. Based on this measure, a 12-dimensional subspace accounts for 90%
of the total information.

In order to link the empirically estimated PRFs with the theoretical analysis in Section 3.1, we
calculated the spectral properties of the first PRF. Our analysis revealed that most of the power is in
the low frequencies, suggesting that the population is in the parameter regime where the single-cell
receptive fields have power in the DC component and the noise correlations have short range, which
is certainly reasonable for retinal ganglion cells [4].
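The bookkeeping behind this statement is simple: given estimated canonical correlations, the cumulative Gaussian MI tells us how many PRFs are needed for a given fraction of the information. A sketch, using as an example only the six leading coefficients printed in Figure 2:

    import numpy as np

    def cumulative_gaussian_mi(rho):
        rho = np.asarray(rho, dtype=float)
        return -0.5 * np.cumsum(np.log(1.0 - rho**2))

    rho = np.array([0.51, 0.44, 0.38, 0.35, 0.29, 0.27])  # leading rho_k (Fig. 2)
    mi = cumulative_gaussian_mi(rho)
    print(mi / mi[-1])   # fraction of these PRFs' MI captured as K grows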
Figure 2: The population receptive fields of a group of 32 retinal ganglion cells. A) The first 6 PRFs,
sorted by the correlation coefficient ρ_k. B) The response features b_k coupled with the PRFs. Each
row of each image corresponds to one neuron, and each column to one time bin. Blue denotes
enhanced activity, red suppressed. It can be seen that only a subset of neurons contributed to the first
6 PRFs. C) The single-cell receptive fields of 24 neurons from our population, and their projections
into the space spanned by the 6 PRFs.
Figure 3: A) Correlation coefficients ρ_k for the PRFs. Estimates and error bars are calculated using
a cross-validation procedure. B) Gaussian MI of the subspace spanned by the first K PRFs.
4 Nonlinear extensions using Kernel Canonical Correlation Analysis
Thus far, our model is completely linear: we assume that the stimulus is linearly related to the
neural responses, and we also assume a linear readout of the response. In this section, we will
explore generalizations of the CCA model using kernel CCA: by embedding the stimulus space
nonlinearly in a feature space, nonlinear codes can be described.
Kernel methods provide a framework for extending linear algorithms to the nonlinear case [8]. After
projecting the data into a feature space via feature maps φ and ψ, a solution is found using linear
methods in the feature space. In the case of kernel CCA [1, 10, 2, 7], one seeks to find a linear
relationship between the random variables X̃ = φ(X) and Ỹ = ψ(Y), rather than between X and
Y. If an algorithm is purely defined in terms of dot products, and if the dot product in feature space
k(s, t) = ⟨φ(s), φ(t)⟩ can be computed efficiently, then the algorithm does not require explicit
calculation of the feature maps φ and ψ. This "kernel trick" makes it possible to work in high- (or infinite-) dimensional feature spaces. It is worth mentioning that the space of patterns Y itself
does not have to be a vector space. Given a data set y_1, ..., y_n, it suffices to know the dot products
between any pair of training points, K_ij := ⟨ψ(y_i), ψ(y_j)⟩.
The kernel function k(s, t) can be seen as a similarity measure. It incorporates our assumptions
about which spike patterns should be regarded as similar "messages". Therefore, the choice of the
kernel function is closely related to specifying what the search space of potential neural codes is. A
number of distance and kernel functions [6, 19] have been proposed to compute distances between
spike trains. They can be designed to take into account precisely timed patterns of spikes, or to be
invariant to certain transformations such as temporal jitter.
We illustrate the concept on simulated data: we use a similarity measure based on the metric
D_interval [19] to estimate the receptive field of a neuron which does not use its firing rate, but rather
the occurrence of specific interspike intervals, to convey information about the stimulus. The metric
D_interval between two spike trains is essentially the cost of matching their intervals by shifting, adding
or deleting spikes. (We set k(s, t) = exp(-D(s, t)). In theory, this function is not guaranteed
to be positive definite, which could lead to numerical problems, but we did not encounter any in
our simulation.) If we consider coding schemes that are based on patterns of spikes, the methods
described here become useful even for the analysis of single neurons. We will here concentrate on a
single neuron, but the analysis can be extended to patterns distributed across several neurons.
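To make the kernel construction concrete, here is a minimal Python sketch of turning a spike-train metric into a similarity matrix via k(s, t) = exp(-D(s, t)). For brevity it implements Victor's spike-time metric D_spike (cost q per unit of spike-time shift, unit cost per insertion or deletion) rather than the interval metric D_interval used in our experiments; the kernel construction itself is identical.

    import numpy as np

    def d_spike(s, t, q=1.0):
        # Edit distance between two sorted spike-time arrays.
        n, m = len(s), len(t)
        D = np.zeros((n + 1, m + 1))
        D[:, 0] = np.arange(n + 1)   # delete all spikes of s
        D[0, :] = np.arange(m + 1)   # insert all spikes of t
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                D[i, j] = min(D[i - 1, j] + 1,
                              D[i, j - 1] + 1,
                              D[i - 1, j - 1] + q * abs(s[i - 1] - t[j - 1]))
        return D[n, m]

    def gram(trains, q=1.0):
        # Positive definiteness of exp(-D) is not guaranteed, as noted above.
        return np.array([[np.exp(-d_spike(a, b, q)) for b in trains]
                         for a in trains])

    trains = [np.array([10., 35., 60.]), np.array([12., 30., 61.]), np.array([5.])]
    print(gram(trains))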
Our hypothetical neuron encodes information in a pattern consisting of three spikes: the relative
timing of the second spike is informative about the stimulus. The bigger the correlation between
receptive field and stimulus ⟨r, s_t⟩, the shorter is the interval. If the receptive field is very dissimilar
to the stimulus, the interval is long. While the timing of the spikes relative to each other is precise,
there is jitter in the timing of the pattern relative to the stimulus. Figure 4B is a raster plot of
simulated spike trains from this model, ordered by ⟨r, s_t⟩. We also included noise spikes at random
times.
Figure 4: Coding by spike patterns. A) Receptive field of the neuron described in Section 4. B) A
subset of the simulated spike trains, sorted with respect to the similarity between the shown stimulus
and the receptive field of the model. The interval between the first two informative spikes in each
trial is highlighted in red. C) Receptive field recovered by kernel CCA; the correlation coefficient
between real and estimated receptive field is 0.93. D) Receptive field derived using linear decoding;
the correlation coefficient is 0.02.
Using these spike trains, we tried to recover the receptive field r without telling the algorithm what
the indicating pattern was. Each stimulus was shown only once, and therefore every spike pattern occurred only once. We simulated 5000 stimulus presentations for this model, and applied
kernel CCA with a linear kernel on the stimuli and the alignment score on the spike trains. By
using incomplete Cholesky decompositions [2], one can compute kernel CCA without having to
calculate the full kernel matrix. As many kernels on spike trains are computationally expensive, this
trick can result in substantial speed-ups of the computation. The receptive field was recovered (see
Figure 4), despite the highly nonlinear encoding mechanism of the neuron. For comparison, we also
show what receptive field would be obtained using linear decoding on the indicated bins.
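One simple route to a working kernel CCA, sketched below under our own assumptions rather than as the exact procedure used here, is to turn each centered Gram matrix into explicit kernel-PCA features and then run ordinary linear CCA on the two feature sets; the incomplete Cholesky factorization mentioned above plays the same role with far less computation when the spike-train kernel is expensive.

    import numpy as np

    def kernel_features(K, k=50):
        # Centered Gram matrix -> leading kernel-PCA features (rows = samples).
        n = K.shape[0]
        H = np.eye(n) - np.ones((n, n)) / n
        w, V = np.linalg.eigh(H @ K @ H)
        idx = np.argsort(w)[::-1][:k]
        w = np.clip(w[idx], 1e-12, None)
        return V[:, idx] * np.sqrt(w)

    # Zx = kernel_features(Kx); Zy = kernel_features(Ky)
    # A, B, rho = cca(Zx, Zy)   # linear CCA (Section 2) on nonlinear features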
Although this neuron model may seem slightly contrived, it is a good proof of concept that, in
principle, receptive fields can be estimated even if the firing rate gives no information at all about
the stimulus and the encoding is highly nonlinear. Our algorithm does not only look at patterns that
occur more often than expected by chance, but also takes into account to what extent their occurrence
is correlated with the sensory input.
5 Conclusions
We set out to find a useful description of the stimulus-response relationship of an ensemble of
neurons akin to the concept of the receptive field for single neurons. The population receptive fields are
found by a joint optimization over stimuli and spike patterns, and are thus not bound to be triggered
by single spikes.

We estimated the PRFs of a group of retinal ganglion cells, and found that the first PRF had most
spectral power in the low-frequency bands, consistent with our theoretical analysis. The stimulus
we used was a white-noise sequence; it will be interesting to see how the informative subspace and
its spectral properties change for different stimuli such as colored noise. The ganglion cell layer of
the retina is a system that is relatively well understood at the level of single neurons. Therefore,
our results can readily be compared and connected to those obtained using conventional analysis
techniques. However, our approach has the potential to be especially useful in systems in which the
functional significance of single-cell receptive fields is difficult to interpret.
We usually assumed that each dimension of the response vector Y represents an electrode recording
from a single neuron. However, the vector Y could also represent any other multi-dimensional measurement of brain activity: for example, imaging modalities such as voltage-sensitive dye imaging
yield measurements at multiple pixels simultaneously. Electrophysiological data, e.g. local field potentials, are often analyzed in frequency space, i.e. by looking at the energy of the signal
in different frequency bands. This also results in a multi-dimensional representation of the signal.
Using CCA, receptive fields can readily be estimated from these kinds of representations without
limiting attention to single channels or extracting neural events.
Acknowledgments
We would like to thank A. Gretton and J. Eichhorn for useful discussions, and F. Jäkel, J. Butler and S. Liebe for
comments on the manuscript.
References
[1] S. Akaho. A kernel method for canonical correlation analysis. In International Meeting of Psychometric Society, Osaka, 2001.
[2] F. R. Bach and M. I. Jordan. Kernel independent component analysis. Journal of Machine Learning Research, 3:1-48, 2002.
[3] G. Chechik, A. Globerson, N. Tishby, and Y. Weiss. Information bottleneck for Gaussian variables. Journal of Machine Learning Research, 6:165-188, 2005.
[4] S. Devries and D. Baylor. Mosaic arrangement of ganglion cell receptive fields in rabbit retina. Journal of Neurophysiology, 78(4):2048-2060, 1997.
[5] K. Diamantaras and S. Kung. Cross-correlation neural network models. Signal Processing, IEEE Transactions on, 42(11):3218-3223, 1994.
[6] J. Eichhorn, A. Tolias, A. Zien, M. Kuss, C. E. Rasmussen, J. Weston, N. Logothetis, and B. Schölkopf. Prediction on spike data using kernel algorithms. In S. Thrun, L. Saul, and B. Schölkopf, editors, Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA, 2004.
[7] K. Fukumizu, F. R. Bach, and A. Gretton. Statistical consistency of kernel canonical correlation analysis. Journal of Machine Learning Research, 2007.
[8] T. Hofmann, B. Schölkopf, and A. Smola. Kernel methods in machine learning. Annals of Statistics (in press), 2007.
[9] H. Hotelling. Relations between two sets of variates. Biometrika, 28:321-377, 1936.
[10] T. Melzer, M. Reiter, and H. Bischof. Nonlinear feature extraction using generalized canonical correlation analysis. In Proc. of International Conference on Artificial Neural Networks (ICANN), pages 353-360, 2001.
[11] L. Paninski. Convergence properties of three spike-triggered analysis techniques. Network, 14(3):437-464, Aug 2003.
[12] J. W. Pillow, L. Paninski, V. J. Uzzell, E. P. Simoncelli, and E. J. Chichilnisky. Prediction and decoding of retinal ganglion cell responses with a probabilistic spiking model. J Neurosci, 25(47):11003-11013, 2005.
[13] J. W. Pillow and E. P. Simoncelli. Dimensionality reduction in neural models: an information-theoretic generalization of spike-triggered average and covariance analysis. J Vis, 6(4):414-428, 2006.
[14] M. J. Schnitzer and M. Meister. Multineuronal firing patterns in the signal from eye to brain. Neuron, 37(3):499-511, 2003.
[15] O. Schwartz, J. W. Pillow, N. C. Rust, and E. P. Simoncelli. Spike-triggered neural characterization. J Vis, 6(4):484-507, 2006.
[16] H. S. Seung and H. Sompolinsky. Simple models for reading neuronal population codes. Proc Natl Acad Sci U S A, 90(22):10749-10753, 1993.
[17] T. Sharpee, N. Rust, and W. Bialek. Analyzing neural responses to natural signals: maximally informative dimensions. Neural Comput, 16(2):223-250, 2004.
[18] H. Sompolinsky, H. Yoon, K. Kang, and M. Shamir. Population coding in neuronal systems with correlated noise. Phys Rev E Stat Nonlin Soft Matter Phys, 64(5 Pt 1):051904, 2001.
[19] J. Victor. Spike train metrics. Curr Opin Neurobiol, 15(5):585-592, 2005.
[20] H. Wässle. Parallel processing in the mammalian retina. Nat Rev Neurosci, 5(10):747-757, 2004.
[21] G. M. Zeck, Q. Xiao, and R. H. Masland. The spatial filtering properties of local edge detectors and brisk-sustained retinal ganglion cells. Eur J Neurosci, 22(8):2016-2026, 2005.
[22] K. Zhang and T. Sejnowski. Neuronal tuning: to sharpen or broaden? 1999.
2,449 | 3,221 | Extending position/phase-shift tuning to motion
energy neurons improves velocity discrimination
Stanley Yiu Man Lam and Bertram E. Shi
Department of Electronic and Computer Engineering
Hong Kong University of Science and Technology
Clear Water Bay, Kowloon, Hong Kong
{eelym,eebert}@ee.ust.hk
Abstract
We extend position and phase-shift tuning, concepts already well established in
the disparity energy neuron literature, to motion energy neurons. We show that
Reichardt-like detectors can be considered examples of position tuning, and that
motion energy filters whose complex valued spatio-temporal receptive fields are
space-time separable can be considered examples of phase tuning. By combining
these two types of detectors, we obtain an architecture for constructing motion
energy neurons whose center frequencies can be adjusted by both phase and position shifts. Similar to recently described neurons in the primary visual cortex,
these new motion energy neurons exhibit tuning that is between purely spacetime separable and purely speed tuned. We propose a functional role for this
intermediate level of tuning by demonstrating that comparisons between pairs of
these motion energy neurons can reliably discriminate between inputs whose
velocities lie above or below a given reference velocity.
1 Introduction
Image motion is an important cue used by both biological and artificial visual systems to extract
information about the environment. Although image motion is commonly used, there are different
models for image motion processing in different systems. The Reichardt model is a dominant
model for motion detection in insects, where image motion analysis occurs at a very early stage [1].
For mammals, the bulk of visual processing for motion is thought to occur in the cortex, and the
motion energy model is one of the dominant models [2][3]. However, despite the differences in
complexity between these two models, they are mathematically equivalent given appropriate
choices of the spatial and temporal filters [4].
The motion energy model is very closely related to the disparity energy model, which has been
used to model the outputs of disparity selective neurons in the visual cortex [5]. The disparity tuning of neurons in this model can be adjusted via two mechanisms: a position shift between the center locations of the receptive fields in the left and right eyes or a phase shift between the receptive
field organization in the left and right eyes [6][7]. It appears that biological systems use a combination of these two mechanisms.
In Section 2, we extend the concepts of position and phase tuning to the construction of motion
energy neurons. We combine the Reichardt model and the motion energy model to obtain an architecture for constructing motion energy neurons whose tuning can be adjusted by the analogs of
position and phase shifts. In Section 3, we investigate the functional advantages of position and
phase shifts, inspired by a similar comparison from the disparity energy literature. We show that
simply comparing the outputs of a pair of motion energy cells with combined position/phase-shift
tuning enables us to discriminate reliably between stimuli moving above and below a reference
velocity. Finally, in Section 4, we place this work in the context of recent results on speed tuning in
V1 neurons.
2 Extending Position/Phase Tuning to Motion Energy Models
Figure 1(a) shows a 1D array of three Reichardt detectors[1] tuned to motion from left to right.
Each detector computes the correlation between its photosensor input and the delayed input from
the photosensor to the left. The delay could be implemented by a low pass filter. Usually, the correlation is assumed to be computed by a multiplication between the current and delayed signals. For
consistency with the following discussion, we show the output as a summation followed by a
squaring. Squaring the sum is essentially equivalent to the product, since the product could be
recovered by subtracting the sum of the squared inputs from the squared sum (e.g.,
(a + b)^2 - (a^2 + b^2) = 2ab).
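A minimal simulation of the array in Figure 1(a), with the delay realized as a first-order low-pass filter as suggested above, might look as follows (Python; the names and the value of the delay parameter are illustrative only):

    import numpy as np

    def reichardt(frames, a=0.9):
        # frames: (T, N) photosensor samples; returns (T, N-1) detector outputs.
        T, N = frames.shape
        delayed = np.zeros(N)                # low-pass state, one per sensor
        out = np.zeros((T, N - 1))
        for t in range(T):
            # each detector sums its sensor with the delayed left neighbor
            out[t] = (frames[t, 1:] + delayed[:-1]) ** 2
            delayed = a * delayed + (1 - a) * frames[t]
        return out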
Delbruck proposed a modification of the Reichardt detector (Figure 1(b)), which switches the order
of the delay and the sum, resulting in a delay-line architecture [8]. The output of a detector is the
sum of its photosensor input and the delayed output of the detector to the left. This recurrent connection extends the spatio-temporal receptive field of the detector, since the input from the second-nearest-neighboring photosensor to the left is now connected to the detector through two delays,
whereas the Reichardt detector never sees the output of its second-nearest-neighboring photosensor.
The velocity tuning of these detectors is determined by the combination of the temporal delay and
the position shift between the neighboring detectors. As the delay increases, the tuned velocity
decreases. As the position shift increases, the tuned velocity also increases. This position-tuning of
velocity is reminiscent of the position-tuning of disparity energy neurons, where the larger the position shift between the spatial receptive fields being combined from the left and right eyes, the larger
the disparity tuning [9].
Figure 1(c) shows a 1D array of three motion energy detectors[2][3]. At each spatial location, the
outputs of the photosensors in a neighborhood around each spatial location are combined through
even and odd symmetric linear spatial receptive fields, which are here modelled by spatial Gabor
functions. In 1D, the even and odd symmetric Gabor receptive field profiles are the real and imaginary parts of the function
g_s(x) = \frac{1}{\sqrt{2π} σ_x} \exp\left( -\frac{x^2}{2σ_x^2} \right) \exp(jΩ_x x) = \frac{1}{\sqrt{2π} σ_x} \exp\left( -\frac{x^2}{2σ_x^2} \right) (\cos(Ω_x x) + j \sin(Ω_x x))   (1)
where Ω_x determines the preferred spatial frequency of the receptive field, and σ_x determines its
spatial extent. The even and odd spatial filter outputs are then combined through temporal filters to
produce two outputs which are then squared and summed to produce the motion energy. In many
cases, the temporal receptive field profiles are also Gabor functions. The combined spatial and temporal receptive fields of the two neurons are separable when considered as a single complex-valued
function:
$$g(x,t) = \frac{1}{\sqrt{2\pi}\,\sigma_x} \exp\left(-\frac{x^2}{2\sigma_x^2}\right) \exp(j\Omega_x x) \cdot \frac{1}{\sqrt{2\pi}\,\sigma_t} \exp\left(-\frac{t^2}{2\sigma_t^2}\right) \exp(j\Omega_t t) \quad (2)$$
where $\Omega_t$ and $\sigma_t$ determine the preferred temporal frequency and temporal extent of the temporal
receptive fields. Strictly speaking, these spatio-temporal filters are not velocity tuned, since the
velocity at which a moving sine-wave grating stimulus produces the maximum response varies with the spatial frequency of the grating. However, since spatial frequencies near $\Omega_x$ lead to the largest responses, the filter is sometimes thought of as having a preferred velocity $v_{\mathrm{pref}} = -\Omega_t/\Omega_x$.
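The spatial stage of Eq. (1) is straightforward to write down. The following sketch returns the even (real) and odd (imaginary) receptive field profiles; the parameter value $\sigma_x = 4$ is an assumption, while $\Omega_x = 2\pi/20$ matches the experiments reported below.

```python
import numpy as np

def gabor_1d(x, sigma_x=4.0, omega_x=2 * np.pi / 20):
    """Complex 1D Gabor of Eq. (1); real part = even RF, imaginary part = odd RF."""
    envelope = np.exp(-x**2 / (2 * sigma_x**2)) / (np.sqrt(2 * np.pi) * sigma_x)
    return envelope * np.exp(1j * omega_x * x)

x = np.arange(-20, 21)
g = gabor_1d(x)
even, odd = g.real, g.imag  # cosine- and sine-phase spatial receptive fields
```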
Figure 1. (a) 1D array of three Reichardt detectors tuned to motion from left to right. The Δ block
represents a temporal delay. The semi-circles represent photosensors. (b) Delbruck delay-line
detector. (c) 1D array of three motion energy detectors. The bottom blocks represent even and odd
symmetric spatial receptive fields modelled by Gabor functions. (d) The proposed motion detector
by combining the position and phase tuning mechanisms of (b) and (c).
One problem with using spatio-temporal Gabor functions is that they are non-causal in time. In this
work, we consider the use of a causal recurrently implemented temporal filter. If we let the real and
imaginary parts of $u(x,t)$ denote the even and odd spatial filter outputs, then the two outputs of the temporal filter are given by the real and imaginary parts of $v(x,t)$, which satisfies
$$v(x,t) = a\exp(j\phi_t)\cdot v(x, t-1) + (1-a)\cdot u(x,t) \quad (3)$$
where $a < 1$ and $\phi_t$ are real-valued constants. We derive this equation from Fig. 1(c) by considering the time delay $\Delta$ as a unit-sample discrete-time delay. We consider discrete-time operation here for consistency with our experimental results; however, a corresponding continuous-time temporal filter can be obtained by replacing the time delay with a first-order continuous-time recurrent filter with time constant $\tau$. The frequency response of this complex-valued filter is
$$\frac{V(\omega_x, \omega_t)}{U(\omega_x, \omega_t)} = \frac{1-a}{1 - a\exp(-j(\omega_t - \phi_t))} \quad (4)$$
where $\omega_x$ and $\omega_t$ are spatial and temporal frequency variables. This function achieves its unity maximum value at $\omega_t = \phi_t$, independently of $\omega_x$. Assuming the same Gabor spatial receptive field, the combined spatio-temporal receptive field can be approximated by the continuous function:
$$g(x,t) = \frac{1}{\sqrt{2\pi}\,\sigma_x}\exp\left(-\frac{x^2}{2\sigma_x^2}\right)\exp(j\Omega_x x)\cdot \tau^{-1}\exp(-t/\tau)\exp(j\phi_t t)\, h(t) \quad (5)$$
where $h(t)$ is the unit step function and $\tau^{-1} \approx (1-a)$. Again, strictly speaking, the filter is not velocity tuned, but for input sine-wave gratings with a spatial frequency near $\Omega_x$, the composite spatio-temporal filter has a preferred velocity near $v_{\mathrm{pref}} = -\phi_t/\Omega_x$.
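A minimal sketch of the recursion (3), together with a numerical check that the gain of Eq. (4) peaks (near unity) at $\omega_t = \phi_t$; the constants $a = 0.9$ and $\phi_t = 0.3$ are illustrative assumptions.

```python
import numpy as np

def temporal_filter(u, a=0.9, phi_t=0.3):
    """Causal recurrent filter of Eq. (3): v(t) = a e^{j phi_t} v(t-1) + (1-a) u(t)."""
    v = np.zeros(len(u), dtype=complex)
    for t in range(len(u)):
        prev = v[t - 1] if t > 0 else 0.0
        v[t] = a * np.exp(1j * phi_t) * prev + (1 - a) * u[t]
    return v

# numerical check of Eq. (4): the gain peaks near unity at omega_t = phi_t
a, phi_t = 0.9, 0.3
omega = np.linspace(-np.pi, np.pi, 1001)
gain = np.abs((1 - a) / (1 - a * np.exp(-1j * (omega - phi_t))))
assert abs(gain.max() - 1.0) < 1e-3
assert abs(omega[gain.argmax()] - phi_t) < 0.01
```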
The velocity tuning of this filter is determined by the combination of the time delay and a phase shift $\phi_t$ between the input $u(x,t)$ and the output $v(x,t-1)$. The longer the time delay, the slower the preferred velocity. However, the larger the phase shift, the higher the preferred velocity. This phase-tuning of velocity is reminiscent of the phase-tuning of disparity-tuned neurons, where the larger the phase shift between the left and right receptive fields, the larger the preferred disparity.
The possibility of adjusting velocity tuning using two complementary mechanisms suggests that it should be possible to combine these two methods, as observed in disparity neurons. Figure 1(d) shows how the position and phase tuning mechanisms of Figures 1(b) and 1(c) can be combined. The preferred velocity for spatial frequencies near $\Omega_x$ will be determined by the sum of the preferred velocities determined by the position and phase-shift mechanisms, i.e., $v_{\mathrm{pref}} = 1 - \phi_t/\Omega_x$, assuming a unit spatial displacement between adjacent photosensors.
3 Motion energy pairs for velocity discrimination
Given the possibility of combining the position and phase tuning mechanisms, an interesting question is how these two mechanisms might be exploited when constructing populations of motion
energy neurons. Velocity can be estimated using a population of neurons tuned to different spatiotemporal frequencies [10][11]. However, the output of a single motion energy neuron is an ambiguous indicator of velocity, since its output depends upon other stimulus dimensions in addition to
motion (e.g., orientation, contrast).
Given the long history of position/phase shifts in disparity tuning, it is natural to start with an inspiration taken from the context of binocular vision. It has been shown that the responses from a population of phase-tuned disparity energy neurons are more comparable than the responses from a population
of position-tuned disparity energy neurons [12]. In particular, the preferred disparity of the neuron
with maximum response in a population of phase tuned neurons is a more reliable indicator of the
stimulus disparity than the preferred disparity of the neuron with maximum response in a population of position tuned neurons, especially for neurons with small phase shifts. The disadvantage of
purely phase tuned neurons is that their preferred disparities can be tuned only over a limited range
due to phase-wraparound in the sinusoidal modulation of the spatial Gabor. However, there is no
restriction on the range of preferred disparities when using position shifts. Thus, it has been suggested that position shifts can be used to "bias" the preferred disparity of a population around a
rough estimate of the stimulus disparity, and then use a population of neurons tuned by phase shifts
to obtain a more accurate estimation of the actual disparity.
In this section, we demonstrate that a similar phenomenon holds for motion energy neurons. In particular, we show that we can use position shifts to place the tuned velocity (for a spatial frequency of $\Omega_x$) in a population of two neurons around a desired bias velocity, $v_{\mathrm{bias}}$, and then use phase shifts with equal magnitude but opposite sign to place the preferred velocities symmetrically around this bias velocity. We then show that by comparing the outputs of these two neurons, we can accurately discriminate between velocities above and below $v_{\mathrm{bias}}$.
The equation describing the complex-valued output of the spatio-temporal filtering stage $w(x,t)$ for the detector shown in Figure 1(d) is
$$w(x,t) = a\exp(j\phi_t)\cdot w(x-1, t-1) + (1-a)\cdot u(x,t) \quad (6)$$
The frequency response is
$$\frac{W(\omega_x, \omega_t)}{U(\omega_x, \omega_t)} = \frac{1-a}{1 - a\exp(-j(\omega_t + \omega_x - \phi_t))} \quad (7)$$
and achieves its maximum along the line $\omega_t = -\omega_x + \phi_t$, as seen in the contour plot of the spatio-temporal frequency response magnitude of the cascade of (1) and (7) in Fig. 2(a). In comparison, the spatio-temporal frequency response of the cascade of (1) and (4), shown in Fig. 2(e), achieves its maximum at $\omega_t = \phi_t$ independently of $\omega_x$. For a moving sine-wave grating input with spatial and temporal frequencies $\omega_x$ and $\omega_t$, the steady-state motion energy outputs will be proportional to the squared magnitudes of the spatio-temporal frequency response evaluated at $(\omega_x, \omega_t)$.
Assume that we have two such motion cells with the same preferred spatial frequency $\Omega_x = 2\pi/20$ but opposite temporal frequencies $\Omega_t = \pm 2\pi/20$. The motion energy cell with positive $\Omega_t$ is tuned to fast velocities, while the motion energy cell with negative $\Omega_t$ is tuned to slow velocities. If we compare the frequency response magnitudes at frequency $(\omega_x, \omega_t)$, the boundary between the regions in the $(\omega_x, \omega_t)$ plane where the magnitude of one is larger than the other is a line passing through the origin with slope equal to 1, as shown in Fig. 2(c). This suggests that we can determine whether the velocity of the grating is faster or slower than 1 pixel per frame by checking the relative magnitude of the motion energy outputs, at least for sine-wave gratings.
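The following sketch assembles such a fast/slow pair from Eqs. (1) and (6) and applies it to a drifting sine grating. The parameter values ($\sigma_x = 4$, $a = 0.9$), and the use of the mean of $|w|^2$ over the second half of the frames as the steady-state motion energy, are assumptions for illustration; the opposite phase shifts $\mp 2\pi/20$ sit on top of the unit position shift of Eq. (6), which supplies the bias velocity of 1 pixel per frame.

```python
import numpy as np

def pair_energies(stim, sigma_x=4.0, omega_x=2 * np.pi / 20, a=0.9,
                  phis=(-2 * np.pi / 20, 2 * np.pi / 20)):
    """Motion energies of a fast/slow pair built from Eqs. (1) and (6).

    stim: array (T, X) of image intensities over time.
    """
    T, X = stim.shape
    xs = np.arange(-20, 21)
    g = np.exp(-xs**2 / (2 * sigma_x**2)) * np.exp(1j * omega_x * xs)
    g /= np.sqrt(2 * np.pi) * sigma_x
    # complex spatial filtering: u(x, t) = even output + j * odd output
    u = np.array([np.convolve(stim[t], g, mode='same') for t in range(T)])
    energies = []
    for phi in phis:
        w = np.zeros_like(u)
        for t in range(1, T):
            # Eq. (6): recurrent input from the detector one pixel to the left
            w[t, 1:] = a * np.exp(1j * phi) * w[t - 1, :-1] + (1 - a) * u[t, 1:]
        energies.append(np.mean(np.abs(w[T // 2:]) ** 2))  # steady-state energy
    return energies  # [fast, slow]

# a grating drifting faster than 1 pixel/frame should favor the fast cell
T, X, v = 200, 200, 1.5
tt, xx = np.meshgrid(np.arange(T), np.arange(X), indexing='ij')
e_fast, e_slow = pair_energies(np.cos(2 * np.pi / 20 * (xx - v * tt)))
print(e_fast > e_slow)  # True for v > 1
```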
Although the sine-wave grating is a particularly simple input, this property is not shared by other
pairs of motion energy neurons. For example, Fig. 2(f) shows the spatio-temporal frequency responses of two motion energy neurons that have the same spatio-temporal center frequencies as considered above, but are constructed by phase tuning (the cascade of (1) and (4)). In this case, the boundary is a horizontal line. Thus, the velocity boundary depends upon the spatial frequency: for lower spatial frequencies, the relative magnitudes will switch at higher velocities. Another commonly considered arrangement of Gabor filters is to place the center frequencies around a circle. For two neurons, this corresponds to displacing the two center frequencies by an equal amount perpendicularly to the line $\omega_x = \omega_t$ (Fig. 2(k)). For motion energy filters built from non-causal Gabor filters, the spatio-temporal frequency responses exhibit perfect circular symmetry, and the decision boundary also coincides with the diagonal line $\omega_x = \omega_t$ (see Figure 9 in [13]). However, non-causal filters are not physically realizable. If we consider motion energy neurons constructed from temporally causal functions (e.g., the cascade of (1) and (4)), the boundary only matches the diagonal line in a small neighborhood of $\omega_x = \Omega_x$, as shown in Fig. 2(i).
We have characterized the performance of the three motion pairs on the fast/slow velocity discrimination task for a variety of inputs, including sine-wave gratings, square wave gratings, and drifting
random dot stimuli with varying coherence.
Figure 2. Frequency response amplitudes of the motion pairs formed by the three types of motion cells. First row: phase- and position-tuned motion cells; the center frequencies of the fast (a) and slow (b) cells are $(\omega_x, \omega_t) = (0.314, 0.628)$ and $(0.314, 0)$, respectively. Second row: vertically displaced phase-tuned motion energy cells; the center frequencies of the fast (e) and slow (f) cells are $(0.314, 0.628)$ and $(0.314, 0)$, respectively. Third row: orthogonally displaced phase-tuned motion energy cells; the center frequencies of the fast (i) and slow (j) cells are $(0.092, 0.536)$ and $(0.536, 0.092)$, respectively. The third column shows the contour plot of the difference between the frequency response amplitudes of the fast cell and the slow cell; the dashed line shows the decision boundary at zero. The fourth column shows the cross sections of the frequency response amplitudes along the line connecting the two center frequencies (fast = solid, slow = dashed); zero denotes the point on the line that crosses $\omega_t = \omega_x$.

We first consider drifting sinusoidal gratings with spatial frequencies $\omega_x \in [0, 2\pi/10]$ and velocities $v_{\mathrm{input}} \in [0, 2]$. For each spatial frequency and velocity, we compare the two motion energy
outputs at different phase shifts of the input grating, and calculate the percentage of cases where the response of the fast cell is larger than that of the slow cell. Fig. 3(a)-(c) show the percentages as the
grey scale value for each combination of input spatial frequency and velocity. Ideally, the top half
should be white (i.e., the fast cell's response is larger for all inputs whose velocity is greater than one), and the bottom half should be black. For the phase-shifted motion cells with unit position-tuned velocity bias, the responses are correct over a wide range of spatial frequencies. On the other
hand, for the motion pairs with the same center frequencies but tuned by pure phase shifts
(Fig. 3(c)), the velocity at which the relative responses switch decreases with spatial frequency.
This is consistent with the horizontal decision boundary computed by comparing the frequency
response magnitudes. For the phase-tuned motion-energy cells with orthogonally displaced center
frequencies, the boundary rapidly diverges from the horizontal as the spatial frequency moves away
from $\Omega_x$. Fig. 3(d) shows the overall accuracy obtained by combining the responses over all velocities. The detector utilizing the phase-tuned cells with position bias has the highest accuracy over the widest range of spatial frequencies.
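Because the steady-state energies for sine gratings are proportional to the squared frequency-response magnitudes, the sweep over spatial frequency and velocity can also be done analytically. The sketch below evaluates the combined pair on a grid by rewriting the argument of Eq. (7) in terms of the grating velocity relative to the unit position shift; the grid ranges and parameter values are assumptions consistent with the experiments above.

```python
import numpy as np

def gain2(w_x, v, phi, a=0.9, omega_x=2 * np.pi / 20, sigma_x=4.0):
    """Squared gain (Gabor magnitude times Eq. (7)) for a sine grating of
    spatial frequency w_x drifting at velocity v; the argument of Eq. (7)
    becomes phi + w_x * (v - 1), the '1' being the bias velocity from the
    unit position shift."""
    spatial = np.exp(-((w_x - omega_x) ** 2) * sigma_x ** 2 / 2)
    temporal = (1 - a) / np.abs(1 - a * np.exp(1j * (phi + w_x * (v - 1))))
    return (spatial * temporal) ** 2

phi = 2 * np.pi / 20
hits, total = 0, 0
for w_x in np.linspace(0.05, 2 * np.pi / 10, 30):
    for v in np.linspace(0.0, 2.0, 41):
        if v == 1.0:
            continue  # points on the boundary are excluded
        pred_fast = gain2(w_x, v, -phi) > gain2(w_x, v, +phi)
        hits += int(pred_fast == (v > 1.0))
        total += 1
print(hits / total)  # 1.0: for sine gratings the boundary sits at v = 1 for every w_x
```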
Fig. 3(e)-(h) show the responses of the motion pairs to square wave gratings. The results are similar
to the case of sinusoidal gratings, except that the performance at low spatial frequencies is worse.
This is expected, since for low spatial frequencies the square wave gratings have large constant-intensity areas that convey no motion information.

Figure 3. Performance on the velocity discrimination task for different stimuli. First row: sine wave gratings; second row: square wave gratings; third row: drifting random dots. The first three columns show the percentage of stimuli for which the fast motion energy cell's response is larger than the slow cell's response. First column: motion cells with position-tuned velocity bias; second column: phase-tuned motion cells with the same center frequencies; third column: phase-tuned motion cells with orthogonal offset. The fourth column shows the average accuracy over all input velocities. Solid line: motion cells with position-tuned velocity bias; dashed line: phase-tuned motion cells with the same center frequencies; dash-dot line: phase-tuned motion cells with orthogonal offset.
Fig. 3(i)-(l) show the responses for drifting random dot stimuli at different velocities and coherence
levels. The dots were one pixel wide. The motion pair using the phase-shifted cells with position-tuned bias velocity maintains a consistently higher accuracy over all coherence levels tested.
4 Discussion
We described a new architecture for motion energy filters obtained by combining the position tuning mechanism of the Reichardt-like detectors and the phase tuning mechanism of motion energy
detectors based on complex-valued spatio-temporal separable filters. Motivated by results with disparity energy neurons indicating that the responses of phase-tuned neurons with small phase shifts
are more comparable, we have examined the ability of the proposed velocity detectors to discriminate between input stimuli above and below a fixed velocity. Our experimental and analytical
results confirm that comparing pairs constructed by using a position shift to center the tuned velocities around the boundary and phase shifts to offset the tuned velocities of the pair to opposite sides of the boundary is consistently better than previously proposed architectures that were based on pure phase tuning.
Recent experimental evidence has cast doubt upon the belief that the motion neurons in V1 and MT
have very distinct properties. Traditionally, the tuning of V1 motion sensitive neurons is thought to
be separable along the spatial and temporal frequency dimensions, while the frequency tuning of MT neurons is inseparable, consistent with constant speed tuning. However, it now seems that both V1
and MT neurons actually show a continuum in the degree to which preferred velocity changes with
spatial frequency [14][15][16]. Our proposed neurons constructed by position and phase shifts also
show an intermediate behavior between speed tuning and space-time separable tuning. With pure
phase shifts, the tuning is space-time separable. With position shifts, the neurons become speed
tuned. An intermediate tuning is obtained by combining position and phase tuning. Our results on a
simple velocity discrimination task suggest a functional role for this intermediate level of tuning in
creating motion energy pairs whose relative responses truly indicate changes in velocity around a
reference level for stimuli with a broad band of spatial frequency content. Pair-wise comparisons
have been previously proposed as a potential method for coding image speed [17][18]. Here, we
have demonstrated a systematic way of constructing reliably comparable pairs of neurons using
simple neurally plausible circuits.
Acknowledgements
This work was supported in part by the Hong Kong Research Grants Council under Grant
HKUST6300/04E.
References
[1] W. Reichardt, "Autocorrelation, a principle for the evaluation of sensory information by the central nervous system," in Sensory Communication, W. A. Rosenblith, ed. (Wiley, New York, 1961).
[2] E. Adelson and J. Bergen, "Spatiotemporal energy models for the perception of motion," Journal of the Optical Society of America A: Optics and Image Science, vol. 2, pp. 284-299, 1985.
[3] A. B. Watson and J. A. J. Ahumada, "Model of human visual-motion sensing," Journal of the Optical Society of America A, vol. 2, pp. 322-342, 1985.
[4] J. P. H. van Santen and G. Sperling, "Elaborated Reichardt detectors," Journal of the Optical Society of America A, vol. 2, pp. 300-321, 1985.
[5] I. Ohzawa, G. C. DeAngelis, and R. D. Freeman, "Stereoscopic depth discrimination in the visual cortex: Neurons ideally suited as disparity detectors," Science, vol. 249, pp. 1037-1041, 1990.
[6] N. Qian, "Computing stereo disparity and motion with known binocular cell properties," Neural Computation, vol. 6, pp. 390-404, 1994.
[7] D. Fleet, H. Wagner, and D. Heeger, "Neural encoding of binocular disparity: Energy models, position shifts and phase shifts," Vision Research, vol. 36, pp. 1839-1857, 1996.
[8] T. Delbruck, "Silicon Retina with Correlation-Based, Velocity-Tuned Pixels," IEEE Transactions on Neural Networks, vol. 4, pp. 529-541, 1993.
[9] A. Anzai, I. Ohzawa, and R. D. Freeman, "Neural mechanisms for encoding binocular disparity: Position vs. phase," Journal of Neurophysiology, vol. 82, pp. 874-890, 1999.
[10] D. J. Heeger, "Model for the extraction of image flow," Journal of the Optical Society of America A, vol. 4, pp. 1455-1471, 1987.
[11] E. Simoncelli and D. Heeger, "A model of neuronal responses in visual area MT," Vision Research, vol. 38, pp. 743-761, 1998.
[12] Y. Chen and N. Qian, "A coarse-to-fine disparity energy model with both phase-shift and position-shift receptive field mechanisms," Neural Computation, vol. 16, pp. 1545-1578, 2004.
[13] M. V. Srinivasan, M. Poteser, and K. Kral, "Motion detection in insect orientation and navigation," Vision Research, vol. 39, pp. 2749-2766, 1999.
[14] N. Priebe, C. Cassanello, and S. Lisberger, "The Neural Representation of Speed in Macaque Area MT/V5," Journal of Neuroscience, vol. 23, p. 5650, 2003.
[15] N. Priebe, S. Lisberger, and J. Movshon, "Tuning for Spatiotemporal Frequency and Speed in Directionally Selective Neurons of Macaque Striate Cortex," Journal of Neuroscience, vol. 26, pp. 2941-2950, 2006.
[16] J. Perrone, "A Single Mechanism Can Explain the Speed Tuning Properties of MT and V1 Complex Neurons," Journal of Neuroscience, vol. 26, pp. 11987-11991, 2006.
[17] P. Thompson, "Discrimination of moving gratings at and above detection threshold," Vision Research, vol. 23, pp. 1533-1538, 1983.
[18] J. A. Perrone, "Simulating the speed and direction tuning of MT neurons using spatiotemporal tuned V1-neuron inputs," Investigative Ophthalmology and Visual Science (Supplement), vol. 35, p. 2158, 1994.
| 3221 |@word neurophysiology:1 kong:3 seems:1 grey:1 mammal:1 solid:2 disparity:27 tuned:44 imaginary:3 current:1 comparing:3 recovered:1 ust:1 reminiscent:2 enables:1 plot:2 discrimination:7 v:1 cue:1 half:2 nervous:1 plane:1 location:3 along:3 constructed:4 become:1 combine:2 autocorrelation:1 expected:1 behavior:1 inspired:1 freeman:2 actual:1 considering:1 circuit:1 temporal:39 thorough:1 unit:4 grant:2 positive:1 engineering:1 vertically:1 despite:1 encoding:2 modulation:1 might:1 black:1 examined:1 suggests:2 co:3 limited:1 range:4 block:2 displacement:1 area:3 thought:3 gabor:9 composite:1 cascade:4 suggest:1 context:2 restriction:1 equivalent:2 demonstrated:1 shi:1 center:14 independently:2 thompson:1 pure:3 qian:2 array:4 utilizing:1 population:9 traditionally:1 construction:1 origin:1 velocity:64 approximated:1 particularly:1 santen:1 bottom:2 role:2 observed:1 calculate:1 region:1 connected:1 decrease:2 highest:1 environment:1 complexity:1 ideally:2 purely:3 upon:3 america:4 distinct:1 fast:11 investigative:1 deangelis:1 artificial:1 neighborhood:2 whose:6 pref:3 larger:9 valued:6 plausible:1 ability:1 directionally:1 advantage:1 analytical:1 propose:1 lam:1 subtracting:1 product:2 neighboring:3 combining:6 rapidly:1 extending:2 diverges:1 produce:3 perfect:1 derive:1 recurrent:2 nearest:1 odd:5 grating:19 implemented:2 indicate:1 direction:1 closely:1 correct:1 filter:22 human:1 anzai:1 biological:2 summation:1 mathematically:1 adjusted:3 im:4 strictly:2 hold:1 fil:1 around:7 considered:5 exp:15 achieves:3 early:1 jx:1 inseparable:1 continuum:1 estimation:1 sensitive:1 council:1 largest:1 rough:1 kowloon:1 varying:1 consistently:2 hk:1 contrast:1 realizable:1 bergen:1 squaring:2 selective:2 pixel:3 overall:1 orientation:2 insect:2 univeristy:1 spatial:54 summed:1 field:16 equal:3 never:1 having:1 extraction:1 represents:1 broad:1 adelson:1 stimulus:11 perpendicularly:1 retina:1 delayed:3 phase:58 maintain:1 ab:1 detection:3 organization:1 investigate:1 possibility:2 circular:1 evaluation:1 adjust:1 truly:1 navigation:1 accurate:1 orthogonal:4 re:4 circle:2 causal:5 desired:1 column:7 disadvantage:1 delbruck:3 delay:13 varies:1 spatiotemporal:5 combined:7 systematic:1 connecting:1 squared:4 again:1 central:1 worse:1 creating:1 doubt:1 potential:1 sinusoidal:3 coding:1 depends:2 sine:9 wave:14 start:1 slope:1 elaborated:1 square:5 formed:1 accuracy:7 modelled:2 accurately:1 history:1 detector:24 explain:1 rosenblith:1 ed:1 energy:46 frequency:79 pp:17 improves:1 stanley:1 amplitude:6 actually:1 appears:1 higher:3 response:30 photosensors:3 evaluated:1 stage:2 binocular:4 correlation:3 hand:1 horizontal:3 replacing:1 ohzawa:2 concept:2 inspiration:1 symmetric:3 white:1 adjacent:1 sin:3 ambiguous:1 steady:1 coincides:1 hong:3 demonstrate:1 motion:69 image:7 wise:1 recently:1 functional:3 mt:7 extend:2 analog:1 silicon:1 tuning:41 consistency:2 dot:6 moving:4 cortex:5 longer:1 dominant:2 recent:2 watson:1 exploited:1 seen:1 greater:1 determine:2 signal:1 semi:1 dashed:3 neurally:1 simoncelli:1 faster:1 match:1 characterized:1 cross:2 long:1 bertram:1 ae:6 essentially:1 vision:5 physically:1 sometimes:1 represent:2 cell:31 whereas:1 addition:1 fine:1 photosensor:5 flow:1 ee:1 near:2 symmetrically:1 ter:1 intermediate:4 switch:3 variety:1 architecture:5 displacing:1 opposite:3 shift:33 fleet:1 whether:1 motivated:1 movshon:1 stereo:1 speaking:2 passing:1 york:1 clear:1 amount:1 band:1 percentage:3 shifted:2 sign:1 estimated:1 stereoscopic:1 per:1 neuroscience:3 bulk:1 discrete:2 
vol:17 srinivasan:1 demonstrating:1 threshold:1 v1:6 sum:6 fourth:2 place:4 extends:1 electronic:1 decision:3 coherence:7 comparable:3 followed:1 dash:1 spacetime:1 occur:1 optic:1 speed:10 separable:7 optical:4 department:1 combination:4 perrone:2 unity:1 modification:1 taken:1 equation:2 previously:2 describing:1 sperling:1 mechanism:13 operation:1 away:1 appropriate:1 simulating:1 slower:2 drifting:5 denotes:1 top:1 especially:1 widest:1 society:4 move:1 already:1 question:1 occurs:1 arrangement:1 receptive:16 primary:1 v5:1 striate:1 diagonal:2 exhibit:2 distance:3 extent:2 water:1 assuming:2 negative:1 priebe:2 reliably:3 vertical:2 neuron:48 displaced:3 communication:1 frame:1 intensity:1 wraparound:1 pair:15 cast:1 connection:1 established:1 macaque:2 suggested:1 below:4 usually:1 perception:1 built:1 reliable:1 including:1 belief:1 natural:1 indicator:2 technology:1 eye:3 temporally:1 orthogonally:2 extract:1 reichardt:10 literature:2 acknowledgement:1 checking:1 multiplication:1 relative:4 interesting:1 filtering:1 proportional:1 degree:1 consistent:2 principle:1 row:6 course:1 supported:1 bias:10 side:1 wide:2 wagner:1 yiu:1 van:1 boundary:9 dimension:2 curve:1 depth:1 contour:2 computes:1 sensory:2 commonly:2 transaction:1 preferred:17 confirm:1 assumed:1 spatio:13 continuous:3 bay:1 symmetry:1 ahumada:1 complex:6 constructing:4 border:1 profile:2 complementary:1 convey:1 neuronal:1 fig:12 slow:10 wiley:1 position:39 heeger:3 lie:1 third:4 recurrently:1 offset:3 sensing:1 evidence:1 supplement:1 magnitude:8 chen:1 suited:1 simply:1 visual:8 corresponds:1 lisberger:2 determines:2 satisfies:1 shared:1 man:1 content:1 change:2 determined:4 except:1 discriminate:4 pas:1 experimental:3 indicating:1 tested:1 phenomenon:1 eebert:1 |
2,450 | 3,222 | Heterogeneous Component Analysis
Shigeyuki Oba1, Motoaki Kawanabe2, Klaus-Robert Müller3,2, and Shin Ishii4,1
1. Graduate School of Information Science, Nara Institute of Science and Technology, Japan
2. Fraunhofer FIRST.IDA, Germany
3. Department of Computer Science, Technical University Berlin, Germany
4. Graduate School of Informatics, Kyoto University, Japan
[email protected]
Abstract
In bioinformatics it is often desirable to combine data from various measurement
sources and thus structured feature vectors are to be analyzed that possess different
intrinsic blocking characteristics (e.g., different patterns of missing values, observation noise levels, effective intrinsic dimensionalities). We propose a new machine learning tool, heterogeneous component analysis (HCA), for feature extraction in order to better understand the factors that underlie such complex structured
heterogeneous data. HCA is a linear block-wise sparse Bayesian PCA based not
only on a probabilistic model with block-wise residual variance terms but also on
a Bayesian treatment of a block-wise sparse factor-loading matrix. We study various algorithms that implement our HCA concept, extracting sparse heterogeneous
structure by obtaining common components for the blocks and specific components within each block. Simulations on toy and bioinformatics data underline the
usefulness of the proposed structured matrix factorization concept.
1 Introduction
Microarray and other high-throughput measurement devices have been applied to examine specimens such as cancer tissues of biological and/or clinical interest. The next step is to go towards
combinatorial studies in which tissues measured by two or more such devices are simultaneously analyzed. However, such combinatorial studies inevitably suffer from differences in experimental conditions or, even more complex, from different measurement technologies. Also, when concatenating a data set from different measurement sources, we often observe systematic missing parts in the dataset (e.g., Fig 3A). Moreover, the noise levels may vary among different experiments. All of these induce a heterogeneous structure in the data that needs to be treated appropriately. Our work contributes exactly to this topic by proposing a Bayesian method for feature subspace extraction,
called heterogeneous component analysis (HCA, sections 2 and 3). HCA performs a linear feature
extraction based on matrix factorization in order to obtain a sparse and structured representation.
After relating to previous methods (section 4), HCA is applied to toy data and more interestingly
to neuroblastoma data from different measurement techniques (section 5). We obtain interesting
factors that may be a first step towards better biological model building.
2 Formulation of the HCA problem
Let a matrix $Y = \{y_{ij}\}_{i=1:M,\, j=1:N}$ denote a set of $N$ observations of $M$-dimensional feature vectors, where $y_{ij} \in \mathbb{R}$ is the $j$-th observation of the $i$-th feature. In a heterogeneous situation, we assume the $M$-dimensional feature vector is decomposed into $L$ disjoint blocks. Let $I^{(l)}$ denote the set of feature indices included in the $l$-th block, so that $I^{(1)} \cup \cdots \cup I^{(L)} = I$ and $I^{(l)} \cap I^{(l')} = \emptyset$ for $l \neq l'$.
Figure 1: An illustration of a typical dataset and the result by the HCA. The observation matrix
Y consists of multiple samples j = 1, . . . , N with high-dimensional features i ? I. The features
consist of multiple blocks, in this case $I^{(1)} \cup I^{(2)} \cup I^{(3)} = I$. There are many missing observations whose distribution is highly structured, depending on the block. HCA optimally factorizes the matrix Y so that the factor-loading matrix U has structural sparseness; it includes some regions of zero
elements according to the block structure of the observed data. Each factor may or may not affect all
the features within a block, but each block does not necessarily affect all the factors. Therefore, the
rank of each factor-loading sub-matrix for each block (or any set of blocks) can be different from
the others. The resulting block-wise sparse matrix reflects a characteristic heterogeneity of features
over blocks.
We assume that the matrix $Y \in \mathbb{R}^{M \times N}$ is a noisy observation of a matrix of true values $X \in \mathbb{R}^{M \times N}$ whose rank is $K\,(< \min(M, N))$ and that has a factorized form:
$$Y = X + E, \qquad X = U V^T, \quad (1)$$
where $E \in \mathbb{R}^{M \times N}$, $U \in \mathbb{R}^{M \times K}$, and $V \in \mathbb{R}^{N \times K}$ are matrices of residuals, factor-loadings, and factors, respectively. The superscript $T$ denotes the matrix transpose. There may be missing or unmeasured observations, denoted by a matrix $W \in \{0,1\}^{M \times N}$, which indicates that observation $y_{ij}$ is missing if $w_{ij} = 0$ or exists otherwise ($w_{ij} = 1$).
Figure 1 illustrates the concept of HCA. In this example, the observed data matrix (left panel) is
made up by three blocks of features. They have block-wise variation in effective dimensionalities, missing rates, observation noise levels, and so on, which we overall call heterogeneity. Such
heterogeneity affects the effective rank of the observation sub-matrix corresponding to each block,
and hence leads naturally to different ranks of factor-loading sub-matrix between blocks. In addition, there can exist block-wise patterns of missing values (shadowed rectangular regions in the
left panel); such a situation would occur, for example in bioinformatics, when some particular genes
have been measured in one assay (constituting a block) but not in another assay (constituting another
block).
To better understand the objective data based on the feature extraction by matrix factorization, we
assume a block-wise sparse factor-loading matrix U (right panel in Fig.1). Namely, the effective
rank of an observation sub-matrix corresponding to a block is reflected by the number of non-zero
components in the corresponding rows of U . Assuming such a block-wise sparse structure can
decrease the model?s effective complexity, and will describe the data better and therefore lead to
better generalization ability, e.g., for missing value prediction.
3 A probabilistic model for HCA
Model  For each element of the residual matrix, $e_{ij} \equiv y_{ij} - \sum_{k=1}^{K} u_{ik} v_{jk}$, we assume a Gaussian distribution with a common variance $\sigma_l^2$ for every feature $i$ in the same block $I^{(l)}$:
$$\ln p(e_{ij} \mid \sigma_{l(i)}^2) = -\frac{1}{2}\sigma_{l(i)}^{-2} e_{ij}^2 - \frac{1}{2}\ln \sigma_{l(i)}^2 - \frac{1}{2}\ln 2\pi, \quad (2)$$
where $l(i)$ denotes the pre-determined block index to which the $i$-th feature belongs. For a factor matrix $V$, we assume a Gaussian prior:
$$\ln p(V) = \sum_{j=1}^{N} \sum_{k=1}^{K} \left( -\frac{1}{2} v_{jk}^2 - \frac{1}{2}\ln 2\pi \right). \quad (3)$$
The above two assumptions are exactly the same as those for probabilistic PCA, which is a special case
of HCA with a single active block. Another special case where each block contains only one active
feature is probabilistic factor analysis (FA). Namely, maximum likelihood (ML) estimation based
on the following log-likelihood includes both the PCA and the FA as special settings of the blocks.
$$\ln p(Y, V \mid U, \sigma^2) = \frac{1}{2}\sum_{ij} w_{ij}\left( -\sigma_{l(i)}^{-2} e_{ij}^2 - \ln \sigma_{l(i)}^2 - \ln 2\pi \right) + \frac{1}{2}\sum_{jk} \left( -v_{jk}^2 - \ln 2\pi \right). \quad (4)$$
Here $\sigma^2 = (\sigma_l^2)_{l=1,\ldots,L}$ is a vector of the variances of all blocks. Since $w_{ij} = 0$ iff $y_{ij}$ is missing, the summation over $ij$ is actually taken over all observed values in $Y$.
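A direct numpy transcription of the log-likelihood (4), including the Gaussian prior on $V$, may clarify the block-wise bookkeeping; the sketch below assumes the feature-to-block assignment is given as an index array.

```python
import numpy as np

def hca_loglik(Y, W, U, V, sigma2, block_of_row):
    """Log-joint of Eq. (4): block-wise Gaussian residuals over the observed
    entries plus the standard-normal prior on the factors V.

    Y, W         : (M, N) data and observation mask (W[i, j] = 1 if observed)
    U, V         : (M, K) loadings and (N, K) factors
    sigma2       : (L,) block-wise residual variances
    block_of_row : (M,) array giving the block index l(i) of feature i
    """
    E = np.where(W > 0, Y - U @ V.T, 0.0)   # residuals; missing entries -> 0
    s2 = sigma2[block_of_row][:, None]       # sigma^2_{l(i)}, broadcast over j
    data = 0.5 * np.sum(W * (-E**2 / s2 - np.log(s2) - np.log(2 * np.pi)))
    prior = 0.5 * np.sum(-V**2 - np.log(2 * np.pi))
    return data + prior
```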
Another characteristic of the HCA model is the block-wise sparse factor-loading matrix, which is implemented by a prior for $U$, given by
$$\ln p(U \mid T) = \sum_{ik} \left( -\frac{1}{2} t_{ik}^{-1} u_{ik}^2 - \frac{1}{2}\ln 2\pi \right), \quad (5)$$
where $T = \{t_{ik}\}$ is a block-wise mask matrix which defines the block-wise sparse structure; if $t_{ik} = 0$, then $u_{ik} = 0$ with probability 1. Each column vector of the mask matrix takes one of
the possible block-wise mask patterns; a binary pattern vector whose dimensionality is the same as
the factor-loading vector, and whose values are consistent, either 0 or 1, within each block. When
there are $L$ blocks, each column vector of $T$ can take one of $2^L$ possible patterns including the zero vector, and hence, the matrix $T$ with $K$ columns can take one of $2^{LK}$ possible patterns.
Parameter estimation  We estimated the model parameters $U$ and $V$ by maximum a posteriori (MAP) estimation, and $\sigma$ by ML estimation; that is, the log-joint $\mathcal{L} \equiv \log P(Y, U, V \mid \sigma, T)$ was maximized w.r.t. $U$, $V$, and $\sigma$.
Maximization of the log-joint $\mathcal{L}$ w.r.t. $U$, $V$, and $\sigma$ was performed by the conjugate gradient algorithm available in the NETLAB toolbox [1]. The stationary condition w.r.t. the variance, $\partial \mathcal{L} / \partial(\sigma^2) = 0$, was solved as a closed form of $U$ and $V$:
$$\hat{\sigma}_l^2(U, V) \equiv \mathrm{mean}_{(i,j|l)}[e_{ij}^2], \quad (6)$$
where $\mathrm{mean}_{(i,j|l)}[\cdot]$ is the average over all pairs $(i,j)$ not missing in the $l$-th block. By redefining the objective function with the closed-form solution plugged in:
$$\tilde{\mathcal{L}}(U, V) \equiv \mathcal{L}(U, V, \hat{\sigma}^2(U, V)), \quad (7)$$
the conjugate gradient of $\tilde{\mathcal{L}}$ w.r.t. $U$ and $V$ led to faster and more stable optimization than the naive maximization of $\mathcal{L}$ w.r.t. $U$, $V$, and $\sigma^2$.
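The closed-form update of Eq. (6) is simply a per-block mean of squared residuals over the observed entries; substituting it back yields the concentrated objective of Eq. (7), so the conjugate gradient only has to move $U$ and $V$. A sketch:

```python
import numpy as np

def block_variances(Y, W, U, V, block_of_row):
    """Closed-form update of Eq. (6): per-block mean squared residual,
    averaged over the observed entries of that block only."""
    E2 = np.where(W > 0, (Y - U @ V.T) ** 2, 0.0)
    n_blocks = int(block_of_row.max()) + 1
    sigma2 = np.empty(n_blocks)
    for l in range(n_blocks):
        rows = block_of_row == l
        sigma2[l] = E2[rows].sum() / W[rows].sum()
    return sigma2
```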
Model selection  The mask matrix $T$ was determined by maximization of the log-marginal likelihood $\log \int \exp(\mathcal{L})\, dU\, dV$, which was calculated by a Laplace approximation around the MAP estimator:
$$E(T) \equiv \mathcal{L} - \frac{1}{2}\ln \det H, \quad (8)$$
where $H \equiv -\frac{\partial^2 \mathcal{L}}{\partial \theta\, \partial \theta^T}$ is the Hessian of the log-joint w.r.t. all elements ($\theta$) of the parameters $U$ and $V$.
The log-Hessian term, $\ln \det H$, which works as a penalty term for maintaining non-zero elements in the factor-loading matrix, was simplified in order to make the calculation tractable. Namely, independence in the log-joint was assumed:
$$\frac{\partial^2 \mathcal{L}}{\partial u_{ik}\, \partial v_{jk'}} \approx 0, \qquad \frac{\partial^2 \mathcal{L}}{\partial u_{ik}\, \partial u_{ik'}} \approx 0, \qquad \frac{\partial^2 \mathcal{L}}{\partial v_{jk}\, \partial v_{jk'}} \approx 0, \quad (9)$$
which enabled a tractable computation similar to variational Bayes (VB) and was expected to produce satisfactory results.
To avoid searching through an exponentially large number of possibilities, we implemented a greedy search that optimizes each of the column vectors in a step-wise manner, called the HCA-greedy algorithm. In each step of the HCA-greedy algorithm, factor-loading and factor vectors are estimated based on the $2^L$ possible settings of the block-wise mask vector, and we accept the one achieving the maximum log-marginal. The algorithm terminates if the zero vector is accepted as the best mask vector.
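The candidate set scanned at each greedy step can be enumerated directly; this sketch generates all $2^L$ block-wise mask vectors for a single factor-loading column.

```python
import itertools
import numpy as np

def candidate_masks(block_sizes):
    """All 2^L block-wise mask vectors for one factor-loading column;
    block_sizes = [|I_1|, ..., |I_L|]. Each vector is constant (0 or 1)
    within each block; the all-zero vector terminates the greedy search."""
    for bits in itertools.product([0, 1], repeat=len(block_sizes)):
        yield np.repeat(bits, block_sizes)

for mask in candidate_masks([2, 3]):
    print(mask)  # [0 0 0 0 0], [0 0 1 1 1], [1 1 0 0 0], [1 1 1 1 1]
```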
HCA with ARD  The greedy search still examines $2^L$ possibilities per factor, a computation that increases exponentially as the number of blocks $L$ increases. The automatic relevance determination
(ARD) is a hierarchical Bayesian approach for selecting relevant bases, which has been applied to
component analyzers since its first introduction to Bayesian PCA (BPCA) [2].
The prior for $U$ is given by
$$\ln p(U \mid \alpha) = \frac{1}{2}\sum_{l=1}^{L} \sum_{k=1}^{K} \sum_{i \in I_l} \left( -\alpha_{lk} u_{ik}^2 + \ln \alpha_{lk} - \ln 2\pi \right), \quad (10)$$
where $\alpha_{lk}$ is an ARD hyper-parameter for the $l$-th block of the $k$-th column of $U$. $\alpha$ is a vector of all elements $\alpha_{lk}$, $l = 1, \ldots, L$, $k = 1, \ldots, K$. With this prior, the log-joint probability density function becomes
$$\ln p(Y, U, V \mid \sigma^2, \alpha) = \frac{1}{2}\sum_{ij} w_{ij}\left( -\sigma_{l(i)}^{-2} e_{ij}^2 - \ln \sigma_{l(i)}^2 - \ln 2\pi \right) + \frac{1}{2}\sum_{jk}\left( -v_{jk}^2 - \ln 2\pi \right) + \frac{1}{2}\sum_{ik}\left( -\alpha_{l(i)k} u_{ik}^2 + \ln \alpha_{l(i)k} - \ln 2\pi \right). \quad (11)$$
According to this ARD approach, $\alpha$ is updated by the conjugate gradient-based optimization simultaneously with $U$ and $V$. In each step of the optimization, $\alpha$ was updated until the stationary condition of the log-marginal w.r.t. $\alpha$ approximately held.
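The paper does not spell out the form of the $\alpha$ update. As an illustrative stand-in, the sketch below uses the stationary point of the log-joint (11) w.r.t. $\alpha_{lk}$, namely $\alpha_{lk} = |I_l| / \sum_{i \in I_l} u_{ik}^2$; note that this is an assumption, not necessarily the exact stationary condition on the log-marginal used by the authors.

```python
import numpy as np

def ard_update(U, block_of_row, eps=1e-10):
    """Illustrative ARD update: stationary point of the log-joint (11)
    w.r.t. alpha_{lk}. A large alpha_{lk} shrinks the (l, k) sub-block
    of U toward zero."""
    n_blocks = int(block_of_row.max()) + 1
    alpha = np.empty((n_blocks, U.shape[1]))
    for l in range(n_blocks):
        rows = block_of_row == l
        alpha[l] = rows.sum() / (np.sum(U[rows] ** 2, axis=0) + eps)
    return alpha
```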
In HCA with ARD, called HCA-ARD, the initial values of U and V were obtained by SVD. We also
examined an ARD-based procedure with another initial value setting, i.e., starting from the result
obtained by HCA-greedy, which is signified by HCA-g+ARD.
4 Related work
In this work, the ideas from both probabilistic modeling of linear component analyzers and sparse
matrix factorization frameworks are combined into an analytical tool for data with underlying heterogeneous structures.
The weighted low-rank matrix factorization (WLRMF) [3] has been proposed as a minimization problem of the weighted error:
$$\min_{U,V} \sum_{i,j} w_{ij}\left( y_{ij} - \sum_{k} u_{ik} v_{jk} \right)^2, \quad (12)$$
where $w_{ij}$ is a weight for the element $y_{ij}$ of the observation matrix $Y$. The weight value is set as $w_{ij} = 0$ if the corresponding $y_{ij}$ is missing, or $w_{ij} > 0$ otherwise. This objective function is equivalent to the (negative) log-likelihood of a probabilistic generative model based on the assumption that each element of the residual matrix obeys a Gaussian distribution with variance $1/w_{ij}$. The WLRMF objective function is equivalent to our log-likelihood function (4) if the weight is set to the estimated inverse noise variance for each $(i,j)$-th element. Although the prior term, $\ln p(V) = -\frac{1}{2}\sum_{jk} v_{jk}^2 + \mathrm{const.}$, has been added in eq. (4), it just imposes a constraint on the linear indeterminacy between $U$ and $V$, and hence the resultant low-rank matrix $UV^T$ is identical to that of WLRMF.
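For reference, the WLRMF objective of Eq. (12) is a one-liner once missing entries are treated as weight zero; a sketch:

```python
import numpy as np

def wlrmf_objective(Y, W, U, V):
    """Weighted squared error of Eq. (12); missing entries carry weight 0."""
    R = np.where(W > 0, Y - U @ V.T, 0.0)
    return np.sum(W * R ** 2)
```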
Bayesian PCA [2] is also a matrix factorization procedure, which includes a characteristic prior density for the factor-loading vectors, $\ln p(U \mid \alpha) = -\frac{1}{2}\sum_{ik} \alpha_k u_{ik}^2 + \mathrm{const}$. It is an equivalent prior to that of
HCA-ARD (eq. (10)) if we assume only a single block. Although this prior term is obviously a simple L2 norm, as in WLRMF, it also includes the hyper-parameter $\alpha$, which constitutes a different regularization term and leads to automatic model (intrinsic dimensionality) selection when $\alpha$ is determined by the evidence criterion.

Figure 2: Experimental results when applied to an artificial data matrix. (A) Missing pattern of the observation matrix. Vertical and horizontal axes correspond to rows (typically, genes) and columns (typically, samples) of the matrix (typically, a gene expression matrix). Red cells signify missing elements. (B) True factor-loading matrix. The horizontal axis denotes factors. Color and its intensity denote element values, and white cells denote zero elements. Panels (C) to (H) show the factor-loading matrices estimated by SVD, WLRMF, BPCA, HCA-greedy, HCA-ARD, and HCA-g+ARD, respectively. The vertical line in panel (F) denotes the automatically determined number of components. Panel (I) shows missing value prediction performance obtained by the three HCA algorithms and the other methods. The vertical and horizontal axes denote the normalized root mean square of test errors and the dimensionalities of factors, respectively.
Component analyzers with sparse factor-loadings have recently been investigated as sparse PCA (SPCA). In a well-established line of SPCA studies (e.g., [4]), a tradeoff problem is solved between understandability (sparsity of factor-loadings) and the reproducibility of the covariance matrix from the sparsified factor-loadings. In our HCA, the block-wise sparse factor-loading matrix is useful not only for understandability but also for generalization ability. The latter merit comes from the assumption that the observation includes uncertainty due to a small sample size, large noise, and missing observations, which have not been considered sufficiently in SPCA.
5 Experiments
Experiment 1: an artificial dataset  We prepared an artificial data set with an underlying block structure. For this we generated a $170 \times 9$ factor-loading matrix $U$ that included a pre-determined block structure (white vs. colored in Fig. 2(B)), and a $100 \times 9$ factor matrix $V$ by applying orthogonalization to factors sampled from a standard Gaussian distribution. The observation matrix $Y$ was produced by $UV^T + E$, where each element of $E$ was generated from a standard Gaussian. Then, missing values were artificially introduced according to the pre-determined block structure (Fig. 2(A)); a sketch of this generation procedure appears after the list below.
? Block 1 consisted of 20 features with randomly selected 10 % missing entries.
? Block 2 consisted of 50 features whose 50% columns were completely missing and the
remaining columns contained randomly selected 50% missing entries.
? Block 3 consisted of 100 features whose 20% columns were completely missing and the
remaining columns contained randomly selected 20% missing entries.
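A sketch of the generation procedure just described; the exact block-wise sparsity pattern of the true loading matrix in Fig. 2(B) is not reproduced here, and the random seed is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 170, 100, 9
block_sizes = [20, 50, 100]                       # blocks 1-3

U = rng.standard_normal((M, K))                   # the block-wise zeros of
                                                  # Fig. 2(B) would be imposed here
V, _ = np.linalg.qr(rng.standard_normal((N, K)))  # orthogonalized factors
Y = U @ V.T + rng.standard_normal((M, N))         # observation = U V^T + E

W = np.ones((M, N), dtype=int)                    # observation mask
edges = np.cumsum([0] + block_sizes)
for (s, e), col_p, cell_p in zip(zip(edges[:-1], edges[1:]),
                                 [0.0, 0.5, 0.2], [0.1, 0.5, 0.2]):
    W[s:e, rng.random(N) < col_p] = 0             # completely missing columns
    W[s:e] = W[s:e] * (rng.random((e - s, N)) >= cell_p)  # random cell misses
```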
We applied three HCA algorithms: HCA-greedy, HCA-ARD, and HCA-g+ARD, and three existing
matrix factorization algorithms: SVD, WLRMF and BPCA.
SVD  SVD calculated for a matrix whose missing values are imputed with zeros.
WLRMF [3]  The weights were set to 1 for the observed entries and 0 for the missing entries.
BPCA WLRMF with an ARD prior, called here BPCA, which is equivalent to HCA-ARD except
that all features are in a single active block (i.e., colored in Fig. 2(B)). We confirmed this
method exhibited almost the same performance as the VB-EM-based algorithm [5].
The generalization ability was evaluated on the basis of the estimation performance for artificially
introduced missing values. The estimated factor-loading matrices and missing value estimation
accuracies are shown in Figure 2. Factor-loading matrices based on WLRMF and BPCA were
obviously almost the same as that of SVD, because these three methods did not assume any
sparsity in the factor-loading matrix.
The HCA-greedy algorithm terminated at K = 10. The factor-loading matrix estimated by HCA-greedy showed an identical sparse structure to the one consisting of the top five factors in the true
factor-loadings. The sixth factor in the second block was not extracted, possibly because the second
block lacked information due to the large rate of missing values. This algorithm also happened to
extract one factor not included in the original factor-loadings, as the tenth one in the first block.
Although the HCA-ARD and HCA-g+ARD algorithms extracted good ones as the top three and four
factors, respectively, they failed to completely reconstruct the sparsity structure in other factors. As
shown in panel (I), however, such a poorly extracted structure did not increase the generalization
error, implying that the essential structure underlying the data was extracted well by the three HCA-based algorithms.
Reconstruction of missing values was evaluated by the normalized root mean square error: $\mathrm{NRMSE} \equiv \sqrt{\mathrm{mean}[(y - \hat{y})^2] / \mathrm{var}[y]}$, where $y$ and $\hat{y}$ denote true and estimated values, respectively; the mean is the average over all the missing entries and the variance is over all entries of the matrix.
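The corresponding metric as a function; y_true and y_pred range over the held-out missing entries, and the normalizing variance is taken over all entries of the matrix, as defined above.

```python
import numpy as np

def nrmse(y_true, y_pred, var_all_entries):
    """Normalized root mean square error over held-out missing entries."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2) / var_all_entries)
```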
Figure 2(I) shows the generalization ability of missing value predictions. SVD and WLRMF, which
incurred no penalty on extracting a large number of factors, exhibited the best results around K = 9,
but got worse as K increased, due to over-fitting. HCA-g+ARD showed the
best performance at K = 9, which was better than that obtained by all the other methods. HCA-greedy, HCA-ARD, and BPCA exhibited comparable performance at K = 9. At K = 2, . . . , 8, the
HCA algorithms performed better than BPCA. Namely, the sparse structure in the factor-loadings
tended to achieve better performance. HCA-ARD performed less effectively than the other two HCA
algorithms at K > 13, because of convergence to local solutions. This explanation is supported by the fact that HCA-g+ARD, which employs a good initialization from HCA-greedy, exhibited the best performance
among all the HCA algorithms. Accordingly, HCA showed a better generalization ability with a
smaller number of effective parameters than the existing methods.
Figure 3: Analysis of an NBL dataset. Vertical axes denote high-dimensional features. Features
measured by array CGH technology are sorted in the chromosomal order. Microarray features are
sorted by correlations to the sample's prognosis, dead or alive at the end of clinical follow-up. (A)
Missing pattern in the NBL dataset. White and red colors denote observed and missing entries in
the data matrix, respectively. (B) and (C) Factor-loading matrices estimated by the HCA-greedy and
WLRMF algorithms, respectively.
Experiment 2: a cross-analysis of neuroblastoma data We next applied our HCA to a neuroblastoma (NBL) dataset consisting of three data blocks taken by three kinds of high-throughput
genomic measurement technologies.
Array CGH Chromosomal changes of 2340 DNA segments (using 2340 probes) were measured
for each of 230 NBL tumors, by using the array comparative genomic hybridization (array
CGH) technology. Data for 1000 probes were arbitrarily selected from the whole dataset.
Microarray 1 Expression levels of 5340 genes were measured for 136 tumors from NBL patients.
We selected 1000 genes showing the largest variance over the 136 tumors.
Microarray 2 Gene expression levels in 25 out of 136 tumors were also measured by a small-sized
microarray technology harboring 448 probes.
The dataset Microarray 1 was the same one as used in the previous study [6], and the other two
datasets, array CGH and Microarray 2, were also provided by the same research group for this
study. As seen in Figure 3(A), the set of measured samples was quite different in the three experiments, leading to apparent block-wise missing observations. We normalized the data matrix so that
the block-wise variances become unity. We further added 10% missing entries randomly into the
observed entries in order to evaluate missing value prediction performance.
When HCA-greedy was applied to this dataset, it terminated at K = 23, but we continued to obtain
further factors until K = 80. Figure 3(B) shows the factor-loading matrix from K = 0 to 23.
HCA-greedy extracted one factor showing the relationship between the three measurement devices
and three factors between aCGH and Microarray 1. The other factors accounted for either of aCGH
or Microarray 1. The first factor was strongly correlated with the patients' prognosis, as clearly shown
by the color code in the parts of Microarrays 1 and 2. Note that the features in these two datasets
are aligned by correlations to the prognosis. This suggests that the dataset Microarray 2 did not
include factors other than the first one as those strongly related to the prognosis. On the other hand,
WLRMF extracted the same first factor as HCA-greedy, but extracted many more factors concerning Microarray 2, all of which may not be trustworthy because the number of samples observed
in Microarray 2 was as small as 25.
x 10
Figure 4: Missing value prediction performance by the six algorithms. Vertical axis denotes normalized root mean square of training errors (A) or test errors (B and C). Horizontal axis denotes the
number of factors (A and B) or the number of non-zero elements in the factor-loading matrices (C).
Each curve corresponds to one of the six algorithms.
We also applied SVD, WLRMF, BPCA, and the other two HCA algorithms to the NBL dataset. For
WLRMF, BPCA, HCA-ARD, and HCA-g+ARD, the initial numbers of factors were set at K =
5, 10, 20, . . . , 70, and 80. Missing value prediction performance in terms of NRMSE was obtained
as a measure of generalization performance. Note that the original data matrix included
many missing values, but we evaluated the performance by using artificially introduced missing
values. Figure 4 shows the results.
Training errors almost monotonically decreased as the number of factors increased (Fig. 4A), indicating the stability of the algorithms. The only exception was HCA-ARD, whose error increased from K = 30 to K = 40; this was due to a local solution, because HCA-g+ARD, which employs the same algorithm but starts from a different initialization, showed consistent improvements in its performance.
Test errors did not show monotonic profiles except that HCA-greedy exhibited monotonically better
results for larger K values (Fig. 4B and C). SVD and WLRMF exhibited the best performance
at K = 22 and K = 60, respectively, and got worse as the number of factors increased due to
over-fitting.
Overall, the variants of our new HCA concept have shown good generalization performance as
measured on missing values, similar to existing methods like WLRMF. We would like to
emphasize, however, that HCA yields a clearer factor structure that is more easily interpretable from the
biological point of view.
6 Conclusion
Complex structured data are ubiquitous in practice. For instance, when we must integrate data derived from different measurement devices, it becomes critically important to combine the information in each single source optimally; otherwise no gain can be achieved beyond the individual
analyses. Our Bayesian HCA model allows us to take into account such structured feature vectors
that possess different intrinsic blocking characteristics. The new probabilistic structured matrix
factorization framework was applied to toy data and to neuroblastoma data collected by multiple
high-throughput measurement devices which had block-wise missing structures due to different
experimental designs. HCA achieved a block-wise sparse factor-loading matrix, representing the
information amount contained in each block of the dataset simultaneously. While HCA provided missing value prediction performance better than or similar to that of existing methods such as BPCA or WLRMF, the heterogeneous structure underlying the problem was captured much more clearly. Furthermore, the derived HCA factors are an interesting representation that may ultimately lead to a
better modeling of the neuroblastoma data (see section 5).
In the current HCA implementation, block structures were assumed to be known, as for the neuroblastoma data. Future work will address fully automatic estimation of the structure from measured multi-modal data and the respective model selection techniques needed to achieve this goal.
Clearly there is an increasing need for methods that are able to reliably extract factors from multimodal structured data with heterogeneous features. Our future effort will therefore strive towards applications beyond bioinformatics and towards designing novel structured spatio-temporal decomposition methods for applications like electroencephalography (EEG), image, and audio analyses.
Acknowledgement This work was supported by a Grant-in-Aid for Young Scientists (B) No.
19710172 from MEXT Japan.
References
[1] I. Nabney and C. Bishop. Netlab: neural network software. http://www.ncrg.aston.ac.uk/netlab/, 1995.
[2] C. M. Bishop. Bayesian PCA. In Proceedings of the 11th Conference on Advances in Neural Information Processing Systems, pages 382-388. MIT Press, Cambridge, MA, USA, 1999.
[3] N. Srebro and T. Jaakkola. Weighted low rank matrix approximations. In Proceedings of the 20th International Conference on Machine Learning, pages 720-727, 2003.
[4] A. d'Aspremont, F. R. Bach, and L. El Ghaoui. Full regularization path for sparse principal component analysis. In Proceedings of the 24th International Conference on Machine Learning, 2007.
[5] S. Oba, M. Sato, I. Takemasa, M. Monden, K. Matsubara, and S. Ishii. A Bayesian missing value estimation method for gene expression profile data. Bioinformatics, 19(16):2088-2096, 2003.
[6] M. Ohira, S. Oba, Y. Nakamura, E. Isogai, S. Kaneko, A. Nakagawa, T. Hirata, H. Kubo, T. Goto, S. Yamada, Y. Yoshida, M. Fuchioka, S. Ishii, and A. Nakagawara. Expression profiling using a tumor-specific cDNA microarray predicts the prognosis of intermediate risk neuroblastomas. Cancer Cell, 7(4):337-350, Apr 2005.
| 3222 |@word loading:32 norm:1 underline:1 simulation:1 covariance:1 decomposition:1 initial:3 contains:1 selecting:1 interestingly:1 existing:5 current:1 ida:1 trustworthy:1 interpretable:1 v:1 stationary:2 greedy:18 generative:1 device:5 selected:5 implying:1 accordingly:1 yamada:1 colored:2 num:1 contribute:1 five:1 vjk:9 become:1 ik:3 consists:1 combine:2 fitting:2 manner:1 mask:6 expected:1 examine:1 multi:1 decomposed:1 automatically:1 electroencephalography:1 increasing:1 becomes:2 provided:2 xx:1 moreover:1 underlying:4 panel:7 factorized:1 kind:1 proposing:1 temporal:1 every:1 exactly:2 rm:2 uk:1 underlie:1 grant:1 scientist:1 local:2 path:1 approximately:1 initialization:2 examined:1 suggests:1 factorization:8 graduate:2 obeys:1 practice:1 block:61 implement:1 procedure:2 shin:1 got:2 pre:3 induce:1 selection:3 context:1 applying:1 risk:1 www:1 equivalent:4 map:2 missing:44 go:2 yoshida:1 starting:2 rectangular:1 estimator:1 continued:1 array:6 enabled:1 stability:1 searching:1 unmeasured:1 variation:1 laplace:1 updated:2 element:13 jk:3 predicts:1 blocking:2 observed:6 solved:2 region:2 decrease:1 complexity:1 ultimately:1 segment:1 completely:3 basis:1 multimodal:1 joint:5 various:2 describe:1 effective:6 artificial:3 klaus:1 hyper:2 whose:9 quite:1 apparent:1 larger:1 otherwise:3 reconstruct:1 ability:5 noisy:1 superscript:1 obviously:2 analytical:1 propose:1 reconstruction:1 relevant:1 aligned:1 iff:1 reproducibility:1 poorly:1 achieve:2 convergence:1 produce:1 comparative:2 depending:1 clearer:1 ac:1 acgh:2 ard:31 measured:9 ij:3 school:2 indeterminacy:1 eq:2 implemented:2 come:1 motoaki:1 hirata:1 generalization:8 biological:3 summation:1 yij:8 around:2 considered:1 sufficiently:1 vary:1 estimation:7 tik:3 combinatorial:2 largest:1 tool:2 reflects:1 weighted:3 minimization:1 mit:1 clearly:3 genomic:2 gaussian:5 avoid:1 factorizes:1 jaakkola:1 ax:3 derived:2 improvement:1 rank:8 indicates:1 likelihood:5 ishii:2 posteriori:1 el:1 typically:3 signified:1 accept:1 wij:10 germany:2 overall:2 among:2 denoted:1 special:3 marginal:3 extraction:4 identical:3 throughput:3 future:2 others:1 hca:70 randomly:4 simultaneously:3 individual:1 consisting:2 interest:1 highly:1 possibility:2 analyzed:2 held:1 respective:1 plugged:1 increased:3 column:10 modeling:2 instance:1 chromosomal:2 understandability:2 maximization:3 entry:11 usefulness:1 optimally:2 combined:1 density:2 international:2 probabilistic:7 systematic:1 informatics:1 possibly:1 worse:2 dead:1 strive:1 leading:1 toy:3 japan:3 account:1 includes:5 performed:3 root:3 view:1 closed:2 red:2 bayes:1 il:1 square:3 accuracy:1 variance:9 characteristic:4 maximized:1 correspond:1 yield:1 bayesian:9 produced:1 critically:1 confirmed:1 tissue:2 tended:1 sixth:1 naturally:1 resultant:1 sampled:1 gain:1 dataset:12 treatment:1 color:3 dimensionality:5 ubiquitous:1 actually:1 reflected:1 modal:1 formulation:1 evaluated:3 strongly:2 furthermore:1 just:1 until:2 correlation:2 hand:1 horizontal:4 christopher:1 e2ij:1 defines:1 usa:1 building:1 concept:4 true:5 normalized:4 consisted:3 hence:3 regularization:2 satisfactory:1 assay:2 white:3 criterion:1 performs:1 orthogonalization:1 image:1 wise:19 variational:1 novel:1 recently:1 common:2 jp:1 exponentially:2 ncrg:1 relating:1 cdna:1 measurement:10 cambridge:1 automatic:3 analyzer:3 had:1 stable:1 base:1 showed:4 belongs:1 optimizes:1 binary:1 arbitrarily:1 muller:1 seen:1 captured:1 specimen:1 monotonically:2 multiple:3 desirable:1 full:1 kyoto:1 technical:1 faster:1 determination:1 
calculation:1 bach:1 clinical:2 nara:1 cross:1 profiling:1 concerning:1 prediction:7 variant:1 heterogeneous:10 patient:2 achieved:2 cell:3 addition:1 signify:1 decreased:1 source:3 microarray:15 appropriately:1 posse:2 exhibited:6 goto:1 hybridization:1 call:1 extracting:2 structural:2 ohira:1 spca:3 intermediate:1 affect:3 independence:1 followup:1 prognosis:5 idea:1 tradeoff:1 microarrays:1 expression:5 pca:7 six:2 effort:1 penalty:2 suffer:1 hessian:2 constitute:1 useful:1 amount:1 prepared:1 dna:1 imputed:1 http:1 exist:1 happened:1 estimated:8 disjoint:1 per:1 nrmse:6 group:1 four:1 achieving:1 tenth:1 inverse:1 uncertainty:1 almost:3 vb:2 netlab:4 def:6 nabney:1 sato:1 occur:1 constraint:1 alive:1 software:1 min:2 department:1 structured:9 according:3 conjugate:3 smaller:1 em:1 character:1 unity:1 dv:1 ghaoui:1 taken:2 ln:22 merit:1 tractable:2 end:1 available:1 lacked:1 probe:3 observe:1 hierarchical:1 original:2 denotes:6 remaining:2 top:2 include:1 maintaining:1 const:2 neuroblastoma:7 objective:4 added:2 matsubara:1 fa:2 gradient:3 subspace:1 berlin:1 nbl:6 topic:1 collected:1 reason:1 assuming:1 code:1 index:2 relationship:1 illustration:1 robert:1 negative:1 design:2 implementation:1 reliably:1 vertical:5 observation:17 datasets:2 inevitably:1 sparsified:1 situation:2 heterogeneity:3 intensity:1 introduced:3 namely:4 pair:1 toolbox:1 redefining:1 established:1 naist:1 beyond:2 able:1 pattern:9 sparsity:3 shadowed:1 including:1 treated:1 nakamura:1 residual:4 representing:1 technology:6 aston:1 lk:5 axis:3 fraunhofer:1 aspremont:1 naive:1 extract:2 prior:9 l2:4 acknowledgement:1 fully:1 interesting:2 srebro:1 var:1 integrate:1 incurred:1 consistent:2 imposes:1 row:2 cancer:2 accounted:1 supported:2 transpose:1 understand:2 institute:1 sparse:17 curve:1 calculated:2 cgh:5 made:1 simplified:1 employing:2 constituting:2 emphasize:1 gene:7 ml:2 active:3 assumed:2 spatio:1 search:3 obtaining:1 eeg:1 investigated:1 complex:3 necessarily:1 artificially:3 did:4 pk:1 oba:2 apr:1 terminated:3 whole:1 noise:5 profile:2 fig:7 uik:7 aid:1 sub:4 concatenating:1 young:1 specific:2 bishop:2 showing:2 evidence:1 intrinsic:4 consist:1 exists:1 essential:1 effectively:1 illustrates:1 sparseness:1 easier:1 led:1 eij:5 failed:1 contained:3 monotonic:1 kaneko:1 corresponds:1 extracted:7 ma:1 sorted:2 sized:1 goal:1 towards:3 change:1 included:4 typical:1 determined:6 except:2 nakagawa:1 tumor:5 principal:1 called:4 accepted:1 experimental:3 svd:12 indicating:1 exception:1 mext:1 latter:1 bioinformatics:5 relevance:1 evaluate:1 audio:1 correlated:1 |
2,451 | 3,223 | Discovering Weakly-Interacting Factors in a Complex
Stochastic Process
Charlie Frogner
School of Engineering and Applied Sciences
Harvard University
Cambridge, MA 02138
[email protected]
Avi Pfeffer
School of Engineering and Applied Sciences
Harvard University
Cambridge, MA 02138
[email protected]
Abstract
Dynamic Bayesian networks are structured representations of stochastic processes. Despite their structure, exact inference in DBNs is generally intractable.
One approach to approximate inference involves grouping the variables in the
process into smaller factors and keeping independent beliefs over these factors.
In this paper we present several techniques for decomposing a dynamic Bayesian
network automatically to enable factored inference. We examine a number of features of a DBN that capture different types of dependencies that will cause error in
factored inference. An empirical comparison shows that the most useful of these
is a heuristic that estimates the mutual information introduced between factors
by one step of belief propagation. In addition to features computed over entire
factors, for efficiency we explored scores computed over pairs of variables. We
present search methods that use these features, pairwise and not, to find a factorization, and we compare their results on several datasets. Automatic factorization
extends the applicability of factored inference to large, complex models that are
undesirable to factor by hand. Moreover, tests on real DBNs show that automatic
factorization can achieve significantly lower error in some cases.
1 Introduction
Dynamic Bayesian networks (DBNs) are graphical model representations of discrete-time stochastic
processes. DBNs generalize hidden Markov models and are used for modeling a wide range of
dynamic processes, including gene expression [1] and speech recognition [2]. Although a DBN
represents the process's transition model in a structured way, all variables in the model might become
jointly dependent over the course of the process and so exact inference in a DBN usually requires
tracking the full joint probability distribution over all variables; it is generally intractable. Factored
inference approximates this joint distribution over all variables as the product of smaller distributions
over groups of variables (factors) and in this way enables tractable inference for large, complex
models. Inference algorithms based on this idea include Boyen-Koller [3], the Factored Frontier [4]
and Factored Particle Filtering [5].
Factored inference has generally been demonstrated for models that are factored by hand. In this
paper we will show that it is possible algorithmically to select a good factorization, thus not only
extending the applicability of factored inference to larger models, for which it might be undesirable
manually to choose a factorization, but also allowing for better (and sometimes "non-obvious")
factorizations. The quality of a factorization is defined by the amount of error incurred by repeatedly
discarding the dependencies between factors and treating them as independent during inference. As
such we formulate the goal of our algorithm as the minimization over factorizations of an objective
that describes the error we expect due to this type of approximation. For this purpose we have
examined a range of features that can be computed from the specification of the DBN, based both on
1
the underlying graph structure and on two essential conceptions of weak interaction between factors:
the degree of separability [6] and mutual information. For each principle we investigated a number
of heuristics. We find that the mutual information between factors that is introduced by one step of
belief state propagation is especially well-suited to the problem of finding a good factorization.
Complexity is an issue in searching for good factors, as the search space is large and the scoring
heuristics themselves are computationally intensive. We compare several search methods for finding
factors that allow for different tradeoffs between the efficiency and the quality of the factorization.
The fastest is a graph partitioning algorithm in which we find a k-way partition of a weighted graph
with edge-weights being pairwise scores between variables. Agglomerative clustering and local
search methods use the higher-order scores computed between whole factors, and are hence slower
while finding better factorizations. The more expensive of these methods are most useful when run
offline, for example when the DBN is to be used for online inference and one cares about finding
a good factorization ahead of time. We additionally give empirical results on two other real DBN
models as well as randomly-generated models. Our results show that dynamic Bayesian networks
can be decomposed efficiently and automatically, enabling wider applicability of factored inference.
Furthermore, tests on real DBNs show that using automatically found factors can in some cases yield
significantly lower error than using factors found by hand.
2 Background
A dynamic Bayesian network (DBN), [7] [8], represents a dynamic system consisting of some set of
variables that co-evolve in discrete timesteps. In this paper we are dealing with discrete variables.
We denote the set of variables in the system by X, with the canonical variables being those that
directly influence at least one variable in the next timestep. We call the probability distribution
over the possible states of the system at a given timestep the belief state. The DBN gives us the
probabilities of transitioning from any given system state at t to any other system state at time t + 1,
and it does so in a factored way: the probability that a variable takes on a given state at t + 1
depends only on the states of a subset of the variables in the system at t. We can hence represent
this transition model as a Bayesian network containing the variables in X at timestep t, denoted
Xt, and the variables in X at timestep t + 1, say Xt+1; this is called a 2-TBN (for two-timeslice
Bayesian network). By inferring the belief state over Xt+1 from that over Xt , and conditioning on
observations, we propagate the belief state through the system dynamics to the next timestep. The
specification of a DBN also includes a prior belief state at time t = 0.
Note that, although each variable at t + 1 may only depend on a small subset of the variables at t,
its state might be correlated implicitly with the state of any variable in the system, as the influence
of any variable might propagate through intervening variables over multiple timesteps. As a result,
the whole belief state over X (at a given timestep) in general is not factored. Boyen and Koller, [3],
find that, despite this fact, we can factor the system into components whose belief states are kept
independently, and the error incurred by doing so remains bounded over the course of the process.
The BK algorithm hence approximates the belief state at a given timestep as the product of the
local belief states for the factors (their marginal distributions), and does exact inference to propagate
this approximate belief state to the next timestep. Both the Factored Frontier, [4], and Factored
Particle, [5], algorithms also rely on this idea of a factored belief state representation.
In [9] and [6], Pfeffer introduced conditions under which a single variable's (or factor's) marginal
distribution will be propagated accurately through belief state propagation, in the BK algorithm. The
degree of separability is a property of a conditional probability distribution that describes the degree
to which that distribution can be decomposed as the sum of simpler conditional distributions, each
of which depends on only a subset of the conditioning variables. For example, let p(Z|XY ) give
the probability distribution for Z given X and Y . If p(Z|XY ) is separable in terms of X and Y to
a degree σ, this means that we can write

p(Z|XY) = σ [ α pX(Z|X) + (1 − α) pY(Z|Y) ] + (1 − σ) pXY(Z|XY)    (1)

for some conditional probability distributions pX(Z|X), pY(Z|Y), and pXY(Z|XY) and some
mixing parameter α. We will say that the degree of separability is the maximum σ such that there exist
pX(Z|X), pY(Z|Y), and pXY(Z|XY) and α that satisfy (1). [9] and [6] have shown that if a
system is highly separable then the BK algorithm produces low error in the components' marginal
distributions.
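Equation (1) is easy to verify constructively. The following is a minimal numpy sketch in our own notation (σ and α as reconstructed above, not the optimization procedure of [12]): it composes a CPT from separable components, which by construction is separable to degree at least σ, and checks that the result is still a valid conditional distribution.

```python
import numpy as np

def compose_separable(sigma, alpha, p_x, p_y, p_xy):
    """Equation (1): p_x[x, z] = pX(z|x), p_y[y, z] = pY(z|y),
    p_xy[x, y, z] = pXY(z|x, y); sigma is the degree of separability,
    alpha the mixing weight between the single-parent components."""
    sep = alpha * p_x[:, None, :] + (1 - alpha) * p_y[None, :, :]
    return sigma * sep + (1 - sigma) * p_xy

def random_cpt(rng, *shape):
    t = rng.random(shape)
    return t / t.sum(axis=-1, keepdims=True)  # normalize over the child axis

rng = np.random.default_rng(0)
p = compose_separable(0.8, 0.3, random_cpt(rng, 2, 2),
                      random_cpt(rng, 2, 2), random_cpt(rng, 2, 2, 2))
assert np.allclose(p.sum(axis=-1), 1.0)       # still a valid CPT
```

Finding the maximum σ for a given p(Z|XY) is the harder, constrained-optimization direction treated in [12]; the sketch above only goes the easy way, from components to the composed distribution.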
Previous work has explored bounds on the error encountered by the BK algorithm. [3] showed that
the error over the course of a process is bounded with respect to the error incurred by repeatedly
projecting the exact distribution onto the factors as well as the mixing rate of the system, which can
be thought of as the rate at which the stochasticity of the system causes old errors to be forgotten. [10]
analyzed the error introduced between the exact distribution and the factored distribution by just one
step of belief propagation. The authors noted that this error can be decomposed as the sum of
conditional mutual information terms between variables in different factors and showed that each
such term is bounded with respect to the mixing rate of the subsystem comprising the variables in
that term. Computing the value of this error decomposition, unfortunately, requires one to examine
a distribution over all of the variables in the model, which can be intractable. Along with other
heuristics, we examined two approaches to automatic factorization that seek directly to exploit the
above results, labeled in-degree and out-degree in Table 1.
3 Automatic factorization with pairwise scores
We first investigated a collection of features, computable from the specification of the DBN, that
capture different types of pairwise dependencies between variables. These features are based both
on the 2-TBN graph structure and on two conceptions of interaction: the degree of separability
and mutual information. These methods allow us to factorize a DBN without computing expensive
whole-factor scores.
3.1 Algorithm: Recursive min-cut
We use the following algorithm to find a factorization using only scores between pairs of variables.
We build an undirected graph over the canonical variables in the DBN, weighting each edge between
two variables with their pairwise score. An obvious algorithm for finding a partition that minimizes
pairwise interactions between variables in different factors would be to compute a k-way min-cut,
taking, say, the best-scoring such partition in which all factors are below a size limit. Unfortunately,
on larger models this approach underperforms, yielding many partitions of size one. Instead we
find that a good factorization can be achieved by computing a recursive min-cut, recurring until all
factors are smaller than the pre-defined maximum size. We begin with all variables in a single factor.
As long as there exists a factor whose weight is larger than the maximum, we do the following. For
each factor that is too large, we search over the number of smaller factors, k, into which to divide
the large factor, for each k computing the k-way min-cut factorization of the variables in the large
factor. In our experiments we use a spectral graph partitioning algorithm (e.g. [11]). We choose the k
that minimizes the overall sum of between-factor scores. This is repeated until all factors are of sizes
less than the maximum. This min-cut approach is designed only to use scores computed between
pairs of variables, and so it sacrifices optimality for significant speed gains.
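A minimal sketch of this loop follows, assuming a symmetric matrix W of pairwise scores. For brevity it recursively bisects with a spectral cut (the sign of the Fiedler vector of the graph Laplacian) rather than searching over k as described above, so it is a simplification of the actual procedure.

```python
import numpy as np

def spectral_bisect(W, nodes):
    idx = np.ix_(nodes, nodes)
    A = W[idx]
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian of the subgraph
    _, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]                    # second-smallest eigenvector
    left = [n for n, v in zip(nodes, fiedler) if v < 0]
    right = [n for n in nodes if n not in left]
    if not left or not right:               # degenerate cut: split in half
        left, right = nodes[:len(nodes) // 2], nodes[len(nodes) // 2:]
    return left, right

def recursive_min_cut(W, max_size):
    pending = [list(range(W.shape[0]))]
    done = []
    while pending:
        f = pending.pop()
        if len(f) <= max_size:
            done.append(f)
        else:
            pending.extend(spectral_bisect(W, f))
    return done

rng = np.random.default_rng(1)
W = np.abs(rng.normal(size=(8, 8)))
W = (W + W.T) / 2                           # symmetric pairwise-score matrix
print(recursive_min_cut(W, max_size=3))
```

In place of the random W, a score matrix built from any of the pairwise heuristics of Section 3.2 would be used.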
3.2 Pairwise scores
Graph structure
As a baseline in terms of speed and simplicity, we first investigated three types of pairwise graph
relationships between variables that are indicative of different types of dependency.
- Children of common parents. Suppose that two variables at time t + 1, Xt+1 and Yt+1, depend
on some common parents Zt. As X and Y share a common, direct influence, we might expect
them to become correlated over the course of the process. The score between X and Y is the
number of parents they share in the 2-TBN.
- Parents of common children. Suppose that Xt and Yt jointly influence common children Zt+1.
Then we might care more about any correlations between X and Y, because they jointly influence Z. If X and Y are placed in separate factors, then the accuracy of Z's marginal distribution
will depend on how correlated X and Y were. Here the score between X and Y is the number
of children they share in the 2-TBN.
- Parent to child. If Yt+1 directly depends on Xt, or Xt+1 on Yt, then we expect them to be
correlated. The score between X and Y is the number of edges between them in the 2-TBN.
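A minimal sketch of these three scores, assuming the 2-TBN is represented as a mapping from each canonical variable to the set of its parents in the previous timeslice (the representation and variable names are ours):

```python
import numpy as np

def structure_scores(parents, variables):
    """parents[v] is the set of previous-timeslice parents of variable v."""
    n = len(variables)
    idx = {v: i for i, v in enumerate(variables)}
    common_par = np.zeros((n, n))   # children of common parents
    common_ch = np.zeros((n, n))    # parents of common children
    par_child = np.zeros((n, n))    # parent-to-child edges
    for x in variables:
        for y in variables:
            if x == y:
                continue
            i, j = idx[x], idx[y]
            common_par[i, j] = len(parents[x] & parents[y])
            common_ch[i, j] = sum(1 for z in variables
                                  if x in parents[z] and y in parents[z])
            par_child[i, j] = int(y in parents[x]) + int(x in parents[y])
    return common_par, common_ch, par_child

parents = {"A": {"A", "B"}, "B": {"B"}, "C": {"A", "C"}}
print(structure_scores(parents, ["A", "B", "C"])[0])
```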
Degree of separability
The degree of separability for a given factor's conditional distribution in terms of the other factors
gives a measure of how accurately the belief state for that factor will be propagated via that conditional distribution to the next timestep, in BK inference. When a factor's conditional distribution is
highly separable in terms of the other factors, ignored dependencies between the other factors lead
to relatively small errors in that factor's marginal belief state after propagation. We can hence use
the degree of separability as an objective to be maximized: we want to find the factorization that
yields the highest degree of separability for each factor's conditional distribution. Computing the
degree of separability is a constrained optimization problem, and [12] gives an approximate method
of solution. For distributions over many variables the degree of separability is quite expensive to
compute, as the number of variables in the optimization grows exponentially with the number of
discrete variables in the input conditional distribution. Computing the degree of separability for
a small distribution is, however, reasonably efficient. In adapting the degree of separability to a
pairwise score for the min-cut algorithm, we took two approaches.
- Separability of the pair's joint conditional distribution: We assign a score to the pair of canonical variables X and Y equal to the degree of separability for the joint conditional distribution
p(Xt+1 Yt+1 | Parents(Xt+1) ∪ Parents(Yt+1)). We want to maximize this value for variables
that are joined in a factor, as a high degree of separability implies that the error of the factor
marginal distribution after propagation in BK will be low. Note that the degree of separability is
defined in terms of groups of parent variables. If we have, for example, p(Z|W XY ), then this
distribution might be highly separable in terms of the groups XY and W , but not in terms of
W X and Y . If, however, p(Z|W XY ) is highly separable in terms of W , X and Y grouped separately, then it is at least as separable in terms of any other groupings. We compute the degree of
separability for the above joint conditional distribution in terms of the parents taken separately.
- Non-separability between parents of a common child: If two parents are highly non-separable in
a common child's conditional distribution, then the child's marginal distribution can be rendered
inaccurate by placing these two parents in different components. For two variables X and Y , we
refer to the shared children of Xt and Yt in timeslice t + 1 as Zt+1 . The strength of interaction
between X and Y is defined to be the average degree of non-separability for each variable in
Zt+1 in terms of its parents taken separately. The degree of non-separability is one minus the
degree of separability.
Mutual information
Whereas the degree of separability is a property of a single factor's conditional distribution, the
mutual information between two factors measures their joint dependencies. To compute it exactly
requires, however, that we obtain a joint distribution over the two factors. All we are given is a DBN
defining the conditional distribution over the next timeslice given the previous, and some initial
distribution over the variables at time 1. In order to obtain a suitable joint distribution over the
variables at t + 1 we must assume a prior distribution over the variables at time t. We therefore
examine several features based on the mutual information that we can compute from the DBN in
this way, to capture different types of dependencies.
- Mutual information after one timestep: We assume a prior distribution over the variables at time
t and do one step of propagation to get a marginal distribution over Xt+1 and Yt+1 . We then use
this marginal to compute the mutual information between X and Y , thus estimating the degree
of dependency between X and Y that results from one step of the process.
- Mutual information between timeslices t and t + 1: We measure the dependencies resulting from
X and Y directly influencing each other between timeslices: the more information Xt carries
about Yt+1 , the more we expect them to become correlated as the process evolves. Again, we
assume a prior distribution at time t and use this to obtain the joint distribution p(Yt+1 Xt),
from which we can calculate their mutual information. We sum the mutual information between
Xt and Yt+1 and that between Yt and Xt+1 to get the score.
- Mutual information from the joint over both timeslices: We take into account all possible direct
influences between X and Y, by computing the mutual information between the sets of variables
(Xt ∪ Xt+1) and (Yt ∪ Yt+1). As before, we assume a prior distribution at time t to compute a
joint distribution p((Xt ∪ Xt+1) ∪ (Yt ∪ Yt+1)), from which we can get the mutual information.
There are many possibilities for a prior distribution at time t. We can assume a uniform distribution,
in which case the resulting mutual information values are exactly those introduced by one step of
inference, as all variables are independent at time t. More costly would be to generate samples from
the DBN and to do inference, computing the average mutual information values observed over the
steps of inference. We found that, on small examples, there was little practical benefit to doing the
latter. For simplicity we use the uniform prior, although the effects of different prior assumptions
deserve further inquiry.
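The first of these scores can be made concrete with a small enumeration. The sketch below computes the mutual information between two binary children Xt+1 and Yt+1 after one step, using the fact that under a uniform, independent prior at time t it suffices to enumerate the joint states of the union of their parents; the interface (CPT arrays with shared parents listed first) is our own, purely illustrative choice.

```python
import numpy as np
from itertools import product

def mi_after_one_step(cpt_x, cpt_y, n_shared, n_x_only, n_y_only):
    """cpt_x has shape (2,)*(n_shared + n_x_only) and gives
    p(X_{t+1}=1 | parent states); shared parents come first. Likewise cpt_y."""
    joint = np.zeros((2, 2))
    states = list(product([0, 1], repeat=n_shared + n_x_only + n_y_only))
    for s in states:
        px1 = cpt_x[s[:n_shared] + s[n_shared:n_shared + n_x_only]]
        py1 = cpt_y[s[:n_shared] + s[n_shared + n_x_only:]]
        # X_{t+1} and Y_{t+1} are independent given the state at time t
        for x, y in product([0, 1], repeat=2):
            p = (px1 if x else 1 - px1) * (py1 if y else 1 - py1)
            joint[x, y] += p / len(states)
    px, py = joint.sum(1), joint.sum(0)
    return sum(joint[x, y] * np.log(joint[x, y] / (px[x] * py[y]))
               for x in (0, 1) for y in (0, 1) if joint[x, y] > 0)

rng = np.random.default_rng(2)
score = mi_after_one_step(rng.random((2, 2)), rng.random((2, 2)),
                          n_shared=1, n_x_only=1, n_y_only=1)
print(score)
```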
3.3 Empirical comparison
We compared the preceding pairwise scores by factoring randomly-generated DBNs, using the BK
algorithm for belief state monitoring. We computed two error measures. The first is the joint belief
state error, which is the relative entropy between the product of the factor marginal belief states and
the exact joint belief state. The second is the average factor belief state error, which is the average
over all factors of the relative entropy between each factor's marginal distribution and the equivalent
marginal distribution from the exact joint belief state. We were constrained in choosing datasets on
which exact inference is tractable, which limited both the number of state variables and the number
of parameters per variable. Note that in our tables the joint KL distance is always given in terms of
10^-2, while the factor marginal KL distance is in terms of 10^-4.
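As a small illustration of these measures, the relative entropy between an exact joint belief state and its factored approximation can be computed directly; the numbers below are invented for illustration.

```python
import numpy as np

def kl(p, q):
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

exact = np.array([[0.4, 0.1],          # an exact joint belief state p(X, Y)
                  [0.1, 0.4]])
factored = np.outer(exact.sum(1), exact.sum(0))   # product of marginals
print("joint belief state error:", kl(exact.ravel(), factored.ravel()))
```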
For this comparison we used two datasets. The first is a large, relatively uncomplicated dataset that
is intended to elucidate basic distinctions between the different heuristics. It consists of 400 DBNs,
each of which contains 12 binary-valued state variables and 4 noisy observation variables. We tried
to capture the tendency in real DBNs for variables to depend on a varying number of parents by
drawing the number of parents for each variable from a gaussian distribution of mean 2 and standard
deviation 1 (rounding the result and truncating at zero), and choosing parents uniformly from among
the other variables. In real models variables usually, but not always, depend on themselves in the
previous timeslice, and each variable in our networks also depended on itself with a probability of
0.75. Finally, the parameters for each variable were drawn randomly with a uniform prior.
The second dataset is intended to capture more complicated structures commonly seen in real DBNs:
determinism and context-specific independence. It consists of 50 larger models, each with 20 binary
state variables and 8 noisy observation variables. Parents and parameters were chosen as before, except that in this case we chose several variables to be deterministic, each computing a boolean
function of its parents, and several other variables to have tree-structured context-specific independence. To generate context-specific independence, the variable's parents were randomly permuted
and between one half and all of the parents were chosen each to induce independence between the
child variable and the parents lower in the tree, conditional upon one of its states.
The results are shown in Table 1. For reference we have shown two additional methods that minimize
the maximum out-degree and in-degree of factors. These are suggested by Boyen and Koller as a
means of controlling the mixing rate of factored inference, which is used to bound the error. In all
cases, the mutual-information based factorizations, and in particular the mutual information after
one timestep, yielded lower error, both in the joint belief state and in the factor marginal belief
states. The degree of separability is apparently not well-adapted to a pairwise score, given that it is
naturally defined in terms of an entire factor.
4 Exploiting higher-order interactions
The pairwise heuristics described above do not take into account higher-order properties of whole
groups of variables: the mutual information between two factors is usually not exactly the sum of its
constituent pairwise information relationships, and the degree of separability is naturally formulated
in terms of a whole factor?s conditional distribution and not between arbitrary pairs of variables.
Two search algorithms allow us to use scores computed for whole factors, and to find better factors
while sacrificing speed.
4.1 Algorithms: Agglomerative clustering and local search
Agglomerative clustering begins with all canonical variables in separate factors, and at each step
chooses a pair of factors to merge such that the score of the factorization is minimized. If a merger
leads to a factor of size greater than some given maximum, it is ignored. The algorithm stops when
no advantageous merger is found. As the factors being scored are always of relatively small size,
agglomerative clustering allows us to use full-factor scores.
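A sketch of this loop, assuming a function score(factorization) that returns the objective to be minimized (for instance, the summed between-factor mutual information); the toy score used here is a stand-in, not one of the heuristics above.

```python
def agglomerative(variables, score, max_size):
    factors = [frozenset([v]) for v in variables]
    while True:
        best, best_score = None, score(factors)
        for i in range(len(factors)):
            for j in range(i + 1, len(factors)):
                if len(factors[i] | factors[j]) > max_size:
                    continue                      # merger would be too large
                merged = [f for k, f in enumerate(factors)
                          if k not in (i, j)] + [factors[i] | factors[j]]
                if score(merged) < best_score:
                    best, best_score = merged, score(merged)
        if best is None:                          # no advantageous merger
            return factors
        factors = best

# Toy score: reward merging nearby variables, with a per-factor penalty.
toy = lambda fs: sum(max(f) - min(f) for f in fs) + 2 * len(fs)
print(agglomerative(range(6), toy, max_size=3))
```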
Table 1: Random DBNs with pairwise scores (joint KL in units of 10^-2, factor KL in units of 10^-4)

                                        12 nodes               20 nodes/determinism/CSI
                                     Joint KL  Factor KL        Joint KL  Factor KL
Out-degree                             1.25      2.50             10.0      16.0
In-degree                              1.20      2.44             8.54      15.1
Children of common parents             1.87      2.61             10.0      15.5
Parents of common children             1.01      1.98             5.92      11.9
Parent to child                        1.19      2.28             6.62      14.9
Separability between parents           1.09      2.69             14.0      15.3
Separability of pairs of variables     1.27      2.80             12.0      18.5
Mut. information after timestep        0.408     1.11             3.44      7.11
Mut. information between timeslices    0.664     1.62             4.96      9.73
Mut. information from both timeslices  0.575     1.65             5.15      10.5
Local search begins with some initial factorization and attempts to find a factorization of minimum
score by iteratively modifying this factorization. More specifically, from any given factorization
moves of the following three types are considered: create a new factor with a single node, move a
single node from one factor into another, or swap a pair of nodes in different factor. At each iteration
only those moves that do not yield a factor of size greater than some given maximum are considered.
The move that yields the lowest score at that iteration is chosen. If there is no move that decreases
the score (and so we have hit a local minimum), however, the factors are randomly re-initialized and
the algorithm continues searching, terminating after a fixed number of iterations. The factorization
with the lowest score of all that were examined is returned. As with agglomerative clustering, local
search enables the use of full-factor scores. We have found that good results are achieved when
the factors are initialized (and re-initialized) to be as large as possible. In addition, although the
third type of move (swapping) is a composition of the other two, we have found that the sequence
of moves leading to an advantageous swap is not always a path of strictly decreasing scores, and
performance degrades without it.
We note that all of the algorithms benefit greatly from caching the components of the scores that are
computed.
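The moves of the local search can be sketched in the same style, assuming the same score(factorization) interface as above. The generator below enumerates the three move types subject to the size limit, and one greedy step picks the best-scoring neighbour; a return value of None signals a local minimum, at which point the factors would be randomly re-initialized as described above.

```python
def neighbours(factors, max_size):
    n = len(factors)
    for i in range(n):
        for v in factors[i]:
            # Move v into factor j, or (j == n) into a new singleton factor.
            for j in list(range(n)) + [n]:
                if j == i:
                    continue
                new = [set(f) for f in factors] + [set()]
                new[i].discard(v)
                new[j].add(v)
                new = [frozenset(f) for f in new if f]
                if all(len(f) <= max_size for f in new):
                    yield new
            # Swap v with a node w from another factor.
            for j in range(n):
                if j == i:
                    continue
                for w in factors[j]:
                    new = [set(f) for f in factors]
                    new[i].discard(v); new[j].discard(w)
                    new[i].add(w); new[j].add(v)
                    yield [frozenset(f) for f in new]

def local_search_step(factors, score, max_size):
    best = min(neighbours(factors, max_size), key=score, default=None)
    if best is not None and score(best) < score(factors):
        return best
    return None                                   # local minimum reached
```

Repeatedly applying local_search_step until it returns None, then re-initializing at random, gives the full search described above.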
4.2 Empirical comparison
We verified that the results for the pairwise scores extend to whole-factor scores on a dataset of
120 randomly-generated DBNs, each of which contained 8 binary-valued state variables. We were
significantly constrained in our choice of models by the complexity of computing the degree of
separability for large distributions: even on these smaller models, doing agglomerative clustering
with the degree of separability sometimes took over 2 hours and local search much longer. We have
therefore confined our comparison to agglomerative clustering on 8-variable models. We divided the
dataset into three groups to explore the effects of both extensive determinism and context-specific
independence separately.
The mutual information after one timestep again produced the lowest error, both in the factor
marginal belief states and in the joint belief state. For the networks with large amounts of context-specific independence, the degree of separability was always close to one, and this might have
hampered its effectiveness for clustering. Interestingly, we see that agglomerative clustering can
sometimes produce results that are worse than those for graph partitioning, although local search
consistently outperforms the two. This may be due to the fact that agglomerative clustering tends
to produce smaller clusters than the divisive approach. Finally, we note that, although determinism greatly increased error, the relative performance of the different heuristics and algorithms was
unchanged. Local search consistently found lower-error factorizations.
We further compared the different algorithms on the dataset with 12 state variables per DBN, from
Section 3.3, using the mutual information after one timestep score. It is perhaps surprising that
the graph min-cut algorithm can perform comparably with the others, given that it is restricted to
pairwise scores.
Table 2: Random DBNs using pairwise and whole-factor scores (joint KL in units of 10^-2, factor KL in units of 10^-4)

Score type / Search algorithm          8 nodes          8 nodes/determ.     8 nodes/CSI
                                     Joint  Factor      Joint   Factor     Joint  Factor
Separability between parents:
  Min-cut                             2.36   2.54        38.9     70        0.82   0.45
Separability b/t pairs of variables:
  Min-cut                             2.42   2.12        27.2     139       0.56   0.31
Whole-factor separability:
  Agglomerative                       2.19   1.23        31.1     61        0.99   0.46
Mut. info. after one timestep:
  Min-cut                             1.20   1.00        18.1     44        0.25   0.11
  Agglomerative                       1.15   1.13        19.0     43        0.20   0.11
  Local search                        1.05   0.90        13.8     32        0.18   0.098
Mut. info. between timeslices:
  Min-cut                             1.62   1.17        27.7     47        0.55   0.24
  Agglomerative                       1.60   1.45        27.6     61        0.53   0.32
  Local search                        1.40   1.20        23.8     44        0.52   0.32
Mut. info. both timeslices:
  Min-cut                             1.88   1.51        22.9     45        0.64   0.36
  Agglomerative                       1.86   1.08        25.1     62        0.66   0.34
  Local search                        1.70   0.95        23.1     26        0.58   0.29

5 Factoring real models
Boyen and Koller, [3], demonstrated factored inference on two models that were factored by hand:
the Bayesian Automated Taxi network and the water network. Table 3 shows the performance
of automatic factorization on these two DBNs. In both cases automatic factorization recovered
reasonable factorizations that performed better than those found manually.
The Bayesian Automated Taxi (BAT) network, [13], is intended to monitor highway traffic and car state for an automated driving system. The DBN contains 10 persistent state
variables and 10 observation variables. Local search with factors of 5 or fewer variables
yielded exactly the 5+5 clustering given in the paper. When allowing 4 or fewer variables per factor, local search and agglomerative search both recovered the factorization ([LeftClr], [RightClr], [LatAct+Xdot+InLane], [FwdAct+Ydot+Stopped+EngStatus], [FrontBackStatus]), while graph min-cut found ([EngStatus], [FrontBackStatus], [InLane], [Ydot], [FwdAct+Ydot+Stopped+EngStatus], [LatAct+LeftClr]). The manual factorization from [3] is
([LeftClr+RightClr+LatAct], [Xdot+InLane], [FwdAct+Ydot+Stopped+EngStatus], [FrontBackStatus]). The error results are shown in Table 3. Local search took about 300 seconds to
complete, while agglomerative clustering took 138 seconds and graph min-cut 12 seconds.
The water network is used for monitoring the biological processes of a water purification plant. It has
8 state variables and 4 observation variables (labeled A through H), and all variables are discrete with
3 or 4 states. The agglomerative and local search algorithms yielded the same result ([A+B+C+E],
[D+F+G+H]) and graph min-cut was only slightly different ([A+C+E], [D+F+G+H], [B]). The
manual factorization from [3] is ([A+B],[C+D+E+F],[G+H]). The results in terms of KL distance
are shown in Table 3. The automatically recovered factorizations were on average at least an order
of magnitude better. Local search took about one minute to complete, while agglomerative clustering
took 30 seconds and graph min-cut 3 seconds.
6 Conclusion
We compared several heuristics and search algorithms for automatically factorizing a dynamic
Bayesian network. These techniques attempt to minimize an objective score that captures the extent to which dependencies that are ignored by the factored approximation will lead to error. The
heuristics we examined are based both on the structure of the 2-TBN and on the concepts of degree
of separability and mutual information.
Table 3: Algorithm performance (joint KL in units of 10^-2, factor KL in units of 10^-4)

                    12-var. random        BAT              Water
                    Jnt.    Fact.        Jnt.    Fact.     Jnt.     Fact.
Min-cut             1.08    0.433        14.7    0.723     0.430    1.32
Agglomerative       1.10    0.55         0.390   0.0485    0.0702   0.566
Local search        1.06    0.52         0.390   0.0485    0.0702   0.566
Manual              -       -            5.62    0.0754    3.12     2.12
The mutual information after one step of belief propagation has generally been greatly more effective than the others as an objective for factorization. We
presented three search methods that allow for tradeoffs between computational complexity and the
quality of the factorizations they produce. Recursive min-cut efficiently uses scores between pairs
of variables, while agglomerative clustering and local search both use scores computed between
whole factors; the latter two are slower, while achieving better results. Automatic factorization can
extend the applicability of factored inference to larger models for which it is undesirable to find
factors manually. In addition, tests run on real DBNs show that automatically factorized DBNs can
achieve significantly lower error than hand-factored models. Future work might explore extensions
to overlapping factors, which have been found to yield lower error in some cases.
Acknowledgments
This work was funded by an ONR project, with special thanks to Dr. Wendy Martinez.
References
[1] Sun Yong Kim, Seiya Imoto, and Satoru Miyano. Inferring gene networks from time series
microarray data using dynamic Bayesian networks. Briefings in Bioinformatics, 2003.
[2] Geoffrey Zweig and Stuart Russell. Dynamic Bayesian networks for speech recognition. In
National Conference on Artificial Intelligence (AAAI), 1998.
[3] Xavier Boyen and Daphne Koller. Tractable inference for complex stochastic processes. In
Neural Information Processing Systems, 1998.
[4] Kevin Murphy and Yair Weiss. The factored frontier algorithm for approximate inference in
DBNs. In Uncertainty in Artificial Intelligence, 2001.
[5] Brenda Ng, Leonid Peshkin, and Avi Pfeffer. Factored particles for scalable monitoring. In
Uncertainty in Artificial Intelligence, 2002.
[6] Avi Pfeffer. Approximate separability for weak interaction in dynamic systems. In Uncertainty
in Artificial Intelligence, 2006.
[7] Thomas Dean and Keiji Kanazawa. A model for reasoning about persistence and causation.
Computational Intelligence, 1989.
[8] Kevin Murphy. Dynamic Bayesian networks: representation, inference and learning. PhD
thesis, U.C. Berkeley, Computer Science Division, 2002.
[9] Avi Pfeffer. Sufficiency, separability and temporal probabilistic models. In Uncertainty in
Artificial Intelligence, 2001.
[10] Xavier Boyen and Daphne Koller. Exploiting the architecture of dynamic systems. In Proceedings AAAI-99, 1999.
[11] Andrew Ng, Michael Jordan, and Yair Weiss. On spectral clustering: analysis and an algorithm.
In Neural Information Processing Systems, 2001.
[12] Charlie Frogner and Avi Pfeffer. Heuristics for automatically decomposing a dynamic
Bayesian network for factored inference. Technical Report TR-04-07, Harvard University,
2007.
[13] Jeff Forbes, Tim Huang, Keiji Kanazawa, and Stuart Russell. The BATmobile: towards a
Bayesian automated taxi. In International Joint Conference on Artificial Intelligence, 1995.
2,452 | 3,224 | Inferring Elapsed Time
from Stochastic Neural Processes
Misha B. Ahrens and Maneesh Sahani
Gatsby Computational Neuroscience Unit, UCL
Alexandra House, 17 Queen Square, London, WC1N 3AR
{ahrens, maneesh}@gatsby.ucl.ac.uk
Abstract
Many perceptual processes and neural computations, such as speech recognition,
motor control and learning, depend on the ability to measure and mark the passage
of time. However, the processes that make such temporal judgements possible are
unknown. A number of different hypothetical mechanisms have been advanced,
all of which depend on the known, temporally predictable evolution of a neural or psychological state, possibly through oscillations or the gradual decay of a
memory trace. Alternatively, judgements of elapsed time might be based on observations of temporally structured, but stochastic processes. Such processes need
not be specific to the sense of time; typical neural and sensory processes contain at
least some statistical structure across a range of time scales. Here, we investigate
the statistical properties of an estimator of elapsed time which is based on a simple
family of stochastic process.
1 Introduction
The experience of the passage of time, as well as the timing of events and intervals, has long been
of interest in psychology, and has more recently attracted attention in neuroscience as well. Timing
information is crucial for the correct functioning of a large number of processes, such as accurate
limb movement, speech and the perception of speech (for example, the difference between ?ba? and
?pa? lies only in the relative timing of voice onsets), and causal learning.
Neuroscientific evidence that points to a specialized neural substrate for timing is very sparse, particularly when compared to the divergent set of specific mechanisms which have been theorized.
One of the most influential proposals, the scalar expectancy theory (SET) of timing [1], suggests
that interval timing is based on the accumulation of activity from an internal oscillatory process.
Other proposals have included banks of oscillators which, when fine-tuned, produce an alignment
of phases at a specified point in time that can be used to generate a neuronal spike [2]; models in
which timing occurs via the characteristic and monotonic decay of memory traces [3] or reverberant
activity [4]; and randomly-connected deterministic networks, which, given neuronal processes of
appropriate timescales, can be shown to encode elapsed time implicitly [5].
Although this multitude of theories shows that there is little consensus on the mechanisms responsible for timing, it does point out an important fact: that timing information is present in a range
of different processes, from oscillations to decaying memories and the dynamics of randomly connected neural networks. All of the theories above choose one specific such process, and suggest that
observers rely on that one alone to judge time. An alternative, which we explore here, is to phrase
time estimation as a statistical problem, in which the elapsed time Δt is extracted from a collection of stochastic processes whose statistics are known. This is loosely analogous to accounts that have
appeared in the psychological literature in the form of number-of-events models [6], which suggest
that the number of events in an interval influences the perception of its duration. Such models have
been related to recent psychological findings that show that the nature of the stimulus being timed
affects judgments of duration [7].
Here, by contrast, we consider the properties of duration estimators that are based on more general
stochastic processes. The particular stochastic processes we analyze are abstract. However, they
may be seen as models both for internally-generated neural processes, such as (spontaneous) network activity and local field potentials, and for sensory processes, in the form of externally-driven
neural activity, or (taking a functional view) in the form of the stimuli themselves. Both neural activity and sensory input from the environment follow well-defined temporal statistical patterns, but the
exploitation of these statistics has thus far not been studied as a potential substrate for timing judgements, despite being potentially attractive. Such a basis for timing is consistent with recent studies
that show that the statistics of external stimuli affect timing estimates [8, 7], a behavior not captured
by the existing mechanistic models. In addition, there is evidence that timing mechanisms are distributed [9] but subject to local (e.g. retinotopic or spatiotopic) biases [10]. Using the distributed
time-varying processes which are already present in the brain is implementationally efficient, and
lends itself straightforwardly to a distributed implementation. At the same time, it suggests a possible origin for the modality-specificity and locality of the bias effects, as different sets of processes
may be exploited for different timing purposes. Here, we show primarily that interval estimates
based on such processes obey a Weber-like scaling law for accuracy under a wide range of assumptions, as well as scaling with process number that is consistent with experimental observation; and
we use estimation theoretic analysis to find the reasons behind the robustness of these scaling laws.
Neuronal spike trains exhibit internal dependencies on many time scales, ranging from milliseconds
to tens of seconds [11, 12], so these (or, more likely, processes derived from spike trains, such
as average network activity) are plausible candidates for the types of processes assumed in this
paper. Likewise, sensory information too varies over a large range of temporal scales [13]. The
particular stochastic processes we use here are Gaussian Processes, whose power spectra are chosen
to be broad and roughly similar to those seen in natural stimuli.
2 The framework
To illustrate how random processes contain timing information, consider a random walk starting at
the origin, and suppose that we see a snapshot of the random walk at another, unknown, point in
time. If the walk were to end up very far from the origin, and if some statistics of the random walk
were known, we would expect that the time difference between the two observations, Δt, must be
reasonably long in comparison to the diffusion time of the process. If, however, the second point
were still very close to the origin, we might assign a high probability to Δt ≈ 0, but also some
probability (associated with delayed return to the origin) to |Δt| > 0. Access to more than one such
random walk would lead to more accurate estimates (e.g. if two random walks had both moved very
little between the two instances in time, our confidence that Δt ≈ 0 would be greater). From such
considerations it is evident that, on the basis of multiple stochastic processes, one can build up a
probabilistic model for Δt.
To formalize these ideas, we model the random processes as a family of independent stationary
Gaussian Processes (GPs, [14]). A GP is a stochastic process y(t) in which any subset of observations {y(t), y(t′), y(t′′), ...} is jointly Gaussian distributed, so that the probability distribution over
observations is completely specified by a mean value (here set to zero) and a covariance structure
(here assumed to remain constant in time). We denote the set of processes by {yi (t)}. Although
this is not a necessity, we let each process evolve independently according to the same stochastic
dynamics; thus the process values differ only due to the random effects. Mimicking the temporal statistics of natural scenes [15], we choose the dynamics to simultaneously contain multiple
time scales: specifically, the power spectrum approximately follows a 1/f² power law, where
f = frequency = 1/(time scale). Some instances of such processes are shown in Figure 1.
Stationary Gaussian processes are fully described by the covariance function K(Δt):

⟨yi(t) yi(t + Δt)⟩ = K(Δt)

so that the probability of observing a sequence of values [yi(t1), yi(t2), ..., yi(tn)] is Gaussian distributed, with zero mean and covariance matrix Σn,n′ = K(tn′ − tn).
[Figure 1 appears here: left panel plots y against time; right panel plots log power against log frequency.]
Figure 1: Left: Two examples of the GPs used for inference of Δt. Right: Their power spectrum.
This is approximately a 1/f² spectrum, similar to the temporal power spectrum of visual scenes.
To generate processes with multiple time scales, we approximate a 1/f² spectrum with a sum over
Q squared exponential covariance functions:

K(Δt) = ∑_{q=1}^{Q} σq² exp(−Δt² / 2lq²) + σy² I(Δt)

Here σy² I(Δt) describes the instantaneous noise around the underlying covariance structure (I is
the indicator function, which equals 1 when its argument is zero), and lq are the time scales of the
component squared exponential functions. We take these to be linearly spaced, so that lq ∝ q. To
mimic a 1/f² spectrum, we choose the power of each component to be constant: σq² = 1/Q. Figure
1 shows that this choice does indeed quite accurately reproduce a 1/f² power spectrum.
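A minimal numpy sketch of this construction and of drawing a realization of one process. Q = 50 components with time scales from 1 to 40 match the experiments of Section 3, and the noise scale σy² = 0.1 matches the value quoted in the Figure 4 caption; the time grid and seed are illustrative.

```python
import numpy as np

Q, sigma_y2 = 50, 0.1
l = np.linspace(1.0, 40.0, Q)                 # linearly spaced time scales

def K(dt):
    """Multi-scale covariance: equal-power squared exponentials plus
    instantaneous noise on the diagonal (where dt == 0)."""
    dt = np.asarray(dt, dtype=float)
    se = np.exp(-dt[..., None] ** 2 / (2 * l ** 2)).sum(axis=-1) / Q
    return se + sigma_y2 * (dt == 0)

# Draw one realization of the process on a time grid.
t = np.arange(0.0, 100.0, 0.5)
cov = K(t[:, None] - t[None, :])
y = np.random.default_rng(0).multivariate_normal(np.zeros(len(t)), cov)
```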
To illustrate how elapsed time is implicitly encoded in such stochastic processes, we infer the duration of an interval [t, t + Δt] from two instantaneous observations of the processes, namely {yi(t)}
and {yi(t + Δt)}. For convenience, yi is used to denote the vector [yi(t), yi(t + Δt)]. The covariance
matrix Σ(Δt) of yi, which is of size 2×2, gives rise to a likelihood of these observations,
P({yi(t)}, {yi(t + Δt)} | Δt) ∝ ∏_i |Σ|^(−1/2) exp( −(1/2) yiᵀ Σ⁻¹ yi )
With the assumption of a weak prior¹, this yields a posterior distribution over Δt:

φ(Δt) = P(Δt | {yi}) ∝ P(Δt) · ∏_i P(yi | Δt)
                     ∝ P(Δt) · exp( −(1/2) ∑_i [ log|Σ| + yiᵀ Σ⁻¹ yi ] )
This distribution gives a probabilistic description of the time difference between two snapshots of
the random processes. As we will see below (see Figure 2), this distribution tends to be centred on
the true value of Δt, showing that such random processes may indeed be exploited to obtain timing
information. In the following section, we explore the statistical properties of timing estimates based
on φ, and show that they correspond to several experimental findings.
¹ such as P(Δt) = λ exp(−λΔt) Θ(Δt) with λ ≪ 1 and Θ the Heaviside function, or P(Δt) = U[0, t_max];
the details of the weak prior do not affect the results.
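Putting the pieces together, the posterior above can be evaluated on a grid of candidate lags. The sketch below re-declares the illustrative covariance K so that it runs standalone; the true lag, grid, and seed are our own illustrative choices.

```python
import numpy as np

Q, sigma_y2 = 50, 0.1
l = np.linspace(1.0, 40.0, Q)
def K(dt):
    dt = np.asarray(dt, dtype=float)
    return np.exp(-dt[..., None] ** 2 / (2 * l ** 2)).sum(-1) / Q + sigma_y2 * (dt == 0)

def log_posterior(y_t, y_tdt, dts):
    """Unnormalized log posterior over candidate lags, flat prior."""
    Y = np.stack([y_t, y_tdt])                       # shape (2, N)
    out = np.empty(len(dts))
    for k, dt in enumerate(dts):
        S = np.array([[K(0.0), K(dt)], [K(dt), K(0.0)]])
        _, logdet = np.linalg.slogdet(S)
        quad = np.einsum('in,ij,jn->', Y, np.linalg.inv(S), Y)
        out[k] = -0.5 * (Y.shape[1] * logdet + quad)
    return out - out.max()

# Two snapshots of N processes separated by a true lag of 5 time units.
N, true_dt = 50, 5.0
tt = np.array([0.0, true_dt])
C = K(tt[:, None] - tt[None, :])
Y = np.random.default_rng(1).multivariate_normal(np.zeros(2), C, size=N).T
dts = np.linspace(0.5, 15.0, 30)
lp = log_posterior(Y[0], Y[1], dts)
print("MAP estimate of the lag:", dts[np.argmax(lp)])
```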
[Figure 2 appears here: left panel plots the estimated Δt against Δt; right panel plots the standard deviation against Δt.]
Figure 2: Statistics of the inference of Δt from snapshots of a group of GPs. The GPs have time
scales in the interval [0.05, 50]. Left: The mean estimated times (blue) are clustered around the true
times (dashed). Right: The Weber law of timing, σ ∝ Δt, approximately holds true for this model.
The error bars are standard errors derived via a Laplace approximation to the posterior. A straight
line fit is shown with a dashed line. The Cramer-Rao bound (blue), which will be derived later in
the text, predicts the empirical data well.
3 Scaling laws and behaviour
3.1 Empirical demonstration of Weber's law
Many behavioral studies have shown that the standard deviation of interval estimates is proportional
to the interval being judged, σ ∝ Δt, across a wide range of timescales and tasks (e.g. [1]). Here,
we show that GP-based estimates share this property under broad conditions.
To compare the behaviour of the model to experimental data, we must choose a mapping from the
function φ to a single scalar value, which will model the observer's report. A simple choice is
to assume that the reported Δt is the maximum a-posteriori (MAP) estimator based on φ, that is,
Δt̂_MAP = argmax_Δt φ(Δt). To compare the statistics of this estimator to the experimental observation, we took samples {yi(t)} and {yi(t + Δt)} from 50 GPs with identical 1/f²-like statistics
containing time scales from 1 to 40 time units. 100 samples were generated for each Δt (ranging
from 1 to 16 time units), leading to 100 estimates, Δt̂_MAP. These estimates are plotted in Figure 2A.
They are seen to follow the true Δt. Their spread around the true value increases with increasing
Δt. The standard deviation of this spread is plotted in Figure 2B, and is a roughly linear function of
Δt. Thus, time estimation is possible using the stochastic process framework, and the Weber law of
timing holds fairly accurately.
3.2 Fisher Information and Weber's law
A number of questions about this Weber-like result naturally arise: Does it still hold if one changes
the power spectrum of the processes? What if one changes the scale of the instantaneous noise? We
increased the noise scale σy², and found that the Weber law was still approximately satisfied. When
changing the power spectrum of the processes from a 1/f²-type spectrum to a 1/f³-type spectrum
(by letting σi² ∝ li instead of σi² ∝ 1), the Weber law was still approximately satisfied (Figure
3). This result may appear somewhat counter-intuitive, as one might expect that the accuracy of the
estimator for Δt would increase as the power in frequencies around 1/Δt increased; thus, changing
the power spectrum to 1/f³ might be expected to result in more accurate estimates of large Δt
(lower frequencies) as compared to estimates of small Δt, but this was not the case.
To find reasons for this behaviour, it would be useful to have an analytical expression for the relationship between the variability of the estimated duration and the true duration. This is complex, but a
simpler analytical approximation to this relation can be constructed through the Cramer-Rao bound.
This is a lower bound on the asymptotic variance of an unbiased Maximum Likelihood estimator of
Δt and is given by the inverse Fisher Information:
[Figure 3 appears here: left panel plots y against time; right panel plots the standard deviation against Δt.]
Figure 3: Left: Two examples of GPs with a different power spectrum (σi² ∝ li, for li ∝ i, which
approximates a 1/f³ power spectrum, resulting in much smoother dynamics). Right: Inference of
Δt based on these altered processes. Note that the estimator Δt̂_MAP is based on the true likelihood,
i.e., the new 1/f³ statistics. The Weber law still approximately holds, even though the dynamics
is different from the initial case. The empirical standard deviation is again well predicted by the
analytical Cramer-Rao bound (blue).
Var(Δt̂) ≥ 1/IF(Δt)
The Fisher Information, assuming that the elapsed time is estimated on the basis of N processes,
each evolving according to covariance matrix Σ(Δt), is given by the expression
IF(Δt) = −N ⟨ ∂² log P({yi} | Δt) / ∂Δt² ⟩_y = (N/2) Tr( Σ⁻¹ (∂Σ/∂Δt) Σ⁻¹ (∂Σ/∂Δt) )    (1)
This bound is plotted in blue in Figure 2, and again in Figure 3, and can be seen to be a good
approximation to the empirical behaviour of the model.
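Equation (1) can be checked numerically: the derivative ∂Σ/∂Δt is easy to approximate by central differences, and 1/√IF gives the Cramer-Rao bound on the standard deviation plotted in Figures 2 and 3. The sketch below reuses the illustrative covariance K from the earlier sketches.

```python
import numpy as np

Q, sigma_y2 = 50, 0.1
l = np.linspace(1.0, 40.0, Q)
def K(dt):
    dt = np.asarray(dt, dtype=float)
    return np.exp(-dt[..., None] ** 2 / (2 * l ** 2)).sum(-1) / Q + sigma_y2 * (dt == 0)

def fisher(dt, N=50, eps=1e-4):
    """Equation (1), with the lag derivative taken by central differences.
    Only the off-diagonal entries of Sigma depend on the lag."""
    def Sigma(d):
        return np.array([[K(0.0), K(d)], [K(d), K(0.0)]])
    S, dS = Sigma(dt), (Sigma(dt + eps) - Sigma(dt - eps)) / (2 * eps)
    M = np.linalg.solve(S, dS)                    # Sigma^{-1} dSigma/dDelta-t
    return 0.5 * N * np.trace(M @ M)

for dt in (1.0, 2.0, 4.0, 8.0):
    print(dt, 1.0 / np.sqrt(fisher(dt)))          # Cramer-Rao bound on the s.d.
```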
What is the reason for the robust Weber-like behaviour? To answer this question, consider a different
but related model, in which there are N Gaussian processes, again labeled i, but each now evolving according to a different covariance matrix Ci(Δt). Previously, each process reflected structure at
many timescales. In this new model, each process evolves with a single squared-exponential covariance kernel, and thus a single time-constant. This will allow us to see how each process contributes
to the accuracy of the estimator.
Thus, in this model, $[C_i(\Delta t)]_{n,n'} = \sigma_i^2 \exp\!\left(-(t_{n'}-t_n)^2/2l_i^2\right) + \sigma_y^2\, I(t_{n'}-t_n)$. (The power spectrum is then shaped as exp(−f² l_i²/2).) The likelihood of observing the processes at two instances is now

$$P(\{y_i(t)\},\{y_i(t+\Delta t)\}\,|\,\Delta t) \propto \prod_i |C_i|^{-1/2} \exp\!\left(-\tfrac{1}{2}\, y_i^T C_i^{-1} y_i\right) \qquad (2)$$
This model shows very similar behaviour to the original model, but is somewhat less natural. Its
advantage lies in the fact that the Fisher Information can now be decomposed as a sum over different
time scales,
$$I_F(\Delta t) = \sum_i I_{F,i} = \frac{1}{2}\sum_i \mathrm{Tr}\!\left(C_i^{-1}\frac{\partial C_i}{\partial \Delta t}\, C_i^{-1}\frac{\partial C_i}{\partial \Delta t}\right).$$
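A sketch of the per-scale contribution I_{F,i}, again under our 2x2 simplification of each process (sig_y2 as defined earlier); the specific scales and lags printed are arbitrary choices:

```python
# Per-scale Fisher information I_{F,i} for the model of equation (2): each
# process i has a single squared-exponential kernel with length scale l.
def cov2_single(dt, l, var=1.0):
    k0 = var + sig_y2
    kd = var * np.exp(-dt**2 / (2.0 * l**2))
    return np.array([[k0, kd], [kd, k0]])

def fisher_single(dt, l, var=1.0, eps=1e-5):
    C = cov2_single(dt, l, var)
    dC = (cov2_single(dt + eps, l, var) - cov2_single(dt - eps, l, var)) / (2.0 * eps)
    A = np.linalg.solve(C, dC)
    return 0.5 * np.trace(A @ A)

# Each scale contributes over a wide range of dt (cf. the Figure 4 discussion):
for l in (5.0, 20.0, 40.0):
    print(l, [round(fisher_single(dt, l), 5) for dt in (2.0, 8.0, 32.0)])
```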
Using the Fisher Information to plot Cramér-Rao bounds for different types of processes {y_i(t)} (Figure 4, dashed lines), we first note that the bounds are all relatively close to linear, even though the parameters governing the processes are very different. In particular, we tested both linear spacing of time scales (l_i ∝ i) and quadratic spacing (l_i ∝ i²), and we tested a constant power distribution (σ_i = 1) and a power distribution where slower processes have more power (σ_i² ∝ l_i).
[Figure 4 here: four panels, (power ∝ time scale vs. power = constant) by (length scales spaced linearly vs. quadratically); each panel plots I_F and (I_F)^{-1/2} against Δt, with legends l = 0.7, 2.4, ..., 42.3 (linear) or l = 6, 11, ..., 46 (quadratic), plus the Cramér-Rao bound.]
Figure 4: Fisher Information and Cramér-Rao bounds for the model of equation 2. The Cramér-Rao bound is the square root of the inverse of the sum of all the Fisher Information curves (note that only a few Fisher Information curves are shown). The noise scale σ_y² = 0.1, and the time scales are either l_i = i, i = 1, 2, ..., 50 (linear) or l_i = i²/50, i = 1, 2, ..., 50 (quadratic). The power of each process is either σ_i² = 1 (constant) or σ_i² = l_i. The graphs show that each time scale contributes to the estimation of a wide range of Δt, and that the Cramér-Rao bounds are all fairly linear, leading to a robust Weber-like behaviour of the estimator of elapsed time.
None of these manipulations caused the Cramér-Rao bound to deviate much from linearity.
Next, we can evaluate the contribution of each time scale to the accuracy of estimates of Δt, by inspecting the Fisher Information I_{F,i} of a given process y_i. Figure 4 shows that (contrary to the intuition that time scales close to Δt contribute most to the estimation of Δt) a process evolving at a certain time scale l_j contributes to the estimation of elapsed time Δt even if Δt is much smaller than l_j (indeed, the peak of I_{F,j} does not lie at l_j, but below it). This lies at the heart of the robust Weber-like behaviour: the details of the distribution of time scales do not matter much, because each time scale contributes to the estimation of a wide range of Δt. For similar reasons, the distribution of power does not drastically affect the Cramér-Rao bound. From the graphs of I_{F,i}, it is evident that the Weber law arises from an accumulation of high values of Fisher Information at low values of Δt.
Very small values of Δt may be an exception, if the instantaneous noise dominates the subtle changes that the processes undergo during very short periods; for these Δt, the standard deviation may rise. This is reflected by a subtle rise in some of the Cramér-Rao bounds at very low values of Δt. However, it may be assumed that the shortest times that neural systems can evaluate are no shorter than the scale of the fastest process within the system, making these small Δt's irrelevant.
3.3 Dependence of timing variability on the number of processes
Increasing the number of processes, say N_processes, will add more terms to the likelihood and make the estimated Δt more accurate. The Fisher Information (equation 1) scales with N_processes, which suggests that the standard deviation of Δt̂_MAP is proportional to 1/√N_processes; this was confirmed empirically (data not shown).
Psychologically and neurally, increasing the number of processes would correspond to adding more perceptual processes, or expanding the size of the network that is being monitored for timing estimation. Although experimental data on this issue is sparse, in [9] it is shown that unimanual rhythm tapping results in a higher variability of tapping times than bimanual rhythm tapping, and that tapping with two hands and a foot results in even lower variability.

This correlates well with the theoretical scaling behaviour of the estimator Δt̂_MAP. Note that a similar scaling law is obtained from the Multiple Timer Model [16]. This is not a model for timing itself, but for the combination of timing estimates of multiple timers; the Multiple Timer Model combines these estimates by averaging, which is the ML estimate arising from independent draws of equal-variance Gaussian random variables, also resulting in a 1/√N scaling law.
Experimentally, a slower decrease in variability than a 1/√N law was observed. This can be accounted for by assuming that the processes governing the right and left hands are dependent, so that the number of effectively independent processes grows more slowly than the number of effectors.
4 Conclusion
We have shown that timing information is present in random processes, and can be extracted probabilistically if certain statistics of the processes are known. A neural implementation of such a
framework of time estimation could use both internally generated population activity as well as
external stimuli to drive its processes.
The timing estimators considered were based on the full probability distribution of the process values at times t and t′, but simpler estimators could also be constructed. There are two reasons for considering simpler estimators: First, simpler estimators might be more easily implemented in neural systems. Second, to calculate p(Δt), one needs all of {y_i(t), y_i(t′)}, so that (at least) {y_i(t)} has to be stored in memory. One way to construct a simpler estimator might be to select a particular class (say, a linear function of {y_i}) and optimize over its parameters. Alternatively, an estimator may be based on the posterior distribution over Δt conditioned on a reduced set of parameters, with the neglected parameters integrated out. Another route might be to consider different stochastic processes, which have more compact sufficient statistics (e.g. Brownian motion, being translationally invariant, would require only {y_i(t′) − y_i(t)} instead of {y_i(t), y_i(t′)}; we have not considered such processes because they are unbounded and therefore hard to associate with sensory or neural processes). We have not addressed how a memory mechanism might be combined with the stochastic process framework; this will be explored in the future.
The intention of this paper is not to offer a complete theory of neural and psychological timing, but to
examine the statistical properties of a hitherto neglected substrate for timing: stochastic processes
that take place in the brain or in the sensory world. It was demonstrated that estimators based on
such processes replicate several important behaviors of humans and animals. Full models might be
based on the same substrate, thereby naturally incorporating the same behaviors, but contain more
completely specified relations to external input, memory mechanisms, adaptive mechanisms, neural
implementation, and importantly, (supervised) learning of the estimator.
The neural and sensory processes that we assume to form the basis of time estimation are, of course, not fully random. But when the deterministic structure behind such processes is unknown, they can still be treated as stochastic under certain statistical rules, and thus lead to a valid timing estimator. Would the GP likelihood still apply to real neural processes, or would the correct likelihood be completely different? This is unknown; however, the Multivariate Central Limit Theorem implies that sums of i.i.d. stochastic processes tend to Gaussian Processes, so that, when e.g. monitoring average neuronal activity, the correct estimator may well be based on a GP likelihood.
An issue that deserves consideration is the mixing of internal (neural) and external (sensory) processes. Since timing information is present in both sensory processes (such as sound and movement of the natural world, and the motion of one's body) and internal processes (such as fluctuations in network activity), and because stimulus statistics influence timing estimates, we propose that psychological and neural timing may make use of both types of processes. However, fluctuations in the external world do not always translate into neural fluctuations (e.g. there is evidence for a spatial code for temporal frequency in V2 [17]), so that neural and stimulus fluctuations cannot always be treated on the same footing. We will address this issue in the future.
The framework presented here has some similarities with the very interesting and more explicitly
physiological model proposed by Buonomano and colleagues [5, 18], in which time is implicitly
encoded in deterministic² neural networks through slow neuronal time constants. However, temporal
information in the network model is lost when there are stimulus-independent fluctuations in the
network activity, and the network can only be used as a reliable timer when it starts from a fixed
resting state, and if the stimulus is identical on every trial. The difference in our scheme is that here
timing estimates are based on statistics, rather than deterministic structure, so that it is fundamentally
robust to noise, internal fluctuations, and stimulus changes. The stochastic process framework is,
however, more abstract and farther removed from physiology, and a neural implementation may well
share some features of the network model of timing.
Acknowledgements: We thank Jeff Beck for useful suggestions, and Peter Dayan and Carlos Brody
for interesting discussions.
References
[1] J Gibbon. Scalar expectancy theory and Weber's law in animal timing. Psychol Rev, 84:279–325, 1977.
[2] R C Miall. The storage of time intervals using oscillating neurons. Neural Comp, 1:359–371, 1989.
[3] J E R Staddon and J J Higa. Time and memory: towards a pacemaker-free theory of interval timing. J Exp Anal Behav, 71:215–251, 1999.
[4] G Bugmann. Towards a neural model of timing. Biosystems, 48:11–19, 1998.
[5] D V Buonomano and M M Merzenich. Temporal information transformed into a spatial code by a neural network with realistic properties. Science, 267:1028–1030, 1995.
[6] D Poynter. Judging the duration of time intervals: A process of remembering segments of experience. In I Levin and D Zakay, editors, Time and human cognition: A life-span perspective, pages 305–331. Elsevier, 1989.
[7] R Kanai, C L E Paffen, H Hogendoorn, and F A J Verstraten. Time dilation in dynamic visual display. J Vision, 6:1421–1430, 2006.
[8] D M Eagleman, P U Tse, D V Buonomano, P Janssen, A C Nobre, and A O Holcombe. Time and the brain: How subjective time relates to neural time. J Neurosci, pages 10369–10371, 2005.
[9] R B Ivry, T C Richardson, and L L Helmuth. Improved temporal stability in multi-effector movements. J Exp Psychol, 28:72–92, 2002.
[10] D Burr, A Tozzi, and M C Morrone. Neural mechanisms for timing visual events are spatially selective in real-world coordinates. Nat Neurosci, 10:423–425, 2007.
[11] M C Teich, C Heneghan, and S B Lowen. Fractal character of the neural spike train in the visual system of the cat. J Opt Soc Am A, 14:529–546, 1997.
[12] L C Osborne, W Bialek, and S G Lisberger. Time course of information about motion direction in visual area MT of macaque monkeys. J Neurosci, 24:3210–3222, 2004.
[13] H Attias and C E Schreiner. Temporal low-order statistics of natural sounds. In Advances in Neural Information Processing Systems 9, pages 27–33, 1996.
[14] C E Rasmussen and C K I Williams. Gaussian Processes for Machine Learning. MIT Press, Cambridge, MA, 2006.
[15] D W Dong and J J Atick. Statistics of natural time-varying images. Network: Computation in Neural Systems, 6:345–358, 1995.
[16] R B Ivry and T C Richardson. Temporal control and coordination: the multiple timer model. Brain and Cognition, 48:117–132, 2002.
[17] K H Foster, J P Gaska, M Nagler, and D A Pollen. Spatial and temporal frequency selectivity of neurones in visual cortical areas V1 and V2 of the macaque monkey. J Physiol, 365:331–363, 1985.
[18] U R Karmarkar and D V Buonomano. Timing in the absence of clocks: encoding time in neural network states. Neuron, 53:427–438, 2007.
² While this model and some other previous models might also contain neuronal noise, it is the deterministic (and known) element of their behaviour which encodes time.
2,453 | 3,225 | A Unified Near-Optimal Estimator For Dimension Reduction in lα (0 < α ≤ 2) Using Stable Random Projections
Ping Li
Department of Statistical Science
Faculty of Computing and Information Science
Cornell University
[email protected]
Trevor J. Hastie
Department of Statistics
Department of Health, Research and Policy
Stanford University
[email protected]
Abstract
Many tasks (e.g., clustering) in machine learning only require the lα distances instead of the original data. For dimension reductions in the lα norm (0 < α ≤ 2), the method of stable random projections can efficiently compute the lα distances in massive datasets (e.g., the Web or massive data streams) in one pass of the data. The estimation task for stable random projections has been an interesting topic. We propose a simple estimator based on the fractional power of the samples (projected data), which is surprisingly near-optimal in terms of the asymptotic variance. In fact, it achieves the Cramér-Rao bound when α = 2 and α = 0+. This new result will be useful when applying stable random projections to distance-based clustering, classifications, kernels, massive data streams etc.
1 Introduction
Dimension reductions in the lα norm (0 < α ≤ 2) have numerous applications in data mining, information retrieval, and machine learning. In modern applications, the data can be way too large for the physical memory or even the disk; and sometimes only one pass of the data can be afforded for building statistical learning models [1, 2, 5]. We abstract the data as a data matrix A ∈ ℝ^{n×D}. In many applications, it is often the case that we only need the lα properties (norms or distances) of A. The method of stable random projections [9, 18, 22] is a useful tool for efficiently computing the lα (0 < α ≤ 2) properties in massive data using a small (memory) space.

Denote the leading two rows in the data matrix A by u₁, u₂ ∈ ℝ^D. The lα distance d_(α) is

$$d_{(\alpha)} = \sum_{i=1}^{D} |u_{1,i} - u_{2,i}|^{\alpha}. \qquad (1)$$
The choice of α is beyond the scope of this study; but basically, we can treat α as a tuning parameter. In practice, the most popular choice, i.e., the α = 2 norm, often does not work directly on the original (unweighted) data, as it is well-known that truly large-scale datasets (especially Internet data) are ubiquitously "heavy-tailed." In machine learning, it is often crucial to carefully term-weight the data (e.g., taking logarithm or tf-idf) before applying subsequent learning algorithms using the l₂ norm. As commented in [12, 21], the term-weighting procedure is often far more important than fine-tuning the learning parameters. Instead of weighting the original data, an alternative scheme is to choose an appropriate norm. For example, the l₁ norm has become popular recently, e.g., LASSO, LARS, 1-norm SVM [23], Laplacian radial basis kernel [4], etc. But other norms are also possible. For example, [4] proposed a family of non-Gaussian radial basis kernels for SVM in the form $K(x, y) = \exp\left(-\rho \sum_i |x_i - y_i|^{\alpha}\right)$, where x and y are data points in high dimensions; and [4] showed that α ≤ 1 (even α = 0) in some cases produced better results in histogram-based image classifications. The lα norm with α < 1, which may initially appear strange, is now well-understood to be a natural measure of sparsity [6]. In the extreme case, when α → 0+, the lα norm approaches the Hamming norm (i.e., the number of non-zeros in the vector).
Therefore, there is the natural demand in science and engineering for dimension reductions in the lα norm other than l₂. While the method of normal random projections for the l₂ norm [22] has become very popular recently, we have to resort to more general methodologies for 0 < α < 2.

The idea of stable random projections is to multiply A with a random projection matrix R ∈ ℝ^{D×k} (k ≪ D). The matrix B = A × R ∈ ℝ^{n×k} will be much smaller than A. The entries of R are (typically) i.i.d. samples from a symmetric α-stable distribution [24], denoted by S(α, 1), where α is the index and 1 is the scale. We can then discard the original data matrix A because the projected matrix B now contains enough information to recover the original lα properties approximately.

A symmetric α-stable random variable is denoted by S(α, d), where d is the scale parameter. If x ∼ S(α, d), then its characteristic function (Fourier transform of the density function) would be

$$E\,\exp\!\left(\sqrt{-1}\,x t\right) = \exp\!\left(-d\,|t|^{\alpha}\right), \qquad (2)$$

whose inverse does not have a closed-form except for α = 2 (i.e., normal) or α = 1 (i.e., Cauchy).
Applying stable random projections on u₁ ∈ ℝ^D, u₂ ∈ ℝ^D yields, respectively, v₁ = Rᵀu₁ ∈ ℝ^k and v₂ = Rᵀu₂ ∈ ℝ^k. By the properties of Fourier transforms, the projected differences, v_{1,j} − v_{2,j}, j = 1, 2, ..., k, are i.i.d. samples of the stable distribution S(α, d_(α)), i.e.,

$$x_j = v_{1,j} - v_{2,j} \sim S(\alpha, d_{(\alpha)}), \qquad j = 1, 2, \dots, k. \qquad (3)$$
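For readers who want to experiment with this pipeline, the following sketch draws the required α-stable entries with the standard Chambers-Mallows-Stuck recipe and checks (3) empirically via the median estimator introduced below. It is our illustration, not code from the paper; the chosen α, D, k and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sym_stable(alpha, size):
    """Symmetric alpha-stable S(alpha, 1) via the Chambers-Mallows-Stuck method."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    if alpha == 1.0:
        return np.tan(V)  # Cauchy special case
    return (np.sin(alpha * V) / np.cos(V) ** (1 / alpha)
            * (np.cos(V - alpha * V) / W) ** ((1 - alpha) / alpha))

alpha, D, k = 1.5, 10_000, 100
u1, u2 = rng.standard_normal(D), rng.standard_normal(D)
R = sym_stable(alpha, (D, k))      # projection matrix with S(alpha, 1) entries
x = (u1 - u2) @ R                  # k i.i.d. samples from S(alpha, d), as in (3)
d_true = np.sum(np.abs(u1 - u2) ** alpha)

# Sanity check with the sample median estimator (4), calibrating the
# denominator median{|S(alpha, 1)|}^alpha by Monte Carlo:
m0 = np.median(np.abs(sym_stable(alpha, 200_000))) ** alpha
print(f"d = {d_true:.1f}, median estimate = {np.median(np.abs(x) ** alpha) / m0:.1f}")
```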
Thus, the task is to estimate the scale parameter from k i.i.d. samples x_j ∼ S(α, d_(α)). Because no closed-form density functions are available except for α = 1, 2, the estimation task is challenging when we seek estimators that are both accurate and computationally efficient.

For general 0 < α < 2, a widely used estimator is based on the sample inter-quantiles [7, 20], which can be simplified to be the sample median estimator by choosing the 0.75–0.25 sample quantiles and using the symmetry of S(α, d_(α)). That is

$$\hat{d}_{(\alpha),me} = \frac{\mathrm{median}\{|x_j|^{\alpha},\ j = 1, 2, \dots, k\}}{\left(\mathrm{median}\{|S(\alpha, 1)|\}\right)^{\alpha}}. \qquad (4)$$
It has been well-known that the sample median estimator is not accurate, especially when the sample size k is not too large. Recently, [13] proposed various estimators based on the geometric mean and the harmonic mean of the samples. The harmonic mean estimator only works for small α. The geometric mean estimator has nice properties including closed-form variances, closed-form tail bounds in exponential forms, and very importantly, an analog of the Johnson-Lindenstrauss (JL) Lemma [10] for dimension reduction in lα. The geometric mean estimator, however, can still be improved for certain α, especially for large samples (e.g., as k → ∞).
1.1 Our Contribution: the Fractional Power Estimator
The fractional power estimator, with a simple unified format for all 0 < α ≤ 2, is (surprisingly) near-optimal in the Cramér-Rao sense (i.e., when k → ∞, its variance is close to the Cramér-Rao lower bound). In particular, it achieves the Cramér-Rao bound when α = 2 and α → 0+.

The basic idea is straightforward. We first obtain an unbiased estimator of d^λ_(α), denoted by R̂_(α),λ. We then estimate d_(α) by R̂^{1/λ}_(α),λ, which can be improved by removing the O(1/k) bias (this consequently also reduces the variance) using Taylor expansions. We choose λ = λ*(α) to minimize the theoretical asymptotic variance. We prove that λ*(α) is the solution to a simple convex program, i.e., λ*(α) can be pre-computed and treated as a constant for every α. The main computation involves only $\left(\sum_{j=1}^{k}|x_j|^{\lambda^{*}\alpha}\right)^{1/\lambda^{*}}$; and hence this estimator is also computationally efficient.
1.2 Applications
The method of stable random projections is useful for efficiently computing the lα properties (norms or distances) in massive data, using a small (memory) space.

• Data stream computations. Massive data streams are fundamental in many modern data processing applications [1, 2, 5, 9]. It is common practice to store only a very small sketch of the streams to efficiently compute the lα norms of the individual streams or the lα distances between a pair of streams. For example, in some cases, we only need to visually monitor the time history of the lα distances; and approximate answers often suffice.
One interesting special case is to estimate the Hamming norms (or distances) using the fact that, when α → 0+, $d_{(\alpha)} = \sum_{i=1}^{D}|u_{1,i}-u_{2,i}|^{\alpha}$ approaches the total number of non-zeros in $\{|u_{1,i}-u_{2,i}|\}_{i=1}^{D}$, i.e., the Hamming distance [5]. One may ask why not just (binary) quantize the data and then apply normal random projections to the binary data. [5] considered that the data are dynamic (i.e., frequent addition/subtraction) and hence pre-quantizing the data would not work. With stable random projections, we only need to update the corresponding sketches whenever the data are updated.
• Computing all pairwise distances. In many applications including distance-based clustering, classifications, and kernels (e.g., for SVM), we only need the pairwise distances. Computing all pairwise distances of A ∈ ℝ^{n×D} would cost O(n²D), which can be significantly reduced to O(nDk + n²k) by stable random projections. The cost reduction will be more considerable when the original datasets are too large for the physical memory.

• Estimating lα distances online. While it is often infeasible to store the original matrix A in the memory, it is also often infeasible to materialize all pairwise distances in A. Thus, in applications such as online learning, databases, search engines, online recommendation systems, and online market-basket analysis, it is often more efficient if we store B ∈ ℝ^{n×k} in the memory and estimate any pairwise distance in A on the fly only when it is necessary.

When we treat α as a tuning parameter, i.e., re-computing the lα distances for many different α, stable random projections will be even more desirable as a cost-saving device.
2 Previous Estimators
We assume k i.i.d. samples x_j ∼ S(α, d_(α)), j = 1, 2, ..., k. We list several previous estimators.
• The geometric mean estimator is recommended in [13] for α < 2 (a code sketch implementing it appears after this list):

$$\hat{d}_{(\alpha),gm} = \frac{\prod_{j=1}^{k}|x_j|^{\alpha/k}}{\left[\frac{2}{\pi}\,\Gamma\!\left(\frac{\alpha}{k}\right)\Gamma\!\left(1-\frac{1}{k}\right)\sin\!\left(\frac{\pi}{2}\frac{\alpha}{k}\right)\right]^{k}}. \qquad (5)$$

$$\mathrm{Var}\!\left(\hat{d}_{(\alpha),gm}\right) = d_{(\alpha)}^{2}\left(\frac{\left[\frac{2}{\pi}\,\Gamma\!\left(\frac{2\alpha}{k}\right)\Gamma\!\left(1-\frac{2}{k}\right)\sin\!\left(\pi\frac{\alpha}{k}\right)\right]^{k}}{\left[\frac{2}{\pi}\,\Gamma\!\left(\frac{\alpha}{k}\right)\Gamma\!\left(1-\frac{1}{k}\right)\sin\!\left(\frac{\pi}{2}\frac{\alpha}{k}\right)\right]^{2k}}-1\right) \qquad (6)$$

$$= d_{(\alpha)}^{2}\,\frac{\pi^{2}}{12k}\left(\alpha^{2}+2\right)+O\!\left(\frac{1}{k^{2}}\right). \qquad (7)$$
• The harmonic mean estimator is recommended in [13] for 0 < α ≤ 0.344:

$$\hat{d}_{(\alpha),hm} = \frac{-\frac{2}{\pi}\,\Gamma(-\alpha)\sin\!\left(\frac{\pi}{2}\alpha\right)}{\sum_{j=1}^{k}|x_j|^{-\alpha}}\left(k-\left(\frac{-\pi\,\Gamma(-2\alpha)\sin(\pi\alpha)}{\left[\Gamma(-\alpha)\sin\!\left(\frac{\pi}{2}\alpha\right)\right]^{2}}-1\right)\right), \qquad (8)$$

$$\mathrm{Var}\!\left(\hat{d}_{(\alpha),hm}\right) = d_{(\alpha)}^{2}\,\frac{1}{k}\left(\frac{-\pi\,\Gamma(-2\alpha)\sin(\pi\alpha)}{\left[\Gamma(-\alpha)\sin\!\left(\frac{\pi}{2}\alpha\right)\right]^{2}}-1\right)+O\!\left(\frac{1}{k^{2}}\right). \qquad (9)$$
• For α = 2, the arithmetic mean estimator, $\frac{1}{k}\sum_{j=1}^{k}|x_j|^{2}$, is commonly used, which has variance $\frac{2}{k}d_{(2)}^{2}$. It can be improved by taking advantage of the marginal l₂ norms [17].
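The two main baselines above can be implemented in a few lines; this sketch (referenced in the first bullet) follows equations (5) and (8) directly, with a log-space product for numerical stability. It is our hedged re-implementation, not the authors' code.

```python
import numpy as np
from scipy.special import gamma

def d_gm(x, alpha):
    """Geometric mean estimator, equation (5); x holds k samples from S(alpha, d)."""
    k = len(x)
    c = (2 / np.pi) * gamma(alpha / k) * gamma(1 - 1 / k) * np.sin(np.pi * alpha / (2 * k))
    # product computed in log space to avoid under/overflow for large k
    return np.exp((alpha / k) * np.sum(np.log(np.abs(x)))) / c ** k

def d_hm(x, alpha):
    """Harmonic mean estimator, equation (8); intended for small alpha (<= 0.344)."""
    k = len(x)
    c = -(2 / np.pi) * gamma(-alpha) * np.sin(np.pi * alpha / 2)   # positive for 0 < alpha < 1
    ratio = -np.pi * gamma(-2 * alpha) * np.sin(np.pi * alpha) \
            / (gamma(-alpha) * np.sin(np.pi * alpha / 2)) ** 2
    return c / np.sum(np.abs(x) ** (-alpha)) * (k - (ratio - 1))
```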
3 The Fractional Power Estimator
The fractional power estimator takes advantage of the following statistical result in Lemma 1.
Lemma 1. Suppose x ∼ S(α, d_(α)). Then for −1 < λ < α,

$$E|x|^{\lambda} = d_{(\alpha)}^{\lambda/\alpha}\,\frac{2}{\pi}\,\Gamma\!\left(1-\frac{\lambda}{\alpha}\right)\Gamma(\lambda)\sin\!\left(\frac{\pi}{2}\lambda\right). \qquad (10)$$

If α = 2, i.e., x ∼ S(2, d_(2)) = N(0, 2d_(2)), then for any λ > −1,

$$E|x|^{\lambda} = d_{(2)}^{\lambda/2}\,\frac{2}{\pi}\,\Gamma\!\left(1-\frac{\lambda}{2}\right)\Gamma(\lambda)\sin\!\left(\frac{\pi}{2}\lambda\right) = d_{(2)}^{\lambda/2}\,\frac{2^{\lambda}\,\Gamma\!\left(\frac{\lambda+1}{2}\right)}{\sqrt{\pi}}. \qquad (11)$$
Proof: For 0 < α ≤ 2 and −1 < λ < α, (10) can be inferred directly from [24, Theorem 2.6.3]. For α = 2, the moment E|x|^λ exists for any λ > −1. (11) can be shown by directly integrating the Gaussian density (using the integral formula [8, 3.381.4]). The Euler reflection formula $\Gamma(1-z)\Gamma(z) = \frac{\pi}{\sin(\pi z)}$ and the duplication formula $\Gamma(z)\Gamma\!\left(z+\tfrac{1}{2}\right) = 2^{1-2z}\sqrt{\pi}\,\Gamma(2z)$ are handy.
The fractional power estimator is defined in Lemma 2. See the proof in Appendix A.
Lemma 2. Denoted by d̂_(α),fp, the fractional power estimator is defined as

$$\hat{d}_{(\alpha),fp} = \left(\frac{\frac{1}{k}\sum_{j=1}^{k}|x_j|^{\lambda^{*}\alpha}}{\frac{2}{\pi}\,\Gamma(1-\lambda^{*})\,\Gamma(\lambda^{*}\alpha)\sin\!\left(\frac{\pi}{2}\lambda^{*}\alpha\right)}\right)^{1/\lambda^{*}}\!\left(1-\frac{1}{k}\,\frac{1}{2\lambda^{*}}\left(\frac{1}{\lambda^{*}}-1\right)\left(\frac{\frac{2}{\pi}\,\Gamma(1-2\lambda^{*})\,\Gamma(2\lambda^{*}\alpha)\sin(\pi\lambda^{*}\alpha)}{\left[\frac{2}{\pi}\,\Gamma(1-\lambda^{*})\,\Gamma(\lambda^{*}\alpha)\sin\!\left(\frac{\pi}{2}\lambda^{*}\alpha\right)\right]^{2}}-1\right)\right), \qquad (12)$$

where

$$\lambda^{*} = \underset{-\frac{1}{2\alpha}<\lambda<\frac{1}{2}}{\operatorname{argmin}}\; g(\lambda;\alpha), \qquad g(\lambda;\alpha) = \frac{1}{\lambda^{2}}\left(\frac{\frac{2}{\pi}\,\Gamma(1-2\lambda)\,\Gamma(2\lambda\alpha)\sin(\pi\lambda\alpha)}{\left[\frac{2}{\pi}\,\Gamma(1-\lambda)\,\Gamma(\lambda\alpha)\sin\!\left(\frac{\pi}{2}\lambda\alpha\right)\right]^{2}}-1\right). \qquad (13)$$
Asymptotically (i.e., as k → ∞), the bias and variance of d̂_(α),fp are

$$E\!\left(\hat{d}_{(\alpha),fp}\right)-d_{(\alpha)} = O\!\left(\frac{1}{k^{2}}\right), \qquad (14)$$

$$\mathrm{Var}\!\left(\hat{d}_{(\alpha),fp}\right) = d_{(\alpha)}^{2}\,\frac{1}{\lambda^{*2}k}\left(\frac{\frac{2}{\pi}\,\Gamma(1-2\lambda^{*})\,\Gamma(2\lambda^{*}\alpha)\sin(\pi\lambda^{*}\alpha)}{\left[\frac{2}{\pi}\,\Gamma(1-\lambda^{*})\,\Gamma(\lambda^{*}\alpha)\sin\!\left(\frac{\pi}{2}\lambda^{*}\alpha\right)\right]^{2}}-1\right)+O\!\left(\frac{1}{k^{2}}\right). \qquad (15)$$
Note that in calculating d̂_(α),fp, the real computation only involves $\left(\sum_{j=1}^{k}|x_j|^{\lambda^{*}\alpha}\right)^{1/\lambda^{*}}$, because all other terms are basically constants and can be pre-computed.
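A hedged sketch of the full recipe: numerically minimize g(λ; α) on its interval to pre-compute λ*, then apply (12). The bounded minimizer, the small interval offsets, and the names are our choices, not the paper's.

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import minimize_scalar

def g(lam, alpha):
    """Variance factor of equation (13); valid for -1/(2 alpha) < lam < 1/2, lam != 0."""
    num = (2 / np.pi) * gamma(1 - 2 * lam) * gamma(2 * lam * alpha) * np.sin(np.pi * lam * alpha)
    den = ((2 / np.pi) * gamma(1 - lam) * gamma(lam * alpha)
           * np.sin(np.pi * lam * alpha / 2)) ** 2
    return (num / den - 1) / lam ** 2

def lam_star(alpha):
    """Pre-compute lambda* for 0 < alpha < 2 by minimizing the convex g."""
    lo = max(-1.0, -1.0 / (2.0 * alpha)) + 1e-6
    res = minimize_scalar(g, bounds=(lo, 0.5 - 1e-6), args=(alpha,), method="bounded")
    return res.x

def d_fp(x, alpha):
    """Fractional power estimator, equation (12)."""
    k, lam = len(x), lam_star(alpha)
    c1 = (2 / np.pi) * gamma(1 - lam) * gamma(lam * alpha) * np.sin(np.pi * lam * alpha / 2)
    base = (np.mean(np.abs(x) ** (lam * alpha)) / c1) ** (1 / lam)
    V = g(lam, alpha) * lam ** 2     # the bracketed variance factor
    return base * (1 - (1 / k) * (1 / (2 * lam)) * (1 / lam - 1) * V)
```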
Figure 1(a) plots g(λ; α) as a function of λ for many different values of α. Figure 1(b) plots the optimal λ* as a function of α. We can see that g(λ; α) is a convex function of λ and −1 < λ* < 1/2 (except for α = 2), which will be proved in Lemma 3.

[Figure 1 here: (a) the variance factor g(λ; α) against λ for α ranging from 2×10⁻¹⁶ (≈ 0+) to 2; (b) the optimal λ* against α.]
Figure 1: Left panel plots the variance factor g(λ; α) as a function of λ for different α, illustrating that g(λ; α) is a convex function of λ and that the optimal solutions (lowest points on the curves) are between −1 and 0.5 (α < 2). Note that there is a discontinuity between α → 2− and α = 2. Right panel plots the optimal λ* as a function of α. Since α = 2 is not included, we only see λ* < 0.5 in the figure.
3.1 Special cases
The discontinuity, λ*(2−) = 0.5 and λ*(2) = 1, reflects the fact that, for x ∼ S(α, d), E|x|^λ exists for −1 < λ < α when α < 2 and exists for any λ > −1 when α = 2.

When α = 2, since λ*(2) = 1, the fractional power estimator becomes $\frac{1}{k}\sum_{j=1}^{k}|x_j|^{2}$, i.e., the arithmetic mean estimator. We will from now on only consider 0 < α < 2.

When α → 0+, since λ*(0+) = −1, the fractional power estimator approaches the harmonic mean estimator, which is asymptotically optimal when α = 0+ [13].

When α → 1, since λ*(1) = 0 in the limit, the fractional power estimator has the same asymptotic variance as the geometric mean estimator.
3.2 The Asymptotic (Cramér-Rao) Efficiency

For an estimator d̂_(α), its variance, under certain regularity conditions, is lower-bounded by the Information inequality (also known as the Cramér-Rao bound) [11, Chapter 2], i.e., $\mathrm{Var}(\hat{d}_{(\alpha)}) \geq \frac{1}{k I(\alpha)}$. The Fisher Information I(α) can be approximated by computationally intensive procedures [19]. When α = 2, it is well-known that the arithmetic mean estimator attains the Cramér-Rao bound. When α = 0+, [13] has shown that the harmonic mean estimator is also asymptotically optimal. Therefore, our fractional power estimator achieves the Cramér-Rao bound, exactly when α = 2, and asymptotically when α = 0+.

The asymptotic (Cramér-Rao) efficiency is defined as the ratio of $\frac{1}{k I(\alpha)}$ to the asymptotic variance of d̂_(α) (d_(α) = 1 for simplicity). Figure 2 plots the efficiencies for all estimators we have mentioned, illustrating that the fractional power estimator is near-optimal in a wide range of α.
[Figure 2 here: asymptotic Cramér-Rao efficiency against α for the fractional power, geometric mean, harmonic mean, and sample median estimators; efficiency axis from 0.4 to 1.]
Figure 2: The asymptotic Cramér-Rao efficiencies of various estimators for 0 < α < 2, which are the ratios of 1/(k I(α)) to the asymptotic variances of the estimators. Here k is the sample size and I(α) is the Fisher Information (we use the numeric values in [19]). The asymptotic variance of the sample median estimator d̂_(α),me is computed from known statistical theory for sample quantiles. We can see that the fractional power estimator d̂_(α),fp is close to optimal in a wide range of α; and it always outperforms both the geometric mean and the harmonic mean estimators. Note that since we only consider α < 2, the efficiency of d̂_(α),fp does not achieve 100% when α → 2−.
3.3 Theoretical Properties
We can show that, when computing the fractional power estimator d̂_(α),fp, finding the optimal λ* only involves searching for the minimum on a convex curve in the narrow range λ* ∈ (max{−1, −1/(2α)}, 0.5). These properties theoretically ensure that the new estimator is well-defined and is numerically easy to compute. The proof of Lemma 3 is briefly sketched in Appendix B.

Lemma 3. Part 1:

$$g(\lambda;\alpha) = \frac{1}{\lambda^{2}}\left(\frac{\frac{2}{\pi}\,\Gamma(1-2\lambda)\,\Gamma(2\lambda\alpha)\sin(\pi\lambda\alpha)}{\left[\frac{2}{\pi}\,\Gamma(1-\lambda)\,\Gamma(\lambda\alpha)\sin\!\left(\frac{\pi}{2}\lambda\alpha\right)\right]^{2}}-1\right) \qquad (16)$$

is a convex function of λ.

Part 2: For 0 < α < 2, the optimal $\lambda^{*} = \operatorname{argmin}_{-\frac{1}{2\alpha}<\lambda<\frac{1}{2}} g(\lambda;\alpha)$ satisfies −1 < λ* < 0.5.
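A quick numerical spot check of Lemma 3 (a sanity check, not a proof), reusing g() and lam_star() from the sketch above:

```python
# The minimizer of g should stay strictly inside (-1, 0.5) for 0 < alpha < 2.
for alpha in (0.2, 0.5, 1.0, 1.5, 1.9):
    ls = lam_star(alpha)
    assert max(-1.0, -1.0 / (2.0 * alpha)) < ls < 0.5
    print(f"alpha = {alpha:.1f}: lambda* = {ls:+.3f}, g(lambda*) = {g(ls, alpha):.3f}")
```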
3.4 Comparing Variances at Finite Samples
It is also important to understand the small-sample performance of the estimators. Figure 3 plots the empirical mean square errors (MSE) from simulations for the fractional power estimator, the harmonic mean estimator, and the sample median estimator. The MSE for the geometric mean estimator can be computed exactly without simulations.

Figure 3 indicates that the fractional power estimator d̂_(α),fp also has good small-sample performance unless α is close to 2. After k ≥ 50, the advantage of d̂_(α),fp becomes noticeable even when α is very close to 2. It is also clear that the sample median estimator has poor small-sample performance; but even at very large k, its performance is not that good except when α is about 1.
[Figure 3 here: four panels of empirical MSE against α at k = 10, 50, 100, 500, comparing the fractional power, geometric mean, harmonic mean, and sample median estimators.]
Figure 3: We simulate the mean square errors (MSE) (10⁶ simulations at every α and k) for the harmonic mean estimator (0 < α ≤ 0.344 only) and the fractional power estimator. We compute the MSE exactly for the geometric mean estimator (for 0.344 < α < 2). The fractional power estimator has good accuracy (small MSE) at reasonable sample sizes (e.g., k ≥ 50). But even at small samples (e.g., k = 10), it is quite accurate except when α approaches 2.
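A scaled-down Monte Carlo in the spirit of Figure 3 (the paper uses 10⁶ repetitions; we use 2000), reusing sym_stable(), d_gm() and d_fp() from the earlier sketches. Since sym_stable draws from S(α, 1), the true scale is 1:

```python
def mse(est, alpha, k, reps=2000):
    return np.mean([(est(sym_stable(alpha, k), alpha) - 1.0) ** 2 for _ in range(reps)])

alpha, k = 1.0, 50
print("geometric mean  :", mse(d_gm, alpha, k))
print("fractional power:", mse(d_fp, alpha, k))
```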
4 Discussion
The fractional power estimator $\hat{d}_{(\alpha),fp} \propto \left(\sum_{j=1}^{k}|x_j|^{\lambda^{*}\alpha}\right)^{1/\lambda^{*}}$ can be treated as a linear estimator in $\sum_{j=1}^{k}|x_j|^{\lambda^{*}\alpha}$, because the power 1/λ* is just a constant. However, $\sum_{j=1}^{k}|x_j|^{\lambda^{*}\alpha}$ is not a metric because λ*α < 1, as shown in Lemma 3. Thus our result does not conflict with the celebrated impossibility result [3], which proved that there is no hope of recovering the original l₁ distances using linear projections and linear estimators without incurring large errors.

Although the fractional power estimator achieves near-optimal asymptotic variance, analyzing its tail bounds does not appear straightforward. In fact, when α approaches 2, this estimator does not have finite moments much higher than the second order, suggesting poor tail behavior. Our additional simulations (not included in this paper) indicate that d̂_(α),fp still has comparable tail probability behavior to the geometric mean estimator, when α ≤ 1.

Finally, we should mention that the method of stable random projections does not take advantage of the data sparsity, while high-dimensional data (e.g., text data) are often highly sparse. A new method called Conditional Random Sampling (CRS) [14-16] may be more preferable in highly sparse data.
5 Conclusion
In massive datasets such as the Web and massive data streams, dimension reductions are often critical for many applications including clustering, classifications, recommendation systems, and Web search, because the data size may be too large for the physical memory or even for the hard disk, and sometimes only one pass of the data can be afforded for building statistical learning models.

While there are already many papers on dimension reductions in the l₂ norm, this paper focuses on the lα norm for 0 < α ≤ 2 using stable random projections, as it has become increasingly popular in machine learning to consider lα norms other than l₂. It is also possible to treat α as an additional tuning parameter and re-run the learning algorithms many times for better performance.

Our main contribution is the fractional power estimator for stable random projections. This estimator, with a unified format for all 0 < α ≤ 2, is computationally efficient and (surprisingly) is also near-optimal in terms of the asymptotic variance. We also prove some important theoretical properties (variance, convexity, etc.) to show that this estimator is well-behaved. We expect that this work will help advance the state-of-the-art of dimension reductions in the lα norms.
A Proof of Lemma 2

By Lemma 1, we first seek an unbiased estimator of d^λ_(α), denoted by R̂_(α),λ:

$$\hat{R}_{(\alpha),\lambda} = \frac{\frac{1}{k}\sum_{j=1}^{k}|x_j|^{\lambda\alpha}}{\frac{2}{\pi}\,\Gamma(1-\lambda)\,\Gamma(\lambda\alpha)\sin\!\left(\frac{\pi}{2}\lambda\alpha\right)}, \qquad -\frac{1}{\alpha}<\lambda<1,$$

whose variance is

$$\mathrm{Var}\!\left(\hat{R}_{(\alpha),\lambda}\right) = \frac{d_{(\alpha)}^{2\lambda}}{k}\left(\frac{\frac{2}{\pi}\,\Gamma(1-2\lambda)\,\Gamma(2\lambda\alpha)\sin(\pi\lambda\alpha)}{\left[\frac{2}{\pi}\,\Gamma(1-\lambda)\,\Gamma(\lambda\alpha)\sin\!\left(\frac{\pi}{2}\lambda\alpha\right)\right]^{2}}-1\right), \qquad -\frac{1}{2\alpha}<\lambda<\frac{1}{2}.$$

A biased estimator of d_(α) would be simply R̂^{1/λ}_(α),λ, which has O(1/k) bias. This bias can be removed to an extent by Taylor expansions [11, Theorem 6.1.1]. While it is well-known that bias-corrections are not always beneficial because of the bias-variance trade-off phenomenon, in our case it is a good idea to conduct the bias-correction because the function f(x) = x^{1/λ} is convex for x > 0. Note that $f'(x) = \frac{1}{\lambda}x^{1/\lambda-1}$ and $f''(x) = \frac{1}{\lambda}\left(\frac{1}{\lambda}-1\right)x^{1/\lambda-2} > 0$, assuming $-\frac{1}{2\alpha} < \lambda < \frac{1}{2}$. Because f(x) is convex, removing the O(1/k) bias will also lead to a smaller variance.
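For completeness, here is our rendering of the delta-method step behind this correction (the standard second-order Taylor expansion, with θ = d^λ_(α) and V denoting the bracketed variance factor in Var(R̂_(α),λ) above):

$$E\,\hat{R}_{(\alpha),\lambda}^{1/\lambda} \approx \theta^{1/\lambda}+\tfrac{1}{2}\,f''(\theta)\,\mathrm{Var}\!\left(\hat{R}_{(\alpha),\lambda}\right) = d_{(\alpha)}\left(1+\frac{1}{2k}\,\frac{1}{\lambda}\left(\frac{1}{\lambda}-1\right)V\right),$$

so multiplying R̂^{1/λ}_(α),λ by $1-\frac{1}{k}\frac{1}{2\lambda}\left(\frac{1}{\lambda}-1\right)V$ removes the O(1/k) bias, which is exactly the correction factor used below.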
We call this new estimator the "fractional power" estimator:

$$\hat{d}_{(\alpha),fp,\lambda} = \hat{R}_{(\alpha),\lambda}^{1/\lambda}\left(1-\frac{1}{2}\,\frac{1}{\lambda}\left(\frac{1}{\lambda}-1\right)\frac{\mathrm{Var}\!\left(\hat{R}_{(\alpha),\lambda}\right)}{d_{(\alpha)}^{2\lambda}}\right) = \left(\frac{\frac{1}{k}\sum_{j=1}^{k}|x_j|^{\lambda\alpha}}{\frac{2}{\pi}\,\Gamma(1-\lambda)\,\Gamma(\lambda\alpha)\sin\!\left(\frac{\pi}{2}\lambda\alpha\right)}\right)^{1/\lambda}\!\left(1-\frac{1}{k}\,\frac{1}{2\lambda}\left(\frac{1}{\lambda}-1\right)\left(\frac{\frac{2}{\pi}\,\Gamma(1-2\lambda)\,\Gamma(2\lambda\alpha)\sin(\pi\lambda\alpha)}{\left[\frac{2}{\pi}\,\Gamma(1-\lambda)\,\Gamma(\lambda\alpha)\sin\!\left(\frac{\pi}{2}\lambda\alpha\right)\right]^{2}}-1\right)\right),$$

where we plug in the estimated d̂_(α). The asymptotic variance would be

$$\mathrm{Var}\!\left(\hat{d}_{(\alpha),fp,\lambda}\right) = \mathrm{Var}\!\left(\hat{R}_{(\alpha),\lambda}\right)\frac{1}{\lambda^{2}}\,d_{(\alpha)}^{2\lambda\left(\frac{1}{\lambda}-1\right)}+O\!\left(\frac{1}{k^{2}}\right) = d_{(\alpha)}^{2}\,\frac{1}{\lambda^{2}k}\left(\frac{\frac{2}{\pi}\,\Gamma(1-2\lambda)\,\Gamma(2\lambda\alpha)\sin(\pi\lambda\alpha)}{\left[\frac{2}{\pi}\,\Gamma(1-\lambda)\,\Gamma(\lambda\alpha)\sin\!\left(\frac{\pi}{2}\lambda\alpha\right)\right]^{2}}-1\right)+O\!\left(\frac{1}{k^{2}}\right).$$

The optimal λ, denoted by λ*, is then

$$\lambda^{*} = \underset{-\frac{1}{2\alpha}<\lambda<\frac{1}{2}}{\operatorname{argmin}}\;\frac{1}{\lambda^{2}}\left(\frac{\frac{2}{\pi}\,\Gamma(1-2\lambda)\,\Gamma(2\lambda\alpha)\sin(\pi\lambda\alpha)}{\left[\frac{2}{\pi}\,\Gamma(1-\lambda)\,\Gamma(\lambda\alpha)\sin\!\left(\frac{\pi}{2}\lambda\alpha\right)\right]^{2}}-1\right).$$
B Proof of Lemma 3

We sketch the basic steps, and we direct readers to the additional supporting material for more detail. We use the infinite-product representations of the Gamma and sine functions [8, 8.322, 1.431.1],

$$\Gamma(z) = \frac{\exp(-\gamma_e z)}{z}\prod_{s=1}^{\infty}\left(1+\frac{z}{s}\right)^{-1}\exp\!\left(\frac{z}{s}\right), \qquad \sin(z) = z\prod_{s=1}^{\infty}\left(1-\frac{z^{2}}{s^{2}\pi^{2}}\right),$$

to re-write g(λ; α) as
$$g(\lambda;\alpha) = \frac{1}{\lambda^{2}}\left(M(\lambda;\alpha)-1\right) = \frac{1}{\lambda^{2}}\left(\prod_{s=1}^{\infty}f_s(\lambda;\alpha)-1\right),$$

$$f_s(\lambda;\alpha) = \left(1-\frac{\lambda}{s}\right)^{2}\left(1-\frac{2\lambda}{s}\right)^{-1}\left(1-\frac{\lambda\alpha}{s}\right)\left(1+\frac{\lambda\alpha}{s}\right)^{3}\left(1+\frac{2\lambda\alpha}{s}\right)^{-1}\left(1-\frac{\lambda^{2}\alpha^{2}}{4s^{2}}\right)^{-2}.$$
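This product representation can be checked numerically: truncating the product at a large S should approach the closed-form M(λ; α). The following is our own sanity check (not from the paper); the truncation level is arbitrary.

```python
import numpy as np
from scipy.special import gamma

def M(lam, alpha):
    num = (2 / np.pi) * gamma(1 - 2 * lam) * gamma(2 * lam * alpha) * np.sin(np.pi * lam * alpha)
    den = ((2 / np.pi) * gamma(1 - lam) * gamma(lam * alpha)
           * np.sin(np.pi * lam * alpha / 2)) ** 2
    return num / den

def M_product(lam, alpha, S=200_000):   # truncation error decays like O(1/S)
    s = np.arange(1, S + 1, dtype=float)
    f = ((1 - lam / s) ** 2 / (1 - 2 * lam / s)
         * (1 - lam * alpha / s) * (1 + lam * alpha / s) ** 3
         / (1 + 2 * lam * alpha / s)
         / (1 - (lam * alpha) ** 2 / (4 * s ** 2)) ** 2)
    return float(np.prod(f))

lam, alpha = 0.2, 1.5
print(M(lam, alpha), M_product(lam, alpha))   # should agree to several decimals
```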
With respect to λ, the first two derivatives of g(λ; α) are

$$\frac{\partial g}{\partial\lambda} = \frac{1}{\lambda^{2}}\left(-\frac{2}{\lambda}\left(M-1\right)+M\sum_{s=1}^{\infty}\frac{\partial\log f_s}{\partial\lambda}\right),$$

$$\frac{\partial^{2}g}{\partial\lambda^{2}} = \frac{M}{\lambda^{2}}\left(\frac{6}{\lambda^{2}}+\sum_{s=1}^{\infty}\frac{\partial^{2}\log f_s}{\partial\lambda^{2}}+\left(\sum_{s=1}^{\infty}\frac{\partial\log f_s}{\partial\lambda}\right)^{2}-\frac{4}{\lambda}\sum_{s=1}^{\infty}\frac{\partial\log f_s}{\partial\lambda}\right)-\frac{6}{\lambda^{4}}.$$
Also,

$$\sum_{s=1}^{\infty}\frac{\partial\log f_s}{\partial\lambda} = 2\lambda\sum_{s=1}^{\infty}\frac{1}{s^{2}-3s\lambda+2\lambda^{2}} + \lambda\alpha^{2}\sum_{s=1}^{\infty}\left(\frac{4}{4s^{2}-\lambda^{2}\alpha^{2}}+\frac{2}{s^{2}+3s\lambda\alpha+2\lambda^{2}\alpha^{2}}-\frac{2}{s^{2}-\lambda^{2}\alpha^{2}}\right),$$

$$\sum_{s=1}^{\infty}\frac{\partial^{2}\log f_s}{\partial\lambda^{2}} = \sum_{s=1}^{\infty}\left(\frac{-2}{(s-\lambda)^{2}}+\frac{4}{(s-2\lambda)^{2}}\right)+\alpha^{2}\sum_{s=1}^{\infty}\left(\frac{2}{(2s-\lambda\alpha)^{2}}-\frac{1}{(s-\lambda\alpha)^{2}}-\frac{3}{(s+\lambda\alpha)^{2}}+\frac{4}{(s+2\lambda\alpha)^{2}}+\frac{2}{(2s+\lambda\alpha)^{2}}\right),$$
$$\sum_{s=1}^{\infty}\frac{\partial^{3}\log f_s}{\partial\lambda^{3}} = \sum_{s=1}^{\infty}\left(\frac{-4}{(s-\lambda)^{3}}+\frac{16}{(s-2\lambda)^{3}}\right)+2\alpha^{3}\sum_{s=1}^{\infty}\left(\frac{2}{(2s-\lambda\alpha)^{3}}-\frac{1}{(s-\lambda\alpha)^{3}}+\frac{3}{(s+\lambda\alpha)^{3}}-\frac{8}{(s+2\lambda\alpha)^{3}}-\frac{2}{(2s+\lambda\alpha)^{3}}\right),$$

$$\sum_{s=1}^{\infty}\frac{\partial^{4}\log f_s}{\partial\lambda^{4}} = \sum_{s=1}^{\infty}\left(\frac{-12}{(s-\lambda)^{4}}+\frac{96}{(s-2\lambda)^{4}}\right)+6\alpha^{4}\sum_{s=1}^{\infty}\left(\frac{2}{(2s-\lambda\alpha)^{4}}-\frac{1}{(s-\lambda\alpha)^{4}}-\frac{3}{(s+\lambda\alpha)^{4}}+\frac{16}{(s+2\lambda\alpha)^{4}}+\frac{2}{(2s+\lambda\alpha)^{4}}\right).$$
To show $\frac{\partial^{2}g}{\partial\lambda^{2}} > 0$, it suffices to show $\lambda^{4}\frac{\partial^{2}g}{\partial\lambda^{2}} > 0$, which can be shown based on its own second derivative (and hence we need $\sum_{s=1}^{\infty}\frac{\partial^{4}\log f_s}{\partial\lambda^{4}}$). Here we consider λ ≠ 0 to avoid triviality. To complete the proof, we use some properties of the Riemann Zeta function and the infinite countability.

Next, we show that λ* < −1 does not satisfy $\frac{\partial g(\lambda;\alpha)}{\partial\lambda} = 0$, which is equivalent to h(λ*) = 1, where

$$h(\lambda^{*}) = M(\lambda^{*})\left(1-\frac{\lambda^{*}}{2}\sum_{s=1}^{\infty}\left.\frac{\partial\log f_s}{\partial\lambda}\right|_{\lambda^{*}}\right) = 1.$$

We show that when λ < −1, $\frac{\partial h}{\partial\lambda} > 0$, i.e., h(λ) < h(−1). We then show $\frac{\partial h(-1)}{\partial\alpha} < 0$ for 0 < α < 0.5; and hence h(−1; α) < h(−1; 0+) = 1. Therefore, we must have λ* > −1.
References
[1] C. Aggarwal, editor. Data Streams: Models and Algorithms. Springer, New York, NY, 2007.
[2] B. Babcock, S. Babu, M. Datar, R. Motwani, and J. Widom. Models and issues in data stream systems. In PODS, 1–16, 2002.
[3] B. Brinkman and M. Charikar. On the impossibility of dimension reduction in l₁. Journal of ACM, 52(2):766–788, 2005.
[4] O. Chapelle, P. Haffner, and V. Vapnik. Support vector machines for histogram-based image classification. IEEE Trans. Neural Networks, 10(5):1055–1064, 1999.
[5] G. Cormode, M. Datar, P. Indyk, and S. Muthukrishnan. Comparing data streams using hamming norms (how to zero in). In VLDB, 335–345, 2002.
[6] D. Donoho. Compressed sensing. IEEE Trans. Inform. Theory, 52(4):1289–1306, 2006.
[7] E. Fama and R. Roll. Parameter estimates for symmetric stable distributions. JASA, 66(334):331–338, 1971.
[8] I. Gradshteyn and I. Ryzhik. Table of Integrals, Series, and Products. Academic Press, New York, fifth edition, 1994.
[9] P. Indyk. Stable distributions, pseudorandom generators, embeddings, and data stream computation. Journal of ACM, 53(3):307–323, 2006.
[10] W. Johnson and J. Lindenstrauss. Extensions of Lipschitz mapping into Hilbert space. Contemporary Mathematics, 26:189–206, 1984.
[11] E. Lehmann and G. Casella. Theory of Point Estimation. Springer, New York, NY, second edition, 1998.
[12] E. Leopold and J. Kindermann. Text categorization with support vector machines. How to represent texts in input space? Machine Learning, 46(1-3):423–444, 2002.
[13] P. Li. Estimators and tail bounds for dimension reduction in lα (0 < α ≤ 2) using stable random projections. In SODA, 2008.
[14] P. Li and K. Church. Using sketches to estimate associations. In HLT/EMNLP, 708–715, 2005.
[15] P. Li and K. Church. A sketch algorithm for estimating two-way and multi-way associations. Computational Linguistics, 33(3):305–354, 2007.
[16] P. Li, K. Church, and T. Hastie. Conditional random sampling: A sketch-based sampling technique for sparse data. In NIPS, 873–880, 2007.
[17] P. Li, T. Hastie, and K. Church. Improving random projections using marginal information. In COLT, 635–649, 2006.
[18] P. Li, T. Hastie, and K. Church. Nonlinear estimators and tail bounds for dimensional reduction in l₁ using Cauchy random projections. Journal of Machine Learning Research (to appear).
[19] M. Matsui and A. Takemura. Some improvements in numerical evaluation of symmetric stable density and its derivatives. Communications on Statistics-Theory and Methods, 35(1):149–172, 2006.
[20] J. McCulloch. Simple consistent estimators of stable distribution parameters. Communications on Statistics-Simulation, 15(4):1109–1136, 1986.
[21] J. Rennie, L. Shih, J. Teevan, and D. Karger. Tackling the poor assumptions of naive Bayes text classifiers. In ICML, 616–623, 2003.
[22] S. Vempala. The Random Projection Method. American Mathematical Society, Providence, RI, 2004.
[23] J. Zhu, S. Rosset, T. Hastie, and R. Tibshirani. 1-norm support vector machines. In NIPS, Vancouver, 2003.
[24] V. M. Zolotarev. One-dimensional Stable Distributions. American Mathematical Society, Providence, RI, 1986.
2,454 | 3,226 | People Tracking with the Laplacian Eigenmaps Latent Variable Model

Zhengdong Lu (CSEE, OGI, OHSU; [email protected])
Miguel Á. Carreira-Perpiñán (EECS, UC Merced; http://eecs.ucmerced.edu)
Cristian Sminchisescu (University of Bonn; sminchisescu.ins.uni-bonn.de)
Abstract
Reliably recovering 3D human pose from monocular video requires models that
bias the estimates towards typical human poses and motions. We construct priors for people tracking using the Laplacian Eigenmaps Latent Variable Model
(LELVM). LELVM is a recently introduced probabilistic dimensionality reduction model that combines the advantages of latent variable models (a multimodal probability density for latent and observed variables, and globally differentiable nonlinear mappings for reconstruction and dimensionality reduction) with those of spectral manifold learning methods (no local optima, ability to unfold highly nonlinear manifolds, and good practical scaling to latent spaces of high dimension). LELVM is computationally efficient, simple to learn from sparse training
data, and compatible with standard probabilistic trackers such as particle filters.
We analyze the performance of a LELVM-based probabilistic sigma point mixture
tracker in several real and synthetic human motion sequences and demonstrate that
LELVM not only provides sufficient constraints for robust operation in the presence of missing, noisy and ambiguous image measurements, but also compares
favorably with alternative trackers based on PCA or GPLVM priors.
Recent research in reconstructing articulated human motion has focused on methods that can exploit
available prior knowledge on typical human poses or motions in an attempt to build more reliable
algorithms. The high-dimensionality of human ambient pose space (between 30 and 60 joint angles or joint positions, depending on the desired accuracy level) makes exhaustive search prohibitively
expensive. This has negative impact on existing trackers, which are often not sufficiently reliable at
reconstructing human-like poses, self-initializing or recovering from failure. Such difficulties have
stimulated research in algorithms and models that reduce the effective working space, either using generic search focusing methods (annealing, state space decomposition, covariance scaling) or
by exploiting specific problem structure (e.g. kinematic jumps). Experience with these procedures
has nevertheless shown that any search strategy, no matter how effective, can be made significantly
more reliable if restricted to low-dimensional state spaces. This permits a more thorough exploration of the typical solution space, for a given, comparatively similar computational effort as a
high-dimensional method. The argument correlates well with the belief that the human pose space,
although high-dimensional in its natural ambient parameterization, has a significantly lower perceptual (latent or intrinsic) dimensionality, at least in a practical sense: many poses that are possible
are so improbable in many real-world situations that it pays off to encode them with low accuracy.
A perceptual representation has to be powerful enough to capture the diversity of human poses in a
sufficiently broad domain of applicability (the task domain), yet compact and analytically tractable
for search and optimization. This justifies the use of models that are nonlinear and low-dimensional
(able to unfold highly nonlinear manifolds with low distortion), yet probabilistically motivated and
globally continuous for efficient optimization. Reducing dimensionality is not the only goal: perceptual representations have to preserve critical properties of the ambient space. Reliable tracking
needs locality: nearby regions in ambient space have to be mapped to nearby regions in latent space.
If this does not hold, the tracker is forced to make unrealistically large and difficult-to-predict jumps in latent space in order to follow smooth trajectories in the joint-angle ambient space.
In this paper we propose to model priors for articulated motion using a recently introduced probabilistic dimensionality reduction method, the Laplacian Eigenmaps Latent Variable Model (LELVM)
[1]. Section 1 discusses the requirements of priors for articulated motion in the context of probabilistic and spectral methods for manifold learning, and section 2 describes LELVM and shows how
it combines both types of methods in a principled way. Section 3 describes our tracking framework (using a particle filter) and section 4 shows experiments with synthetic and real human motion
sequences using LELVM priors learned from motion-capture data.
Related work: There is significant work in human tracking, using both generative and discriminative methods. Due to space limitations, we will focus on the more restricted class of 3D generative
algorithms based on learned state priors, and not aim at a full literature review. Deriving compact prior representations for tracking people or other articulated objects is an active research field,
steadily growing with the increased availability of human motion capture data. Howe et al. and
Sidenbladh et al. [2] propose Gaussian mixture representations of short human motion fragments
(snippets) and integrate them in a Bayesian MAP estimation framework that uses 2D human joint
measurements, independently tracked by scaled prismatic models [3]. Brand [4] models the human
pose manifold using a Gaussian mixture and uses an HMM to infer the mixture component index
based on a temporal sequence of human silhouettes. Sidenbladh et al. [5] use similar dynamic priors
and exploit ideas in texture synthesis (efficient nearest-neighbor search for similar motion fragments at runtime) in order to build a particle-filter tracker with observation model based on contour
and image intensity measurements. Sminchisescu and Jepson [6] propose a low-dimensional probabilistic model based on fitting a parametric reconstruction mapping (sparse radial basis function) and
a parametric latent density (Gaussian mixture) to the embedding produced with a spectral method.
They track humans walking and involved in conversations using a Bayesian multiple hypotheses
framework that fuses contour and intensity measurements. Urtasun et al. [7] use a dynamic MAP
estimation framework based on a GPLVM and 2D human joint correspondences obtained from an
independent image-based tracker. Li et al. [8] use a coordinated mixture of factor analyzers within a
particle filtering framework, in order to reconstruct human motion in multiple views using chamfer
matching to score different configurations. Wang et al. [9] learn a latent space with associated dynamics, where both the dynamics and observation mapping are Gaussian processes, and Urtasun et
al. [10] use it for tracking. Taylor et al. [11] also learn a binary latent space with dynamics (using
an energy-based model) but apply it to synthesis, not tracking. Our work learns a static, generative
low-dimensional model of poses and integrates it into a particle filter for tracking. We show its
ability to work with real or partially missing data and to track multiple activities.
1 Priors for articulated human pose
We consider the problem of learning a probabilistic low-dimensional model of human articulated
motion. Call $y \in \mathbb{R}^D$ the representation in ambient space of the articulated pose of a person. In this paper, $y$ contains the 3D locations of anywhere between 10 and 60 markers located on the person's joints (other representations such as joint angles are also possible). The values of $y$ have been normalised for translation and rotation in order to remove rigid motion and leave only the articulated motion (see section 3 for how we track the rigid motion). While $y$ is high-dimensional, the motion pattern lives in a low-dimensional manifold because most values of $y$ yield poses that violate body constraints or are simply atypical for the motion type considered. Thus we want to model $y$ in terms of a small number of latent variables $x$, given a collection of poses $\{y_n\}_{n=1}^N$ (recorded from a human
with motion-capture technology). The model should satisfy the following: (1) It should define a
probability density for x and y, to be able to deal with noise (in the image or marker measurements)
and uncertainty (from missing data due to occlusion or markers that drop), and to allow integration
in a sequential Bayesian estimation framework. The density model should also be flexible enough
to represent multimodal densities. (2) It should define mappings for dimensionality reduction $F\colon y \to x$ and reconstruction $f\colon x \to y$ that apply to any value of $x$ and $y$ (not just those in the
training set); and such mappings should be defined on a global coordinate system, be continuous
(to avoid physically impossible discontinuities) and differentiable (to allow efficient optimisation
when tracking), yet flexible enough to represent the highly nonlinear manifold of articulated poses.
From a statistical machine learning point of view, this is precisely what latent variable models
(LVMs) do; for example, factor analysis defines linear mappings and Gaussian densities, while the
generative topographic mapping (GTM; [12]) defines nonlinear mappings and a Gaussian-mixture
density in ambient space. However, factor analysis is too limited to be of practical use, and GTM, while flexible, has two important practical problems: (1) the latent space must be discretised to allow tractable learning and inference, which limits it to very low (2-3) latent dimensions; (2) the
parameter estimation is prone to bad local optima that result in highly distorted mappings.
Another recently introduced dimensionality reduction method, GPLVM [13], which uses a Gaussian process mapping $f(x)$, partly improves this situation by defining a tunable parameter $x_n$ for each data point $y_n$. While still prone to local optima, this allows the use of a better initialisation for $\{x_n\}_{n=1}^N$ (obtained from a spectral method, see later). This has prompted the application of
GPLVM for tracking human motion [7]. However, GPLVM has some disadvantages: its training is
very costly (each step of the gradient iteration is cubic in the number of training points $N$, though approximations based on using few points exist); unlike true LVMs, it defines neither a posterior distribution $p(x|y)$ in latent space nor a dimensionality reduction mapping $E\{x|y\}$; and the latent
representation it obtains is not ideal. For example, for periodic motions such as running or walking,
repeated periods (identical up to small noise) can be mapped apart from each other in latent space
because nothing constrains $x_n$ and $x_m$ to be close even when $y_n = y_m$ (see fig. 3 and [10]).
There exists a different type of dimensionality reduction methods, spectral methods (such as Isomap,
LLE or Laplacian eigenmaps [14]), that have advantages and disadvantages complementary to those
of LVMs. They define neither mappings nor densities but just a correspondence (xn , yn ) between
points in latent space xn and ambient space yn . However, the training is efficient (a sparse eigenvalue problem) and has no local optima, and often yields a correspondence that successfully models
highly nonlinear, convoluted manifolds such as the Swiss roll. While these attractive properties have
spurred recent research in spectral methods, their lack of mappings and densities has limited their
applicability in people tracking. However, a new model that combines the advantages of LVMs and
spectral methods in a principled way has been recently proposed [1], which we briefly describe next.
2 The Laplacian Eigenmaps Latent Variable Model (LELVM)
LELVM is based on a natural way of defining an out-of-sample mapping for Laplacian eigenmaps
(LE) which, in addition, results in a density model. In LE, typically we first define a $k$-nearest-neighbour graph on the sample data $\{y_n\}_{n=1}^N$ and weigh each edge $y_n \sim y_m$ by a Gaussian affinity function $K(y_n, y_m) = w_{nm} = \exp\left(-\tfrac{1}{2}\left\|(y_n - y_m)/\sigma\right\|^2\right)$. Then the latent points $\mathbf{X}$ result from:
$$\min_{\mathbf{X}} \operatorname{tr}\left(\mathbf{X}\mathbf{L}\mathbf{X}^\top\right) \quad \text{s.t.} \quad \mathbf{X} \in \mathbb{R}^{L \times N},\ \mathbf{X}\mathbf{D}\mathbf{X}^\top = \mathbf{I},\ \mathbf{X}\mathbf{D}\mathbf{1} = \mathbf{0} \qquad (1)$$
where we define the matrix $\mathbf{X}_{L \times N} = (x_1, \ldots, x_N)$, the symmetric affinity matrix $\mathbf{W}_{N \times N}$, the degree matrix $\mathbf{D} = \operatorname{diag}\left(\sum_{n=1}^N w_{nm}\right)$, the graph Laplacian matrix $\mathbf{L} = \mathbf{D} - \mathbf{W}$, and $\mathbf{1} = (1, \ldots, 1)^\top$. The constraints eliminate the two trivial solutions $\mathbf{X} = \mathbf{0}$ (by fixing an arbitrary scale) and $x_1 = \cdots = x_N$ (by removing $\mathbf{1}$, which is an eigenvector of $\mathbf{L}$ associated with a zero eigenvalue). The solution is given by the leading $\mathbf{u}_2, \ldots, \mathbf{u}_{L+1}$ eigenvectors of the normalised affinity matrix $\mathbf{N} = \mathbf{D}^{-1/2}\mathbf{W}\mathbf{D}^{-1/2}$, namely $\mathbf{X} = \mathbf{V}^\top\mathbf{D}^{-1/2}$ with $\mathbf{V}_{N \times L} = (\mathbf{v}_2, \ldots, \mathbf{v}_{L+1})$ (an a posteriori translated, rotated or uniformly scaled $\mathbf{X}$ is equally valid).
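As a concrete illustration of this construction, the following minimal sketch (ours, not code from the paper; the neighbourhood and eigenvector bookkeeping follow Eq. (1) and its solution above) computes an LE embedding with NumPy/SciPy. A sparse eigensolver would be used for large motion-capture datasets; the dense version keeps the sketch short.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def laplacian_eigenmaps(Y, L_dim=2, k=10, sigma=1.0):
    """LE embedding of data Y (N x D) into L_dim dimensions, per Eq. (1)."""
    N = Y.shape[0]
    d2 = cdist(Y, Y, 'sqeuclidean')
    W = np.exp(-0.5 * d2 / sigma**2)              # Gaussian affinities w_nm
    far = np.argsort(d2, axis=1)[:, k + 1:]       # beyond self + k neighbours
    for n in range(N):
        W[n, far[n]] = 0.0
    W = np.maximum(W, W.T)                        # symmetrise the k-NN graph
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)                             # node degrees
    Dm12 = 1.0 / np.sqrt(d)
    Nmat = (Dm12[:, None] * W) * Dm12[None, :]    # N = D^{-1/2} W D^{-1/2}
    evals, evecs = eigh(Nmat)                     # ascending eigenvalues
    V = evecs[:, -2:-(L_dim + 2):-1]              # v_2, ..., v_{L+1}
    return (V * Dm12[:, None]).T                  # X = V^T D^{-1/2}, L x N
```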
Following [1], we now define an out-of-sample mapping $F(y) = x$ for a new point $y$ as a semi-supervised learning problem, by recomputing the embedding as in (1) (i.e., augmenting the graph Laplacian with the new point), but keeping the old embedding fixed:
$$\min_{x \in \mathbb{R}^L} \operatorname{tr}\left( \begin{pmatrix} \mathbf{X} & x \end{pmatrix} \begin{pmatrix} \mathbf{L} & -\mathbf{K}(y) \\ -\mathbf{K}(y)^\top & \mathbf{1}^\top\mathbf{K}(y) \end{pmatrix} \begin{pmatrix} \mathbf{X}^\top \\ x^\top \end{pmatrix} \right) \qquad (2)$$
where $K_n(y) = K(y, y_n) = \exp\left(-\tfrac{1}{2}\left\|(y - y_n)/\sigma\right\|^2\right)$ for $n = 1, \ldots, N$ is the kernel induced by the Gaussian affinity (applied only to the $k$ nearest neighbours of $y$, i.e., $K_n(y) = 0$ if $y \nsim y_n$). This is one natural way of adding a new point to the embedding by keeping existing embedded points fixed. We need not use the constraints from (1) because they would trivially determine $x$, and the uninteresting solutions $\mathbf{X} = \mathbf{0}$ and $\mathbf{X} = \text{constant}$ were already removed in the old embedding anyway. The solution yields an out-of-sample dimensionality reduction mapping $x = F(y)$:
$$x = F(y) = \frac{\mathbf{X}\mathbf{K}(y)}{\mathbf{1}^\top\mathbf{K}(y)} = \sum_{n=1}^N \frac{K(y, y_n)}{\sum_{n'=1}^N K(y, y_{n'})}\, x_n \qquad (3)$$
applicable to any point $y$ (new or old). This mapping is formally identical to a Nadaraya-Watson estimator (kernel regression; [15]) using as data $\{(x_n, y_n)\}_{n=1}^N$ and the kernel $K$. We can take this
a step further by defining an LVM that has as joint distribution a kernel density estimate (KDE):
$$p(x, y) = \frac{1}{N}\sum_{n=1}^N K_y(y, y_n)K_x(x, x_n) \qquad p(y) = \frac{1}{N}\sum_{n=1}^N K_y(y, y_n) \qquad p(x) = \frac{1}{N}\sum_{n=1}^N K_x(x, x_n)$$
where $K_y$ is proportional to $K$ so it integrates to 1, and $K_x$ is a pdf kernel in $x$-space. Consequently, the marginals in observed and latent space are also KDEs, and the dimensionality reduction and reconstruction mappings are given by kernel regression (the conditional means $E\{y|x\}$, $E\{x|y\}$):
$$F(y) = \sum_{n=1}^N p(n|y)\, x_n \qquad f(x) = \sum_{n=1}^N \frac{K_x(x, x_n)}{\sum_{n'=1}^N K_x(x, x_{n'})}\, y_n = \sum_{n=1}^N p(n|x)\, y_n. \qquad (4)$$
We allow the bandwidths to be different in the latent and ambient spaces: $K_x(x, x_n) \propto \exp\left(-\tfrac{1}{2}\left\|(x - x_n)/\sigma_x\right\|^2\right)$ and $K_y(y, y_n) \propto \exp\left(-\tfrac{1}{2}\left\|(y - y_n)/\sigma_y\right\|^2\right)$. They may be tuned to control the smoothness of the mappings and densities [1].
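These mappings and densities are simple enough to implement directly. A minimal sketch (ours, not the authors' code) evaluating $F$, $f$ and the latent KDE $p(x)$ from a stored embedding $\{(x_n, y_n)\}_{n=1}^N$:

```python
import numpy as np

class LELVMMappings:
    """Kernel-regression mappings and latent KDE of Eqs. (3)-(4)."""
    def __init__(self, X, Y, sigma_x, sigma_y):
        self.X, self.Y = X, Y                  # latent (N x L), ambient (N x D)
        self.sx, self.sy = sigma_x, sigma_y

    def _resp(self, q, P, s):
        """Normalised Gaussian kernel weights p(n|q) over stored points P."""
        d2 = ((q[None, :] - P) ** 2).sum(axis=1)
        w = np.exp(-0.5 * d2 / s**2)
        return w / w.sum()

    def F(self, y):
        """Dimensionality reduction E{x|y}, Eq. (3)."""
        return self._resp(y, self.Y, self.sy) @ self.X

    def f(self, x):
        """Reconstruction E{y|x}, Eq. (4)."""
        return self._resp(x, self.X, self.sx) @ self.Y

    def log_p_x(self, x):
        """log p(x) of the latent Gaussian KDE, up to an additive constant."""
        d2 = ((x[None, :] - self.X) ** 2).sum(axis=1)
        return np.logaddexp.reduce(-0.5 * d2 / self.sx**2) - np.log(len(self.X))
```

Note that posteriors such as $p(x|y)$ come out of the same kernel computations, which is part of what makes the model convenient inside a tracker.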
Thus, LELVM naturally extends an LE embedding (efficiently obtained as a sparse eigenvalue problem with a cost $O(N^2)$) to global, continuous, differentiable mappings (NW estimators) and potentially multimodal densities having the form of a Gaussian KDE. This allows easy computation of posterior probabilities such as $p(x|y)$ (unlike GPLVM). It can use a continuous latent space of arbitrary dimension $L$ (unlike GTM) by simply choosing $L$ eigenvectors in the LE embedding. It has no local optima since it is based on the LE embedding. LELVM can learn convoluted mappings (e.g. the Swiss roll) and define maps and densities for them [1]. The only parameters to set are the graph parameters (number of neighbours $k$, affinity width $\sigma$) and the smoothing bandwidths $\sigma_x$, $\sigma_y$.
3 Tracking framework
We follow the sequential Bayesian estimation framework, where for state variables s and observation
variables z we have the recursive prediction and correction equations:
$$p(s_t|z_{0:t-1}) = \int p(s_t|s_{t-1})\, p(s_{t-1}|z_{0:t-1})\, ds_{t-1} \qquad p(s_t|z_{0:t}) \propto p(z_t|s_t)\, p(s_t|z_{0:t-1}). \qquad (5)$$
We define the state variables as $s = (x, d)$ where $x \in \mathbb{R}^L$ is the low-dimensional latent pose variable and $d \in \mathbb{R}^3$ is the centre-of-mass location of the body (in the experiments our state also includes the orientation of the body, but for simplicity here we describe only the translation). The observed variables $z$ consist of image features or the perspective projection of the markers on the camera plane. The mapping from state to observations is (for the markers' case, assuming $M$ markers):
$$x \in \mathbb{R}^L \xrightarrow{\;f\;} y \in \mathbb{R}^{3M} \xrightarrow{\;\tau_d\;} \cdot \xrightarrow{\;P\;} z \in \mathbb{R}^{2M}, \qquad d \in \mathbb{R}^3 \qquad (6)$$
where $f$ is the LELVM reconstruction mapping (learnt from mocap data); $\tau$ shifts each 3D marker
by d; and P is the perspective projection (pinhole camera), applied to each 3D point separately. Here
we use a simple observation model $p(z_t|s_t)$: Gaussian with mean given by the transformation (6)
and isotropic covariance (set by the user to control the influence of measurements in the tracking).
We assume known correspondences and observations that are obtained either from the 3D markers
(for tracking synthetic data) or 2D tracks obtained from a 2D tracker. Our dynamics model is
$$p(s_t|s_{t-1}) \propto p_d(d_t|d_{t-1})\, p_x(x_t|x_{t-1})\, p(x_t) \qquad (7)$$
where both dynamics models for $d$ and $x$ are random walks: Gaussians centred at the previous step values $d_{t-1}$ and $x_{t-1}$, respectively, with isotropic covariance (set by the user to control the influence of dynamics in the tracking); and $p(x_t)$ is the LELVM prior. Thus the overall dynamics
predicts states that are both near the previous state and yield feasible poses. Of course, more complex
dynamics models could be used if e.g. the speed and direction of movement are known.
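To make Eqs. (6)-(7) concrete, here is a small sketch (ours; `LELVMMappings` is the illustrative class above, and a simple pinhole camera looking along the z-axis with hypothetical focal length `f_cam` is assumed) of the observation mean and the unnormalised log dynamics used to weight a candidate state:

```python
import numpy as np

def observe_mean(x, d, lelvm, f_cam=1.0):
    """Mean of p(z_t|s_t): markers reconstructed by f, shifted by d, projected."""
    y = lelvm.f(x).reshape(-1, 3) + d            # tau_d applied to M 3D markers
    return f_cam * y[:, :2] / y[:, 2:3]          # pinhole projection P, M x 2

def log_dynamics(x, d, x_prev, d_prev, lelvm, sig_x=0.1, sig_d=0.05):
    """Unnormalised log p(s_t|s_{t-1}), Eq. (7): random walks times LELVM prior."""
    lp = -0.5 * np.sum((d - d_prev)**2) / sig_d**2    # p_d(d_t|d_{t-1})
    lp -= 0.5 * np.sum((x - x_prev)**2) / sig_x**2    # p_x(x_t|x_{t-1})
    return lp + lelvm.log_p_x(x)                      # LELVM prior p(x_t)
```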
As tracker we use the Gaussian mixture Sigma-point particle filter (GMSPPF) [16]. This is a particle filter that uses a Gaussian mixture representation for the posterior distribution in state space
and updates it with a Sigma-point Kalman filter. This Gaussian mixture will be used as proposal
distribution to draw the particles. As in other particle filter implementations, the prediction step
is carried out by approximating the integral (5) with particles and updating the particles? weights.
Then, a new Gaussian mixture is fitted with a weighted EM algorithm to these particles. This replaces the resampling stage needed by many particle filters and mitigates the problem of sample
depletion while also preventing the number of components in the Gaussian mixture from growing
over time. The choice of this particular tracker is not critical; we use it to illustrate the fact that
LELVM can be introduced in any probabilistic tracker for nonlinear, non-Gaussian models. Given the
corrected distribution p(st |z0:t ), we choose its mean as recovered state (pose and location). It is also
possible to choose instead the mode closest to the state at $t-1$, which could be found by mean-shift
or Newton algorithms [17] since we are using a Gaussian-mixture representation in state space.
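Since the particular tracker is not critical, the behaviour can also be reproduced with a plain bootstrap particle filter in place of GMSPPF. A minimal sketch (ours, reusing `observe_mean` from the block above; `sig_z` is an assumed observation noise level, and the 2D tracks `z` are an M x 2 array):

```python
import numpy as np

def pf_step(Xp, Dp, z, lelvm, sig_x=0.1, sig_d=0.05, sig_z=2.0):
    """One prediction/correction step of Eq. (5) for P particles (Xp: P x L)."""
    P = Xp.shape[0]
    # Predict: sample from the random-walk part of the dynamics...
    Xn = Xp + sig_x * np.random.randn(*Xp.shape)
    Dn = Dp + sig_d * np.random.randn(*Dp.shape)
    # ...and correct: weight by the LELVM prior and the observation likelihood.
    logw = np.empty(P)
    for i in range(P):
        r = z - observe_mean(Xn[i], Dn[i], lelvm)
        logw[i] = lelvm.log_p_x(Xn[i]) - 0.5 * np.sum(r**2) / sig_z**2
    w = np.exp(logw - logw.max()); w /= w.sum()
    idx = np.random.choice(P, size=P, p=w)       # multinomial resampling
    return Xn[idx], Dn[idx], (w @ Xn, w @ Dn)    # particles + weighted mean state
```

GMSPPF replaces the resampling line by refitting a Gaussian mixture to the weighted particles, but the prediction/correction structure is the same.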
4 Experiments
We demonstrate our low-dimensional tracker on image sequences of people walking and running,
both synthetic (fig. 1) and real (figs. 2-3). Fig. 1 shows the model copes well with persistent partial
occlusion and severely subsampled training data (A, B), and quantitatively evaluates temporal reconstruction (C). For all our experiments, the LELVM parameters (number of neighbors $k$, Gaussian affinity $\sigma$, and bandwidths $\sigma_x$ and $\sigma_y$) were set manually. We mainly considered 2D latent spaces
(for pose, plus 6D for rigid motion), which were expressive enough for our experiments. More
complex, higher-dimensional models are straightforward to construct. The initial state distribution
p(s0 ) was chosen a broad Gaussian, the dynamics and observation covariance were set manually to
control the tracking smoothness, and the GMSPPF tracker used a 5-component Gaussian mixture
in latent space (and in the state space of rigid motion) and a small set of 500 particles. The 3D
representation we use is a 102-D vector obtained by concatenating the 3D marker coordinates of all
the body joints. These would be highly unconstrained if estimated independently, but we only use
them as intermediate representation; tracking actually occurs in the latent space, tightly controlled
using the LELVM prior. For the synthetic experiments and some of the real experiments (figs. 2-3)
the camera parameters and the body proportions were known (for the latter, we used the 2D outputs
of [6]). For the CMU mocap video (fig. 2B) we roughly guessed them. We used mocap data from several
sources (CMU, OSU). As observations we always use 2D marker positions, which, depending on
the analyzed sequence were either known (the synthetic case), or provided by an existing tracker
[6] or specified manually (fig. 2B). Alternatively 2D point trackers similar to the ones of [7] can be
used. The forward generative model is obtained by combining the latent to ambient space mapping
(this provides the position of the 3D markers) with a perspective projection transformation. The
observation model is a product of Gaussians, each measuring the probability of a particular marker
position given its corresponding image point track.
Experiments with synthetic data: we analyze the performance of our tracker in controlled conditions (noise-perturbed, synthetically generated image tracks) both under regular circumstances (reasonable sampling of training data) and more severe conditions with subsampled training points and
persistent partial occlusion (the man running behind a fence, with many of the 2D marker tracks
obstructed). Fig. 1B,C shows both the posterior (filtered) latent space distribution obtained from
our tracker, and its mean (we do not show the distribution of the global rigid body motion; in all
experiments this is tracked with good accuracy). In the latent space plot shown in fig. 1B, the onset
of running (two cycles were used) appears as a separate region external to the main loop. It does not
appear in the subsampled training set in fig. 1B, where only one running cycle was used for training
and the onset of running was removed. In each case, one can see that the model is able to track quite
competently, with a modest decrease in its temporal accuracy, shown in fig. 1C, where the averages
are computed per 3D joint (normalised wrt body height). Subsampling causes some ambiguity in
the estimate, e.g. see the bimodality in the right plot in fig. 1C. In another set of experiments (not
shown) we also tracked using different subsets of 3D markers. The estimates were accurate even
when about 30% of the markers were dropped.
Experiments with real images: this shows our tracker's ability to work with real motions of different people, with different body proportions, not in its latent variable model training set (figs. 2-3).
We study walking, running and turns. In all cases, tracking and 3D reconstruction are reasonably accurate. We have also run comparisons against low-dimensional models based on PCA and GPLVM
(fig. 3). It is important to note that, for LELVM, errors in the pose estimates are primarily caused
by mismatches between the mocap data used to learn the LELVM prior and the body proportions of
the person in the video. For example, the body proportions of the OSU motion captured walker are
quite different from those of the image in figs. 2-3 (e.g. note how the legs of the stick man are shorter
relative to the trunk). Likewise, the style of the runner from the OSU data (e.g. the swinging of the
arms) is quite different from that of the video. Finally, the interest points tracked by the 2D tracker
do not entirely correspond either in number or location to the motion capture markers, and are noisy
and sometimes missing. In future work, we plan to include an optimization step to also estimate the
body proportions. This would be complicated for a general, unconstrained model because the dimensions of the body couple with the pose, so either one or the other can be changed to improve the
tracking error (the observation likelihood can also become singular). But for dedicated prior pose
models like ours these difficulties should be significantly reduced. The model simply cannot assume
highly unlikely stances (these are either not representable at all, or have reduced probability) and
thus avoids compensatory, unrealistic body proportion estimates.
[Figure 1: image panels A (frames n = 15 to 140), B (frames n = 1 to 60) and C (RMSE vs. time step n); numeric axis data omitted. See caption below.]
Figure 1: OSU running man motion capture data. A: we use 217 datapoints for training LELVM (with added noise) and for tracking. Row 1: tracking in the 2D latent space. The contours (very tight in this sequence) are the posterior probability. Row 2: perspective-projection-based observations with occlusions. Row 3: each quadruplet (a, a', b, b') shows the true pose of the running man from a front and side view (a, b), and the reconstructed pose by tracking with our model (a', b'). B: we use the first running cycle for training LELVM and the second cycle for tracking. C: RMSE errors for each frame, for the tracking of A (left plot) and B (middle plot), normalised so that 1 equals the height of the stick man. $\mathrm{RMSE}(n) = \left(\frac{1}{M}\sum_{j=1}^M \|y_{nj} - \hat{y}_{nj}\|^2\right)^{1/2}$ for all 3D locations of the $M$ markers, i.e., comparison of reconstructed stick man $\hat{y}_n$ with ground-truth stick man $y_n$. Right plot: multimodal posterior distribution in pose space for the model of A (frame 42).
Comparison with PCA and GPLVM (fig. 3): for these models, the tracker uses the same GMSPPF
setting as for LELVM (number of particles, initialisation, random-walk dynamics, etc.) but with the
mapping y = f (x) provided by GPLVM or PCA, and with a uniform prior p(x) in latent space
(since neither GPLVM nor the non-probabilistic PCA provide one). The LELVM-tracker uses both
its f (x) and latent space prior p(x), as discussed. All methods use a 2D latent space. We ensured
the best possible training of GPLVM by model selection based on multiple runs. For PCA, the
latent space looks deceptively good, showing non-intersecting loops. However, (1) individual loops
do not collect together as they should (for LELVM they do); (2) worse still, the mapping from 2D
to pose space yields a poor observation model. The reason is that the loop in 102-D pose space
is nonlinearly bent and a plane can at best intersect it at a few points, so the tracker often stays
put at one of those (typically an ?average? standing position), since leaving it would increase the
error a lot. Using more latent dimensions would improve this, but as LELVM shows, this is not
necessary. For GPLVM, we found high sensitivity to filter initialisation: the estimates have high
variance across runs and are inaccurate about 80% of the time. When it fails, the GPLVM tracker often
freezes in latent space, like PCA. When it does succeed, it produces results that are comparable
with LELVM, although somewhat less accurate visually. However, even then GPLVM?s latent space
consists of continuous chunks spread apart and offset from each other; GPLVM has no incentive to
place nearby two xs mapping to the same y. This effect, combined with the lack of a data-sensitive,
realistic latent space density p(x), makes GPLVM jump erratically from chunk to chunk, in contrast
with LELVM, which smoothly follows the 1D loop. Some GPLVM problems might be alleviated using higher-order dynamics, but our experiments suggest that such modeling sophistication is less crucial if locality constraints are correctly modeled (as in LELVM).
[Figure 2: image panels A (frames n = 1 to 69) and B (frames n = 4 to 29), each with latent-space, image-overlay and 3D-reconstruction rows; numeric axis data omitted. See caption below.]
Figure 2: A: tracking of a video from [6] (turning & walking). We use 220 datapoints (3 full walking
cycles) for training LELVM. Row 1: tracking in the 2D latent space. The contours are the estimated
posterior probability. Row 2: tracking based on markers. The red dots are the 2D tracks and the
green stick man is the 3D reconstruction obtained using our model. Row 3: our 3D reconstruction
from a different viewpoint. B: tracking of a person running straight towards the camera. Notice the
scale changes and possible forward-backward ambiguities in the 3D estimates. We train the LELVM
using 180 datapoints (2.5 running cycles); 2D tracks were obtained by manually marking the video.
In both A-B the mocap training data was for a person different from the video's (with different body
proportions and motions), and no ground-truth estimate was available for favourable initialisation.
[Figure 3: latent-space tracking plots at frame 38 for LELVM, GPLVM and PCA; numeric axis data omitted. See caption below.]
Figure 3: Method comparison, frame 38. PCA and GPLVM map consecutive walking cycles to spatially distinct latent space regions. Compounded by a data-independent latent prior, the resulting tracker gets easily confused: it jumps across loops and/or remains put, trapped in local optima. In contrast, LELVM is stable and follows tightly a 1D manifold (see videos).
We conclude that, compared to
LELVM, GPLVM is significantly less robust for tracking, has much higher training overhead and
lacks some operations (e.g. computing latent conditionals based on partly missing ambient data).
5 Conclusion and future work
We have proposed the use of priors based on the Laplacian Eigenmaps Latent Variable Model
(LELVM) for people tracking. LELVM is a probabilistic dimensionality reduction method that combines the advantages of latent variable models and spectral manifold learning algorithms: a multimodal probability
density over latent and ambient variables, globally differentiable nonlinear mappings for reconstruction and dimensionality reduction, no local optima, ability to unfold highly nonlinear manifolds, and
good practical scaling to latent spaces of high dimension. LELVM is computationally efficient, simple to learn from sparse training data, and compatible with standard probabilistic trackers such as
particle filters. Our results using a LELVM-based probabilistic sigma point mixture tracker with several real and synthetic human motion sequences show that LELVM provides sufficient constraints
for robust operation in the presence of missing, noisy and ambiguous image measurements. Comparisons with PCA and GPLVM show LELVM is superior in terms of accuracy, robustness and
computation time. The objective of this paper was to demonstrate the ability of the LELVM prior
in a simple setting using 2D tracks obtained automatically or manually, and single-type motions
(running, walking). Future work will explore more complex observation models such as silhouettes;
the combination of different motion types in the same latent space (whose dimension will exceed 2);
and the exploration of multimodal posterior distributions in latent space caused by ambiguities.
Acknowledgments
This work was partially supported by NSF CAREER award IIS-0546857 (MACP), NSF IIS-0535140 and EC MCEXT-025481 (CS). CMU data: http://mocap.cs.cmu.edu (created with funding from NSF EIA-0196217). OSU data: http://accad.osu.edu/research/mocap/mocap_data.htm.
References
[1] M. Á. Carreira-Perpiñán and Z. Lu. The Laplacian Eigenmaps Latent Variable Model. In AISTATS, 2007.
[2] N. R. Howe, M. E. Leventon, and W. T. Freeman. Bayesian reconstruction of 3D human motion from single-camera video. In NIPS, volume 12, pages 820-826, 2000.
[3] T.-J. Cham and J. M. Rehg. A multiple hypothesis approach to figure tracking. In CVPR, 1999.
[4] M. Brand. Shadow puppetry. In ICCV, pages 1237-1244, 1999.
[5] H. Sidenbladh, M. J. Black, and L. Sigal. Implicit probabilistic models of human motion for synthesis and tracking. In ECCV, volume 1, pages 784-800, 2002.
[6] C. Sminchisescu and A. Jepson. Generative modeling for continuous non-linearly embedded visual inference. In ICML, pages 759-766, 2004.
[7] R. Urtasun, D. J. Fleet, A. Hertzmann, and P. Fua. Priors for people tracking from small training sets. In ICCV, pages 403-410, 2005.
[8] R. Li, M.-H. Yang, S. Sclaroff, and T.-P. Tian. Monocular tracking of 3D human motion with a coordinated mixture of factor analyzers. In ECCV, volume 2, pages 137-150, 2006.
[9] J. M. Wang, D. Fleet, and A. Hertzmann. Gaussian process dynamical models. In NIPS, volume 18, 2006.
[10] R. Urtasun, D. J. Fleet, and P. Fua. Gaussian process dynamical models for 3D people tracking. In CVPR, pages 238-245, 2006.
[11] G. W. Taylor, G. E. Hinton, and S. Roweis. Modeling human motion using binary latent variables. In NIPS, volume 19, 2007.
[12] C. M. Bishop, M. Svensén, and C. K. I. Williams. GTM: The generative topographic mapping. Neural Computation, 10(1):215-234, January 1998.
[13] N. Lawrence. Probabilistic non-linear principal component analysis with Gaussian process latent variable models. Journal of Machine Learning Research, 6:1783-1816, November 2005.
[14] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373-1396, June 2003.
[15] B. W. Silverman. Density Estimation for Statistics and Data Analysis. Chapman & Hall, 1986.
[16] R. van der Merwe and E. A. Wan. Gaussian mixture sigma-point particle filters for sequential probabilistic inference in dynamic state-space models. In ICASSP, volume 6, pages 701-704, 2003.
[17] M. Á. Carreira-Perpiñán. Acceleration strategies for Gaussian mean-shift image segmentation. In CVPR, pages 1160-1167, 2006.
Cluster Stability for Finite Samples
Ohad Shamir† and Naftali Tishby†‡
† School of Computer Science and Engineering
‡ Interdisciplinary Center for Neural Computation
The Hebrew University
Jerusalem 91904, Israel
{ohadsh,tishby}@cs.huji.ac.il
Abstract
Over the past few years, the notion of stability in data clustering has received
growing attention as a cluster validation criterion in a sample-based framework.
However, recent work has shown that as the sample size increases, any clustering
model will usually become asymptotically stable. This led to the conclusion that
stability is lacking as a theoretical and practical tool. The discrepancy between
this conclusion and the success of stability in practice has remained an open question, which we attempt to address. Our theoretical approach is that stability, as
used by cluster validation algorithms, is similar in certain respects to measures
of generalization in a model-selection framework. In such cases, the model chosen governs the convergence rate of generalization bounds. By arguing that these
rates are more important than the sample size, we are led to the prediction that
stability-based cluster validation algorithms should not degrade with increasing
sample size, despite the asymptotic universal stability. This prediction is substantiated by a theoretical analysis as well as some empirical results. We conclude that
stability remains a meaningful cluster validation criterion over finite samples.
1 Introduction
Clustering is one of the most common tools of unsupervised data analysis. Despite its widespread
use and an immense amount of literature, distressingly little is known about its theoretical foundations [14]. In this paper, we focus on sample based clustering, where it is assumed that the data to
be clustered are actually a sample from some underlying distribution.
A major problem in such a setting is assessing cluster validity. In other words, we might wish to
know whether the clustering we have found actually corresponds to a meaningful clustering of the
underlying distribution, and is not just an artifact of the sampling process. This problem relates to the
issue of model selection, such as determining the number of clusters in the data or tuning parameters
of the clustering algorithm. In the past few years, cluster stability has received growing attention
as a criterion for addressing this problem. Informally, this criterion states that if the clustering
algorithm is repeatedly applied over independent samples, resulting in "similar" clusterings, then
these clusterings are statistically significant. Based on this idea, several cluster validity methods
have been proposed (see [9] and references therein), and were shown to be relatively successful for
various data sets in practice.
However, in recent work, it was proven that under mild conditions, stability is asymptotically fully
determined by the behavior of the objective function which the clustering algorithm attempts to
optimize. In particular, the existence of a unique optimal solution for some model choice implies
stability as the sample size increases to infinity. This will happen regardless of the model fit to the data.
From this, it was concluded that stability is not a well-suited tool for model selection in clustering.
This left open, however, the question of why stability is observed to be useful in practice.
In this paper, we attempt to explain why stability measures should have much wider relevance than
what might be concluded from these results. Our underlying approach is to view stability as a
measure of generalization, in a learning-theoretic sense. When we have a "good" model, which is
stable over independent samples, then inferring its fit to the underlying distribution should be easy.
In other words, stability should "work" because stable models generalize better, and models which
generalize better should fit the underlying distribution better. We emphasize that this idea in itself is
not novel, appearing explicitly and under various guises in many aspects of machine learning. The
novelty in this paper lies mainly in the predictions that are drawn from it for clustering stability.
The viewpoint above places emphasis on the nature of stability for finite samples. Since generalization is meaningless when the sample is infinite, it should come as no surprise that stability displays
similar behavior. On finite samples, the generalization uncertainty is virtually always strictly positive, with different model choices leading to different convergence rates towards zero for increasing
sample size. Based on the link between stability and generalization, we predict that on realistic
data, all risk-minimizing models asymptotically become stable, but the rates of convergence to this
ultimate stability differ. In other words, an appropriate scaling of the stability measures will make
them independent of the actual sample size used. Using this intuition, we characterize and prove a
mild set of conditions, applicable in principle to a wide class of clustering settings, which ensure
the relevance of cluster stability for arbitrarily large sample sizes. We then prove that the stability
measure used in previous work to show negative asymptotic results on stability, actually allows us to
discern the "correct" model, regardless of how large the sample is, for a certain simple setting. Our
results are further validated by some experiments on synthetic and real world data.
2 Definitions and notation
We assume that the data sample to be clustered, $S = \{x_1, \ldots, x_m\}$, is produced by sampling instances i.i.d from an underlying distribution $\mathcal{D}$, supported on a subset $\mathcal{X}$ of $\mathbb{R}^n$. A clustering $C_D$ for some $D \subseteq \mathcal{X}$ is a function from $D \times D$ to $\{0, 1\}$, defining an equivalence relation on $D$ with a finite number of equivalence classes (namely, $C_D(x_i, x_j) = 1$ if $x_i$ and $x_j$ belong to the same cluster, and 0 otherwise). For a clustering $C_\mathcal{X}$ of the instance space, and a finite sample $S$, let $C_\mathcal{X}|_S$ denote the functional restriction of $C_\mathcal{X}$ on $S \times S$.
A clustering algorithm $A$ is a function from any finite sample $S \subseteq \mathcal{X}$ to some clustering $C_\mathcal{X}$ of the instance space¹. We assume the algorithm is driven by optimizing an objective function, and has some user-defined parameters $\theta$. In particular, $A_k$ denotes the algorithm $A$ with the number of clusters chosen to be $k$.
Following [2], we define the stability of a clustering algorithm $A$ on finite samples of size $m$ as:
$$\mathrm{stab}(A, \mathcal{D}, m) = \mathbb{E}_{S_1, S_2}\, d_\mathcal{D}(A(S_1), A(S_2)), \qquad (1)$$
where $S_1$ and $S_2$ are samples of size $m$, drawn i.i.d from $\mathcal{D}$, and $d_\mathcal{D}$ is some "dissimilarity" function between clusterings of $\mathcal{X}$, to be specified later.
Let $\ell$ denote a loss function from any clustering $C_S$ of a finite set $S \subseteq \mathcal{X}$ to $[0, 1]$. $\ell$ may or may not correspond to the objective function the clustering algorithm attempts to optimize, and may involve a global quality measure rather than some average over individual instances. For a fixed sample size, we say that $\ell$ obeys the bounded differences property (see [11]) if for any clustering $C_S$ it holds that $|\ell(C_S) - \ell(C_{S'})| \leq a$, where $a$ is a constant, and $C_{S'}$ is obtained from $C_S$ by replacing at most one instance of $S$ by any other instance from $\mathcal{X}$, and clustering it arbitrarily.
A hypothesis class $\mathcal{H}$ is defined as some set of clusterings of $\mathcal{X}$. The empirical risk of a clustering $C_\mathcal{X} \in \mathcal{H}$ on a sample $S$ of size $m$ is $\ell(C_\mathcal{X}|_S)$. The expected risk of $C_\mathcal{X}$, with respect to samples $S$ of size $m$, is defined as $\mathbb{E}_S\, \ell(C_\mathcal{X}|_S)$. The problem of generalization is how to estimate the expected risk based on the empirical data.
¹Many clustering algorithms, such as spectral clustering, do not induce a natural clustering on $\mathcal{X}$ based on a clustering of a sample. In that case, we view the algorithm as a two-stage process, in which the clustering of the sample is extended to $\mathcal{X}$ through some uniform extension operator (such as assigning instances to the "nearest" cluster in some appropriate sense).
3 A Bayesian framework for relating stability and generalization
The relationship between generalization and various notions of stability has long been known, but has been dealt with mostly in a supervised learning setting (see [3][5][8] and references therein). In the context of unsupervised data clustering, several papers have explored the relevance of statistical stability and generalization, separately and together (such as [1][4][14][12]). However, there are not many theoretical results quantitatively characterizing the relationship between the two in this setting. The aim of this section is to informally motivate our approach of viewing stability and generalization in clustering as closely related.
Relating the two is very natural in a Bayesian setting, where clustering stability implies an "unsurprising" posterior given a prior, which is based on clustering another sample. Under this paradigm, we might consider "soft clustering" algorithms which return a distribution over a measurable hypothesis class $\mathcal{H}$, rather than a specific clustering. This distribution typically reflects the likelihood of a clustering hypothesis, given the data and prior assumptions. Extending our notation, we have that for any sample $S$, $A(S)$ is now a distribution over $\mathcal{H}$. The empirical risk of such a distribution, with respect to a sample $S'$, is defined as $\ell(A(S)|_{S'}) = \mathbb{E}_{C_\mathcal{X} \sim A(S)}\, \ell(C_\mathcal{X}|_{S'})$.
In this setting, consider for example the following simple procedure to derive a clustering hypothesis distribution, as well as a generalization bound: given a sample of size $2m$ drawn i.i.d from $\mathcal{D}$, we randomly split it into two samples $S_1, S_2$, each of size $m$, and use $A$ to cluster each of them separately. Then we have the following:
Theorem 1. For the procedure defined above, assume $\ell$ obeys the bounded differences property with parameter $1/m$. Define the clustering distance $d_\mathcal{D}(\mathcal{P}, \mathcal{Q})$ in Eq. (1), between two distributions $\mathcal{P}, \mathcal{Q}$ over the hypothesis class $\mathcal{H}$, as the Kullback-Leibler divergence $D_{KL}[\mathcal{Q}\|\mathcal{P}]$². Then for a fixed confidence parameter $\delta \in (0, 1)$, it holds with probability at least $1 - \delta$ over the draw of samples $S_1$ and $S_2$ of size $m$, that
$$\mathbb{E}_S\, \ell(A(S_2)|_S) - \ell(A(S_2)|_{S_2}) \leq \sqrt{\frac{d_\mathcal{D}(A(S_1), A(S_2)) + \ln(m/\delta) + 2}{2m - 1}}.$$
The theorem is a straightforward variant of the PAC-Bayesian theorem [10]. Since the loss function is not necessarily an empirical average, we need to utilize McDiarmid's bound for random variables with bounded differences, instead of Hoeffding's bound. Other than that, the proof is identical, and is therefore omitted.
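For intuition, the bound itself is trivial to evaluate. A small sketch (ours; `d_hat` stands for an available estimate of the clustering distance):

```python
import math

def theorem1_bound(d_hat, m, delta=0.05):
    """R.h.s. of Theorem 1: sqrt((d + ln(m/delta) + 2) / (2m - 1))."""
    return math.sqrt((d_hat + math.log(m / delta) + 2.0) / (2 * m - 1))

# For a fixed clustering distance the bound decays like O(sqrt(log(m)/m)):
for m in (100, 1000, 10000):
    print(m, round(theorem1_bound(d_hat=0.5, m=m), 4))
```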
This theorem implies that the more stable the Bayesian algorithm is, the tighter the expected generalization bounds we can achieve. In fact, the "expected" magnitude of the high-probability bound we will get (over drawing $S_1$ and $S_2$ and performing the procedure described above) is:
$$\mathbb{E}_{S_1, S_2} \sqrt{\frac{d_\mathcal{D}(A(S_1), A(S_2)) + \ln(m/\delta) + 2}{2m - 1}} \;\leq\; \sqrt{\frac{\mathbb{E}_{S_1, S_2}\, d_\mathcal{D}(A(S_1), A(S_2)) + \ln(m/\delta) + 2}{2m - 1}} \;=\; \sqrt{\frac{\mathrm{stab}(A, \mathcal{D}, m) + \ln(m/\delta) + 2}{2m - 1}}.$$
Note that the only model-dependent quantity in the expression above is $\mathrm{stab}(A, \mathcal{D}, m)$. Therefore, carrying out model selection by attempting to minimize these types of generalization bounds is closely related to minimizing $\mathrm{stab}(A, \mathcal{D}, m)$. In general, the generalization bound might converge to 0 as $m \to \infty$, but this is immaterial for the purpose of model selection. The important factor is the relative values of the measure over different choices of the algorithm parameters $\theta$. In other words, the important quantity is the relative convergence rate of this bound for different choices of $\theta$, governed by $\mathrm{stab}(A, \mathcal{D}, m)$.
This informal discussion only exemplifies the relationship between generalization and stability, since the setting and the definition of $d_\mathcal{D}$ here differ from the ones we will focus on later in the paper. Although these ideas can be generalized, they go beyond the scope of this paper, and we leave this for future work.
²Where we define $D_{KL}[\mathcal{Q}\|\mathcal{P}] = \int \mathcal{Q}(X)\ln\left(\mathcal{Q}(X)/\mathcal{P}(X)\right)dX$, and $D_{KL}[q\|p]$ for $q, p \in [0, 1]$ is defined as the divergence of Bernoulli distributions with parameters $q$ and $p$.
4 Effective model selection for arbitrarily large sample sizes
From now on, following [2], we will define the clustering distance function $d_\mathcal{D}$ of Eq. (1) as:
$$d_\mathcal{D}(A(S_1), A(S_2)) = \Pr_{x_1, x_2 \sim \mathcal{D}}\left(A(S_1)(x_1, x_2) \neq A(S_2)(x_1, x_2)\right). \qquad (2)$$
In other words, the clustering distance is the probability that two independently drawn instances from $\mathcal{D}$ will be in the same cluster under one clustering, and in different clusters under another clustering.
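When one can sample from $\mathcal{D}$, this distance is straightforward to estimate by Monte Carlo. A minimal sketch (ours; `algorithm` and `draw_sample` are hypothetical callables, with `algorithm(sample)` returning a clustering function of instance pairs):

```python
import numpy as np

def estimate_dD(C1, C2, pairs):
    """Monte Carlo estimate of Eq. (2) from i.i.d. instance pairs (x1, x2)."""
    return np.mean([C1(x1, x2) != C2(x1, x2) for (x1, x2) in pairs])

def estimate_stab(algorithm, draw_sample, m, n_pairs=1000, n_reps=20):
    """Plug-in estimate of stab(A, D, m) per Eq. (1), averaging over resamples."""
    vals = []
    for _ in range(n_reps):
        C1 = algorithm(draw_sample(m))       # clustering induced by sample S1
        C2 = algorithm(draw_sample(m))       # clustering induced by sample S2
        pairs = [(draw_sample(1)[0], draw_sample(1)[0]) for _ in range(n_pairs)]
        vals.append(estimate_dD(C1, C2, pairs))
    return float(np.mean(vals))
```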
In [2], it is essentially proven that if there exists a unique optimizer of the clustering algorithm's objective function, to which the algorithm converges for asymptotically large samples, then $\mathrm{stab}(A, \mathcal{D}, m)$ converges to 0 as $m \to \infty$, regardless of the parameters of $A$. From this, it was concluded that using stability as a tool for cluster validity is problematic, since for large enough samples it would always be approximately zero, for any algorithm parameters chosen.
However, using the intuition gleaned from the results of the previous section, the different convergence rates of the stability measure (for different algorithm parameters) should be more important than their absolute values or the sample size. The key technical result needed to substantiate this intuition is the following theorem:

Theorem 2. Let $X$, $Y$ be two random variables bounded in $[0, 1]$, and with strictly positive expected values. Assume $\mathbb{E}[X]/\mathbb{E}[Y] \geq 1 + c$ for some positive constant $c$. Letting $X_1, \ldots, X_m$ and $Y_1, \ldots, Y_m$ be $m$ identical independent copies of $X$ and $Y$ respectively, define $\bar{X} = \frac{1}{m}\sum_{i=1}^m X_i$ and $\bar{Y} = \frac{1}{m}\sum_{i=1}^m Y_i$. Then it holds that:
$$\Pr\left(\bar{X} \leq \bar{Y}\right) \leq \exp\left(-\frac{1}{8}\, m\,\mathbb{E}[X]\left(\frac{c}{1+c}\right)^4\right) + \exp\left(-\frac{1}{4}\, m\,\mathbb{E}[X]\left(\frac{c}{1+c}\right)^2\right).$$

The importance of this theorem becomes apparent when $\bar{X}$, $\bar{Y}$ are taken to be empirical estimators of $\mathrm{stab}(A, \mathcal{D}, m)$ for two different algorithm parameter sets $\theta$, $\theta'$. For example, suppose that according to our stability measure (see Eq. (1)), a cluster model with $k$ clusters is more stable than a model with $k'$ clusters, where $k \neq k'$, for sample size $m$ (e.g., $\mathrm{stab}(A_k, \mathcal{D}, m) < \mathrm{stab}(A_{k'}, \mathcal{D}, m)$). These stability measures might be arbitrarily close to zero. Assume that with high probability over the choice of samples $S_1$ and $S_2$ of size $m$, we can show that $d_\mathcal{D}(A_k(S_1), A_k(S_2)) \leq 1/\sqrt{m}$, while $d_\mathcal{D}(A_{k'}(S_1), A_{k'}(S_2)) \geq 1.01/\sqrt{m}$. We cannot compute these exactly, since the definition of $d_\mathcal{D}$ involves an expectation over the unknown distribution $\mathcal{D}$ (see Eq. (2)). However, we can estimate them by drawing another sample $S_3$ of $m$ instance pairs, and computing a sample mean to estimate Eq. (2). According to Thm. 2, since $d_\mathcal{D}(A_k(S_1), A_k(S_2))$ and $d_\mathcal{D}(A_{k'}(S_1), A_{k'}(S_2))$ have slightly different convergence rates ($c \approx 0.01$), which are slower than $\Theta(1/m)$, we can discern which number of clusters is more stable, with a high probability which actually improves as $m$ increases.
Therefore, we can use Thm. 2 as a guideline for when a stability estimator might be useful for arbitrarily large sample sizes. Namely, we need to show it is an expected value of some random variable,
with at least slightly different convergence rates for different model selections, and with at least
some of them dominating ?(1/m). We would expect these conditions to hold under quite general
settings, since most stability measures are based on empirically estimating the mean of some random variable.
Moreover, a central-limit theorem argument leads us to expect an asymptotic form of
?
?(1/ m), with the exact constants dependent on the model. This convergence rate is slow enough
for the theorem to apply. The difficult step, however, is showing that the differing convergence rates
can be detected empirically, without knowledge of D. In the example above, this reduces to showing that with high probability over S1 and S2 , dD (Ak (S1 ), Ak (S2 )) and dD (Ak0 (S1 ), Ak0 (S2 )) will
indeed differ by some constant ratio independent of m.
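As a quick numerical sanity check of this argument, the sketch below (all names are ours) simulates two Bernoulli sample means whose expectations both vanish as 1/√m but keep a constant ratio of 2, as in Lemma 2 below, and estimates how often their ordering is reversed. The reversal rate should shrink as m grows, exactly as Thm. 2 predicts.

```python
import numpy as np

rng = np.random.default_rng(0)

def reversal_rate(m, trials=5000):
    # Means 2/sqrt(m) and 1/sqrt(m): both converge to zero, but their
    # ratio is constant, so Pr(Xbar <= Ybar) should decrease with m.
    p_hi, p_lo = 2.0 / np.sqrt(m), 1.0 / np.sqrt(m)
    X = rng.binomial(1, p_hi, size=(trials, m)).mean(axis=1)
    Y = rng.binomial(1, p_lo, size=(trials, m)).mean(axis=1)
    return float(np.mean(X <= Y))

for m in (100, 1600, 25600):
    print(m, reversal_rate(m))
```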
Proof of Thm. 2. Using a relative entropy variant of Hoeffding's bound [7], we have that for any
1 > b > 0 and 1/E[Y] > a > 1, it holds that:
$$\Pr\left(\bar{X} \le b\,\mathbb{E}[X]\right) \le \exp\left(-m\, D_{KL}\left[b\,\mathbb{E}[X] \,\|\, \mathbb{E}[X]\right]\right),$$
$$\Pr\left(\bar{Y} \ge a\,\mathbb{E}[Y]\right) \le \exp\left(-m\, D_{KL}\left[a\,\mathbb{E}[Y] \,\|\, \mathbb{E}[Y]\right]\right).$$
By substituting the bound $D_{KL}[p\|q] \ge (p - q)^{2}/(2\max\{p, q\})$ in the two inequalities, we get:
$$\Pr\left(\bar{X} \le b\,\mathbb{E}[X]\right) \le \exp\left(-\tfrac{1}{2}\, m\, \mathbb{E}[X]\,(1 - b)^{2}\right), \qquad (3)$$
$$\Pr\left(\bar{Y} \ge a\,\mathbb{E}[Y]\right) \le \exp\left(-\tfrac{1}{2}\, m\, \mathbb{E}[Y]\left(a + \tfrac{1}{a} - 2\right)\right), \qquad (4)$$
which hold whenever 1 > b > 0 and a > 1. Let b = 1 − (1 − E[Y]/E[X])²/2, and a =
bE[X]/E[Y]. It is easily verified that b < 1 and a > 1. Substituting these values into the r.h.s. of
Eq. (3), and into both sides of Eq. (4), and after some algebra, we get:
$$\Pr\left(\bar{X} \le b\,\mathbb{E}[X]\right) \le \exp\left(-\tfrac{1}{8}\, m\, \mathbb{E}[X]\left(\tfrac{c}{1+c}\right)^{4}\right),$$
$$\Pr\left(\bar{Y} \ge b\,\mathbb{E}[X]\right) \le \exp\left(-\tfrac{1}{4}\, m\, \mathbb{E}[X]\left(\tfrac{c}{1+c}\right)^{2}\right).$$
As a result, by the union bound, we have that Pr(X̄ ≤ Ȳ) is at most the sum of the r.h.s. of the last
two inequalities, hence proving the theorem.
As a proof of concept, we show that for a certain setting, the stability measure used by [2], as defined
above, is meaningful for arbitrarily large sample sizes, even when this measure converges to zero for
any choice of the required number of clusters. The result is a simple counter-example to the claim
that this phenomenon makes cluster stability a problematic tool.
The setting we analyze is a mixture distribution of three well-separated unequal Gaussians in R,
where an empirical estimate of stability, using a centroid-based clustering algorithm, is utilized
to discern whether the data contain 2, 3 or 4 clusters. We prove that with high probability, this
empirical estimation process will discern k = 3 as much more stable than both k = 2 and k = 4
(by an amount depending on the separation between the Gaussians). The result is robust enough to
hold even if one additionally performs normalization procedures to account for the fact that a higher
number of clusters entails more degrees of freedom for the clustering algorithm (see [9]).
We emphasize that the simplicity of this setting is merely for analytical convenience.
The proof itself relies on a general and intuitive characteristic of what constitutes a 'wrong' model
(namely, having cluster boundaries in areas of high density), rather than any specific feature of this
setting. We are currently working on generalizing this result, using a more involved analysis.
In this setting, by the results of [2], stab(Ak, D, m) will converge to 0 as m → ∞ for k = 2, 3, 4.
The next two lemmas, however, show that the stability measure for k = 3 (the 'correct' model order)
is smaller than the other two, by a substantial ratio independent of m, and that this will be discerned,
with high probability, based on the empirical estimates of dD(Ak(S1), Ak(S2)). The proofs are
technical, and appear in the supplementary material to this paper.
Lemma 1. For some μ > 0, let D be a Gaussian mixture distribution on R, with density function
$$p(x) = \frac{2}{3\sqrt{2\pi}}\exp\left(-\frac{x^{2}}{2}\right) + \frac{1}{6\sqrt{2\pi}}\exp\left(-\frac{(x-\mu)^{2}}{2}\right) + \frac{1}{6\sqrt{2\pi}}\exp\left(-\frac{(x+\mu)^{2}}{2}\right).$$
Assume μ ≫ 1, so that the Gaussians are well separated. Let Ak be a centroid-based clustering algorithm, which is given a sample and a required number of clusters k, and returns a set of k
centroids, minimizing the k-means objective function (sum of squared Euclidean distances between
each instance and its nearest centroid). Then the following holds, with o(1) signifying factors which
converge to 0 as m → ∞:
$$\mathrm{stab}(A_2, D, m) \ge \frac{0.4 - o(1)}{\sqrt{m}}\exp\left(-\frac{\mu^{2}}{32}\right), \qquad \mathrm{stab}(A_4, D, m) \ge \frac{1 - o(1)}{7\sqrt{m}},$$
$$\mathrm{stab}(A_3, D, m) \le \frac{1.1 + o(1)}{\sqrt{2\pi m}}\exp\left(-\frac{\mu^{2}}{8}\right).$$
Lemma 2. For the setting described in Lemma 1, it holds that over the draw of independent sample pairs (S1, S2), (S1′, S2′), (S1″, S2″) (each of size m from D), the ratio between
dD(A2(S1′), A2(S2′)) and dD(A3(S1), A3(S2)), as well as the ratio between dD(A4(S1″), A4(S2″))
and dD(A3(S1), A3(S2)), is larger than 2 with probability of at least:
$$1 - (4 + o(1))\left(\exp\left(-\frac{\mu^{2}}{16}\right) + \exp\left(-\frac{\mu^{2}}{32}\right)\right).$$
It should be noted that the asymptotic notation is merely to get rid of second-order terms, and is not
an essential feature. Also, the constants are by no means the tightest possible. With these lemmas,
we can prove that a direct estimation of stab(A, D, m), based on a random sample, allows us to
discern the more stable model with high probability, for arbitrarily large sample sizes.
Theorem 3. For the setting described in Lemma 1, define the following unbiased estimator θ̂_{k,4m}
of stab(Ak, D, m): Given a sample of size 4m, split it randomly into 3 disjoint subsets S1, S2, S3 of
size m, m and 2m respectively. Estimate dD(Ak(S1), Ak(S2)) by computing
$$\frac{1}{m}\sum_{i=1}^{m} \mathbf{1}\left[A_k(S_1)(x_i, x_{m+i}) \ne A_k(S_2)(x_i, x_{m+i})\right],$$
where (x1, ..., x2m) is a random permutation of S3, and return this value as an estimate of
stab(Ak, D, m). If three samples of size 4m each are drawn i.i.d. from D, and are used to calculate
θ̂_{2,4m}, θ̂_{3,4m}, θ̂_{4,4m}, then
$$\Pr\left(\hat{\theta}_{3,4m} \ge \min\left\{\hat{\theta}_{2,4m}, \hat{\theta}_{4,4m}\right\}\right) \le \exp\left(-\Omega(\mu^{2})\right) + \exp\left(-\Omega(\sqrt{m})\right).$$
Proof. Using Lemma 2, we have that:
$$\Pr\left(\frac{\min\left\{d_D(A_2(S_1'), A_2(S_2')),\; d_D(A_4(S_1''), A_4(S_2''))\right\}}{d_D(A_3(S_1), A_3(S_2))} \le 2\right) < \exp\left(-\Omega(\mu^{2})\right). \qquad (5)$$
Denoting the event above as B, and assuming it does not occur, we have that the estimators
θ̂_{2,4m}, θ̂_{3,4m}, θ̂_{4,4m} are each an empirical average over an additional sample of size m, and the
expected value of θ̂_{3,4m} is at least twice smaller than the expected values of the other two. Moreover, by Lemma 1, the expected value of dD(A3(S1), A3(S2)) is Ω(1/√m). Invoking Thm. 2, we
have that:
$$\Pr\left(\hat{\theta}_{3,4m} \ge \min\left\{\hat{\theta}_{2,4m}, \hat{\theta}_{4,4m}\right\} \,\middle|\, B^{\complement}\right) \le \exp\left(-\Omega(\sqrt{m})\right). \qquad (6)$$
Combining Eq. (5) and Eq. (6) yields the required result.
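The estimator of Thm. 3 is straightforward to implement. Below is a sketch using scikit-learn's k-means as the centroid-based algorithm Ak (a local minimizer standing in for the exact optimizer assumed in Lemma 1); the function and variable names are ours.

```python
import numpy as np
from sklearn.cluster import KMeans

def stability_estimate(sample, k, rng):
    # theta_hat_{k,4m}: split a sample of size 4m into S1, S2 (size m each)
    # and S3 (size 2m); cluster S1 and S2; then count, over the m pairs
    # (x_i, x_{m+i}) of S3, how often the two clusterings disagree.
    m = len(sample) // 4
    perm = rng.permutation(len(sample))
    S1 = sample[perm[:m]].reshape(-1, 1)
    S2 = sample[perm[m:2 * m]].reshape(-1, 1)
    S3 = sample[perm[2 * m:4 * m]].reshape(-1, 1)
    l1 = KMeans(n_clusters=k, n_init=10).fit(S1).predict(S3)
    l2 = KMeans(n_clusters=k, n_init=10).fit(S2).predict(S3)
    together1 = l1[:m] == l1[m:2 * m]
    together2 = l2[:m] == l2[m:2 * m]
    return float(np.mean(together1 != together2))
```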
5 Experiments
In order to further substantiate our analysis above, some experiments were run on synthetic and
real world data, with the goal of performing model selection over the number of clusters k. Our
first experiment simulated the setting discussed in section 4 (see figure 1). We tested 3 different
Gaussian mixture distributions (with μ = 5, 7, 8), and sample sizes m ranging from 2^5 to 2^22. For
each distribution and sample size, we empirically estimated θ̂2, θ̂3 and θ̂4 as described in section
4, using the k-means algorithm, and repeated this procedure over 1000 trials. Our results show
that although these empirical estimators converge towards zero, their convergence rates differ, with
approximately constant ratios between them. Scaling the graphs by √m results in approximately
constant and differing stability measures for each k. Moreover, the failure rate does not increase with
sample size, and decreases rapidly to negligible size as the Gaussians become more well separated,
exactly in line with Thm. 3. Notice that although in the previous section we assumed a large
separation between the Gaussians for analytical convenience, good results are obtained even when
this separation is quite small.
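For reference, that simulation can be reproduced along the following lines, reusing stability_estimate from the sketch above; the mixture weights reflect our reading of the density in Lemma 1.

```python
def sample_mixture(mu, n, rng):
    # Three-Gaussian mixture of Lemma 1 (weights 2/3, 1/6, 1/6).
    means = rng.choice([0.0, mu, -mu], p=[2 / 3, 1 / 6, 1 / 6], size=n)
    return means + rng.normal(size=n)

rng = np.random.default_rng(1)
for m in (2 ** 5, 2 ** 10, 2 ** 15):
    ests = {k: stability_estimate(sample_mixture(7.0, 4 * m, rng), k, rng)
            for k in (2, 3, 4)}
    print(m, ests)  # theta_hat_3 should be smallest; all shrink ~ 1/sqrt(m)
```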
For the other experiments, we used the stability-based cluster validation algorithm proposed in [9],
which was found to compare favorably with similar algorithms, and has the desirable property of
[Figure 1 appears here: three rows of plots, one per distribution (μ = 5, 7, 8). Panels per row: the Distribution p(x); a log-log plot of the values of θ̂2, θ̂3, θ̂4 against the sample size m; and the Failure Rate against m, with curves for k = 2, 3, 4.]
Figure 1: Empirical validation of results in section 4. In each row, the leftmost sub-figure is the
actual distribution, the middle sub-figure is a log-log plot of the estimators θ̂2, θ̂3, θ̂4 (averaged over
1000 trials) as a function of the sample size, and on the right is the failure rate as a function of the
sample size (percentage of trials where θ̂3 was not the smallest of the three).
[Figure 2 appears here: three rows of plots, one per data set. Panels per row: a Random Sample of the data (the phoneme panel is labeled with the phonemes sh, iy, dcl, aa, ao); a log-log plot of the values of the stability method index against the sample size m, with curves for k = 2, ..., 7; and the Failure Rate against m.]
Figure 2: Performance of the stability based algorithm in [9] on 3 data sets. In each row, the leftmost
sub-figure is a sample representing the distribution, the middle sub-figure is a log-log plot of the
computed stability indices (averaged over 100 trials), and on the right is the failure rate (in detecting
the most stable model over repeated trials). In the phoneme data set, the algorithm selects 3 clusters
as the most stable model, since the vowels tend to group into a single cluster. The 'failures' are all
due to trials where k = 4 was deemed more stable.
producing a clear quantitative stability measure, bounded in [0, 1]. Lower values match models
with higher stability. The synthetic data sets selected (see figure 2) were a mixture of 5 Gaussians,
and segmented 2 rings. We also experimented on the Phoneme data set [6], which consists of
4,500 log-periodograms of 5 phonemes uttered by English speakers, to which we applied PCA
projection onto 3 principal components as a pre-processing step. The advantage of this data set is its
clear low-dimensional representation relative to its size, allowing us to get nearer to the asymptotic
convergence rates of the stability measures. All experiments used the k-means algorithm, except for
the ring data set which used the spectral clustering algorithm proposed in [13].
Complementing our theoretical analysis, the experiments clearly demonstrate that regardless of the
actual stability measures per fixed sample size, they seem to eventually follow roughly constant and
differing convergence rates, with no substantial degradation in performance. In other words, when
stability works well for small sample sizes, it should also work at least as well for larger sample
sizes. The universal asymptotic convergence to zero does not seem to be a problem in that regard.
6 Conclusions
In this paper, we propose a principled approach for analyzing the utility of stability for cluster
validation in large finite samples. This approach stems from viewing stability as a measure of
generalization in a statistical setting. It leads us to predict that in contrast to what might be concluded
from previous work, cluster stability does not necessarily degrade with increasing sample size. This
prediction is substantiated both theoretically and empirically.
The results also provide some guidelines (via Thm. 2) for when a stability measure might be relevant
for arbitrarily large sample size, despite asymptotic universal stability. They also suggest that by
appropriate scaling, stability measures would become insensitive to the actual sample size used.
These guidelines do not presume a specific clustering framework. However, we have proven their
fulfillment rigorously only for a certain stability measure and clustering setting. The proof can be
generalized in principle, but only at the cost of a more involved analysis. We are currently working
on deriving more general theorems on when these guidelines apply.
Acknowledgements: This work has been partially supported by the NATO SfP Programme and the
PASCAL Network of excellence.
References
[1] Shai Ben-David. A framework for statistical clustering with a constant time approximation algorithm for k-median clustering. In Proceedings of COLT 2004, pages 415-426.
[2] Shai Ben-David, Ulrike von Luxburg, and Dávid Pál. A sober look at clustering stability. In Proceedings of COLT 2006, pages 5-19.
[3] Olivier Bousquet and André Elisseeff. Stability and generalization. Journal of Machine Learning Research, 2:499-526, 2002.
[4] Joachim M. Buhmann and Marcus Held. Model selection in clustering by uniform convergence bounds. In Advances in Neural Information Processing Systems 12, pages 216-222, 1999.
[5] Andrea Caponnetto and Alexander Rakhlin. Stability properties of empirical risk minimization over Donsker classes. Journal of Machine Learning Research, 6:2565-2583, 2006.
[6] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning. Springer, 2001.
[7] Wassily Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58(301):13-30, March 1963.
[8] Samuel Kutin and Partha Niyogi. Almost-everywhere algorithmic stability and generalization error. In Proceedings of the 18th Conference on Uncertainty in Artificial Intelligence (UAI), pages 275-282, 2002.
[9] Tilman Lange, Volker Roth, Mikio L. Braun, and Joachim M. Buhmann. Stability-based validation of clustering solutions. Neural Computation, 16(6):1299-1323, June 2004.
[10] D.A. McAllester. PAC-Bayesian stochastic model selection. Machine Learning Journal, 51(1):5-21, 2003.
[11] C. McDiarmid. On the method of bounded differences. In Surveys in Combinatorics, volume 141 of London Mathematical Society Lecture Note Series, pages 148-188. Cambridge University Press, 1989.
[12] Alexander Rakhlin and Andrea Caponnetto. Stability of k-means clustering. In Advances in Neural Information Processing Systems 19. MIT Press, Cambridge, MA, 2007.
[13] Jianbo Shi and Jitendra Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888-905, 2000.
[14] Ulrike von Luxburg and Shai Ben-David. Towards a statistical theory of clustering. Technical report, PASCAL workshop on clustering, London, 2005.
| 3227 |@word mild:2 trial:6 middle:2 open:2 invoking:1 elisseeff:1 series:1 denoting:1 past:2 assigning:1 realistic:1 happen:1 plot:2 intelligence:2 selected:1 complementing:1 detecting:1 mcdiarmid:2 mathematical:1 direct:1 become:4 prove:4 consists:1 wassily:1 excellence:1 theoretically:1 periodograms:1 indeed:1 roughly:1 andrea:2 expected:9 growing:2 behavior:2 little:1 actual:4 increasing:3 becomes:1 estimating:1 underlying:6 notation:3 bounded:7 moreover:3 israel:1 what:3 stab:16 differing:3 quantitative:1 braun:1 exactly:2 wrong:1 jianbo:1 appear:1 producing:1 positive:3 negligible:1 engineering:1 limit:1 despite:3 ak:18 analyzing:1 approximately:3 might:8 emphasis:1 therein:2 twice:1 equivalence:2 statistically:1 obeys:2 averaged:2 practical:1 unique:2 arguing:1 practice:3 union:1 differs:1 procedure:5 area:1 universal:3 empirical:13 projection:1 word:6 induce:1 confidence:1 pre:1 suggest:1 get:5 cannot:1 close:1 selection:10 operator:1 convenience:2 risk:6 context:1 optimize:2 restriction:1 measurable:1 shi:1 center:1 roth:1 uttered:1 jerusalem:1 attention:2 regardless:4 straightforward:1 go:1 independently:1 survey:1 simplicity:1 estimator:6 deriving:1 stability:65 proving:1 notion:2 shamir:1 suppose:1 user:1 exact:1 olivier:1 hypothesis:5 element:1 utilized:1 cut:1 observed:1 calculate:1 counter:1 decrease:1 substantial:2 intuition:3 principled:1 rigorously:1 motivate:1 carrying:1 immaterial:1 algebra:1 easily:1 various:3 substantiated:2 separated:3 effective:1 london:2 detected:1 artificial:1 apparent:1 quite:2 supplementary:1 dominating:1 larger:2 say:1 drawing:2 otherwise:1 niyogi:1 itself:2 advantage:1 analytical:2 propose:1 relevant:1 combining:1 rapidly:1 achieve:1 intuitive:1 convergence:14 cluster:33 assessing:1 extending:1 leave:1 converges:3 ring:2 wider:1 derive:1 depending:1 ac:1 ben:3 nearest:2 school:1 received:2 eq:9 ecx:1 c:7 involves:1 implies:3 come:1 differ:3 closely:2 correct:2 stochastic:1 viewing:2 mcallester:1 material:1 sober:1 ao:1 generalization:19 clustered:2 tighter:1 strictly:2 extension:1 hold:9 exp:19 scope:1 predict:2 algorithmic:1 claim:1 substituting:2 major:1 optimizer:1 a2:5 smallest:1 purpose:1 estimation:2 applicable:1 currently:2 tool:5 reflects:1 minimization:1 mit:1 clearly:1 always:2 gaussian:2 aim:1 rather:3 volker:1 validated:1 focus:2 exemplifies:1 joachim:2 june:1 bernoulli:1 likelihood:1 mainly:1 contrast:1 centroid:4 sense:2 dependent:2 typically:1 relation:1 selects:1 issue:1 colt:2 pascal:2 having:1 sampling:2 identical:2 look:1 unsupervised:2 constitutes:1 discrepancy:1 future:1 report:1 quantitatively:1 few:2 randomly:2 divergence:2 individual:1 vowel:1 attempt:4 freedom:1 friedman:1 mixture:4 sh:1 held:1 immense:1 ohad:1 euclidean:1 theoretical:6 instance:10 soft:1 cost:1 addressing:1 subset:2 uniform:2 successful:1 tishby:2 unsurprising:1 characterize:1 synthetic:3 density:2 huji:1 interdisciplinary:1 together:1 ym:1 iy:1 squared:1 central:1 von:2 hoeffding:3 american:1 leading:1 return:3 account:1 jitendra:1 combinatorics:1 explicitly:1 later:2 view:2 analyze:1 ulrike:2 dcl:1 shai:3 partha:1 minimize:1 il:1 phoneme:3 characteristic:1 correspond:1 yield:1 generalize:2 dealt:1 bayesian:5 produced:1 presume:1 explain:1 whenever:1 trevor:1 definition:3 failure:6 involved:2 proof:7 knowledge:1 improves:1 segmentation:1 actually:4 higher:2 supervised:1 follow:1 discerned:1 just:1 stage:1 jerome:1 working:2 replacing:1 widespread:1 quality:1 artifact:1 validity:3 concept:1 unbiased:1 contain:1 normalized:1 hence:1 leibler:1 
naftali:1 noted:1 substantiate:2 speaker:1 samuel:1 criterion:4 generalized:2 leftmost:2 theoretic:1 demonstrate:1 gleaned:1 performs:1 ranging:1 image:1 novel:1 common:1 functional:1 empirically:4 insensitive:1 volume:1 belong:1 discussed:1 association:1 relating:2 significant:1 cambridge:2 tuning:1 pm:2 stable:12 entail:1 posterior:1 recent:2 optimizing:1 driven:1 certain:4 inequality:3 arbitrarily:8 success:1 additional:1 novelty:1 paradigm:1 converge:4 relates:1 desirable:1 reduces:1 stem:1 caponnetto:2 segmented:1 technical:3 match:1 long:1 dkl:6 prediction:4 variant:2 ae:3 essentially:1 expectation:1 normalization:1 addition:1 separately:2 median:1 concluded:4 meaningless:1 tend:1 virtually:1 seem:2 split:2 easy:1 enough:3 xj:2 fit:3 hastie:1 lange:1 idea:3 avid:1 whether:2 expression:1 pca:1 utility:1 ultimate:1 repeatedly:1 useful:2 governs:1 informally:2 involve:1 clear:2 amount:2 percentage:1 problematic:2 andr:1 s3:4 notice:1 estimated:1 disjoint:1 per:1 tibshirani:1 group:1 key:1 drawn:5 verified:1 utilize:1 asymptotically:4 graph:1 merely:2 year:2 sum:3 run:1 luxburg:2 everywhere:1 uncertainty:2 discern:5 place:1 almost:1 separation:3 draw:2 scaling:3 bound:13 display:1 kutin:1 occur:1 infinity:1 s10:3 x2:3 s100:3 sake:1 bousquet:1 aspect:1 argument:1 min:3 performing:2 attempting:1 relatively:1 according:2 march:1 smaller:2 slightly:2 s1:27 pr:12 taken:1 ln:5 remains:1 eventually:1 needed:1 know:1 letting:1 informal:1 gaussians:6 tightest:1 apply:2 appropriate:3 spectral:2 appearing:1 slower:1 existence:1 denotes:1 clustering:59 ensure:1 a4:5 society:1 objective:5 malik:1 question:2 quantity:2 fulfillment:1 distance:4 link:1 simulated:1 degrade:2 me:6 marcus:1 assuming:1 index:2 relationship:3 ratio:5 minimizing:3 hebrew:1 difficult:1 mostly:1 robert:1 favorably:1 negative:1 guideline:4 unknown:1 allowing:1 finite:10 tilman:1 defining:1 extended:1 y1:1 rn:1 thm:6 david:3 namely:3 pair:2 specified:1 required:3 s20:3 unequal:1 nearer:1 address:1 beyond:1 usually:1 pattern:1 xm:6 max:1 event:1 natural:2 buhmann:2 representing:1 ohadsh:1 deemed:1 prior:2 literature:1 acknowledgement:1 determining:1 asymptotic:7 relative:4 lacking:1 fully:1 loss:2 expect:2 permutation:1 lecture:1 proven:3 validation:8 foundation:1 degree:1 principle:2 viewpoint:1 dd:26 cd:2 row:2 supported:2 last:1 copy:1 english:1 side:1 wide:1 characterizing:1 absolute:1 regard:1 boundary:1 ak0:7 world:2 programme:1 transaction:1 emphasize:2 nato:1 kullback:1 global:1 uai:1 rid:1 conclude:1 assumed:2 xi:6 distressingly:1 why:2 nature:1 robust:1 necessarily:2 s2:33 repeated:2 x1:6 mikio:1 slow:1 guise:1 inferring:1 space1:1 wish:1 sub:4 lie:1 governed:1 donsker:1 theorem:12 remained:1 specific:3 pac:2 showing:2 explored:1 experimented:1 rakhlin:2 a3:9 exists:1 essential:1 workshop:1 importance:1 dissimilarity:1 magnitude:1 surprise:1 suited:1 entropy:1 cx:9 led:2 generalizing:1 partially:1 springer:1 aa:1 corresponds:1 relies:1 ma:1 goal:1 towards:3 determined:1 infinite:1 except:1 lemma:8 principal:1 degradation:1 e:2 meaningful:3 signifying:1 alexander:2 relevance:3 es1:3 tested:1 phenomenon:1 |
2,456 | 3,228 | Transfer Learning using Kolmogorov Complexity:
Basic Theory and Empirical Evaluations
M. M. Hassan Mahmud
Department of Computer Science
University of Illinois at Urbana-Champaign
[email protected]
Sylvian R. Ray
Department of Computer Science
University of Illinois at Urbana-Champaign
[email protected]
Abstract
In transfer learning we aim to solve new problems using fewer examples using
information gained from solving related problems. Transfer learning has been
successful in practice, and extensive PAC analysis of these methods has been developed. However it is not yet clear how to define relatedness between tasks. This
is considered as a major problem as it is conceptually troubling and it makes it
unclear how much information to transfer and when and how to transfer it. In
this paper we propose to measure the amount of information one task contains
about another using conditional Kolmogorov complexity between the tasks. We
show how existing theory neatly solves the problem of measuring relatedness and
transferring the ?right? amount of information in sequential transfer learning in a
Bayesian setting. The theory also suggests that, in a very formal and precise sense,
no other reasonable transfer method can do much better than our Kolmogorov
Complexity theoretic transfer method, and that sequential transfer is always justified. We also develop a practical approximation to the method and use it to transfer
information between 8 arbitrarily chosen databases from the UCI ML repository.
1 Introduction
The goal of transfer learning [1] is to learn new tasks with fewer examples given information gained
from solving related tasks, with each task corresponding to the distribution/probability measure
generating the samples for that task. The study of transfer is motivated by the fact that people use
knowledge gained from previously solved, related problems to solve new problems quicker. Transfer
learning methods have been successful in practice, for instance it has been used to recognize related
parts of a visual scene in robot navigation tasks, predict rewards in related regions in reinforcement
learning based robot navigation problems, and predict results of related medical tests for the same
group of patients. Figure 1 shows a prototypical transfer method [1], and it illustrates some of the
key ideas. The m tasks being learned are defined on the same input space, and are related by virtue
of requiring the same common 'high level features' encoded in the hidden units. The tasks are
learned in parallel, i.e. during training, the network is trained by alternating training samples from
the different tasks, and the hope is that now the common high level features will be learned quicker.
Transfer can also be done sequentially, where information from tasks learned previously is used to
speed up learning of new ones.
Despite the practical successes, the key question of how one measures relatedness between tasks has,
so far, eluded answer. Most current methods, including the deep PAC theoretic analysis in [2], start
by assuming that the tasks are related because they have a common near-optimal inductive bias (the
common hidden units in the above example). As no explicit measure of relatedness is prescribed, it
becomes difficult to answer questions such as how much information to transfer between tasks and
when not to transfer information.
Figure 1: A typical Transfer Learning Method.
There has been some work which attempt to solve these problems. [3] gives a more explicit measure
of task relatedness in which two tasks P and Q are said to be similar with respect to a given set of
functions if the set contains an element f such that P (a) = Q(f (a)) for all events a. By assuming
the existence of these functions, the authors are able to derive PAC sample complexity bounds for
error of each task (as opposed to expected error, w.r.t. a distribution over the m tasks, in [2]). More
interesting is the approach in [4], where the author derives PAC bounds in which the sample complexity is proportional to the joint Kolmogorov complexity [5] of the m hypotheses. So Kolmogorov
complexity (see below) determines the relatedness between tasks. However, the bounds hold only
for ≥ 8192 tasks (Theorem 3).
In this paper we approach the above idea from a Bayesian perspective and measure task relatedness
using the conditional Kolmogorov complexity of the hypotheses. We describe the basics of the theory
to show how it justifies this approach and neatly solves the problem of measuring task relatedness
(details in [6; 7]). We then perform experiments to show the effectiveness of this method.
Let us take a brief look at our approach. We assume that each hypothesis is represented by a program;
for example, a decision tree is represented by a program that contains a data structure representing
the tree, and the relevant code to compute the leaf node corresponding to a given input vector. The
Kolmogorov complexity of a hypothesis h (or any other bit string) is now defined as the length of the
shortest program that outputs h given no input. This is a measure of the absolute information content of
an individual object, in this case the hypothesis h. It can be shown that Kolmogorov complexity is
a sharper version of Information Theoretic entropy, which measures the amount of information in an
ensemble of objects with respect to a distribution over the ensemble. The conditional Kolmogorov
complexity of hypothesis h given h′, K(h|h′), is defined as the length of the shortest program that
outputs the program h given h′ as input. K(h|h′) measures the amount of constructive information
h′ contains about h, that is, how much information h′ contains for the purpose of constructing h. This
is precisely what we wish to measure in transfer learning. Hence this becomes our measure of
relatedness for performing sequential transfer learning in the Bayesian setting.
In the Bayesian setting, any sequential transfer learning mechanism/algorithm is 'just' a conditional
prior W(·|h′) over the hypothesis/probability measure space, where h′ is the task learned previously,
i.e. the task we are trying to transfer information from. In this case, by setting the prior over the
hypothesis space to be P(·|h′) := 2^{−K(·|h′)} we weight each candidate hypothesis by how related it
is to previous tasks, and so we automatically transfer the right amount of information when learning
the new problem. We show that in a certain precise sense this prior is never much worse than
any reasonable transfer learning prior, or any non-transfer prior. So, sequential transfer learning is
always justified from a theoretical perspective. This result is quite unexpected as the current belief
in the transfer learning community is that it should hurt to transfer from unrelated tasks. Due to
lack of space, we only just briefly note that similar results hold for an appropriate interpretation of
parallel transfer, and that, translated to the Bayesian setting, current practical transfer methods look
like sequential transfer methods [6; 7]. Kolmogorov complexity is computable only in the limit (i.e.
with infinite resources), and so, while ideal for investigating transfer in the limit, in practice we need
to use an approximation of it (see [8] for a good example of this). In this paper we perform transfer
in Bayesian decision trees by using a fairly simple approximation to the 2^{−K(·|·)} prior.
In the rest of the paper we proceed as follows. In section 3 we define Kolmogorov complexity more
precisely and state all the relevant Bayesian convergence results for making the claims above. We
then describe our Kolmogorov complexity based Bayesian transfer learning method. In section 4
we describe our method for approximation of the above using Bayesian decision trees, and then in
section 5 we describe 12 transfer experiments using 8 standard databases from the UCI machine
learning repository [9]. Our experiments are the most general that we know of, in the sense that we
transfer between arbitrary databases with little or no semantic relationships. We note that this fact
also makes it difficult to compare our method to other existing methods (see also section 6).
2 Preliminaries
We consider Bayesian transfer learning for finite input spaces I_i and finite output spaces O_i. We
assume finite hypothesis spaces H_i, where each h ∈ H_i is a conditional probability measure on O_i,
conditioned on elements of I_i. So for y ∈ O_i and x ∈ I_i, h(y|x) gives the probability of the output
being y given input x. Given D_n = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)} from I_i × O_i, the probability
of D_n according to h ∈ H_i is given by:
$$h(D_n) := \prod_{k=1}^{n} h(y_k | x_k).$$
The conditional probability of a new sample (x_new, y_new) ∈ I_i × O_i for any conditional probability
measure ρ (e.g. h ∈ H_i, or M_W in (3.2)) is given by:
$$\rho(y_{new} | x_{new}, D_n) := \frac{\rho(D_n \cup \{(x_{new}, y_{new})\})}{\rho(D_n)}. \qquad (2.1)$$
So the learning problem is: given a training sample D_n, where each y_k in (x_k, y_k) ∈ D_n is
assumed to have been chosen according to an h ∈ H_i, learn h. The prediction problem is to predict
the label of the new sample x_new using (2.1). The probabilities for the inputs x are not included
above because they cancel out. This is merely the standard Bayesian setting, translated to a typical
machine learning setting (e.g. [10]).
We use MCMC simulations in a computer to sample for our Bayesian learners, and so considering
only finite spaces above is acceptable. However, the theory we present here holds for any hypothesis,
input and output space that may be handled by a computer with infinite resources (see [11; 12] for
more precise descriptions). Note that we are considering cross-domain transfer [13] as our standard
setting (see section 6). We further assume that each h ? Hi is a program (therefore a bit string)
for some Universal prefix Turing machine U . When it is clear that a particular symbol p denotes a
program, we will write p(x) to denote U (p, x), i.e. running program p on input x.
3 Transfer Learning using Kolmogorov Complexity
3.1 Kolmogorov Complexity based Task Relatedness
A program is a bit string, and a measure of the absolute constructive information that a bit string x
contains about another bit string y is given by the conditional Kolmogorov complexity of x given y
[5]. Since our hypotheses are programs/bit strings, the amount of information that a hypothesis or
program h′ contains about constructing another hypothesis h is also given by the same:
Definition 1. The conditional Kolmogorov complexity of h ∈ H_j given h′ ∈ H_i is defined as the
length of the shortest program that, given the program h′ as input, outputs the program h:
$$K(h|h') := \min_{r}\{l(r) : r(h') = h\}.$$
We will use a minimality property of K. Let f(x, y) be a computable function over products of bit
strings. f is computable means that there is a program p such that p(x, n), n ∈ N, computes f(x, y)
to accuracy ε < 2^{−n} in finite time. Now assume that f(x, y) satisfies, for each y, Σ_x 2^{−f(x,y)} ≤ 1.
Then for a constant c_f = K(f) + O(1), independent of x and y, but dependent on K(f), the length
of the shortest program computing f, and some small constant (O(1)) [5, Corollary 4.3.1]:
$$K(x|y) \le f(x, y) + c_f. \qquad (3.1)$$
3.2 Bayesian Convergence Results
A Bayes mixture M_W over H_i is defined as follows:
$$M_W(D_n) := \sum_{h \in H_i} h(D_n)\, W(h) \quad \text{with} \quad \sum_{h \in H_i} W(h) \le 1 \qquad (3.2)$$
(the inequality is sufficient for the convergence results). Now assume that the data has been generated by an h_j ∈ H_i (this is standard for a Bayesian setting, but we will relax this constraint below).
Then the following impressive result holds true for each (x, y) ∈ I_i × O_i:
$$\sum_{n=0}^{\infty} \sum_{D_n} h_j(D_n)\left[M_W(y|x, D_n) - h_j(y|x, D_n)\right]^{2} \le -\ln W(h_j). \qquad (3.3)$$
So for finite −ln W(h_j), convergence is rapid; the expected number of times n for which |M_W(a|x, D_n) − h_j(a|x, D_n)| > ε is at most −ln W(h_j)/ε², and the probability that the number of ε deviations
exceeds −ln W(h_j)/(ε²δ) is less than δ. This result was first proved in [14], and extended variously in [11;
12]. In essence these results hold as long as H_i can be enumerated and h_j and W can be computed
with infinite resources. These results also hold if h_j ∉ H_i, but there exists h′_j ∈ H_i such that the nth order
KL divergence between h_j and h′_j is bounded by k. In this case the error bound is −ln W(h′_j) + k
[11, section 2.5]. Now consider the Solomonoff-Levin prior 2^{−K(h)}: this has (3.3) error bound
K(h) ln 2, and for any computable prior W(·), f(x, y) := −ln W(x)/ln 2 satisfies the conditions for
f(x, y) in (3.1). So by (3.1), with y = the empty string, we get:
$$K(h)\ln 2 \le -\ln W(h) + c_W. \qquad (3.4)$$
By (3.3), this means that for all h ∈ H_i, the error bound for the 2^{−K(h)} prior can be no more than a
constant worse than the error bound for any other prior. Since reasonable priors have small K(W)
(= O(1)), c_W = O(1) and this prior is universally optimal [11, section 5.3].
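For a finite hypothesis space, the mixture learner described above is only a few lines of code. The sketch below (our own naming; each hypothesis is represented abstractly as a conditional probability function h(y, x) rather than as a program) computes the posterior predictive of Eq. (2.1) for the mixture of Eq. (3.2):

```python
import numpy as np

def mixture_predict(hyps, prior, data, x_new, outputs):
    # Posterior predictive M_W(y | x_new, D_n) via Eqs. (2.1) and (3.2).
    # hyps: list of functions h(y, x); prior: weights W(h); data: [(x, y)].
    liks = np.array([np.prod([h(y, x) for (x, y) in data]) for h in hyps])
    post = liks * np.asarray(prior)  # proportional to W(h) * h(D_n)
    pred = {y: float(np.sum(post * np.array([h(y, x_new) for h in hyps])))
            for y in outputs}
    z = sum(pred.values())  # the mixture evidence, since each h(.|x) sums to 1
    return {y: p / z for y, p in pred.items()}
```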
3.3 Bayesian Transfer Learning
Assume we have previously observed/learned m − 1 tasks, with task t_j ∈ H_{i_j}, and the mth task to
be learned is in H_{i_m}. Let t := (t_1, t_2, ..., t_{m−1}). In the Bayesian framework, a transfer learning
scheme corresponds to a computable prior W(·|t) over the space H_{i_m},
$$\sum_{h \in H_{i_m}} W(h|t) \le 1.$$
In this case, by (3.3), the error bound of the transfer learning scheme M_W (defined by the prior W)
is −ln W(h|t). We define our transfer learning method M_TL by choosing the prior 2^{−K(·|t)}:
$$M_{TL}(D_n) := \sum_{h \in H_{i_m}} h(D_n)\, 2^{-K(h|t)}.$$
For M_TL the error bound is K(h|t) ln 2. By the minimality property (3.1), we get that
$$K(h|t)\ln 2 \le -\ln W(h|t) + c_W.$$
So for a reasonable computable transfer learning scheme M_W, c_W = O(1) and for all h and t, the
error bound for M_TL is no more than a constant worse than the error bound for M_W; i.e. M_TL
is universally optimal [11, section 5.3]. Also note that in general K(x|y) ≤ K(x)¹. Therefore by
(3.4) the transfer learning scheme M_TL is also universally optimal over all non-transfer learning
schemes; i.e. in the precise formal sense of the framework in this paper, sequential transfer learning
is always justified. The results in this section, while novel, are not technically deep (see also [6] [12,
section 6]). We should also note that the 2^{−K(h)} prior is not universally optimal with respect to
the transfer prior W(·|t), because the inequality (3.4) now holds only up to a constant c_{W(·|t)}
which depends on K(t). So this constant increases with increasing number of tasks, which is very
undesirable. Indeed, this is demonstrated in our experiments, where the base classifier used is an
approximation to the 2^{−K(h)} prior and its error is seen to be significantly higher than that of
the transfer learning prior 2^{−K(h|t)}.
4 Practical Approximation using Decision Trees
Since K is computable only in the limit, to apply the above ideas in practical situations, we need
to approximate K and hence M_TL. Furthermore we also need to specify the spaces H_i, O_i, I_i and
how to sample from the approximation of M_TL. We address each issue in turn.
¹ Because arg K(x), with a constant length modification, also outputs x given input y.
4.1 Decision Trees
We will consider standard binary decision trees as our hypotheses. Each hypothesis space H_i consists of decision trees for I_i defined by the set f_i of features. A tree h ∈ H_i is defined recursively:
h := n_root
n_j := r_j C_j ∅ ∅ | r_j C_j n_jL ∅ | r_j C_j ∅ n_jR | r_j C_j n_jL n_jR
C is a vector of size |O_i|, with component C_i giving the probability of the ith class. Each rule r is
of the form f < v, where f ∈ f_i and v is a value for f. The vector C is used during classification
only when the corresponding node has one or more ∅ children. The size of each tree is N c_0 where
N is the number of nodes, and c_0 is a constant denoting the size of each rule entry, the outgoing
pointers, and C. Since c_0 and the length of the program code p_0 for computing the tree output are
constants independent of the tree, we define the length of a tree as l(h) := N.
4.2 Approximating K and the Prior 2^{−K(·|t)}
Approximation for a single previously learned tree: We will approximate K(·|·) using a function
that is defined for a single previously learned tree as follows:
$$C_{ld}(h|h') := l(h) - d(h, h'),$$
where d(h, h′) is the maximum number of overlapping nodes starting from the root nodes:
$$d(h, h') := d(n_{root}, n'_{root}), \quad d(n, n') := 1 + d(n_L, n'_L) + d(n_R, n'_R), \quad d(n, \varnothing) := 0, \quad d(\varnothing, n') := 0.$$
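In code, the overlap distance and C_ld amount to a short recursion. A sketch with our own class layout follows; holes (the ∅ children) are represented by None:

```python
class Node:
    # Binary decision-tree node; left/right are Node instances or None (a hole).
    def __init__(self, rule, C, left=None, right=None):
        self.rule, self.C, self.left, self.right = rule, C, left, right

def overlap(n1, n2):
    # d(h, h'): number of overlapping nodes starting from the roots.
    if n1 is None or n2 is None:
        return 0
    return 1 + overlap(n1.left, n2.left) + overlap(n1.right, n2.right)

def tree_size(n):
    # l(h): the number of nodes in the tree.
    return 0 if n is None else 1 + tree_size(n.left) + tree_size(n.right)

def C_ld(h, h_prev):
    # C_ld(h | h') := l(h) - d(h, h').
    return tree_size(h) - overlap(h, h_prev)
```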
In the single task case, the prior is just 2^{−l(h)}/Z_l (which is an approximation to the Solomonoff-Levin prior 2^{−K(·)}), and in the transfer learning case, the prior is 2^{−C_ld(·|h′)}/Z_Cld, where the Zs
are normalization terms². In both cases, we can sample from the prior directly by growing the
decision tree dynamically. Call a ∅ in h a hole. Then for 2^{−l(h)}, during the generation process, we
first generate an integer k according to a 2^{−t} distribution (easy to do using a pseudo random number
generator). Then at each step we select a hole uniformly at random and then create a node there
(with two more holes) and generate the corresponding rule randomly. We do so until we get a tree
with l(h) = k. In the transfer learning case, for the prior 2^{−C_ld(·|h′)} we first generate an integer k
according to a 2^{−t} distribution. Then we generate as above until we get a tree h with C_ld(h|h′) = k.
It can be seen with a little thought that these procedures sample from the respective priors.
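A sketch of the growing procedure for the single-task prior 2^{−l(h)} follows, reusing the Node class above (make_rule, which draws a random split rule, is left abstract and is our own name). For the transfer prior, one would grow in the same way until C_ld(h|h′) reaches the drawn target instead of l(h):

```python
import numpy as np

def grow_tree(make_rule, rng):
    # Sample h from the 2^{-l(h)} prior: draw a target size with
    # P(k) = 2^{-k} (geometric), then repeatedly pick a hole uniformly
    # at random and fill it with a fresh node until l(h) equals the target.
    target = rng.geometric(0.5)
    root = Node(make_rule(rng), C=None)
    size, holes = 1, [(root, 'left'), (root, 'right')]
    while size < target:
        parent, side = holes.pop(rng.integers(len(holes)))
        child = Node(make_rule(rng), C=None)
        setattr(parent, side, child)
        holes += [(child, 'left'), (child, 'right')]
        size += 1
    return root
```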
Approximation for multiple previously learned trees: We define C_ld for multiple trees as an averaging of the contributions of each of the m − 1 previously learned trees:
$$C^{m}_{ld}(h_m | h_1, h_2, \ldots, h_{m-1}) := -\log_2\left(\frac{1}{m-1}\sum_{i=1}^{m-1} 2^{-C_{ld}(h_m|h_i)}\right).$$
In the transfer learning case, we need to sample according to 2^{−C^m_ld(·|·)}/Z^m_Cld, which reduces to
$$\frac{1}{(m-1)\,Z^{m}_{C_{ld}}}\sum_{i=1}^{m-1} 2^{-C_{ld}(h_m|h_i)}.$$
To sample from this, we can simply select an h_i from the m − 1 trees
at random and then sample from 2^{−C_ld(·|h_i)} to get the new tree.
The transfer learning mixture: The approximation of the transfer learning mixture M_TL is now:
$$P_{TL}(D_n) = \sum_{h \in H_{i_m}} h(D_n)\, 2^{-C^{m}_{ld}(h|t)} / Z^{m}_{C_{ld}}.$$
So by (3.3), the error bound for P_TL is given by C^m_ld(h|t) ln 2 + ln Z^m_Cld (the ln Z^m_Cld is a constant
that is the same for all h ∈ H_{i_m}). So when using C^m_ld, universality is maintained, but only up to the degree
that C^m_ld approximates K. In our experiments we used the prior 1.005^{−C} instead of 2^{−C} above to
make larger trees more likely and hence speed up convergence of MCMC sampling.
² The Zs exist here because the H_i are finite, and in general because k_i = N c_0 + l(p_0) gives lengths of
programs, which are known to satisfy Σ_i 2^{−k_i} ≤ 1.
Table 1: Metropolis-Hastings Algorithm
1. Let D_n be the training sample; select the current tree/state h_cur using the proposal distribution
q(h_cur).
2. For i = 1 to J do
(a) Choose a candidate next state h_prop according to the proposal distribution q(h_prop).
(b) Draw u uniformly at random from [0, 1] and set h_cur := h_prop if A(h_prop, h_cur) > u, where
A is defined by
$$A(h, h') := \min\left\{1,\; \frac{h(D_n)\, 2^{-C^{m}_{ld}(h|t)}\, q(h')}{h'(D_n)\, 2^{-C^{m}_{ld}(h'|t)}\, q(h)}\right\}$$
4.3 Approximating P_TL using Metropolis-Hastings
As in standard Bayesian MCMC methods, the idea will be to draw N samples h_{m_i} from the posterior P(h|D_n, t), which is given by
$$P(h|D_n, t) := h(D_n)\, 2^{-C^{m}_{ld}(h|t)} / \left(Z^{m}_{C_{ld}}\, P(D_n)\right).$$
Then we will approximate P_TL by
$$\hat{P}_{TL}(y|x) := \frac{1}{N}\sum_{i=1}^{N} h_{m_i}(y|x).$$
We will use the standard Metropolis-Hastings algorithm to sample from P_TL (see [15] for a brief
introduction and further references). The algorithm is given in table 1. The algorithm is first run for
some J = T, to get the Markov chain defined by q and A to converge, and then starting from the last h_cur in
the run, the algorithm is run again for J = N times to get N samples for P̂_TL. In our experiments
we set T to 1000 and N = 50. We set q to our prior 2^{−C^m_ld(·|t)}/Z^m_Cld, and hence the acceptance
probability A reduces to min{1, h(D_n)/h′(D_n)}. Note that every time after we generate a tree
according to q, we set the C entries using the training sample D_n in the usual way.
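A compact version of Table 1 with q set to the prior, as in our experiments, can be sketched as follows (log-space likelihoods for numerical safety; function names are ours):

```python
import numpy as np

def metropolis_hastings(sample_prior, loglik, T=1000, N=50, rng=None):
    # Independence MH with the proposal equal to the prior, so the
    # acceptance ratio reduces to min{1, h(Dn)/h'(Dn)}.
    rng = rng or np.random.default_rng()
    cur = sample_prior(rng)
    cur_ll = loglik(cur)
    samples = []
    for i in range(T + N):
        prop = sample_prior(rng)
        prop_ll = loglik(prop)
        if np.log(rng.random()) < prop_ll - cur_ll:
            cur, cur_ll = prop, prop_ll
        if i >= T:  # keep the last N states after the burn-in of length T
            samples.append(cur)
    return samples
```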
5 Experiments
We used 8 databases from the UCI machine learning repository [9] in our experiments (table 2). To
show transfer of information we used 20% of the data for a task as the training sample, but also
used, as prior knowledge, trees learned on another task using 80% of that task's data as the training sample.
The reported error rates are on the testing sets and are averages over 10 runs. To the best of our
knowledge our transfer experiments are the most general performed so far, in the sense that the
databases between which information is transferred often have only a tenuous semantic relationship.
We performed 3 sets of experiments. In the first set we learned each classifier using 80% of the
data as training sample and 20% as testing sample (since it is a Bayesian method, we did not use
a validation sample-set). This set ensured that our base Bayesian classifier with the 2^{−l(h)} prior is
reasonably powerful and that any improvement in performance in the transfer experiments (set 3)
was due to transfer and not deficiency in our base classifier. From a survey of literature it seems
the error rate for our classifier is always at least a couple of percentage points better than C4.5. As
an example, for ecoli our classifier outperforms Adaboost and Random Forests in [16], but is a bit
worse than these for German Credit.
In the second set of experiments we learned the databases that we are going to transfer to using 20%
of the database as training sample, and 80% of the data as the testing sample. This was done to
establish baseline performance for the transfer learning case. The third and final set of experiments
were performed to do the actual transfer. In this case, first one task was learned using 80/20 (80%
training, 20% testing) data set and then this was used to learn a 20/80 dataset. During transfer, the
N trees from the sampling of the 80/20 task were all used in the prior 2^{−C^N_ld(·|t)}. The results are
Table 2: Database summary. The last column gives the error and standard deviation for the 80/20
database split.

Data Set                No. of Samples   No. of Feats.   No. Classes   Error/S.D.
Ecoli                   336              7               8             9.8%, 3.48
Yeast                   1484             8               10            14.8%, 2.0
Mushroom                8124             22              2             0.83%, 0.71
Australian Credit       690              14              2             16.6%, 3.75
German Credit           1000             20              2             28.2%, 4.5
Hepatitis               155              19              2             18.86%, 2.03
Breast Cancer, Wisc.    699              9               2             5.6%, 1.9
Heart Disease, Cleve.   303              14              5             23.0%, 2.56
given in table 3. In our experiments, we transferred only to tasks that showed a significant drop in
error rate with the 20/80 split. Surprisingly, the error of the other data sets did not change much.
As can be seen from comparing the tables, in most cases transfer of information improves the performance compared to the baseline no-transfer case. For ecoli, the transfer resulted in improvement to
near 80/20 levels, while for australian the improvement was better than 80/20. While the error rates
for mushroom and bc-wisc did not move up to 80/20 levels, there was improvement. Interestingly,
transfer learning did not hurt in a single case, which agrees with our theoretical results in the
idealized setting.
Table 3: Results of 12 transfer experiments. The Trans. To and Trans. From rows give the databases
information is transferred to and from. The No-Transfer row gives the baseline 20/80 error rate and standard
deviation. The Transfer row gives the error rate and standard deviation after transfer, and the final row
PI gives the percentage improvement in performance due to transfer. With our admittedly inefficient
code, each experiment took between 15-60 seconds on a 2.4 GHz laptop with 512 MB RAM.

Trans. To      ecoli        ecoli        ecoli        Australian    Australian   Australian
Trans. From    Yeast        Germ.        BC Wisc      Germ.         ecoli        hep.
No-Transfer    20.6%, 3.8   20.6%, 3.8   20.6%, 3.8   23.2%, 2.4    23.2%, 2.4   23.2%, 2.4
Transfer       11.3%, 1.6   10.2%, 4.74  9.68%, 2.98  15.47%, 0.67  15.43%, 1.2  15.21%, 0.42
PI             45.1%        49%          53%          33.0%         33.5%        34.4%

Trans. To      mushroom     mushroom     mushroom     BC Wisc.      BC Wisc.     BC Wisc.
Trans. From    ecoli        BC Wisc.     Germ.        heart         Aus.         ecoli
No-Transfer    13.8%, 1.3   13.8%, 1.3   13.8%, 1.3   10.3%, 1.6    10.3%, 1.6   10.3%, 1.6
Transfer       4.6%, 0.17   4.64%, 0.21  3.89%, 1.02  8.3%, 0.93    8.1%, 1.22   7.8%, 2.03
PI             66.0%        66.0%        71.8%        19.4%         21.3%        24.3%
6 Discussion
In this paper we introduced a Kolmogorov Complexity theoretic framework for Transfer Learning. The theory is universally optimal and elegant, and we showed its practical applicability
by constructing approximations to it to transfer information across disparate domains in standard UCI machine learning databases. The full theoretical development can be found in [6;
7]. Directions for future empirical investigations are many. We did not consider transferring from
multiple previous tasks, and effect of size of source samples on transfer performance (using 70/30
etc. as the sources) or transfer in regression. Due to the general nature of our method, we can
perform transfer experiments between any combination of databases in the UCI repository. We
also wish to perform experiments using more powerful generalized similarity functions like the gzip
compressor [8]³.
We also hope that it is clear that the Kolmogorov complexity based approach elegantly solves the problem of cross-domain transfer, where we transfer information between tasks that are defined over
different input, output and distribution spaces. To the best of our knowledge, the first paper to address this was [13], and recent works include [17] and [18]. All these methods transfer information
by finding structural similarity between the various networks/rules that form the hypotheses. This is,
of course, a way to measure constructive similarity between the hypotheses, and hence an approximation to Kolmogorov complexity based similarity. So Kolmogorov complexity elegantly unifies
these ideas. Additionally, the above methods, particularly the last two, are rather elaborate and are
hypothesis space specific ([18] is even task specific). The theory of Kolmogorov complexity and its
practical approximations, such as [8] and this paper, suggest that we can get good performance by
just using generalized compressors, such as gzip, etc., to measure similarity.
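To illustrate this last point, a compressor-based stand-in for conditional complexity, in the spirit of [8] and footnote 3, can be written in a few lines (zlib is used here in place of gzip proper; this is a rough approximation, not the paper's C_ld):

```python
import zlib

def C(s: bytes) -> int:
    # Compressed length as a crude stand-in for Kolmogorov complexity.
    return len(zlib.compress(s, 9))

def cond_C(x: bytes, y: bytes) -> int:
    # Approximate C(x|y) as C(yx) - C(y): the extra compressed bits
    # needed to describe x once y is already available.
    return C(y + x) - C(y)
```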
Acknowledgments
We would like to thank Kiran Lakkaraju for their comments and Samarth Swarup for many fruitful
discussions.
References
[1] Rich Caruana. Multitask learning. Machine Learning, 28:41-75, 1997.
[2] Jonathan Baxter. A model of inductive bias learning. Journal of Artificial Intelligence Research, 12:149-198, March 2000.
[3] Shai Ben-David and Reba Schuller. Exploiting task relatedness for learning multiple tasks. In Proceedings of the 16th Annual Conference on Learning Theory, 2003.
[4] Brendan Juba. Estimating relatedness via data compression. In Proceedings of the 23rd International Conference on Machine Learning, 2006.
[5] Ming Li and Paul Vitanyi. An Introduction to Kolmogorov Complexity and its Applications. Springer-Verlag, New York, 2nd edition, 1997.
[6] M. M. Hassan Mahmud. On universal transfer learning. In Proceedings of the 18th International Conference on Algorithmic Learning Theory, 2007.
[7] M. M. Hassan Mahmud. On universal transfer learning (under review). 2008.
[8] R. Cilibrasi and P. Vitanyi. Clustering by compression. IEEE Transactions on Information Theory, 51(4):1523-1545, 2004.
[9] D.J. Newman, S. Hettich, C.L. Blake, and C.J. Merz. UCI repository of ML databases, 1998.
[10] Radford M. Neal. Bayesian methods for machine learning, NIPS tutorial, 2004.
[11] Marcus Hutter. Optimality of Bayesian universal prediction for general loss and alphabet. Journal of Machine Learning Research, 4:971-1000, 2003.
[12] Marcus Hutter. On universal prediction and Bayesian confirmation. Theoretical Computer Science (in press), 2007.
[13] Samarth Swarup and Sylvian R. Ray. Cross domain knowledge transfer using structured representations. In Proceedings of the 21st National Conference on Artificial Intelligence (AAAI), 2006.
[14] R. J. Solomonoff. Complexity-based induction systems: comparisons and convergence theorems. IEEE Transactions on Information Theory, 24(4):422-432, 1978.
[15] Christophe Andrieu, Nando de Freitas, Arnaud Doucet, and Michael I. Jordan. An introduction to MCMC for machine learning. Machine Learning, 50(1-2):5-43, 2003.
[16] Leo Breiman. Random forests. Machine Learning, 45:5-32, 2001.
[17] Lilyana Mihalkova, Tuyen Huynh, and Raymond Mooney. Mapping and revising Markov logic networks for transfer learning. In Proceedings of the 22nd National Conference on Artificial Intelligence (AAAI), 2007.
[18] Matthew Taylor and Peter Stone. Cross-domain transfer for reinforcement learning. In Proceedings of the 24th International Conference on Machine Learning, 2007.
³ A flavor of this approach: if the standard compressor is gzip, then the function C_gzip(xy) will give the
length of the string xy after compression by gzip. C_gzip(xy) − C_gzip(y) will be the conditional C_gzip(x|y).
So C_gzip(h|h′) will give the relatedness between tasks.
| 3228 |@word h:1 multitask:1 repository:5 version:1 briefly:1 compression:3 seems:1 nd:2 c0:4 simulation:1 p0:2 recursively:1 contains:7 denoting:1 bc:4 interestingly:1 prefix:1 outperforms:1 existing:2 freitas:1 current:4 comparing:1 yet:1 universality:1 mushroom:3 drop:1 intelligence:3 fewer:2 leaf:1 xk:2 ith:1 pointer:1 node:6 dn:30 consists:1 ray:3 indeed:1 tuyen:1 expected:2 rapid:1 uiuc:2 growing:1 ming:1 automatically:1 little:2 actual:1 considering:2 increasing:1 becomes:2 estimating:1 unrelated:1 bounded:1 laptop:1 what:1 string:9 z:1 developed:1 revising:1 finding:1 nj:1 pseudo:1 every:1 ensured:1 classifier:6 zl:1 unit:2 medical:1 yn:1 t1:1 limit:3 despite:1 au:1 dynamically:1 suggests:2 practical:7 acknowledgment:1 testing:4 practice:3 germ:3 procedure:1 universal:5 empirical:2 significantly:1 thought:1 get:8 undesirable:1 fruitful:1 demonstrated:1 starting:2 survey:1 rule:4 hurt:2 pt:5 hypothesis:19 element:2 particularly:1 database:13 observed:1 quicker:2 solved:1 region:1 yk:3 disease:1 reba:1 complexity:25 reward:1 trained:1 solving:2 technically:1 learner:1 translated:2 joint:1 represented:2 various:1 kolmogorov:22 alphabet:1 leo:1 describe:4 artificial:3 newman:1 choosing:1 quite:1 encoded:1 larger:1 solve:3 relax:1 final:2 took:1 propose:1 product:1 mb:1 uci:6 relevant:2 description:1 exploiting:1 convergence:6 empty:1 generating:1 ben:1 object:2 derive:1 develop:1 solves:3 c:1 australian:3 direction:1 kiran:1 nando:1 hassan:3 preliminary:1 investigation:1 enumerated:1 hold:7 considered:1 credit:3 blake:1 algorithmic:1 predict:3 mapping:1 claim:1 matthew:1 major:1 purpose:1 label:1 him:5 agrees:1 create:1 hope:2 always:4 aim:1 rather:1 hj:12 breiman:1 corollary:1 improvement:5 hepatitis:1 brendan:1 baseline:3 sense:5 dependent:1 transferring:2 hidden:2 mth:1 going:1 issue:1 arg:1 classification:1 development:1 fairly:1 never:1 sampling:2 look:2 cancel:1 future:1 t2:1 randomly:1 recognize:1 divergence:1 individual:1 resulted:1 variously:1 national:2 attempt:1 acceptance:1 evaluation:1 navigation:2 mixture:3 nl:1 tj:1 chain:1 xy:3 respective:1 tree:28 ynew:3 taylor:1 theoretical:4 hutter:2 instance:1 column:1 hep:1 measuring:2 caruana:1 applicability:1 deviation:4 entry:2 successful:2 levin:2 reported:1 answer:2 st:1 international:3 minimality:2 michael:1 again:1 aaai:2 opposed:1 choose:1 worse:4 inefficient:1 li:1 de:1 satisfy:1 mcmc:4 depends:1 idealized:1 performed:3 root:2 h1:1 start:1 bayes:1 parallel:2 shai:1 contribution:1 oi:8 accuracy:1 ensemble:2 conceptually:1 bayesian:22 unifies:1 ecoli:7 mooney:1 definition:1 mihalkova:1 couple:1 proved:1 dataset:1 knowledge:5 improves:1 cj:4 higher:1 adaboost:1 specify:1 done:2 furthermore:1 just:4 until:2 hastings:3 overlapping:1 lack:1 yeast:2 effect:1 requiring:1 y2:1 true:1 inductive:2 hence:5 andrieu:1 alternating:1 arnaud:1 semantic:2 neal:1 during:4 huynh:1 essence:1 maintained:1 generalized:2 trying:1 hcur:5 stone:1 theoretic:4 gzip:4 novel:1 fi:2 common:4 mt:9 interpretation:1 approximates:1 significant:1 rd:1 pm:1 neatly:2 illinois:2 robot:2 impressive:1 similarity:5 etc:2 base:3 posterior:1 showed:2 recent:1 perspective:2 certain:1 inequality:2 binary:1 arbitrarily:1 success:1 christophe:1 seen:3 converge:1 shortest:4 ii:8 multiple:4 full:1 rj:4 reduces:1 champaign:2 cross:4 long:1 prediction:3 basic:2 regression:1 breast:1 patient:1 normalization:1 justified:3 proposal:2 source:2 rest:1 comment:1 elegant:1 effectiveness:1 jordan:1 call:1 integer:2 structural:1 near:2 mw:8 ideal:1 split:2 easy:1 baxter:1 idea:5 
tm:1 computable:7 motivated:1 handled:1 solomonoff:3 peter:1 proceed:1 york:1 deep:2 clear:3 amount:6 lakkaraju:1 reduced:1 generate:5 exist:1 percentage:2 tutorial:1 write:1 mahmud:3 group:1 key:2 wisc:5 ram:1 merely:1 run:4 turing:1 powerful:2 reasonable:4 hettich:1 draw:2 decision:8 acceptable:1 bit:8 bound:12 hi:23 ki:2 cleve:1 xnew:4 vitanyi:2 lilyana:1 annual:1 precisely:2 constraint:1 deficiency:1 scene:1 x2:1 speed:2 prescribed:1 min:3 optimality:1 performing:1 transferred:3 department:2 structured:1 according:7 combination:1 march:1 across:1 metropolis:3 making:1 modification:1 heart:2 ln:17 resource:3 previously:8 turn:1 german:2 mechanism:1 know:1 apply:1 appropriate:1 upto:1 existence:1 denotes:1 running:1 cf:2 include:1 clustering:1 giving:1 establish:1 approximating:2 move:1 question:2 usual:1 nr:1 unclear:1 said:1 cw:5 thank:1 induction:1 marcus:2 assuming:2 code:3 length:9 relationship:2 troubling:1 difficult:2 sharper:1 hij:1 disparate:1 perform:4 markov:2 urbana:2 finite:7 situation:1 extended:1 precise:4 y1:1 arbitrary:1 community:1 introduced:1 david:1 kl:1 extensive:1 c4:1 eluded:1 learned:15 nip:1 trans:4 address:2 able:1 njr:2 below:2 cld:19 program:18 including:1 belief:1 event:1 schuller:1 nth:1 representing:1 scheme:5 pare:1 brief:2 hm:4 raymond:1 prior:32 literature:1 review:1 loss:1 prototypical:1 interesting:1 proportional:1 generation:1 generator:1 validation:1 h2:1 degree:1 sufficient:1 pi:3 row:4 cancer:1 course:1 summary:1 surprisingly:1 last:3 formal:2 bias:2 absolute:2 ghz:1 xn:1 rich:1 computes:1 author:2 reinforcement:2 universally:5 far:2 transaction:2 approximate:3 relatedness:13 feat:1 tenuous:1 logic:1 ml:2 doucet:1 sequentially:1 investigating:1 assumed:1 nroot:2 table:7 additionally:1 learn:3 transfer:89 reasonably:1 nature:1 confirmation:1 forest:2 hmi:2 constructing:3 domain:5 elegantly:2 did:5 paul:1 edition:1 child:1 x1:1 elaborate:1 explicit:2 wish:2 candidate:2 third:1 theorem:2 specific:2 pac:4 symbol:1 virtue:1 derives:1 sequential:7 gained:3 ci:1 illustrates:1 justifies:1 conditioned:1 hole:3 flavor:1 entropy:1 simply:1 likely:1 visual:1 unexpected:1 compressor:3 radford:1 corresponds:1 determines:1 satisfies:2 conditional:10 goal:1 njl:2 content:1 change:1 springerverlag:1 included:1 typical:2 infinite:3 uniformly:2 averaging:1 admittedly:1 merz:1 select:3 people:1 jonathan:1 constructive:3 outgoing:1 |
2,457 | 3,229 | Inferring Neural Firing Rates from Spike Trains
Using Gaussian Processes
John P. Cunningham^1, Byron M. Yu^1,2,3, Krishna V. Shenoy^1,2
^1 Department of Electrical Engineering, ^2 Neurosciences Program,
Stanford University, Stanford, CA 94305
{jcunnin,byronyu,shenoy}@stanford.edu
Maneesh Sahani^3
^3 Gatsby Computational Neuroscience Unit, UCL
Alexandra House, 17 Queen Square, London, WC1N 3AR, UK
[email protected]
Abstract
Neural spike trains present challenges to analytical efforts due to their noisy,
spiking nature. Many studies of neuroscientific and neural prosthetic importance
rely on a smoothed, denoised estimate of the spike train's underlying firing rate.
Current techniques to find time-varying firing rates require ad hoc choices of
parameters, offer no confidence intervals on their estimates, and can obscure
potentially important single trial variability. We present a new method, based
on a Gaussian Process prior, for inferring probabilistically optimal estimates of
firing rate functions underlying single or multiple neural spike trains. We test the
performance of the method on simulated data and experimentally gathered neural
spike trains, and we demonstrate improvements over conventional estimators.
1 Introduction
Neuronal activity, particularly in cerebral cortex, is highly variable. Even when experimental conditions are repeated closely, the same neuron may produce quite different spike trains from trial
to trial. This variability may be due to both randomness in the spiking process and to differences
in cognitive processing on different experimental trials. One common view is that a spike train is
generated from a smooth underlying function of time (the firing rate) and that this function carries
a significant portion of the neural information. If this is the case, questions of neuroscientific and
neural prosthetic importance may require an accurate estimate of the firing rate. Unfortunately, these
estimates are complicated by the fact that spike data gives only a sparse observation of its underlying
rate. Typically, researchers average across many trials to find a smooth estimate (averaging out spiking noise). However, averaging across many roughly similar trials can obscure important temporal
features [1]. Thus, estimating the underlying rate from only one spike train (or a small number of
spike trains believed to be generated from the same underlying rate) is an important but challenging
problem.
The most common approach to the problem has been to collect spikes from multiple trials in a peristimulus-time histogram (PSTH), which is then sometimes smoothed by convolution or splines [2],
[3]. Bin sizes and smoothness parameters are typically chosen ad hoc (but see [4], [5]) and the result
is fundamentally a multi-trial analysis. An alternative is to convolve a single spike train with a kernel.
Again, the kernel shape and time scale are frequently ad hoc. For multiple trials, researchers may
average over multiple kernel-smoothed estimates. [2] gives a thorough review of classical methods.
More recently, point process likelihood methods have been adapted to spike data [6]-[8]. These
methods optimize (implicitly or explicitly) the conditional intensity function \lambda(t | x(t), H(t)),
which gives the probability of a spike in [t, t + dt) given an underlying rate function x(t) and the
history of previous spikes H(t), with respect to x(t). In a regression setting, this rate x(t) may
be learned as a function of an observed covariate, such as a sensory stimulus or limb movement.
In the unsupervised setting of interest here, it is constrained only by prior expectations such as
smoothness. Probabilistic methods enjoy two advantages over kernel smoothing. First, they allow
explicit modelling of interactions between spikes through the history term H(t) (e.g., refractory
periods). Second, as we will see, the probabilistic framework provides a principled way to share
information between trials and to select smoothing parameters.
In neuroscience, most applications of point process methods use maximum likelihood estimation. In
the unsupervised setting, it has been most common to optimize x(t) within the span of an arbitrary
basis (such as a spline basis [3]). In other fields, a theory of generalized Cox processes has been
developed, where the point process is conditionally Poisson, and x(t) is obtained by applying a link
function to a draw from a random process, often a Gaussian process (GP) (e.g. [9]). In this approach,
parameters of the GP, which set the scale and smoothness of x(t) can be learned by optimizing the
(approximate) marginal likelihood or evidence, as in GP classification or regression. However, the
link function, which ensures a nonnegative intensity, introduces possibly undesirable artifacts. For
instance, an exponential link leads to a process that grows less smooth as the intensity increases.
Here, we make two advances. First, we adapt the theory of GP-driven point processes to incorporate a history-dependent conditional likelihood, suitable for spike trains. Second, we formulate the
problem such that nonnegativity in x(t) is achieved without a distorting link function or sacrifice of
tractability. We also demonstrate the power of numerical techniques that makes application of GP
methods to this problem computationally tractable. We show that GP methods employing evidence
optimization outperform both kernel smoothing and maximum-likelihood point process models.
2 Gaussian Process Model For Spike Trains
Spike trains can often be well modelled by gamma-interval point processes [6], [10]. We assume the
underlying nonnegative firing rate x(t) : t \in [0, T] is a draw from a GP, and then we assume that
our spike train is a conditionally inhomogeneous gamma-interval process (IGIP), given x(t). The
spike train is represented by a list of spike times y = \{y_0, ..., y_N\}. Since we will model this spike
train as an IGIP^1, y | x(t) is by definition a renewal process, so we can write:
p(y | x(t)) = \prod_{i=1}^{N} p(y_i | y_{i-1}, x(t)) \cdot p_0(y_0 | x(t)) \cdot p_T(T | y_N, x(t)),    (1)
where p_0(\cdot) is the density of the first spike occurring at y_0, and p_T(\cdot) is the density of no spikes being
observed on (y_N, T]; the density for IGIP intervals (of order \gamma \ge 1) (see e.g. [6]) can be written as:

p(y_i | y_{i-1}, x(t)) = \frac{\gamma x(y_i)}{\Gamma(\gamma)} \left[ \gamma \int_{y_{i-1}}^{y_i} x(u)\,du \right]^{\gamma - 1} \exp\left( -\gamma \int_{y_{i-1}}^{y_i} x(u)\,du \right).    (2)
The true p_0(\cdot) and p_T(\cdot) under this gamma-interval spiking model are not closed form, so we simplify these distributions as intervals of an inhomogeneous Poisson process (IP). This step, which
we find to sacrifice very little in terms of accuracy, helps to preserve tractability. Note also that
we write the distribution in terms of the inter-spike-interval distribution p(y_i | y_{i-1}, x(t)) and not
\lambda(t | x(t), H(t)), but the process could be considered equivalently in terms of conditional intensity.
We now discretize x(t) : t \in [0, T] by the time resolution of the experiment (\Delta, here 1 ms), to
yield a series of n evenly spaced samples x = [x_1, ..., x_n]' (with n = T/\Delta). The events y become
N + 1 time indices into x, with N much smaller than n. The discretized IGIP output process is now
(ignoring terms that scale with \Delta):
^1 The IGIP is one of a class of renewal models that works well for spike data (much better than inhomogeneous Poisson; see [6], [10]). Other log-concave renewal models such as the inhomogeneous inverse-Gaussian
interval can be chosen, and the implementation details remain unchanged.
p(y | x) = \prod_{i=1}^{N} \left[ \frac{\gamma x_{y_i}}{\Gamma(\gamma)} \left( \gamma \sum_{k=y_{i-1}}^{y_i - 1} x_k \Delta \right)^{\gamma - 1} \exp\left( -\gamma \sum_{k=y_{i-1}}^{y_i - 1} x_k \Delta \right) \right] \cdot x_{y_0} \exp\left( -\sum_{k=0}^{y_0 - 1} x_k \Delta \right) \cdot \exp\left( -\sum_{k=y_N}^{n-1} x_k \Delta \right),    (3)

where the final two terms are p_0(\cdot) and p_T(\cdot), respectively [11]. Our goal is to estimate a smoothly
where the final two terms are p0 (?) and pT (?), respectively [11]. Our goal is to estimate a smoothly
varying firing rate function from spike times. Loosely, instead of being restricted to only one family
of functions, GP allows all functions to be possible; the choice of kernel determines which functions
are more likely, and by how much. Here we use the standard squared exponential (SE) kernel. Thus,
x \sim N(\mu \mathbf{1}, \Sigma), where \Sigma is the positive definite covariance matrix defined by

\Sigma = \left[ K(t_i, t_j) \right]_{i,j \in \{1,...,n\}} \quad \text{where} \quad K(t_i, t_j) = \sigma_f^2 \exp\left( -\frac{\kappa}{2} (t_i - t_j)^2 \right) + \sigma_v^2 \delta_{ij}.    (4)

For notational convenience, we define the hyperparameter set \theta = [\mu; \gamma; \kappa; \sigma_f^2; \sigma_v^2]. Typically, the
GP mean \mu is set to 0. Since our intensity function is nonnegative, however, it is sensible to treat \mu
instead as a hyperparameter and let it be optimized to a positive value. We note that other standard
kernels - including the rational quadratic, Matern \nu = 3/2, and Matern \nu = 5/2 - performed similarly to
the SE; thus we only present the SE here. For an in-depth discussion of kernels and of GP, see [12].
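As a concrete illustration of Eq. 4, the following sketch (Python with NumPy; the function and variable names are ours, not from the original implementation) builds the SE covariance matrix over the n time samples:

import numpy as np

def se_covariance(t, sigma_f2, kappa, sigma_v2):
    """SE covariance of Eq. 4: K(ti,tj) = sigma_f^2 exp(-kappa/2 (ti-tj)^2) + sigma_v^2 delta_ij."""
    d = t[:, None] - t[None, :]  # pairwise time differences
    return sigma_f2 * np.exp(-0.5 * kappa * d**2) + sigma_v2 * np.eye(len(t))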
As written, the model assumes only one observed spike train; it may be that we have m trials believed
to be generated from the same firing rate profile. Our method naturally incorporates this case: define
p(\{y\}_1^m | x) = \prod_{i=1}^{m} p(y^{(i)} | x), where y^{(i)} denotes the ith spike train observed.^2 Otherwise, the
model is unchanged.
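To make the discretized observation model concrete, here is a sketch of the negative log of Eq. 3 (Python/NumPy/SciPy; a hypothetical helper of our own, not the authors' code), with spikes given as indices into x:

import numpy as np
from scipy.special import gammaln

def igip_negloglik(x, spikes, gamma, dt):
    """-log p(y | x) for the discretized IGIP of Eq. 3.
    x: rate samples (length n); spikes: sorted indices y_0 < ... < y_N; dt: bin width Delta."""
    nll = 0.0
    for a, b in zip(spikes[:-1], spikes[1:]):
        m = gamma * dt * x[a:b].sum()  # gamma times the integrated rate over (y_{i-1}, y_i]
        nll -= np.log(gamma * x[b]) - gammaln(gamma) + (gamma - 1) * np.log(m) - m
    nll -= np.log(x[spikes[0]]) - dt * x[:spikes[0]].sum()  # Poisson density of the first spike
    nll += dt * x[spikes[-1]:].sum()                        # probability of no spikes on (y_N, T]
    return nll

For m trials believed to share the same rate, the negative log likelihoods of the individual trains simply add.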
3 Finding an Optimal Firing Rate Estimate
3.1 Algorithmic Approach
Ideally, we would calculate the posterior on firing rate p(x | y) = \int_\theta p(x | y, \theta) p(\theta) d\theta (integrating
over the hyperparameters \theta), but this problem is intractable. We consider two approximations:
replacing the integral by evaluation at the modal \theta, and replacing the integral with a sum over a
discrete grid of \theta values. We first consider choosing a modal hyperparameter set (ML-II model
selection, see [12]), i.e. p(x | y) \approx q(x | y, \theta^*) where q(\cdot) is some approximate posterior, and

\theta^* = \arg\max_\theta p(\theta | y) = \arg\max_\theta p(\theta) p(y | \theta) = \arg\max_\theta p(\theta) \int_x p(y | x, \theta) p(x | \theta) dx.    (5)
(This and the following equations hold similarly for a single observation y or multiple observations
\{y\}_1^m, so we consider only the single observation for notational brevity.) Specific choices for the
hyperprior p(\theta) are discussed in Results. The integral in Eq. 5 is intractable under the distributions
we are modelling, and thus we must use an approximation technique. Laplace approximation and
we are modelling, and thus we must use an approximation technique. Laplace approximation and
Expectation Propagation (EP) are the most widely used techniques (see [13] for a comparison). The
Laplace approximation fits an unnormalized Gaussian distribution to the integrand in Eq. 5. Below
we show this integrand is log concave in x. This fact makes reasonable the Laplace approximation,
since we know that the distribution being approximated is unimodal in x and shares log concavity
with the normal distribution. Further, since we are modelling a non-zero mean GP, most of the
Laplace approximated probability mass lies in the nonnegative orthant (as is the case with the true
posterior). Accordingly, we write:
^2 Another reasonable approach would consider each trial as having a different rate function x that is a draw
from a GP with a nonstationary mean function \mu(t). Instead of inferring a mean rate function x^*, we would
learn a distribution of means. We are considering this choice for future work.
p(y | \theta) = \int_x p(y | x, \theta) p(x | \theta) dx \approx p(y | x^*, \theta) p(x^* | \theta) \frac{(2\pi)^{n/2}}{|\Lambda^* + \Sigma^{-1}|^{1/2}},    (6)

where x^* is the mode of the integrand and \Lambda^* = -\nabla_x^2 \log p(y | x, \theta) |_{x=x^*}. Note that in general
both \Sigma and \Lambda^* (and x^*, implicitly) are functions of the hyperparameters \theta. Thus, Eq. 6 can be
differentiated with respect to the hyperparameter set, and an iterative gradient optimization (we used
conjugate gradients) can be used to find (locally) optimal hyperparameters. Algorithmic details and
the gradient calculations are typical for GP; see [12]. The Laplace approximation also naturally
provides confidence intervals from the approximated posterior covariance (\Sigma^{-1} + \Lambda^*)^{-1}.
We can also consider approximate integration over \theta using the Laplace approximation above. The
Laplace approximation produces a posterior approximation q(x | y, \theta) = N(x^*, (\Lambda^* + \Sigma^{-1})^{-1})
and a model evidence approximation q(\theta | y) (Eq. 6). The approximate integrated posterior can be
written as p(x | y) = E_{\theta|y}[p(x | y, \theta)] \approx \sum_j q(x | y, \theta_j) q(\theta_j | y) for some choice of samples
\theta_j (which again gives confidence intervals on the estimates). Since the dimensionality of \theta is small,
and since we find in practice that the posterior on \theta is well behaved (well peaked and unimodal), we
find that a simple grid of \theta_j works very well, thereby obviating MCMC or another sampling scheme.
This approximate integration consistently yields better results than a modal hyperparameter set, so
we will only consider approximate integration for the remainder of this report.
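Assuming a routine laplace_fit(y, theta) that returns the Laplace mode x^* and the approximate log evidence of Eq. 6 (this routine is our hypothetical stand-in for the machinery above), the grid-based average can be sketched as:

import numpy as np
from scipy.special import logsumexp

def grid_posterior_mean(y, theta_grid, laplace_fit):
    """p(x|y) ~ sum_j q(x|y,theta_j) q(theta_j|y): evidence-weighted average of Laplace modes."""
    modes, log_ev = zip(*(laplace_fit(y, th) for th in theta_grid))
    log_w = np.array(log_ev) - logsumexp(log_ev)  # normalized grid weights q(theta_j | y)
    return np.einsum('j,jn->n', np.exp(log_w), np.array(modes))

This returns the mixture mean of the Gaussian components; the posterior covariance (and hence confidence intervals) mixes across the grid in the same way.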
For the Laplace approximation at any value of \theta, we require the modal estimate of firing rate x^*,
which is simply the MAP estimator:

x^* = \arg\max_{x \succeq 0} p(x | y) = \arg\max_{x \succeq 0} p(y | x) p(x).    (7)
Solving this problem is equivalent to solving an unconstrained problem where p(x) is a truncated
multivariate normal (but this is not the same as individually truncating each marginal p(x_i); see
[14]). Typically a link or squashing function would be included to enforce nonnegativity in x, but
this can distort the intensity space in unintended ways. We instead impose the constraint x \succeq 0,
which reduces the problem to being solved over the (convex) nonnegative orthant. To pose the
problem as a convex program, we define f(x) = -\log p(y | x) p(x):
f(x) = \sum_{i=1}^{N} \left[ -\log x_{y_i} - (\gamma - 1) \log \left( \sum_{k=y_{i-1}}^{y_i - 1} x_k \Delta \right) \right] + \sum_{k=y_0}^{y_N - 1} \gamma x_k \Delta
  - \log x_{y_0} + \sum_{k=0}^{y_0 - 1} x_k \Delta + \sum_{k=y_N}^{n-1} x_k \Delta    (8)
  + \frac{1}{2} (x - \mu \mathbf{1})^T \Sigma^{-1} (x - \mu \mathbf{1}) + C,    (9)
where C represents constants with respect to x. From this form follows the Hessian
\nabla_x^2 f(x) = \Sigma^{-1} + \Lambda \quad \text{where} \quad \Lambda = -\nabla_x^2 \log p(y | x, \theta) = B + D,    (10)

where D = diag(x_{y_0}^{-2}, ..., 0, ..., x_{y_i}^{-2}, ..., 0, ..., x_{y_N}^{-2}) is positive semidefinite and diagonal. B is
block diagonal with N blocks. Each block is rank 1 and associates its positive, nonzero eigenvalue
with eigenvector [0, ..., 0, b_i^T, 0, ..., 0]^T. The remaining n - N eigenvalues are zero. Thus, B has
total rank N and is positive semidefinite. Since \Sigma is positive definite, it follows then that the Hessian
is also positive definite, proving convexity. Accordingly, we can use a log barrier Newton method to
efficiently solve for the global MAP estimator of firing rate x^* [15].
In the case of multiple spike train observations, we need only add extra terms of negative log likelihood from the observation model. This flows through to the Hessian, where \nabla_x^2 f(x) = \Sigma^{-1} + \Lambda
and \Lambda = \Lambda_1 + ... + \Lambda_m, with \Lambda_i, \forall i \in \{1, ..., m\}, defined for each observation as in Eq. 10.
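The paper solves Eq. 7 with a log-barrier Newton method; as a simpler stand-in, the sketch below obtains the MAP estimate with a generic bound-constrained optimizer, reusing igip_negloglik from above. This is an illustrative substitution, not the authors' solver:

import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.optimize import minimize

def map_rate(spikes, Sigma, mu, gamma, dt):
    """MAP estimate of Eq. 7: minimize igip_negloglik(x) + 0.5 (x-mu)' Sigma^{-1} (x-mu), x >= 0."""
    n = Sigma.shape[0]
    chol = cho_factor(Sigma)  # factor once; reuse for every objective evaluation
    def neglogpost(x):
        r = x - mu
        return igip_negloglik(x, spikes, gamma, dt) + 0.5 * r @ cho_solve(chol, r)
    res = minimize(neglogpost, x0=np.full(n, max(mu, 1.0)),
                   method='L-BFGS-B', bounds=[(1e-8, None)] * n)
    return res.x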
3.2 Computational Practicality
This method involves multiple iterative layers which require many Hessian inversions and other matrix operations (matrix-matrix products and determinants) that cost O(n^3) in run-time complexity
and O(n^2) in memory, where x \in \mathbb{R}^n. For any significant data size, a straightforward implementation is hopelessly slow. With 1 ms time resolution (or similar), this method would be restricted
to spike trains lasting less than a second, and even this problem would be burdensome. Achieving
computational improvements is critical, as a naive implementation is, for all practical purposes, intractable. Techniques to improve computational performance are a subject of study in themselves
and are beyond the scope of this paper. We give a brief outline in the following paragraph.
In the MAP estimation of x^*, since we have analytical forms of all matrices, we avoid explicit
representation of any matrix, resulting in linear storage. Hessian inversions are avoided using the
matrix inversion lemma and conjugate gradients, leaving matrix-vector multiplications as the single
costly operation. Multiplication of any vector by \Lambda can be done in linear time, since \Lambda is a (blockwise) vector outer product matrix. Since we have evenly spaced resolution of our data x in time
indices t_i, \Sigma is Toeplitz; thus multiplication by \Sigma can be done using Fast Fourier Transform (FFT)
methods [16]. These techniques allow exact MAP estimation with linear storage and nearly linear
run time performance. In practice, for example, this translates to solving MAP estimation problems
of 10^3 variables in fractions of a second, with minimal memory load. For the modal hyperparameter
scheme (as opposed to approximately integrating over the hyperparameters), gradients of Eq. 6
must also be calculated at each step of the model evidence optimization. In addition to using similar
techniques as in the MAP estimation, log determinants and their derivatives (associated with the
Laplace approximation) can be accurately approximated by exploiting the eigenstructure of \Sigma.
In total, these techniques allow optimal firing rate functions of 10^3 to 10^4 variables to be estimated
in seconds or minutes (on a modern workstation). These data sizes translate to seconds of spike
data at 1 ms resolution, long enough for most electrophysiological trials. This algorithm achieves a
reduction from a naive implementation which would require large amounts of memory and would
take many hours or days to complete.
4 Results
We tested the methods developed here using both simulated neural data, where the true firing rate
was known by construction, and in real neural spike trains, where the true firing rate was estimated
by a PSTH that averaged many similar trials. The real data used were recorded from macaque
premotor cortex during a reaching task (see [17] for experimental method). Roughly 200 repeated
trials per neuron were available for the data shown here.
We compared the IGIP-likelihood GP method (hereafter, GP IGIP) to other rate estimators (kernel
smoothers, Bayesian Adaptive Regression Splines or BARS [3], and variants of the GP method)
using root mean squared difference (RMS) to the true firing rate. PSTH and kernel methods approximate the mean conditional intensity \bar{\lambda}(t) = E_{H(t)}[\lambda(t | x(t), H(t))]. For a renewal process, we know
(by the time rescaling theorem [7], [11]) that \bar{\lambda}(t) = x(t), and thus we can compare the GP IGIP
(which finds x(t)) directly to the kernel methods. To confirm that hyperparameter optimization improves performance, we also compared GP IGIP results to maximum likelihood (ML) estimates of
x(t) using fixed hyperparameters \theta. This result is similar in spirit to previously published likelihood
methods with fixed bases or smoothness parameters. To evaluate the importance of an observation
model with spike history dependence (the IGIP of Eq. 3), we also compared GP IGIP to an inhomogeneous Poisson (GP IP) observation model (again with a GP prior on x(t); simply \gamma = 1 in
Eq. 3).
The hyperparameters \theta have prior distributions (p(\theta) in Eq. 5). For \sigma_f, \kappa, and \gamma, we set lognormal priors to enforce meaningful values (i.e. finite, positive, and greater than 1 in the case of
\gamma). Specifically, we set \log(\sigma_f^2) \sim N(5, 2), \log(\kappa) \sim N(2, 2), and \log(\gamma - 1) \sim N(0, 100).
The variance \sigma_v can be set arbitrarily small, since the GP IGIP method avoids explicit inversions
of \Sigma with the matrix inversion lemma (see 3.2). For the approximate integration, we chose a grid
consisting of the empirical mean rate for \mu (that is, total spike count N divided by total time T)
and (\gamma, \log(\sigma_f^2), \log(\kappa)) \in [1, 2, 4] \times [4, ..., 8] \times [0, ..., 7]. We found this coarse grid (or similar)
produced similar results to many other very finely sampled grids.
[Figure 1 plots omitted: four panels of firing rate (spikes/sec) versus time (sec). (a) Data Set L20061107.214.1, 1 spike train; (b) Data Set L20061107.14.1, 4 spike trains; (c) Data Set L20061107.151.5, 8 spike trains; (d) Data Set L20061107.46.3, 1 spike train.]
Figure 1: Sample GP firing rate estimate. See full description in text.
The four examples in Fig. 1 represent experimentally gathered firing rate profiles (according to the
methods in [17]). In each of the plots, the empirical average firing rate of the spike trains is shown
in bold red. For simulated spike trains, the spike trains were generated from each of these empirical
average firing rates using an IGIP (? = 4, comparable to fits to real neural data). For real neural
data, the spike train(s) were selected as a subset of the roughly 200 experimentally recorded spike
trains that were used to construct the firing rate profile. These spike trains are shown as a train of
black dots, each dot indicating a spike event time (the y-axis position is not meaningful). This spike
train or group of spike trains is the only input given to each of the fitting models. In thin green and
magenta, we have two kernel smoothed estimates of firing rates; each represents the spike trains
convolved with a normal distribution of a specified standard deviation (50 and 100 ms). We also
smoothed these spike trains with adaptive kernel [18], fixed ML (as described above), BARS [3],
and 150 ms kernel smoothers. We do not show these latter results in Fig. 1 for clarity of figures.
These standard methods serve as a baseline from which we compare our method. In bold blue, we
see x^*, the results of the GP IGIP method. The light blue envelopes around the bold blue GP firing
rate estimate represent the 95% confidence intervals. Bold cyan shows the GP IP method. This color
scheme holds for all of Fig. 1.
We then ran all methods 100 times on each firing rate profile, using (separately) simulated and real
neural spike trains. We are interested in the average performance of GP IGIP vs. other GP methods
(a fixed ML or a GP IP) and vs. kernel smoothing and spline (BARS) methods. We show these
results in Fig. 2. The four panels correspond to the same rate profiles shown in Fig. 1. In each
panel, the top, middle, and bottom bar graphs correspond to the method on 1, 4, and 8 spike trains,
respectively. GP IGIP produces an average RMS error, which is an improvement (or, less often,
a deterioration) over a competing method. Fig. 2 shows the percent improvement of the GP IGIP
method vs. the competing method listed. Only significant results are shown (paired t-test, p < 0.05).
[Figure 2 bar charts omitted: percent RMS improvement of GP IGIP over GP IP, fixed ML, short/medium/long/adaptive kernel smoothers, and BARS, for 1, 4, and 8 spike trains on each of the four data sets (a) L20061107.214.1, (b) L20061107.14.1, (c) L20061107.151.5, (d) L20061107.46.3.]
Figure 2: Average percent RMS improvement of GP IGIP method (with model selection) vs. method
indicated in the column title. See full description in text.
Blue improvement bars are for simulated spike trains; red improvement bars are for real neural spike
trains. The general positive trend indicates improvements, suggesting the utility of this approach.
Note that, in the few cases where a kernel smoother performs better (e.g. the long bandwidth kernel
in panel (b), real spike trains, 4 and 8 spike trains), outperforming the GP IGIP method requires
an optimal kernel choice, which can not be judged from the data alone. In particular, the adaptive
kernel method generally performed more poorly than GP IGIP. The relatively poor performance of
GP IGIP vs. different techniques in panel (d) is considered in the Discussion section. The data
sets here are by no means exhaustive, but they indicate how this method performs under different
conditions.
5 Discussion
We have demonstrated a new method that accurately estimates underlying neural firing rate functions
and provides confidence intervals, given one or a few spike trains as input. This approach is not
without complication, as the technical complexity and computational effort require special care.
Estimating underlying firing rates is especially challenging due to the inherent noise in spike trains.
Having only a few spike trains deprives the method of many trials to reduce spiking noise. It is
important here to remember why we care about single trial or small number of trial estimates, since
we believe that in general the neural processing on repeated trials is not identical. Thus, we expect
this signal to be difficult to find with or without trial averaging.
In this study we show both simulated and real neural spike trains. Simulated data provides a good test
environment for this method, since the underlying firing rate is known, but it lacks the experimental
proof of real neural spike trains (where spiking does not exactly follow a gamma-interval process).
For the real neural spike trains, however, we do not know the true underlying firing rate, and thus we
can only make comparisons to a noisy, trial-averaged mean rate, which may or may not accurately
reflect the true underlying rate of an individual spike train (due to different cognitive processing on
different trials). Taken together, however, we believe the real and simulated data give good evidence
of the general improvements offered by this method.
Panels (a), (b), and (c) in Fig. 2 show that GP IGIP offers meaningful improvements in many cases
and a small loss in performance in a few cases. Panel (d) tells a different story. In simulation, GP
IGIP generally outperforms the other smoothers (though, by considerably less than in other panels).
In real neural data, however, GP IGIP performs the same or relatively worse than other methods.
This may indicate that, in the low firing rate regime, the IGIP is a poor model for real neural spiking.
It may also be due to our algorithmic approximations (namely, the Laplace approximation, which
allows density outside the nonnegative orthant). We will report on this question in future work.
Furthermore, some neural spike trains may be inherently ill-suited to analysis. A problem with this
and any other method is that of very low firing rates, as only occasional insight is given into the
underlying generative process. With spike trains of only a few spikes/sec, it will be impossible
for any method to find interesting structure in the firing rate. In these cases, only with many trial
averaging can this structure be seen.
Several studies have investigated the inhomogeneous gamma and other more general models (e.g.
[6], [19]), including the inhomogeneous inverse gaussian (IIG) interval and inhomogeneous Markov
interval (IMI) processes. The methods of this paper apply immediately to any log-concave inhomogeneous renewal process in which inhomogeneity is generated by time-rescaling (this includes the
IIG and several others). The IMI (and other more sophisticated models) will require some changes
in implementation details; one possibility is a variational Bayes approach. Another direction for
this work is to consider significant nonstationarity in the spike data. The SE kernel is standard, but
it is also stationary; the method will have to compromise between areas of categorically different
covariance. Nonstationary covariance is an important question in modelling and remains an area of
research [20]. Advances in that field should inform this method as well.
Acknowledgments
This work was supported by NIH-NINDS-CRCNS-R01, the Michael Flynn SGF, NSF, NDSEGF,
Gatsby, CDRF, BWF, ONR, Sloan, and Whitaker. This work was conceived at the UK Spike Train
Workshop, Newcastle, UK, 2006; we thank Stuart Baker for helpful discussions during that time.
We thank Vikash Gilja, Stephen Ryu, and Mackenzie Risch for experimental, surgical, and animal
care assistance. We thank also Araceli Navarro.
References
[1] B. Yu, A. Afshar, G. Santhanam, S. Ryu, K. Shenoy, and M. Sahani. Advances in NIPS, 17, 2005.
[2] R. Kass, V. Ventura, and E. Brown. J. Neurophysiol., 94:8-25, 2005.
[3] I. DiMatteo, C. Genovese, and R. Kass. Biometrika, 88:1055-1071, 2001.
[4] H. Shimazaki and S. Shinomoto. Neural Computation, 19(6):1503-1527, 2007.
[5] D. Endres, M. Oram, J. Schindelin, and P. Foldiak. Advances in NIPS, 20, 2008.
[6] R. Barbieri, M. Quirk, L. Frank, M. Wilson, and E. Brown. J. Neurosci. Methods, 105:25-37, 2001.
[7] E. Brown, R. Barbieri, V. Ventura, R. Kass, and L. Frank. Neural Comp., 2002.
[8] W. Truccolo, U. Eden, M. Fellows, J. Donoghue, and E. Brown. J. Neurophysiol., 93:1074-1089, 2004.
[9] J. Moller, A. Syversveen, and R. Waagepetersen. Scandinavian J. of Stats., 1998.
[10] K. Miura, Y. Tsubo, M. Okada, and T. Fukai. J. Neurosci., 27:13802-13812, 2007.
[11] D. Daley and D. Vere-Jones. An Introduction to the Theory of Point Processes. Springer, 2002.
[12] C. Rasmussen and C. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[13] M. Kuss and C. Rasmussen. Journal of Machine Learning Res., 6:1679-1704, 2005.
[14] W. Horrace. J. Multivariate Analysis, 94(1):209-221, 2005.
[15] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[16] B. Silverman. Journal of Royal Stat. Soc. Series C: Applied Stat., 33, 1982.
[17] C. Chestek, A. Batista, G. Santhanam, B. Yu, A. Afshar, J. Cunningham, V. Gilja, S. Ryu, M. Churchland, and K. Shenoy. J. Neurosci., 27:10742-10750, 2007.
[18] B. Richmond, L. Optican, and H. Spitzer. J. Neurophys., 64(2), 1990.
[19] R. Kass and V. Ventura. Neural Comp., 14:5-15, 2003.
[20] C. Paciorek and M. Schervish. Advances in NIPS, 15, 2003.
2,458 | 323 | Generalization by Weight-Elimination
with Application to Forecasting
Andreas S. Weigend
Physics Department
Stanford University
Stanford, CA 94305
David E. Rumelhart
Psychology Department
Stanford University
Stanford, CA 94305
Bernardo A. Huberman
Dynamics of Computation
Xerox PARC
Palo Alto, CA 94304
Abstract
Inspired by the information theoretic idea of minimum description length, we add
a term to the back propagation cost function that penalizes network complexity.
We give the details of the procedure, called weight-elimination, describe its
dynamics, and clarify the meaning of the parameters involved. From a Bayesian
perspective, the complexity term can be usefully interpreted as an assumption
about prior distribution of the weights. We use this procedure to predict the
sunspot time series and the notoriously noisy series of currency exchange rates.
1 INTRODUCTION
Learning procedures for connectionist networks are essentially statistical devices for performing inductive inference. There is a trade-off between two goals: on the one hand, we
want such devices to be as general as possible so that they are able to learn a broad range
of problems. This recommends large and flexible networks. On the other hand, the true
measure of an inductive device is not how well it performs on the examples it has been
shown, but how it performs on cases it has not yet seen, i.e., its out-of-sample performance.
Too many weights of high precision make it easy for a net to fit the idiosyncrasies or "noise"
of the training data and thus fail to generalize well to new cases. This overfitting problem
is familiar in inductive inference, such as polynomial curve fitting. There are a number
of potential solutions to this problem. We focus here on the so-called minimal network
strategy. The underlying hypothesis is: if several nets fit the data equally well, the simplest
one will on average provide the best generalization. Evaluating this hypothesis requires (i)
some way of measuring simplicity and (ii) a search procedure for finding the desired net.
The complexity of an algorithm can be measured by the length of its minimal description
in some language. Rissanen [Ris89] and Cheeseman [Che90] formalized the old but vague
intuition of Occam's razor as the information theoretic minimum description length (MDL)
criterion: Given some data, the most probable model is the model that minimizes
\underbrace{\text{description length}}_{\text{cost}} = \underbrace{\text{description length(data | model)}}_{\text{error}} + \underbrace{\text{description length(model)}}_{\text{complexity}}
This sum represents the trade-off between residual error and model complexity. The goal is
to find a net that has the lowest complexity while fitting the data adequately. The complexity
is dominated by the number of bits needed to encode the weights. It is roughly proportional
to the number of weights times the number of bits per weight. We focus here on the
procedure of weight-elimination that tries to find a net with the smallest number of weights.
We compare it with a second approach that tries to minimize the number of bits per weight,
thereby creating a net that is not too dependent on the precise values of its weights.
2 WEIGHT-ELIMINATION
In 1987, Rumelhart proposed a method for finding minimal nets within the framework of
back propagation learning. In this section we explain and interpret the procedure and, for
the first time, give the details of its implementation.^1
2.1 METHOD
The idea is indeed simple in conception: add to the usual cost function a term which counts
the number of parameters, and minimize the sum of performance error and the number of
weights by back propagation,
\sum_{k \in T} (\text{target}_k - \text{output}_k)^2 + \lambda \sum_{i \in C} \frac{w_i^2 / w_0^2}{1 + w_i^2 / w_0^2}    (1)
The first term measures the performance of the net. In the simplest case, it is the sum
squared error over the set of training examples T. The second term measures the size of
the net. Its sum extends over all connections C. \lambda represents the relative importance of the
complexity term with respect to the performance term.
The learning rule is then to change the weights according to the gradient of the entire cost
function, continuously doing justice to the trade-off between error and complexity. This
differs from methods that consider a set of fixed models, estimate the parameters for each
of them, and then compare between the models by considering the number of parameters.
The complexity cost as a function of w_i/w_0 is shown in Figure 1(b). The extreme regions of
very large and very small weights are easily interpreted. For |w_i| >> w_0, the cost of a weight
approaches unity (times \lambda). This justifies the interpretation of the complexity term as a
counter of significantly sized weights. For |w_i| << w_0, the cost is close to zero. "Large" and
"small" are defined with respect to the scale w_0, a free parameter of the weight-elimination
procedure that has to be chosen.
^1 The original formulation benefited from conversations with Paul Smolensky. Variations and
alternatives have been developed by Hinton, Hanson and Pratt, Mozer and Smolensky, le Cun,
Denker and Solla, Ji, Snapp and Psaltis, and others. They are discussed in Weigend [Wei91].
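In code, the complexity term of Equation 1 and its gradient are a few lines (a sketch in Python/NumPy; function names are ours):

import numpy as np

def we_penalty(w, w0):
    """Complexity term of Eq. 1: sum_i (w_i/w0)^2 / (1 + (w_i/w0)^2)."""
    r2 = (w / w0) ** 2
    return np.sum(r2 / (1.0 + r2))

def we_penalty_grad(w, w0):
    """Its gradient, added (times lambda) to the error gradient in back propagation."""
    r2 = (w / w0) ** 2
    return (2.0 * w / w0**2) / (1.0 + r2) ** 2

For |w| << w0 the gradient is approximately 2w/w0^2, i.e. weight-decay; for |w| >> w0 it vanishes, so large weights are left essentially untouched.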
[Figure 1 plots omitted.]
Figure 1: (a) Prior probability distribution for a weight, for \lambda = 0.5, 1, 2, 4. (b) Corresponding cost.
(c) Cost for different values of S/w_0 as function of \alpha = w_1/S, where S = w_1 + w_2.
two weights (WI and wz) to the same signal source. Is it cheaper to have two smaller weights
or just one large weight? Interestingly, as shown in Figure l(c), the answer depends on the
ratio S/wo, where S = WI + Wz is the relevant sum for the receiving unit. For values of
S/wo up to about 1.1, there is only one minimum at 0:' := wt/S = 0.5, i.e., both weights
are present and equal. When S/Wo increases, this symmetry gets broken; it is cheaper to
set one weight ~ S and eliminate the other one.
Weight-decay, proposed by Hinton and by Ie Cun in 1987, is contained in our method of
weight-elimination as the special case of large woo In the statistics community, this limit
(cost ex w;) is known as ridge regression. The scale parameter Wo thus allows us to express
a preference for fewer large weights (wo small) or many small weights (wo large). In our
experience, choosing Wo of order unity is good for activations of order unity.
2.2 INTERPRETATION AS PRIOR PROBABILITY
Further insight can be gained by viewing the cost as the negative log likelihood of the
network, given the data. In this framework^2, the error term is the negative logarithm of the
probability of the data given the net, and the complexity term is the negative logarithm of
the prior probability of the weights.
The cost function corresponds approximately to the assumption that the weights come from
a mixture of two distributions. Relevant weights are drawn from a uniform distribution (to
^2 This perspective is expanded in a forthcoming paper by Rumelhart et al. [RDGC92].
allow for normalization of the probability, up to a certain maximum size). Weights that are
merely the result of "noise" are drawn from a Gaussian-like distribution centered on zero;
they are expected to be small. We show the prior probability for our complexity term for
several values of \lambda in Figure 1(a). If we wish to approximate the bump around zero by a
Gaussian, its variance is given by \sigma^2 = w_0^2/\lambda. Its width scales with w_0.
Perhaps surprisingly, the innocent weighting factor \lambda now influences the width: the variance
of the "noise" is inversely proportional to \lambda. The larger \lambda is, the closer to zero a weight
must be to have a reasonable probability of being a member of the "noise" distribution.
Also, the larger \lambda is, the more "pressure" small weights feel to become even smaller.
The following technical section describes how \lambda is dynamically adjusted in training. From
the perspective taken in Section 2.1, the usual increase of \lambda during training corresponds to
attaching more importance to the complexity term. From the perspective developed in this
section, it corresponds to sharpening the peak of the weight distribution around zero.
2.3 DETAILS
Although the basic form of the weight-elimination procedure is simple, it is sensitive to the
choice of \lambda.^3 If \lambda is too small, it will have no effect. If \lambda is too large, all of the weights will
be driven to zero. Worse, a value of \lambda which is useful for a problem that is easily learned
may be too large for a hard problem, and a problem which is difficult in one region (at the
start, for example) may require a larger value of \lambda later on. We have developed some rules
that make the performance relatively insensitive to the exact values of the parameters.
We start with \lambda = 0 so that the network can initially use all of its resources. \lambda is changed after each
epoch. It is usually gently incremented, sometimes decremented, and, in emergencies, cut down. The
choice among these three actions depends on the value of the error on the training set, \epsilon_n.
The subscript n denotes the number of the epoch that has just finished. (Note that \epsilon_n is only the
first term of the cost function (Equation 1). Since gradient descent minimizes the sum of both terms,
\epsilon_n by itself can decrease or increase.) \epsilon_n is compared to three quantities, the first two derived from
previous values of that error itself, the last one given externally:
• \epsilon_{n-1}: Previous error.
• A_n: Average error (exponentially weighted over the past).
  It is defined as A_n = \gamma A_{n-1} + (1 - \gamma) \epsilon_n (with \gamma relatively close to 1).
• D: Desired error, the externally provided performance criterion.
  The strategy for choosing D depends on the specific problem. For example, "solutions" with
  an error larger than D might not be acceptable. Or, we may have observed (by monitoring the
  out-of-sample performance during training) that overfitting starts when a certain in-sample error
  is reached. Or, we may have some other estimate of the amount of noise in the training data.
  For toy problems, derived from approximating analytically defined functions (where perfect
  performance on the training data can be expected), a good choice is D = 0. For hard problems,
  such as the prediction of currency exchange rates, D is set just below the error that corresponds
  to chance performance, since overfitting would occur if the error was reduced further.
After each epoch in training, we evaluate whether \epsilon_n is above or below each of these quantities. This
gives eight possibilities. Three actions are possible:
• \lambda <- \lambda + \Delta\lambda
  In six cases, we increment \lambda slightly. These are the situations in which things are going well:
  the error is already below the criterion (\epsilon_n < D) and/or is still falling (\epsilon_n < \epsilon_{n-1}).
  Incrementing \lambda means attaching more importance to the complexity term and making the
  Gaussian a little sharper. Note that the primary parameter is actually \Delta\lambda. Its size is fairly small,
  of order 10^{-6}.
  In the remaining two cases, the error is worse than the criterion and it has grown compared to
  just before (\epsilon_n >= \epsilon_{n-1}). The action depends on its relation to its long term average A_n.
• \lambda <- \lambda - \Delta\lambda    [if \epsilon_n >= \epsilon_{n-1} and \epsilon_n < A_n and \epsilon_n >= D]
  In the less severe of those two cases, the performance is still improving with respect to the long
  term average (\epsilon_n < A_n). Since the error can have grown only slightly, we reduce \lambda slightly.
• \lambda <- 0.9 \lambda    [if \epsilon_n >= \epsilon_{n-1} and \epsilon_n >= A_n and \epsilon_n >= D]
  In this last case, the error has increased and exceeds its long term average. This can happen
  for two reasons. The error might have grown a lot in the last iteration. Or, it might not have
  improved by much in the whole period covered by the long term average, i.e., the network
  might be trapped somewhere before reaching the performance criterion. The value of \lambda is cut,
  hopefully preventing weight-elimination from devouring the whole net.
We have found that this set of heuristics for finding a minimal network while achieving
a desired level of performance on the training data works rather well on a wide range of
tasks; a code sketch of the schedule is given below. We give two examples of applications
of weight-elimination. In the second example we show how \lambda changes during training.
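A compact rendering of the schedule from Section 2.3 (Python; our sketch, not the original code):

def update_lambda(lam, err, prev_err, avg_err, desired, dlam=1e-6):
    """One epoch of the lambda schedule: increment, decrement, or cut (Section 2.3)."""
    if err < desired or err < prev_err:  # the six "going well" cases
        return lam + dlam
    if err < avg_err:                    # worse than last epoch, but better than average
        return max(lam - dlam, 0.0)
    return 0.9 * lam                     # stuck or blown up: cut lambda

The caller maintains the running average as avg_err = gamma * avg_err + (1 - gamma) * err, with gamma close to 1.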
3 APPLICATION TO TIME SERIES PREDICTION
A central problem in science is predicting the future of temporal sequences; examples
range from forecasting the weather to anticipating currency exchange rates. The desire
to know the future is often the driving force behind the search for laws in science. The
ability to forecast the behavior of a system hinges on two types of knowledge. The first and
most powerful one is the knowledge of the laws underlying a given phenomenon. When
expressed in the form of equations, the future outcome of an experiment can be predicted.
The second, albeit less powerful, type of knowledge relies on the discovery of empirical
regularities without resorting to knowledge of the underlying mechanism. In this case, the
key problem is to determine which aspects of the data are merely idiosyncrasies and which
aspects are truly indicators of the intrinsic behavior. This issue is particularly serious for
real world data, which are limited in precision and sample size. We have applied nets with
weight-elimination to time series of sunspots and currency exchange rates.
3.1 SUNSPOT SERIES^4
When applied to predict the famous yearly sunspot averages, weight-elimination reduces
the number of hidden units to three. Just having a small net, however, is not the ultimate
goal: predictive power is what counts. The net has one half the out-of-sample error (on
iterated single step predictions) of the benchmark model by Tong [Ton90].
What happens when we enlarge the input size from twelve, the optimal size for the benchmark model, to four times that size? As shown in [WRH90], the performance does not
deteriorate (as might have been expected from a less dense distribution of data points in
higher dimensional spaces). Instead, the net manages to ignore irrelevant information.
^4 We here only briefly summarize our results on sunspots. Details have been published in [WHR90]
and [WRH90].
3.2 CURRENCY EXCHANGE RATES^5
We use daily exchange rates (or prices with respect to the US Dollar) for five currencies
(German Mark (DM), Japanese Yen, Swiss Franc, Pound Sterling and Canadian Dollar) to
predict the returns at day t, defined as
r_t := \ln \frac{p_t}{p_{t-1}} = \ln \left( 1 + \frac{p_t - p_{t-1}}{p_{t-1}} \right) \approx \frac{p_t - p_{t-1}}{p_{t-1}}    (2)
For small changes, the return is the difference to the previous day normalized by the price
p_{t-1}. Since different currencies and different days of the week may have different dynamics,
we pick one day (Monday) and one currency (DM). We define the task to be to learn
Monday DM dynamics: given exchange rate information through a Monday, predict the
DM - US$ rate for the following day.
The net has 45 inputs for past daily DM returns, 5 inputs for the present Monday's returns
of all available currencies, and 11 inputs for additional information (trends and volatilities),
solely derived from the original exchange rates. The k day trend at day t is the mean of the
returns of the k last days, \frac{1}{k} \sum_{t'=t-k+1}^{t} r_{t'}. Similarly, the k day volatility is
defined to be the standard deviation of the returns of the k last days.
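A sketch of how returns, trends, and volatilities might be computed from a price series (Python/NumPy; our illustration, not the original preprocessing code):

import numpy as np

def returns_and_features(prices, k):
    """Daily log returns (Eq. 2) plus k-day trend (mean) and volatility (std) of the last k returns."""
    r = np.diff(np.log(prices))
    trend = np.array([r[t-k+1:t+1].mean() for t in range(k-1, len(r))])
    vol   = np.array([r[t-k+1:t+1].std()  for t in range(k-1, len(r))])
    return r, trend, vol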
The inputs are fully connected to the 5 sigmoidal hidden units with range (-1, 1). The
hidden units are fully connected to two output units. The first one is to predict the next
day return, r_{t+1}. This is a linear unit, trained with quadratic error. The second output unit
focuses on the sign of the change. Its target value is one when the price goes up and zero
otherwise. Since we want the unit to predict the probability that the return is positive, we
choose a sigmoidal unit with range (0,1) and minimize cross entropy error.
The central question is whether the net is able to extract any signal from the training set
that generalizes to the test sets. The performance is given as a function of training time in
epochs in Figure 2.^6
The result is that the out-of-sample prediction is significantly better than chance. Weight-elimination reliably extracts a signal that accounts for between 2.5 and 4.0 per cent of the
variance, corresponding to a correlation coefficient of 0.21 +/- 0.03 for both test sets. In contrast, nets without precautions against overfitting show hopeless out-of-sample performance
almost before the training has started. Also, none of the control experiments (randomized
series and time-reversed series) reaches any significant predictability.
The dynamics of weight-elimination, discussed in Section 2.3, is also shown in Figure 2.
A first grows very slowly. Then, around epoch 230, the error reaches the performance
^5 We thank Blake LeBaron for sending us the data.
^6 The error of the unit predicting the return is expressed as the average relative variance

arv_S = \frac{\sum_{k \in S} (\text{target}_k - \text{prediction}_k)^2}{\sum_{k \in S} (\text{target}_k - \text{mean}_S)^2} = \frac{1}{N_S \hat{\sigma}_S^2} \sum_{k \in S} (r_k - \hat{r}_k)^2    (3)

The averaging (division by N_S, the number of observations in set S) makes the measure independent
of the size of the set. The normalization (division by \hat{\sigma}_S^2, the estimated variance of the data in S)
removes the dependence on the dynamic range of the data. Since the mean of the returns is close to
zero, the random walk hypothesis corresponds to arv = 1.0.
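The arv of Eq. 3 is straightforward to compute (a sketch; the function name is ours):

import numpy as np

def arv(target, prediction):
    """Average relative variance (Eq. 3); arv = 1.0 matches the random-walk baseline."""
    return np.sum((target - prediction)**2) / np.sum((target - target.mean())**2)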
[Figure 2 plots omitted: learning curves for training with weight-elimination (left) and training with added noise (right), over the training set (5/75-12/84, 501 points), early test set (9/73-4/75, 87 points), and late test set (12/84-5/87, 128 points).]
Figure 2: Learning curves of currency exchange rates for training with weight-elimination
(left) and training with added noise (right). In-sample predictions are shown as solid lines,
out-of-sample predictions in grey and dashed. Top: average relative variance of the unit
predicting the return (r-unit). Center: root-mean-square error of the unit predicting the sign
(s-unit). Bottom: Weighting of the complexity term.
criterion.^7 The network starts to focus on the elimination of weights (indicated by growing
\lambda) without further reducing its in-sample errors (solid lines), since that would probably
correspond to overfitting.
We also compare training with weight-elimination with a method intended to make the
parameters more robust. We add noise to the inputs, independently to each input unit,
different at each presentation of each pattern.8 This can be viewed as artificially enlarging
the training set by smearing the data points around their centers. Smoother boundaries of
the "basins of attraction" are the result. Viewed from the description length angle, it means
saving bits by specifying the (input) weights with less precision, as opposed to eliminating
some of them. The corresponding learning curves are shown on the right hand side of
Figure 2. This simple method also successfully avoids overfitting.
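A minimal sketch of the noise-injection scheme, assuming the inputs have already been standardised to unit variance as described above; the function name and batch interface are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_batch(X, noise_level=1.5):
    """Add independent Gaussian noise to each input unit, drawn anew at
    every presentation of every pattern (footnote 8: std = 1.5 times the
    signal, which is unit variance after standardisation)."""
    return X + rng.normal(scale=noise_level, size=X.shape)

# Inside a training loop one would call noisy_batch(X_train) once per epoch,
# so each pattern is smeared around its centre with fresh noise.
```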
7 Guided by cross-validation, we set the criterion (for the sum of the squared errors from both
outputs) to 650. With this value, the choice of the other parameters is not critical, as long as they
are fairly small. We used a learning rate of 2.5 × 10^{-4}, no momentum, and an increment Δλ of
2.5 × 10^{-6}. If the criterion was set to zero, the balance between error and complexity would be
fragile in such a hard problem.
8 We add Gaussian noise with a rather large standard deviation of 1.5 times the signal. The exact
value is not crucial: similar performance is obtained for noise levels between 0.7 and 2.0.
Finally, we analyze the weight-eliminated network solution. The weights from the hidden
units to the outputs are in a region where the complexity term acts as a counter. In fact
only one or two hidden units remain. The weights from the inputs to the dead hidden
units are also eliminated. For time series prediction, weight-elimination acts as hidden-unit
elimination.
The weights between inputs and remaining hidden units are fairly small. Weight-elimination
is in its quadratic region and prevents them from growing too large. Consequently, the
activation of the hidden units lies in ( -0.4,0.4). This prompted us to try a linear net where
our procedure also works surprisingly well, yielding comparable performance to sigmoids.
Since all inputs are scaled to zero mean and unit standard deviation, we can gauge the
importance of different inputs directly by the size of the weights. With weight-elimination,
it becomes fairly clear which quantities are important, since connections that do not manage
to reduce the error are not worth their price. A detailed description will be published in
[WHR91]. Weight-elimination enhances the interpretability of the solution.
To summarize, we have a working procedure that finds small nets and can help prevent
overfitting. With our rules for the dynamics of λ, weight-elimination is fairly stable with
respect to the values of most parameters. In the examples we analyzed, the network manages
to pick out some significant part of the dynamics underlying the time series.
References
[Che90] Peter C. Cheeseman. On finding the most probable model. In J. Shrager and P. Langley (eds.) Computational Models of Scientific Discovery and Theory Formation, p. 73. Morgan Kaufmann, 1990.
[RDGC92] David E. Rumelhart, Richard Durbin, Richard Golden, and Yves Chauvin. Backpropagation: theoretical foundations. In Y. Chauvin and D. E. Rumelhart (eds.) Backpropagation and Connectionist Theory. Lawrence Erlbaum, 1992.
[Ris89] Jorma Rissanen. Stochastic Complexity in Statistical Inquiry. World Scientific, 1989.
[Ton90] Howell Tong. Non-linear Time Series: a Dynamical System Approach. Oxford University Press, 1990.
[Wei91] Andreas S. Weigend. Connectionist Architectures for Time Series Prediction. PhD thesis, Stanford University, 1991. (in preparation)
[WHR90] Andreas S. Weigend, Bernardo A. Huberman, and David E. Rumelhart. Predicting the future: a connectionist approach. International Journal of Neural Systems, 1:193, 1990.
[WHR91] Andreas S. Weigend, Bernardo A. Huberman, and David E. Rumelhart. Predicting sunspots and currency rates with connectionist networks. In M. Casdagli and S. Eubank (eds.) Proceedings of the 1990 NATO Workshop on Nonlinear Modeling and Forecasting (Santa Fe). Addison-Wesley, 1991.
[WRH90] Andreas S. Weigend, David E. Rumelhart, and Bernardo A. Huberman. Backpropagation, weight-elimination and time series prediction. In D. S. Touretzky, J. L. Elman, T. J. Sejnowski, and G. E. Hinton (eds.) Proceedings of the 1990 Connectionist Models Summer School, p. 105. Morgan Kaufmann, 1990.
2,459 | 3,230 | Bundle Methods for Machine Learning
Alexander J. Smola, S.V. N. Vishwanathan, Quoc V. Le
NICTA and Australian National University, Canberra, Australia
[email protected], {SVN.Vishwanathan, Quoc.Le}@nicta.com.au
Abstract
We present a globally convergent method for regularized risk minimization problems. Our method applies to Support Vector estimation, regression, Gaussian
Processes, and any other regularized risk minimization setting which leads to a
convex optimization problem. SVMPerf can be shown to be a special case of
our approach. In addition to the unified framework we present tight convergence
bounds, which show that our algorithm converges in O(1/ε) steps to precision ε
for general convex problems and in O(log(1/ε)) steps for continuously differentiable problems. We demonstrate in experiments the performance of our approach.
1 Introduction
In recent years optimization methods for convex models have seen significant progress. Starting
from the active set methods described by Vapnik [17] increasingly sophisticated algorithms for solving regularized risk minimization problems have been developed. Some of the most exciting recent
developments are SVMPerf [5] and the Pegasos gradient descent solver [12]. The former computes
gradients of the current solution at every step and adds those to the optimization problem. Joachims
[5] prove an O(1/ε²) rate of convergence. For Pegasos Shalev-Shwartz et al. [12] prove an O(1/ε)
rate of convergence, which suggests that Pegasos should be much more suitable for optimization.
In this paper we extend the ideas of SVMPerf to general convex optimization problems and a much
wider class of regularizers. In addition to this, we present a formulation which does not require
the solution of a quadratic program whilst in practice enjoying the same rate of convergence as
algorithms of the SVMPerf family. Our error analysis shows that the rates achieved by this algorithm
are considerably better than what was previously known for SVMPerf, namely the algorithm enjoys
O(1/ε) convergence and O(log(1/ε)) convergence, whenever the loss is sufficiently smooth. An
important feature of our algorithm is that it automatically takes advantage of smoothness in the
problem.
Our work builds on [15], which describes the basic extension of SVMPerf to general convex problems. The current paper provides a) significantly improved performance bounds which match better
what can be observed in practice and which apply to a wide range of regularization terms, b) a variant of the algorithm which does not require quadratic programming, yet enjoys the same fast rates of
convergence, and c) experimental data comparing the speed of our solver to Pegasos and SVMPerf.
Due to space constraints we relegate the proofs to a technical report [13].
2 Problem Setting
Denote by x ∈ X and y ∈ Y patterns and labels respectively and let l(x, y, w) be a loss function
which is convex in w ∈ W, where either W = R^d (linear classifier), or W is a Reproducing Kernel
Hilbert Space for kernel methods. Given a set of m training patterns {(x_i, y_i)}_{i=1}^m the regularized risk
functional which many estimation methods strive to minimize can be written as
J(w) := Remp(w) + λΩ(w)   where   Remp(w) := (1/m) Σ_{i=1}^m l(x_i, y_i, w).    (1)
Ω(w) is a smooth convex regularizer such as ½‖w‖², and λ > 0 is a regularization term. Typically
Ω is cheap to compute and to minimize whereas the empirical risk term Remp(w) is computationally
expensive to deal with. For instance, in the case of intractable graphical models it requires approximate inference methods such as sampling or semidefinite programming. To make matters worse
the number of training observations m may be huge. We assume that the empirical risk Remp (w) is
nonnegative.
If J is differentiable we can use standard quasi-Newton methods
like LBFGS even for large values of m [8]. Unfortunately, it is not
straightforward to extend these algorithms to optimize a non-smooth
objective. In such cases one has to resort to bundle methods [3],
which are based on the following elementary observation: for convex functions a first order Taylor approximation is a lower bound.
So is the maximum over a set of Taylor approximations. Furthermore, the Taylor approximation is exact at the point of expansion.
The idea is to replace Remp[w] by these lower bounds and to optimize the latter in conjunction with Ω(w). Figure 1 gives geometric intuition. In the remainder of the paper we will show that 1) This extends a number of existing algorithms; 2) This method enjoys good rates of convergence; and 3) It works well in practice.

Figure 1: A lower bound on the convex empirical risk Remp(w) obtained by computing three tangents on the entire function.
Note that there is no need for Remp [w] to decompose into individual losses in an additive fashion.
For instance, scores, such as Precision@k [4], or SVM Ranking scores do not satisfy this property.
Likewise, estimation problems which allow for an unregularized common constant offset or adaptive
margin settings using the ν-trick fall into this category. The only difference is that in those cases the
derivative of Remp[w] with respect to w no longer decomposes trivially into a sum of gradients.
3 Bundle Methods
3.1 Subdifferential and Subgradient
Before we describe the bundle method, it is necessary to clarify a key technical point. The subgradient is a generalization of gradients appropriate for convex functions, including those which are not
necessarily smooth. Suppose w is a point where a convex function F is finite. Then a subgradient is the normal vector of any tangential supporting hyperplane of F at w. Formally μ is called a
subgradient of F at w if, and only if,

F(w′) ≥ F(w) + ⟨w′ − w, μ⟩   for all w′.    (2)

The set of all subgradients at a point is called the subdifferential, and is denoted by ∂_w F(w). If
this set is not empty then F is said to be subdifferentiable at w. On the other hand, if this set is a
singleton then the function is said to be differentiable at w.
3.2 The Algorithm
Denote by w_t ∈ W the values of w which are obtained by successive steps of our method. Let
a_t ∈ W, b_t ∈ R, and set w_0 = 0, a_0 = 0, b_0 = 0. Then, the Taylor expansion coefficients of
Remp[w_t] can be written as

a_{t+1} := ∂_w Remp(w_t)   and   b_{t+1} := Remp(w_t) − ⟨a_{t+1}, w_t⟩.    (3)

Note that we do not require Remp to be differentiable: if Remp is not differentiable at w_t we simply
choose any element of the subdifferential as a_{t+1}. Since each Taylor approximation is a lower
bound, we may take their maximum to obtain that Remp(w) ≥ max_t ⟨a_t, w⟩ + b_t. Moreover, by
Algorithm 1 Bundle Method(ε)
Initialize t = 0, w_0 = 0, a_0 = 0, b_0 = 0, and J_0(w) = λΩ(w)
repeat
  Find minimizer w_t := argmin_w J_t(w)
  Compute gradient a_{t+1} and offset b_{t+1}.
  Increment t ← t + 1.
until ε_t ≤ ε
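A compact sketch of Algorithm 1 for Ω(w) = ½‖w‖², using the dual QP (7) derived below. The oracle name `risk` and the use of SciPy's SLSQP for the small simplex-constrained QP are assumptions for illustration, not the paper's implementation:

```python
import numpy as np
from scipy.optimize import minimize

def bundle_method(risk, dim, lam, eps=1e-3, max_iter=50):
    """Minimal sketch of Algorithm 1 with Omega(w) = 0.5 ||w||^2.
    `risk(w)` is a user-supplied oracle returning (Remp(w), a subgradient)."""
    w = np.zeros(dim)
    A, b = [], []
    J_best = np.inf
    for t in range(max_iter):
        r, g = risk(w)
        J_best = min(J_best, 0.5 * lam * w @ w + r)  # best primal value so far
        A.append(np.asarray(g, dtype=float))
        b.append(r - g @ w)          # b_{t+1} = Remp(w_t) - <a_{t+1}, w_t>
        Am, bv = np.array(A), np.array(b)
        Q = Am @ Am.T                # Q_uv = <a_u, a_v> (Corollary 3)
        n = len(bv)
        # dual (7): max -1/(2 lam) a'Qa + a'b  s.t.  a >= 0, ||a||_1 = 1
        res = minimize(lambda a: a @ Q @ a / (2.0 * lam) - a @ bv,
                       np.full(n, 1.0 / n), method="SLSQP",
                       bounds=[(0.0, 1.0)] * n,
                       constraints={"type": "eq", "fun": lambda a: a.sum() - 1.0})
        alpha = res.x
        w = -(Am.T @ alpha) / lam    # dual connection w_t = dOmega*(-A alpha / lam)
        J_t = -res.fun               # value of the lower bound J_t(w_t)
        if J_best - J_t <= eps:      # epsilon_t as in Lemma 1
            break
    return w
```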
virtue of the fact that Remp is a non-negative function we can write the following lower bounds on
Remp and J respectively:
R_t(w) := max_{t′≤t} ⟨a_{t′}, w⟩ + b_{t′}   and   J_t(w) := λΩ(w) + R_t(w).    (4)
By construction R_{t′} ≤ R_t ≤ Remp and J_{t′} ≤ J_t ≤ J for all t′ ≤ t. Define

w* := argmin_w J(w),   w_t := argmin_w J_t(w),
γ_t := J_{t+1}(w_t) − J_t(w_t),   and   ε_t := min_{t′≤t} J_{t′+1}(w_{t′}) − J_t(w_t).

The following lemma establishes some useful properties of γ_t and ε_t.
Lemma 1 We have J_{t′}(w_{t′}) ≤ J_t(w_t) ≤ J(w*) ≤ J(w_t) = J_{t+1}(w_t) for all t′ ≤ t. Furthermore,
ε_t is monotonically decreasing with ε_t − ε_{t+1} ≥ J_{t+1}(w_{t+1}) − J_t(w_t) ≥ 0. Also, ε_t upper bounds
the distance from optimality via γ_t ≥ ε_t ≥ min_{t′≤t} J(w_{t′}) − J(w*).
3.3 Dual Problem
Optimization is often considerably easier in the dual space. In fact, we will show that we need
not know Ω(w) at all; instead it is sufficient to work with its Fenchel-Legendre dual Ω*(μ) :=
sup_w ⟨w, μ⟩ − Ω(w). If Ω* is a so-called Legendre function [e.g. 10] the w at which the supremum
is attained can be written as w = ∂_μ Ω*(μ). In the sequel we will always assume that Ω* is twice
differentiable and Legendre. Examples include Ω*(μ) = ½‖μ‖² or Ω*(μ) = Σ_i exp[μ]_i.
Theorem 2 Let α ∈ R^t, denote by A = [a_1, ..., a_t] the matrix whose columns are the
(sub)gradients, and let b = [b_1, ..., b_t]. The dual problem of

minimize_w  J_t(w) := max_{t′≤t} ⟨a_{t′}, w⟩ + b_{t′} + λΩ(w)    (5)

is

maximize_α  J_t*(α) := −λΩ*(−λ⁻¹Aα) + α⊤b   subject to α ≥ 0 and ‖α‖_1 = 1.    (6)

Furthermore, the optimal w_t and α_t are related by the dual connection w_t = ∂Ω*(−λ⁻¹Aα_t).
Recall that for Ω(w) = ½‖w‖² the Fenchel-Legendre dual is given by Ω*(μ) = ½‖μ‖². This is
commonly used in SVMs and Gaussian Processes. The following corollary is immediate:
Corollary 3 Define Q := A⊤A, i.e. Q_{uv} := ⟨a_u, a_v⟩. For quadratic regularization, i.e.
minimize_w max(0, max_{t′≤t} ⟨a_{t′}, w⟩ + b_{t′}) + (λ/2)‖w‖², the dual becomes

maximize_α  −(1/2λ) α⊤Qα + α⊤b   subject to α ≥ 0 and ‖α‖_1 = 1.    (7)
This means that for quadratic regularization the dual optimization problem is a quadratic program
where the number of variables equals the number of gradients computed previously. Since t is
typically in the order of 10s to 100s, the resulting QP is very cheap to solve. In fact, we don't even
need to know the gradients explicitly. All that is required to define the QP are the inner products
between gradient vectors ⟨a_u, a_v⟩. Later in this section we propose a variant which does away with
the quadratic program altogether while preserving most of the appealing convergence properties of
Algorithm 1.
3.4 Examples
Structured Estimation Many estimation problems [14, 16] can be written in terms of a piecewise
linear loss function

l(x, y, w) = max_{y′∈Y} ⟨φ(x, y′) − φ(x, y), w⟩ + Δ(y, y′)    (8)

for some suitable joint feature map φ, and a loss function Δ(y, y′). It follows from Section 3.1 that
a subdifferential of (8) is given by

∂_w l(x, y, w) = φ(x, y*) − φ(x, y)   where   y* := argmax_{y′∈Y} ⟨φ(x, y′) − φ(x, y), w⟩ + Δ(y, y′).    (9)
Since Remp is defined as a summation of loss terms, this allows us to apply Algorithm 1 directly
for risk minimization: at every iteration t we find all maximal constraint violators for each (xi , yi )
pair and compute the composite gradient vector. This vector is then added to the convex program
we have so far.
Joachims [5] pointed out this idea for the special case of φ(x, y) = yφ(x) and y ∈ {±1}, that is,
binary loss. Effectively, by defining a joint feature map as the sum over individual feature maps and
by defining a joint loss Δ as the sum over individual losses SVMPerf performs exactly the same
operations as we described above. Hence, for losses of type (8) our algorithm is a direct extension
of SVMPerf to structured estimation.
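For concreteness, a sketch of (8)-(9) for the common multiclass instantiation φ(x, y) = e_y ⊗ x with the 0/1 label loss Δ; this particular feature map is an assumption for illustration:

```python
import numpy as np

def multiclass_hinge_subgrad(W, x, y):
    """Loss (8) and subgradient (9) for phi(x, y) = e_y (x) x, i.e. one
    weight row per class, and Delta(y, y') = [y != y'].
    W has shape (num_classes, dim); y is the true class index."""
    scores = W @ x
    margins = scores - scores[y] + (np.arange(len(scores)) != y)  # adds Delta
    y_star = int(np.argmax(margins))      # maximal constraint violator
    loss = float(margins[y_star])         # margins[y] = 0, so loss >= 0
    G = np.zeros_like(W)
    if y_star != y:
        G[y_star] += x                    # phi(x, y*) contribution
        G[y] -= x                         # -phi(x, y) contribution
    return loss, G
```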
Exponential Families One of the advantages of our setting is that it applies to any convex loss
function, as long as there is an efficient way of computing the gradient. That is, we can use it for
cases where we are interested in modeling

p(y|x; w) = exp(⟨φ(x, y), w⟩ − g(w|x))   where   g(w|x) = log ∫_Y exp⟨φ(x, y′), w⟩ dy′    (10)
That is, g(w|x) is the conditional log-partition function. This type of losses includes settings such
as Gaussian Process classification and Conditional Random Fields [1]. Such settings have been
studied by Lee et al. [6] in conjunction with an ℓ_1 regularizer Ω(w) = ‖w‖_1 for structure discovery
in graphical models. Choosing l to be the negative log-likelihood it follows that

Remp(w) = Σ_{i=1}^m g(w|x_i) − ⟨φ(x_i, y_i), w⟩   and   ∂_w Remp(w) = Σ_{i=1}^m E_{y′∼p(y′|x_i;w)}[φ(x_i, y′)] − φ(x_i, y_i).
This means that column generation methods are therefore directly applicable to Gaussian Process
estimation, a problem where large scale solvers were somewhat more difficult to find. It also shows
that adding a new model becomes a matter of defining a new loss function and its corresponding
gradient, rather than having to build a full solver from scratch.
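As an illustration with a finite label set, the following sketch computes Remp and its gradient for multiclass logistic regression, the special case of (10) with φ(x, y) = e_y ⊗ x; in practice one would use a numerically stable log-sum-exp:

```python
import numpy as np

def logistic_remp_and_grad(W, X, Y):
    """Remp and its gradient for a conditional exponential family with a
    finite label set and phi(x, y) = e_y (x) x. W: (classes, dim),
    X: (m, dim), Y: (m,) integer labels."""
    scores = X @ W.T                                # <phi(x, y'), w>
    g = np.log(np.exp(scores).sum(axis=1))          # log-partition g(w|x)
    remp = np.sum(g - scores[np.arange(len(Y)), Y])
    P = np.exp(scores - g[:, None])                 # p(y'|x; w)
    P[np.arange(len(Y)), Y] -= 1.0                  # E[phi] - phi(x_i, y_i)
    grad = P.T @ X                                  # shape (classes, dim)
    return remp, grad
```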
4 Convergence Analysis
While Algorithm 1 is intuitively plausible, it remains to be shown that it has good rates of convergence. In fact, past results, such as those by Tsochantaridis et al. [16], suggest an O(1/ε²) rate,
which would make the application infeasible in practice.
We use a duality argument similar to those put forward in [11, 16], both of which share key techniques with [18]. The crux of our proof argument lies in showing that ε_t − ε_{t+1} ≥ J_{t+1}(w_{t+1}) −
J_t(w_t) (see Theorem 4) is sufficiently bounded away from 0. In other words, since ε_t bounds the
distance from the optimality, at every step Algorithm 1 makes sufficient progress towards the optimum. Towards this end, we first observe that by strong duality the values of the primal and dual
problems (5) and (6) are equal at optimality. Hence, any progress in Jt+1 can be computed in the
dual.
Next, we observe that the solution of the dual problem (6) at iteration t, denoted by α_t, forms a
feasible set of parameters for the dual problem (6) at iteration t + 1 by means of the parameterization
(α_t, 0), i.e. by padding α_t with a 0. The value of the objective function in this case equals J_t(w_t).
To obtain a lower bound on the improvement due to J_{t+1}(w_{t+1}) we perform a line search along
((1 − η)α_t, η) in (6). The constraint η ∈ [0, 1] ensures dual feasibility. We will bound this improvement
in terms of γ_t. Note that, in general, solving the dual problem (6) results in an increase which is
larger than that obtained via the line search. The line search is employed in the analysis only for
analytic tractability. We aim to lower-bound ε_t − ε_{t+1} in terms of ε_t and solve the resultant difference
equation.
Depending on J(w) we will be able to prove two different convergence results.
(a) For regularizers Ω(w) for which ‖∂²_μ Ω*(μ)‖ ≤ H* we first experience a regime of progress
linear in γ_t and a subsequent slowdown to improvements which are quadratic in γ_t.
(b) Under the above conditions, if furthermore ‖∂²_w J(w)‖ ≤ H, i.e. the Hessian of J is
bounded, we have linear convergence throughout.
We first derive lower bounds on the improvement J_{t+1}(w_{t+1}) − J_t(w_t), then show that for (b) the
bounds are better. Finally we prove the convergence rates by solving the difference equation in ε_t.
This reasoning leads to the following theorem:
Theorem 4 Assume that ‖∂_w Remp(w)‖ ≤ G for all w ∈ W, where W is some domain of interest
containing all w_{t′} for t′ ≤ t. Also assume that Ω* has bounded curvature, i.e. let ‖∂²_μ Ω*(μ)‖ ≤ H*
for all μ ∈ {−λ⁻¹Aᾱ : ᾱ ≥ 0 and ‖ᾱ‖_1 ≤ 1}. In this case we have

ε_t − ε_{t+1} ≥ (γ_t/2) min(1, λγ_t/(4G²H*)) ≥ (ε_t/2) min(1, λε_t/(4G²H*)).    (11)

Furthermore, if ‖∂²_w J(w)‖ ≤ H, then we have

ε_t − ε_{t+1} ≥ { γ_t/2           if γ_t > 4G²H*/λ
               { λ/(8H*)         if 4G²H*/λ ≥ γ_t ≥ H/2
               { λγ_t/(4HH*)     otherwise    (12)
Note that the error keeps on halving initially and settles for a somewhat slower rate of convergence
after that, whenever the Hessian of the overall risk is bounded from above. The reason for the
difference in the convergence bound for differentiable and non-differentiable losses is that in the
former case the gradient of the risk converges to 0 as we approach optimality, whereas in the latter
case, no such guarantees hold (e.g. when minimizing |x| the (sub)gradient does not vanish at the
optimum).
Two facts are worth noting: a) The dual of many regularizers, e.g. the squared norm, squared ℓ_p
norm, and the entropic regularizer, have bounded second derivative. See e.g. [11] for a discussion
and details. Thus our condition ‖∂²_μ Ω*(μ)‖ ≤ H* is not unreasonable. b) Since the improvements
decrease with the size of γ_t we may replace γ_t by ε_t in both bounds and conditions without any ill
effect (the bound only gets worse). Applying the previous result we obtain a convergence theorem
for bundle methods.
Theorem 5 Assume that J(w) ≥ 0 for all w. Under the assumptions of Theorem 4 we can give the
following convergence guarantee for Algorithm 1. For any ε < 4G²H*/λ the algorithm converges
to the desired precision after

n ≤ log_2(λJ(0)/(G²H*)) + 8G²H*/(λε) − 4    (13)

steps. If furthermore the Hessian of J(w) is bounded, convergence to any ε ≤ H/2 takes at most
the following number of steps:

n ≤ log_2(λJ(0)/(4G²H*)) + (4H*/λ) max(0, H − 8G²H*/λ) + (4HH*/λ) log(H/(2ε))    (14)
Several observations are in order: firstly, note that the number of iterations only depends logarithmically on how far the initial value J(0) is away from the optimal solution. Compare this to the result
of Tsochantaridis et al. [16], where the number of iterations is linear in J(0).
Secondly, we have an O(1/ε) dependence in the number of iterations in the non-differentiable
case. This matches the rate of Shalev-Shwartz et al. [12]. In addition to that, the convergence is
O(log(1/ε)) for continuously differentiable problems.
Note that whenever Remp (w) is the average over many piecewise linear functions Remp (w) behaves
essentially like a function with bounded Hessian as long as we are taking large enough steps not to
"notice" the fact that the term is actually nonsmooth.
Remark 6 For Ω(w) = ½‖w‖² the dual Hessian is exactly H* = 1. Moreover we know that
H ≥ λ since ∂²_w J(w) = λ + ∂²_w Remp(w).
Effectively the rate of convergence of the algorithm is governed by upper bounds on the primal and
dual curvature of the objective function. This acts like a condition number of the problem: for
Ω(w) = ½w⊤Qw the dual is Ω*(z) = ½z⊤Q⁻¹z, hence the largest eigenvalues of Q and Q⁻¹
would have a significant influence on the convergence.
In terms of λ the number of iterations needed for convergence is O(λ⁻¹). In practice the iteration
count does increase as λ decreases, albeit not as badly as predicted. This is likely due to the fact that the
empirical risk Remp(w) is typically rather smooth and has a certain inherent curvature which acts as
a natural regularizer in addition to the regularization afforded by λΩ[w].
5 A Linesearch Variant
The convergence analysis in Theorem 4 relied on a one-dimensional line search. Algorithm 1,
however, uses a more complex quadratic program to solve the problem. Since even the simple
updates promise good rates of convergence it is tempting to replace the corresponding step in the
bundle update. This can lead to considerable savings in particular for smaller problems, where the
time spent in the quadratic programming solver is a substantial fraction of the total runtime.
To keep matters simple, we only consider quadratic regularization Ω(w) := ½‖w‖². Note that
J_{t+1}(η) := J*_{t+1}((1 − η)α_t, η) is a quadratic function in η, regardless of the choice of Remp[w].
Hence a line search only needs to determine first and second derivative as done in the proof
of Theorem 4. It can be shown that ∂_η J_{t+1}(0) = γ_t and ∂²_η J_{t+1}(0) = −λ⁻¹‖∂_w J(w_t)‖² =
−λ⁻¹‖λw_t + a_{t+1}‖². Hence the optimal value of η is given by

η = min(1, λγ_t / ‖λw_t + a_{t+1}‖²_2).    (15)
This means that we may update w_{t+1} = (1 − η)w_t − (η/λ)a_{t+1}. In other words, we need not store past
gradients for the update. To obtain γ_t note that we are computing Remp(w_t) as part of the Taylor
approximation step. Finally, R_t(w_t) is given by (w⊤A + b⊤)α_t, hence it satisfies the same update
relations. In particular, the fact that w⊤Aα = −λ‖w‖² means that the only quantity we need to cache
is b⊤α_t as an auxiliary variable r_t in order to compute γ_t efficiently. Experiments show that this
simplified algorithm has essentially the same convergence properties.
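A sketch of one update of this line-search variant; the oracle name `risk` and the bookkeeping interface are assumptions, and the caching of r_t = b⊤α_t follows the derivation above:

```python
import numpy as np

def linesearch_step(w, r_t, risk, lam):
    """One update of the line-search variant (Section 5) for the linear case.
    `risk(w)` returns (Remp(w), a subgradient); r_t caches b' alpha_t."""
    remp, a_next = risk(w)
    b_next = remp - a_next @ w
    grad = lam * w + a_next                 # dJ/dw at w_t
    grad_norm2 = grad @ grad
    # gamma_t = J_{t+1}(w_t) - J_t(w_t) = Remp(w_t) - R_t(w_t), and
    # R_t(w_t) = w'A alpha_t + b' alpha_t = -lam ||w||^2 + r_t
    gamma = remp - (-lam * (w @ w) + r_t)
    eta = min(1.0, lam * gamma / grad_norm2) if grad_norm2 > 0 else 1.0  # (15)
    w_new = (1.0 - eta) * w - (eta / lam) * a_next
    r_new = (1.0 - eta) * r_t + eta * b_next  # b' alpha for ((1-eta)alpha, eta)
    return w_new, r_new, gamma
```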
6 Experiments
In this section we show experimental results that demonstrate the merits of our algorithm and its
analysis. Due to space constraints, we report results of experiments with two large datasets namely
Astro-Physics (astro-ph) and Reuters-CCAT (reuters-ccat) [5, 12]. For a fair comparison with existing solvers we use the quadratic regularizer λΩ(w) := (λ/2)‖w‖², and the binary hinge loss.
In our first experiment, we address the rate of convergence and its dependence on the value of λ. In
Figure 2 we plot ε_t as a function of iterations for various values of λ using the QP solver at every
iteration to solve the dual problem (6) to optimality. Initially, we observe super-linear convergence;
this is consistent with our analysis. Surprisingly, even though theory predicts sub-linear speed of
convergence for non-differentiable losses like the binary hinge loss (see (11)), our solver exhibits
linear rates of convergence predicted only for differentiable functions (see (12)). We conjecture
that the average over many piecewise linear functions, Remp (w), behaves essentially like a smooth
function. As predicted, the convergence speed is inversely proportional to the value of λ.
Figure 2: We plot ε_t as a function of the number of iterations. Note the logarithmic scale in ε_t. Left:
astro-ph; Right: reuters-ccat.
Figure 3: Top: Objective function value as a function of time. Bottom: Objective function value as
a function of iterations. Left: astro-ph, Right: reuters-ccat. The black line indicates the final value
of the objective function + 0.001 .
In our second experiment, we compare the convergence speed of two variants of the bundle method,
namely, with a QP solver in the inner loop (which essentially boils down to SVMPerf) and the line
search variant which we described in Section 5. We contrast these solvers with Pegasos [12] in the
batch setting. Following [5] we set λ = 10⁻⁴ for reuters-ccat and λ = 2·10⁻⁴ for astro-ph.
Figure 3 depicts the evolution of the primal objective function value as a function of both CPU time
as well as the number of iterations. Following Shalev-Shwartz et al. [12] we investigate the time
required by various solvers to reduce the objective value to within 0.001 of the optimum. This is
depicted as a black horizontal line in our plots. As can be seen, Pegasos converges to this region
quickly. Nevertheless, both variants of the bundle method converge to this value even faster (line
search is slightly slower than Pegasos on astro-ph, but this is not always the case for many other large
datasets we tested on). Note that both line search and Pegasos converge to within 0.001 precision
rather quickly, but they require a large number of iterations to converge to the optimum.
7 Related Research
Our work is closely related to Shalev-Shwartz and Singer [11] who prove mistake bounds for online
algorithms by lower bounding the progress in the dual. Although not stated explicitly, essentially
the same technique of lower bounding the dual improvement was used by Tsochantaridis et al. [16]
to show polynomial time convergence of the SVMStruct algorithm. The main difference however
is that Tsochantaridis et al. [16] only work with a quadratic objective function while the framework
proposed by Shalev-Shwartz and Singer [11] can handle arbitrary convex functions. In both cases,
a weaker analysis led to O(1/ε²) rates of convergence for nonsmooth loss functions. On the other
hand, our results establish an O(1/ε) rate for nonsmooth loss functions and O(log(1/ε)) rates for
smooth loss functions under mild technical assumptions.
Another related work is SVMPerf [5] which solves the SVM estimation problem in linear time.
SVMPerf finds a solution with accuracy ε in O(md/(λε²)) time, where the m training patterns
x_i ∈ R^d. This bound was improved by Shalev-Shwartz et al. [12] to Õ(1/(λδε)) for obtaining an
accuracy of ε with confidence 1 − δ. Their algorithm, Pegasos, essentially performs stochastic
(sub)gradient descent but projects the solution back onto the L2 ball of radius 1/√λ. But, as our
experiments show, performing an exact line search in the dual leads to a faster decrease in the value
of primal objective. Note that Pegasos also can be used in an online setting. This, however, only
applies whenever the empirical risk decomposes into individual loss terms (e.g. it is not applicable
to multivariate performance scores).
The third related strand of research considers gradient descent in the primal with a line search to
choose the optimal step size, see e.g. [2, Section 9.3.1]. Under assumptions of smoothness and
strong convexity (that is, the objective function can be upper and lower bounded by quadratic functions) it can be shown that gradient descent with line search will converge to an accuracy of ε in
O(log(1/ε)) steps. The problem here is the line search in the primal, since evaluating the regularized risk functional might be as expensive as computing its gradient, thus rendering a line search in
the primal unattractive. On the other hand, the dual objective is relatively simple to evaluate, thus
making the line search in the dual, as performed by our algorithm, computationally feasible.
Finally, we would like to point out connections to subgradient methods [7]. These algorithms are
designed for nonsmooth functions, and essentially choose an arbitrary element of the subgradient set
to perform a gradient descent like update. Let ‖∂_w J(w)‖ ≤ G, and B(w*, r) denote a ball of radius
r centered around the minimizer of J(w). By applying the analysis of Nedich and Bertsekas [7] to
the regularized risk minimization problem with λΩ(w) := (λ/2)‖w‖², Ratliff et al. [9] showed that subgradient descent with a fixed, but sufficiently small, stepsize will converge linearly to B(w*, G/λ).
References
[1] Y. Altun, A. J. Smola, and T. Hofmann. Exponential families for conditional random fields. In UAI, pages 2–9, 2004.
[2] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[3] J. Hiriart-Urruty and C. Lemaréchal. Convex Analysis and Minimization Algorithms. 1993.
[4] T. Joachims. A support vector method for multivariate performance measures. In ICML, pages 377–384, 2005.
[5] T. Joachims. Training linear SVMs in linear time. In KDD, 2006.
[6] S.-I. Lee, V. Ganapathi, and D. Koller. Efficient structure learning of Markov networks using L1-regularization. In NIPS, pages 817–824, 2007.
[7] A. Nedich and D. P. Bertsekas. Convergence rate of incremental subgradient algorithms. In Stochastic Optimization: Algorithms and Applications, pages 263–304. 2000.
[8] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, 1999.
[9] N. Ratliff, J. Bagnell, and M. Zinkevich. (Online) subgradient methods for structured prediction. In Proc. of AIStats, 2007.
[10] R. T. Rockafellar. Convex Analysis. Princeton University Press, 1970.
[11] S. Shalev-Shwartz and Y. Singer. Online learning meets optimization in the dual. In COLT, 2006.
[12] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal estimated sub-gradient solver for SVM. In ICML, 2007.
[13] A. J. Smola, S. V. N. Vishwanathan, and Q. V. Le. Bundle methods for machine learning. JMLR, 2008. In preparation.
[14] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In NIPS, pages 25–32, 2004.
[15] C. H. Teo, Q. Le, A. Smola, and S. V. N. Vishwanathan. A scalable modular convex solver for regularized risk minimization. In KDD, 2007.
[16] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. JMLR, 6:1453–1484, 2005.
[17] V. Vapnik. The Nature of Statistical Learning Theory. Springer, 1995.
[18] T. Zhang. Sequential greedy approximation for certain convex optimization problems. IEEE Trans. Information Theory, 49(3):682–691, 2003.
2,460 | 3,231 | An Analysis of Inference with the Universum
Fabian H. Sinz
Max Planck Institute for biological Cybernetics
Spemannstrasse 41, 72076, Tübingen, Germany
[email protected]
Alekh Agarwal
University of California Berkeley
387 Soda Hall Berkeley, CA 94720-1776
[email protected]
Olivier Chapelle
Yahoo! Research
Santa Clara, California
[email protected]
Bernhard Schölkopf
Max Planck Institute for biological Cybernetics
Spemannstrasse 38, 72076, Tübingen, Germany
[email protected]
Abstract
We study a pattern classification algorithm which has recently been proposed by
Vapnik and coworkers. It builds on a new inductive principle which assumes that
in addition to positive and negative data, a third class of data is available, termed
the Universum. We assay the behavior of the algorithm by establishing links with
Fisher discriminant analysis and oriented PCA, as well as with an SVM in a projected subspace (or, equivalently, with a data-dependent reduced kernel). We also
provide experimental results.
1 Introduction
Learning algorithms need to make assumptions about the problem domain in order to generalise
well. These assumptions are usually encoded in the regulariser or the prior. A generic learning algorithm usually makes rather weak assumptions about the regularities underlying the data. An example
of this is smoothness. More elaborate prior knowledge, often needed for a good performance, can
be hard to encode in a regulariser or a prior that is computationally efficient too.
Interesting hybrids between both extremes are regularisers that depend on an additional set of data
available to the learning algorithm. A prominent example of data-dependent regularisation is semi-supervised learning [1], where an additional set of unlabelled data, assumed to follow the same
distribution as the training inputs, is tied to the regulariser using the so-called cluster assumption.
A novel form of data-dependent regularisation was recently proposed by [11]. The additional dataset
for this approach is explicitly not from the same distribution as the labelled data, but represents a
third, 'neither', class. This kind of dataset was first proposed by Vapnik [10] under the name
Universum, owing its name to the intuition that the Universum captures a general backdrop against
which a problem at hand is solved. According to Vapnik, a suitable set for this purpose can be
thought of as a set of examples that belong to the same problem framework, but about which the
resulting decision function should not make a strong statement.
Although initially proposed for transductive inference, the authors of [11] proposed an inductive
classifier where the decision surface is chosen such that the Universum examples are located close
to it. Implementing this idea into an SVM, different choices of Universa proved to be helpful in
various classification tasks. Although the authors showed that different choices of Universa and loss
functions lead to certain known regularisers as special cases of their implementation, there are still
a few unanswered questions. On the one hand it is not clear whether the good performance of their
algorithm is due to the underlying original idea, or just a consequence of the employed algorithmic
relaxation. On the other hand, except in special cases, the influence of the Universum data on the
resulting decision hyperplane and therefore criteria for a good choice of a Universum is not known.
In the present paper we would like to address the second question by analysing the influence of the
Universum data on the resulting function in the implementation of [11] as well as in a least squares
version of it which we derive in section 2. Clarifying the regularising influence of the Universum on
the solution of the SVM can give valuable insight into which set of data points might be a helpful
Universum and how to obtain it.
The paper is structured as follows. After briefly deriving the algorithms in section 2 we show
in section 3 that the algorithm of [11] pushes the normal of the hyperplane into the orthogonal
complement of the subspace spanned by the principal directions of the Universum set. Furthermore,
we demonstrate that the least squares version of the Universum algorithm is equivalent to a hybrid
between kernel Fisher Discriminant Analysis and kernel Oriented Principal Component Analysis. In
section 4, we validate our analysis on toy experiments and give an example how to use the geometric
and algorithmic intuition gained from the analysis to construct a Universum set for a real world
problem.
2 The Universum Algorithms
2.1 The Hinge Loss U-SVM
We start with a brief review of the implementation proposed in [11]. Let L =
{(x_1, y_1), ..., (x_m, y_m)} be the set of labelled examples and let U = {z_1, ..., z_q} denote the set
of Universum examples. Using the hinge loss H_a[t] = max{0, a − t} and f_{w,b}(x) = ⟨w, x⟩ + b, a
standard SVM can compactly be formulated as

min_{w,b}  ½‖w‖² + C_L Σ_{i=1}^m H_1[y_i f_{w,b}(x_i)].
In the implementation of [11] the goal of bringing the Universum examples close to the separating
hyperplane is realised by also minimising the cumulative ε-insensitive loss I_ε[t] = max{0, |t| − ε}
on the Universum points

min_{w,b}  ½‖w‖² + C_L Σ_{i=1}^m H_1[y_i f_{w,b}(x_i)] + C_U Σ_{j=1}^q I_ε[|f_{w,b}(z_j)|].    (1)

Noting that I_ε[t] = H_{−ε}[t] + H_{−ε}[−t], one can use the simple trick of adding the Universum
examples twice with opposite labels and obtain an SVM like formulation which can be solved with
a standard SVM optimiser.
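The relabeling trick itself needs a solver that accepts per-example margins, which standard SVM packages do not expose directly; as an alternative illustration, the following plain-NumPy sketch minimises objective (1) directly by subgradient descent in the linear case (all parameter names are illustrative):

```python
import numpy as np

def u_svm_subgrad(X, y, Z, C_L=1.0, C_U=1.0, eps=0.0,
                  lr=1e-3, n_iter=2000, seed=0):
    """Subgradient descent on (1): hinge loss on labelled (X, y),
    eps-insensitive loss on Universum points Z. Linear case only."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=1e-3, size=X.shape[1])
    b = 0.0
    y = np.asarray(y, dtype=float)
    for _ in range(n_iter):
        f_l = X @ w + b
        f_u = Z @ w + b
        g_w, g_b = w.copy(), 0.0                   # from 0.5 ||w||^2
        viol = y * f_l < 1                         # active hinge terms
        g_w -= C_L * (y[viol][:, None] * X[viol]).sum(axis=0)
        g_b -= C_L * y[viol].sum()
        out = np.abs(f_u) > eps                    # active eps-insensitive terms
        s = np.sign(f_u[out])
        g_w += C_U * (s[:, None] * Z[out]).sum(axis=0)
        g_b += C_U * s.sum()
        w -= lr * g_w
        b -= lr * g_b
    return w, b
```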
2.2 The Least Squares U-SVM
The derivation of the least squares U-SVM starts with the same general regularised error minimisation problem

min_{w,b}  ½‖w‖² + (C_L/2) Σ_{i=1}^m Q_{y_i}[f_{w,b}(x_i)] + (C_U/2) Σ_{j=1}^q Q_0[f_{w,b}(z_j)].    (2)

Instead of using the hinge loss, we employ the quadratic loss Q_a[t] = ‖t − a‖²_2 which is used in the
least squares versions of SVMs [9]. Expanding (2) in terms of slack variables ξ and ζ yields
min_{w,b}  ½‖w‖² + (C_L/2) Σ_{i=1}^m ξ_i² + (C_U/2) Σ_{j=1}^q ζ_j²    (3)
s.t.  ⟨w, x_i⟩ + b = y_i − ξ_i  for i = 1, ..., m
      ⟨w, z_j⟩ + b = 0 − ζ_j  for j = 1, ..., q.

Minimising the Lagrangian of (3) with respect to the primal variables w, b, ξ and ζ, and substituting
their optimal values back into (3) yields a dual maximisation problem in terms of the Lagrange
multipliers α. Since this dual problem is still convex, we can set its derivative to zero and thereby
multipliers ?. Since this dual problem is still convex, we can set its derivative to zero and thereby
obtain the following linear system
0
b
0
1>
y
=
,
?
1
K+C
0
Here, K =
U, and C =
KL,L
K>
L,U
1
CL
I
0
KL,U
KU,U
denotes the kernel matrix between the input points in the sets L and
0
an identity matrix of appropriate size scaled with
1
CU
I
associated with labelled examples and
1
CU
1
CL
in dimensions
for dimensions corresponding to Universum examples.
The solution (?, b) can then be obtained by a simple matrix inversion. In the remaining part of this
paper we denote the least squares SVM by Uls -SVM.
2.3 Related Ideas
Although [11] proposed the first algorithm that explicitly refers to Vapnik's Universum idea, there
exist related approaches that we shall mention briefly. The authors of [12] describe an algorithm
for the one-vs-one strategy in multiclass learning that additionally minimises the distance of the
separating hyperplane to the examples that are in neither of the classes. Although this is algorithmically equivalent to the U-SVM formulation above, their motivation is merely to sharpen the contrast
between the different binary classifiers. In particular, they do not consider using a Universum for
binary classification problems.
There are also two Bayesian algorithms that refer to non-examples or neither class in the binary
classification setting. [8] gives a probabilistic interpretation for a standard hinge loss SVM by establishing the connection between the MAP estimate of a Gaussian process with a Gaussian prior using
a covariance function k and a hinge loss based noise model. In order to deal with the problem that
the proposed likelihood does not integrate to one, the author introduces a third, the 'neither', class.
A similar idea is used by [4], introducing a third class to tackle the problem that unlabelled examples
used in semi-supervised learning do not contribute to discriminative models PY|X (yi |xi ) since the
parameters of the label distribution are independent of input points with unknown, i.e., marginalised
value of the label. To circumvent this problem, the authors of [4] introduce an additional 'neither'
class to introduce a stochastic dependence between the parameter and the unobserved label in the
discriminative model. However, neither of the Bayesian approaches actually assigns an observed
example to the introduced third class.
3 Analysis of the Algorithm
The following two sections analyse the geometrical relation of the decision hyperplane learnt with
one of the Universum SVMs to the Universum set. It will turn out that in both cases the optimal
solutions tend to make the normal vector orthogonal to the principal directions of the Universum.
The extreme case where w is completely orthogonal to U, makes the decision function defined by w
invariant to transformations that act on the subspace spanned by the elements of U. Therefore, the
Universum should contain directions the resulting function should be invariant against.
In order to increase the readability we state all results for the linear case. However, our results
generalise to the case where the xi and zj live in an RKHS spanned by some kernel.
3.1 U-SVM and Projection Kernel
For this section we start by considering a U-SVM with hard margin on the elements of U. Furthermore, we use ε = 0 for the ε-insensitive loss. After showing the equivalence to using a standard
SVM trained on the orthogonal complement of the subspace spanned by the z_j, we extend the result
to the cases with soft margin on U.
Lemma A U-SVM with C_U = ∞, ε = 0 is equivalent to training a standard SVM with the training
points projected onto the orthogonal complement of span{z_j − z_0 : z_j ∈ U}, where z_0 is an arbitrary
element of U.
Proof: Since C_U = ∞ and ε = 0, any w yielding a finite value of (1) must fulfil ⟨w, z_j⟩ + b = 0 for
all j = 1, ..., q. So ⟨w, z_j − z_0⟩ = 0 and w is orthogonal to span{z_j − z_0 : z_j ∈ U}. Let P_{U⊥} denote
the projection operator onto the orthogonal complement of that set. From the previous argument, we
can replace ⟨w, x_i⟩ by ⟨P_{U⊥} w, x_i⟩ in the solution of (1) without changing it. Indeed, the optimal
w in (1) will satisfy w = P_{U⊥} w. Since P_{U⊥} is an orthogonal projection we have that P_{U⊥} = P_{U⊥}⊤
and hence ⟨P_{U⊥} w, x_i⟩ = ⟨w, P_{U⊥}⊤ x_i⟩ = ⟨w, P_{U⊥} x_i⟩. Therefore, the optimisation problem in (1) is
the same as a standard SVM where the x_i have been replaced by P_{U⊥} x_i.
The special case the lemma refers to, clarifies the role of the Universum in the U-SVM. Since the
resulting w is orthogonal to an affine space spanned by the Universum points, it is invariant against
features implicitly specified by directions of large variance in that affine space. Picturing the ⟨·, z_j⟩
as filters that extract certain features from given labelled or test examples x, using the Universum
algorithms means suppressing the features specified by the zj .
Finally, we generalise the result of the lemma by dropping the hard constraint assumption on the
Universum examples, i.e. we consider the case C_U < ∞. Let w* and b* be the optimal solution of (1).
We have that
C_U Σ_{j=1}^q |⟨w*, z_j⟩ + b*|  ≥  C_U min_b Σ_{j=1}^q |⟨w*, z_j⟩ + b|.
The right hand side can be interpreted as an "L1 variance". So the algorithm tries to find a direction
w? such that the variance of the projection of the Universum points on that direction is small. As
CU approaches infinity this variance approaches 0 and we recover the result of the above lemma.
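In the linear case the projection P_{U⊥} from the lemma can be computed directly; a sketch (the tolerance and names are illustrative):

```python
import numpy as np

def project_out_universum(X, Z):
    """Project the rows of X onto the orthogonal complement of
    span{z_j - z_0}, as in the lemma of Section 3.1 (linear case)."""
    D = Z[1:] - Z[0]                       # difference vectors z_j - z_0
    # Orthonormal basis of the span via a thin SVD:
    U, s, _ = np.linalg.svd(D.T, full_matrices=False)
    B = U[:, s > 1e-10]                    # basis columns of span{z_j - z_0}
    return X - (X @ B) @ B.T               # P_{U_perp} x = x - B B' x
```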
3.2 U_ls-SVM, Fisher Discriminant Analysis and Principal Component Analysis
In this section we present the relation of the U_ls-SVM to two classic learning algorithms: (kernel)
oriented Principal Component Analysis (koPCA) and (kernel) Fisher discriminant analysis (kFDA)
[5]. As it will turn out, the U_ls-SVM is equivalent to a hybrid between both up to a linear equality
constraint. Since koPCA and kFDA can both be written as maximisation of a Rayleigh Quotient we
start with the Rayleigh quotient of the hybrid

    max_w  [ w^T (c^+ − c^−)(c^+ − c^−)^T w ]
           / [ w^T ( C_L Σ_{k=±} Σ_{i∈I^k} (x_i − c^k)(x_i − c^k)^T + C_U Σ_{j=1}^q (z_j − ĉ)(z_j − ĉ)^T ) w ],

where the numerator is the kFDA term and the two sums in the denominator are the kFDA and koPCA terms, respectively. Here, c^± denote the class means of the labelled examples and ĉ = ½(c^+ + c^−) is the point between them. As indicated in the equation, the numerator is exactly the same as in kFDA, i.e. the inter-class variance, while the denominator is a linear combination of the denominators from kFDA and koPCA, i.e. the inner class variances from kFDA and the noise variance from koPCA.
As noted in [6] the numerator is just a rank one matrix. For optimising the quotient it can be fixed
to an arbitrary value while the denominator is minimised. Since the denominator might not have
full rank it needs to be regularised [6]. Choosing the regulariser to be ||w||2 , the problem can be
rephrased as

    min_w  ||w||^2 + w^T ( C_L Σ_{k=±} Σ_{i∈I^k} (x_i − c^k)(x_i − c^k)^T + C_U Σ_{j=1}^q (z_j − ĉ)(z_j − ĉ)^T ) w    (4)
    s.t.   w^T (c^+ − c^−) = 2.
As we will see below, this problem can further be transformed into a quadratic program

    min_{w,b}  ||w||^2 + C_L ||ξ||^2 + C_U ||ζ||^2    (5)
    s.t.  ⟨w, x_i⟩ + b = y_i + ξ_i   for all i = 1, ..., m,
          ⟨w, z_j⟩ + b = ζ_j         for all j = 1, ..., q,
          ξ^T 1^k = 0                for k = ±.

Ignoring the constraint ξ^T 1^k = 0, this program is equivalent to the quadratic program (3) of the Uls-SVM. The following lemma establishes the relation of the Uls-SVM to kFDA and koPCA.
Lemma For given C_L and C_U the optimisation problems (4) and (5) are equivalent.
Proof: Let w, b, ξ and ζ be the optimal solution of (5). Combining the first and last constraint, we get w^T c^± + b ∓ 1 = 0. This gives us w^T (c^+ − c^−) = 2 as well as b = −w^T ĉ. Plugging ξ and ζ in (5) and using this value of b, we obtain the objective function (4). So we have proved that the minimum value of (4) is not larger than the one of (5).

Conversely, let w be the optimal solution of (4). Let us choose b = −w^T ĉ, ξ_i = w^T x_i + b − y_i and ζ_j = w^T z_j + b. Again both objective functions are equal. We just have to check that Σ_{i: y_i=±1} ξ_i = 0. But because w^T (c^+ − c^−) = 2, we have

    (1/m^±) Σ_{i: y_i=±1} ξ_i = w^T c^± + b ∓ 1 = w^T c^± − w^T (c^+ + c^−)/2 ∓ 1 = ± w^T (c^+ − c^−)/2 ∓ 1 = 0.
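Because (4) minimises a quadratic form under a single linear constraint, it can be solved in closed form with one Lagrange multiplier: w ∝ (I + S)^{-1}(c^+ − c^−), rescaled to satisfy the constraint, with b = −w^T ĉ as in the proof above. A sketch (numpy; function and variable names are ours, not the authors'):

```python
import numpy as np

def uls_svm_problem4(X, y, Z, CL=1.0, CU=1.0):
    """Closed-form solution of (4):
       min_w ||w||^2 + w^T S w  s.t.  w^T (c+ - c-) = 2."""
    cp, cm = X[y == 1].mean(0), X[y == -1].mean(0)
    c_hat, d = 0.5 * (cp + cm), cp - cm
    S = np.zeros((X.shape[1],) * 2)
    for ck, Xk in ((cp, X[y == 1]), (cm, X[y == -1])):
        A = Xk - ck
        S += CL * A.T @ A                   # within-class scatter
    B = Z - c_hat
    S += CU * B.T @ B                       # Universum scatter about c_hat
    v = np.linalg.solve(np.eye(len(d)) + S, d)
    w = (2.0 / (d @ v)) * v                 # rescale so w^T d = 2
    return w, -w @ c_hat                    # b = -w^T c_hat (from the proof)
```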
The above lemma establishes a relation of the Uls -SVM to two classic learning algorithms. This
further clarifies the role of the Universum set in the algorithmic implementation of Vapnik's idea
as proposed by [11]. Since the noise covariance matrix of koPCA is given by the covariance of the
Universum points centered on the average of the labelled class means, the role of the Universum as
a data-dependent specification of principal directions of invariance is affirmed.
The koPCA term also shows that both the position and covariance structure are crucial to a good Universum. To see this, we rewrite

    Σ_{j=1}^q (z_j − ĉ)(z_j − ĉ)^T = Σ_{j=1}^q (z_j − z̄)(z_j − z̄)^T + q (z̄ − ĉ)(z̄ − ĉ)^T,

where z̄ = (1/q) Σ_{j=1}^q z_j is the Universum mean. The additive relationship between the covariance of the Universum about its mean and the distance between the Universum and training sample means, projected onto w, shows that either quantity can dominate depending on the data at hand.
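The decomposition is elementary (the cross terms vanish because Σ_j (z_j − z̄) = 0) and is easy to verify numerically. A small check with random data (numpy; illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
Z = rng.normal(size=(50, 8))                 # Universum points z_j
c_hat = rng.normal(size=8)                   # stand-in for (c+ + c-)/2
z_bar, q = Z.mean(axis=0), len(Z)

lhs = (Z - c_hat).T @ (Z - c_hat)
rhs = (Z - z_bar).T @ (Z - z_bar) + q * np.outer(z_bar - c_hat, z_bar - c_hat)
assert np.allclose(lhs, rhs)                 # cross terms vanish
```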
In the next section, we demonstrate the theoretical results of this section on toy problems and give an example of how to use the insight gained from this section to construct an appropriate Universum.
4 Experiments

4.1 Toy Experiments
The theoretical results of section 3 show that the covariance structure of the Universum as well as its absolute position influence the result of the learning process. To validate this insight on toy data, we sample ten labelled sets of size 20, 50, 100 and 500 from two fifty-dimensional Gaussians. Both Gaussians have a diagonal covariance that has low standard deviation (σ_{1,2} = 0.08) in the first two dimensions and high standard deviation (σ_{3,...,50} = 10) in the remaining 48. The two Gaussians are displaced such that the mean of μ_i^± = ±0.3 exceeds the standard deviation by a factor of 3.75 in the first two dimensions but was 125 times smaller in the remaining ones. The values are chosen such that the Bayes risk is approx. 5%. Note that by construction the first two dimensions are most discriminative.

We construct two kinds of Universa for this toy problem. For the first kind we use a mean zero Gaussian with the same covariance structure as the Gaussians for the labelled data (σ_{3,...,50} = 10), but with varied degree of anisotropy in the first two dimensions (σ_{1,2} = 0.1, 1.0, 10). According to the results of section 3 the Universa should be more helpful for larger anisotropy. For the second kind of Universa we use the same covariance as the labelled classes but shift them along the line between the means of the labelled Gaussians. This kind of Universa should have a positive effect on the accuracy for small displacements, but that effect should vanish with increasing amount of translation.
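For reference, a sketch of this data-generating process (numpy). The paper does not give sampling code, so details such as seeds and the exact handling of the shift are our assumptions:

```python
import numpy as np

def sample_toy(m, d=50, mu=0.3, rng=None):
    """Two d-dim Gaussians: std 0.08 in dims 0-1, std 10 elsewhere,
    class means at +/- mu (so c+ - c- = 2*mu*1)."""
    if rng is None:
        rng = np.random.default_rng(0)
    std = np.full(d, 10.0); std[:2] = 0.08
    y = rng.choice([-1, 1], size=m)
    return y[:, None] * mu + rng.normal(size=(m, d)) * std, y

def sample_universum(q, d=50, aniso=0.1, t=0.0, mu=0.3, rng=None):
    """Kind 1: vary the anisotropy `aniso` in dims 0-1 (0.1, 1.0, 10).
    Kind 2: use aniso=0.08 and shift by 2t(c+ - c-) along the
    inter-class direction."""
    if rng is None:
        rng = np.random.default_rng(1)
    std = np.full(d, 10.0); std[:2] = aniso
    Z = rng.normal(size=(q, d)) * std
    return Z + 2.0 * t * (2.0 * mu) * np.ones(d)
```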
Figure 1 shows the performance of linear U-SVMs for different amounts of training and Universum data. In the top row, the degree of isotropy increases from left to right, where σ = 10 refers to the complete isotropic case. In the bottom row, the amount of translation increases from left to right. As expected, performance converges to the performance of an SVM for high isotropy σ and large translations t. Note that large translations do not affect the accuracy as much as a high isotropy. However, this might be due to the fact that the variance along the principal components of the Universum is much larger in magnitude than the applied shift. We obtained similar results for the Uls-SVM. Also, the effect remains when employing an RBF kernel.
[Figure 1 omitted: six panels of learning curves (mean error vs. training set size m) for σ = 0.1, 1.0, 10.0 (top row) and t = 0.1, 0.5, 0.9 (bottom row), each comparing SVM (q = 0) with U-SVMs using q = 100 and q = 500 or 1000 Universum points.]

Figure 1: Learning curves of linear U-SVMs for different degrees of isotropy σ and different amounts of translation z ↦ z + 2t (c^+ − c^−). With increasing isotropy and translation the performance of the U-SVMs converges to the performance of a normal SVM.
Universum   | 0     | 1      | 2      | 3      | 4      | 6     | 7      | 9
Test error  | 1.234 | 1.313  | 1.399  | 1.051  | 1.246  | 1.111 | 1.338  | 1.226
Mean output | 0.406 | -0.708 | -0.539 | -0.031 | -0.256 | 0.063 | -0.165 | -0.360
Angle       | 81.99 | 85.57  | 79.49  | 69.74  | 79.75  | 81.02 | 82.72  | 77.98

Table 1: See text for details. Without Universum, test error is 1.419%. The correlation between the test error and the absolute value of the mean output (resp. angle) is 0.71 (resp. 0.64); the p-value (i.e. the probability of observing such a correlation by chance) is 3% (resp. 5.5%). Note, for instance, that digits 3 and 6 are the best Universa and they are also the closest to the decision boundary.
4.2 Results on MNIST
Following the experimental work from [11], we took up the task of distinguishing between the digits 5 and 8 on MNIST data. Training sets of size 1000 were used, and other digits served as Universum data. Using different digits as Universa, we recorded the test error (in percentage) of the U-SVM. We also computed the mean output (i.e. ⟨w, x⟩ + b) of a normal SVM trained for binary classification between the digits 5 and 8, measured on the points from the Universum class. Another quantity of interest measured was the angle between the covariance matrices of training and Universum data in the feature space. Note that for two covariance matrices C_X and C_Y corresponding to matrices X and Y (centered about their means), the cosine of the angle is defined as trace(C_X C_Y) / sqrt(trace(C_X^2) trace(C_Y^2)). This quantity can be computed in feature space as trace(K_XY K_XY^T) / sqrt(trace(K_XX^2) trace(K_YY^2)), with K_XY the kernel matrix between the sets X and Y. These quantities are documented in Table 1. All the results reported are averaged over 10 folds of cross-validation, with C = C_U = 100, and ε = 0.01.
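A sketch of this computation, together with a linear-kernel sanity check that the kernel-space expression matches the explicit covariance formula (numpy; names are ours):

```python
import numpy as np

def cov_angle_cosine(KXY, KXX, KYY):
    """Cosine of the angle between feature-space covariance matrices:
    trace(K_XY K_XY^T) / sqrt(trace(K_XX^2) trace(K_YY^2)).
    All kernel matrices must be computed on centered data."""
    num = np.trace(KXY @ KXY.T)
    den = np.sqrt(np.trace(KXX @ KXX) * np.trace(KYY @ KYY))
    return num / den

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 6)); X -= X.mean(0)
Y = rng.normal(size=(25, 6)); Y -= Y.mean(0)
CX, CY = X.T @ X, Y.T @ Y
explicit = np.trace(CX @ CY) / np.sqrt(np.trace(CX @ CX) * np.trace(CY @ CY))
kernel = cov_angle_cosine(X @ Y.T, X @ X.T, Y @ Y.T)
assert np.isclose(explicit, kernel)   # identical for the linear kernel
```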
4.3 Classification of Imagined Movements in Brain Computer Interfaces
Brain computer interfaces (BCI) are devices that allow a user to control a computer by merely
using his brain activity [3]. The user indicates different states to a computer system by deliberately
changing his state of mind according to different experimental paradigms. These states are to be
detected by a classifier. In our experiments, we used data from electroencephalographic recordings
(EEG) with an imagined-movement paradigm. In this paradigm the patient imagines the movement
of his left or right hand for indicating the respective state. In order to reverse the spatial blurring of
the brain activity by the intermediate tissue of the skull, the signals from all sensors are demixed via
DATA I
Algorithm | U   | FS                   | JH                    | JL
SVM       | —   | 40.00 ± 7.70         | 40.00 ± 11.32         | 30.00 ± 15.54
U-SVM     | UC3 | 41.33 ± 7.06 (0.63)  | 34.58 ± 9.22 (0.07)   | 30.56 ± 17.22 (1.00)
U-SVM     | Unm | 39.67 ± 8.23 (1.00)  | 37.08 ± 11.69 (0.73)  | 30.00 ± 16.40 (1.00)
LS-SVM    | —   | 41.00 ± 7.04         | 40.42 ± 11.96         | 30.56 ± 15.77
Uls-SVM   | UC3 | 40.67 ± 7.04 (1.00)  | 37.08 ± 7.20 (0.18)   | 31.11 ± 17.01 (1.00)
Uls-SVM   | Unm | 40.67 ± 6.81 (1.00)  | 37.92 ± 12.65 (1.00)  | 30.00 ± 15.54 (1.00)

DATA II
Algorithm | U   | S1                   | S2                    | S3
SVM       | —   | 12.35 ± 6.82         | 35.29 ± 13.30         | 35.26 ± 14.05
U-SVM     | UC3 | 13.53 ± 6.83 (0.63)  | 32.94 ± 11.83 (0.63)  | 35.26 ± 14.05 (1.00)
U-SVM     | Unm | 12.35 ± 7.04 (1.00)  | 27.65 ± 14.15 (0.13)  | 36.84 ± 13.81 (1.00)
LS-SVM    | —   | 13.53 ± 8.34         | 33.53 ± 13.60         | 34.21 ± 12.47
Uls-SVM   | UC3 | 12.94 ± 6.68 (1.00)  | 32.35 ± 10.83 (0.38)  | 35.79 ± 15.25 (1.00)
Uls-SVM   | Unm | 16.47 ± 7.74 (0.50)  | 31.18 ± 13.02 (0.69)  | 35.79 ± 15.25 (1.00)

Table 2: Mean zero-one test error scores for the BCI experiments. The mean was taken over ten single error scores. The p-values for a two-sided sign test against the SVM error scores are given in brackets.
an independent component analysis (ICA) applied to the concatenated lowpass filtered time series
of all recording channels [2].
In the experiments below we used two BCI datasets. For the first set (DATA I) we recorded the EEG
activity from three healthy subjects for an imagined movement paradigm as described by [3]. The
second set (DATA II) contains EEG signals from a similar paradigm [7].
We constructed two kinds of Universa. The first Universum, UC3, consists of recordings from a third condition in the experiments that is not related to imagined movements. Since variations in signals from this condition should not carry any useful information about the imagined-movement task, the classifier should be invariant against them. The second Universum, Unm, is physiologically motivated. In the case of the imagined-movement paradigm the relevant signal is known to be in the so-called μ-band from approximately 10–12 Hz and spatially located over the motor cortices. Unfortunately, signals in the μ-band are also related to visual activity, and independent components can be found that have a strong influence from sensors over the visual cortex. However, since ICA is unsupervised, those independent components could still contain discriminative information. In order to make the learning algorithm prefer the signals from the motor cortex, we construct a Universum Unm by projecting the labelled data onto the independent components that have a strong influence from the visual cortex.
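A rough sketch of this construction, assuming scikit-learn's FastICA as the ICA implementation. The paper's actual pipeline (ICA on concatenated lowpass-filtered time series, manual selection of visual components) is only hinted at here, and `visual_idx` is a hypothetical hand-picked index set:

```python
import numpy as np
from sklearn.decomposition import FastICA

def make_unm(X_all, X_labeled, visual_idx):
    """Project labelled data onto selected independent components.
    `visual_idx` is a hand-picked set of components judged (e.g. from
    their spatial patterns) to load on visual cortex."""
    ica = FastICA(random_state=0).fit(X_all)   # unsupervised demixing
    S = ica.transform(X_labeled)               # source activations
    keep = np.zeros(S.shape[1], dtype=bool)
    keep[list(visual_idx)] = True
    S[:, ~keep] = 0.0                          # discard the other components
    return ica.inverse_transform(S)            # back to sensor space
```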
The machine learning experiments were carried out in two nested cross validation loops, where
the inner loop was used for model selection and the outer for testing. We exclusively used a linear
kernel. Table 2 shows the mean zero-one loss for DATA I and DATA II and the constructed Universa.
On the DATA I dataset, there is no improvement in the error rates for the subjects FS and JL compared to an SVM without Universum. Therefore, we must assume that the employed Universa did
not provide helpful information in those cases. For subject JH, UC3 and Unm yield an improvement
for both Universum algorithms. However, the differences from the SVM error scores are not significant according to a two-sided sign test. The Uls-SVM performs worse than the U-SVM in
almost all cases.
On the DATA II dataset, there was an improvement only for subject S2 using the U-SVM with the
Unm and UC3 Universa (8% and 3% improvement, respectively). However, those differences are also not significant. As already observed for the DATA I dataset, the Uls-SVM performs consistently worse than its hinge loss counterpart.
The better performance of the Unm Universum on the subjects JH and S2 indicates that additional
information about the usefulness of features might in fact help to increase the accuracy of the classifier. The regularisation constant CU for the Universum points was chosen C = CU = 0.1 in
both cases. This means that the non-orthogonality of w on the Universum points was only weakly
penalised, but had equal priority to classifying the labelled examples correctly. This could indicate
that the spatial filtering by the ICA is not perfect and discriminative information might be spread
over several independent components, even over those that are mainly non-discriminative. Using
the Unm Universum and therefore gently penalising the use of these non-discriminative features can
help to improve the classification accuracy, although the factual usefulness seems to vary with the
subject.
5 Conclusion
In this paper we analysed two algorithms for inference with a Universum as proposed by Vapnik
[10]. We demonstrated that the U-SVM as implemented in [11] is equivalent to searching for a
hyperplane which has its normal lying in the orthogonal complement of the space spanned by Universum examples. We also showed that the corresponding least squares Uls -SVM can be seen as a
hybrid between the two well known learning algorithms kFDA and koPCA where the Universum
points, centered between the means of the labelled classes, play the role of the noise covariance in
koPCA. Ideally the covariance matrix of the Universum should thus contain some important invariant directions for the problem at hand.
The position of the Universum set also plays an important role, and both our theoretical and experimental analyses show that the behaviour of the algorithm depends on the difference between the
means of the labelled set and of the Universum set. The question of whether the main influence
of the Universum comes from the position or the covariance does not have a clear answer and is
probably problem dependent.
From a practical point of view, the main contribution of this paper is to suggest how to select a good Universum set: it should be such that it contains invariant directions and is positioned "in between" the two classes. Therefore, as can be partly seen from the BCI experiments, a good Universum dataset needs to be carefully chosen and cannot be an arbitrary backdrop as the name might suggest.
References
[1] O. Chapelle, B. Schölkopf, and A. Zien, editors. Semi-Supervised Learning. MIT Press, Cambridge, MA, 2006.
[2] N. J. Hill, T. N. Lal, M. Schröder, T. Hinterberger, B. Wilhelm, F. Nijboer, U. Mochty, G. Widman, C. E. Elger, B. Schölkopf, A. Kübler, and N. Birbaumer. Classifying EEG and ECoG signals without subject training for fast BCI implementation: Comparison of non-paralysed and completely paralysed subjects. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 14(2):183–186, 2006.
[3] T. N. Lal. Machine Learning Methods for Brain-Computer Interfaces. PhD thesis, University Darmstadt, 2005. Logos Verlag Berlin, MPI Series in Biological Cybernetics, Bd. 12, ISBN 3-8325-1048-6.
[4] N. D. Lawrence and M. I. Jordan. Gaussian processes and the null-category noise model. In O. Chapelle, B. Schölkopf, and A. Zien, editors, Semi-Supervised Learning, chapter 8, pages 137–150. MIT Press, 2006.
[5] S. Mika, G. Rätsch, J. Weston, B. Schölkopf, A. Smola, and K. Müller. Invariant feature extraction and classification in kernel spaces. In Advances in Neural Information Processing Systems 12, pages 526–532, 2000.
[6] S. Mika, G. Rätsch, and K.-R. Müller. A mathematical programming approach to the kernel Fisher algorithm. In Advances in Neural Information Processing Systems, 2000.
[7] J. del R. Millán. On the need for on-line learning in brain-computer interfaces. IDIAP-RR 30, IDIAP, Martigny, Switzerland, 2003. Published in Proc. of the Int. Joint Conf. on Neural Networks, 2004.
[8] P. Sollich. Probabilistic methods for support vector machines. In Advances in Neural Information Processing Systems, 1999.
[9] J. A. K. Suykens and J. Vandewalle. Least squares support vector machine classifiers. Neural Processing Letters, 9(3):293–300, 1999.
[10] V. Vapnik. Transductive inference and semi-supervised learning. In O. Chapelle, B. Schölkopf, and A. Zien, editors, Semi-Supervised Learning, chapter 24, pages 454–472. MIT Press, 2006.
[11] J. Weston, R. Collobert, F. Sinz, L. Bottou, and V. Vapnik. Inference with the Universum. In Proceedings of the 23rd International Conference on Machine Learning, 2006.
[12] P. Zhong and M. Fukushima. A new support vector algorithm. Optimization Methods and Software, 21:359–372, 2006.
Retrieved context and the discovery of semantic
structure
Vinayak A. Rao, Marc W. Howard*
Syracuse University
Department of Psychology
430 Huntington Hall
Syracuse, NY 13244
[email protected], [email protected]
Abstract
Semantic memory refers to our knowledge of facts and relationships between concepts. A successful semantic memory depends on inferring relationships between
items that are not explicitly taught. Recent mathematical modeling of episodic
memory argues that episodic recall relies on retrieval of a gradually-changing representation of temporal context. We show that retrieved context enables the development of a global memory space that reflects relationships between all items
that have been previously learned. When newly-learned information is integrated
into this structure, it is placed in some relationship to all other items, even if that
relationship has not been explicitly learned. We demonstrate this effect for global
semantic structures shaped topologically as a ring, and as a two-dimensional sheet.
We also examined the utility of this learning algorithm for learning a more realistic
semantic space by training it on a large pool of synonym pairs. Retrieved context
enabled the model to "infer" relationships between synonym pairs that had not yet
been presented.
1 Introduction
Semantic memory refers to our ability to learn and retrieve facts and relationships about concepts
without reference to a specific learning episode. For example, when answering a question such as
"what is the capital of France?" it is not necessary to remember details about the event when this fact
was first learned in order to correctly retrieve this information. An appropriate semantic memory for
a set of stimuli as complex as, say, words in the English language, requires learning the relationships
between tens of thousands of stimuli. Moreover, the relationships between these items may describe
a network of non-trivial topology [16]. Given that we can only simultaneously perceive a very small
number of these stimuli, in order to be able to place all stimuli in the proper relation to each other
the combinatorics of the problem require us to be able to generalize beyond explicit instruction. Put
another way, semantic memory needs to not only be able to retrieve information in the absence of
a memory for the details of the learning event, but also retrieve information for which there is no
learning event at all.
Computational models for automatic extraction of semantic content from naturally-occurring text,
such as latent semantic analysis [12], and probabilistic topic models [1, 7], exploit the temporal
co-occurrence structure of naturally-occurring text to estimate a semantic representation of words.
Their success relies to some degree on their ability to not only learn relationships between words
that occur in the same context, but also to infer relationships between words that occur in similar
*Vinayak Rao is now at the Gatsby Computational Neuroscience Unit, University College London.
http://memory.syr.edu.
contexts. However, these models operate on an entire corpus of text, such that they do not describe
the process of learning per se.
Here we show that the temporal context model (TCM), developed as a quantitative model of human
performance in episodic memory tasks, can provide an on-line learning algorithm that learns appropriate semantic relationships from incomplete information. The capacity for this model of episodic
memory to also construct semantic knowledge spaces of multiple distinct topologies, suggests a
relatively subtle relationship between episodic and semantic memory.
2 The temporal context model
Episodic memory is defined as the vivid conscious recollection of information from a specific instance from one's life [18]. Many authors describe episodic memory as the result of the recovery of some type of a contextual representation that is distinct from the items themselves. If a cue item can recover this "pointer" to an episode, this enables recovery of other items that were bound to the contextual representation without committing to lasting interitem connections between items whose occurrence may not be reliably correlated [17].
Laboratory episodic memory tasks can provide an important clue to the nature of the contextual
representation that could underlie episodic memory. For instance, in the free recall task, subjects
are presented with a series of words to be remembered and then instructed to recall all the words
they can remember in any order they come to mind. If episodic recall of an item is a consequence
of recovering a state of context, then the transitions between recalls may tell us something about
the ability of a particular state of context to cue recall of other items. Episodic memory tasks show
a contiguity effect: a tendency to make transitions to items presented close together in time, but
not simultaneously, with the just-recalled word. The contiguity effect shows an apparently universal
form across multiple episodic recall tasks, with a characteristic asymmetry favoring forward recall
transitions [11] (see Figure 1a).
The temporal contiguity effect observed in episodic recall can be simply reconciled with the hypothesis that episodic recall is the result of recovery of a contextual representation if one assumes that
the contextual representation changes gradually over time. The temporal context model (TCM) describes a set of rules for a gradually-changing representation of temporal context and how items can
be bound to and recover states of temporal context. TCM has been applied to a number of problems
in episodic recall [9]. Here we describe the model, incorporating several changes that enable TCM
to describe the learning of stable semantic relationships (detailed in Section 3).1
TCM builds on distributed memory models which have been developed to provide detailed descriptions of performance in human memory tasks [14]. In TCM, a gradually-changing state of temporal
context mediates associations between items and is responsible for recency effects and contiguity
effects. The state of the temporal context vector at time step i is denoted as t_i and changes from moment to moment according to

    t_i = ρ_i t_{i−1} + β t^IN_i,    (1)

where β is a free parameter, t^IN_i is the input caused by the item presented at time step i, assumed to be of unit length, and ρ_i is chosen to ensure that t_i is of unit length. Items, represented as unchanging orthonormal vectors f, are encoded in their study contexts by means of a simple outer-product matrix connecting the t layer to the f layer, M^TF, which is updated according to:

    ΔM^TF_i = f_i t'_{i−1},    (2)
where the prime denotes the transpose and the subscripts here reflect time steps. Items are probed
for recall by multiplying M^TF from the right with the current state of t as a cue. This means that
when tj is presented as a cue, each item is activated to the extent that the probe context overlaps
with its encoding contexts.
The space over which t evolves is obviously determined by the t^IN s. We will decompose t^IN into c^IN, a component that does not change over the course of study of this paper, and h^IN, a component

¹Previously published treatments of TCM have focused on episodic tasks in which items were presented only once. Although the model described here differs from previously published versions in notation and its behavior over multiple item repetitions, it is identical to previously-published results described for single presentations of items.
[Figure 1 omitted: panel a plots cue strength against lag (−5 to +5); panel b diagrams the model, with item layer f, context layer t, and cortical (c_i) and hippocampal (h_i) inputs; panel c plots the cortical and hippocampal cue components.]

Figure 1: Temporal recovery in episodic memory. a. Temporal contiguity effect in episodic recall. Given that an item from a series has just been recalled, the y-axis gives the probability that the next item recalled came from each serial position relative to the just-recalled item. This figure is averaged across a dozen separate studies [11]. b. Visualization of the model. Temporal context vectors t_i are hypothesized to reside in extra-hippocampal MTL regions. When an item f_i is presented, it evokes two inputs to t: a slowly-changing direct cortical input c^IN_i and a more rapidly varying hippocampal input h^IN_i. When an item is repeated, the hippocampal component retrieves the context in which the item was presented. c. While the cortical component serves as a temporally-asymmetric cue when an item is repeated, the hippocampal component provides a symmetric cue. Combining these in the right proportion enables TCM to describe temporal contiguity effects.
that changes rapidly to retrieve the contexts in which an item was presented. Denoting the time steps at which a particular item A was presented as A_i, we have

    t^IN_{A_{i+1}} ∝ γ ĥ^IN_{A_{i+1}} + (1 − γ) c^IN_A,    (3)

where the proportionality reflects the fact that t^IN is always normalized before being used to update t_i as in Eq. 1, and the hat on the h^IN term refers to the normalization of h^IN. We assume that the c^IN s corresponding to the items presented in any particular experiment start and remain orthonormal to each other. In contrast, h^IN starts as zero for each item and then changes according to:

    h^IN_{A_{i+1}} = h^IN_{A_i} + t_{A_i − 1}.    (4)

It has been hypothesized that t_i reflects the pattern of activity at extra-hippocampal medial temporal lobe (MTL) regions, in particular the entorhinal cortex [8]. The notation c^IN and h^IN reflects the hypothesis that the consistent and rapidly-changing parts of t^IN reflect inputs to the entorhinal cortex from cortical and hippocampal sources, respectively (Figure 1b).

According to TCM, associations between items are not formed directly, but rather are mediated by the effect that items have on the state of context, which is then used to probe for recall of other items. When an item is repeated as a probe, this induces a correlation between the t^IN of the probe context and the study context of items that were neighbors of the probe item when it was initially presented. The consistent part of t^IN is an effective cue for items that followed the initial presentation of the probe item (open symbols, Figure 1c). In contrast, recovery of the state of context that was present before the probe item was initially presented is a symmetric cue (filled symbols, Figure 1). Combining these two components in the proper proportions provides an excellent description of contiguity effects in episodic memory [8].
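A compact sketch of Eqs. 1–4 in Python (numpy). The class below is our reading of the text, not the authors' code: ρ_i is solved from the unit-norm requirement, the β override implements footnote 2 of Section 3, and the cue strength follows the definition f_B' M^TF t^IN_A given there:

```python
import numpy as np

class TCM:
    """Sketch of Eqs. 1-4 with orthonormal item inputs c_i^IN = e_i."""

    def __init__(self, n_items, beta=0.6, gamma=0.5):
        self.beta, self.gamma = beta, gamma
        self.c = np.eye(n_items)                # fixed cortical inputs c^IN
        self.h = np.zeros((n_items, n_items))   # hippocampal inputs h^IN
        self.M = np.zeros((n_items, n_items))   # M^TF, row i <-> item f_i
        self.t = np.zeros(n_items)              # current context t

    def _t_in(self, i):
        """Eq. 3: t^IN proportional to gamma*h_hat^IN + (1-gamma)*c^IN."""
        h = self.h[i]
        nh = np.linalg.norm(h)
        v = self.gamma * (h / nh if nh > 0 else h) \
            + (1 - self.gamma) * self.c[i]
        return v / np.linalg.norm(v)

    def present(self, i, beta=None):
        """Encode item i (Eqs. 2 and 4), then evolve context (Eq. 1)."""
        beta = self.beta if beta is None else beta
        t_in = self._t_in(i)
        self.M[i] += self.t                 # Eq. 2: Delta M^TF = f_i t'_{i-1}
        self.h[i] += self.t                 # Eq. 4: accumulate study context
        dot = float(self.t @ t_in)
        # Eq. 1: rho_i chosen so that ||t_i|| = 1
        rho = np.sqrt(1.0 + beta**2 * (dot**2 - 1.0)) - beta * dot
        t_new = rho * self.t + beta * t_in
        self.t = t_new / np.linalg.norm(t_new)

    def cue_strength(self, a, b):
        """f_b' M^TF t_a^IN (the Section 3 definition)."""
        return float(self.M[b] @ self._t_in(a))
```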
3 Constructing global semantic information from local events
In each of the following simulations, we specify a to-be-learned semantic structure by imagining items as the nodes of a graph with some topology. We generated training sequences by randomly sampling edges from the graph.² Each edge only contains a limited amount of information about

²The pairs are chosen randomly, so that any across-pair learning would be uninformative with respect to the overall structure of the graph. To further ensure that learning across pairs from simple contiguity could not contribute to our results, we set β in Eq. 1 to one when the first member of each pair was presented. This means that the temporal context when the second item is presented is effectively isolated from the previous pair.
[Figure 2 omitted: panel a shows a ten-item ring graph (A–J); panels b and c show 10×10 grey-scale matrices of associative strengths; panel d shows a two-dimensional MDS embedding (Dimension 1 vs. Dimension 2) of the items.]

Figure 2: Learning of a one-dimensional structure using contextual retrieval. a. The graph used to generate the training pairs. b–c. Associative strength between items after training (higher strength corresponds to darker cells). b. The model without contextual retrieval (γ = 0). c. The model with contextual retrieval (γ > 0). d. Two-dimensional MDS solution for the log of the data in c. Lines connect points corresponding to nodes connected by an edge.
the global structure. For the model to learn the global structure of the graph, it must somehow integrate the learning events into a coherent whole.

After training we evaluated the ability of the model to capture the topology of the graph by examining the cue strength between each pair of items. The cue strength from item A to B is defined as f_B' M^TF t^IN_A. This reflects the overlap between the c^IN and h^IN components of A and the contexts in which B was presented.³
Because t^IN_i is caused by presentation of item i, we can think of the t^IN s as a representation of the set of items. Learning can be thought of as a mixing of the t^IN s according to the temporal structure of experience. Because the c^IN s are fixed, changes in the representation are solely due to changes in the h^IN s. Suppose that two items, A and B, are presented in sequence. If context is retrieved, then after presentation of the pair A-B, h^IN_B includes the t^IN_A that obtained when A was presented. This includes the current state of h^IN_A as well as the fixed state c^IN_A. If at some later time B is now presented as part of the sequence B-C, then because t^IN_B is similar to t^IN_A, item C is learned in a context that resembles t^IN_A, despite the fact that A and C were not actually presented close together in time. After learning A-B and B-C, t^IN_A and t^IN_C will resemble each other. This ability to rate as similar items that were not presented together in the same context, but that were presented in similar contexts, is a key property of latent models of semantic learning [12].
To isolate the importance of retrieved context for the ability to extract global structure, we will compare a version of the model with γ = 0 to one with γ > 0.⁴ With γ = 0, the model functions as a simple co-occurrence detector in that the cue strength between A and B is non-zero only if c^IN_A was part of the study contexts of B. In the absence of contextual retrieval, this requires that B was preceded by A during study.

Ultimately, the t_i s and h^IN_i s can be expressed as a combination of the c^IN vectors. We therefore treated these as orthonormal basis vectors in the simulations that follow. M^TF and the h^IN s were initialized as a matrix and vectors of zeros, respectively. The parameter β for the second member of a pair was fixed at 0.6.
3.1 1-D: Rings
For this simulation we sampled edges from a ring of ten items (Fig. 2a). We treated the ring
as an undirected graph, in that we sampled an edge A-B equally often as B-A . We presented the
model with 300 pairs chosen randomly from the ring. For example, the training pairs might include
the sub-sequence C-D , A-B , F-E , B-C .
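Using the TCM sketch from Section 2, the ring simulation can be reproduced in outline as follows (again illustrative; β = 1 for the first pair member implements the context reset of footnote 2):

```python
import numpy as np

n = 10
model = TCM(n_items=n, beta=0.6, gamma=0.5)   # TCM sketch from Section 2
rng = np.random.default_rng(0)

for _ in range(300):                          # 300 randomly chosen ring edges
    a = int(rng.integers(n))
    b = (a + rng.choice([-1, 1])) % n         # undirected: either direction
    model.present(a, beta=1.0)                # footnote 2: isolate pairs
    model.present(b)

strengths = np.array([[model.cue_strength(a, b) for b in range(n)]
                      for a in range(n)])
# With gamma > 0 the strength should decay with ring distance, as in Fig. 2c.
```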
³In this implementation of TCM, h^IN_A is identical to f_A' M^TF. This need not be the case in general, as one could alter the learning rate, or even the structure of Eqs. 2 and/or 4, without changing the basic idea of the model.
⁴In the simulations reported below, this value is set to 0.6. The precise value does not affect the qualitative results we report as long as it is not too close to one.
[Figure 3 omitted: panel a shows a 5×5 grid graph (items A–Y); panels b and c show two-dimensional MDS embeddings (Dimension 1 vs. Dimension 2) of the learned associative strengths.]

Figure 3: Reconstruction of a 2-dimensional spatial representation. a. The graph used to construct sequences. b. 2-dimensional MDS solution constructed from the temporal co-occurrence version of TCM (γ = 0) using the log of the associative strength as the metric. Lines connect stimuli from adjacent edges. c. Same as b, but for TCM with retrieved context. The model accurately places the items in the correct topology.
Figure 2b shows the cue strength between each pair of items as a grey-scale image after training the model without contextual retrieval (γ = 0). The diagonal is shaded, reflecting the fact that an item's cue strength to itself is high. In addition, one row on either side of the diagonal is shaded. This reflects the non-zero cue strength between items that were presented as part of the same training pair. That is, the model without contextual retrieval has correctly learned the relationships described by the edges of the graph. However, without contextual retrieval the model has learned nothing about the relationships between the items that were not presented as part of the same pair (e.g. the cue strength between A and C is zero). Figure 2c shows the cue strength between each pair of items for the model with contextual retrieval (γ > 0). The effect of contextual retrieval is that pairs that were not presented together have non-zero cue strength and this cue strength falls off with the number of edges separating the items in the graph. This happens because contextual retrieval
the number of edges separating the items in the graph. This happens because contextual retrieval
enables similarity to ?spread? across the edges of the graph, reaching an equilibrium that reflects
the global structure. Figure 2d shows a two-dimensional MDS (multi-dimensional scaling) solution
conducted on the log of the cue strengths of the model with contextual retrieval. The model appears
to have successfully captured the topology of the graph that generated the pairs. More precisely,
with contextual retrieval, TCM can place the items in a space that captures the topology of the graph
used to generate the training pairs.
On the one hand, the relationships that result from contextual retrieval in this simulation seem intuitive and satisfying. Viewed from another perspective, however, this could be seen as undesirable
behavior. Suppose that the training pairs accurately sample the entire set of relationships that are
actually relevant. Moreover, suppose that one's task were simply to remember the pairs, or alternatively, to predict the next item that would be presented after presenting the first member of a pair.
Under these circumstances, the co-occurrence model performs better than the model equipped with
contextual retrieval.
It should be noted that people form associations across pairs (e.g. A-C ) after learning lists of paired
associates with a linked temporal structure like the rings shown in Figure 2a [15]. In addition, rats
can also generalize across pairs, but this ability depends on an intact hippocampus [2]. These findings suggest that the mechanism of contextual retrieval captures an important property of how we learn in similar circumstances.
3.2 2-D: Spatial navigation
The ring illustrated in Figure 2 demonstrates the basic idea behind contextual retrieval?s ability to
extract semantic spaces, but it is hard to imagine an application where such a simple space would
need to be extracted. In this simulation will illustrate the ability of retrieved context to discover
relationships between stimuli arranged in a two-dimensional sheet. The use of a two-dimensional
sheet has an analog in spatial navigation.
It has long been argued that the medial temporal lobe has a special role in our ability to store and
retrieve information from a spatial map. Eichenbaum [5] has argued that the MTL?s role in spatial
5
navigation is merely a special case of more general role in organizing disjointed experiences into
integrated representations. The present model can be seen as a computational mechanism that could
implement this idea.
In our typical experience, spatial information is highly correlated with temporal information. Because of our tendency to move in continuous paths through our environment, locations that are close
together in space also tend to be experienced close together in time. However, insofar as we travel
in more-or-less straight paths, the combinatorics of the problem place a premium on the ability to
integrate landmarks experienced on different paths into a coherent whole. At the outset we should
emphasize that our extremely simple simulation here does not capture many of the aspects of actual
spatial navigation: the model is not provided with metric spatial information, nor gradually changing item inputs, nor do we discuss how the model could select an appropriate trajectory to reach a
goal [3].
We constructed a graph arranged as a 5×5 grid with horizontal and vertical edges (Figure 3a). We
presented the model with 600 edges from the graph in a randomly-selected order. One may think
of the items as landmarks in a city with a rectangular street plan. The "traveler" takes trips of one
block at a time (perhaps teleporting out of the city between journeys).5 The problem here is not
only to integrate pairs into rows and columns as in the 1-dimensional case, but to place the rows and
columns into the correct relationship to each other.
Figure 3b shows the two-dimensional MDS solution calculated on the log of the cue strengths for the
co-occurrence model. Without contextual retrieval the model places the items in a high-dimensional
structure that reflects their co-occurrence. Figure 3c shows the same calculation for TCM with
contextual retrieval. Contextual retrieval enables the model to place the items on a two-dimensional
sheet that preserves the topology of the graph used to generate the pairs. It is not a map?there is
no sense of North nor an accurate metric between the points?but it is a semantic representation
that captures something intuitive about the organization that generated the pairs. This illustrates the
ability of contextual retrieval to organize isolated experiences, or episodes, into a coherent whole
based on the temporal structure of experience.
3.3 More realistic example: Synonyms
The preceding simulations showed that retrieved context enables learning of simple topologies with
a few items. It is possible that the utility of the model in discovering semantic relationships is limited
to these toy examples. Perhaps it does not scale up well to spaces with large numbers of stimuli, or
perhaps it will be fooled by more realistic and complex topologies.
In this subsection we demonstrate that retrieved context can provide benefits in learning relationships
among a large number of items with a more realistic semantic structure. We assembled a large list of
English words (all unique strings in the TASA corpus) and used these as probes to generate a list of
nearly 114,000 synonym pairs using WordNet. We selected 200 of these synonym pairs at random
as a test list. The word pairs organize into a large number of connected graphs of varying sizes. The
largest of these contained slightly more than 26,000 words; there were approximately 3,500 clusters
with only two words. About 2/3 of the pairs reflect edges within the five largest clusters of words.
We tested performance by comparing the cue strength of the cue word with its synonym to the
associative strength to three lures that were synonyms of other cue words; if the correct answer had
the highest cue strength, it was counted as correct.6 We averaged performance over ten shuffles of
the training pairs. We preserved the order of the synonym pairs, so that this, unlike the previous two
simulations, described a directed graph.
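A sketch of this scoring procedure, written against the TCM sketch above (illustrative; how lures are drawn is our simplification of "synonyms of other cue words"):

```python
import numpy as np

def forced_choice_accuracy(model, test_pairs, rng, n_lures=3):
    """Score (cue, synonym) test pairs against lures drawn from the
    synonyms of other cue words; all-zero ties count as 1/4 correct."""
    targets = [t for _, t in test_pairs]
    correct = 0.0
    for cue, target in test_pairs:
        pool = [t for t in targets if t != target]
        lures = rng.choice(pool, size=n_lures, replace=False)
        scores = [model.cue_strength(cue, c) for c in (target, *lures)]
        if max(scores) == 0:
            correct += 1.0 / (n_lures + 1)    # footnote 6
        elif int(np.argmax(scores)) == 0:
            correct += 1.0
    return correct / len(test_pairs)
```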
Figure 4a shows performance on the training list as a function of learning. The lower curve shows "co-occurrence" TCM without contextual retrieval, γ = 0. The upper curve shows TCM with contextual retrieval, γ > 0. In the absence of contextual retrieval, the model learns linearly, performing perfectly on pairs that have been explicitly presented. However, contextual retrieval enables faster learning of the pairs, presumably due to the fact that it can "infer" relationships between words
⁵We also observed the same results when we presented the model with complete rows and columns of the sheet as a training set rather than simply pairs.
⁶In instances where the cue strength was zero for all the choices, as at the beginning of training, this was counted as 1/4 of a correct answer.
[Figure 4 omitted: two panels plotting P(correct) against the number of pairs presented (in thousands), comparing TCM with contextual retrieval to the co-occurrence model.]

Figure 4: Retrieved context aids in learning synonyms that have not been presented. a. Performance on the synonym test. The curve labeled "TCM" denotes the performance of TCM with contextual retrieval. The curve labeled "Co-occurrence" is the performance of TCM without contextual retrieval. b. Same as a, except that the training pairs were shuffled to omit any of the test pairs from the middle region of the training sequence.
that were never presented together. To confirm that this property holds, we constructed shuffles of
the training pairs such that the test synonyms were not presented for an extended period (see Figure 4b). During this period, the model without contextual retrieval does not improve its performance
on the test pairs because they are not presented. In contrast, TCM with contextual retrieval shows
considerable improvement during that interval.7
4 Discussion
We showed that retrieval of temporal context, an on-line learning method developed for quantitatively describing episodic recall data, can also integrate distinct learning events into a coherent and
intuitive semantic representation. It would be incorrect to describe this representation as a semantic
space, since the cue strength between items is in general asymmetric (Figure 1c). The model thus has the potential to capture some effects of word order and asymmetry. However, one can also think of the set of t^IN s corresponding to the items as a semantic representation that is also a proper space.
Existing models of semantic memory, such as LSA and LDA, differ from TCM in that they are offline learning algorithms. More specifically, these algorithms form semantic associations between
words by batch-processing large collections of natural text (e.g., the TASA corpus). While it would
be interesting to compare results generated by running TCM on such a corpus with these models,
constraints of syntax and style complicate this task. Unlike the simple examples employed here, temporal proximity is not a perfect indicator of local similarity in real world text. The BEAGLE model
[10] describes the semantic representation of a word as a superposition of the words that occurred
with it in the same sentence. This enables BEAGLE to describe semantic relations beyond simple
cooccurrence, but precludes the development of a representation that captures continuously-varying
representations (e.g., Fig. 3). It may be possible to overcome this limitation of a straightforward
application of TCM to naturally-occurring text by generating a predictive representation, as in the
syntagmatic-paradigmatic model [4].
The present results suggest that retrieved temporal context, previously hypothesized to be essential for episodic memory, could also be important in developing coherent semantic representations. This could reflect similar computational mechanisms contributing to separate systems, or it could indicate a deep connection between episodic and semantic memory. A key finding is that adult-onset
amnesics with impaired episodic memory retain the ability to express previously-learned semantic
knowledge but are impaired at learning new semantic knowledge [19]. Previous connectionist models have argued that the hippocampus contributes to classical conditioning by learning compressed
representations of stimuli, and that these representations are eventually transferred to entorhinal cor7
To ensure that this property wasn?t simply a consequence of backward associations for the model with
retrieved context, we re-ran the simulations presenting the pairs simultaneously rather than in sequence (so that
the co-occurrence model would also learn backward associations) and obtained the same results.
7
tex [6]. This could be implemented in the context of the current model by allowing slow plasticity to change the c^IN s over long time scales [13].
Acknowledgments
Supported by NIH award MH069938-01. Thanks to Mark Steyvers, Tom Landauer, Simon Dennis,
and Shimon Edelman for constructive criticism of the ideas described here at various stages of
development. Thanks to Hongliang Gai and Aditya Datey for software development and Jennifer
Provyn for reading an earlier version of this paper.
References
[1] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
[2] M. Bunsey and H. B. Eichenbaum. Conservation of hippocampal memory function in rats and humans. Nature, 379(6562):255–257, 1996.
[3] P. Byrne, S. Becker, and N. Burgess. Remembering the past and imagining the future: a neural model of spatial memory and imagery. Psychological Review, 114(2):340–75, 2007.
[4] S. Dennis. A memory-based theory of verbal cognition. Cognitive Science, 29:145–193, 2005.
[5] H. Eichenbaum. The hippocampus and declarative memory: cognitive mechanisms and neural codes. Behavioural Brain Research, 127(1-2):199–207, 2001.
[6] M. A. Gluck, C. E. Myers, and M. Meeter. Cortico-hippocampal interaction and adaptive stimulus representation: A neurocomputational theory of associative learning and memory. Neural Networks, 18:1265–1279, 2005.
[7] T. L. Griffiths, M. Steyvers, and J. B. Tenenbaum. Topics in semantic representation. Psychological Review, 114(2):211–44, 2007.
[8] M. W. Howard, M. S. Fotedar, A. V. Datey, and M. E. Hasselmo. The temporal context model in spatial navigation and relational learning: Toward a common explanation of medial temporal lobe function across domains. Psychological Review, 112(1):75–116, 2005.
[9] M. W. Howard and M. J. Kahana. A distributed representation of temporal context. Journal of Mathematical Psychology, 46(3):269–299, 2002.
[10] M. N. Jones and D. J. K. Mewhort. Representing word meaning and order information in a composite holographic lexicon. Psychological Review, 114:1–32, 2007.
[11] M. J. Kahana, M. W. Howard, and S. M. Polyn. Associative processes in episodic memory. In H. L. Roediger, editor, Learning and Memory - A Comprehensive Reference. Elsevier, in press.
[12] T. K. Landauer and S. T. Dumais. A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104:211–240, 1997.
[13] J. L. McClelland, B. L. McNaughton, and R. C. O'Reilly. Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. Psychological Review, 102(3):419–57, 1995.
[14] B. B. Murdock. Context and mediators in a theory of distributed associative memory (TODAM2). Psychological Review, 104:839–862, 1997.
[15] N. J. Slamecka. An analysis of double-function lists. Memory & Cognition, 4:581–585, 1976.
[16] M. Steyvers and J. Tenenbaum. The large scale structure of semantic networks: statistical analyses and a model of semantic growth. Cognitive Science, 29:41–78, 2005.
[17] T. J. Teyler and P. DiScenna. The hippocampal memory indexing theory. Behavioral Neuroscience, 100(2):147–54, 1986.
[18] E. Tulving. Elements of Episodic Memory. Oxford, New York, 1983.
[19] R. Westmacott and M. Moscovitch. Names and words without meaning: incidental postmorbid semantic learning in a person with extensive bilateral medial temporal damage. Neuropsychology, 15(4):586–96, 2001.
start:2 recover:2 simon:1 formed:1 characteristic:1 generalize:2 accurately:2 multiplying:1 trajectory:1 published:3 straight:1 detector:1 reach:1 complicate:1 failure:1 acquisition:1 naturally:3 sampled:2 newly:1 treatment:1 recall:15 knowledge:5 subsection:1 subtle:1 actually:2 reflecting:1 appears:1 higher:1 ta:2 teleporting:1 mtl:3 follow:1 specify:1 tom:1 arranged:2 evaluated:1 just:3 stage:1 correlation:1 hand:1 horizontal:1 dennis:2 somehow:1 lda:1 perhaps:3 name:1 effect:12 hypothesized:3 concept:2 normalized:1 byrne:1 shuffled:1 symmetric:2 laboratory:1 semantic:38 illustrated:1 adjacent:1 during:3 noted:1 rat:2 hippocampal:10 syntax:1 presenting:2 complete:1 demonstrate:2 argues:1 performs:1 hin:13 image:1 meaning:2 fi:2 nih:1 common:1 mt:4 preceded:1 conditioning:1 association:6 analog:1 occurred:1 ai:4 automatic:1 beagle:2 grid:1 language:1 had:2 stable:1 cortex:2 similarity:2 something:2 recent:1 showed:2 retrieved:12 perspective:1 prime:1 store:1 success:2 remembered:1 life:1 came:1 captured:1 seen:2 remembering:1 preceding:1 employed:1 period:2 paradigmatic:1 multiple:3 infer:3 faster:1 calculation:1 long:3 retrieval:30 serial:1 equally:1 award:1 paired:1 basic:2 circumstance:2 metric:3 normalization:1 cell:1 preserved:1 addition:2 uninformative:1 interval:1 source:1 extra:2 operate:1 unlike:2 subject:1 isolate:1 tend:1 undirected:1 plato:1 member:3 seem:1 jordan:1 insofar:1 affect:1 psychology:2 burgess:1 topology:10 perfectly:1 idea:4 wasn:1 tcm:25 utility:2 becker:1 york:1 deep:1 se:1 detailed:2 amount:1 neocortex:1 ten:3 conscious:1 induces:1 tenenbaum:2 mcclelland:1 http:1 generate:4 neuroscience:2 correctly:2 per:1 probed:1 taught:1 express:1 key:2 capital:1 changing:7 backward:2 graph:18 mti:1 merely:1 topologically:1 journey:1 place:7 evokes:1 vivid:1 scaling:1 bound:2 layer:2 hi:1 followed:1 activity:1 strength:21 occur:2 precisely:1 constraint:1 software:1 huntington:1 aspect:1 extremely:1 performing:1 relatively:1 eichenbaum:3 transferred:1 department:1 developing:1 according:5 combination:1 kahana:2 across:8 describes:2 remain:1 slightly:1 evolves:1 happens:1 lasting:1 gradually:5 indexing:1 behavioural:1 visualization:1 previously:5 tai:1 discus:1 describing:1 mechanism:4 eventually:1 jennifer:1 mind:1 tulving:1 serf:1 probe:8 appropriate:3 occurrence:12 batch:1 hat:1 assumes:1 denotes:2 ensure:3 include:1 running:1 datey:2 dirichlet:1 exploit:1 build:1 classical:1 move:1 question:1 fa:1 damage:1 md:4 diagonal:2 hai:1 separate:2 separating:1 capacity:1 recollection:1 outer:1 landmark:2 street:1 topic:2 mewhort:1 extent:1 trivial:1 declarative:1 toward:1 induction:1 length:2 code:1 relationship:23 implementation:1 reliably:1 proper:3 incidental:1 allowing:1 upper:1 vertical:1 amnesic:1 howard:4 extended:1 relational:1 precise:1 syracuse:2 pair:43 trip:1 connection:2 sentence:1 extensive:1 recalled:4 coherent:5 learned:9 mediates:1 assembled:1 adult:1 able:3 beyond:2 below:1 pattern:1 reading:1 memory:38 explanation:1 event:6 overlap:2 treated:2 natural:1 indicator:1 representing:1 t0i:1 improve:1 temporally:1 axis:1 mediated:1 extract:2 text:6 review:7 discovery:1 relative:1 interesting:1 limitation:1 allocation:1 integrate:4 degree:1 consistent:2 editor:1 row:4 course:1 placed:1 supported:1 free:2 english:2 transpose:1 offline:1 side:1 verbal:1 cortico:1 neighbor:1 fall:1 distributed:3 benefit:1 curve:4 dimension:6 cortical:4 transition:3 calculated:1 world:1 overcome:1 author:1 instructed:1 clue:1 forward:1 reside:1 collection:1 counted:2 adaptive:1 emphasize:1 
confirm:1 global:7 corpus:4 assumed:1 conservation:1 alternatively:1 landauer:2 continuous:1 latent:4 why:1 learn:5 nature:2 contributes:1 roediger:1 imagining:2 excellent:1 complex:2 constructing:1 marc:2 domain:1 reconciled:1 spread:1 linearly:1 synonym:11 whole:3 nothing:1 repeated:3 complementary:1 fig:2 gai:1 gatsby:2 slow:1 ny:1 darker:1 aid:1 sub:1 inferring:1 position:1 explicit:1 experienced:2 answering:1 tin:15 learns:2 dozen:1 shimon:1 specific:2 symbol:2 list:6 incorporating:1 essential:1 effectively:1 importance:1 ci:1 entorhinal:3 illustrates:1 occurring:3 tasa:2 gluck:1 tc:1 simply:4 expressed:1 contained:1 aditya:1 syntagmatic:1 corresponds:1 relies:2 extracted:1 viewed:1 presentation:4 goal:1 absence:3 content:1 change:9 hard:1 considerable:1 determined:1 typical:1 except:1 specifically:1 wordnet:1 tendency:2 premium:1 intact:1 select:1 college:1 people:1 mark:1 moscovitch:1 constructive:1 tested:1 correlated:2 |
2,462 | 3,233 | Fitted Q-iteration in continuous action-space MDPs
András Antos
Computer and Automation Research Inst.
of the Hungarian Academy of Sciences
Kende u. 13-17, Budapest 1111, Hungary
antos@sztaki.hu
Rémi Munos
SequeL project-team, INRIA Lille
59650 Villeneuve d'Ascq, France
remi.munos@inria.fr
Csaba Szepesvári*
Department of Computing Science
University of Alberta
Edmonton T6G 2E8, Canada
[email protected]
Abstract
We consider continuous state, continuous action batch reinforcement learning
where the goal is to learn a good policy from a sufficiently rich trajectory generated by some policy. We study a variant of fitted Q-iteration, where the greedy
action selection is replaced by searching for a policy in a restricted set of candidate policies by maximizing the average action values. We provide a rigorous
analysis of this algorithm, proving what we believe is the first finite-time bound
for value-function based algorithms for continuous state and action problems.
1 Preliminaries
We will build on the results from [1, 2, 3] and for this reason we use the same notation as these
papers. The unattributed results cited in this section can be found in the book [4].
A discounted MDP is defined by a quintuple $(\mathcal X, \mathcal A, P, S, \gamma)$, where $\mathcal X$ is the (possibly infinite) state space, $\mathcal A$ is the set of actions, $P : \mathcal X \times \mathcal A \to M(\mathcal X)$ is the transition probability kernel with $P(\cdot|x,a)$ defining the next-state distribution upon taking action $a$ from state $x$, $S(\cdot|x,a)$ gives the corresponding distribution of immediate rewards, and $\gamma \in (0,1)$ is the discount factor. Here $\mathcal X$ is a measurable space and $M(\mathcal X)$ denotes the set of all probability measures over $\mathcal X$. The Lebesgue measure shall be denoted by $\lambda$. We start with the following mild assumption on the MDP:

Assumption A1 (MDP Regularity) $\mathcal X$ is a compact subset of the $d_X$-dimensional Euclidean space, $\mathcal A$ is a compact subset of $[-A_\infty, A_\infty]^{d_A}$. The random immediate rewards are bounded by $\hat R_{\max}$ and the expected immediate reward function, $r(x,a) = \int r\, S(dr|x,a)$, is uniformly bounded by $R_{\max}$: $\|r\|_\infty \le R_{\max}$.

A policy determines the next action given the past observations. Here we shall deal with stationary (Markovian) policies which choose an action in a stochastic way based on the last observation only. The value of a policy $\pi$ when it is started from a state $x$ is defined as the total expected discounted reward that is encountered while the policy is executed: $V^\pi(x) = \mathbb E_\pi\big[\sum_{t=0}^{\infty} \gamma^t R_t \,|\, X_0 = x\big]$. Here $R_t \sim S(\cdot|X_t, A_t)$ is the reward received at time step $t$, and the state $X_t$ evolves according to $X_{t+1} \sim P(\cdot|X_t, A_t)$, where $A_t$ is sampled from the distribution determined by $\pi$. We use $Q^\pi : \mathcal X \times \mathcal A \to \mathbb R$ to denote the action-value function of policy $\pi$: $Q^\pi(x,a) = \mathbb E_\pi\big[\sum_{t=0}^{\infty} \gamma^t R_t \,|\, X_0 = x, A_0 = a\big]$.

The goal is to find a policy that attains the best possible values, $V^*(x) = \sup_\pi V^\pi(x)$, at all states $x \in \mathcal X$. Here $V^*$ is called the optimal value function and a policy $\pi^*$ that satisfies $V^{\pi^*}(x) = V^*(x)$ for all $x \in \mathcal X$ is called optimal. The optimal action-value function $Q^*(x,a)$ is $Q^*(x,a) = \sup_\pi Q^\pi(x,a)$. We say that a (deterministic stationary) policy $\pi$ is greedy w.r.t. an action-value function $Q \in B(\mathcal X \times \mathcal A)$, and we write $\pi = \hat\pi(\cdot\,; Q)$, if, for all $x \in \mathcal X$, $\pi(x) \in \operatorname{argmax}_{a \in \mathcal A} Q(x,a)$. Under mild technical assumptions, such a greedy policy always exists. Any greedy policy w.r.t. $Q^*$ is optimal. For $\pi : \mathcal X \to \mathcal A$ we define its evaluation operator, $T^\pi : B(\mathcal X \times \mathcal A) \to B(\mathcal X \times \mathcal A)$, by $(T^\pi Q)(x,a) = r(x,a) + \gamma \int_{\mathcal X} Q(y, \pi(y))\, P(dy|x,a)$. It is known that $Q^\pi = T^\pi Q^\pi$. Further, if we let the Bellman operator, $T : B(\mathcal X \times \mathcal A) \to B(\mathcal X \times \mathcal A)$, be defined by $(TQ)(x,a) = r(x,a) + \gamma \int_{\mathcal X} \sup_{b \in \mathcal A} Q(y,b)\, P(dy|x,a)$, then $Q^* = TQ^*$. It is known that $V^\pi$ and $Q^\pi$ are bounded by $R_{\max}/(1-\gamma)$, just like $Q^*$ and $V^*$. For $\pi : \mathcal X \to \mathcal A$, the operator $E^\pi : B(\mathcal X \times \mathcal A) \to B(\mathcal X)$ is defined by $(E^\pi Q)(x) = Q(x, \pi(x))$, while $E : B(\mathcal X \times \mathcal A) \to B(\mathcal X)$ is defined by $(EQ)(x) = \sup_{a \in \mathcal A} Q(x,a)$.

Throughout the paper $\mathcal F \subset \{ f : \mathcal X \times \mathcal A \to \mathbb R \}$ will denote a subset of real-valued functions over the state-action space $\mathcal X \times \mathcal A$ and $\Pi \subset \mathcal A^{\mathcal X}$ will be a set of policies. For $\nu \in M(\mathcal X)$ and $f : \mathcal X \to \mathbb R$ measurable, we let (for $p \ge 1$) $\|f\|_{p,\nu}^p = \int_{\mathcal X} |f(x)|^p\, \nu(dx)$. We simply write $\|f\|_\nu$ for $\|f\|_{2,\nu}$. Further, we extend $\|\cdot\|_\nu$ to $\mathcal F$ by $\|f\|_\nu^2 = \int_{\mathcal A} \int_{\mathcal X} |f|^2(x,a)\, d\nu(x)\, d\lambda_A(a)$, where $\lambda_A$ is the uniform distribution over $\mathcal A$. We shall use the shorthand notation $\nu f$ to denote the integral $\int f(x)\, \nu(dx)$. We denote the space of bounded measurable functions with domain $\mathcal X$ by $B(\mathcal X)$. Further, the space of measurable functions bounded by $0 < K < \infty$ shall be denoted by $B(\mathcal X; K)$. We let $\|\cdot\|_\infty$ denote the supremum norm.

* Also with: Computer and Automation Research Inst. of the Hungarian Academy of Sciences, Kende u. 13-17, Budapest 1111, Hungary.
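To make these operator definitions concrete, here is a minimal Python sketch on a toy, discretized MDP; the transition tensor, reward table and all constants are illustrative assumptions of ours, not objects from the paper:

    import numpy as np

    # Toy, discretized MDP (illustrative assumption): P[x, a, y] is a transition
    # kernel, r[x, a] the expected immediate reward.
    rng = np.random.default_rng(0)
    n_states, n_actions, gamma = 5, 3, 0.9
    P = rng.random((n_states, n_actions, n_states))
    P /= P.sum(axis=2, keepdims=True)       # normalize rows into distributions
    r = rng.random((n_states, n_actions))

    def bellman(Q):
        # (T Q)(x, a) = r(x, a) + gamma * sum_y P(y | x, a) * max_b Q(y, b)
        return r + gamma * P @ Q.max(axis=1)

    Q = np.zeros((n_states, n_actions))
    for _ in range(200):                    # value iteration: Q <- T Q
        Q = bellman(Q)
    greedy_policy = Q.argmax(axis=1)        # pi(x) in argmax_a Q(x, a)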
2 Fitted Q-iteration with approximate policy maximization
We assume that we are given a finite trajectory, $\{(X_t, A_t, R_t)\}_{1 \le t \le N}$, generated by some stochastic stationary policy $\pi_b$, called the behavior policy: $A_t \sim \pi_b(\cdot|X_t)$, $X_{t+1} \sim P(\cdot|X_t, A_t)$, $R_t \sim S(\cdot|X_t, A_t)$, where $\pi_b(\cdot|x)$ is a density with $\pi_0 \stackrel{\text{def}}{=} \inf_{(x,a) \in \mathcal X \times \mathcal A} \pi_b(a|x) > 0$.

The generic recipe for fitted Q-iteration (FQI) [5] is

$$Q_{k+1} = \text{Regress}(D_k(Q_k)), \qquad (1)$$

where Regress is an appropriate regression procedure and $D_k(Q_k)$ is a dataset defining a regression problem in the form of a list of data-point pairs:

$$D_k(Q_k) = \Big[ \big( (X_t, A_t),\; R_t + \gamma \max_{b \in \mathcal A} Q_k(X_{t+1}, b) \big) \Big]_{1 \le t \le N}.^1$$
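As a concrete illustration of this recipe, here is a minimal Python sketch of generic FQI under assumptions of our own: a finite action set (so the inner max can be enumerated) and a tree-based regressor in the spirit of Ernst et al. [5]; all names are illustrative, not the paper's code.

    import numpy as np
    from sklearn.ensemble import ExtraTreesRegressor

    # Sketch of the generic FQI recipe (1); illustrative assumptions throughout.
    def fitted_q_iteration(X, A, R, Xnext, actions, gamma=0.9, K=20):
        N = len(R)
        inputs = np.column_stack([X, A])    # regression inputs (X_t, A_t)
        Q = None
        for _ in range(K):
            if Q is None:
                targets = R                 # with Q_0 = 0, first targets are R_t
            else:
                # max_b Q_k(X_{t+1}, b), evaluated by sweeping the action set
                qnext = np.column_stack([
                    Q.predict(np.column_stack([Xnext, np.full(N, a)]))
                    for a in actions])
                targets = R + gamma * qnext.max(axis=1)
            Q = ExtraTreesRegressor(n_estimators=50).fit(inputs, targets)
        return Q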
Fitted Q-iteration can be viewed as approximate value iteration applied to action-value functions.
To see this note that value iteration would assign the value $(TQ_k)(x,a) = r(x,a) + \gamma \int_{\mathcal X} \max_{b \in \mathcal A} Q_k(y,b)\, P(dy|x,a)$ to $Q_{k+1}(x,a)$ [6]. Now, remember that the regression function for the jointly distributed random variables $(Z, Y)$ is defined by the conditional expectation of $Y$ given $Z$: $m(Z) = \mathbb E[Y|Z]$. Since for any fixed function $Q$, $\mathbb E[R_t + \gamma \max_{b \in \mathcal A} Q(X_{t+1}, b) \,|\, X_t, A_t] = (TQ)(X_t, A_t)$, the regression function corresponding to the data $D_k(Q)$ is indeed $TQ$, and hence if FQI solved the regression problem defined by $Q_k$ exactly, it would simulate value iteration exactly. However, this argument itself does not directly lead to a rigorous analysis of FQI: since $Q_k$ is obtained based on the data, it is itself a random function. Hence, after the first iteration, the "target" function in FQI becomes random. Furthermore, this function depends on the same data that is used to define the regression problem. Will FQI still work despite these issues? To illustrate the potential difficulties consider a dataset where $X_1, \ldots, X_N$ is a sequence of independent random variables, which are all distributed uniformly at random in $[0,1]$. Further, let $M$ be a random integer greater than $N$ which is independent of the dataset $(X_t)_{t=1}^N$. Let $U$ be another random variable, uniformly distributed in $[0,1]$. Now define the regression problem by $Y_t = f_{M,U}(X_t)$, where $f_{M,U}(x) = \operatorname{sgn}(\sin(2^M 2\pi(x+U)))$. Then it is not hard to see that no matter how big $N$ is, no procedure can

¹ Since the designer controls $Q_k$, we may assume that it is continuous, hence the maximum exists.
estimate the regression function fM,U with a small error (in expectation, or with high probability),
even if the procedure could exploit the knowledge of the specific form of fM,U . On the other hand,
if we restricted M to a finite range then the estimation problem could be solved successfully. The
example shows that if the complexity of the random functions defining the regression problem is
uncontrolled then successful estimation might be impossible.
Amongst the many regression methods in this paper we have chosen to work with least-squares
methods. In this case Equation (1) takes the form
$$Q_{k+1} = \operatorname{argmin}_{Q \in \mathcal F} \sum_{t=1}^{N} \frac{1}{\pi_b(A_t|X_t)} \Big( Q(X_t, A_t) - \big( R_t + \gamma \max_{b \in \mathcal A} Q_k(X_{t+1}, b) \big) \Big)^2. \qquad (2)$$

We call this method the least-squares fitted Q-iteration (LSFQI) method. Here we introduced the weighting $1/\pi_b(A_t|X_t)$ since we do not want to give more weight to those actions that are preferred by the behavior policy.
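To make the weighted regression in (2) concrete, the following sketch solves one LSFQI update for a linear-in-features instance of $\mathcal F$; the feature map, the density values and the ridge term are our own illustrative assumptions, not the paper's.

    import numpy as np

    # One LSFQI update (2) with Q(x, a) = phi(x, a) . w, as weighted least squares.
    # pib[t] = pi_b(A_t | X_t) are assumed given; all names are illustrative.
    def lsfqi_step(phi_sa, R, phi_next_all, pib, w_k, gamma=0.9, ridge=1e-6):
        # phi_sa:        (N, d)    features of the visited pairs (X_t, A_t)
        # phi_next_all:  (N, m, d) features of (X_{t+1}, b) for each candidate b
        targets = R + gamma * (phi_next_all @ w_k).max(axis=1)
        sqrt_w = 1.0 / np.sqrt(pib)         # importance weights 1 / pi_b(a|x)
        Xw, yw = phi_sa * sqrt_w[:, None], targets * sqrt_w
        d = phi_sa.shape[1]
        return np.linalg.solve(Xw.T @ Xw + ridge * np.eye(d), Xw.T @ yw)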
Besides this weighting, the only parameter of the method is the function set F. This function set
should be chosen carefully, to keep a balance between the representation power and the number of
samples. As a specific example for F consider neural networks with some fixed architecture. In
this case the function set is generated by assigning weights in all possible ways to the neural net.
Then the above minimization becomes the problem of tuning the weights. Another example is to use
linearly parameterized function approximation methods with appropriately selected basis functions.
In this case the weight tuning problem would be less demanding. Yet another possibility is to let F
be an appropriate restriction of a Reproducing Kernel Hilbert Space (e.g., in a ball). In this case the
training procedure becomes similar to LS-SVM training [7].
As indicated above, the analysis of this algorithm is complicated by the fact that the new dataset
is defined in terms of the previous iterate, which is already a function of the dataset. Another
complication is that the samples in a trajectory are in general correlated and that the bias introduced
by the imperfections of the approximation architecture may lead to an explosion of the error of the
procedure, as documented in a number of cases in, e.g., [8].
Nevertheless, at least for finite action sets, the tools developed in [1, 3, 2] look suitable to show
that under appropriate conditions these problems can be overcome if the function set is chosen in
a judicious way. However, the results of these works would become essentially useless in the case
of an infinite number of actions since these previous bounds grow to infinity with the number of
actions. Actually, we believe that this is not an artifact of the proof techniques of these works, as
suggested by the counterexample that involved random targets. The following result elaborates this
point further:
Proposition 2.1. Let $\mathcal F \subset B(\mathcal X \times \mathcal A)$. Then even if the pseudo-dimension of $\mathcal F$ is finite, the fat-shattering function of
$$\mathcal F^\vee_{\max} = \Big\{ V_Q : V_Q(\cdot) = \max_{a \in \mathcal A} Q(\cdot, a),\; Q \in \mathcal F \Big\}$$
can be infinite over $(0, 1/2)$.²
Without going into further details, let us just note that the finiteness of the fat-shattering function is a
sufficient and necessary condition for learnability and the finiteness of the fat-shattering function is
implied by the finiteness of the pseudo-dimension [9]. The above proposition thus shows that without
imposing further special conditions on F, the learning problem may become infeasible.
One possibility is of course to discretize the action space, e.g., by using a uniform grid. However, if
the action space has a really high dimensionality, this approach becomes infeasible (even enumerating $2^{d_A}$ points could be impossible when $d_A$ is large). Therefore we prefer alternate solutions.
Another possibility is to make the functions in $\mathcal F$, e.g., uniformly Lipschitz in their state coordinates. Then the same property will hold for functions in $\mathcal F^\vee_{\max}$ and hence by a classical result we can bound the capacity of this set (cf. pp. 353-357 of [10]). One potential problem with this approach is that this way it might be difficult to get a fine control of the capacity of the resulting set.
² The proof of this and the other results are given in the appendix, available in the extended version of this paper, downloadable from http://hal.inria.fr/inria-00185311/en/.
In the approach explored here we modify the fitted Q-iteration algorithm by introducing a policy set $\Pi$ and a search over this set for an approximately greedy policy in a sense that will be made precise in a minute. Our algorithm thus has four parameters: $\mathcal F, \Pi, K, Q_0$. Here $\mathcal F$ is as before, $\Pi$ is a user-chosen set of policies (mappings from $\mathcal X$ to $\mathcal A$), $K$ is the number of iterations and $Q_0$ is an initial value function (a typical choice is $Q_0 \equiv 0$). The algorithm computes a sequence of iterates $(Q_k, \hat\pi_k)$, $k = 0, \ldots, K$, defined by the following equations:

$$\hat\pi_0 = \operatorname{argmax}_{\pi \in \Pi} \sum_{t=1}^{N} Q_0(X_t, \pi(X_t)),$$
$$Q_{k+1} = \operatorname{argmin}_{Q \in \mathcal F} \sum_{t=1}^{N} \frac{1}{\pi_b(A_t|X_t)} \Big( Q(X_t, A_t) - \big( R_t + \gamma Q_k(X_{t+1}, \hat\pi_k(X_{t+1})) \big) \Big)^2, \qquad (3)$$
$$\hat\pi_{k+1} = \operatorname{argmax}_{\pi \in \Pi} \sum_{t=1}^{N} Q_{k+1}(X_t, \pi(X_t)). \qquad (4)$$
Thus, (3) is similar to (2), while (4) defines the policy search problem. The policy search will generally be solved by a gradient procedure or some other appropriate method. The cost of this step will be primarily determined by how well-behaved the iterates $Q_{k+1}$ are in their action arguments. For example, if they were quadratic and if $\Pi$ was linear then the problem would be a quadratic optimization problem. However, except for special cases³ the action-value functions will be more complicated, in which case this step can be expensive. Still, this cost could be similar to that of searching for the maximizing actions for each $t = 1, \ldots, N$ if the approximately maximizing actions are similar across similar states.
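The loop (3)-(4) can be illustrated end to end. The following self-contained sketch runs it on a synthetic one-dimensional problem, with a quadratic feature class standing in for $\mathcal F$ and linear policies, searched by crude random sampling, standing in for $\Pi$; every constant and helper here is an assumption of ours, not the paper's implementation.

    import numpy as np

    # Illustrative instance of (3)-(4) on synthetic data (all choices assumed).
    rng = np.random.default_rng(0)
    N, gamma = 2000, 0.9
    X = rng.uniform(-1, 1, N)
    A = rng.uniform(-1, 1, N)                    # behavior policy: uniform on [-1, 1]
    pib = np.full(N, 0.5)                        # its density
    R = -(A - 0.5 * X) ** 2                      # reward peaks at a = x / 2
    Xnext = np.clip(X + 0.1 * A + 0.05 * rng.standard_normal(N), -1, 1)

    def phi(x, a):                               # quadratic features of (x, a)
        return np.stack([np.ones_like(x), x, a, x * a, x**2, a**2], axis=1)

    w = np.zeros(6)                              # Q_0 = 0
    theta = np.zeros(2)                          # pi_theta(x) = th0 + th1 * x
    for k in range(10):
        # (3): weighted least squares toward R_t + gamma * Q_k(X', pi_k(X'))
        a_next = np.clip(theta[0] + theta[1] * Xnext, -1, 1)
        y = R + gamma * phi(Xnext, a_next) @ w
        Phi, sw = phi(X, A), 1 / np.sqrt(pib)
        w = np.linalg.lstsq(Phi * sw[:, None], y * sw, rcond=None)[0]
        # (4): crude search over Pi by sampling candidate linear policies
        cands = rng.uniform(-1, 1, (256, 2))
        scores = [(phi(X, np.clip(c[0] + c[1] * X, -1, 1)) @ w).sum() for c in cands]
        theta = cands[int(np.argmax(scores))]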
This algorithm, which we could also call a fitted actor-critic algorithm, will be shown to overcome the above-mentioned complexity control problem provided that the complexity of $\Pi$ is controlled appropriately. Indeed, in this case the set of possible regression problems is determined by the set
$$\mathcal F^\vee_\Pi = \{\, V : V(\cdot) = Q(\cdot, \pi(\cdot)),\; Q \in \mathcal F,\; \pi \in \Pi \,\},$$
and the proof will rely on controlling the complexity of $\mathcal F^\vee_\Pi$ by selecting $\mathcal F$ and $\Pi$ appropriately.
3 The main theoretical result

3.1 Outline of the analysis
In order to gain some insight into the behavior of the algorithm, we provide a brief summary of its
error analysis. The main result will be presented subsequently. For $f, Q \in \mathcal F$ and a policy $\pi$, we define the $t$-th TD-error as follows:
$$d_t(f; Q, \pi) = R_t + \gamma Q(X_{t+1}, \pi(X_{t+1})) - f(X_t, A_t).$$
Further, we define the empirical loss function by
$$\hat L_N(f; Q, \pi) = \frac{1}{N} \sum_{t=1}^{N} \frac{d_t^2(f; Q, \pi)}{\lambda(\mathcal A)\, \pi_b(A_t|X_t)},$$
where the normalization with $\lambda(\mathcal A)$ is introduced for mathematical convenience. Then (3) can be written compactly as $Q_{k+1} = \operatorname{argmin}_{f \in \mathcal F} \hat L_N(f; Q_k, \hat\pi_k)$.

The algorithm can then be motivated by the observation that for any $f$, $Q$, and $\pi$, $\hat L_N(f; Q, \pi)$ is an unbiased estimate of
$$L(f; Q, \pi) \stackrel{\text{def}}{=} \|f - T^\pi Q\|_\nu^2 + L^*(Q, \pi), \qquad (5)$$
where the first term is the error we are interested in and the second term captures the variance of the random samples:
$$L^*(Q, \pi) = \int_{\mathcal A} \mathbb E\big[ \operatorname{Var}\big[ R_1 + \gamma Q(X_2, \pi(X_2)) \,\big|\, X_1, A_1 = a \big] \big]\, d\lambda_A(a).$$

³ Linear quadratic regulation is such a nice case. It is interesting to note that in this special case the obvious choices for $\mathcal F$ and $\Pi$ yield zero error in the limit, as can be proven based on the main result of this paper.
This result is stated formally by $\mathbb E\big[\hat L_N(f; Q, \pi)\big] = L(f; Q, \pi)$.

Since the variance term in (5) is independent of $f$, $\operatorname{argmin}_{f \in \mathcal F} L(f; Q, \pi) = \operatorname{argmin}_{f \in \mathcal F} \|f - T^\pi Q\|_\nu^2$. Thus, if $\hat\pi_k$ were greedy w.r.t. $Q_k$ then $\operatorname{argmin}_{f \in \mathcal F} L(f; Q_k, \hat\pi_k) = \operatorname{argmin}_{f \in \mathcal F} \|f - TQ_k\|_\nu^2$. Hence we can still think of the procedure as approximate value iteration over the space of action-value functions, projecting $TQ_k$ using empirical risk minimization on the space $\mathcal F$ w.r.t. $\|\cdot\|_\nu$ distances in an approximate manner. Since $\hat\pi_k$ is only approximately greedy, we will have to deal with both the error coming from the approximate projection and the error coming from the choice of $\hat\pi_k$. To make this clear, we write the iteration in the form
$$Q_{k+1} = T^{\hat\pi_k} Q_k + \varepsilon'_k = TQ_k + \varepsilon'_k + (T^{\hat\pi_k} Q_k - TQ_k) = TQ_k + \varepsilon_k,$$
where $\varepsilon'_k$ is the error committed while computing $T^{\hat\pi_k} Q_k$, $\varepsilon''_k \stackrel{\text{def}}{=} T^{\hat\pi_k} Q_k - TQ_k$ is the error committed because the greedy policy is computed approximately, and $\varepsilon_k = \varepsilon'_k + \varepsilon''_k$ is the total error of step
k. Hence, in order to show that the procedure is well behaved, one needs to show that both errors are
controlled and that when the errors are propagated through these equations, the resulting error stays
controlled, too. Since we are ultimately interested in the performance of the policy obtained, we
will also need to show that small action-value approximation errors yield small performance losses.
For these we need a number of assumptions that concern either the training data, the MDP, or the
function sets used for learning.
3.2 Assumptions

3.2.1 Assumptions on the training data
We shall assume that the data is rich, is in a steady state, and is fast-mixing, where, informally,
mixing means that future depends weakly on the past.
Assumption A2 (Sample Path Properties) Assume that $\{(X_t, A_t, R_t)\}_{t=1,\ldots,N}$ is the sample path of $\pi_b$, a stochastic stationary policy. Further, assume that $\{X_t\}$ is strictly stationary ($X_t \sim \nu \in M(\mathcal X)$) and exponentially $\beta$-mixing with the actual rate given by the parameters $(\bar\beta, b, \kappa)$.⁴ We further assume that the sampling policy $\pi_b$ satisfies $\pi_0 = \inf_{(x,a) \in \mathcal X \times \mathcal A} \pi_b(a|x) > 0$.

The $\beta$-mixing property will be used to establish tail inequalities for certain empirical processes.⁵
Note that the mixing coefficients do not need to be known. In the case when no mixing condition is
satisfied, learning might be impossible. To see this just consider the case when $X_1 = X_2 = \ldots = X_N$. Thus, in this case the learner has many copies of the same random variable and successful
generalization is thus impossible. We believe that the assumption that the process is in a steady state
is not essential for our result, as when the process reaches its steady state quickly then (at the price
of a more involved proof) the result would still hold.
3.2.2 Assumptions on the MDP
In order to prevent the uncontrolled growth of the errors as they are propagated through the updates,
we shall need some assumptions on the MDP. A convenient assumption is the following one [11]:
Assumption A3 (Uniformly stochastic transitions) For all $x \in \mathcal X$ and $a \in \mathcal A$, assume that $P(\cdot|x,a)$ is absolutely continuous w.r.t. $\lambda$ and the Radon-Nikodym derivative of $P$ w.r.t. $\lambda$ is bounded uniformly with bound $C_\lambda$: $C_\lambda \stackrel{\text{def}}{=} \sup_{x \in \mathcal X, a \in \mathcal A} \big\| \frac{dP(\cdot|x,a)}{d\lambda} \big\|_\infty < +\infty$.

Note that by the definition of measure differentiation, Assumption A3 means that $P(\cdot|x,a) \le C_\lambda \lambda(\cdot)$. This assumption essentially requires the transitions to be noisy. We will also prove (weaker) results under the following, weaker assumption:
⁴ For the definition of $\beta$-mixing, see e.g. [2].
⁵ We say "empirical process" and "empirical measure", but note that in this work these are based on dependent (mixing) samples.
Assumption A4 (Discounted-average concentrability of future-state distributions) Given $\rho$, $\nu$, $m \ge 1$ and an arbitrary sequence of stationary policies $\{\pi_m\}_{m \ge 1}$, assume that the future-state distribution $\rho P^{\pi_1} P^{\pi_2} \cdots P^{\pi_m}$ is absolutely continuous w.r.t. $\nu$. Assume that
$$c(m) \stackrel{\text{def}}{=} \sup_{\pi_1, \ldots, \pi_m} \Big\| \frac{d(\rho P^{\pi_1} P^{\pi_2} \cdots P^{\pi_m})}{d\nu} \Big\|_\infty$$
satisfies $\sum_{m \ge 1} m \gamma^{m-1} c(m) < +\infty$. We shall call
$$C_{\rho,\nu} \stackrel{\text{def}}{=} \max\Big( (1-\gamma)^2 \sum_{m \ge 1} m \gamma^{m-1} c(m),\; (1-\gamma) \sum_{m \ge 1} \gamma^m c(m) \Big)$$
the discounted-average concentrability coefficient of the future-state distributions.

The number $c(m)$ measures how much $\rho$ can get amplified in $m$ steps as compared to the reference distribution $\nu$. Hence, in general we expect $c(m)$ to grow with $m$. In fact, the condition that $C_{\rho,\nu}$ is finite is a growth rate condition on $c(m)$. Thanks to discounting, $C_{\rho,\nu}$ is finite for a reasonably large class of systems (see the discussion in [11]).
A related assumption is needed in the error analysis of the approximate greedy step of the algorithm:

Assumption A5 (The random policy "makes no peak-states") Consider the distribution $\mu = (\nu \times \lambda_A)P$ which is the distribution of a state that results from sampling an initial state according to $\nu$ and then executing an action which is selected uniformly at random.⁶ Then $\Gamma_\nu \stackrel{\text{def}}{=} \|d\mu/d\nu\|_\infty < +\infty$.

Note that under Assumption A3 we have $\Gamma_\nu \le C_\lambda$. This (very mild) assumption means that after one step, starting from $\nu$ and executing this random policy, the probability of the next state being in a set is upper bounded by $\Gamma_\nu$-times the probability of the starting state being in the same set.
Besides, we assume that $\mathcal A$ has the following regularity property: Let
$$\mathrm{Py}(a, h, \delta) \stackrel{\text{def}}{=} \big\{ (a', v) \in \mathbb R^{d_A + 1} : \|a - a'\|_1 \le \delta,\; 0 \le v/h \le 1 - \|a - a'\|_1/\delta \big\}$$
denote the pyramid with height $h$ and base given by the $\ell^1$-ball $B(a, \delta) \stackrel{\text{def}}{=} \{ a' \in \mathbb R^{d_A} : \|a - a'\|_1 \le \delta \}$ centered at $a$.

Assumption A6 (Regularity of the action space) We assume that there exists $\alpha > 0$, such that for all $a \in \mathcal A$, for all $\delta > 0$,
$$\frac{\lambda(\mathrm{Py}(a, 1, \delta) \cap (\mathcal A \times \mathbb R))}{\lambda(\mathrm{Py}(a, 1, \delta))} \ge \min\Big( \alpha,\; \frac{\lambda(\mathcal A)}{\lambda(B(a, \delta))} \Big).$$
For example, if $\mathcal A$ is an $\ell^1$-ball itself, then this assumption will be satisfied with $\alpha = 2^{-d_A}$.
Without assuming any smoothness of the MDP, learning in infinite MDPs looks hard (see, e.g., [12, 13]). Here we employ the following extra condition:

Assumption A7 (Lipschitzness of the MDP in the actions) Assume that the transition probabilities and rewards are Lipschitz w.r.t. their action variable, i.e., there exist $L_P, L_r > 0$ such that for all $(x, a, a') \in \mathcal X \times \mathcal A \times \mathcal A$ and measurable sets $B$ of $\mathcal X$,
$$|P(B|x,a) - P(B|x,a')| \le L_P \|a - a'\|_1, \qquad |r(x,a) - r(x,a')| \le L_r \|a - a'\|_1.$$
Note that previously Lipschitzness w.r.t. the state variables was used, e.g., in [11] to construct consistent planning algorithms.
3.2.3 Assumptions on the function sets used by the algorithm

These assumptions are less demanding since they are under the control of the user of the algorithm. However, the choice of these function sets will greatly influence the performance of the algorithm, as we shall see from the bounds. The first assumption concerns the class $\mathcal F$:

Assumption A8 (Lipschitzness of candidate action-value functions) Assume $\mathcal F \subset B(\mathcal X \times \mathcal A)$ and that any element of $\mathcal F$ is uniformly Lipschitz in its action argument in the sense that $|Q(x,a) - Q(x,a')| \le L_A \|a - a'\|_1$ holds for any $x \in \mathcal X$, $a, a' \in \mathcal A$, and $Q \in \mathcal F$.

⁶ Remember that $\lambda_A$ denotes the uniform distribution over the action set $\mathcal A$.
We shall also need to control the capacity of our function sets. We assume that the reader is familiar with the concept of VC-dimension.⁷ Here we use the pseudo-dimension of function sets that builds upon the concept of VC-dimension:

Definition 3.1 (Pseudo-dimension). The pseudo-dimension $V_{\mathcal F^+}$ of $\mathcal F$ is defined as the VC-dimension of the subgraphs of functions in $\mathcal F$ (hence it is also called the VC-subgraph dimension of $\mathcal F$).

Since $\mathcal A$ is multidimensional, we define $V_{\Pi^+}$ to be the sum of the pseudo-dimensions of the coordinate projection spaces, $\Pi_k$, of $\Pi$:
$$V_{\Pi^+} = \sum_{k=1}^{d_A} V_{\Pi_k^+}, \qquad \Pi_k = \{\, \pi_k : \mathcal X \to \mathbb R \;:\; \pi = (\pi_1, \ldots, \pi_k, \ldots, \pi_{d_A}) \in \Pi \,\}.$$

Now we are ready to state our assumptions on our function sets:

Assumption A9 (Capacity of the function and policy sets) Assume that $\mathcal F \subset B(\mathcal X \times \mathcal A; Q_{\max})$ for $Q_{\max} > 0$ and $V_{\mathcal F^+} < +\infty$. Also, $\mathcal A \subset [-A_\infty, A_\infty]^{d_A}$ and $V_{\Pi^+} < +\infty$.
Besides their capacity, one shall also control the approximation power of the function sets involved. Let us first consider the policy set $\Pi$. Introduce
$$e^*(\mathcal F, \Pi) = \sup_{Q \in \mathcal F} \inf_{\pi \in \Pi} \nu(EQ - E^\pi Q).$$
Note that $\inf_{\pi \in \Pi} \nu(EQ - E^\pi Q)$ measures the quality of approximating $\nu EQ$ by $\nu E^\pi Q$. Hence, $e^*(\mathcal F, \Pi)$ measures the worst-case approximation error of $\nu EQ$ as $Q$ is changed within $\mathcal F$. This can be made small by choosing $\Pi$ large.

Another related quantity is the one-step Bellman-error of $\mathcal F$ w.r.t. $\Pi$. This is defined as follows: For a fixed policy $\pi$, the one-step Bellman-error of $\mathcal F$ w.r.t. $T^\pi$ is defined as
$$E_1(\mathcal F; \pi) = \sup_{Q \in \mathcal F} \inf_{Q' \in \mathcal F} \|Q' - T^\pi Q\|_\nu.$$
Taking again a pessimistic approach, the one-step Bellman-error of $\mathcal F$ is defined as
$$E_1(\mathcal F, \Pi) = \sup_{\pi \in \Pi} E_1(\mathcal F; \pi).$$
Typically by increasing $\mathcal F$, $E_1(\mathcal F, \Pi)$ can be made smaller (this is discussed at some length in [3]). However, it also holds for both $\Pi$ and $\mathcal F$ that making them bigger will increase their capacity (pseudo-dimensions), which leads to an increase of the estimation errors. Hence, $\mathcal F$ and $\Pi$ must be selected to balance the approximation and estimation errors, just like in supervised learning.
3.3 The main result

Theorem 3.2. Let $\pi_K$ be a greedy policy w.r.t. $Q_K$, i.e. $\pi_K(x) \in \operatorname{argmax}_{a \in \mathcal A} Q_K(x, a)$. Then under Assumptions A1, A2, and A5-A9, for all $\delta > 0$ we have with probability at least $1 - \delta$: given Assumption A3 (respectively A4), $\|V^* - V^{\pi_K}\|_\infty$ (resp. $\|V^* - V^{\pi_K}\|_{1,\rho}$) is bounded by
$$C \left( \sqrt{E_1(\mathcal F, \Pi)} + e^*(\mathcal F, \Pi) + \left( \frac{\log N + \log(K/\delta)}{N^{1/4}} \right)^{\frac{\kappa+1}{4\kappa} \cdot \frac{1}{d_A + 1}} + \gamma^K \right),$$
where $C$ depends on $d_A$, $V_{\mathcal F^+}$, $(V_{\Pi_k^+})_{k=1}^{d_A}$, $\gamma$, $\kappa$, $b$, $\bar\beta$, $C_\lambda$ (resp. $C_{\rho,\nu}$), $\Gamma_\nu$, $L_A$, $L_P$, $L_r$, $\alpha$, $\lambda(\mathcal A)$, $\pi_0$, $Q_{\max}$, $R_{\max}$, $\hat R_{\max}$, and $A_\infty$. In particular, $C$ scales with $V^{\frac{\kappa+1}{4\kappa(d_A+1)}}$, where $V = 2V_{\mathcal F^+} + V_{\Pi^+}$ plays the role of the "combined effective" dimension of $\mathcal F$ and $\Pi$.

⁷ Readers not familiar with VC-dimension are suggested to consult a book, such as the one by Anthony and Bartlett [14].
4 Discussion

We have presented what we believe are the first finite-time bounds for continuous-state and action-space RL that use value functions. Further, this is the first analysis of fitted Q-iteration, an algorithm
that has proved to be useful in a number of cases, even when used with non-averagers for which no
previous theoretical analysis existed (e.g., [15, 16]). In fact, our main motivation was to show that
there is a systematic way of making these algorithms work and to point at possible problem sources at the same time. We discussed why it can be difficult to make these algorithms work in practice. We suggested that either the set of action-value candidates has to be carefully controlled (e.g., assuming uniform Lipschitzness w.r.t. the state variables), or a policy search step is needed, just like in actor-critic algorithms. The bound in this paper is similar in many respects to a previous bound for a Bellman-residual minimization algorithm [2]. It appears that the techniques developed here can be
used to obtain results for that algorithm when it is applied to continuous action spaces. Finally,
although we have not explored them here, consistency results for FQI can be obtained from our
results using standard methods, like the methods of sieves. We believe that the methods developed
here will eventually lead to algorithms where the function approximation methods are chosen based
on the data (similar to adaptive regression methods) so as to optimize performance, which in our
opinion is one of the biggest open questions in RL. Currently we are exploring this possibility.
Acknowledgments

András Antos would like to acknowledge support for this project from the Hungarian Academy of Sciences (Bolyai Fellowship). Csaba Szepesvári greatly acknowledges the support received from the Alberta Ingenuity Fund, NSERC, and the Computer and Automation Research Institute of the Hungarian Academy of Sciences.
References

[1] A. Antos, Cs. Szepesvári, and R. Munos. Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path. In COLT-19, pages 574-588, 2006.
[2] A. Antos, Cs. Szepesvári, and R. Munos. Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path. Machine Learning, 2007. (accepted).
[3] A. Antos, Cs. Szepesvári, and R. Munos. Value-iteration based fitted policy iteration: learning with a single trajectory. In IEEE ADPRL, pages 330-337, 2007.
[4] D. P. Bertsekas and S. E. Shreve. Stochastic Optimal Control (The Discrete Time Case). Academic Press, New York, 1978.
[5] D. Ernst, P. Geurts, and L. Wehenkel. Tree-based batch mode reinforcement learning. Journal of Machine Learning Research, 6:503-556, 2005.
[6] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. Bradford Book. MIT Press, 1998.
[7] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines (and other kernel-based learning methods). Cambridge University Press, 2000.
[8] J. A. Boyan and A. W. Moore. Generalization in reinforcement learning: Safely approximating the value function. In NIPS-7, pages 369-376, 1995.
[9] P. L. Bartlett, P. M. Long, and R. C. Williamson. Fat-shattering and the learnability of real-valued functions. Journal of Computer and System Sciences, 52:434-452, 1996.
[10] A. N. Kolmogorov and V. M. Tihomirov. ε-entropy and ε-capacity of sets in functional space. American Mathematical Society Translations, 17(2):277-364, 1961.
[11] R. Munos and Cs. Szepesvári. Finite time bounds for sampling based fitted value iteration. Technical report, Computer and Automation Research Institute of the Hungarian Academy of Sciences, Kende u. 13-17, Budapest 1111, Hungary, 2006.
[12] A. Y. Ng and M. Jordan. PEGASUS: A policy search method for large MDPs and POMDPs. In Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence, pages 406-415, 2000.
[13] P. L. Bartlett and A. Tewari. Sample complexity of policy search with known dynamics. In NIPS-19. MIT Press, 2007.
[14] M. Anthony and P. L. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, 1999.
[15] M. Riedmiller. Neural fitted Q iteration - first experiences with a data efficient neural reinforcement learning method. In 16th European Conference on Machine Learning, pages 317-328, 2005.
[16] S. Kalyanakrishnan and P. Stone. Batch reinforcement learning in a complex domain. In AAMAS-07, 2007.
2,463 | 3,234 | Topmoumoute online natural gradient algorithm
Pierre-Antoine Manzagol
University of Montreal
[email protected]
Nicolas Le Roux
University of Montreal
[email protected]
Yoshua Bengio
University of Montreal
[email protected]
Abstract
Guided by the goal of obtaining an optimization algorithm that is both fast and
yields good generalization, we study the descent direction maximizing the decrease in generalization error or the probability of not increasing generalization
error. The surprising result is that from both the Bayesian and frequentist perspectives this can yield the natural gradient direction. Although that direction can be
very expensive to compute we develop an efficient, general, online approximation
to the natural gradient descent which is suited to large scale problems. We report experimental results showing much faster convergence in computation time
and in number of iterations with TONGA (Topmoumoute Online natural Gradient
Algorithm) than with stochastic gradient descent, even on very large datasets.
Introduction
An efficient optimization algorithm is one that quickly finds a good minimum for a given cost function. An efficient learning algorithm must do the same, with the additional constraint that the function is only known through a proxy. This work aims to improve the ability to generalize through
more efficient learning algorithms.
Consider the optimization of a cost on a training set with access to a validation set. As the end
objective is a good solution with respect to generalization, one often uses early stopping: optimizing
the training error while monitoring the validation error to fight overfitting. This approach makes
the underlying assumption that overfitting happens at the later stages. A better perspective is that
overfitting happens all through the learning, but starts being detrimental only at the point it overtakes
the "true" learning. In terms of gradients, the gradient of the cost on the training set is never collinear
with the true gradient, and the dot product between the two actually eventually becomes negative.
Early stopping is designed to determine when that happens. One can thus wonder: can one limit
overfitting before that point? Would this actually postpone that point?
From this standpoint, we discover new justifications behind the natural gradient [1]. Depending on
certain assumptions, it corresponds either to the direction minimizing the probability of increasing
generalization error, or to the direction in which the generalization error is expected to decrease
the fastest. Unfortunately, natural gradient algorithms suffer from poor scaling properties, both with
respect to computation time and memory, when the number of parameters becomes large. To address
this issue, we propose a generally applicable online approximation of natural gradient that scales
linearly with the number of parameters (and requires computation time comparable to stochastic
gradient descent). Experiments show that it can bring significantly faster convergence and improved
generalization.
1 Natural gradient
Let $\tilde L$ be a cost defined as $\tilde L(\theta) = \int L(x, \theta)\, p(x)\, dx$, where $L$ is a loss function over some parameters $\theta$ and over the random variable $x$ with distribution $p(x)$. The problem of minimizing $\tilde L$ over $\theta$ is often
encountered and can be quite difficult. There exist various techniques to tackle it, their efficiency
depending on L and p. In the case of non-convex optimization, gradient descent is a successful
technique. The approach consists in progressively updating $\theta$ using the gradient $\tilde g = \frac{d\tilde L}{d\theta}$.

[1] showed that the parameter space is a Riemannian space of metric $\tilde C$ (the covariance of the gradients), and introduced the natural gradient as the direction of steepest descent in this space. The natural gradient direction is therefore given by $\tilde C^{-1} \tilde g$. The Riemannian space is known to correspond to the space of functions represented by the parameters (instead of the space of the parameters themselves).
The natural gradient somewhat resembles the Newton method. [6] showed that, in the case of a mean
squared cost function, the Hessian is equal to the sum of the covariance matrix of the gradients and
of an additional term that vanishes to 0 as the training error goes down. Indeed, when the data are
generated from the model, the Hessian and the covariance matrix are equal. There are two important
differences: the covariance matrix $\tilde C$ is positive-definite, which makes the technique more stable, but contains no explicit second-order information. The Hessian allows one to account for variations in
the parameters. The covariance matrix accounts for slight variations in the set of training samples. It
also means that, if the gradients highly disagree in one direction, one should not go in that direction,
even if the mean suggests otherwise. In that sense, it is a conservative gradient.
2 A new justification for natural gradient
Until now, we supposed we had access to the true distribution p. However, this is usually not the
case and, in general, the distribution p is only known through the samples of the training set. These
samples define a cost L (resp. a gradient g) that, although close to the true cost (resp. gradient), is
not equal to it. We shall refer to $L$ as the training error and to $\tilde L$ as the generalization error. The danger is then to overfit the parameters $\theta$ to the training set, yielding parameters that are not optimal
with respect to the generalization error.
A simple way to fight overfitting consists in determining the point when the continuation of the optimization on $L$ will be detrimental to $\tilde L$. This can be done by setting aside some samples to form a validation set that will provide an independent estimate of $\tilde L$. Once the error starts increasing
on the validation set, the optimization should be stopped. We propose a different perspective on
overfitting. Instead of only monitoring the validation error, we consider using as descent direction
an estimate of the direction that maximizes the probability of reducing the generalization error. The
goal is to limit overfitting at every stage, with the hope that the optimal point with respect to the
validation should have lower generalization error.
Consider a descent direction $v$. We know that if $v^T \tilde g$ is negative then the generalization error drops (for a reasonably small step) when stepping in the direction of $v$. Likewise, if $v^T g$ is negative then the training error drops. Since the learning objective is to minimize generalization error, we would like $v^T \tilde g$ as small as possible, or at least always negative.
By definition, the gradient on the training set is $g = \frac{1}{n} \sum_{i=1}^{n} g_i$ where $g_i = \frac{\partial L(x_i, \theta)}{\partial \theta}$ and $n$ is the number of training samples. With a rough approximation, one can consider the $g_i$s as draws from the true gradient distribution and assume all the gradients are independent and identically distributed. The central limit theorem then gives

$$g \sim N\Big( \tilde g,\; \frac{\tilde C}{n} \Big) \qquad (1)$$

where $\tilde C$ is the true covariance matrix of $\frac{\partial L(x, \theta)}{\partial \theta}$ w.r.t. $p(x)$.
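As a concrete, entirely illustrative instance of the quantities in (1), the following Python sketch forms per-example gradients $g_i$ of a least-squares model, their mean $g$ and empirical covariance, and a damped natural-gradient direction; the model, the data and the damping constant are assumptions of ours, not objects from the paper:

    import numpy as np

    # Illustrative: per-example gradients g_i, their mean g and covariance C,
    # and the resulting (damped) natural gradient direction C^{-1} g.
    rng = np.random.default_rng(0)
    n, p = 512, 10
    X = rng.standard_normal((n, p))
    y = X @ rng.standard_normal(p) + 0.1 * rng.standard_normal(n)
    theta = np.zeros(p)
    G = (X @ theta - y)[:, None] * X        # g_i = (x_i . theta - y_i) x_i
    g = G.mean(axis=0)                      # training gradient
    C = np.cov(G, rowvar=False)             # empirical covariance of the g_i
    natural_dir = np.linalg.solve(C + 1e-3 * np.eye(p), g)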
We will now show that, both in the Bayesian setting (with a Gaussian prior) and in the frequentist
setting (with some restrictions over the type of gradient considered), the natural gradient is optimal
in some sense.
2.1 Bayesian setting
In the Bayesian setting, $\tilde g$ is a random variable. We would thus like to define a posterior over $\tilde g$ given the samples $g_i$ in order to have a posterior distribution over $v^T \tilde g$ for any given direction $v$. The prior over $\tilde g$ will be a Gaussian centered in $0$ of variance $\sigma^2 I$. Thus, using eq. 1, the posterior over $\tilde g$ given the $g_i$s (assuming the only information over $\tilde g$ given by the $g_i$s is through $g$ and $C$) is

$$\tilde g \,|\, g, \tilde C \sim N\left( \Big( I + \frac{\tilde C}{n\sigma^2} \Big)^{-1} g,\; \Big( \frac{I}{\sigma^2} + n \tilde C^{-1} \Big)^{-1} \right) \qquad (2)$$

Denoting $\tilde C_\sigma = I + \frac{\tilde C}{n\sigma^2}$, we therefore have

$$v^T \tilde g \,|\, g, \tilde C \sim N\left( v^T \tilde C_\sigma^{-1} g,\; \frac{v^T \tilde C_\sigma^{-1} \tilde C v}{n} \right) \qquad (3)$$

Using this result, one can choose between several strategies, among which two are of particular interest:

- choosing the direction $v$ such that the expected value of $v^T \tilde g$ is the lowest possible (to maximize the immediate gain). In this setting, the direction $v$ to choose is
$$v \propto -\tilde C_\sigma^{-1} g. \qquad (4)$$
If $\sigma < \infty$, this is the regularized natural gradient. In the case of $\sigma = \infty$, $\tilde C_\sigma = I$ and this is the batch gradient descent.

- choosing the direction $v$ to minimize the probability of $v^T \tilde g$ to be positive. This is equivalent to finding
$$\operatorname{argmin}_v \frac{v^T \tilde C_\sigma^{-1} g}{\sqrt{v^T \tilde C_\sigma^{-1} \tilde C v}}$$
(we dropped $n$ for the sake of clarity, since it does not change the result). If we square this quantity and take the derivative with respect to $v$, we find $2 \tilde C_\sigma^{-1} g\, (v^T \tilde C_\sigma^{-1} g)(v^T \tilde C_\sigma^{-1} \tilde C v) - 2 \tilde C_\sigma^{-1} \tilde C v\, (v^T \tilde C_\sigma^{-1} g)^2$ at the numerator. The first term is in the span of $\tilde C_\sigma^{-1} g$ and the second one is in the span of $\tilde C_\sigma^{-1} \tilde C v$. Hence, for the derivative to be zero, we must have $g \propto \tilde C v$ (since $\tilde C$ and $\tilde C_\sigma$ are invertible), i.e.
$$v \propto -\tilde C^{-1} g. \qquad (5)$$
This direction is the natural gradient and does not depend on the value of $\sigma$.
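The two directions can be contrasted numerically; the sketch below (illustrative names and synthetic inputs, not the paper's code) builds both from a matrix G of per-example gradients:

    import numpy as np

    # Contrast the regularized natural gradient (4) with the plain natural
    # gradient (5); G holds per-example gradients, one per row (assumed given).
    def bayesian_directions(G, sigma=1.0):
        n, p = G.shape
        g = G.mean(axis=0)
        C = (G.T @ G) / n                   # uncentered second moment, cf. Sec. 3
        C_sigma = np.eye(p) + C / (n * sigma**2)
        v_expected_gain = -np.linalg.solve(C_sigma, g)   # direction (4)
        v_prob_decrease = -np.linalg.solve(C, g)         # direction (5)
        return v_expected_gain, v_prob_decrease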
2.2 Frequentist setting
In the frequentist setting, $\tilde g$ is a fixed unknown quantity. For the sake of simplicity, we will only consider (as all second-order methods do) the directions $v$ of the form $v = M^T g$ (i.e. we are only allowed to go in a direction which is a linear function of $g$).
Since $g \sim N\big( \tilde g, \frac{\tilde C}{n} \big)$, we have

$$v^T \tilde g = g^T M \tilde g \sim N\left( \tilde g^T M \tilde g,\; \frac{\tilde g^T M^T \tilde C M \tilde g}{n} \right) \qquad (6)$$

The matrix $M^*$ which minimizes the probability of $v^T \tilde g$ to be positive satisfies

$$M^* = \operatorname{argmin}_M \frac{\tilde g^T M \tilde g}{\sqrt{\tilde g^T M^T \tilde C M \tilde g}} \qquad (7)$$
The numerator of the derivative of this quantity is $\tilde g \tilde g^T M^T \tilde C M \tilde g \tilde g^T - 2 \tilde C M \tilde g \tilde g^T M \tilde g \tilde g^T$. The first term is in the span of $\tilde g$ and the second one is in the span of $\tilde C M \tilde g$. Thus, for this derivative to be $0$ for all $\tilde g$, one must have $M \propto \tilde C^{-1}$ and we obtain the same result as in the Bayesian case: the natural gradient represents the direction minimizing the probability of increasing the generalization error.
3 Online natural gradient
The previous sections provided a number of justifications for using the natural gradient. However,
the technique has a prohibitive computational cost, rendering it impractical for large scale problems.
Indeed, considering p as the number of parameters and n as the number of examples, a direct batch
implementation of the natural gradient is $O(p^2)$ in space and $O(np^2 + p^3)$ in time, associated respectively with the gradients' covariance storage, computation and inversion. This section reviews
existing low complexity implementations of the natural gradient, before proposing TONGA, a new
low complexity, online and generally applicable implementation suited to large scale problems. In
the previous sections we assumed the true covariance matrix $\tilde C$ to be known. In a practical algorithm we of course use an empirical estimate, and here this estimate is furthermore based on a low-rank approximation denoted $C$ (actually a sequence of estimates $C_t$).
3.1 Low complexity natural gradient implementations
[9] proposes a method specific to the case of multilayer perceptrons. By operating on blocks of
the covariance matrix, this approach attains a lower computational complexity.¹ However, the technique is quite involved, specific to multilayer perceptrons and requires two assumptions: Gaussian
distributed inputs and a number of hidden units much inferior to that of input units. [2] offers a more
general approach based on the Sherman-Morrison formula used in Kalman filters: the technique
maintains an empirical estimate of the inverse covariance matrix that can be updated in $O(p^2)$. Yet the memory requirement remains $O(p^2)$. It is however not necessary to compute the inverse of the gradients' covariance, since one only needs its product with the gradient. [10] offers two approaches
to exploit this. The first uses conjugate gradient descent to solve Cv = g. The second revisits
[9] thereby achieving a lower complexity. [8] also proposes an iterative technique based on the
minimization of a different cost. This technique is used in the minibatch setting, where Cv can be
computed cheaply through two matrix vector products. However, estimating the gradient covariance
only from a small number of examples in one minibatch yields unstable estimation.
3.2 TONGA
Existing techniques fail to provide an implementation of the natural gradient adequate for the large
scale setting. Their main failings are with respect to computational complexity or stability. TONGA
was designed to address these issues, which it does by maintaining a low-rank approximation of
the covariance and by casting both problems of finding the low rank approximation and of computing
the natural gradient in a lower dimensional space, thereby attaining a much lower complexity. What
we exploit here is that although a covariance matrix needs many gradients to be estimated, we can
take advantage of an observed property that it generally varies smoothly as training proceeds and
moves in parameter space.
3.2.1 Computing the natural gradient direction between two eigendecompositions
Even though our motivation for the use of natural gradient implied the covariance matrix of the empirical gradients, we will use the second moment (i.e. the uncentered covariance matrix) throughout
the paper (and so did Amari in his work). The main reason is numerical stability. Indeed, in the
batch setting, we have (assuming $C$ is the centered covariance matrix and $g$ the mean) $v = C^{-1} g$, thus $Cv = g$. But then, $(C + gg^T)v = g + gg^T v = g(1 + g^T v)$ and

$$(C + gg^T)^{-1} g = \frac{v}{1 + g^T v} = v' \qquad (8)$$

¹ Though the technique allows for a compact representation of the covariance matrix, the working memory requirement remains the same.
4
Even though the direction is the same, the scale changes and the norm of the direction is bounded
1
by kgk cos(g,v)
.
Since TONGA operates using a low rank estimate of the gradients? non-centered covariance, we
must be able to update cheaply. When presented with a new gradient, we integrate its information
using the following update formula2:
Ct = ? C?t?1 + gt gtT
(9)
where C0 = 0 and C?t?1 is the low rank approximation at time step t ? 1. Ct is now likely of
greater rank, and the problem resides in computing its low rank approximation C?t . Writing C?t?1 =
T
Xt?1 Xt?1
,
?
Ct = Xt XtT with Xt = [ ?Xt?1 gt ]
With such covariance matrices, computing the (regularized) natural direction v t is equal to
vt = (Ct + ?I)?1 gt = (Xt XtT + ?I)?1 gt
(10)
vt = (Xt XtT + ?I)?1 Xt yt with yt = [0, . . . 0, 1]T .
(11)
Using the Woodbury identity with positive definite matrices [7], we have
vt = Xt (XtT Xt + ?I)?1 yt
(12)
If Xt is of size p ? r (with r < p, thus yielding a covariance matrix of rank r), the cost of this
computation is O(pr 2 + r3 ). However, since the Gram matrix Gt = XtT Xt can be rewritten as
? T
? T
T
?Xt?1 gt
?Gt?1
?Xt?1 gt
?Xt?1
Xt?1
? T
? T
=
,
(13)
Gt =
?gt Xt?1
gtT gt
?gt Xt?1
gtT gt
the cost of computing Gt using Gt?1 reduces to O(pr + r 3 ). This stresses the need to keep r small.
3.2.2 Updating the low-rank estimate of Ct
To keep a low-rank estimate of Ct = Xt XtT , we can compute its eigendecomposition and keep only
the first k eigenvectors. This can be made at low cost using its relation to that of G t :
Gt = V DV T
Ct = (Xt V D? 2 )D(Xt V D? 2 )T
(14)
The cost of such an eigendecomposition is O(kr 2 + pkr) (for the computation of the eigendecomposition of the Gram matrix and the computation of the eigenvectors, respectively). Since the cost of
computing the natural direction is O(pr + r 3 ), it is computationally more efficient to let the rank of
Xt grow for several steps (using formula 12 in between) and then compute the eigendecomposition
using
i
h
t+b
b?1
1
T
Ct+b = Xt+b Xt+b
with Xt+b = ?Ut , ? 2 gt+1 , . . . ? 2 gt+b?1 , ? 2 gt+b ]
1
1
with Ut the unnormalized eigenvectors computed during the previous eigendecomposition.
3.2.3 Computational complexity
The computational complexity of TONGA depends on the complexity of updating the low rank
approximation and on the complexity of computing the natural gradient. The cost of updating the
approximation is in O(k(k + b)2 + p(k + b)k) (as above, using r = k + b). The cost of computing
3
the natural
pgradient vt is in O(p(k + b) + (k + b) ) (again, as above, using r = k + b). Assuming
k + b (p) and k ? b, TONGA?s total computational cost per each natural gradient computation
is then O(pb).
Furthermore, by operating on minibatch gradients of size b0 , we end up with a cost per example of
0
O( bp
b0 ). Choosing b = b , yields O(p) per example, the same as stochastic gradient descent. Empirical comparison of cpu time also shows comparable CPU time per example, but faster convergence.
In our experiments, p was in the tens of thousands, k was less than 5 and b was less than 50.
The result is an approximate natural gradient with low complexity, general applicability and flexibility over the tradoff between computations and the quality of the estimate.
2
The second term is not weighted by 1?? so that the influence of gt in Ct is the same for all t, even t = 0.To
keep the magnitude of the matrix constant, one must use a normalization constant equal to 1 + ? + . . . + ? t .
5
4 Block-diagonal online natural gradient for neural networks
One might wonder if there are better approximations of the covariance matrix C than computing its
first k eigenvectors. One possibility is a block-diagonal approximation, from which we retain only
the first k eigenvectors of every block (the value of k can be different for each block). Indeed, [4]
showed that the Hessian of a neural network with one hidden layer trained with the cross-entropy
cost converges to a block diagonal matrix during optimization. These blocks are composed of the
weights linking all the hidden units to one output unit and all the input units to one hidden unit.
Given the close relationship between the Hessian and the covariance matrices, we can assume they
have a similar shape during the optimization.
Figure 1 shows the correlation between the standard stochastic gradients of the parameters of a
16 × 50 × 26 neural network. The first blocks represent the weights going from the input units to
each hidden unit (thus 50 blocks of size 17, bias included) and the following represent the weights
going from the hidden units to each output unit (26 blocks of size 51). One can see that the block-diagonal approximation is reasonable. Thus, instead of selecting only k eigenvectors to represent
the full covariance matrix, we can select k eigenvectors for every block, yielding the same total cost.
However, the rank of the approximation goes from k to k × (number of blocks). In the matrices shown
in figure 1, which are of size 2176, a value of k = 5 yields an approximation of rank 380.
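A hypothetical sketch of the block-diagonal variant, reusing the tonga_step and truncate_rank sketches above: the gradient is sliced into per-block index sets (e.g. the fan-in weights of each unit) and each block keeps its own independent low-rank state. The list-of-index-arrays interface and the dictionary-based state are our own illustrative choices.

```python
def blockdiag_natural_direction(g, blocks, states, gamma, lam, k):
    """blocks: list of index arrays partitioning the parameter vector;
    states: one dict per block holding that block's low-rank 'X' and 'G'."""
    v = np.empty_like(g)
    for idx, st in zip(blocks, states):
        st['X'], st['G'], v[idx] = tonga_step(st['X'], st['G'],
                                              g[idx], gamma, lam)
        if st['X'].shape[1] > k:                  # truncate once the rank grows
            st['X'], st['G'] = truncate_rank(st['X'], st['G'], k)
    return v
```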
[Figure 1 shows three panels: (a) Stochastic gradient, (b) TONGA, (c) TONGA - zoom.]
Figure 1: Absolute correlation between the standard stochastic gradients after one epoch in a neural network with 16 input units, 50 hidden units and 26 output units when following stochastic gradient directions (left) and natural gradient directions (center and right).
Figure 2 shows the ratio of squared Frobenius norms $\frac{\|C - \hat{C}\|_F^2}{\|C\|_F^2}$ for different types of approximations Ĉ (full or block-diagonal). We can first notice that approximating only the blocks yields a ratio of .35 (in comparison, taking only the diagonal of C yields a ratio of .80), even though we considered only 82076 out of the 4734976 elements of the matrix (1.73% of the total). This ratio is almost obtained with k = 6. We can also notice that, for k < 30, the block-diagonal approximation is much better (in terms of the Frobenius norm) than the full approximation. The block diagonal approximation is therefore very cost effective.
[Figure 2 shows two panels, (a) Full view and (b) Zoom, plotting the ratio of the squared Frobenius norms against the number k of eigenvectors kept, for the full matrix approximation and the block diagonal approximation.]
Figure 2: Quality of the approximation Ĉ of the covariance C depending on the number of eigenvectors kept (k), in terms of the ratio of squared Frobenius norms $\frac{\|C - \hat{C}\|_F^2}{\|C\|_F^2}$, for different types of approximation Ĉ (full matrix or block diagonal).
This shows the block diagonal approximation constitutes a powerful and cheap approximation of the
covariance matrix in the case of neural networks. Yet this approximation also readily applies to any
mixture algorithm where we can assume independence between the components.
5 Experiments
We performed a small number of experiments with TONGA approximating the full covariance matrix, keeping the overhead of the natural gradient small (i.e., limiting the rank of the approximation).
Regrettably, TONGA performed only as well as stochastic gradient descent, while being rather sensitive to the hyperparameter values. The following experiments, on the other hand, use TONGA
with the block diagonal approximation and yield impressive results. We believe this is a reflection
of the phenomenon illustrated in figure 2: the block diagonal approximation makes for a very cost
effective approximation of the covariance matrix. All the experiments have been made optimizing
hyperparameters on a validation set (not shown here) and selecting the best set of hyperparameters
for testing, trying to keep small the overhead due to natural gradient calculations.
One could worry about the number of hyperparameters of TONGA. However, default values of
k = 5, b = 50 and γ = .995 yielded good results in every experiment. When λ goes to infinity, TONGA becomes the standard stochastic gradient algorithm. Therefore, a simple heuristic for λ is to progressively tune it down. In our experiments, we only tried powers of ten.
5.1 MNIST dataset
The MNIST digits dataset consists of 50000 training samples, 10000 validation samples and 10000
test samples, each one composed of 784 pixels. There are 10 different classes (one for every digit).
[Figure 3 shows four panels, (a) Train class error, (b) Test class error, (c) Train NLL and (d) Test NLL, plotting classification error and negative log-likelihood against CPU time (in seconds) for block diagonal TONGA and stochastic gradient with batch sizes 1, 400, 1000 and 2000.]
Figure 3: Comparison between stochastic gradient and TONGA on the MNIST dataset (50000 training examples), in terms of training and test classification error and Negative Log-Likelihood (NLL). The mean and standard error have been computed using 9 different initializations.
Figure 3 shows that in terms of training CPU time (which includes the overhead due to TONGA),
TONGA allows much faster convergence in training NLL, as well as in testing classification error
and testing NLL than ordinary stochastic and minibatch gradient descent on this task. One can also
note that minibatch stochastic gradient is able to profit from matrix-matrix multiplications, but this
advantage is mainly seen in training classification error.
5.2 Rectangles problem
The Rectangles-images task has been proposed in [5] to compare deep belief networks and support
vector machines. It is a two-class problem and the inputs are 28 × 28 grey-level images of rectangles
located in varying locations and of different dimensions. The inside of the rectangle and the background are extracted from different real images. We used 900,000 training examples and 10,000 validation examples (no early stopping was performed, we show the whole training/validation curves).
All the experiments are performed with a multi-layer network with a 784-200-200-100-2 architecture (previously found to work well on this dataset). Figure 4 shows that in terms of training CPU
time, TONGA allows much faster convergence than ordinary stochastic gradient descent on this
task, as well as lower classification error.
[Figure 4 shows four panels, (a) Train NLL error, (b) Test NLL error, (c) Train class error and (d) Test class error, plotting negative log-likelihood and classification error on the training and test sets against CPU time (in seconds) for stochastic gradient and block diagonal TONGA.]
Figure 4: Comparison between stochastic gradient descent and TONGA w.r.t. NLL and classification error, on training and validation sets for the rectangles problem (900,000 training examples).
6 Discussion
[3] reviews the different gradient descent techniques in the online setting and discusses their respective properties. Particularly, he states that a second order online algorithm (i.e., with a search direction v = Mg, with g the gradient and M a positive semidefinite matrix) is optimal (in terms of convergence speed) when M converges to H⁻¹. Furthermore, the speed of convergence depends
(amongst other things) on the rank of the matrix M . Given the aforementioned relationship between
the covariance and the Hessian matrices, the natural gradient is close to optimal in the sense defined
above, provided the model has enough capacity. On mixture models where the block-diagonal approximation is appropriate, it allows us to maintain an approximation of much higher rank than a
standard low-rank approximation of the full covariance matrix.
Conclusion and future work
We bring two main contributions in this paper. First, by looking for the descent direction with either the greatest probability of not increasing generalization error or the direction with the largest expected decrease in generalization error, we obtain new justifications for the natural gradient descent direction. Second, we present an online low-rank approximation of natural gradient descent with computational complexity and CPU time similar to stochastic gradient descent. In a number of experimental comparisons we find this optimization technique to beat stochastic gradient in terms of
experimental comparisons we find this optimization technique to beat stochastic gradient in terms of
speed and generalization (or in generalization for a given amount of training time). Even though default values for the hyperparameters yield good results, it would be interesting to have an automatic
procedure to select the best set of hyperparameters.
References
[1] S. Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251-276, 1998.
[2] S. Amari, H. Park, and K. Fukumizu. Adaptive method of realizing natural gradient learning for multilayer perceptrons. Neural Computation, 12(6):1399-1409, 2000.
[3] L. Bottou. Stochastic learning. In O. Bousquet and U. von Luxburg, editors, Advanced Lectures on Machine Learning, number LNAI 3176 in Lecture Notes in Artificial Intelligence, pages 146-168. Springer Verlag, Berlin, 2004.
[4] R. Collobert. Large Scale Machine Learning. PhD thesis, Université de Paris VI, LIP6, 2004.
[5] H. Larochelle, D. Erhan, A. Courville, J. Bergstra, and Y. Bengio. An empirical evaluation of deep architectures on problems with many factors of variation. In Twenty-fourth International Conference on Machine Learning (ICML 2007), 2007.
[6] Y. LeCun, L. Bottou, G. Orr, and K.-R. Müller. Efficient backprop. In G. Orr and K.-R. Müller, editors, Neural Networks: Tricks of the Trade, pages 9-50. Springer, 1998.
[7] K. B. Petersen and M. S. Pedersen. The matrix cookbook, Feb 2006. Version 20051003.
[8] N. N. Schraudolph. Fast curvature matrix-vector products for second-order gradient descent. Neural Computation, 14(7):1723-1738, 2002.
[9] H. H. Yang and S. Amari. Natural gradient descent for training multi-layer perceptrons. Submitted to IEEE Tr. on Neural Networks, 1997.
[10] H. H. Yang and S. Amari. Complexity issues in natural gradient descent method for training multi-layer perceptrons. Neural Computation, 10(8):2137-2157, 1998.
Sparse Overcomplete Latent Variable Decomposition
of Counts Data
Madhusudana Shashanka
Mars, Incorporated
Hackettstown, NJ
[email protected]
Bhiksha Raj
Mitsubishi Electric Research Labs
Cambridge, MA
[email protected]
Paris Smaragdis
Adobe Systems
Newton, MA
[email protected]
Abstract
An important problem in many fields is the analysis of counts data to extract meaningful latent components. Methods like Probabilistic Latent Semantic Analysis
(PLSA) and Latent Dirichlet Allocation (LDA) have been proposed for this purpose. However, they are limited in the number of components they can extract and
lack an explicit provision to control the "expressiveness" of the extracted components. In this paper, we present a learning formulation to address these limitations
by employing the notion of sparsity. We start with the PLSA framework and
use an entropic prior in a maximum a posteriori formulation to enforce sparsity.
We show that this allows the extraction of overcomplete sets of latent components
which better characterize the data. We present experimental evidence of the utility
of such representations.
1 Introduction
A frequently encountered problem in many fields is the analysis of histogram data to extract meaningful latent factors from it. For text analysis where the data represent counts of word occurrences
from a collection of documents, popular techniques available include Probabilistic Latent Semantic
Analysis (PLSA; [6]) and Latent Dirichlet Allocation (LDA; [2]). These methods extract components that can be interpreted as topics characterizing the corpus of documents. Although they are
primarily motivated by the analysis of text, these methods can be applied to analyze arbitrary count
data. For example, images can be interpreted as histograms of multiple draws of pixels, where each
draw corresponds to a "quantum of intensity". PLSA allows us to express distributions that underlie
such count data as mixtures of latent components. Extensions to PLSA include methods that attempt
to model how these components co-occur (e.g. LDA, Correlated Topic Model [1]).
One of the main limitations of these models is related to the number of components they can extract.
Realistically, it may be expected that the number of latent components in the process underlying
any dataset is unrestricted. However, the number of components that can be discovered by LDA
or PLSA is restricted by the cardinality of the data, e.g. by the vocabulary of the documents, or
the number of pixels of the image analyzed. Any analysis that attempts to find an overcomplete
set of a larger number of components encounters the problem of indeterminacy and is liable to
result in meaningless or trivial solutions. The second limitation of the models is related to the
"expressiveness" of the extracted components, i.e. the information content in them. Although the methods aim to find "meaningful" latent components, they do not actually provide any control over
the information content in the components.
In this paper, we present a learning formulation that addresses both these limitations by employing
the notion of sparsity. Sparse coding refers to a representational scheme where, of a set of components that may be combined to compose data, only a small number are combined to represent any
particular instance of the data (although the specific set of components may change from instance to
instance). In our problem, this translates to permitting the generating process to have an unrestricted
number of latent components, but requiring that only a small number of them contribute to the composition of the histogram represented by any data instance. In other words, the latent components
must be learned such that the mixture weights with which they are combined to generate any data
have low entropy (a set with low entropy implies that only a few mixture weight terms are significant). This addresses both the limitations. Firstly, it largely eliminates the problem of indeterminacy
permitting us to learn an unrestricted number of latent components. Secondly, estimation of low
entropy mixture weights forces more information on to the latent components, thereby making them
more expressive.
The basic formulation we use to extract latent components is similar to PLSA. We use an entropic
prior to manipulate the entropy of the mixture weights. We formulate the problem in a maximum a
posteriori framework and derive inference algorithms. We use an artificial dataset to illustrate the
effects of sparsity on the model. We show through simulations that sparsity can lead to components
that are more representative of the true nature of the data compared to conventional maximum likelihood learning. We demonstrate through experiments on images that the latent components learned
in this manner are more informative enabling us to predict unobserved data. We also demonstrate
that they are more discriminative than those learned using regular maximum likelihood methods.
We then present conclusions and avenues for future work.
2 Latent Variable Decomposition
Consider an F × N count matrix V. We will consider each column of V to be the histogram of an
independent set of draws from an underlying multinomial distribution over F discrete values. Each
column of V thus represents counts in a unique data set. Vf n , the f th row entry of Vn , the nth
column of V, represents the count of f (or the f th discrete symbol that may be generated by the
multinomial) in the nth data set. For example, if the columns of V represent word count vectors
for a collection of documents, Vf n would be the count of the f th word of the vocabulary in the nth
document in the collection.
We model all data as having been generated by a process that is characterized by a set of latent
probability distributions that, although not directly observed, combine to compose the distribution
of any data set. We represent the probability of drawing f from the z th latent distribution by P (f |z),
where z is a latent variable. To generate any data set, the latent distributions P (f |z) are combined in
proportions that are specific to that set. Thus, each histogram (column) in V is the outcome of draws
from a distribution that is a column-specific composition of P (f |z). We can define the distribution
underlying the nth column of V as
$$P_n(f) = \sum_z P(f|z)\, P_n(z), \qquad (1)$$
where Pn (f ) represents the probability of drawing f in the nth data set in V, and Pn (z) is the
mixing proportion signifying the contribution of P (f |z) towards Pn (f ).
Equation 1 is functionally identical to that used for Probabilistic Latent Semantic Analysis of text
data [6] (see footnote 1): if the columns V_n of V represent word count vectors for documents, P(f|z) represents the z-th latent topic in the documents. Analogous interpretations may be proposed for other types of data as well. For example, if each column of V represents one of a collection of images (each of which has been unraveled into a column vector), the P(f|z)'s would represent the latent 'bases'
that compose all images in the collection. In maintaining this latter analogy, we will henceforth refer
to P (f |z) as the basis distributions for the process.
Geometrically, the normalized columns of V (obtained by scaling the entries of Vn to sum to 1.0),
V̄_n, which we refer to as data distributions, may be viewed as F-dimensional vectors that lie in an (F − 1) simplex. The distributions P_n(f) and basis distributions P(f|z) are also F-dimensional
vectors in the same simplex. The model expresses Pn (f ) as points within the convex hull formed
by the basis distributions P (f |z). The aim of the model is to determine P (f |z) such that the model
Footnote 1: PLSA actually represents the joint distribution of n and f as P(n, f) = P(n) Σ_z P(f|z) P(z|n). However the maximum likelihood estimate of P(n) is simply the fraction of all observations from all data sets that occurred in the n-th data set and does not affect the estimation of P(f|z) and P(z|n).
[Figure 1 shows two panels over the standard 2-simplex, one with 2 basis vectors and one with 3 basis vectors, marking the simplex boundary, the data points, the basis vectors, the approximation, and the convex hull.]
Figure 1: Illustration of the latent variable model. Panels show 3-dimensional data distributions as points within the Standard 2-Simplex given by {(001), (010), (100)}. The left panel shows a set of 2 Basis distributions (compact code) derived from the 400 data points. The right panel shows a set of 3 Basis distributions (complete code). The model approximates data distributions as points lying within the convex hull formed by the basis distributions. Also shown are two data points and their approximations by the model.
P_n(f) for any data distribution V̄_n approximates it closely. Since P_n(f) is constrained to lie within the simplex defined by P(f|z), it can only model V̄_n accurately if the latter also lies within the hull. Any V̄_n that lies outside the hull is modeled with error. Thus, the objective of the model is to identify P(f|z) such that they form a convex hull surrounding the data distributions. This is illustrated in Figure 1 for a synthetic data set of 400 3-dimensional data distributions.
2.1 Parameter Estimation
Given count matrix V, we estimate P (f |z) and Pn (z) to maximize the likelihood of V. This can be
done through iterations of equations derived using the Expectation Maximization (EM) algorithm:
$$P_n(z|f) = \frac{P_n(z)\, P(f|z)}{\sum_z P_n(z)\, P(f|z)}, \qquad (2)$$
and
$$P_n(z) = \frac{\sum_f V_{fn}\, P_n(z|f)}{\sum_z \sum_f V_{fn}\, P_n(z|f)}, \qquad P(f|z) = \frac{\sum_n V_{fn}\, P_n(z|f)}{\sum_f \sum_n V_{fn}\, P_n(z|f)} \qquad (3)$$
Detailed derivation is shown in supplemental material. The EM algorithm guarantees that the above
multiplicative updates converge to a local optimum.
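For concreteness, a compact NumPy transcription of updates (2)-(3) might look as follows; W holds P(f|z) in its columns and H holds P_n(z) in its columns, and the element-wise form below is algebraically equivalent to first accumulating P_n(z|f). Function and variable names are illustrative.

```python
import numpy as np

def plsa_em(V, R, n_iter=100, seed=0):
    """Maximum likelihood estimates of P(f|z) (columns of W) and
    P_n(z) (columns of H) for an F x N count matrix V."""
    rng = np.random.default_rng(seed)
    F, N = V.shape
    W = rng.random((F, R)); W /= W.sum(axis=0)
    H = rng.random((R, N)); H /= H.sum(axis=0)
    for _ in range(n_iter):
        Q = V / np.maximum(W @ H, 1e-12)   # V_fn / P_n(f)
        W_new = W * (Q @ H.T)              # numerators of eq. (3) for P(f|z)
        H_new = H * (W.T @ Q)              # numerators of eq. (3) for P_n(z)
        W = W_new / W_new.sum(axis=0)      # normalize each basis distribution
        H = H_new / H_new.sum(axis=0)      # normalize each mixture-weight column
    return W, H
```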
2.2 Latent Variable Model as Matrix Factorization
We can write the model given by equation (1) in matrix form as pn = Wgn , where pn is a column
vector indicating P_n(f), g_n is a column vector indicating P_n(z), and W is a matrix with the (f, z)-th element corresponding to P(f|z). If we characterize V by R basis distributions, W is an F × R matrix. Concatenating all column vectors p_n and g_n as matrices P and G respectively, one can write the model as P = WG, where G is an R × N matrix. It is easy to show (as demonstrated in
the supplementary material) that the maximum likelihood estimator for P (f |z) and Pn (z) attempts
to minimize the Kullback-Leibler (KL) distance between the normalized data distribution Vn and
Pn (f ), weighted by the total count in Vn . In other words, the model of Equation (1) actually
represents the decomposition
$$\mathbf{V} \approx \mathbf{WGD} = \mathbf{WH} \qquad (4)$$
where D is an N × N diagonal matrix, whose n-th diagonal element is the total number of counts
in Vn and H = GD. The astute reader might recognize the decomposition of equation (4) as Nonnegative matrix factorization (NMF; [8]). In fact equations (2) and (3) can be shown to be equivalent
to one of the standard update rules for NMF.
Representing the decomposition in matrix form immediately reveals one of the shortcomings of the
basic model. If R, the number of basis distributions, is equal to F , then a trivial solution exists
that achieves perfect decomposition: W = I; H = V, where I is the identity matrix (although the
algorithm may not always arrive at this solution). However, this solution is no longer of any utility
to us since our aim is to derive basis distributions that are characteristic of the data, whereas the
[Figure 2 shows the 2-simplex with seven basis vectors labeled A through G, the simplex boundary, the data points, and the enclosing triangles for the point '+': ABG, ABD, ABE, ACG, ACD, ACE, ACF.]
Figure 2: Illustration of the effect of sparsifying H on the dataset shown in Figure 1. A-G represent
7 basis distributions. The '+' represents a typical data point. It can be accurately represented by any set of three or more bases that form an enclosing polygon and there are many such polygons. However, if we restrict the number of bases used to enclose '+' to be minimized, only the 7 enclosing triangles shown remain as valid solutions. By further imposing the restriction that the entropy of the mixture weights with which the bases (corners) must be combined to represent '+' must be
minimum, only one triangle is obtained as the unique optimal enclosure.
columns of W in this trivial solution are not specific to any data, but represent the dimensions of
the space the data lie in. For overcomplete decompositions where R > F , the solution becomes
indeterminate: multiple perfect decompositions are possible.
The indeterminacy of the overcomplete decomposition can, however, be greatly reduced by imposing a restriction that the approximation for any V̄_n must employ the minimum number of basis distributions required. By further imposing the constraint that the entropy of g_n must be minimized,
the indeterminacy of the solution can often be eliminated as illustrated by Figure 2. This principle,
which is related to the concept of sparse coding [5], is what we will use to derive overcomplete sets
of basis distributions for the data.
3 Sparsity in the Latent Variable Model
Sparse coding refers to a representational scheme where, of a set of components that may be combined to compose data, only a small number are combined to represent any particular input. In the
context of basis decompositions, the goal of sparse coding is to find a set of bases for any data set
such that the mixture weights with which the bases are combined to compose any data are sparse.
Different metrics have been used to quantify the sparsity of the mixture weights in the literature.
Some approaches minimize variants of the Lp norm of the mixture weights (e.g. [7]) while other
approaches minimize various approximations of the entropy of the mixture weights.
In our approach, we use entropy as a measure of sparsity. We use the entropic prior, which has
been used in the maximum entropy literature (see [9]) to manipulate entropy. Given a probability distribution θ, the entropic prior is defined as $P_e(\theta) \propto e^{-\beta H(\theta)}$, where $H(\theta) = -\sum_i \theta_i \log \theta_i$ is the entropy of the distribution and β is a weighting factor. Positive values of β favor distributions with lower entropies while negative values of β favor distributions with higher entropies. Imposing this prior during maximum a posteriori estimation is a way to manipulate the entropy of the distribution. The distribution θ could correspond to the basis distributions P(f|z) or the mixture weights P_n(z) or both. A sparse code would correspond to having the entropic prior on P_n(z) with a positive value for β. Below, we consider the case where both the basis vectors and mixture weights have the
entropic prior to keep the exposition general.
3.1 Parameter Estimation
We use the EM algorithm to derive the update equations. Let us examine the case where both
P(f|z) and P_n(z) have the entropic prior. The set of parameters to be estimated is given by Θ = {P(f|z), P_n(z)}. The a priori distribution over the parameters, P(Θ), corresponds to the entropic priors. We can write log P(Θ), the log-prior, as
$$\alpha \sum_z \sum_f P(f|z) \log P(f|z) \;+\; \beta \sum_n \sum_z P_n(z) \log P_n(z), \qquad (5)$$
[Figure 3 shows six panels over the 2-simplex: a set of 3 basis vectors, sets of 7 basis vectors without sparsity, sets of 7 basis vectors with sparsity parameters 0.01, 0.05 and 0.3, and a set of 10 basis vectors.]
Figure 3: Illustration of the effect of sparsity on the synthetic data set from Figure 1. For visual
clarity, we do not display the data points.
Top panels: Decomposition without sparsity. Sets of 3 (left), 7 (center), and 10 (right) basis distributions were obtained from the data without employing sparsity. In each case, 20 runs of the
estimation algorithm were performed from different initial values. The convex hulls formed by the
bases from each of these runs are shown in the panels from left to right. Notice that increasing the
number of bases enlarges the sizes of convex hulls, none of which characterize the distribution of
the data well.
Bottom panels: Decomposition with sparsity. The panels from left to right show the 20 sets of
estimates of 7 basis distributions, for increasing values of the sparsity parameter for the mixture
weights. The convex hulls quickly shrink to compactly enclose the distribution of the data.
where α and β are parameters indicating the degree of sparsity desired in P(f|z) and P_n(z) respectively. As before, we can write the E-step as
$$P_n(z|f) = \frac{P_n(z)\, P(f|z)}{\sum_z P_n(z)\, P(f|z)}. \qquad (6)$$
The M-step reduces to the equations
$$\frac{\omega}{P(f|z)} + \alpha + \alpha \log P(f|z) + \lambda_z = 0, \qquad \frac{\rho}{P_n(z)} + \beta + \beta \log P_n(z) + \lambda_n = 0 \qquad (7)$$
where we have let ω represent $\sum_n V_{fn} P_n(z|f)$, ρ represent $\sum_f V_{fn} P_n(z|f)$, and λ_z, λ_n are Lagrange multipliers. The above M-step equations are systems of simultaneous transcendental equations for P(f|z) and P_n(z). Brand [3] proposes a method to solve such equations using the Lambert
W function [4]. It can be shown that P (f |z) and Pn (z) can be estimated as
$$\hat{P}(f|z) = \frac{-\omega/\alpha}{\mathcal{W}\!\left(-\omega\, e^{1+\lambda_z/\alpha}/\alpha\right)}, \qquad \hat{P}_n(z) = \frac{-\rho/\beta}{\mathcal{W}\!\left(-\rho\, e^{1+\lambda_n/\beta}/\beta\right)}. \qquad (8)$$
Equations (7), (8) form a set of fixed-point iterations that typically converge in 2-5 iterations [3].
The final update equations are given by equation (6), and the fixed-point equation-pairs (7), (8). Details of the derivation are provided in supplemental material. Notice that the above equations reduce
to the maximum likelihood updates of equations (2) and (3) when α and β are set to zero. More
generally, the EM algorithm aims to minimize the KL distance between the true distribution of the
data and that of the model, i.e. it attempts to arrive at a model that conserves the entropy of the data,
subject to the a priori constraints. Consequently, reducing entropy of the mixture weights Pn (z) to
obtain a sparse code results in increased entropy (information) of basis distributions P (f |z).
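As an illustration of the fixed-point scheme, the sparse M-step for a single mixture-weight distribution might be sketched as below with SciPy's Lambert W. The averaged Lagrange-multiplier update, the choice of the k = -1 branch for β > 0, and the clipping that keeps the argument inside W's real domain are our assumptions following Brand [3], not a literal transcription of the authors' implementation.

```python
import numpy as np
from scipy.special import lambertw

def sparse_mstep(rho, beta, n_iter=5):
    """Solve eq. (7) for one distribution p, given expected counts rho
    (rho_z = sum_f V_fn P_n(z|f)) and sparsity parameter beta > 0."""
    p = rho / rho.sum()                      # ML estimate seeds the fixed point
    for _ in range(n_iter):
        lam = np.mean(-beta - beta * np.log(p) - rho / p)   # from eq. (7)
        z = -(rho / beta) * np.exp(1.0 + lam / beta)
        z = np.clip(z, -1.0 / np.e + 1e-12, -1e-300)        # keep W real-valued
        p = (-rho / beta) / np.real(lambertw(z, k=-1))      # eq. (8)
        p = np.maximum(p, 1e-12)
        p /= p.sum()                         # project back onto the simplex
    return p
```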
3.2 Illustration of the Effect of Sparsity
The effect and utility of sparse overcomplete representations is demonstrated by Figure 3. In this
example, the data (from Figure 1) have four distinct quadrilaterally located clusters. This structure
cannot be accurately represented by three or fewer basis distributions, since they can, at best specify
[Figure 4 shows three panels: A. Occluded Faces, B. Reconstructions, C. Original Test Images.]
Figure 4: Application of latent variable decomposition for reconstructing faces from occluded images (CBCL Database). (A) Example of a random subset of 36 occluded test images. Four 6 × 6 patches were removed from the images in several randomly chosen configurations (corresponding to the rows). (B) Reconstructed faces from a sparse-overcomplete basis set of 1000 learned components (sparsity parameter = 0.1). (C) Original test images shown for comparison.
a triangular simplex, as demonstrated by the top left panel in the figure. Simply increasing the number of bases without constraining the sparsity of the mixture weights does not provide meaningful
solutions. However, increasing the sparsity quickly results in solutions that accurately characterize
the distribution of the data.
A clearer intuition is obtained when we consider the matrix form of the decomposition in Equation
4. The goal of the decomposition is often to identify a set of latent distributions that characterize
the underlying process that generated the data V. When no sparsity is enforced on the solution, the
trivial solution W = I, H = V is obtained at R = F . In this solution, the entire information in
V is borne by H and the bases W become uninformative, i.e. they no longer contain information
about the underlying process.
However, by enforcing sparsity on H the information in V is transferred back to W, and non-trivial
solutions are possible for R > F . As R increases, however, W become more and more data-like.
At R = N another trivial solution is obtained: W = V, and H = D (i.e. G = I). The columns of
W now simply represent scaled versions of the specific data V rather than the underlying process.
For R > N the solutions will now become indeterminate. By enforcing sparsity, we have thus
increased the implicit limit on the number of bases that can be estimated without indeterminacy
from the smaller dimension of V to the larger one.
4 Experimental Evaluation
We hypothesize that if the learned basis distributions are characteristic of the process that generates
the data, they must not only generalize to explain new data from the process, but also enable prediction of components of the data that were not observed. Secondly, the bases for a given process must
be worse at explaining data that have been generated by any other process. We test both these hypotheses below. In both experiments we utilize images, which we interpret as histograms of repeated
draws of pixels, where each draw corresponds to a quantum of intensity.
4.1 Face Reconstruction
In this experiment we evaluate the ability of the overcomplete bases to explain new data and predict
the values of unobserved components of the data. Specifically, we use it to reconstruct occluded
portions of images. We used the CBCL database consisting of 2429 frontal view face images hand-aligned in a 19 × 19 grid. We preprocessed the images by linearly scaling the grayscale intensities
so that pixel mean and standard deviation was 0.25, and then clipped them to the range [0, 1].
2000 images were randomly chosen as the training set. 100 images from the remaining 429 were
randomly chosen as the test set. To create occluded test images, we removed 6 × 6 grids in ten random configurations for 10 test faces each, resulting in 100 occluded images. We created 4 sets of test images, where each set had one, two, three or four 6 × 6 patches removed. Figure 4A represents
the case where 4 patches were removed from each face.
In a training stage, we learned sets of K ∈ {50, 200, 500, 750, 1000} basis distributions from the
training data. Sparsity was not used in the compact (R < F ) case (50 and 200 bases) and sparsity
[Figure 5 shows, for each of two panels, a row of basis vectors, the corresponding mixture weights, and the resulting pixel image.]
Figure 5: 25 Basis distributions (represented as images) extracted for class '2' from training data without sparsity on mixture weights (Left Panel, sparsity parameter = 0) and with sparsity on mixture weights (Right Panel, sparsity parameter = 0.2). Basis images combine in proportion to the mixture weights shown to result in the pixel images shown.
[Figure 6 shows three panels of basis images for sparsity parameters β = 0, β = 0.2 and β = 0.5.]
Figure 6: 25 basis distributions learned from training data for class '3' with increasing sparsity parameters on the mixture weights. The sparsity parameter was set to 0, 0.2 and 0.5 respectively. Increasing the sparsity parameter of mixture weights produces bases which are holistic representations of the input (histogram) data instead of parts-like features.
was imposed (parameter = 0.1) on the mixture weights in the overcomplete cases (500, 750 and 1000
basis vectors).
The procedure for estimating the occluded regions of a test image has two steps. In the first step,
we estimate the distribution underlying the image as a linear combination of the basis distributions.
This is done by iterations of Equations 2 and 3 to estimate Pn (z) (the bases P (f |z), being already
known, stay fixed) based only on the pixels that are observed (i.e. we marginalize out the occluded
pixels). The combination of the bases P (f |z) and the estimated Pn (z) give us the overall distribution
P_n(f) for the image. The occluded pixel value at any pixel f is estimated as the expected number of counts at that pixel, given by $P_n(f)\left(\sum_{f' \in \{F_o\}} V_{f'}\right) \big/ \left(\sum_{f' \in \{F_o\}} P_n(f')\right)$, where V_f represents the value of the image at the f-th pixel and {F_o} is the set of observed pixels. Figure 4B shows the
reconstructed faces for the sparse-overcomplete case of 1000 basis vectors. Figure 7A summarizes
the results for all cases. Performance is measured by mean Signal-to-Noise-Ratio (SNR), where
SNR for an image was computed as the ratio of the sum of squared pixel intensities of the original
image to the sum of squared error between the original image pixels and the reconstruction.
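A sketch of this two-step procedure, with our own variable names (obs is a boolean mask over pixels): the mixture weights are re-estimated from the observed pixels only, with the bases held fixed, and the occluded pixels are then filled with their expected counts.

```python
def reconstruct(v, W, obs, n_iter=50):
    """v: length-F count vector with occlusions; W: F x R bases P(f|z);
    obs: boolean mask marking the observed pixels."""
    F, R = W.shape
    h = np.full(R, 1.0 / R)                   # P_n(z), initialized uniform
    for _ in range(n_iter):
        p = W @ h                             # P_n(f)
        q = np.zeros(F)
        q[obs] = v[obs] / np.maximum(p[obs], 1e-12)
        h *= W.T @ q                          # update driven by observed pixels only
        h /= h.sum()
    p = W @ h
    v_hat = v.astype(float).copy()
    v_hat[~obs] = p[~obs] * v[obs].sum() / p[obs].sum()   # expected counts
    return v_hat
```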
4.2 Handwritten Digit Classification
In this experiment we evaluate the specificity of the bases to the process represented by the training
data set, through a simple example of handwritten digit classification. We used the USPS Handwritten Digits database which has 1100 examples for each digit class. We randomly chose 100 examples
from each class and separated them as the test set. The remaining examples were used for training.
During training, separate sets of basis distributions P k (f |z) were learned for each class, where k
represents the index of the class. Figure 5 shows 25 bases images extracted for the digit '2'. To
classify any test image v, we attempted to compute the distribution underlying the image using the
bases for each class (by estimating the mixture weights Pvk (z), keeping the bases fixed, as before).
The 'match' of the bases to the test instance was indicated by the likelihood L_k of the image computed using $P^k(f) = \sum_z P^k(f|z)\, P_v^k(z)$ as $L_k = \sum_f v_f \log P^k(f)$. Since we expect the bases for
the true class of the image to best compose it, we expect the likelihood for the correct class to be
maximum. Hence, the image v was assigned to the class for which likelihood was the highest.
[Figure 7 shows two panels: A. Reconstruction Experiment, plotting mean SNR against the number of basis components (50, 200, 500, 750, 1000) for 1, 2, 3 and 4 deleted patches; B. Classification Experiment, plotting percentage error against the sparsity parameter (0, 0.05, 0.1, 0.2, 0.3) for 25, 50, 75, 100 and 200 basis distributions.]
Figure 7: (A). Results of the face Reconstruction experiment. Mean SNR of the reconstructions is
shown as a function of the number of basis vectors and the test case (number of deleted patches,
shown in the legend). Notice that the sparse-overcomplete codes consistently perform better than
the compact codes. (B). Results of the classification experiment. The legend shows number of
basis distributions used. Notice that imposing sparsity almost always leads to better classification
performance. In the case of 100 bases, error rate comes down by almost 50% when a sparsity
parameter of 0.3 is imposed.
Results are shown in Figure 7B. As one can see, imposing sparsity improves classification performance in almost all cases. Figure 6 shows three sets of basis distributions learned for class '3' with
different sparsity values on the mixture weights. As the sparsity parameter is increased, bases tend
to be holistic representations of the input histograms. This is consistent with improved classification
performance: as the representation of basis distributions gets more holistic, the more unlike they
become when compared to bases of other classes. Thus, there is a lesser chance that the bases of
one class can compose an image in another class, thereby improving performance.
5 Conclusions
In this paper, we have presented an algorithm for sparse extraction of overcomplete sets of latent
distributions from histogram data. We have used entropy as a measure of sparsity and employed
the entropic prior to manipulate the entropy of the estimated parameters. We showed that sparse-overcomplete components can lead to an improved characterization of data and can be used in applications such as classification and inference of missing data. We believe further improved characterization may be achieved by the imposition of additional priors that represent known or hypothesized
structure in the data, and will be the focus of future research.
References
[1] DM Blei and JD Lafferty. Correlated Topic Models. In NIPS, 2006.
[2] DM Blei, AY Ng, and MI Jordan. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
[3] ME Brand. Pattern Discovery via Entropy Minimization. In Uncertainty 99: AISTATS 99, 1999.
[4] RM Corless, GH Gonnet, DEG Hare, DJ Jeffrey, and DE Knuth. On the Lambert W Function. Advances in Computational Mathematics, 1996.
[5] DJ Field. What is the Goal of Sensory Coding? Neural Computation, 1994.
[6] T Hofmann. Unsupervised Learning by Probabilistic Latent Semantic Analysis. Machine Learning, 42:177-196, 2001.
[7] PO Hoyer. Non-negative Matrix Factorization with Sparseness Constraints. Journal of Machine Learning Research, 5, 2004.
[8] DD Lee and HS Seung. Algorithms for Non-negative Matrix Factorization. In NIPS, 2001.
[9] J Skilling. Classic Maximum Entropy. In J Skilling, editor, Maximum Entropy and Bayesian Methods. Kluwer Academic, 1989.
Second Order Bilinear Discriminant Analysis for
single-trial EEG analysis
Christoforos Christoforou
Department of Computer Science
The Graduate Center of the City University of New York
365 Fifth Avenue
New York, NY 10016-4309
[email protected]
Paul Sajda
Department of Biomedical Engineering
Columbia University
351 Engineering Terrace Building, MC 8904
1210 Amsterdam Avenue
New York, NY 10027
[email protected]
Lucas C. Parra
Department of Biomedical Engineering
The City College of The City University of New York
Convent Avenue 138th Street
New York,NY 10031, USA
[email protected]
Abstract
Traditional analysis methods for single-trial classification of electroencephalography (EEG) focus on two types of paradigms: phase locked
methods, in which the amplitude of the signal is used as the feature for classification, e.g. event related potentials; and second order methods, in which the feature
of interest is the power of the signal, e.g. event related (de)synchronization. The
procedure for deciding which paradigm to use is ad hoc and is typically driven
by knowledge of the underlying neurophysiology. Here we propose a principled
method, based on a bilinear model, in which the algorithm simultaneously learns
the best first and second order spatial and temporal features for classification of
EEG. The method is demonstrated on simulated data as well as on EEG taken
from a benchmark data set used to test classification algorithms for brain computer
interfaces.
1 Introduction
1.1 Utility of discriminant analysis in EEG
Brain computer interface (BCI) algorithms [1][2][3][4] aim to decode brain activity, on a single-trial basis, in order to provide a direct control pathway between a user's intentions and a computer.
Such an interface could provide 'locked-in' patients a more direct and natural control over a neuroprosthesis or other computer applications [2]. Further, by providing an additional communication
channel for healthy individuals, BCI systems can be used to increase productivity and efficiency in
high-throughput tasks [5, 6].
Single-trial discriminant analysis has also been used as a research tool to study the neural correlates
of behavior. By extracting activity that differs maximally between two experimental conditions, the
typically low signal-noise ratio of EEG can be overcome. The resulting discriminant components
can be used to identify the spatial origin and time course of stimulus/response specific activity,
while the improved SNR can be leveraged to correlate variability of neural activity across trials to
behavioral variability and behavioral performance [7, 5]. In essence, discriminant analysis adds to
the existing set of multi-variate statistical tools commonly used in neuroscience research (ANOVA,
Hotelling T², Wilks' Λ test).
1.2 Linear and quadratic approaches
In EEG the signal-to-noise ratio of individual channels is low, often at -20dB or less. To overcome
this limitation, all analysis methods perform some form of averaging, either across repeated trials,
across time, or across electrodes. Traditional EEG analysis averages signals across many repeated
trials for individual electrodes. A conventional method is to average the measured potentials following stimulus presentation, thereby canceling uncorrelated noise that is not reproducible from one
trial to the next. This averaged activity, called an event related potential (ERP), captures activity that
is time-locked to the stimulus presentation but cancels evoked oscillatory activity that is not locked
in phase to the timing of the stimulus. Alternatively, many studies compute the oscillatory activity
in specific frequency bands by filtering and squaring the signal prior to averaging. Thus, changes in
oscillatory activity are termed event related synchronization or desynchronization (ERS/ERD).
Surprisingly, discriminant analysis methods developed thus far by the machine learning community
have followed this dichotomy: First order methods in which the amplitude of the EEG signal is
considered to be the feature of interest in classification (corresponding to ERP) and second order methods in which the power of the feature is considered to be of importance for classification (corresponding to ERS/ERD). First order methods include temporal filtering + thresholding [2],
hierarchical linear classifiers [5] and bilinear discriminant analysis [8, 9]. Second order methods
include the logistic regression with a quadratic term [11] and the well known common spatial patterns method (CSP) [10] and its variants: common spatio-spectral patterns (CSSP)[12], and common
sparse spectral spatial patterns (CSSSP)[13] .
Choosing what kind of features to use traditionally has been an ad hoc process motivated by knowledge of the underlying neurophysiology and task. From a machine-learning point of view, it seems
limiting to commit a priori to only one type of feature. Instead it would be desirable for the analysis
method to extract the relevant neurophysiological activity de novo with minimal prior expectations.
In this paper we present a new framework that combines both the first order features and the second order features in the analysis of EEG. We use a bilinear formulation which can simultaneously
extract spatial linear components as well as temporal (filtered) features.
2 Second order bilinear discriminant analysis
2.1 Problem setting
Given a set of sample points D = {(X_n, y_n)}_{n=1}^N, X_n ∈ R^{D×T}, y_n ∈ {−1, 1}, where X_n corresponds to the EEG signal of D channels and T sample points and y_n indicates the class that corresponds to one of two conditions (e.g. right or left hand imaginary movement, stimulus versus control conditions, etc.), the task is then to predict the class label y for an unobserved trial X.
2.2 Second order bilinear model
Define a function

    f(X; θ) = C Trace(U^T X V) + (1 − C) Trace(Λ A^T (XB)(XB)^T A)    (1)

where θ = {U ∈ R^{D×R}, V ∈ R^{T×R}, A ∈ R^{D×K}, B ∈ R^{T×T′}} are the parameters of the model, Λ ∈ diag({−1, 1}) a given diagonal matrix with elements {−1, 1}, and C ∈ [0, 1]. We consider the
following discriminative model; we model the log-odds ratio of the posterior class probability to be the sum of a bilinear function with respect to the EEG signal amplitude and linear with respect to the second order statistics of the EEG signal:

    log [ P(y = +1 | X) / P(y = −1 | X) ] = f(X; θ)    (2)
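For concreteness, here is a minimal NumPy sketch (ours, not the authors' code) of evaluating f(X; θ) from equation (1); the sizes and the diagonal Λ below are illustrative assumptions.

    import numpy as np

    def f(X, U, V, A, B, Lam, C):
        # First order term: C * Trace(U^T X V), linear in the signal amplitude.
        first = C * np.trace(U.T @ X @ V)
        # Second order term: (1 - C) * Trace(Lam A^T (XB)(XB)^T A), the power term.
        XB = X @ B
        second = (1.0 - C) * np.trace(Lam @ A.T @ XB @ XB.T @ A)
        return first + second

    D, T, R, K, Tp = 28, 50, 1, 2, 50  # assumed dimensions
    rng = np.random.default_rng(0)
    X = rng.standard_normal((D, T))
    U, V = rng.standard_normal((D, R)), rng.standard_normal((T, R))
    A, B = rng.standard_normal((D, K)), rng.standard_normal((T, Tp))
    Lam = np.diag([1.0, -1.0])         # diagonal matrix with entries in {-1, +1}
    print(f(X, U, V, A, B, Lam, C=0.5))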
2.2.1 Interpretation of the model
The first term of equation (1) can be interpreted as a spatio-temporal projection of the signal, under the bilinear model, and captures the first order statistics of the signal. Specifically, the columns u_r of U represent R linear projections in space (rows of X). Similarly, each of the R columns v_k of matrix V represents a linear projection in time (columns of X). By re-writing the term as
    Trace(U^T X V) = Trace(V U^T X) = Trace(W^T X)    (3)

where we defined W = U V^T, it is easy to see that the bilinear projection is a linear combination
of elements of X with the rank ≤ R constraint on W. This expression is linear in X and thus directly captures the amplitude of the signal. In particular, the polarity of the signal (positive evoked response versus negative evoked response) will contribute significantly to discrimination if it is consistent across trials. This term, therefore, captures phase locked event related potentials in the EEG signal.
The second term of equation (1) is a projection of the power of the filtered signal, which captures the second order statistics of the signal. As before, each column of matrices A and B represents components that project the data in space and time respectively. Depending on the structure one enforces in matrix B, different interpretations of the model can be achieved. In the general case where no structure on B is assumed, the model captures a linear combination of the elements of a rank ≤ T′ second order matrix approximation of the signal Σ = XB(XB)^T. In the case where Toeplitz structure is enforced on B, then B defines a temporal filter on the signal and the model captures the linear combination of the power of the second order matrix of the filtered signal. For example, if B is fixed to a Toeplitz matrix with coefficients corresponding to an 8 Hz–12 Hz band-pass filter, then the second term is able to extract differences in the alpha-band, which is known to be modulated during motor related tasks. Further, by learning B from the data, we may be able to identify new frequency bands that have so far not been identified in novel experimental paradigms. The spatial weights A together with the Trace operation ensure that the power is measured, not in individual electrodes, but in some component space that may reflect activity distributed across several electrodes.
Finally, the scaling factor Λ (which may seem superfluous given the available degrees of freedom) is necessary once regularization terms are added to the log-likelihood function.
2.3 Logistic regression
We use a logistic regression (LR) formalism as it is particularly convenient when imposing additional statistical properties on the matrices U, V, A, B, such as smoothness or sparseness. In addition, in our experience, LR performs well in strongly overlapping high-dimensional datasets and is insensitive to outliers, the latter being of particular concern when including quadratic features.
Under the logistic regression model (2) the class posterior probability P(y | X; θ) is modeled as

    P(y | X; θ) = 1 / (1 + e^{−y(f(X;θ) + w_0)})    (4)
and the resulting log likelihood is given by

    L(θ) = − Σ_{n=1}^N log(1 + e^{−y_n (f(X_n; θ) + w_0)})    (5)
We minimize the negative log likelihood and add a log-prior on each of the columns of U, V and A and on the parameters of B that acts as a regularization term, which is written as:

    argmin_{U,V,A,B,w_0} [ −L(θ) − Σ_{r=1}^R (log p(u_r) + log p(v_r)) − Σ_{k=1}^K log p(a_k) − Σ_{t=1}^{T′} log p(b_t) ]    (6)
where the log-priors are given for each of the parameters as log p(u_k) = u_k^T K^{(u)} u_k, log p(v_k) = v_k^T K^{(v)} v_k, log p(a_k) = a_k^T K^{(a)} a_k and log p(b_k) = b_k^T K^{(b)} b_k. K^{(u)} ∈ R^{D×D}, K^{(v)} ∈ R^{T×T}, K^{(a)} ∈ R^{D×D}, K^{(b)} ∈ R^{T×T} are kernel matrices that control the smoothness of the parameter space. Details on the regularization procedure can be found in [8].
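As an illustration only, the penalized objective of equations (5)–(6) can be written as in the sketch below; using identity kernel matrices K^{(u)} = ... = I is an assumption made to keep the example short.

    import numpy as np

    def penalized_nll(Xs, ys, U, V, A, B, w0, f_fn, reg=1.0):
        # Negative log likelihood, equation (5); f_fn evaluates equation (1).
        nll = sum(np.log1p(np.exp(-y * (f_fn(X, U, V, A, B) + w0)))
                  for X, y in zip(Xs, ys))
        # Quadratic log-priors of equation (6) with identity kernels (assumed).
        penalty = sum(np.sum(M * M) for M in (U, V, A, B))
        return nll + reg * penalty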
Analytic gradients of the log likelihood (5) with respect to the various parameters are given by:

    ∂L(θ)/∂u_r = Σ_{n=1}^N y_n π(X_n) X_n v_r    (7)

    ∂L(θ)/∂v_r = Σ_{n=1}^N y_n π(X_n) X_n^T u_r    (8)

    ∂L(θ)/∂a_r = 2 Σ_{n=1}^N y_n π(X_n) Λ_{r,r} (X_n B)(X_n B)^T a_r    (9)

    ∂L(θ)/∂b_t = 2 Σ_{n=1}^N y_n π(X_n) X_n^T A Λ A^T X_n b_t    (10)

where we define

    π(X_n) = 1 − P(y|X_n) = e^{−y_n(f(X_n;θ)+w_0)} / (1 + e^{−y_n(f(X_n;θ)+w_0)})    (11)

and where u_i, v_i, a_i, and b_i correspond to the i-th columns of U, V, A, B respectively.
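A sketch (ours) of these gradient computations for one column index r; Λ, C and w_0 are held fixed and the prior terms are omitted.

    import numpy as np

    def pi(X, y, U, V, A, B, Lam, C, w0, f_fn):
        # Equation (11): pi(X_n) = 1 - P(y | X_n).
        return 1.0 / (1.0 + np.exp(y * (f_fn(X, U, V, A, B, Lam, C) + w0)))

    def column_grads(Xs, ys, U, V, A, B, Lam, C, w0, f_fn, r):
        gu = np.zeros(U.shape[0])
        gv = np.zeros(V.shape[0])
        ga = np.zeros(A.shape[0])
        for X, y in zip(Xs, ys):
            p = y * pi(X, y, U, V, A, B, Lam, C, w0, f_fn)
            gu += p * (X @ V[:, r])                          # equation (7)
            gv += p * (X.T @ U[:, r])                        # equation (8)
            XB = X @ B
            ga += 2 * p * Lam[r, r] * (XB @ XB.T @ A[:, r])  # equation (9)
            # Equation (10), for column t of B, would read:
            # gb += 2 * p * (X.T @ A @ Lam @ A.T @ X @ B[:, t])
        return gu, gv, ga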
2.4 Fourier Basis for B
If matrix B is constrained to have a circular Toeplitz structure then it can be represented as B = F^{−1} D F, where F^{−1} denotes the inverse Fourier matrix, and D is a diagonal complex-valued matrix of Fourier coefficients. In such a case, we can re-write equations (9) and (10) as

    ∂L(θ)/∂a_r = 2 Σ_{n=1}^N y_n π(X_n) Λ_{r,r} (X_n F^{−1} D̃ F^{−T} X_n^T) a_r    (12)

    ∂L(θ)/∂d_i = 2 Σ_{n=1}^N y_n π(X_n) (F^{−T} X_n^T A Λ A^T X_n F^{−1})_{i,i} d_i    (13)

where D̃ = D D^T, and the parameters are now optimized with respect to the Fourier coefficients d_i = D_{i,i}. An iterative minimization procedure can be used to solve the above minimization.
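The circulant parameterization can be checked numerically; the following sketch (ours) verifies that multiplying by B = F^{−1} D F amounts to pointwise filtering in the Fourier domain. A real-valued filter would additionally require conjugate-symmetric coefficients d.

    import numpy as np

    T = 8
    rng = np.random.default_rng(1)
    d = rng.standard_normal(T) + 1j * rng.standard_normal(T)  # Fourier coefficients
    F = np.fft.fft(np.eye(T))                 # DFT matrix
    B = np.linalg.inv(F) @ np.diag(d) @ F     # circulant B = F^{-1} D F
    x = rng.standard_normal(T)
    # Multiplication by B equals filtering in the frequency domain:
    assert np.allclose(B @ x, np.fft.ifft(d * np.fft.fft(x)))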
3 Results
3.1 Simulated data
In order to validate our method and its ability to capture both linear and second order features, we generated simulated data that contained both types of features, namely ERP type features and ERS/ERD type features. The simulated signals were generated with a signal to noise ratio of −20 dB, which is a typical noise level for EEG. A total of 28 channels and 500 ms long signals at a sampling frequency of 100 Hz were generated, resulting in a matrix X of 28 by 50 elements for each trial. Data corresponding to a total of 1000 trials were generated; 500 trials contained only zero mean Gaussian noise (representing baseline conditions), with the other 500 trials having the signal of interest added to the noise (representing the stimulus condition): For channels 1–9 the signal was composed of a 10 Hz sinusoid with random phase in each of the nine channels, and across trials.
[Figure 1 graphic: four panels — U component (amplitude vs. channels), V component (amplitude vs. time in ms), A component (channels), B component (time in ms).]
Figure 1: Spatial and temporal component extracted on simulated data for the linear term (top) and
quadratic term (bottom).
The sinusoids were scaled to match the −20 dB SNR. This simulates an ERS type feature. For channels 10–18, a peak represented by a half cycle sinusoid was added at approximately 400 ms, which simulates an ERP type feature.
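The following sketch shows one way such data could be generated (our reading of the description above; the exact −20 dB scaling convention is an assumption):

    import numpy as np

    rng = np.random.default_rng(0)
    D, T, fs, n_trials = 28, 50, 100.0, 500
    t = np.arange(T) / fs                    # 500 ms at 100 Hz
    amp = np.sqrt(10 ** (-20 / 10))          # -20 dB relative to unit-variance noise

    def stimulus_trial():
        X = rng.standard_normal((D, T))      # zero mean Gaussian noise
        for c in range(9):                   # channels 1-9: 10 Hz, random phase
            X[c] += amp * np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi))
        peak = np.sin(np.pi * (t - 0.35) / 0.1)   # half-cycle peak near 400 ms
        peak[(t < 0.35) | (t > 0.45)] = 0.0
        X[9:18] += amp * peak                # channels 10-18: ERP-like feature
        return X

    baseline = [rng.standard_normal((D, T)) for _ in range(n_trials)]
    stimulus = [stimulus_trial() for _ in range(n_trials)]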
The extracted components are shown in Figure 1. The linear component U (in this case only a column vector) has non-zero coefficients for channels 10 to 18 only, showing that the method correctly
identified the ERP activity. Furthermore, the associated temporal component V has a temporal
profile that matches the time course of the simulated evoked response. Similarly, the second order
components A have non-zero weights for only channels 1-9 showing that the method also identified
the spatial distribution of the non-phase locked activity. The temporal filter B was trained in the
frequency domain and the resulting filter is shown here in the time domain. It exhibits a dominant
10Hz component, which is indeed the frequency of the non-phase locked activity.
3.2 BCI competition dataset
To evaluate the performance of the proposed method on real data we applied the algorithm to an
EEG data set that was made available through The BCI Competition 2003 ([14], Data Set IV).
EEG was recorded on 28 channels for a single subject performing self-paced key typing, that is,
pressing the corresponding keys with the index and little fingers in a self-chosen order and timing
(i.e. self-paced). Key-presses occurred at an average speed of 1 key per second. Trial matrices were extracted by epoching the data starting 630 ms before each key-press. A total of 416 epochs were recorded, each of length 500 ms. For the competition, the first 316 epochs were to be used for classifier training, while the remaining 100 epochs were to be used as a test set. Data were recorded at 1000 Hz with a pass-band between 0.05 and 200 Hz, then downsampled to a 100 Hz sampling rate. For this experiment, the matrix B was fixed to a Toeplitz structure that encodes a 10 Hz–33 Hz band-pass filter and only the parameters U, V, A and w_0 were trained. The number of columns of U and V was set to 1, while two columns were used for A. The temporal filter was selected based on prior knowledge of the relevant frequency band. This demonstrates the
flexibility of our approach to either incorporate prior knowledge when available or extract it from data otherwise. Regularization parameters were chosen via a five-fold cross validation procedure (details can be found in [8]). The resulting components for this dataset are shown in Figure 2.

[Figure 2 graphic: spatial component U and temporal component V (top); first and second columns of A (bottom), with time axes in ms.]
Figure 2: Spatial and temporal component (top), and two spatial components for second order features (bottom), learned on the benchmark dataset.
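One way (an illustration, not the authors' code) to build such a fixed Toeplitz B from a 10–33 Hz FIR band-pass filter:

    import numpy as np
    from scipy.linalg import toeplitz
    from scipy.signal import firwin

    fs, T = 100.0, 50
    h = firwin(numtaps=21, cutoff=[10, 33], pass_zero=False, fs=fs)
    col = np.zeros(T)
    col[:len(h)] = h
    B = toeplitz(col, np.zeros(T))   # each column of B is a shifted copy of h
    # For a trial X (D x T), X @ B then band-pass filters every channel
    # (up to the time-reversal convention of the shift).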
Benchmark performance was measured on the test set, which had not been used during either training or cross validation. The number of misclassified trials in the test set was 13, which places our method in a new first place given the results of the competition, which can be found online at http://ida.first.fraunhofer.de/projects/bci/competition_ii/results/index.html ([14]). Hence, our method works as a classifier producing a state-of-the-art result on a realistic data set. The receiver-operator characteristic (ROC) curves for cross validation and for the independent test set are shown in Figure 3. Figure 4 also shows the contribution of the linear and quadratic terms for every trial for the two types of key-presses. The figure shows that the two terms provide independent information and that in this case the optimal relative weighting factor is C ≈ 0.5.
4 Conclusion
In this paper we have presented a framework for uncovering spatial as well as temporal features in
EEG that combine the two predominant paradigms used in EEG analysis: event related potentials
and oscillatory power. These represent phase locked activity (where polarity of the activity matters),
and non-phase locked activity (where only the power of the signal is relevant). We used the probabilistic formalism of logistic regression that readily incorporates prior probabilities to regularize the
increased number of parameters. We have evaluated the proposed method on both simulated data,
and a real BCI benchmark dataset, achieving state-of-the-art classification performance.
The proposed method provides a basis for various future directions. For example, different sets of
basis functions (other than a Fourier basis) can be enforced on the temporal decomposition of the
data through the matrix B (e.g. wavelet basis). Further, the method can be easily generalized to
[Figure 3 graphic: two ROC curves, true positive rate vs. false positive rate — cross validation (AUC: 0.96) and independent test set (AUC: 0.935, 13 errors).]
Figure 3: ROC curve with area under the curve 0.96 for the cross validation on the benchmark dataset
(left). ROC curve with area under the curve 0.93, on the independent test set, for the benchmark
dataset. There were a total of 13 errors on unseen data, which is less than any of the results previously
reported, placing this method in first place in the benchmark ranking.
[Figure 4 graphic: scatter plots of the second order term vs. the first order term for the training set and the testing set.]
Figure 4: Scatter plot of the first order term vs second order term of the model, on the training and testing set for the benchmark dataset ('+' left key, and 'o' right key). It is clear that the two types of features contain independent information that can help improve the classification performance.
multi-class problems by using a multinomial distribution on y. Finally, different regularizations (e.g. L1 norm, L2 norm) can be applied to the different types of parameters of the model.
References
[1] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan. Brain-computer interfaces for communication and control. Clin Neurophysiol, 113(6):767–791, June 2002.
[2] N. Birbaumer, N. Ghanayim, T. Hinterberger, I. Iversen, B. Kotchoubey, A. Kubler, J. Perelmouter, E. Taub, and H. Flor. A spelling device for the paralysed. Nature, 398(6725):297–8, March 1999.
[3] B. Blankertz, G. Curio, and K.-R. Müller. Classifying single trial EEG: Towards brain computer interfacing. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14. MIT Press, 2002.
[4] B. Blankertz, G. Dornhege, C. Schäfer, R. Krepki, J. Kohlmorgen, K.-R. Müller, V. Kunzmann, F. Losch, and G. Curio. Boosting bit rates and error detection for the classification of fast-paced motor commands based on single-trial EEG analysis. IEEE Trans. Neural Sys. Rehab. Eng., 11(2):127–131, 2003.
[5] Adam D. Gerson, Lucas C. Parra, and Paul Sajda. Cortically-coupled computer vision for rapid image search. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 14:174–179, June 2006.
[6] Lucas C. Parra, Christoforos Christoforou, Adam D. Gerson, Mads Dyrholm, An Luo, Mark Wagner, Marios G. Philiastides, and Paul Sajda. Spatiotemporal linear decoding of brain state: Application to performance augmentation in high-throughput tasks. IEEE Signal Processing Magazine, January 2008.
[7] Marios G. Philiastides, Roger Ratcliff, and Paul Sajda. Neural representation of task difficulty and decision making during perceptual categorization: A timing diagram. Journal of Neuroscience, 26(35):8965–8975, August 2006.
[8] Mads Dyrholm, Christoforos Christoforou, and Lucas C. Parra. Bilinear discriminant component analysis. J. Mach. Learn. Res., 8:1097–1111, 2007.
[9] Ryota Tomioka and Kazuyuki Aihara. Classifying matrices with a spectral regularization. In 24th International Conference on Machine Learning, 2007.
[10] H. Ramoser, J. Müller-Gerking, and G. Pfurtscheller. Optimal spatial filtering of single trial EEG during imagined hand movement. IEEE Trans. Rehab. Eng., 8:441–446, December 2000.
[11] Ryota Tomioka, Kazuyuki Aihara, and Klaus-Robert Müller. Logistic regression for single trial EEG classification. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 1377–1384. MIT Press, Cambridge, MA, 2007.
[12] S. Lemm, B. Blankertz, G. Curio, and K.-R. Müller. Spatio-spectral filters for improving the classification of single trial EEG. IEEE Trans Biomed Eng., 52(9):1541–8, 2005.
[13] G. Dornhege, B. Blankertz, M. Krauledat, F. Losch, G. Curio, and K.-R. Müller. Combined optimization of spatial and temporal filters for improving brain-computer interfacing. IEEE Trans. Biomed. Eng., 2006.
[14] B. Blankertz, K.-R. Müller, G. Curio, T. M. Vaughan, G. Schalk, J. R. Wolpaw, A. Schlögl, C. Neuper, G. Pfurtscheller, T. Hinterberger, M. Schröder, and N. Birbaumer. The BCI competition 2003: progress and perspectives in detection and discrimination of EEG single trials. IEEE Transactions on Biomedical Engineering, 51(6):1044–1051, 2004.
2,466 | 3,237 | Learning Horizontal Connections in a Sparse Coding
Model of Natural Images
Pierre J. Garrigues
Department of EECS
Redwood Center for Theoretical Neuroscience
Univ. of California, Berkeley
Berkeley, CA 94720
[email protected]
Bruno A. Olshausen
Helen Wills Neuroscience Inst.
School of Optometry
Redwood Center for Theoretical Neuroscience
Univ. of California, Berkeley
Berkeley, CA 94720
[email protected]
Abstract
It has been shown that adapting a dictionary of basis functions to the statistics
of natural images so as to maximize sparsity in the coefficients results in a set
of dictionary elements whose spatial properties resemble those of V1 (primary visual cortex) receptive fields. However, the resulting sparse coefficients still exhibit
pronounced statistical dependencies, thus violating the independence assumption
of the sparse coding model. Here, we propose a model that attempts to capture
the dependencies among the basis function coefficients by including a pairwise
coupling term in the prior over the coefficient activity states. When adapted to the
statistics of natural images, the coupling terms learn a combination of facilitatory
and inhibitory interactions among neighboring basis functions. These learned interactions may offer an explanation for the function of horizontal connections in
V1 in terms of a prior over natural images.
1 Introduction
Over the last decade, mathematical explorations into the statistics of natural scenes have led to the
observation that these scenes, as complex and varied as they appear, have an underlying structure that
is sparse [1]. That is, one can learn a possibly overcomplete basis set such that only a small fraction
of the basis functions is necessary to describe a given image, where the operation to infer this sparse
representation is non-linear. This approach is known as sparse coding. Exploiting this structure has
led to advances in our understanding of how information is represented in the visual cortex, since
the learned basis set is a collection of oriented, Gabor-like filters that resemble the receptive fields
in primary visual cortex (V1). The approach of using sparse coding to infer sparse representations
of unlabeled data is useful for classification as shown in the framework of self-taught learning [2].
Note that classification performance relies on finding "hard-sparse" representations where a few coefficients are nonzero while all the others are exactly zero.
An assumption of the sparse coding model is that the coefficients of the representation are independent. However, in the case of natural images, this is not the case. For example, the coefficients
corresponding to quadrature-pair or collinear Gabor filters are not independent. This has been shown
and modeled in the early work of [3], in the case of the responses of model complex cells [4],
feedforward responses of wavelet coefficients [5, 6, 7] or basis functions learned using independent component analysis [8, 9]. These dependencies are informative and exploiting them leads to
improvements in denoising performance [5, 7].
We develop here a generative model of image patches that does not make the independence assumption. The prior over the coefficients is a mixture of a Gaussian when the corresponding basis
function is active, and a delta function centered at zero when it is silent, as in [10]. We model the binary variables or "spins" that control the activation of the basis functions with an Ising model, whose coupling weights model the dependencies among the coefficients. The representations inferred by this model are also "hard-sparse", which is a desirable feature [2].
Our model is motivated in part by the architecture of the visual cortex, namely the extensive network
of horizontal connections among neurons in V1 [11]. It has been hypothesized that they facilitate
contour integration [12] and are involved in computing border ownership [13]. In both of these
models the connections are set a priori based on geometrical properties of the receptive fields. We
propose here to learn the connection weights in an unsupervised fashion. We hope with our model to
gain insight into the computations performed by this extensive collateral system and compare our
findings to known physiological properties of these horizontal connections. Furthermore, a recent
trend in neuroscience is to model networks of neurons using Ising models, and it has been shown
to predict remarkably well the statistics of groups of neurons in the retina [14]. Our model gives a
prediction for what is expected if one fits an Ising model to future multi-unit recordings in V1.
2 A non-factorial sparse coding model
Let x ∈ R^n be an image patch, where the x_i's are the pixel values. We propose the following generative model:

    x = Φa + ν = Σ_{i=1}^m a_i φ_i + ν,

where Φ = [φ_1 . . . φ_m] ∈ R^{n×m} is an overcomplete transform or basis set, and the columns φ_i are its basis functions. ν ∼ N(0, ε² I_n) is small Gaussian noise. Each coefficient a_i = ((s_i + 1)/2) u_i is a Gaussian scale mixture (GSM). We model the multiplier s with an Ising model, i.e. s ∈ {−1, 1}^m has a Boltzmann-Gibbs distribution p(s) = (1/Z) e^{(1/2) s^T W s + b^T s}, where Z is the normalization constant.
If the spin s_i is down (s_i = −1), then a_i = 0 and the basis function φ_i is silent. If the spin s_i is up (s_i = 1), then the basis function is active and the analog value of the coefficient a_i is drawn from a Gaussian distribution with u_i ∼ N(0, σ_i²). The prior on a can thus be described as a "hard-sparse" prior as it is a mixture of a point mass at zero and a Gaussian.
The corresponding graphical model is shown in Figure 1. It is a chain graph since it contains both undirected and directed edges. It bears similarities to [15], which however does not have the intermediate layer a and is not a sparse coding model. To sample from this generative model, one first obtains a sample s from the Ising model, then samples coefficients a according to p(a | s), and then x according to p(x | a) ∼ N(Φa, ε² I_n).
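A sketch of ancestral sampling from this generative model (the number of Gibbs sweeps is an arbitrary choice; W is assumed symmetric with zero diagonal):

    import numpy as np

    def sample_model(Phi, sigma, W, b, eps, n_sweeps=100, rng=None):
        rng = rng or np.random.default_rng()
        n, m = Phi.shape
        s = rng.choice([-1.0, 1.0], size=m)
        for _ in range(n_sweeps):            # Gibbs sampling of the Ising prior
            for i in range(m):
                field = W[i] @ s - W[i, i] * s[i] + b[i]
                p_up = 1.0 / (1.0 + np.exp(-2.0 * field))
                s[i] = 1.0 if rng.random() < p_up else -1.0
        u = sigma * rng.standard_normal(m)   # u_i ~ N(0, sigma_i^2)
        a = 0.5 * (s + 1.0) * u              # a_i = (s_i + 1)/2 * u_i
        x = Phi @ a + eps * rng.standard_normal(n)
        return x, a, s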
[Figure 1 graphic: chain graph with spins s_1, s_2, . . . , s_m coupled by weights W (e.g. W_{1m}, W_{2m}), coefficients a_1, a_2, . . . , a_m, basis Φ, and pixels x_1, x_2, . . . , x_n.]
Figure 1: Proposed graphical model
The parameters of the model to be learned from data are θ = (Φ, (σ_i²)_{i=1..m}, W, b). This model does not make any assumption about which linear code Φ should be used, or about which units should exhibit dependencies. The matrix W of the interaction weights in the Ising model describes these dependencies. W_ij > 0 favors positive correlations and thus corresponds to an excitatory connection, whereas W_ij < 0 corresponds to an inhibitory connection. A local magnetic field b_i < 0 favors the spin s_i to be down, which in turn makes the basis function φ_i mostly silent.
3 Inference and learning
3.1 Coefficient estimation
We describe here how to infer the representation a of an image patch x in our model. To do so, we first compute the maximum a posteriori (MAP) multiplier s (see Section 3.2). Indeed, a GSM model reduces to a linear-Gaussian model conditioned on the multiplier s, and therefore the estimation of a is easy once s is known.
Given s = ŝ, let Γ = {i : ŝ_i = 1} be the set of active basis functions. We know that ∀i ∉ Γ, a_i = 0. Hence, we have x = Φ_Γ a_Γ + ν, where a_Γ = (a_i)_{i∈Γ} and Φ_Γ = [(φ_i)_{i∈Γ}]. The model thus reduces to linear-Gaussian, where a_Γ ∼ N(0, H = diag((σ_i²)_{i∈Γ})). We have a_Γ | x, ŝ ∼ N(μ, K), where K = (ε^{−2} Φ_Γ^T Φ_Γ + H^{−1})^{−1} and μ = ε^{−2} K Φ_Γ^T x. Hence, conditioned on x and ŝ, the Bayes Least-Squares (BLS) and maximum a posteriori (MAP) estimators of a_Γ are the same and given by μ.
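A sketch (ours) of this conditional estimate:

    import numpy as np

    def map_coefficients(x, Phi, sigma2, s_hat, eps):
        on = np.flatnonzero(s_hat == 1)          # active set Gamma
        Phi_g = Phi[:, on]
        K = np.linalg.inv(Phi_g.T @ Phi_g / eps**2 + np.diag(1.0 / sigma2[on]))
        mu = K @ Phi_g.T @ x / eps**2            # posterior mean = MAP = BLS
        a = np.zeros(Phi.shape[1])
        a[on] = mu
        return a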
3.2 Multiplier estimation
The MAP estimate of s given x is given by ŝ = arg max_s p(s | x). Given s, x has a Gaussian distribution N(0, Σ), where Σ = ε² I_n + Σ_{i : s_i = 1} σ_i² φ_i φ_i^T. Using Bayes' rule, we can write p(s | x) ∝ p(x | s) p(s) ∝ e^{−E_x(s)}, where

    E_x(s) = (1/2) x^T Σ^{−1} x + (1/2) log det Σ − (1/2) s^T W s − b^T s.

We can thus compute the MAP estimate using Gibbs sampling and simulated annealing. In the Gibbs sampling procedure, the probability that node i changes its value from s_i to s̃_i given x, all the other nodes s_{−i}, and at temperature T is given by

    p(s_i → s̃_i | s_{−i}, x) = (1 + exp(−ΔE_x / T))^{−1},

where ΔE_x = E_x(s_i, s_{−i}) − E_x(s̃_i, s_{−i}). Note that computing E_x requires the inverse and the
? and ? be the covariance matrices corresponding to the
determinant of ?, which is expensive. Let ?
proposed state (s?i , s?i ) and current state (si , s?i ) respectively. They differ only by a rank 1 matrix,
? = ? + ??i ?T , where ? = 1 (s?i ? si )? 2 . Therefore, to compute ?Ex we can take advantage
i.e. ?
i
i
2
of the Sherman-Morrison formula
? ?1 = ??1 ? ???1 ?i (1 + ??Ti ??1 ?i )?1 ?Ti ??1
?
(1)
and of a similar formula for the log det term
? = log det ? + log 1 + ??Ti ??1 ?i .
log det ?
(2)
Using (1) and (2) ?Ex can be written as
T
?Ex =
?1
2
1
1 ?(x ? ?i )
? log 1 + ??Ti ??1 ?i
2 1 + ??Ti ??1 ?i
2
?
?
X
+ (s?i ? si ) ?
Wij sj + bi ? .
j6=i
The transition probabilities can thus be computed efficiently, and if a new state is accepted we update
? and ??1 using (1).
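A sketch (ours) of a single Gibbs update that maintains Σ^{−1} incrementally via (1); the acceptance rule follows the transition probability above.

    import numpy as np

    def gibbs_step(i, s, x, Phi, sigma2, W, b, Sinv, temp=1.0, rng=None):
        rng = rng or np.random.default_rng()
        lam = 0.5 * (-s[i] - s[i]) * sigma2[i]   # lam = (s~_i - s_i) sigma_i^2 / 2
        phi = Phi[:, i]
        v = Sinv @ phi
        c = 1.0 + lam * phi @ v
        dE = (0.5 * lam * (x @ v) ** 2 / c       # change of the quadratic term
              - 0.5 * np.log(c)                  # change of the log det term
              - 2.0 * s[i] * (W[i] @ s - W[i, i] * s[i] + b[i]))
        if rng.random() < 1.0 / (1.0 + np.exp(-dE / temp)):
            Sinv -= lam * np.outer(v, v) / c     # Sherman-Morrison update (1)
            s[i] = -s[i]                         # accept the spin flip
        return s, Sinv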
3.3 Model estimation
Given a dataset D = {x^{(1)}, . . . , x^{(N)}} of image patches, we want to learn the parameters θ = (Φ, (σ_i²)_{i=1..m}, W, b) that offer the best explanation of the data. Let p̃(x) = (1/N) Σ_{i=1}^N δ(x − x^{(i)}) be the empirical distribution. Since in our model the variables a and s are latent, we use a variational expectation maximization algorithm [16] to optimize θ, which amounts to maximizing a lower bound on the log-likelihood derived using Jensen's inequality

    log p(x | θ) ≥ Σ_s ∫_a q(a, s | x) log [ p(x, a, s | θ) / q(a, s | x) ] da,
where q(a, s | x) is a probability distribution. We restrict ourselves to the family of point mass distributions Q = {q(a, s | x) = δ(a − â) δ(s − ŝ)}, and with this choice the lower bound on the log-likelihood of D can be written as

    L(θ, q) = E_{p̃}[log p(x, â, ŝ | θ)]    (3)
            = E_{p̃}[log p(x | â, Φ)] + E_{p̃}[log p(â | ŝ, (σ_i²)_{i=1..m})] + E_{p̃}[log p(ŝ | W, b)]
            = L_Φ + L_σ + L_{W,b}.

We perform coordinate ascent in the objective function L(θ, q).
3.3.1 Maximization with respect to q
We want to solve max_{q∈Q} L(θ, q), which amounts to finding arg max_{a,s} log p(x, a, s) for every x ∈ D. This is computationally expensive since s is discrete. Hence, we introduce two phases in the algorithm.
In the first phase, we infer the coefficients in the usual sparse coding model where the prior over a is factorial, i.e. p(a) = Π_i p(a_i) ∝ Π_i exp{−λ S(a_i)}. In this setting, we have

    â = arg max_a p(x|a) Π_i e^{−λ S(a_i)} = arg min_a (1/(2ε²)) ‖x − Φa‖²_2 + λ Σ_i S(a_i).    (4)
With S(a_i) = |a_i|, (4) is known as basis pursuit denoising (BPDN), whose solution has been shown to be such that many coefficients of â are exactly zero [17]. This allows us to recover the sparsity pattern ŝ, where ŝ_i = 2·1[â_i ≠ 0] − 1 ∀i. BPDN can be solved efficiently using a competitive algorithm [18]. Another possible choice is S(a_i) = 1[a_i ≠ 0] (p(a_i) is not a proper prior though), where (4) is combinatorial and can be solved approximately using orthogonal matching pursuit (OMP) [19].
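As a sketch of phase 1, the sparsity pattern can be recovered with an off-the-shelf BPDN/Lasso solver (scikit-learn here; the paper itself uses a competitive algorithm [18] or OMP):

    import numpy as np
    from sklearn.linear_model import Lasso

    def phase1_pattern(x, Phi, lam, eps):
        # BPDN: min_a (1 / (2 eps^2)) ||x - Phi a||_2^2 + lam ||a||_1;
        # sklearn's alpha accounts for its 1/(2n) scaling of the square loss.
        model = Lasso(alpha=lam * eps**2 / len(x), fit_intercept=False)
        a_hat = model.fit(Phi, x).coef_
        s_hat = 2 * (a_hat != 0).astype(int) - 1   # s_i = 2 * 1[a_i != 0] - 1
        return a_hat, s_hat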
After several iterations of coordinate ascent and convergence of θ using the above approximation, we enter the second phase of the algorithm and refine θ by using the GSM inference described in Section 3.1, where ŝ = arg max_s p(s|x) and â = E[a | ŝ, x].
3.3.2 Maximization with respect to θ
We want to solve max_θ L(θ, q). Our choice of variational posterior allowed us to write the objective function as the sum of the three terms L_Φ, L_σ and L_{W,b} (3), and hence to decouple the variables Φ, (σ_i²)_{i=1..m} and (W, b) of our optimization problem.
Maximization of L_Φ. Note that L_Φ is the same objective function as in the standard sparse coding problem when the coefficients a are fixed. Let {â^{(i)}, ŝ^{(i)}} be the coefficients and multipliers corresponding to x^{(i)}. We have

    L_Φ = − (1/(2ε²)) Σ_{i=1}^N ‖x^{(i)} − Φ â^{(i)}‖²_2 − (Nn/2) log 2πε².

We add the constraint that ‖φ_i‖_2 ≤ 1 to avoid the spurious solution where the norm of the basis functions grows and the coefficients tend to 0. We solve this ℓ2-constrained least-squares problem using the Lagrange dual as in [20].
Maximization of L_σ. The problem of estimating σ_i² is a standard variance estimation problem for a 0-mean Gaussian random variable, where we only consider the samples â_i such that the spin ŝ_i is equal to 1, i.e.

    σ_i² = (1 / card{k : ŝ_i^{(k)} = 1}) Σ_{k : ŝ_i^{(k)} = 1} (â_i^{(k)})².
Maximization of L_{W,b}. This problem is tantamount to estimating the parameters of a fully visible Boltzmann machine [21], which is a convex optimization problem. We do gradient ascent in L_{W,b}, where the gradients are given by ∂L_{W,b}/∂W_{ij} = E_{p̃}[ŝ_i ŝ_j] − E_p[s_i s_j] and ∂L_{W,b}/∂b_i = E_{p̃}[ŝ_i] − E_p[s_i]. We use Gibbs sampling to obtain estimates of E_p[s_i s_j] and E_p[s_i].
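A sketch of this maximization; gibbs_samples is a hypothetical helper that would return spin samples from the current model (e.g. by sweeps like those in Section 3.2), and the learning rate and iteration counts are arbitrary choices.

    import numpy as np

    def fit_ising(S_data, gibbs_samples, n_iters=200, lr=0.1, rng=None):
        """S_data: N x m matrix of inferred spins in {-1, +1}."""
        rng = rng or np.random.default_rng()
        N, m = S_data.shape
        W, b = np.zeros((m, m)), np.zeros(m)
        emp_ss = S_data.T @ S_data / N               # E_data[s_i s_j]
        emp_s = S_data.mean(axis=0)                  # E_data[s_i]
        for _ in range(n_iters):
            S = gibbs_samples(W, b, rng)             # model samples, N' x m (assumed)
            W += lr * (emp_ss - S.T @ S / len(S))    # gradient ascent step
            np.fill_diagonal(W, 0.0)
            b += lr * (emp_s - S.mean(axis=0))
        return W, b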
Note that since computing the parameters (â, ŝ) of the variational posterior in phase 1 only depends on Φ, we first perform several steps of coordinate ascent in (Φ, q) until Φ has converged, which is the same as in the usual sparse coding algorithm. We then maximize L_σ and L_{W,b}, and after that we enter the second phase of the algorithm.
4 Recovery of the model parameters
Although the learning algorithm relies on a method where the family of variational posteriors q(a, s | x) is quite limited, we argue here that if the data D = {x^{(1)}, . . . , x^{(N)}} is sampled according to parameters θ_0 that obey certain conditions that we describe now, then our proposed learning algorithm is able to recover θ_0 with good accuracy using phase 1 only.
Let μ be the coherence parameter of the basis set, which equals the maximum absolute inner product between two distinct basis functions. It has been shown that given a signal that is a sparse linear combination of p basis functions, BP and OMP will identify the optimal basis functions and their coefficients provided that p < (1/2)(μ^{−1} + 1), and the sparsest representation of the signal is unique [19]. Similar results can be derived when noise is present (ε > 0) [22], but we restrict ourselves to the noiseless case for simplicity. Let ‖s‖_↑ be the number of spins that are up. We require (W_0, b_0) to be such that Pr(‖s‖_↑ < (1/2)(μ^{−1} + 1)) ≈ 1, which can be enforced by imposing strong negative biases. A data point x^{(i)} ∈ D thus has a high probability of yielding a unique sparse representation in the basis set Φ. Provided that we have a good estimate of Φ, we can recover its sparse representation using OMP or BP, and therefore identify the s^{(i)} that was used to originally sample x^{(i)}. That is, we recover with high probability all the samples from the Ising model used to generate D, which allows us to recover (W_0, b_0).
We provide for illustration a simple example of model recovery where n = 7 and m = 8. Let (e_1, . . . , e_7) be an orthonormal basis in R^7. We let Φ_0 = [e_1, . . . , e_7, (1/√7) Σ_i e_i]. We fix the biases b_0 at −1.2 such that the model is sufficiently sparse, as shown by the histogram of ‖s‖_↑ in Figure 2, and the weights W_0 are sampled according to a Gaussian distribution. The variance parameters σ_0 are fixed to 1. We then generate synthetic data by sampling 100000 data points from this model using θ_0. We then estimate θ from this synthetic data using the variational method described in Section 3, using OMP and phase 1 only. We found that the basis functions are recovered exactly (not shown), and that the parameters of the Ising model are recovered with high accuracy as shown in Figure 2.
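A sketch of setting up this experiment, including the coherence bound of the previous paragraph (the weight scale and seed are our choices); sampling and refitting would reuse the sample_model and fit_ising sketches above.

    import numpy as np

    n, m = 7, 8
    Phi0 = np.hstack([np.eye(n), np.ones((n, 1)) / np.sqrt(n)])
    Gram = np.abs(Phi0.T @ Phi0) - np.eye(m)
    mu = Gram.max()                                  # coherence; here 1/sqrt(7)
    print("unique sparse recovery while ||s||_up <", 0.5 * (1 / mu + 1))
    b0 = -1.2 * np.ones(m)
    rng = np.random.default_rng(0)
    W0 = 0.3 * rng.standard_normal((m, m))
    W0 = 0.5 * (W0 + W0.T)
    np.fill_diagonal(W0, 0.0)
    # Draw 100000 samples x ~ sample_model(Phi0, np.ones(m), W0, b0, eps),
    # recover each spin pattern with OMP, then refit (W, b) with fit_ising.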
[Figure 2 graphic: histogram of ‖s‖_↑ (sparsity histogram) and comparisons of the learned b and W with b_0 and W_0.]
Figure 2: Recovery of the model. The histogram of ‖s‖_↑ is such that the model is sparse. The parameters (W, b) learned from synthetic data are close to the parameters (W_0, b_0) from which this data was generated.
5 Results for natural images
We build our training set by randomly selecting 16 × 16 image patches from a standard set of 10 512 × 512 whitened images as in [1]. It has been shown that changes of luminance or contrast have little influence on the structure of natural scenes [23]. As our goal is to uncover this structure, we subtract from each patch its own mean and divide it by its standard deviation such that our dataset is contrast normalized (we do not consider the patches whose variance is below a small threshold). We fix the number of basis functions to 256. In the second phase of the algorithm we only update Φ, and we have found that the basis functions do not change dramatically after the first phase.
Figure 3 shows the learned parameters Φ, σ and b. The basis functions resemble Gabor filters at a variety of orientations, positions and scales. We show the weights W in Figure 4 according to
[Figure 3 graphic: the 256 learned basis functions Φ (left); plots of the variances σ² and the biases b against basis function index (right).]
Figure 3: On the left is shown the entire set of basis functions Φ learned on natural images. On the right are the learned variances (σ_i²)_{i=1..m} (top) and the biases b in the Ising model (bottom).
the spatial properties (position, orientation, length) of the basis functions that are linked together by them. Each basis function is denoted by a bar that indicates its position, orientation, and length within the 16 × 16 patch.
[Figure 4 graphic: (a) basis function pairs with the 10 most positive weights; (b) pairs with the 10 most negative weights; (c) weight visualization key (bars φ_j with W_ij < 0 and φ_k with W_ik > 0 around a reference φ_i); (d) association fields.]
(d) Association fields
Figure 4: (a) (resp. (b)) shows the basis function pairs that share the strongest positive (resp. negative) weights ordered from left to right. Each subplot in (d) shows the association field for a basis
function ?i whose position and orientation are denoted by the black bar. The horizontal connections (Wij )j6=i are displayed by a set of colored bars whose orientation and position denote those
of the basis functions ?j to which they correspond, and the color denotes the connection strength
(see (c)). We show a random selection of 36 association fields, see www.eecs.berkeley.edu/ garrigue/nips07.html for the whole set.
We observe that the connections are mainly local and connect basis functions at a variety of orientations. The histogram of the weights (see Figure 5) shows a long positive tail corresponding to a bias toward facilitatory connections. We can see in Figure 4a,b that the 10 most "positive" pairs have similar orientations, whereas the majority of the 10 most "negative" pairs have dissimilar orientations. We compute for a basis function the average number of basis functions sharing with it a weight larger than 0.01 as a function of their orientation difference in four bins, which we refer to as the "orientation profile" in Figure 5. The error bars are a standard deviation. The resulting orientation profile is consistent with what has been observed in physiological experiments [24, 25]. We also show in Figure 5 the tradeoff between the signal to noise ratio (SNR) of an image patch x and its reconstruction Φâ, and the ℓ0 norm of the representation ‖â‖_0. We consider â inferred using both the Laplacian prior and our proposed prior. We vary λ (see Equation (4)) and ε respectively,
and average over 1000 patches to obtain the two tradeoff curves. We see that at similar SNR the representations inferred by our model are more sparse by about a factor of 2, which bodes well for compression. We have also compared our prior for tasks such as denoising and filling-in, and have found its performance to be similar to the factorial Laplacian prior even though it does not exploit the dependencies of the code. One possible explanation is that the greater sparsity of our inferred representations makes them less robust to noise. Thus we are currently investigating whether this property may instead have advantages in the self-taught learning setting in improving classification performance.
[Figure 5 graphic: histogram of the coupling weights; correlation of W_ij with |φ_i^T φ_j|; orientation profile (average number of connections per orientation-difference bin); SNR–sparsity tradeoff curves for the Laplacian and proposed priors.]
Figure 5: Properties of the weight matrix W and comparison of the tradeoff curve SNR – ℓ0 norm between a Laplacian prior over the coefficients and our proposed prior.
To assess how much information is captured by the second-order statistics, we isolate a group (φ_i)_{i∈G} of 10 basis functions sharing strong weights. Given a collection of image patches that we sparsify using (4), we obtain a number of spins (ŝ_i)_{i∈G} from which we can estimate the empirical distribution p_emp, the Boltzmann-Gibbs distribution p_Ising consistent with first and second order correlations, and the factorial distribution p_fact (i.e. no horizontal connections) consistent with first order correlations. We can see in Figure 6 that the Ising model produces better estimates of the empirical distribution, and results in better coding efficiency since KL(p_emp ‖ p_Ising) = .02 whereas KL(p_emp ‖ p_fact) = .1.
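A sketch of this comparison (the add-one smoothing of the empirical counts and the clipping are our choices to keep the KL divergences finite):

    import numpy as np
    from itertools import product

    def kl_comparison(S, W, b):
        """S: N x m spins of the group; (W, b): fitted Ising parameters."""
        N, m = S.shape
        patterns = np.array(list(product([-1, 1], repeat=m)))    # all 2^m states
        counts = np.array([(S == p).all(axis=1).sum() for p in patterns])
        p_emp = (counts + 1.0) / (N + 2 ** m)                    # smoothed counts
        e = 0.5 * np.einsum('ki,ij,kj->k', patterns, W, patterns) + patterns @ b
        p_ising = np.exp(e - e.max())
        p_ising /= p_ising.sum()
        q = np.clip(0.5 * (1 + S.mean(axis=0)), 1e-6, 1 - 1e-6)  # P(s_i = 1)
        p_fact = np.prod(np.where(patterns == 1, q, 1 - q), axis=1)
        kl = lambda p, r: float(np.sum(p * np.log(p / r)))
        return kl(p_emp, p_ising), kl(p_emp, p_fact)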
[Figure 6 graphic: log–log scatter of empirical probability vs. model probability (10^{-5} to 10^{-1}) for the Ising model and the factorial model, with the all-spins-down, all-spins-up, and three-spins-up patterns marked.]
Figure 6: Model validation for a group of 10 basis functions (right). The empirical probabilities of the 2^10 patterns of activation are plotted against the probabilities predicted by the Ising model (red), the factorial model (blue), and their own values (black). The patterns having exactly three spins up are circled. The prediction of the Ising model is noticeably better than that of the factorial model.
6 Discussion
In this paper, we proposed a new sparse coding model where we include pairwise coupling terms
among the coefficients to capture their dependencies. We derived a new learning algorithm to adapt
the parameters of the model given a data set of natural images, and we were able to discover the dependencies among the basis function coefficients. We showed that the learned connection weights are consistent with physiological data. Furthermore, the representations inferred in our model have greater sparsity than when they are inferred using the Laplacian prior as in the standard sparse coding model. Note however that we have not found evidence that these horizontal connections facilitate contour integration, as they do not primarily connect collinear basis functions. Previous models in the literature simply assume these weights according to prior intuitions about the function of horizontal connections [12, 13]. It is of great interest to develop new models and unsupervised learning schemes, possibly involving attention, that will help us understand the computational principles underlying contour integration in the visual cortex.
References
[1] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607–609, June 1996.
[2] R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng. Self-taught learning: Transfer learning from unlabeled data. Proceedings of the Twenty-fourth International Conference on Machine Learning, 2007.
[3] G. Zetzsche and B. Wegmann. The atoms of vision: Cartesian or polar? J. Opt. Soc. Am., 16(7):1554–1565, 1999.
[4] P. Hoyer and A. Hyvärinen. A multi-layer sparse coding network learns contour coding from natural images. Vision Research, 42:1593–1605, 2002.
[5] M. J. Wainwright, E. P. Simoncelli, and A. S. Willsky. Random cascades on wavelet trees and their use in modeling and analyzing natural imagery. Applied and Computational Harmonic Analysis, 11(1):89–123, July 2001.
[6] O. Schwartz, T. J. Sejnowski, and P. Dayan. Soft mixer assignment in a hierarchical generative model of natural scene statistics. Neural Comput, 18(11):2680–2718, November 2006.
[7] S. Lyu and E. P. Simoncelli. Statistical modeling of images with fields of Gaussian scale mixtures. In Advances in Neural Information Processing Systems (NIPS), Vancouver, Canada, 2006.
[8] A. Hyvärinen, P. O. Hoyer, J. Hurri, and M. Gutmann. Statistical models of images and early vision. Proceedings of the Int. Symposium on Adaptive Knowledge Representation and Reasoning (AKRR2005), Espoo, Finland, 2005.
[9] Y. Karklin and M. S. Lewicki. A hierarchical Bayesian model for learning non-linear statistical regularities in non-stationary natural signals. Neural Computation, 17(2):397–423, 2005.
[10] B. A. Olshausen and K. J. Millman. Learning sparse codes with a mixture-of-Gaussians prior. Advances in Neural Information Processing Systems, 12, 2000.
[11] D. Fitzpatrick. The functional organization of local circuits in visual cortex: insights from the study of tree shrew striate cortex. Cerebral Cortex, 6:329–41, 1996.
[12] O. Ben-Shahar and S. Zucker. Geometrical computations explain projection patterns of long-range horizontal connections in visual cortex. Neural Comput, 16(3):445–476, March 2004.
[13] L. Zhaoping. Border ownership from intracortical interactions in visual area V2. Neuron, 47:143–153, 2005.
[14] E. Schneidman, M. J. Berry, R. Segev, and W. Bialek. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, April 2006.
[15] G. Hinton, S. Osindero, and K. Bao. Learning causally linked Markov random fields. Artificial Intelligence and Statistics, Barbados, 2005.
[16] M. I. Jordan, Z. Ghahramani, T. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Learning in Graphical Models, Cambridge, MA: MIT Press, 1999.
[17] S. S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM Review, 43(1):129–159, 2001.
[18] C. J. Rozell, D. H. Johnson, R. G. Baraniuk, and B. A. Olshausen. Neurally plausible sparse coding via competitive algorithms. In Proceedings of the Computational and Systems Neuroscience (Cosyne) meeting, Salt Lake City, UT, February 2007.
[19] J. A. Tropp. Greed is good: algorithmic results for sparse approximation. IEEE Transactions on Information Theory, 50(10):2231–2242, 2004.
[20] H. Lee, A. Battle, R. Raina, and A. Y. Ng. Efficient sparse coding algorithms. In Advances in Neural Information Processing Systems 19, pages 801–808. MIT Press, Cambridge, MA, 2007.
[21] D. H. Ackley, G. E. Hinton, and T. J. Sejnowski. A learning algorithm for Boltzmann machines. Cognitive Science, 9(1):147–169, 1985.
[22] J. A. Tropp. Just relax: convex programming methods for identifying sparse signals in noise. IEEE Transactions on Information Theory, 52(3):1030–1051, 2006.
[23] Z. Wang, A. C. Bovik, and E. P. Simoncelli. Structural approaches to image quality assessment. In Alan Bovik, editor, Handbook of Image and Video Processing, chapter 8.3, pages 961–974. Academic Press, May 2005. 2nd edition.
[24] R. Malach, Y. Amir, M. Harel, and A. Grinvald. Relationship between intrinsic connections and functional architecture revealed by optical imaging and in vivo targeted biocytin injections in primate striate cortex. Proc. Natl. Acad. Sci. U.S.A., 82:935–939, 1993.
[25] W. Bosking, Y. Zhang, B. Schofield, and D. Fitzpatrick. Orientation selectivity and the arrangement of horizontal connections in the tree shrew striate cortex. J. Neuroscience, 17(6):2112–2127, 1997.
2,467 | 3,238 | One-Pass Boosting
Zafer Barutcuoglu
[email protected]
Philip M. Long
[email protected]
Rocco A. Servedio
[email protected]
Abstract
This paper studies boosting algorithms that make a single pass over a set of base
classifiers.
We first analyze a one-pass algorithm in the setting of boosting with diverse base
classifiers. Our guarantee is the same as the best proved for any boosting algorithm, but our one-pass algorithm is much faster than previous approaches.
We next exhibit a random source of examples for which a "picky" variant of AdaBoost that skips poor base classifiers can outperform the standard AdaBoost algorithm, which uses every base classifier, by an exponential factor.
Experiments with Reuters and synthetic data show that one-pass boosting can substantially improve on the accuracy of Naive Bayes, and that picky boosting can
sometimes lead to a further improvement in accuracy.
1 Introduction
Boosting algorithms use simple ?base classifiers? to build more complex, but more accurate, aggregate classifiers. The aggregate classifier typically makes its class predictions using a weighted vote
over the predictions made by the base classifiers, which are usually chosen one at a time in rounds.
When boosting is applied in practice, the base classifier at each round is usually optimized: typically,
each example is assigned a weight that depends on how well it is handled by the previously chosen
base classifiers, and the new base classifier is chosen to minimize the weighted training error. But
sometimes this is not feasible; there may be a huge number of base classifiers with insufficient
apparent structure among them to avoid simply trying all of them out to find out which is best. For
example, there may be a base classifier for each word or k-mer. (Note that, due to named entities, the
number of "words" in some analyses can far exceed the number of words in any natural language.)
In such situations, optimizing at each round may be prohibitively expensive.
The analysis of AdaBoost, however, suggests that there could be hope in such cases. Recall
that if AdaBoost is run with a sequence of base classifiers $b_1, \ldots, b_n$ that achieve weighted error $\frac{1}{2}-\gamma_1, \ldots, \frac{1}{2}-\gamma_n$, then the training error of AdaBoost's final output hypothesis is at most $\exp(-2\sum_{t=1}^{n} \gamma_t^2)$. One could imagine applying AdaBoost without performing optimization: (a)
fixing an order b1 , ..., bn of the base classifiers without looking at the data, (b) committing to use
base classifier bt in round t, and (c) setting the weight with which bt votes as a function of its
weighted training error using AdaBoost. (In a one-pass scenario, it seems sensible to use AdaBoost
since, as indicated by the above bound, it can capitalize on the advantage over random guessing
of every hypothesis.) The resulting algorithm uses essentially the same computational resources as
Naive Bayes [2, 7], but benefits from taking some account of the dependence among base classifiers. Thus motivated, in this paper we study the performance of different boosting algorithms in a
one-pass setting.
Contributions. We begin by providing theoretical support for one-pass boosting using the ?diverse
base classifiers? framework previously studied in [1, 6]. In this scenario there are n base classifiers.
For an unknown subset G of k of the base classifiers, the events that the classifiers in G are correct
on a random item are mutually independent. This formalizes the notion that these k base classifiers
are not redundant. Each of these k classifiers is assumed to have error $\frac{1}{2}-\gamma$ under the initial distribution, and no assumption is made about the other $n-k$ base classifiers. In [1] it is shown that if Boost-by-Majority is applied with a weak learner that does optimization (i.e. always uses the "best" of the n candidate base classifiers at each of $\Omega(k)$ stages of boosting), the error rate of the combined hypothesis with respect to the underlying distribution is (roughly) at most $\exp(-\Omega(\gamma^2 k))$.
In Section 2 we show that a one-pass variant of Boost-by-Majority achieves a similar bound with a
single pass through the n base classifiers, reducing the computation time required by an $\Omega(k)$ factor.
We next show in Section 3 that when running AdaBoost using one pass, it can sometimes be advantageous to abstain from using base classifiers that are too weak. Intuitively, this is because using
many weak base classifiers early on can cause the boosting algorithm to reweight the data in a way
that obscures the value of a strong base classifier that comes later. (Note that the quadratic dependence on $\gamma_t$ in the exponent of $\exp(-2\sum_{t=1}^{n} \gamma_t^2)$ means that one good base classifier is more
valuable than many poor ones.) In a bit more detail, suppose that base classifiers are considered
in the order $b_1, \ldots, b_n$, where each of $b_1, \ldots, b_{n-1}$ has a "small" advantage over random guessing under the initial distribution D and $b_n$ has a "large" advantage under D. Using $b_1, \ldots, b_{n-1}$ for the first $n-1$ stages of AdaBoost can cause the distributions $D_2, D_3, \ldots$ to change from the initial $D_1$ in such a way that when $b_n$ is finally considered, its advantage under $D_n$ is markedly smaller than its advantage under $D_1$, causing AdaBoost to assign $b_n$ a small voting weight. In contrast, a "picky" version of AdaBoost would pass up the opportunity to use $b_1, \ldots, b_{n-1}$ (since their advantages are too small) and thus be able to reap the full benefit of using $b_n$ under distribution $D_1$ (since when $b_n$ is finally considered the distribution is still $D_1$, since no earlier base classifiers have been used).
Finally, Section 4 gives experimental results on Reuters and synthetic data. These show that one-pass
boosting can lead to substantial improvement in accuracy over Naive Bayes while using a similar
amount of computation, and that picky one-pass boosting can sometimes further improve accuracy.
2 Faster learning with diverse base classifiers
We consider the framework of boosting in the presence of diverse base classifiers studied in [1].
Definition 1 (Diverse $\gamma$-good) Let D be a distribution over $X \times \{-1,1\}$. We say that a set G of classifiers is diverse and $\gamma$-good with respect to D if (i) each classifier in G has advantage at least $\gamma$ (i.e., error at most $\frac{1}{2}-\gamma$) with respect to D, and (ii) the events that the classifiers in G are correct
are mutually independent under D.
We will analyze the Picky-One-Pass Boost-by-Majority (POPBM) algorithm, which we define as
follows. It uses three parameters, $\hat\gamma$, T, and $\epsilon$.
1. Choose a random ordering b1 , ..., bn of the base classifiers in H, and set i1 = 1.
2. For as many rounds t as $i_t \le \min\{T, n\}$:
(a) Define $D_t$ as follows: for each example $(x, y)$,
i. Let $r_t(x,y)$ be the number of previously chosen base classifiers $h_1, \ldots, h_{t-1}$ that are correct on $(x, y)$;
ii. Let $w_t(x,y) = \binom{T-t-1}{\lfloor T/2 \rfloor - r_t(x,y)} \left(\frac{1}{2}+\hat\gamma\right)^{\lfloor T/2 \rfloor - r_t(x,y)} \left(\frac{1}{2}-\hat\gamma\right)^{\lceil T/2 \rceil - t - 1 + r_t(x,y)}$, let $Z_t = \mathrm{E}_{(x,y)\sim D}[w_t(x,y)]$, and let $D_t(x,y) = w_t(x,y)\, D(x,y) / Z_t$.
(b) Compare $Z_t$ to $\epsilon/T$, and
i. If $Z_t \ge \epsilon/T$, then try $b_{i_t}, b_{i_t+1}, \ldots$ until you encounter a hypothesis $b_j$ with advantage at least $\hat\gamma$ with respect to $D_t$ (and if you run out of base classifiers before this happens, then go to step 3). Set $h_t$ to be $b_j$ (i.e. return $b_j$ to the boosting algorithm) and set $i_{t+1}$ to $j+1$ (i.e. the index of the next base classifier in the list).
ii. If $Z_t < \epsilon/T$, then set $h_t$ to be the constant-1 hypothesis (i.e. return this constant hypothesis to the boosting algorithm) and set $i_{t+1} = i_t$.
3. If $t < T+1$ (i.e. the algorithm ran out of base classifiers before selecting T of them), abort. Otherwise, output the final classifier $f(x) = \mathrm{Maj}(h_1(x), \ldots, h_T(x))$.
The idea behind step 2.b.ii is that if $Z_t$ is small, then Lemma 4 will show that it doesn't much matter how good this weak hypothesis is, so we simply use a constant hypothesis.
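To make the procedure concrete, here is a small Python sketch of POPBM on a finite weighted sample, so that $Z_t$ and the advantages can be computed exactly. It is an illustration under our own naming conventions (not the authors' code); the weighting implements step 2(a)ii above and the sketch is not tuned for efficiency.

    import math
    import random

    def popbm(base_clfs, data, gamma_hat, T, eps):
        # data: list of (x, y, prob) triples describing a finite distribution D
        random.shuffle(base_clfs)                    # step 1: random ordering
        chosen, i = [], 0
        for t in range(1, T + 1):
            def weight(x, y):                        # BBM-style weight of step 2(a)ii
                r = sum(1 for h in chosen if h(x) == y)
                top, k = T - t - 1, T // 2 - r
                if k < 0 or k > top:
                    return 0.0
                return (math.comb(top, k) * (0.5 + gamma_hat) ** k
                        * (0.5 - gamma_hat) ** ((T + 1) // 2 - t - 1 + r))
            Z = sum(p * weight(x, y) for x, y, p in data)
            if Z >= eps / T:                         # step 2(b)i: scan forward
                while i < len(base_clfs):
                    h = base_clfs[i]; i += 1
                    adv = sum(p * weight(x, y) for x, y, p in data
                              if h(x) == y) / Z - 0.5
                    if adv >= gamma_hat:
                        chosen.append(h)
                        break
                else:
                    return None                      # step 3: abort
            else:                                    # step 2(b)ii: constant hypothesis
                chosen.append(lambda x: 1)
        # majority vote over the T chosen hypotheses (assumes T odd to avoid ties)
        return lambda x: 1 if sum(h(x) for h in chosen) > 0 else -1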
To simplify the exposition, we have assumed that POPBM can exactly determine quantities such
as Zt and the accuracies of the weak hypotheses. This would provably be the case if D were
concentrated on a moderate number of examples, e.g. uniform over a training set. With slight
complications, a similar analysis can be performed when these quantities must be estimated.
The following lemma from [1] shows that if the filtered distribution is not too different from the
original distribution, then there is a good weak hypothesis relative to the original distribution.
Lemma 2 ([1]) Suppose a set G of classifiers of size k is diverse and $\gamma$-good with respect to D. For any probability distribution Q such that $Q(x,y) \le \frac{\gamma}{3} e^{\gamma^2 k/2} D(x,y)$ for all $(x,y) \in X \times \{-1,1\}$, there is a $g \in G$ such that
$$\Pr_{(x,y)\sim Q}(g(x) = y) \ge \frac{1}{2} + \frac{\gamma}{4}. \qquad (1)$$
The following simple extension of Lemma 2 shows that, given a stronger constraint on the filtered
distribution, there are many good weak hypotheses available.
Lemma 3 Suppose a set G of classifiers of size k is diverse and $\gamma$-good with respect to D. Fix any $\ell < k$. For any probability distribution Q such that
$$Q(x,y) \le \frac{\gamma}{3} e^{\gamma^2 \ell/2} D(x,y) \qquad (2)$$
for all $(x,y) \in X \times \{-1,1\}$, there are at least $k - \ell + 1$ members g of G such that (1) holds.
Proof: Fix any distribution Q satisfying (2). Let $g_1, \ldots, g_\ell$ be an arbitrary collection of $\ell$ elements of G. Since $\{g_1, \ldots, g_\ell\}$ and Q satisfy the requirements of Lemma 2 with k set to $\ell$, one of $g_1, \ldots, g_\ell$ must satisfy (1); so any set of $\ell$ elements drawn from G contains an element that satisfies
(1). This yields the lemma.
We will use another lemma, implicit in Freund's analysis [3], formulated as stated here in [1]. It
formalizes two ideas: (a) if the weak learners perform well, then so will the strong learner; and (b)
the performance of the weak learner is not important in rounds for which Z t is small.
Lemma 4 Suppose that Boost-by-Majority is run with parameters $\hat\gamma$ and T, and generates classifiers $h_1, \ldots, h_T$ for which $D_1(h_1(x) = y) = \frac{1}{2} + \gamma_1, \ldots, D_T(h_T(x) = y) = \frac{1}{2} + \gamma_T$. Then, for a random element of D, a majority vote over $h_1, \ldots, h_T$ is incorrect with probability at most
$$e^{-2\hat\gamma^2 T} + \sum_{t=1}^{T} (\hat\gamma - \gamma_t) Z_t.$$
Now we give our analysis.
Theorem 5 Suppose the set H of base classifiers used by POPBM contains a subset G of k elements
that is diverse and $\gamma$-good with respect to the initial distribution D, where $\gamma$ is a constant (say 1/4). Then there is a setting of the parameters of POPBM so that, with probability $1 - 2^{-\Omega(k)}$, it outputs a classifier with error $\exp(-\Omega(\gamma^2 k))$ with respect to the original distribution D.
Proof: We prove that $\hat\gamma = \gamma/4$, $T = k/64$, and $\epsilon = \frac{3k}{8\gamma} e^{-\gamma^2 k/16}$ is a setting of parameters as required. We will establish the following claim:
Claim 6 For the above parameter settings we have Pr[POPBM aborts in Step 3] $= 2^{-\Omega(k)}$.
Suppose for now that the claim holds, so that with high probability POPBM outputs a classifier.
In case it does, let f be this output. Then since POPBM runs for a full T rounds, we may apply
Lemma 4 which bounds the error rate of the Boost-by-Majority final classifier. The lemma gives us
that $D(f(x) \neq y)$ is at most
$$e^{-2\hat\gamma^2 T} + \sum_{t=1}^{T} (\hat\gamma - \gamma_t) Z_t = e^{-\gamma^2 T/8} + \sum_{t: Z_t < \epsilon/T} (\hat\gamma - \gamma_t) Z_t + \sum_{t: Z_t \ge \epsilon/T} (\hat\gamma - \gamma_t) Z_t \le e^{-\Omega(\gamma^2 k)} + T(\epsilon/T) + 0 = e^{-\Omega(\gamma^2 k)}.$$ (Theorem 5)
The final inequality holds since $\hat\gamma - \gamma_t \le 0$ if $Z_t \ge \epsilon/T$ and $\hat\gamma - \gamma_t \le 1$ if $Z_t < \epsilon/T$.
Proof of Claim 6: In order for POPBM to abort, it must be the case that as the k base classifiers in G are encountered in sequence as the algorithm proceeds through $b_1, \ldots, b_n$, more than 63k/64 of them are skipped in Step 2.b.i. We show this occurs with probability at most $2^{-\Omega(k)}$.
For each $j \in \{1, \ldots, k\}$, let $X_j$ be an indicator variable for the event that the j-th member of G in the ordering $b_1, \ldots, b_n$ is encountered during the boosting process and skipped, and for each $\ell \in \{1, \ldots, k\}$, let $S_\ell = \min\{(\sum_{j=1}^{\ell} X_j) - (3/4)\ell,\ k/8\}$. We claim that $S_1, \ldots, S_{k/8}$ is a supermartingale, i.e. that $\mathrm{E}[S_{\ell+1} \mid S_1, \ldots, S_\ell] \le S_\ell$ for all $\ell < k/8$. If $S_\ell = k/8$ or if the boosting process has terminated by the $\ell$-th member of G, this is obvious. Suppose that $S_\ell < k/8$ and that the algorithm has not terminated yet. Let t be the round of boosting in which the $\ell$-th member of G is encountered. The value $w_t(x,y)$ can be interpreted as a probability, and so we have that $w_t(x,y) \le 1$.
Consequently, we have that
$$D_t(x,y) \le \frac{D(x,y)}{Z_t} \le D(x,y) \cdot \frac{T}{\epsilon} = D(x,y) \cdot \frac{\gamma}{24} e^{\gamma^2 k/16} < D(x,y) \cdot \frac{\gamma}{3} e^{\gamma^2 k/8}.$$
Now Lemma 3 implies that at least half of the classifiers in G have advantage at least $\hat\gamma$ w.r.t. $D_t$. Since $\ell < k/4$, it follows that at least k/4 of the remaining (at most k) classifiers in G that have not yet been seen have advantage at least $\hat\gamma$ w.r.t. $D_t$. Since the base classifiers were ordered randomly, any order over the remaining hypotheses is equally likely, and so also is any order over the remaining hypotheses from G. Thus, the probability that the next member of G to be encountered has advantage at least $\hat\gamma$ is at least 1/4, so the probability that it is skipped is at most 3/4. This completes the proof that $S_1, \ldots, S_{k/8}$ is a supermartingale.
Since $|S_\ell - S_{\ell-1}| \le 1$, Azuma's inequality for supermartingales implies that $\Pr(S_{k/8} > k/64) \le e^{-\Omega(k)}$. This means that the probability that at least k/64 good elements were not skipped is at least $1 - e^{-\Omega(k)}$, which completes the proof.
3 For one-pass boosting, PickyAdaBoost can outperform AdaBoost
AdaBoost is the most popular boosting algorithm. It is most often applied in conjunction with a
weak learner that performs optimization, but it can be used with any weak learner. The analysis
of AdaBoost might lead to the hope that it can profitably be applied for one-pass boosting. In this
section, we compare AdaBoost and its picky variant on an artificial source especially designed to
illustrate why the picky variant may be needed.
AdaBoost. We briefly recall some basic facts about AdaBoost (see Figure 1). If we run AdaBoost
for T stages with weak hypotheses h1 , . . . , hT , it constructs a final hypothesis
$$H(x) = \mathrm{sgn}(f(x)) \quad \text{where} \quad f(x) = \sum_{t=1}^{T} \alpha_t h_t(x) \qquad (3)$$
with $\alpha_t = \frac{1}{2}\ln\frac{1-\epsilon_t}{\epsilon_t}$. Here $\epsilon_t = \Pr_{(x,y)\sim D_t}[h_t(x) \neq y]$ where $D_t$ is the t-th distribution constructed by the algorithm (the first distribution $D_1$ is just D, the initial distribution). We write $\gamma_t$ to denote $\frac{1}{2} - \epsilon_t$, the advantage of the t-th weak hypothesis under distribution $D_t$. Freund and Schapire [5] proved that if AdaBoost is run with an initial distribution D over a set of labeled examples, then the error rate of the final combined classifier H is at most $\exp(-2\sum_{t=1}^{T} \gamma_t^2)$ under D:
$$\Pr_{(x,y)\sim D}[H(x) \neq y] \le \exp\Big(-2\sum_{t=1}^{T} \gamma_t^2\Big). \qquad (4)$$
(We note that AdaBoost is usually described in the case in which D is uniform over a training set, but
the algorithm and most of its analyses, including (4), go through in the greater generality presented
here. The fact that the definition of $\epsilon_t$ depends indirectly on an expectation evaluated according to
D makes the case in which D is uniform over a sample most directly relevant to practice. However,
it is easiest to describe our construction using this more general formulation of AdaBoost.)
PickyAdaBoost. Now we define a "picky" version of AdaBoost, which we call PickyAdaBoost. The PickyAdaBoost algorithm is initialized with a parameter $\theta > 0$. Given a value $\theta$, the PickyAdaBoost algorithm works like AdaBoost but with the following difference. Suppose that PickyAdaBoost is performing round t of boosting, the current distribution is some $D'$, and the current
Given a source D of random examples.
- Initialize $D_1 = D$.
- For each round t from 1 to T:
  - Present $D_t$ to a weak learner, and receive base classifier $h_t$;
  - Calculate error $\epsilon_t = \Pr_{(x,y)\sim D_t}[h_t(x) \neq y]$ and set $\alpha_t = \frac{1}{2}\ln\frac{1-\epsilon_t}{\epsilon_t}$;
  - Update the distribution: define $D'_{t+1}$ by setting $D'_{t+1}(x,y) = \exp(-\alpha_t y h_t(x))\, D_t(x,y)$ and normalizing $D'_{t+1}$ to get the probability distribution $D_{t+1} = D'_{t+1}/Z_{t+1}$;
- Return the final classification rule $H(x) = \mathrm{sgn}\left(\sum_t \alpha_t h_t(x)\right)$.
Figure 1: Pseudo-code for AdaBoost (from [4]).
base classifier $h_t$ being considered has advantage $\gamma$ under $D'$, where $|\gamma| < \theta$. If this is the case then PickyAdaBoost abstains in that round and does not include $h_t$ into the combined hypothesis it is constructing. (Note that consequently the distribution for the next round of boosting will also be $D'$.) On the other hand, if the current base classifier has advantage $\gamma$ where $|\gamma| \ge \theta$, then PickyAdaBoost proceeds to use the weak hypothesis just like AdaBoost, i.e. it adds $\alpha_t h_t$ to the function f described in (3) and adjusts $D'$ to obtain the next distribution.
Note that we only require the magnitude of the advantage to be at least $\theta$. Whether a given base classifier is used, or its negation is used, the effect that it has on the output of AdaBoost is the same (briefly, because $\ln\frac{1-\epsilon}{\epsilon} = -\ln\frac{\epsilon}{1-\epsilon}$). Consequently, the appropriate notion of a "picky" version of AdaBoost is to require the magnitude of the advantage to be large.
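The following Python sketch (with names of our own choosing) makes the distinction concrete: with theta = 0 it is plain one-pass AdaBoost over a fixed ordering of base classifiers, and with theta > 0 it is PickyAdaBoost(theta).

    import math

    def one_pass_adaboost(base_clfs, sample, theta=0.0):
        # sample: list of (x, y) pairs with y in {-1, +1};
        # theta > 0 gives PickyAdaBoost(theta), theta = 0 plain one-pass AdaBoost
        m = len(sample)
        D = [1.0 / m] * m                        # current distribution D_t
        ensemble = []                            # chosen (alpha_t, h_t) pairs
        for h in base_clfs:                      # a single pass, fixed order
            err = sum(D[i] for i, (x, y) in enumerate(sample) if h(x) != y)
            if abs(0.5 - err) < theta:           # advantage magnitude below theta
                continue                         # picky: skip, D_t unchanged
            err = min(max(err, 1e-12), 1 - 1e-12)   # guard the logarithm
            alpha = 0.5 * math.log((1 - err) / err)
            for i, (x, y) in enumerate(sample):
                D[i] *= math.exp(-alpha * y * h(x))
            Z = sum(D)
            D = [w / Z for w in D]
            ensemble.append((alpha, h))
        return lambda x: 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1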
3.1 The construction
We consider a sequence of n+1 base classifiers $b_1, \ldots, b_n, b_{n+1}$. For simplicity we suppose that the domain X is $\{-1,1\}^{n+1}$ and that the value of the i-th base classifier on an instance $x \in \{-1,1\}^{n+1}$ is simply $b_i(x) = x_i$.
Now we define the distribution D over $X \times \{-1,1\}$. A draw of (x, y) is obtained from D as follows: the bit y is chosen uniformly from $\{+1,-1\}$. Each bit $x_1, \ldots, x_n$ is chosen independently to equal y with probability $\frac{1}{2} + \gamma$, and the bit $x_{n+1}$ is chosen to equal y if there exists an i, $1 \le i \le n$, for which $x_i = y$; if $x_i = -y$ for all $1 \le i \le n$ then $x_{n+1}$ is set to $-y$.
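A sampler for this source is one line per bit; the sketch below (illustrative, with our own naming) may help in experimenting with the construction.

    import random

    def draw_from_source(n, gamma):
        # one draw (x, y) from the distribution D of Section 3.1
        y = random.choice([-1, 1])
        x = [y if random.random() < 0.5 + gamma else -y for _ in range(n)]
        x.append(y if any(xi == y for xi in x) else -y)   # the bit x_{n+1}
        return x, y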
3.2 Base classifiers in order b1 , . . . , bn , bn+1
Throughout Section 3.2 we will only consider parameter settings of $\gamma, \theta, n$ for which $\gamma < \theta \le \frac{1}{2} - (\frac{1}{2}-\gamma)^n$. Note that the inequality $\gamma < \frac{1}{2} - (\frac{1}{2}-\gamma)^n$ is equivalent to $(\frac{1}{2}-\gamma)^n < \frac{1}{2} - \gamma$, which holds for all $n \ge 2$.
PickyAdaBoost. In the case where $\gamma < \theta \le \frac{1}{2} - (\frac{1}{2}-\gamma)^n$, it is easy to analyze the error rate of PickyAdaBoost($\theta$) after one pass through the base classifiers in the order $b_1, \ldots, b_n, b_{n+1}$. Since each of $b_1, \ldots, b_n$ has advantage exactly $\gamma$ under D and $b_{n+1}$ has advantage $\frac{1}{2} - (\frac{1}{2}-\gamma)^n$ under D, PickyAdaBoost($\theta$) will abstain in rounds $1, \ldots, n$ and so its final hypothesis is $\mathrm{sgn}(b_{n+1}(\cdot))$, which is the same as $b_{n+1}$. It is clear that $b_{n+1}$ is wrong only if each $x_i \neq y$ for $i = 1, \ldots, n$, which occurs with probability $(\frac{1}{2}-\gamma)^n$. We thus have:
Lemma 7 For $\gamma < \theta \le \frac{1}{2} - (\frac{1}{2}-\gamma)^n$, PickyAdaBoost($\theta$) constructs a final hypothesis which has error rate precisely $(\frac{1}{2}-\gamma)^n$ under D.
AdaBoost. Now let us analyze the error rate of AdaBoost after one pass through the base classifiers
in the order b1 , . . . , bn+1 . We write Dt to denote the distribution that AdaBoost uses at the t-th stage
of boosting (so $D = D_1$). Recall that $\gamma_t$ is the advantage of $b_t$ under distribution $D_t$.
The following claim is an easy consequence of the fact that given the value of y, the values of the base classifiers $b_1, \ldots, b_n$ are all mutually independent:
Claim 8 For each $1 \le t \le n$ we have that $\gamma_t = \gamma$.
It follows that the coefficients $\alpha_1, \ldots, \alpha_n$ of $b_1, \ldots, b_n$ are all equal to $\frac{1}{2}\ln\frac{1/2+\gamma}{1/2-\gamma} = \frac{1}{2}\ln\frac{1+2\gamma}{1-2\gamma}$.
The next claim can be straightforwardly proved by induction on t:
Claim 9 Let $D_r$ denote the distribution constructed by AdaBoost after processing the base classifiers $b_1, \ldots, b_{r-1}$ in that order. A draw of (x, y) from $D_r$ is distributed as follows:
- The bit y is uniform random from $\{-1,+1\}$;
- Each bit $x_1, \ldots, x_{r-1}$ independently equals y with probability $\frac{1}{2}$, and each bit $x_r, \ldots, x_n$ independently equals y with probability $\frac{1}{2} + \gamma$;
- The bit $x_{n+1}$ is set as described in Section 3.1, i.e. $x_{n+1} = -y$ if and only if $x_1 = \cdots = x_n = -y$.
Claim 9 immediately gives $\epsilon_{n+1} = \Pr_{(x,y)\sim D_{n+1}}[b_{n+1}(x) \neq y] = 1/2^n$. It follows that $\alpha_{n+1} = \frac{1}{2}\ln\frac{1-\epsilon_{n+1}}{\epsilon_{n+1}} = \frac{1}{2}\ln(2^n - 1)$. Thus an explicit expression for the final hypothesis of AdaBoost after one pass over the n+1 classifiers $b_1, \ldots, b_{n+1}$ is $H(x) = \mathrm{sgn}(f(x))$, where
$$f(x) = \frac{1}{2}\ln\frac{1+2\gamma}{1-2\gamma}\,(x_1 + \cdots + x_n) + \frac{1}{2}(\ln(2^n - 1))\,x_{n+1}.$$
Using the fact that $H(x) \neq y$ if and only if $yf(x) < 0$, it is easy to establish the following:
Claim 10 The classifier H(x) makes a mistake on (x, y) if and only if more than A of the variables $x_1, \ldots, x_n$ disagree with y, where $A = \frac{n}{2} + \frac{\ln(2^n-1)}{2\ln\frac{1+2\gamma}{1-2\gamma}}$.
For (x, y) drawn from source D, we have that each of $x_1, \ldots, x_n$ independently agrees with y with probability $\frac{1}{2} + \gamma$. Thus we have established the following:
Lemma 11 Let B(n, p) denote a binomial random variable with parameters n, p (i.e. a draw from B(n, p) is obtained by summing n i.i.d. 0/1 random variables each of which has expectation p). Then the AdaBoost final hypothesis error rate is $\Pr[B(n, \frac{1}{2}-\gamma) > A]$, which equals
$$\sum_{i=\lfloor A\rfloor + 1}^{n} \binom{n}{i} \Big(\frac{1}{2}-\gamma\Big)^i \Big(\frac{1}{2}+\gamma\Big)^{n-i}. \qquad (5)$$
In terms of Lemma 11, Lemma 7 states that the PickyAdaBoost($\theta$) final hypothesis has error $\Pr[B(n, \frac{1}{2}-\gamma) \ge n]$. We thus have that if $A < n-1$ then AdaBoost's final hypothesis has greater error than PickyAdaBoost.
We now give a few concrete settings for $\gamma, n$ with which PickyAdaBoost beats AdaBoost. First we observe that even in some simple cases the AdaBoost error rate (5) can be larger than the PickyAdaBoost error rate by a fairly large additive constant. Taking n = 3 and $\gamma = 0.38$, we find that the error rate of PickyAdaBoost($\theta$) is $(\frac{1}{2} - 0.38)^3 = 0.001728$, whereas the AdaBoost error rate is $(\frac{1}{2} - 0.38)^3 + 3(\frac{1}{2} - 0.38)^2 \cdot (\frac{1}{2} + 0.38) = 0.03974$.
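These two numbers can be checked directly against Lemma 7 and equation (5); a short computation (illustrative only) is:

    import math

    n, gamma = 3, 0.38
    A = n / 2 + math.log(2**n - 1) / (2 * math.log((1 + 2*gamma) / (1 - 2*gamma)))
    picky = (0.5 - gamma) ** n                    # Lemma 7: 0.001728
    ada = sum(math.comb(n, i) * (0.5 - gamma)**i * (0.5 + gamma)**(n - i)
              for i in range(math.floor(A) + 1, n + 1))   # eq. (5): ~0.03974
    print(picky, ada)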
Next we observe that there can be a large multiplicative factor difference between the AdaBoost and PickyAdaBoost error rates. We have that $\Pr[B(n, 1/2-\gamma) > A]$ equals $\sum_{i=0}^{n-\lfloor A\rfloor-1} \binom{n}{i} (1/2-\gamma)^{n-i} (1/2+\gamma)^{i}$. This can be lower bounded by
$$\Pr[B(n, 1/2-\gamma) > A] \ge (1/2-\gamma)^n \sum_{i=0}^{n-\lfloor A\rfloor-1} \binom{n}{i}; \qquad (6)$$
this bound is rough but good enough for our purposes. Viewing n as an asymptotic parameter and $\gamma$ as a fixed constant, we have
$$(6) \ge (1/2-\gamma)^n \sum_{i=0}^{\kappa n} \binom{n}{i} \qquad (7)$$
where $\kappa = \frac{1}{2} - \frac{\ln 2}{2\ln\frac{1+2\gamma}{1-2\gamma}} - o(1)$. Using the bound $\sum_{i=0}^{\kappa n} \binom{n}{i} = 2^{n(H(\kappa)-o(1))}$, which holds for $0 < \kappa < \frac{1}{2}$, we see that any setting of $\gamma$ such that $\kappa$ is bounded above zero by a constant gives an exponential gap between the error rate of PickyAdaBoost (which is $(1/2-\gamma)^n$) and the lower bound on AdaBoost's error provided by (7). As it happens any $\gamma \ge 0.17$ yields $\kappa > 0.01$. We thus have
Claim 12 For any fixed $\gamma \in (0.17, 0.5)$ and any $\theta$ with $\gamma < \theta \le \frac{1}{2} - (\frac{1}{2}-\gamma)^n$, the final error rate of AdaBoost on the source D is $2^{\Omega(n)}$ times that of PickyAdaBoost($\theta$).
3.3 Base classifiers in an arbitrary ordering
The above results show that PickyAdaBoost can outperform AdaBoost if the base classifiers are
considered in the particular order b1 , . . . , bn+1 . A more involved analysis (omitted because of space
constraints) establishes a similar difference when the base classifiers are chosen in a random order:
Proposition 13 Suppose that $0.3 < \gamma < \theta < 0.5$ and $0 < c < 1$ are fixed constants independent of n that satisfy $Z(\gamma) < c$, where $Z(\gamma) \stackrel{\mathrm{def}}{=} \frac{\ln\frac{4}{(1-2\gamma)^2}}{\ln\frac{1+2\gamma}{(1-2\gamma)^3}}$. Suppose the base classifiers are listed in an order such that $b_{n+1}$ occurs at position $c \cdot n$. Then the error rate of AdaBoost is at least $2^{n(1-c)} - 1 = 2^{\Omega(n)}$ times greater than the error of PickyAdaBoost($\theta$).
For the case of randomly ordered base classifiers, we may view c as a real value that is uniformly distributed in [0, 1], and for any fixed constant $0.3 < \gamma < 0.5$ there is a constant probability (at least $1 - Z(\gamma)$) that AdaBoost has error rate $2^{\Omega(n)}$ times larger than PickyAdaBoost($\theta$). This probability can be fairly large, e.g. for $\gamma = 0.45$ it is greater than 1/5.
4 Experiments
We used Reuters data and synthetic data to examine the behavior of three algorithms: (i) Naive
Bayes; (ii) one-pass AdaBoost; and (iii) PickyAdaBoost.
The Reuters data was downloaded from www.daviddlewis.com. We used the ModApte splits
into training and test sets. We only used the text of each article, and the text was converted into
lower case before analysis. We compared the boosting algorithms with multinomial Naive Bayes
[7]. We used boosting with confidence-rated base classifiers [8], with a base classifier for each stem
of length at most 5; analogously to the multinomial Naive Bayes, the confidence of a base classifier
was taken to be the number of times its stem appeared in the text. (Schapire and Singer [8, Section
3.2] suggested, when the confidence of base classifiers cannot be bounded a priori, to choose each
voting weight ?t in order to maximize the reduction in potential. We did this, using Newton?s
method to do this optimization.) We averaged over 10 random permutations of the features. The
results are compiled in Table 1. The one-pass boosting algorithms usually improve on the accuracy
of Naive Bayes, while retaining similar simplicity and computational efficiency. PickyAdaBoost
appears to usually improve somewhat on AdaBoost. Using a t-test at level 0.01, the W-L-T for
PickyAdaBoost(0.1) against multinomial Naive Bayes is 5-1-4.
We also experimented with synthetic data generated according to a distribution D defined as follows: to draw (x, y), begin by picking $y \in \{-1,+1\}$ uniformly at random. For each of the k features $x_1, \ldots, x_k$ in the diverse $\gamma$-good set G, set $x_i$ equal to y with probability $1/2+\gamma$ (independently for each i). The remaining $n-k$ variables are influenced by a hidden variable z which is set independently to be equal to y with probability 4/5. The features $x_{k+1}, \ldots, x_n$ are each set to be independently equal to z with probability p. So each such $x_j$ ($j \ge k+1$) agrees with y with probability $(4/5) \cdot p + (1/5) \cdot (1-p)$.
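A generator for this synthetic source, written to match the description above (the function name is ours), is:

    import random

    def draw_synthetic(n, k, gamma, p):
        # one labeled example from the synthetic distribution of Section 4
        y = random.choice([-1, 1])
        x = [y if random.random() < 0.5 + gamma else -y for _ in range(k)]
        z = y if random.random() < 0.8 else -y          # hidden variable, Pr = 4/5
        x += [z if random.random() < p else -z for _ in range(n - k)]
        return x, y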
There were 10000 training examples and 10000 test examples. We tried n = 1000 and n = 10000.
Results when n = 10000 are summarized in Table 2. The boosting algorithms predictably perform
better than Naive Bayes, because Naive Bayes assigns too much weight to the correlated features.
The picky boosting algorithm further ameliorates the effect of this correlation. Results for n = 1000
are omitted due to space constraints: these are qualitatively similar, with all algorithms performing
better, and the differences between algorithms shrinking somewhat.
                        Error rates                              Feature counts
Data        NB      OPAB    PickyAdaBoost                NB      OPAB    PickyAdaBoost
                            0.001   0.01    0.1                          0.001   0.01   0.1
earn        0.042   0.023   0.020   0.018   0.027        19288   19288   2871    542    52
acq         0.036   0.094   0.065   0.071   0.153        19288   19288   3041    508    41
money-fx    0.043   0.042   0.041   0.041   0.048        19288   19288   2288    576    62
crude       0.026   0.031   0.027   0.026   0.040        19288   19288   2865    697    58
grain       0.038   0.021   0.023   0.019   0.018        19288   19288   2622    650    64
trade       0.068   0.028   0.028   0.026   0.029        19288   19288   2579    641    61
interest    0.026   0.032   0.029   0.032   0.035        19288   19288   2002    501    58
wheat       0.022   0.014   0.013   0.013   0.017        19288   19288   2294    632    61
ship        0.013   0.018   0.018   0.017   0.016        19288   19288   2557    804    67
corn        0.027   0.014   0.014   0.014   0.013        19288   19288   2343    640    67
Table 1: Experimental results. On the left are error rates on the 3299 test examples for Reuters data sets. On the right are counts of the number of features used in the models. NB is the multinomial Naive Bayes, and OPAB is one-pass AdaBoost. Results are shown for three PickyAdaBoost thresholds: 0.001, 0.01 and 0.1.
k      p      γ      NB     OPAB    PickyAdaBoost
                                    0.07    0.1     0.16
20     0.85   0.24   0.2    0.11    0.04    0.04    0.03
20     0.9    0.24   0.2    0.09    0.03    0.03    0.03
20     0.95   0.24   0.21   0.06    0.02    0.02    0.02
50     0.7    0.15   0.2    0.13    0.06    0.04    0.09
50     0.75   0.15   0.2    0.12    0.05    0.04    0.03
50     0.8    0.15   0.21   0.11    0.04    0.03    0.03
100    0.63   0.11   0.2    0.14    0.07    0.05    -
100    0.68   0.11   0.2    0.13    0.06    0.05    -
100    0.73   0.11   0.2    0.1     0.05    0.04    -
Table 2: Test-set error rate for synthetic data. Each value is an average over 100 independent runs (random permutations of features). Where a result is omitted, the corresponding picky algorithm did not pick any base classifiers.
References
[1] S. Dasgupta and P. M. Long. Boosting with diverse base classifiers. COLT, 2003.
[2] R. O. Duda and P. E. Hart. Pattern Classification and Scene Analysis. Wiley, 1973.
[3] Y. Freund. Boosting a weak learning algorithm by majority. Inf. and Comput., 121(2):256-285, 1995.
[4] Y. Freund and R. Schapire. Experiments with a new boosting algorithm. In ICML, pages 148-156, 1996.
[5] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. JCSS, 55(1):119-139, 1997.
[6] N. Littlestone. Redundant noisy attributes, attribute errors, and linear-threshold learning using Winnow. In COLT, pages 147-156, 1991.
[7] A. McCallum and K. Nigam. A comparison of event models for naive Bayes text classification. In AAAI-98 Workshop on Learning for Text Categorization, 1998.
[8] R. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37:297-336, 1999.
2,468 | 3,239 | Stability Bounds for Non-i.i.d. Processes
Mehryar Mohri
Courant Institute of Mathematical Sciences
and Google Research
251 Mercer Street
New York, NY 10012
Afshin Rostamizadeh
Department of Computer Science
Courant Institute of Mathematical Sciences
251 Mercer Street
New York, NY 10012
[email protected]
[email protected]
Abstract
The notion of algorithmic stability has been used effectively in the past to derive
tight generalization bounds. A key advantage of these bounds is that they are designed for specific learning algorithms, exploiting their particular properties. But,
as in much of learning theory, existing stability analyses and bounds apply only
in the scenario where the samples are independently and identically distributed
(i.i.d.). In many machine learning applications, however, this assumption does
not hold. The observations received by the learning algorithm often have some
inherent temporal dependence, which is clear in system diagnosis or time series
prediction problems. This paper studies the scenario where the observations are
drawn from a stationary mixing sequence, which implies a dependence between
observations that weaken over time. It proves novel stability-based generalization
bounds that hold even with this more general setting. These bounds strictly generalize the bounds given in the i.i.d. case. It also illustrates their application in the
case of several general classes of learning algorithms, including Support Vector
Regression and Kernel Ridge Regression.
1 Introduction
The notion of algorithmic stability has been used effectively in the past to derive tight generalization
bounds [2?4,6]. A learning algorithm is stable when the hypotheses it outputs differ in a limited way
when small changes are made to the training set. A key advantage of stability bounds is that they are
tailored to specific learning algorithms, exploiting their particular properties. They do not depend
on complexity measures such as the VC-dimension, covering numbers, or Rademacher complexity,
which characterize a class of hypotheses, independently of any algorithm.
But, as in much of learning theory, existing stability analyses and bounds apply only in the scenario
where the samples are independently and identically distributed (i.i.d.). Note that the i.i.d. assumption is typically not tested or derived from a data analysis. In many machine learning applications
this assumption does not hold. The observations received by the learning algorithm often have some
inherent temporal dependence, which is clear in system diagnosis or time series prediction problems. A typical example of time series data is stock pricing, where clearly prices of different stocks
on the same day or of the same stock on different days may be dependent.
This paper studies the scenario where the observations are drawn from a stationary mixing sequence,
a widely adopted assumption in the study of non-i.i.d. processes that implies a dependence between
observations that weakens over time [8, 10, 16, 17]. Our proofs are also based on the independent
block technique commonly used in such contexts [17] and a generalized version of McDiarmid's
inequality [7]. We prove novel stability-based generalization bounds that hold even with this more
general setting. These bounds strictly generalize the bounds given in the i.i.d. case and apply to all
stable learning algorithms thereby extending the usefulness of stability-bounds to non-i.i.d. scenar1
ios. It also illustrates their application to general classes of learning algorithms, including Support
Vector Regression (SVR) [15] and Kernel Ridge Regression [13].
Algorithms such as support vector regression (SVR) [14, 15] have been used in the context of time
series prediction in which the i.i.d. assumption does not hold, some with good experimental results [9, 12]. To our knowledge, the use of these algorithms in non-i.i.d. scenarios has not been
supported by any theoretical analysis. The stability bounds we give for SVR and many other kernel
regularization-based algorithms can thus be viewed as the first theoretical basis for their use in such
scenarios.
In Section 2, we will introduce the definitions for the non-i.i.d. problems we are considering and
discuss the learning scenarios. Section 3 gives our main generalization bounds based on stability, including the full proof and analysis. In Section 4, we apply these bounds to general kernel
regularization-based algorithms, including Support Vector Regression and Kernel Ridge Regression.
2 Preliminaries
We first introduce some standard definitions for dependent observations in mixing theory [5] and
then briefly discuss the learning scenarios in the non-i.i.d. case.
2.1 Non-i.i.d. Definitions
Definition 1. A sequence of random variables $Z = \{Z_t\}_{t=-\infty}^{+\infty}$ is said to be stationary if for any t and non-negative integers m and k, the random vectors $(Z_t, \ldots, Z_{t+m})$ and $(Z_{t+k}, \ldots, Z_{t+m+k})$ have the same distribution.
have the same distribution.
Thus the index t, or time, does not affect the distribution of a variable $Z_t$ in a stationary sequence. This does not imply independence, however. In particular, for $i < j < k$, $\Pr[Z_j \mid Z_i]$ may not equal $\Pr[Z_k \mid Z_i]$. The following is a standard definition giving a measure of the dependence of the random variables $Z_t$ within a stationary sequence. There are several equivalent definitions of this quantity; we adopt here that of [17].
quantity, we are adopting here that of [17].
?
Definition 2. Let Z = {Zt }t=?? be a stationary sequence of random variables. For any i, j ?
Z ? {??, +?}, let ?ij denote the ?-algebra generated by the random variables Zk , i ? k ? j.
Then, for any positive integer k, the ?-mixing and ?-mixing coefficients of the stochastic process Z
are defined as
i
h
?(k) = sup En
sup Pr[A | B] ? Pr[A] ?(k) = sup Pr[A | B] ? Pr[A]. (1)
?
n B???? A??n+k
n?
A??n+k
n
B???
?
Z is said to be ?-mixing (?-mixing) if ?(k) ? 0 (resp. ?(k) ? 0) as k ? ?. It is said to be
algebraically ?-mixing (algebraically ?-mixing) if there exist real numbers ?0 > 0 (resp. ?0 > 0)
and r > 0 such that ?(k) ? ?0 /k r (resp. ?(k) ? ?0 /k r ) for all k, exponentially mixing if there
exist real numbers ?0 (resp. ?0 > 0) and ?1 (resp. ?1 > 0) such that ?(k) ? ?0 exp(??1 k r ) (resp.
?(k) ? ?0 exp(??1 k r )) for all k.
Both $\beta(k)$ and $\varphi(k)$ measure the dependence of the events on those that occurred more than k units of time in the past. $\beta$-mixing is a weaker assumption than $\varphi$-mixing. We will be using a concentration inequality that leads to simple bounds but that applies to $\varphi$-mixing processes only. However, the main proofs presented in this paper are given in the more general case of $\beta$-mixing sequences. This is a standard assumption adopted in previous studies of learning in the presence of dependent observations [8, 10, 16, 17]. As pointed out in [16], $\beta$-mixing seems to be "just the right" assumption for carrying over several PAC-learning results to the case of weakly-dependent sample points. Several results have also been obtained in the more general context of $\alpha$-mixing but they seem to require the stronger condition of exponential mixing [11]. Mixing assumptions can be checked in some cases such as with Gaussian or Markov processes [10]. The mixing parameters can also be estimated in such cases.
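As a small sanity check of this last remark (ours, not from the paper), the dependence decay is explicit for a two-state stationary Markov chain: by the Markov property, conditioning on the past reduces to conditioning on the current state, and the deviation computed below decays geometrically in k at rate $|\lambda_2| = 0.7$, the second eigenvalue of the (illustrative) transition matrix.

    import numpy as np

    P = np.array([[0.9, 0.1],
                  [0.2, 0.8]])                  # illustrative transition matrix
    pi = np.array([2/3, 1/3])                   # its stationary distribution
    Pk = np.eye(2)
    for k in range(1, 11):
        Pk = Pk @ P                             # k-step transition probabilities
        # sup over states of |Pr[Z_{n+k}=a | Z_n=b] - Pr[Z_{n+k}=a]|
        print(k, np.abs(Pk - pi).max())         # decays like 0.7**k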
Most previous studies use a technique originally introduced by [1] based on independent blocks of
equal size [8, 10, 17]. This technique is particularly relevant when dealing with stationary $\beta$-mixing.
We will need a related but somewhat different technique since the blocks we consider may not have
the same size. The following lemma is a special case of Corollary 2.7 from [17].
Lemma 1 (Yu [17], Corollary 2.7). Let ? ? 1 and suppose that
function,
with
Qh is measurable
Q?
?
si
absolute value bounded by M , on a product probability space
j=1 ?j ,
i=1 ?ri where ri ?
si ? ri+1 for all i. Let Q be a probability measure on the product
space
with
marginal
measures Qi
Q
Qi+1 sj
i+1
si
i+1
on (?i , ?ri ), and let Q
be the marginal measure of Q on
j=1 ?j ,
j=1 ?rj , i = 1, . . . , ??1.
Q?
Let ?(Q) = sup1?i???1 ?(ki ), where ki = ri+1 ? si , and P = i=1 Qi . Then,
| E[h] ? E[h]| ? (? ? 1)M ?(Q).
Q
P
(2)
The lemma gives a measure of the difference between the distribution of $\mu$ blocks where the blocks are independent in one case and dependent in the other case. The distribution within each block is assumed to be the same in both cases. For a monotonically decreasing function $\beta$, we have $\beta(Q) = \beta(k^*)$, where $k^* = \min_i(k_i)$ is the smallest gap between blocks.
2.2 Learning Scenarios
We consider the familiar supervised learning setting where the learning algorithm receives a sample
of m labeled points $S = (z_1, \ldots, z_m) = ((x_1,y_1), \ldots, (x_m,y_m)) \in (X \times Y)^m$, where X is the input space and Y the set of labels (Y = $\mathbb{R}$ in the regression case), both assumed to be measurable. For a fixed learning algorithm, we denote by $h_S$ the hypothesis it returns when trained on the sample S. The error of a hypothesis on a pair $z \in X \times Y$ is measured in terms of a cost function $c: Y \times Y \to \mathbb{R}_+$. Thus, $c(h(x), y)$ measures the error of a hypothesis h on a pair (x, y), with $c(h(x), y) = (h(x)-y)^2$ in the standard regression case. We will use the shorthand $c(h, z) := c(h(x), y)$ for a hypothesis h and $z = (x,y) \in X \times Y$ and will assume that c is upper bounded by a constant $M > 0$. We denote by $\widehat{R}(h)$ the empirical error of a hypothesis h for a training sample $S = (z_1, \ldots, z_m)$:
$$\widehat{R}(h) = \frac{1}{m} \sum_{i=1}^{m} c(h, z_i). \qquad (3)$$
In the standard machine learning scenario, the sample pairs z1 , . . . , zm are assumed to be i.i.d., a
restrictive assumption that does not always hold in practice. We will consider here the more general
case of dependent samples drawn from a stationary mixing sequence Z over $X \times Y$. As in the i.i.d.
case, the objective of the learning algorithm is to select a hypothesis with small error over future
samples. But, here, we must distinguish two versions of this problem.
In the most general version, future samples depend on the training sample S and thus the generalization error or true error of the hypothesis hS trained on S must be measured by its expected error
conditioned on the sample S:
$$R(h_S) = \mathop{\mathrm{E}}_{z}[c(h_S, z) \mid S]. \qquad (4)$$
This is the most realistic setting in this context, which matches time series prediction problems.
A somewhat less realistic version is one where the samples are dependent, but the test points are
assumed to be independent of the training sample S. The generalization error of the hypothesis hS
trained on S is then:
$$R(h_S) = \mathop{\mathrm{E}}_{z}[c(h_S, z) \mid S] = \mathop{\mathrm{E}}_{z}[c(h_S, z)]. \qquad (5)$$
This setting seems less natural since if samples are dependent, then future test points must also
depend on the training points, even if that dependence is relatively weak due to the time interval
after which test points are drawn. Nevertheless, it is this somewhat less realistic setting that has
been studied by all previous machine learning studies that we are aware of [8, 10, 16, 17], even when
examining specifically a time series prediction problem [10]. Thus, the bounds derived in these
studies cannot be applied to the more general setting.
We will consider instead the most general setting with the definition of the generalization error based
on Eq. 4. Clearly, our analysis applies to the less general setting just discussed as well.
3 Non-i.i.d. Stability Bounds
This section gives generalization bounds for $\hat\beta$-stable algorithms over a mixing stationary distribution.¹ The first two sections present our main proofs, which hold for $\beta$-mixing stationary distributions. In the third section, we will be using a concentration inequality that applies to $\varphi$-mixing processes only.
The condition of $\hat\beta$-stability is an algorithm-dependent property first introduced in [4] and [6]. It has been later used successfully by [2, 3] to show algorithm-specific stability bounds for i.i.d. samples. Roughly speaking, a learning algorithm is said to be stable if small changes to the training set do not produce large deviations in its output. The following gives the precise technical definition.
Definition 3. A learning algorithm is said to be (uniformly) $\hat\beta$-stable if the hypotheses it returns for any two training samples S and S' that differ by a single point satisfy
$$\forall z \in X \times Y, \quad |c(h_S, z) - c(h_{S'}, z)| \le \hat\beta. \qquad (6)$$
Many generalization error bounds rely on McDiarmid's inequality. But this inequality requires the random variables to be i.i.d. and thus is not directly applicable in our scenario. Instead, we will use a theorem that extends McDiarmid's inequality to general mixing distributions (Theorem 1, Section 3.3).
To obtain a stability-based generalization bound, we will apply this theorem to $\Phi(S) = R(h_S) - \widehat{R}(h_S)$. To do so, we need to show, as with the standard McDiarmid's inequality, that $\Phi$ is a Lipschitz function and, to make it useful, bound $\mathrm{E}[\Phi]$. The next two sections describe how we achieve both of
these in this non-i.i.d. scenario.
3.1 Lipschitz Condition
As discussed in Section 2.2, in the most general scenario, test points depend on the training sample.
We first present a lemma that relates the expected value of the generalization error in that scenario
and the same expectation in the scenario where the test point is independent of the training sample.
We denote by $R(h_S) = \mathrm{E}_z[c(h_S, z) \mid S]$ the expectation in the dependent case and by $\widetilde{R}(h_S) = \mathrm{E}_{\tilde z}[c(h_{S_b}, \tilde z)]$ that expectation when the test points are assumed independent of the training, with $S_b$ denoting a sequence similar to S but with the last b points removed. Figure 1(a) illustrates that sequence. The block $S_b$ is assumed to have exactly the same distribution as the corresponding block of the same size in S.
Lemma 2. Assume that the learning algorithm is $\hat\beta$-stable and that the cost function c is bounded by M. Then, for any sample S of size m drawn from a $\beta$-mixing stationary distribution and for any $b \in \{0, \ldots, m\}$, the following holds:
$$\big| \mathop{\mathrm{E}}_S[R(h_S)] - \mathop{\mathrm{E}}_S[\widetilde{R}(h_S)] \big| \le b\hat\beta + \beta(b)\, M. \qquad (7)$$
Proof. The $\hat\beta$-stability of the learning algorithm implies that
$$\mathop{\mathrm{E}}_S[R(h_S)] = \mathop{\mathrm{E}}_{S,z}[c(h_S, z)] \le \mathop{\mathrm{E}}_{S,z}[c(h_{S_b}, z)] + b\hat\beta. \qquad (8)$$
The application of Lemma 1 yields
$$\mathop{\mathrm{E}}_S[R(h_S)] \le \mathop{\mathrm{E}}_{S,\tilde z}[c(h_{S_b}, \tilde z)] + b\hat\beta + \beta(b)M = \mathop{\mathrm{E}}_S[\widetilde{R}(h_S)] + b\hat\beta + \beta(b)M. \qquad (9)$$
The other side of the inequality of the lemma can be shown following the same steps.
We can now prove a Lipschitz bound for the function $\Phi$.
¹ The standard variable used for the stability coefficient is $\beta$. To avoid the confusion with the $\beta$-mixing coefficient, we will use $\hat\beta$ instead.
[Figure 1: four block diagrams, panels (a)-(d), showing the sequences $S_b$, $S_i$, and $S_{i,b}$ derived from S; graphics not recoverable.]
Figure 1: Illustration of the sequences derived from S that are considered in the proofs.
Lemma 3. Let $S = (z_1, z_2, \ldots, z_m)$ and $S^i = (z'_1, z'_2, \ldots, z'_m)$ be two sequences drawn from a $\beta$-mixing stationary process that differ only in point $i \in [1, m]$, and let $h_S$ and $h_{S^i}$ be the hypotheses returned by a $\hat\beta$-stable algorithm when trained on each of these samples. Then, for any $i \in [1, m]$, the following inequality holds:
$$|\Phi(S) - \Phi(S^i)| \le (b+1)\, 2\hat\beta + 2\beta(b)\, M + \frac{M}{m}. \qquad (10)$$
Proof. To prove this inequality, we first bound the difference of the empirical errors as in [3], then the difference of the true errors. Bounding the difference of costs on agreeing points with $\hat\beta$ and the one that disagrees with M yields
$$|\widehat{R}(h_S) - \widehat{R}(h_{S^i})| = \frac{1}{m} \sum_{j=1}^{m} |c(h_S, z_j) - c(h_{S^i}, z'_j)| = \frac{1}{m} \sum_{j \neq i} |c(h_S, z_j) - c(h_{S^i}, z_j)| + \frac{1}{m} |c(h_S, z_i) - c(h_{S^i}, z'_i)| \le \hat\beta + \frac{M}{m}. \qquad (11)$$
Now, applying Lemma 2 to both generalization error terms and using $\hat\beta$-stability result in
$$|R(h_S) - R(h_{S^i})| \le |\widetilde{R}(h_S) - \widetilde{R}(h_{S^i})| + 2b\hat\beta + 2\beta(b)M \qquad (12)$$
$$= \Big| \mathop{\mathrm{E}}_{\tilde z}\big[c(h_{S_b}, \tilde z) - c(h_{S_b^i}, \tilde z)\big] \Big| + 2b\hat\beta + 2\beta(b)M \le \hat\beta + 2b\hat\beta + 2\beta(b)M.$$
The lemma's statement is obtained by combining inequalities 11 and 12.
3.2 Bound on $\mathrm{E}[\Phi]$
As mentioned earlier, to make the bound useful, we also need to bound $\mathrm{E}_S[\Phi(S)]$. This is done by analyzing independent blocks using Lemma 1.
Lemma 4. Let $h_S$ be the hypothesis returned by a $\hat\beta$-stable algorithm trained on a sample S drawn from a stationary $\beta$-mixing distribution. Then, for all $b \in [1, m]$, the following inequality holds:
$$\mathop{\mathrm{E}}_S[|\Phi(S)|] \le (6b + 1)\hat\beta + 3\beta(b)M. \qquad (13)$$
Proof. We first analyze the term $\mathrm{E}_S[\widehat{R}(h_S)]$. Let $S_i$ be the sequence S with the b points before and after point $z_i$ removed. Figure 1(b) illustrates this definition. $S_i$ is thus made of three blocks. Let $\widetilde{S}_i$ denote a similar set of three blocks each with the same distribution as the corresponding block in $S_i$, but such that the three blocks are independent. In particular, the middle block reduced to one point $\tilde z_i$ is independent of the two others. By the $\hat\beta$-stability of the algorithm,
$$\mathop{\mathrm{E}}_S[\widehat{R}(h_S)] = \mathop{\mathrm{E}}_S\Big[\frac{1}{m}\sum_{i=1}^{m} c(h_S, z_i)\Big] \le \mathop{\mathrm{E}}_{S_i}\Big[\frac{1}{m}\sum_{i=1}^{m} c(h_{S_i}, z_i)\Big] + 2b\hat\beta. \qquad (14)$$
Applying Lemma 1 to the first term of the right-hand side yields
$$\mathop{\mathrm{E}}_S[\widehat{R}(h_S)] \le \mathop{\mathrm{E}}_{\widetilde{S}_i}\Big[\frac{1}{m}\sum_{i=1}^{m} c(h_{\widetilde{S}_i}, \tilde z_i)\Big] + 2b\hat\beta + 2\beta(b)M. \qquad (15)$$
Combining the independent block sequences associated to $\widehat{R}(h_S)$ and $R(h_S)$ will help us prove the lemma in a way similar to the i.i.d. case treated in [3]. Let $S_b$ be defined as in the proof of Lemma 2. To deal with independent block sequences defined with respect to the same hypothesis, we will consider the sequence $S_{i,b} = S_i \cap S_b$, which is illustrated by Figure 1(c). This can result in as many as four blocks. As before, we will consider a sequence $\widetilde{S}_{i,b}$ with a similar set of blocks each with the same distribution as the corresponding blocks in $S_{i,b}$, but such that the blocks are independent. Since three blocks of at most b points are removed from each hypothesis, by the $\hat\beta$-stability of the learning algorithm, the following holds:
$$\mathop{\mathrm{E}}_S[\Phi(S)] = \mathop{\mathrm{E}}_S[\widehat{R}(h_S) - R(h_S)] = \mathop{\mathrm{E}}_{S,z}\Big[\frac{1}{m}\sum_{i=1}^{m} c(h_S, z_i) - c(h_S, z)\Big] \qquad (16)$$
$$\le \mathop{\mathrm{E}}_{S_{i,b},z}\Big[\frac{1}{m}\sum_{i=1}^{m} c(h_{S_{i,b}}, z_i) - c(h_{S_{i,b}}, z)\Big] + 6b\hat\beta. \qquad (17)$$
Now, the application of Lemma 1 to the difference of two cost functions also bounded by M as in the right-hand side leads to
$$\mathop{\mathrm{E}}_S[\Phi(S)] \le \mathop{\mathrm{E}}_{\widetilde{S}_{i,b},\tilde z}\Big[\frac{1}{m}\sum_{i=1}^{m} c(h_{\widetilde{S}_{i,b}}, \tilde z_i) - c(h_{\widetilde{S}_{i,b}}, \tilde z)\Big] + 6b\hat\beta + 3\beta(b)M. \qquad (18)$$
Since $\tilde z$ and $\tilde z_i$ are independent and the distribution is stationary, they have the same distribution and we can replace $\tilde z_i$ with $\tilde z$ in the empirical cost and write
$$\mathop{\mathrm{E}}_S[\Phi(S)] \le \mathop{\mathrm{E}}_{\widetilde{S}_{i,b},\tilde z}\Big[\frac{1}{m}\sum_{i=1}^{m} c(h_{\widetilde{S}_{i,b}^i}, \tilde z) - c(h_{\widetilde{S}_{i,b}}, \tilde z)\Big] + 6b\hat\beta + 3\beta(b)M \le \hat\beta + 6b\hat\beta + 3\beta(b)M, \qquad (19)$$
where $\widetilde{S}_{i,b}^i$ is the sequence derived from $\widetilde{S}_{i,b}$ by replacing $\tilde z_i$ with $\tilde z$. The last inequality holds by $\hat\beta$-stability of the learning algorithm. The other side of the inequality in the statement of the lemma can be shown following the same steps.
3.3 Main Results
This section presents several theorems that constitute the main results of this paper. We will use the
following theorem which extends McDiarmid's inequality to $\varphi$-mixing distributions.
Theorem 1 (Kontorovich and Ramanan [7], Thm. 1.1). Let $\Phi: Z^m \to \mathbb{R}$ be a function defined over a countable space Z. If $\Phi$ is l-Lipschitz with respect to the Hamming metric for some $l > 0$, then the following holds for all $\epsilon > 0$:
$$\Pr_{Z}\big[|\Phi(Z) - \mathrm{E}[\Phi(Z)]| > \epsilon\big] \le 2 \exp\left(\frac{-\epsilon^2}{2 m l^2 \|\Delta_m\|_\infty^2}\right), \qquad (20)$$
where $\|\Delta_m\|_\infty \le 1 + 2 \sum_{k=1}^{m} \varphi(k)$.
Theorem 2 (General Non-i.i.d. Stability Bound). Let $h_S$ denote the hypothesis returned by a $\hat\beta$-stable algorithm trained on a sample S drawn from a $\varphi$-mixing stationary distribution and let c be a measurable non-negative cost function upper bounded by $M > 0$; then for any $b \in [0, m]$ and any $\epsilon > 0$, the following generalization bound holds:
$$\Pr_S\Big[\big|R(h_S) - \widehat{R}(h_S)\big| > \epsilon + (6b+1)\hat\beta + 6M\varphi(b)\Big] \le 2 \exp\left(\frac{-\epsilon^2 \big(1 + 2\sum_{i=1}^{m}\varphi(i)\big)^{-2}}{2m\big((b+1)2\hat\beta + 2M\varphi(b) + M/m\big)^2}\right).$$
Proof. The theorem follows directly from the application of Lemma 3 and Lemma 4 to Theorem 1.
The theorem gives a general stability bound for $\varphi$-mixing stationary sequences. If we further assume that the sequence is algebraically $\varphi$-mixing, that is for all k, $\varphi(k) = \varphi_0 k^{-r}$ for some $r > 1$, then we can solve for the value of b to optimize the bound.
Theorem 3 (Non-i.i.d. Stability Bound for Algebraically Mixing Sequences). Let $h_S$ denote the hypothesis returned by a $\hat\beta$-stable algorithm trained on a sample S drawn from an algebraically $\varphi$-mixing stationary distribution, $\varphi(k) = \varphi_0 k^{-r}$ with $r > 1$, and let c be a measurable non-negative cost function upper bounded by $M > 0$; then for any $\epsilon > 0$, the following generalization bound holds:
$$\Pr_S\Big[\big|R(h_S) - \widehat{R}(h_S)\big| > \epsilon + \hat\beta + (r+1)6M\varphi(b)\Big] \le 2 \exp\left(\frac{-\epsilon^2 \big(4 + 2/(r-1)\big)^{-2}}{2m\big(2\hat\beta + (r+1)2M\varphi(b) + M/m\big)^2}\right),$$
where $\varphi(b) = \varphi_0 \left(\frac{\hat\beta}{r\varphi_0 M}\right)^{r/(r+1)}$.
Proof. For an algebraically mixing sequence, the value of b minimizing the bound of Theorem 2 satisfies $\hat\beta b = rM\varphi(b)$, which gives $b = \left(\frac{\hat\beta}{r\varphi_0 M}\right)^{-1/(r+1)}$ and $\varphi(b) = \varphi_0 \left(\frac{\hat\beta}{r\varphi_0 M}\right)^{r/(r+1)}$. The following term can be bounded as
$$1 + 2\sum_{i=1}^{m} \varphi(i) = 1 + 2\sum_{i=1}^{m} i^{-r} \le 1 + 2\Big(1 + \int_1^m i^{-r}\, di\Big) = 1 + 2\Big(1 + \frac{m^{1-r} - 1}{1-r}\Big). \qquad (21)$$
For $r > 1$, the exponent of m is negative, and so we can bound this last term by $3 + 2/(r-1)$. Plugging in this value and the minimizing value of b in the bound of Theorem 2 yields the statement of the theorem.
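As a quick numerical illustration of the optimality condition in this proof, $\hat\beta b = rM\varphi(b)$ indeed holds at the chosen b; the values below are arbitrary and for illustration only.

    # beta_hat * b = r * M * phi(b) at the minimizing block size (illustrative values)
    beta_hat, phi0, r, M = 1e-3, 0.5, 2.0, 1.0
    b = (beta_hat / (r * phi0 * M)) ** (-1.0 / (r + 1))
    phi_b = phi0 * b ** (-r)
    print(b, beta_hat * b, r * M * phi_b)       # b = 10.0; both sides equal 0.01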
In the case of a zero mixing coefficient ($\varphi = 0$ and $b = 0$), the bounds of Theorem 2 and Theorem 3 coincide with the i.i.d. stability bound of [3]. In order for the right-hand side of these bounds to converge, we must have $\hat\beta = o(1/\sqrt{m})$ and $\varphi(b) = o(1/\sqrt{m})$. For several general classes of algorithms, $\hat\beta \le O(1/m)$ [3]. In the case of algebraically mixing sequences with $r > 1$ assumed in Theorem 3, $\hat\beta \le O(1/m)$ implies $\varphi(b) = \varphi_0 (\hat\beta/(r\varphi_0 M))^{r/(r+1)} < O(1/\sqrt{m})$. The next section illustrates the application of Theorem 3 to several general classes of algorithms.
4 Application
We now present the application of our stability bounds to several algorithms in the case of an algebraically mixing sequence. Our bound applies to all algorithms based on the minimization of a
regularized objective function based on the norm $\|\cdot\|_K$ in a reproducing kernel Hilbert space, where K is a positive definite symmetric kernel:
$$\mathop{\mathrm{argmin}}_{h \in H}\ \frac{1}{m} \sum_{i=1}^{m} c(h, z_i) + \lambda \|h\|_K^2, \qquad (22)$$
under some general conditions, since these algorithms are stable with $\hat\beta \le O(1/m)$ [3]. Two specific instances of these algorithms are SVR, for which the cost function is based on the $\epsilon$-insensitive cost:
$$c(h, z) = |h(x) - y|_\epsilon = \begin{cases} 0 & \text{if } |h(x)-y| \le \epsilon, \\ |h(x)-y| - \epsilon & \text{otherwise}, \end{cases} \qquad (23)$$
and Kernel Ridge Regression [13], for which $c(h, z) = (h(x) - y)^2$.
Corollary 1. Assume a bounded output $Y = [0, B]$, for some $B > 0$, and assume that $K(x,x) \le \kappa$ for all x for some $\kappa > 0$. Let $h_S$ denote the hypothesis returned by the algorithm when trained on a sample S drawn from an algebraically $\varphi$-mixing stationary distribution. Then, with probability at least $1 - \delta$, the following generalization bounds hold for
a. Support vector regression (SVR):
$$R(h_S) \le \widehat{R}(h_S) + \frac{13\kappa^2}{2\lambda m} + 5\left(\frac{3\kappa^2}{\lambda} + \kappa\sqrt{\frac{B}{\lambda}}\right)\sqrt{\frac{2\ln(1/\delta)}{m}}; \qquad (24)$$
b. Kernel Ridge Regression (KRR):
$$R(h_S) \le \widehat{R}(h_S) + \frac{26\kappa^2 B^2}{\lambda m} + 5\left(\frac{12\kappa^2 B^2}{\lambda} + \kappa\sqrt{\frac{B}{\lambda}}\right)\sqrt{\frac{2\ln(1/\delta)}{m}}. \qquad (25)$$
Proof. It has been shown in [3] that for SVR $\hat\beta \le \kappa^2/(2\lambda m)$ and that $M < \kappa\sqrt{B/\lambda}$, and for KRR $\hat\beta \le 2\kappa^2 B^2/(\lambda m)$ and $M < \kappa\sqrt{B/\lambda}$. Plugging in these values in the bound of Theorem 3 and using the lower bound on r, $r > 1$, yield the statement of the corollary.
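For intuition about magnitudes, the additive gap term of bound (24) can be evaluated numerically. The helper below is ours, not the authors', and the input values are arbitrary.

    import math

    def svr_gap_bound(kappa, B, lam, m, delta):
        # right-hand side of (24) minus the empirical error (a sketch)
        return (13 * kappa**2 / (2 * lam * m)
                + 5 * (3 * kappa**2 / lam + kappa * math.sqrt(B / lam))
                * math.sqrt(2 * math.log(1 / delta) / m))

    print(svr_gap_bound(kappa=1.0, B=1.0, lam=0.1, m=100000, delta=0.01))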
These bounds give, to the best of our knowledge, the first stability-based generalization bounds for
SVR and KRR in a non-i.i.d. scenario. Similar bounds can be obtained for other families of algorithms such as maximum entropy discrimination, which can be shown to have comparable stability
properties [3]. Our bounds have the same convergence behavior as those derived by [3] in the i.i.d.
case. In fact, they differ only by some constants. As in the i.i.d. case, they are non-trivial when the
condition λ ≫ 1/√m on the regularization parameter holds for all large values of m. It would be
interesting to give a quantitative comparison of our bounds and the generalization bounds of [10]
based on covering numbers for mixing stationary distributions, in the scenario where test points
are independent of the training sample. In general, because the bounds of [10] are not algorithm-dependent, one can expect tighter bounds using stability, provided that a tight bound is given on
the stability coefficient. The comparison also depends on how fast the covering number grows with
the sample size and trade-off parameters such as λ. For a fixed λ, the asymptotic behavior of our
stability bounds for SVR and KRR is tight.
5 Conclusion
Our stability bounds for mixing stationary sequences apply to large classes of algorithms, including
SVR and KRR, extending existing i.i.d. bounds to the setting of weakly dependent observations. Since
they are algorithm-specific, these bounds can often be tighter than other generalization bounds.
Weaker notions of stability might help further improve or refine them.
References
[1] S. N. Bernstein. Sur l'extension du théorème limite du calcul des probabilités aux sommes de quantités dépendantes. Math. Ann., 97:1-59, 1927.
[2] O. Bousquet and A. Elisseeff. Algorithmic stability and generalization performance. In NIPS 2000, 2001.
[3] O. Bousquet and A. Elisseeff. Stability and generalization. JMLR, 2:499-526, 2002.
[4] L. Devroye and T. Wagner. Distribution-free performance bounds for potential function rules. IEEE Transactions on Information Theory, 25:601-604, 1979.
[5] P. Doukhan. Mixing: Properties and Examples. Springer-Verlag, 1994.
[6] M. Kearns and D. Ron. Algorithmic stability and sanity-check bounds for leave-one-out cross-validation. In Computational Learning Theory, pages 152-162, 1997.
[7] L. Kontorovich and K. Ramanan. Concentration inequalities for dependent random variables via the martingale method, 2006.
[8] A. Lozano, S. Kulkarni, and R. Schapire. Convergence and consistency of regularized boosting algorithms with stationary β-mixing observations. In NIPS, 2006.
[9] D. Mattera and S. Haykin. Support vector machines for dynamic reconstruction of a chaotic system. In Advances in Kernel Methods: Support Vector Learning, pages 211-241. MIT Press, Cambridge, MA, 1999.
[10] R. Meir. Nonparametric time series prediction through adaptive model selection. Machine Learning, 39(1):5-34, 2000.
[11] D. Modha and E. Masry. On the consistency in nonparametric estimation under mixing assumptions. IEEE Transactions on Information Theory, 44:117-133, 1998.
[12] K.-R. Müller, A. Smola, G. Rätsch, B. Schölkopf, J. K., and V. Vapnik. Predicting time series with support vector machines. In Proceedings of ICANN'97, LNCS, pages 999-1004. Springer, 1997.
[13] C. Saunders, A. Gammerman, and V. Vovk. Ridge Regression Learning Algorithm in Dual Variables. In Proceedings of ICML '98, pages 515-521. Morgan Kaufmann Publishers Inc., 1998.
[14] B. Schölkopf and A. Smola. Learning with Kernels. MIT Press: Cambridge, MA, 2002.
[15] V. N. Vapnik. Statistical Learning Theory. Wiley-Interscience, New York, 1998.
[16] M. Vidyasagar. Learning and Generalization: With Applications to Neural Networks. Springer, 2003.
[17] B. Yu. Rates of convergence for empirical processes of stationary mixing sequences. The Annals of Probability, 22(1):94-116, Jan. 1994.
2,469 | 324 | Discrete Affine Wavelet Transforms For Analysis
And Synthesis Of Feedforward Neural Networks
Y. C. Pati and P. S. Krishnaprasad
Systems Research Center and Department of Electrical Engineering
University of Maryland, College Park, MD 20742
Abstract
In this paper we show that discrete affine wavelet transforms can provide
a tool for the analysis and synthesis of standard feedforward neural networks. It is shown that wavelet frames for L2(IR) can be constructed based
upon sigmoids. The spatia-spectral localization property of wavelets can
be exploited in defining the topology and determining the weights of a
feedforward network. Training a network constructed using the synthesis procedure described here involves minimization of a convex cost functional and therefore avoids pitfalls inherent in standard backpropagation
algorithms. Extension of these methods to L2(IRN) is also discussed.
1 INTRODUCTION
Feedforward type neural network models constructed from empirical data have been
found to display significant predictive power [6]. Mathematical justification in support of such predictive power may be drawn from various density and approximation
theorems [1, 2, 5]. Typically this latter work doesn't take into account the spectral features apparent in the data. In the present paper, we note that the discrete
affine wavelet transform provides a natural framework for the analysis and synthesis of feedforward networks. This new tool takes account of spatial and spectral
localization properties present in the data.
Throughout most of this paper we restrict discussion to networks designed to approximate mappings in L2(IR). Extensions to L2(IRN) are briefly discussed in
Section 4 and will be further developed in [10].
2 WAVELETS AND FRAMES
Consider a function f of one real variable as a static feedforward input-output map

y = f(x).

For simplicity assume f ∈ L²(ℝ), the space of square integrable functions on the real
line. Suppose a sequence {fₙ} ⊂ L²(ℝ) is given such that, for suitable constants
A > 0, B < ∞,

$$A\|f\|^2 \le \sum_n |\langle f, f_n\rangle|^2 \le B\|f\|^2 \qquad (1)$$

for all f ∈ L²(ℝ). Such a sequence is said to be a frame. In particular, orthonormal
bases are frames. The above definition (1) also applies in the general Hilbert space
setting with the appropriate inner product. Let T denote the bounded operator
from L²(ℝ) to ℓ²(ℤ), the space of square summable sequences, defined by

$$(Tf) = \{\langle f, f_n\rangle\}_{n\in\mathbb{Z}}.$$
In terms of the frame operator T, it is possible to give series expansions,
$$f = \sum_n \tau_n \langle f, f_n\rangle = \sum_n f_n \langle f, \tau_n\rangle, \qquad (2)$$

where {τₙ = (T*T)⁻¹fₙ} is the dual frame.

A particular class of frames leads to affine wavelet expansions. Consider a family
of functions {ψₘₙ} of the form

$$\psi_{mn}(x) = a^{-m/2}\,\psi(a^{-m}x - nb), \qquad (3)$$

where the function ψ satisfies appropriate admissibility conditions [3, 4] (e.g. ∫ψ = 0).
Then for suitable choices of a > 1, b > 0, the family {ψₘₙ} is a frame for L²(ℝ).
Hence there exists a convergent series representation,

$$f = \sum_m \sum_n c_{mn}\,\psi_{mn}. \qquad (4)$$

The frame condition (1) guarantees that the operator T*T is boundedly invertible.
Also, since ‖I − 2(A + B)⁻¹T*T‖ < 1, (T*T)⁻¹ is given by a Neumann series [3].
Hence, given f, the expansion coefficients cₘₙ can be computed.

The representation (4) of f above as a series in dilations and translations of a single
function ψ is called a wavelet expansion, and the function ψ is known as the analyzing
or mother wavelet for the expansion.
3 FEEDFORWARD NETWORKS AND WAVELET EXPANSIONS
Consider the input-output relationship of a feedforward network with one input,
one output, and a single hidden layer,

$$y(x) = \sum_n c_n\, g(a_n x + b_n), \qquad (5)$$

where aₙ are the weights from the input node to the hidden layer, bₙ are the
biases on the hidden layer nodes, cₙ are the weights from the hidden layer to the
output layer, and g defines the activation function of the hidden layer nodes. It is
clear from (5) that the output of such a network is given in terms of dilations and
translations of a single function g.
3.1 WAVELET ANALYSIS OF FEEDFORWARD NETWORKS

Let g be a 'sigmoidal' function, e.g. g(x) = 1/(1 + e⁻ˣ), and let ψ be defined as

$$\psi(x) = g(x+2) + g(x-2) - 2g(x). \qquad (6)$$
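For concreteness, the following sketch (our illustration, not the paper's code) evaluates the sigmoid-based mother wavelet of (6) and the dilated/translated family of (3); the stepsizes a and b below are arbitrary placeholders, not the values derived in [9].

```python
# Illustrative sketch (not from the paper): the sigmoid-based mother
# wavelet of (6) and its dilated/translated family psi_mn from (3).
import numpy as np

def g(x):
    """Sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-x))

def psi(x):
    """Mother wavelet built from three sigmoids, eq. (6)."""
    return g(x + 2) + g(x - 2) - 2 * g(x)

def psi_mn(x, m, n, a=2.0, b=1.0):
    """Dilated/translated family of eq. (3); a, b are example stepsizes."""
    return a ** (-m / 2.0) * psi(a ** (-m) * x - n * b)

if __name__ == "__main__":
    x = np.linspace(-4, 4, 9)
    print(psi(x))              # roughly zero-mean bump, as in Figure 1
    print(psi_mn(x, m=1, n=0))
```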
Then it is possible (see [9] for details) to determine a translation stepsize b and

[Figure 1: Mother wavelet ψ (left) and the magnitude of its Fourier transform |ψ̂|² (right); the horizontal axes are time (seconds) and log frequency (Hz).]
a dilation stepsize a for which the family of functions ψₘₙ as defined by (3) is a
frame for L²(ℝ). Note that wavelet frames for L²(ℝ) can be constructed based
upon other combinations of sigmoids (e.g. ψ(x) = g(x+p) + g(x−p) − 2g(x), p > 0),
and that we use the mother wavelet of (6) only to illustrate some properties which
are common to many such combinations.
It follows from the above discussion that a feedforward network having one hidden
layer with sigmoidal activation functions can represent any function in L²(ℝ). In
such a network, (6) says that the sigmoidal nodes should be grouped together in sets
of three so as to form the mother wavelet ψ.
3.2 WAVELETS AND SYNTHESIS OF FEEDFORWARD NETWORKS
In defining the topology of a feedforward network we make use of the fact that the
function ψ is well concentrated in both spatial and spectral domains (see Figure
1). Dilating ψ corresponds to shifting the spectral concentration and translating ψ
corresponds to shifting the spatial concentration.

The synthesis procedure we describe here is based upon estimates of the spatial
and spectral localization of the unknown mapping as determined from samples
provided by the training data. Spatial locality of interest can easily be determined
by examination of the training data or by introducing a priori assumptions as to the
region over which it is desired to approximate the unknown mapping. Estimates of
the appropriate spectral locality are also possible via preprocessing of the training
data.

Let Qₘₙ and Q_f respectively denote the spatio-spectral concentrations of the
wavelet ψₘₙ and of f. Thus Qₘₙ and Q_f are rectangular regions in the spatio-spectral plane (see Figure 2) which contain 'most' of the energy in the functions
ψₘₙ and f. More precise definitions of these concentrations can be found in [9].
Assuming that Q_f has been estimated from the training data, we choose only those
[Figure 2: Spatio-spectral concentrations Qₘₙ and Q_f of wavelets ψₘₙ and unknown map f.]
elements of the frame {ψₘₙ} which contribute 'significantly' to the region Q_f by
defining an index set L_f ⊂ ℤ² in terms of the overlap of Qₘₙ with Q_f, where μ
is the Lebesgue measure on ℝ². Since f is concentrated in Q_f, by choosing
L_f as above, a 'good' approximation of f can be obtained in terms of the finite set
of frame elements with indices in L_f. That is, f can be approximated by f̂, where

$$\hat{f} = \sum_{(m,n)\in L_f} c_{mn}\,\psi_{mn} \qquad (7)$$
for some coefficients {cₘₙ : (m, n) ∈ L_f}.
Having determined L_f, a network is constructed to implement the appropriate
wavelets ψₘₙ. This is easily accomplished by choosing the number of sigmoidal
hidden layer nodes to be M = 3 × |L_f| and then grouping them together in sets of
three to implement ψ as in (6). Weights from the input to the hidden layer are set
to provide the required dilations of ψ, and biases on the hidden layer nodes are set
to provide the required translations.
3.2.1 Computation of Coefficients

By the above construction, all weights in the network have been fixed except for the
weights from the hidden layer to the output, which specify the coefficients cₘₙ in
(7). These coefficients can be computed using a simple gradient descent algorithm
on the standard cost function of backpropagation. Since the cost function is convex
in the remaining weights, only globally minimizing solutions exist.
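A minimal sketch of this step (ours, not the paper's code): with the hidden-layer weights fixed to implement the wavelets ψₘₙ, the output coefficients solve a convex least-squares problem, so a direct solve (or gradient descent) reaches the global minimum.

```python
# Illustrative sketch (not from the paper): fitting the output-layer
# coefficients c_mn over a chosen index set L_f by least squares.
import numpy as np

def fit_coefficients(x, y, index_set, a=2.0, b=1.0):
    """Least-squares fit of c_mn over the chosen index set L_f."""
    g = lambda t: 1.0 / (1.0 + np.exp(-t))
    psi = lambda t: g(t + 2) + g(t - 2) - 2 * g(t)
    # Design matrix: one column per frame element psi_mn.
    Phi = np.column_stack([
        a ** (-m / 2.0) * psi(a ** (-m) * x - n * b) for (m, n) in index_set
    ])
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return c

if __name__ == "__main__":
    x = np.linspace(-3, 3, 200)
    y = np.exp(-x ** 2) * np.cos(4 * x)          # made-up target mapping
    L_f = [(m, n) for m in (0, 1) for n in range(-4, 5)]
    c = fit_coefficients(x, y, L_f)
    print(c.shape)   # one weight per hidden-layer wavelet group
```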
3.2.2 Simulations

Figure 3 shows the results of a simple simulation example. The solid line in Figure
3 indicates the original mapping f, which was defined via the inverse Fourier transform of a randomly generated, approximately bandlimited spectrum. Using a single
dilation of ψ which covered the frequency band sufficiently well, together with the required
translations, the dashed curve shows the learned network approximation.
Figure 3: Simulation Using Network Synthesis Procedure. Solid Curve: Original
Function, Dashed Curve: Network Reconstruction.
4 DISCUSSION AND CONCLUSIONS
It has been demonstrated here that affine wavelet expansions provide a framework
within which feedforward networks designed to approximate mappings in L²(ℝ) can
be understood. In the case when the mapping is known, the expansion coefficients,
and therefore all weights in the network, can be computed. Hence the wavelet
transform method (and in general any transform method) not only gives us representability of certain classes of mappings by feedforward networks, but also tells
us what the representation should be. Herein lies an essential difference between
the wavelet methods discussed here and arguments based upon density in function
spaces.
In addition to providing arguments in support of the approximating power of feedforward networks, the wavelet framework also suggests one method of choosing
network topology (in this case the number of hidden layer nodes) and reducing
the training problem to a convex optimization problem. The synthesis technique
suggested is based upon spatial and spectral localization which is provided by the
wavelet transform.
Most useful applications of feedforward networks involve the approximation of mappings with higher dimensional domains e.g. mappings in L2(JRN). Discrete affine
wavelet transforms can be applied in higher dimensions as well (see e.g. [7] and [8]).
Wavelet transforms in L2(IRN) can also be defined with respect to mother wavelets
constructed from sigmoids combined in a manner which doesn't deviate from standard feedforward network architectures [10]. Figure 4 shows a mother wavelet for
L2(IR2) constructed from sigmoids. In higher dimensions it is possible to use more
than one analyzing wavelet [7], each having certain orientation selectivity in addition to spatial and spectral localization. If orientation selectivity is not essential,
an isotropic wavelet such as that in Figure 4 can be used.
Figure 4: Two-Dimensional Isotropic Wavelet From Sigmoids
The wavelet formulation of this paper can also be used to generate an orthonormal
basis of compactly supported wavelets within a standard feedforward network architecture. If the sigmoidal function g in Equation (6) is chosen as a discontinuous
threshold function, the resulting wavelet ψ is the Haar function, which thereby results in the Haar transform. Dilations of the Haar function in powers of 2 (a = 2),
together with integer translations (b = 1), generate an orthonormal basis for L²(ℝ).
Multidimensional Haar functions are defined similarly. The Haar transform is the
earliest known example of a wavelet transform, which however suffers due to the
discontinuous nature of the mother wavelet.
Acknowledgements
The authors wish to thank Professor Hans Feichtinger of the University of Vienna,
and Professor John Benedetto of the University of Maryland for many valuable discussions. This research was supported in part by the National Science Foundation's
Engineering Research Centers Program: NSFD CDR 8803012, the Air Force Office
of Scientific Research under contract AFOSR-88-0204 and by the Naval Research
Laboratory.
References
[1] G. Cybenko. Approximations by Superpositions of a Sigmoidal Function. Technical Report CSRD 856, Center for Supercomputing Research and Development, University of Illinois, Urbana, February 1989.
[2] G. Cybenko. Continuous Valued Neural Networks with Two Hidden Layers are Sufficient. Technical Report, Department of Computer Science, Tufts University, Medford, MA, March 1988.
[3] I. Daubechies. The Wavelet Transform, Time-Frequency Localization and Signal Analysis. IEEE Transactions on Information Theory, 36(5):961-1005, September 1990.
[4] C. E. Heil and D. F. Walnut. Continuous and Discrete Wavelet Transforms. SIAM Review, 31(4):628-666, December 1989.
[5] K. Hornik, M. Stinchcombe, and H. White. Multilayer Feedforward Networks are Universal Approximators. Neural Networks, 2:359-366, 1989.
[6] A. Lapedes and R. Farber. Nonlinear Signal Processing Using Neural Networks: Prediction and System Modeling. Technical Report LA-UR-87-2662, Los Alamos National Laboratory, 1987.
[7] S. G. Mallat. Multifrequency Channel Decompositions of Images and Wavelet Models. IEEE Transactions on Acoustics, Speech and Signal Processing, 37(12):2091-2110, December 1989.
[8] R. Murenzi, "Wavelet Transforms Associated to the n-Dimensional Euclidean Group with Dilations: Signals in More than One Dimension," in Wavelets: Time-Frequency Methods and Phase Space (J. M. Combes, A. Grossman and Ph. Tchamitchian, eds.), pp. 239-246, Springer-Verlag, 1989.
[9] Y. C. Pati and P. S. Krishnaprasad, "Analysis and Synthesis of Feedforward Neural Networks Using Discrete Affine Wavelet Transforms," Technical Report SRC TR 90-44, University of Maryland, Systems Research Center, 1990.
[10] Y. C. Pati and P. S. Krishnaprasad, in preparation.
2,470 | 3,240 | Message Passing for Max-weight Independent Set
Sujay Sanghavi
LIDS, MIT
[email protected]
Devavrat Shah
Dept. of EECS, MIT
[email protected]
Alan Willsky
Dept. of EECS, MIT
[email protected]
Abstract
We investigate the use of message-passing algorithms for the problem of finding
the max-weight independent set (MWIS) in a graph. First, we study the performance of loopy max-product belief propagation. We show that, if it converges,
the quality of the estimate is closely related to the tightness of an LP relaxation
of the MWIS problem. We use this relationship to obtain sufficient conditions for
correctness of the estimate. We then develop a modification of max-product ? one
that converges to an optimal solution of the dual of the MWIS problem. We also
develop a simple iterative algorithm for estimating the max-weight independent
set from this dual solution. We show that the MWIS estimate obtained using these
two algorithms in conjunction is correct when the graph is bipartite and the MWIS
is unique. Finally, we show that any problem of MAP estimation for probability
distributions over finite domains can be reduced to an MWIS problem. We believe
this reduction will yield new insights and algorithms for MAP estimation.
1 Introduction
The max-weight independent set (MWIS) problem is the following: given a graph with positive
weights on the nodes, find the heaviest set of mutually non-adjacent nodes. MWIS is a well studied
combinatorial optimization problem that naturally arises in many applications. It is known to be
NP-hard, and hard to approximate [6]. In this paper we investigate the use of message-passing
algorithms, like loopy max-product belief propagation, as practical solutions for the MWIS problem.
We now summarize our motivations for doing so, and then outline our contribution.
Our primary motivation comes from applications. The MWIS problem arises naturally in many
scenarios involving resource allocation in the presence of interference. It is often the case that
large instances of the weighted independent set problem need to be (at least approximately) solved
in a distributed manner using lightweight data structures. In Section 2.1 we describe one such
application: scheduling channel access and transmissions in wireless networks. Message passing
algorithms provide a promising alternative to current scheduling algorithms.
Another, equally important, motivation is the potential for obtaining new insights into the performance of existing message-passing algorithms, especially on loopy graphs. Tantalizing connections
have been established between such algorithms and more traditional approaches like linear programming (see [9] and references). The MWIS problem provides a rich, yet relatively tractable, first
framework in which to investigate such connections.
1.1 Our contributions
In Section 4 we construct a probability distribution whose MAP estimate corresponds to the MWIS
of a given graph, and investigate the application of the loopy max-product algorithm to this distribution. We demonstrate that there is an intimate relationship between the max-product fixed points
and the natural LP relaxation of the original independent set problem. We use this relationship to
provide a certificate of correctness for the max-product fixed point in certain problem instances.
In Section 5 we develop two iterative message-passing algorithms. The first, obtained by a minor
modification of max-product, calculates the optimal solution to the dual of the LP relaxation of the
MWIS problem. The second algorithm uses this optimal dual to produce an estimate of the MWIS.
This estimate is correct when the original graph is bipartite.
In Section 3 we show that any problem of MAP estimation in which all the random variables can
take a finite number of values (and the probability distribution is positive over the entire domain) can
be reduced to a max-weight independent set problem. This implies that any algorithm for solving
the independent set problem immediately yields an algorithm for MAP estimation. We believe this
reduction will prove useful from both practical and analytical perspectives.
2 Max-weight Independent Set, and its LP Relaxation
Consider a graph G = (V, E), with a set V of nodes and a set E of edges. Let N (i) = {j ? V :
(i, j) ? E} be the neighbors of i ? V . Positive weights wi , i ? V are associated with each node.
A subset of V will be represented by a vector x = (x_i) ∈ {0, 1}^{|V|}, where x_i = 1 means i is in the
subset and x_i = 0 means i is not in the subset. A subset x is called an independent set if no two nodes
in the subset are connected by an edge: (x_i, x_j) ≠ (1, 1) for all (i, j) ∈ E. We are interested in
finding a maximum weight independent set (MWIS) x*. This can be naturally posed as an integer
program, denoted below by IP. The linear programming relaxation of IP is obtained by replacing the
integrality constraints x_i ∈ {0, 1} with the constraints x_i ≥ 0. We will denote the corresponding
linear program by LP. The dual of LP is denoted below by DUAL.
$$\text{IP}: \quad \max \sum_{i=1}^{n} w_i x_i, \quad \text{s.t. } x_i + x_j \le 1 \text{ for all } (i,j)\in E, \quad x_i \in \{0,1\}.$$

$$\text{DUAL}: \quad \min \sum_{(i,j)\in E} \lambda_{ij}, \quad \text{s.t. } \sum_{j\in N(i)} \lambda_{ij} \ge w_i \text{ for all } i\in V, \quad \lambda_{ij} \ge 0 \text{ for all } (i,j)\in E.$$
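As an illustration (not from the paper), the LP relaxation can be handed to any off-the-shelf solver; the sketch below uses scipy and negates the weights since the solver minimizes.

```python
# Illustrative sketch (not from the paper): solving the relaxation LP with
# an off-the-shelf solver.
import numpy as np
from scipy.optimize import linprog

def solve_lp(n, edges, w):
    """LP relaxation: max sum w_i x_i s.t. x_i + x_j <= 1, 0 <= x_i <= 1."""
    A = np.zeros((len(edges), n))
    for row, (i, j) in enumerate(edges):
        A[row, i] = A[row, j] = 1.0
    res = linprog(c=-np.asarray(w), A_ub=A, b_ub=np.ones(len(edges)),
                  bounds=[(0, 1)] * n)
    return res.x

if __name__ == "__main__":
    # A 4-cycle (bipartite): the LP optimum is integral here.
    print(solve_lp(4, [(0, 1), (1, 2), (2, 3), (3, 0)], w=[1, 2, 1, 2]))
```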
It is well known that LP can be solved efficiently, and if it has an integral optimal solution then this
solution is an MWIS of G. If this is the case, we say that there is no integrality gap between LP and
IP, or equivalently that the LP relaxation is tight. It is well known [3] that the LP relaxation is
tight for bipartite graphs. More generally, for non-bipartite graphs, tightness will depend on the
node weights. We will use the performance of LP as a benchmark with which to compare the
performance of our message passing algorithms.
The next lemma states the standard complementary slackness conditions of linear programming,
specialized for LP above, and for the case when there is no integrality gap.

Lemma 2.1 When there is no integrality gap between IP and LP, there exists a pair of optimal
solutions x = (x_i), λ = (λ_{ij}) of LP and DUAL respectively, such that: (a) x ∈ {0, 1}ⁿ, (b)
x_i (Σ_{j∈N(i)} λ_{ij} − w_i) = 0 for all i ∈ V, (c) (x_i + x_j − 1) λ_{ij} = 0 for all (i, j) ∈ E.
2.1 Sample Application: Scheduling in Wireless Networks
We now briefly describe an important application that requires an efficient, distributed solution to the
MWIS problem: transmission scheduling in wireless networks that lack a centralized infrastructure,
and where nodes can only communicate with local neighbors (e.g. see [4]). Such networks are
ubiquitous in the modern world: examples range from sensor networks that lack wired connections
to the fusion center, and ad-hoc networks that can be quickly deployed in areas without coverage,
to the 802.11 wi-fi networks that currently represent the most widely used method for wireless data
access.
Fundamentally, any two wireless nodes that transmit at the same time and over the same frequencies
will interfere with each other, if they are located close by. Interference means that the intended
receivers will not be able to decode the transmissions. Typically in a network only certain pairs
2
of nodes interfere. The scheduling problem is to decide which nodes should transmit at a given
time over a given frequency, so that (a) there is no interference, and (b) nodes which have a large
amount of data to send are given priority. In particular, it is well known that if each node is given a
weight equal to the data it has to transmit, optimal network operation demands scheduling the set of
nodes with highest total weight. If a ? conflict graph? is made, with an edge between every pair of
interfering nodes, the scheduling problem is exactly the problem of finding the MWIS of the conflict
graph. The lack of an infrastructure, the fact that nodes often have limited capabilities, and the local
nature of communication, all necessitate a lightweight distributed algorithm for solving the MWIS
problem.
3 MAP Estimation as an MWIS Problem
In this section we show that any MAP estimation problem is equivalent to an MWIS problem on
a suitably constructed graph with node weights. This construction is related to the "overcomplete
basis" representation [7]. Consider the following canonical MAP estimation problem: suppose we
are given a distribution q(y) over vectors y = (y₁, ..., y_M) of variables y_m, each of which can take
a finite value. Suppose also that q factors into a product of strictly positive functions, which we find
convenient to denote in exponential form:

$$q(y) = \frac{1}{Z}\prod_{\alpha\in A} \exp\big(\phi_\alpha(y_\alpha)\big) = \frac{1}{Z}\exp\Big(\sum_{\alpha\in A}\phi_\alpha(y_\alpha)\Big).$$

Here α specifies the domain of the function φ_α, and y_α is the vector of those variables that are in
the domain of φ_α. The α's also serve as an index for the functions. A is the set of functions. The
MAP estimation problem is to find a maximizing assignment y* ∈ arg max_y q(y).
We now build an auxiliary graph G̃, and assign weights to its nodes, such that the MAP estimation
problem above is equivalent to finding the MWIS of G̃. There is one node in G̃ for each pair (α, y_α),
where y_α is an assignment (i.e. a set of values for the variables) of domain α. We will denote this
node of G̃ by δ(α, y_α).

There is an edge in G̃ between any two nodes δ(α₁, y¹_{α₁}) and δ(α₂, y²_{α₂}) if and only if there exists
a variable index m such that

1. m is in both domains, i.e. m ∈ α₁ and m ∈ α₂, and
2. the corresponding variable assignments are different, i.e. y¹_m ≠ y²_m.

In other words, we put an edge between all pairs of nodes that correspond to inconsistent assignments. Given this graph G̃, we now assign weights to the nodes. Let c > 0 be any number such that
c + φ_α(y_α) > 0 for all α and y_α. The existence of such a c follows from the fact that the set of
assignments and domains is finite. Assign to each node δ(α, y_α) a weight of c + φ_α(y_α).
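A sketch of this construction (our illustration, with a made-up toy instance) is given below; it enumerates a node per (domain, assignment) pair, shifts the weights by a constant c, and connects inconsistent pairs.

```python
# Illustrative sketch (not from the paper): building the auxiliary graph
# described above. Factors map variable tuples to potentials phi_alpha;
# the instance in __main__ is a made-up toy example.
from itertools import product

def build_auxiliary_graph(factors, domains):
    """factors: dict mapping a tuple of variable names to a function phi.
    Returns a node list [(alpha, assignment, weight)] and an edge list."""
    nodes = []
    for alpha, phi in factors.items():
        for vals in product(*(domains[v] for v in alpha)):
            nodes.append((alpha, dict(zip(alpha, vals)), phi(*vals)))
    # Shift weights so all are positive (the constant c in the text).
    c = 1.0 - min(w for *_, w in nodes)
    nodes = [(a, y, w + c) for (a, y, w) in nodes]
    edges = []
    for u, (a1, y1, _) in enumerate(nodes):
        for v, (a2, y2, _) in enumerate(nodes[u + 1:], start=u + 1):
            shared = set(a1) & set(a2)
            if any(y1[m] != y2[m] for m in shared):
                edges.append((u, v))   # inconsistent assignments conflict
    return nodes, edges

if __name__ == "__main__":
    domains = {"y1": (0, 1), "y2": (0, 1)}
    factors = {("y1",): lambda a: 0.5 * a,
               ("y2",): lambda a: -0.3 * a,
               ("y1", "y2"): lambda a, b: 1.2 * a * b}
    nodes, edges = build_auxiliary_graph(factors, domains)
    print(len(nodes), "nodes,", len(edges), "edges")
```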
Lemma 3.1 Suppose q and G̃ are as above. (a) If y* is a MAP estimate of q, let δ* =
{δ(α, y*_α) | α ∈ A} be the set of nodes in G̃ that correspond to each domain being consistent
with y*. Then, δ* is an MWIS of G̃. (b) Conversely, suppose δ* is an MWIS of G̃. Then, for every
domain α, there is exactly one node δ(α, y*_α) included in δ*. Further, the corresponding domain
assignments {y*_α | α ∈ A} are consistent, and the resulting overall vector y* is a MAP estimate of q.
Example. Let y₁ and y₂ be binary variables with joint distribution

$$q(y_1, y_2) = \frac{1}{Z}\exp(\theta_1 y_1 + \theta_2 y_2 + \theta_{12} y_1 y_2),$$

where the θ are any real numbers. The corresponding G̃ is shown below. Let c be any number
such that c + θ₁, c + θ₂ and c + θ₁₂ are all greater than 0. The weights on the nodes in G̃ are: θ₁ + c on
node '1' on the left, θ₂ + c for node '1' on the right, θ₁₂ + c for the node '11', and c for all the other
nodes.
[Figure: the auxiliary graph G̃ for this example, with nodes 00, 01, 10, 11 for the pairwise domain and nodes 0, 1 for each of the two singleton domains.]
4 Max-product for MWIS
The classical max-product algorithm is a heuristic that can be used to find the MAP assignment of a
probability distribution. Now, given an MWIS problem on G = (V, E), associate a binary random
variable X_i with each i ∈ V and consider the following joint distribution: for x ∈ {0, 1}ⁿ,

$$p(x) = \frac{1}{Z}\prod_{(i,j)\in E} \mathbf{1}_{\{x_i + x_j \le 1\}} \prod_{i\in V} \exp(w_i x_i), \qquad (1)$$

where Z is the normalization constant. In the above, 1 is the standard indicator function: 1_true = 1
and 1_false = 0. It is easy to see that p(x) = (1/Z) exp(Σ_i w_i x_i) if x is an independent set, and
p(x) = 0 otherwise. Thus, any MAP estimate arg max_x p(x) corresponds to a maximum weight
independent set of G.
The update equations for max-product can be derived in a standard and straightforward fashion from
the probability distribution. We now describe the max-product algorithm as derived from p. At every
iteration t, each node i sends a message {m^t_{i→j}(0), m^t_{i→j}(1)} to each neighbor j ∈ N(i). Each node
also maintains a belief vector {b^t_i(0), b^t_i(1)}. The message and belief updates, as well as the final
output, are computed as follows.
Max-product for MWIS

(o) Initially, m⁰_{i→j}(0) = m⁰_{j→i}(1) = 1 for all (i, j) ∈ E.

(i) The messages are updated as follows:

$$m^{t+1}_{i\to j}(0) = \max\Big\{\prod_{k\ne j,\,k\in N(i)} m^t_{k\to i}(0),\; e^{w_i}\!\!\prod_{k\ne j,\,k\in N(i)}\!\! m^t_{k\to i}(1)\Big\}, \qquad m^{t+1}_{i\to j}(1) = \prod_{k\ne j,\,k\in N(i)} m^t_{k\to i}(0).$$

(ii) Nodes i ∈ V compute their beliefs as follows:

$$b^{t+1}_i(0) = \prod_{k\in N(i)} m^{t+1}_{k\to i}(0), \qquad b^{t+1}_i(1) = e^{w_i}\prod_{k\in N(i)} m^{t+1}_{k\to i}(1).$$

(iii) Estimate the max-weight independent set x(b^{t+1}) as follows: x_i(b^{t+1}) = 1_{\{b^{t+1}_i(1) > b^{t+1}_i(0)\}}.

(iv) Update t = t + 1; repeat from (i) till x(b^t) converges, and output the converged estimate.
For the purpose of analysis, we find it convenient to transform the messages by defining¹
γ^t_{i→j} = log(m^t_{i→j}(0)/m^t_{i→j}(1)). Step (i) of max-product now becomes

$$\gamma^{t+1}_{i\to j} = \max\Big\{0,\; w_i - \sum_{k\ne j,\,k\in N(i)} \gamma^t_{k\to i}\Big\} = \Big(w_i - \sum_{k\ne j,\,k\in N(i)} \gamma^t_{k\to i}\Big)_+, \qquad (2)$$

where we use the notation (x)₊ = max{x, 0}. The estimation of step (iii) of max-product becomes:
x_i(γ^{t+1}) = 1_{\{w_i − Σ_{k∈N(i)} γ_{k→i} > 0\}}. This modification of max-product is often known as the "min-sum" algorithm, and is just a reformulation of max-product. In the rest of the paper we refer to
this as simply the max-product algorithm.
¹ If the algorithm starts with all messages being strictly positive, the messages will remain strictly positive
over any finite number of iterations. Thus taking logs is a valid operation.
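For concreteness, here is a small sketch (ours, not the authors' code) of the min-sum form (2) with synchronous updates and the estimation step (iii); on the toy instance below it converges to the MWIS.

```python
# Illustrative sketch (not from the paper): the min-sum form (2) of
# max-product for MWIS. adj is an adjacency dict; gamma[(i, j)] is the
# message from node i to node j.

def max_product(adj, w, iters=100):
    gamma = {(i, j): 0.0 for i in adj for j in adj[i]}
    for _ in range(iters):
        new = {}
        for (i, j) in gamma:
            # Eq. (2): gamma_{i->j} = (w_i - sum_{k in N(i)\j} gamma_{k->i})_+
            s = sum(gamma[(k, i)] for k in adj[i] if k != j)
            new[(i, j)] = max(0.0, w[i] - s)
        gamma = new
    # Step (iii): x_i = 1 iff w_i exceeds the sum of incoming messages.
    return {i: int(w[i] - sum(gamma[(k, i)] for k in adj[i]) > 0) for i in adj}

if __name__ == "__main__":
    # A 4-cycle with weights favoring nodes 1 and 3.
    adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    w = {0: 1.0, 1: 2.0, 2: 1.0, 3: 2.0}
    print(max_product(adj, w))   # converges to the MWIS {1, 3} here
```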
4.1 Fixed Points of Max-Product
When applied to general graphs, max-product may either (a) not converge, (b) converge and yield
the correct answer, or (c) converge but yield an incorrect answer. Characterizing when each of the
three situations can occur is a challenging and important task. One approach to this task has been to
look directly at the fixed points, if any, of the iterative procedure [8].

Proposition 4.1 Let γ represent a fixed point of the algorithm, and let x(γ) = (x_i(γ)) be the
corresponding estimate for the independent set. Then, the following properties hold:

(a) Let i be a node with estimate x_i(γ) = 1, and let j ∈ N(i) be any neighbor of i. Then,
the messages on edge (i, j) satisfy γ_{i→j} > γ_{j→i}. Further, from this it can be deduced that x(γ)
represents an independent set in G.

(b) Let j be a node with x_j(γ) = 0, which by definition means that w_j − Σ_{k∈N(j)} γ_{k→j} ≤ 0.
Suppose now there exists a neighbor i ∈ N(j) whose estimate is x_i(γ) = 1. Then it has to be that
w_j − Σ_{k∈N(j)} γ_{k→j} < 0, i.e. the inequality is strict.

(c) For any edge (j₁, j₂) ∈ E, if the estimates of the endpoints are x_{j₁}(γ) = x_{j₂}(γ) = 0, then it has
to be that γ_{j₁→j₂} = γ_{j₂→j₁}. In addition, if there exists a neighbor i₁ ∈ N(j₁) of j₁ whose estimate
is x_{i₁}(γ) = 1, then it has to be that γ_{j₁→j₂} = γ_{j₂→j₁} = 0 (and similarly for a neighbor i₂ of j₂).
The properties shown in Proposition 4.1 reveal striking similarities between the messages γ at fixed
points of max-product and the optimal λ that solves the dual linear program DUAL. In particular,
suppose that γ is a fixed point at which the corresponding estimate x(γ) is a maximal independent
set: for every j whose estimate is x_j(γ) = 0, there exists a neighbor i ∈ N(j) whose estimate is
x_i(γ) = 1. The MWIS, for example, is also maximal (if not, one could add a node to the MWIS and
obtain a higher weight). For a maximal estimate, it is easy to see that

• (x_i(γ) + x_j(γ) − 1) γ_{i→j} = 0 for all edges (i, j) ∈ E;

• x_i(γ) (γ_{i→j} + Σ_{k∈N(i)\j} γ_{k→i} − w_i) = 0 for all i, j ∈ V.

At least semantically, these relations share a close resemblance to the complementary slackness
conditions of Lemma 2.1. In the following lemma we leverage this resemblance to derive a certificate
of optimality of the max-product fixed point estimate for certain problems.
Lemma 4.1 Let γ be a fixed point of max-product and x(γ) the corresponding estimate of the
independent set. Define G* = (V, E*), where E* = E \ {(i, j) ∈ E : γ_{i→j} = γ_{j→i} = 0} is the
set of edges with at least one non-zero message. Then, if G* is acyclic, we have that: (a) x(γ) is
a solution to the MWIS for G, and (b) there is no integrality gap between LP and IP, i.e. x(γ) is
an optimal solution to LP. Thus the lack of cycles in G* provides a certificate of optimality for the
estimate x(γ).
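The certificate is easy to check mechanically; the sketch below (our illustration, not from the paper) keeps the edges of G* and tests for cycles with a union-find pass.

```python
# Illustrative sketch (not from the paper): checking the certificate of
# Lemma 4.1. Given fixed-point messages gamma, keep edges with a non-zero
# message and test the resulting graph G* for cycles via union-find.

def certificate_holds(edges, gamma, tol=1e-9):
    """True if G* = (V, E minus edges with both messages zero) is acyclic."""
    kept = [(i, j) for (i, j) in edges
            if abs(gamma[(i, j)]) > tol or abs(gamma[(j, i)]) > tol]
    parent = {}
    def find(u):
        parent.setdefault(u, u)
        while parent[u] != u:
            parent[u] = parent[parent[u]]   # path halving
            u = parent[u]
        return u
    for i, j in kept:
        ri, rj = find(i), find(j)
        if ri == rj:
            return False          # a cycle in G*: no certificate
        parent[ri] = rj
    return True

if __name__ == "__main__":
    edges = [(0, 1), (1, 2)]
    gamma = {(0, 1): 2.0, (1, 0): 0.0, (1, 2): 0.0, (2, 1): 0.0}
    print(certificate_holds(edges, gamma))   # True: G* is a single edge
```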
Max-product vs. LP relaxation. The following general question has been of great recent interest:
which of the two, max-product and LP relaxation, is more powerful? We now briefly investigate
this question for MWIS. As presented below, we find that there are examples where one technique
is better than the other. That is, neither technique clearly dominates the other.

To understand whether correctness of max-product (e.g. Lemma 4.1) provides information about
LP relaxation, we consider the simplest loopy graph: a cycle. For bipartite graphs, we know that
the LP relaxation is tight, i.e. it provides the answer to MWIS. Hence, we consider an odd cycle. The following
result suggests that if max-product works then it must be that the LP relaxation is tight (i.e. LP is no
weaker than max-product for cycles).

Corollary 4.1 Let G be an odd cycle, and γ a fixed point of max-product. Then, if there exists at
least one node i whose estimate is x_i(γ) = 1, then there is no integrality gap between LP and IP.

Next, we present two examples which help us conclude that neither max-product nor LP relaxation
dominates the other. The following figures present graphs and the corresponding fixed points of
max-product. In each graph, numbers represent node weights, and an arrow from i to j represents
a message value of γ_{i→j} = 2. All other messages have γ equal to 0. The boxed nodes indicate
the ones for which the estimate is x_i(γ) = 1. It is easy to verify that both represent max-product fixed
points.

[Figure: two example graphs, each with node weights 2 and 3, showing the fixed-point messages as arrows and the estimated nodes boxed.]
For the graph on the left, the max-product fixed point results in an incorrect estimate. However,
the graph is bipartite, and hence LP will get the correct answer. In the graph on the right, there is
an integrality gap between LP and IP: setting each x_i = 1/2 yields an optimal value of 7.5, while
the optimal solution to IP has value 6. However, the estimate at the fixed point of max-product
is the correct MWIS. In both of these examples, the fixed points lie in the strict interiors of non-trivial regions of attraction: starting the iterative procedure from within these regions will result in
convergence to the fixed point.

These examples indicate that it may not be possible to resolve the question of relative strength of the
two procedures based solely on an analysis of the fixed points of max-product.
5 A Convergent Message-passing Algorithm
In this section we present our algorithm for finding the MWIS of a graph. It is based on modifying
max-product by drawing upon dual coordinate descent and a barrier method. Specifically, the
algorithm is as follows: (1) For small enough parameters δ, ε, run the subroutine DESCENT(δ, ε) (close
to) convergence. This will produce output λ^{δ,ε} = (λ^{δ,ε}_{ij})_{(i,j)∈E}. (2) For a small enough parameter ε₁,
use the subroutine EST(λ^{δ,ε}, ε₁) to produce an estimate of the MWIS as the output of the algorithm.
Both of the subroutines, DESCENT and EST, are iterative message-passing procedures. Before going
into the details of the subroutines, we state the main result about the correctness and convergence of this
algorithm.

Theorem 5.1 The following properties hold for an arbitrary graph G and weights: (a) For any choice
of δ, ε, ε₁ > 0, the algorithm always converges. (b) As δ, ε → 0, λ^{δ,ε} → λ*, where λ* is an optimal
solution of DUAL. Further, if G is bipartite and the MWIS is unique, then the following holds: (c)
For small enough δ, ε, ε₁, the algorithm produces the MWIS as output.
5.1 Subroutine: DESCENT
Consider the standard coordinate descent algorithm for DUAL: the variables are {λ_{ij}, (i, j) ∈
E} (with the notation λ_{ij} = λ_{ji}), and at each iteration t one edge (i, j) ∈ E is picked² and updated as

$$\lambda^{t+1}_{ij} = \max\Big\{0,\; w_i - \sum_{k\in N(i),\,k\ne j} \lambda^t_{ik},\; w_j - \sum_{k\in N(j),\,k\ne i} \lambda^t_{jk}\Big\}. \qquad (3)$$

The λ on all the other edges remain unchanged from t to t + 1. Notice the similarity (at least
syntactic) between (3) and the update of max-product (min-sum) (2): essentially, dual coordinate
descent is a sequential bidirectional version of the max-product algorithm!
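A sketch of the edge update (3) (ours, not the authors' code) with a round-robin schedule follows; on the toy instance below it happens to reach a dual optimum.

```python
# Illustrative sketch (not from the paper): one pass of the dual coordinate
# descent update (3). lam[(i, j)] == lam[(j, i)] stores the edge variable.

def descent_step(adj, w, lam, edge):
    i, j = edge
    # Eq. (3): lambda_ij = max(0, slack at i, slack at j).
    si = w[i] - sum(lam[(i, k)] for k in adj[i] if k != j)
    sj = w[j] - sum(lam[(j, k)] for k in adj[j] if k != i)
    lam[(i, j)] = lam[(j, i)] = max(0.0, si, sj)

if __name__ == "__main__":
    adj = {0: [1], 1: [0, 2], 2: [1]}          # a path 0-1-2
    w = {0: 1.0, 1: 3.0, 2: 1.0}
    lam = {(i, j): 0.0 for i in adj for j in adj[i]}
    for _ in range(10):                        # round-robin edge schedule
        for e in [(0, 1), (1, 2)]:
            descent_step(adj, w, lam, e)
    print(lam[(0, 1)], lam[(1, 2)])            # dual cost 3 on this instance
```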
It is well known that coordinate descent always converges, in terms of cost, for linear programs.
Further, it converges to an optimal solution if the constraints are of the product set type (see [2] for
details). However, due to constraints of the type Σ_{j∈N(i)} λ_{ij} ≥ w_i in DUAL, the algorithm may not
converge to an optimum of DUAL. Therefore, a direct adaptation of max-product to mimic dual
coordinate descent is not good enough. We use a barrier (penalty) function based approach to overcome
this difficulty.

² A good policy for picking edges is round-robin or uniformly at random.

Consider the following convex optimization problem, obtained from DUAL by adding
a logarithmic barrier for constraint violations, with δ > 0 controlling the penalty due to violation:
$$\text{CP}(\delta): \quad \min \sum_{(i,j)\in E} \lambda_{ij} \;-\; \delta \sum_{i\in V} \log\Big(\sum_{j\in N(i)} \lambda_{ij} - w_i\Big), \quad \text{subject to } \lambda_{ij} \ge 0 \text{ for all } (i,j)\in E.$$
The following is the coordinate descent algorithm for CP(δ).
DESCENT(δ, ε)

(o) The parameters are the variables λ_{ij}, one for each edge (i, j) ∈ E. We use the notation
λ^t_{ij} = λ^t_{ji}. The vector λ is iteratively updated, with t denoting the iteration number.

• Initially, set t = 0 and λ⁰_{ij} = max{w_i, w_j} for all (i, j) ∈ E.

(i) In iteration t + 1, update the parameters as follows:

• Pick an edge (i, j) ∈ E. This edge selection is done so that each edge is chosen
infinitely often as t → ∞ (for example, at each t choose an edge uniformly at random).

• For all (i′, j′) ∈ E, (i′, j′) ≠ (i, j), do nothing, i.e. λ^{t+1}_{i′j′} = λ^t_{i′j′}.

• For edge (i, j), nodes i and j exchange messages as follows:

$$\gamma^{t+1}_{i\to j} = \Big(w_i - \sum_{k\ne j,\,k\in N(i)} \lambda^t_{ki}\Big)_+, \qquad \gamma^{t+1}_{j\to i} = \Big(w_j - \sum_{k'\ne i,\,k'\in N(j)} \lambda^t_{k'j}\Big)_+.$$

• Update λ^{t+1}_{ij} as follows: with a = γ^{t+1}_{i→j} and b = γ^{t+1}_{j→i},

$$\lambda^{t+1}_{ij} = \frac{a + b + 2\delta + \sqrt{(a-b)^2 + 4\delta^2}}{2}. \qquad (4)$$

(ii) Update t = t + 1 and repeat till the algorithm converges to within ε in each component.

(iii) Output λ̂, the vector of parameters at convergence.
Remark. The iterative step (4) can be rewritten as follows: for some η ∈ [1, 2],

$$\lambda^{t+1}_{ij} = \eta\delta + \max\Big\{-\eta\delta,\; \Big(w_i - \sum_{k\in N(i)\setminus j} \lambda^t_{ik}\Big),\; \Big(w_j - \sum_{k\in N(j)\setminus i} \lambda^t_{kj}\Big)\Big\},$$

where η depends on the values of γ^{t+1}_{i→j} and γ^{t+1}_{j→i}. Thus the updates in DESCENT are obtained by a small
but important perturbation of dual coordinate descent for DUAL, which makes it convergent. The
output of DESCENT(δ, ε), say λ^{δ,ε}, satisfies λ^{δ,ε} → λ* as δ, ε → 0, where λ* is an optimal solution of DUAL.
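The following sketch (ours, not the authors' code) implements the smoothed edge update (4) inside the DESCENT loop, with a random edge schedule and the initialization from step (o); the parameter values are arbitrary.

```python
# Illustrative sketch (not from the paper): the barrier-smoothed edge
# update (4) used by DESCENT, replacing the hard max of (3).
import math, random

def descent_update(adj, w, lam, edge, delta):
    i, j = edge
    a = max(0.0, w[i] - sum(lam[(i, k)] for k in adj[i] if k != j))
    b = max(0.0, w[j] - sum(lam[(j, k)] for k in adj[j] if k != i))
    # Eq. (4): the positive root of the barrier optimality condition.
    val = (a + b + 2 * delta + math.sqrt((a - b) ** 2 + 4 * delta ** 2)) / 2
    lam[(i, j)] = lam[(j, i)] = val

if __name__ == "__main__":
    adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}   # 4-cycle
    w = {0: 1.0, 1: 2.0, 2: 1.0, 3: 2.0}
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    lam = {(i, j): max(w[i], w[j]) for i in adj for j in adj[i]}  # step (o)
    random.seed(0)
    for _ in range(2000):
        descent_update(adj, w, lam, random.choice(edges), delta=0.01)
    print(sorted((e, round(lam[e], 2)) for e in edges))
```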
5.2 Subroutine: EST
DESCENT yields a good estimate of the optimal solution to DUAL for small values of δ and
ε. However, we are interested in the (integral) optimum of LP. In general, it is not possible to
recover the solution of a linear program from a dual optimal solution. However, we show that such
a recovery is possible, through the EST algorithm described below, for the MWIS problem when G is
bipartite with a unique MWIS. This procedure is likely to extend to general G when the LP relaxation is
tight and LP has a unique solution.

EST(λ, ε₁)
(o) The algorithm iteratively estimates x = (x_i) given λ.

(i) Initially, color a node i gray and set x_i = 0 if Σ_{j∈N(i)} λ_{ij} > w_i. Color all other nodes
green and leave their values unspecified. The condition Σ_{j∈N(i)} λ_{ij} > w_i is checked
as whether Σ_{j∈N(i)} λ_{ij} ≥ w_i + ε₁ or not.

(ii) Repeat the following steps (in any order) till no more changes can happen:

• if i is green and there exists a gray node j ∈ N(i) with λ_{ij} > 0, then set x_i = 1 and
color it orange. The condition λ_{ij} > 0 is checked as whether λ_{ij} ≥ ε₁ or not.

• if i is green and has some orange node j ∈ N(i), then set x_i = 0 and color it gray.

(iii) If any node is green, say i, set x_i = 1 and color it red.

(iv) Produce the output x as the estimate.
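A sketch of EST as stated above (our illustration); the dual values in the example are hand-picked to be near-optimal for the toy path graph.

```python
# Illustrative sketch (not from the paper): the EST coloring procedure,
# recovering an estimate x from a near-optimal dual lam.

def est(adj, w, lam, eps1):
    color, x = {}, {}
    for i in adj:                                   # step (i)
        if sum(lam[(j, i)] for j in adj[i]) >= w[i] + eps1:
            color[i], x[i] = "gray", 0
        else:
            color[i] = "green"
    changed = True
    while changed:                                  # step (ii)
        changed = False
        for i in adj:
            if color[i] != "green":
                continue
            if any(color[j] == "gray" and lam[(i, j)] >= eps1 for j in adj[i]):
                color[i], x[i], changed = "orange", 1, True
            elif any(color[j] == "orange" for j in adj[i]):
                color[i], x[i], changed = "gray", 0, True
    for i in adj:                                   # step (iii)
        if color[i] == "green":
            color[i], x[i] = "red", 1
    return x

if __name__ == "__main__":
    adj = {0: [1], 1: [0, 2], 2: [1]}
    w = {0: 1.0, 1: 3.0, 2: 1.0}
    lam = {(0, 1): 2.0, (1, 0): 2.0, (1, 2): 1.0, (2, 1): 1.0}
    print(est(adj, w, lam, eps1=0.1))   # {0: 0, 1: 1, 2: 0}: the MWIS {1}
```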
6 Discussion
We believe this paper opens several interesting directions for investigation. In general, the exact relationship between max-product and linear programming is not well understood. Their close similarity
for the MWIS problem, along with the reduction of MAP estimation to an MWIS problem, suggests
that the MWIS problem may provide a good first step in an investigation of this relationship.
Also, our novel message-passing algorithm and the reduction of MAP estimation to an MWIS problem immediately yields a new message-passing algorithm for MAP estimation. It would be interesting to investigate the power of this algorithm on more general discrete estimation problems.
References
[1] M. Bayati, D. Shah and M. Sharma, "Max Weight Matching via Max Product Belief Propagation," IEEE ISIT, 2005.
[2] D. Bertsekas, "Nonlinear Programming," Athena Scientific.
[3] M. Grötschel, L. Lovász, and A. Schrijver, "Polynomial algorithms for perfect graphs," in C. Berge and V. Chvátal (eds.), Topics on Perfect Graphs, Ann. Disc. Math. 21, North-Holland, Amsterdam (1984) 325-356.
[4] K. Jung and D. Shah, "Low Delay Scheduling in Wireless Networks," IEEE ISIT, 2007.
[5] C. Moallemi and B. Van Roy, "Convergence of the Min-Sum Message Passing Algorithm for Quadratic Optimization," Preprint, 2006, available at arXiv:cs/0603058.
[6] Luca Trevisan, "Inapproximability of combinatorial optimization problems," Technical Report TR04-065, Electronic Colloquium on Computational Complexity, 2004.
[7] M. Wainwright and M. Jordan, "Graphical models, exponential families, and variational inference," UC Berkeley, Dept. of Statistics, Technical Report 649, September 2003.
[8] J. Yedidia, W. Freeman and Y. Weiss, "Generalized Belief Propagation," Mitsubishi Elect. Res. Lab., TR-2000-26, 2000.
[9] Y. Weiss, C. Yanover, T. Meltzer, "MAP Estimation, Linear Programming and Belief Propagation with Convex Free Energies," UAI 2007.
2,471 | 3,241 | Iterative Non-linear Dimensionality Reduction by
Manifold Sculpting
Mike Gashler, Dan Ventura, and Tony Martinez*
Brigham Young University
Provo, UT 84604
Abstract
Many algorithms have been recently developed for reducing dimensionality by
projecting data onto an intrinsic non-linear manifold. Unfortunately, existing algorithms often lose significant precision in this transformation. Manifold Sculpting
is a new algorithm that iteratively reduces dimensionality by simulating surface
tension in local neighborhoods. We present several experiments that show Manifold Sculpting yields more accurate results than existing algorithms with both
generated and natural data-sets. Manifold Sculpting is also able to benefit from prior dimensionality reduction efforts.
1 Introduction
Dimensionality reduction is a two-step process: 1) Transform the data so that more information
will survive the projection, and 2) project the data into fewer dimensions. The more relationships
between data points that the transformation step is required to preserve, the less flexibility it will have
to position the points in a manner that will cause information to survive the projection step. Due
to this inverse relationship, dimensionality reduction algorithms must seek a balance that preserves
information in the transformation without losing it in the projection. The key to finding the right
balance is to identify where the majority of the information lies.
Nonlinear dimensionality reduction (NLDR) algorithms seek this balance by assuming that the relationships between neighboring points contain more informational content than the relationships
between distant points. Although non-linear transformations have more potential than do linear
transformations to lose information in the structure of the data, they also have more potential to
position the data to cause more information to survive the projection. In this process, NLDR algorithms expose patterns and structures of lower dimensionality (manifolds) that exist in the original
data. NLDR algorithms, or manifold learning algorithms, have potential to make the high-level
concepts embedded in multidimensional data accessible to both humans and machines.
This paper introduces a new algorithm for manifold learning called Manifold Sculpting, which discovers manifolds through a process of progressive refinement. Experiments show that it yields
more accurate results than other algorithms in many cases. Additionally, it can be used as a postprocessing step to enhance the transformation of other manifold learning algorithms.
2 Related Work
Many algorithms have been developed for performing non-linear dimensionality reduction. Recent
works include Isomap [1], which solves for an isometric embedding of data into fewer dimensions
with an algebraic technique. Unfortunately, it is somewhat computationally expensive as it requires
solving for the eigenvectors of a large dense matrix, and has difficulty with poorly sampled areas of
*[email protected], [email protected], [email protected]
Figure 1: Comparison of several manifold learners on a Swiss Roll manifold. Color is used to
indicate how points in the results correspond to points on the manifold. Isomap and L-Isomap have
trouble with sampling holes. LLE has trouble with changes in sample density.
the manifold. (See Figure 1.A.) Locally Linear Embedding (LLE) [2] is able to perform a similar
computation using a sparse matrix by using a metric that measures only relationships between vectors in local neighborhoods. Unfortunately it produces distorted results when the sample density is
non-uniform. (See Figure 1.B.) An improvement to the Isomap algorithm was later proposed that
uses landmarks to reduce the amount of necessary computation [3]. (See Figure 1.C.) Many other
NLDR algorithms have been proposed, including Kernel Principle Component Analysis [4], Laplacian Eigenmaps [5], Manifold Charting [6], Manifold Parzen Windows [7], Hessian LLE [8], and
others [9, 10, 11]. Hessian LLE preserves the manifold structure better than the other algorithms but
is, unfortunately, computationally expensive. (See Figure 1.D.).
In contrast with these algorithms, Manifold Sculpting is robust to sampling issues and still produces
very accurate results. This algorithm iteratively transforms data by balancing two opposing heuristics, one that scales information out of unwanted dimensions, and one that preserves local structure
in the data. Experimental results show that this technique preserves information into fewer dimensions with more accuracy than existing manifold learning algorithms. (See Figure 1.E.)
3 The Algorithm
An overview of the Manifold Sculpting algorithm is given in Figure 2a.
Figure 2: δ and θ define the relationships that Manifold Sculpting attempts to preserve.
Step 1: Find the k nearest neighbors of each point. For each data point pi in P (where P is the set
of all data points represented as vectors in ℝⁿ), find the k-nearest neighbors Ni (such that nij ∈ Ni is the j-th neighbor of point pi).
Step 2: Compute relationships between neighbors. For each j (where 0 < j ≤ k) compute the Euclidean distance δij between pi and each nij ∈ Ni. Also compute the angle θij formed by the two line segments (pi to nij) and (nij to mij), where mij is the most colinear neighbor of nij with pi. (See Figure 2b.) The most colinear neighbor is the neighbor point that forms the angle closest to π. The values of δ and θ are the relationships that the algorithm will attempt to preserve during transformation. The global average distance between all the neighbors of all points, δave, is also computed.
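To make Step 2 concrete, here is a brute-force NumPy sketch (our own rendering, not the authors' code; the function name and return layout are assumptions):

```python
import numpy as np

def neighbor_relationships(P, k):
    # Sketch of Step 2: for each point p_i, record the distance delta_ij to each
    # of its k nearest neighbors n_ij, the most colinear neighbor m_ij of n_ij
    # (the one whose angle at n_ij is closest to pi), and that angle theta_ij.
    n = len(P)
    d2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)  # brute-force squared distances
    np.fill_diagonal(d2, np.inf)
    N = np.argsort(d2, axis=1)[:, :k]                    # k nearest neighbors of each point
    delta = np.sqrt(np.take_along_axis(d2, N, axis=1))
    theta = np.zeros((n, k))
    M = np.zeros((n, k), dtype=int)                      # most colinear neighbors
    for i in range(n):
        for j, nij in enumerate(N[i]):
            a = P[i] - P[nij]                            # segment n_ij -> p_i
            for mij in N[nij]:                           # candidates among n_ij's neighbors
                b = P[mij] - P[nij]                      # segment n_ij -> m_ij
                c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
                ang = np.arccos(np.clip(c, -1.0, 1.0))
                if ang > theta[i, j]:                    # closest to pi = largest angle
                    theta[i, j], M[i, j] = ang, mij
    return N, M, delta, theta, delta.mean()              # delta.mean() estimates delta_ave
```

A k-d tree would replace the O(n²) distance matrix in a scalable implementation.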
Step 3: Optionally preprocess the data. The data may optionally be preprocessed with the transformation step of Principle Component Analysis (PCA), or another efficient algorithm. Manifold
Sculpting will work without this step; however, preprocessing can result in significantly faster convergence. To the extent that there is a linear component in the manifold, PCA will move the information in the data into as few dimensions as possible, thus leaving less work to be done in step 4
(which handles the non-linear component). This step is performed by computing the first |Dpres |
principle components of the data (where Dpres is the set of dimensions that will be preserved in
the projection), and rotating the dimensional axes to align with these principle components. (An
efficient algorithm for computing principle components is presented in [12].)
Step 4: Transform the data. The data is iteratively transformed until some stopping criterion has
been met. One effective technique is to stop when the sum change of all points during the current
iteration falls below a threshold. The best stopping criteria depend on the desired quality of results:
if precision is important, the algorithm may iterate longer; if speed is important it may stop earlier.
Step 4a: Scale values. All the values in Dscal (the set of dimensions that will be eliminated by the projection) are scaled by a constant factor σ, where 0 < σ < 1 (σ = 0.99 was used in this paper).
Over time, the values in Dscal will converge to 0. When Dscal is dropped by the projection (step 5),
there will be very little informational content left in these dimensions.
Step 4b: Restore original relationships. For each pi ? P , the values in Dpres are adjusted to
recover the relationships that are distorted by scaling. Intuitively, this step simulates tension on the
manifold surface. A heuristic error value is used to evaluate the current relationships among data
points relative to the original relationships:
$$\varepsilon_{p_i} = \sum_{j=0}^{k} w_{ij}\left[\left(\frac{\delta_{ij}-\delta_{ij0}}{2\,\delta_{ave}}\right)^{2} + \left(\frac{\theta_{ij}-\theta_{ij0}}{\pi}\right)^{2}\right] \qquad (1)$$
where δij is the current distance to nij, δij0 is the original distance to nij measured in step 2, θij is the current angle, and θij0 is the original angle measured in step 2. The denominator values were chosen as normalizing factors because the value of the angle term can range from 0 to π, and the value of the distance term will tend to have a mean of about δave with some variance in both directions. We adjust the values in Dpres for each point to minimize this heuristic error value.
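Continuing the sketch above (same assumed data layout), the error of Eq. (1) for a single point can be evaluated as:

```python
import numpy as np

def point_error(P, i, N, M, delta0, theta0, delta_ave, w):
    # Heuristic error of Eq. (1) for point i: compare the current distances and
    # angles against the originals (delta0, theta0) recorded in Step 2. N[i] are
    # the neighbor indices and M[i] the corresponding most-colinear neighbors.
    err = 0.0
    for j, (nij, mij) in enumerate(zip(N[i], M[i])):
        a, b = P[i] - P[nij], P[mij] - P[nij]
        delta = np.linalg.norm(a)                                # current delta_ij
        c = np.dot(a, b) / (delta * np.linalg.norm(b) + 1e-12)
        theta = np.arccos(np.clip(c, -1.0, 1.0))                 # current theta_ij
        err += w[i, j] * (((delta - delta0[i, j]) / (2.0 * delta_ave)) ** 2
                          + ((theta - theta0[i, j]) / np.pi) ** 2)
    return err
```

The hill-climbing of this step then nudges each preserved coordinate of P[i] up and down and keeps whichever direction lowers this value.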
The order in which points are adjusted has some impact on the rate of convergence. Best results were
obtained by employing a breadth-first neighborhood graph traversal from a randomly selected point.
(A new starting point is randomly selected for each iteration.) Intuitively this may be analogous to
the manner in which a person smoothes a crumpled piece of paper by starting at an arbitrary point
and smoothing outward. To further speed convergence, higher weight, wij , is given to the component
of the error contributed by neighbors that have already been adjusted in the current iteration. For all
of our experiments, we use wij = 1 if ni has not yet been adjusted in this iteration, and wij = 10,
if nij has been adjusted in this iteration.
Unfortunately the equation for the true gradient of the error surface defined by this heuristic is
complex, and is in O(|D|³). We therefore use the simple hill-climbing technique of adjusting in
each dimension in the direction that yields improvement.
Since the error surface is not necessarily convex, the algorithm may potentially converge to local
minima. At least three factors, however, mitigate this risk: First, the PCA pre-processing step often
tends to move the whole system to a state somewhat close to the global minimum. Even if a local
Figure 3: The mean squared error of four algorithms with a Swiss Roll manifold using a varying
number of neighbors k. When k > 57, neighbor paths cut across the manifold. Isomap is more
robust to this problem than other algorithms, but HLLE and Manifold Sculpting still yield better
results. Results are shown on a logarithmic scale.
minimum exists so close to the globally optimal state, it may have a sufficiently small error as to be
acceptable. Second, every point has a unique error surface. Even if one point becomes temporarily
stuck in a local minimum, its neighbors are likely to pull it out, or change the topology of its error
surface when their values are adjusted. Very particular conditions are necessary for every point to
simultaneously find a local minimum. Third, by gradually scaling the values in Dscaled (instead of
directly setting them to 0), the system always remains in a state very close to the current globally
optimal state. As long as it stays close to the current optimal state, it is unlikely for the error
surface to change in a manner that permanently separates it from being able to reach the globally
optimal state. (This is why all the dimensions need to be preserved in the PCA pre-processing step.)
And perhaps most significantly, our experiments show that Manifold Sculpting generally tends to
converge to very good results.
Step 5: Project the data. At this point Dscal contains only values that are very close to zero. The
data is projected by simply dropping these dimensions from the representation.
4 Empirical Results
Figure 1 shows that Manifold Sculpting appears visually to produce results of higher quality than
LLE and Isomap with the Swiss Roll manifold, a common visual test for manifold learning algorithms. Quantitative analysis shows that it also yields better results than HLLE. Since the actual
structure of this manifold is known prior to using any manifold learner, we can use this prior information to quantitatively measure the accuracy of each algorithm.
4.1 Varying number of neighbors.
We define a Swiss Roll in 3D space with n points (xi, yi, zi) for each 0 ≤ i < n, such that xi = t sin(t), yi is a random number −6 ≤ yi < 6, and zi = t cos(t), where t = 8i/n + 2. In 2D manifold coordinates, the point is (ui, vi), such that $u_i = \frac{\sinh^{-1}(t) + t\sqrt{t^2+1}}{2}$ and vi = yi.
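For reference, a small generator for this dataset (a sketch under the definitions above; the seed handling is ours):

```python
import numpy as np

def swiss_roll(n, seed=0):
    # Generate the Swiss Roll together with the ground-truth 2D coordinates
    # used to score the manifold learners.
    rng = np.random.default_rng(seed)
    t = 8.0 * np.arange(n) / n + 2.0
    y = rng.uniform(-6.0, 6.0, n)
    X = np.column_stack([t * np.sin(t), y, t * np.cos(t)])                   # 3D points
    U = np.column_stack([(np.arcsinh(t) + t * np.sqrt(t**2 + 1)) / 2.0, y])  # true (u, v)
    return X, U
```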
We created a Swiss Roll with 2000 data points and reduced the dimensionality to 2 with each of four
algorithms. Next we tested how well these results align with the expected values by measuring the
mean squared distance from each point to its expected value. (See Figure 3.) We rotated, scaled,
and translated the values as required to obtain the minimum possible error measurement for each
algorithm. These results are consistent with a qualitative assessment of Figure 1. Results are shown
with a varying number of neighbors k. In this example, when k = 57, local neighborhoods begin
to cut across the manifold. Isomap is more robust to this problem than other algorithms, but HLLE
and Manifold Sculpting still yield better results.
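The rotate/scale/translate fit used for this error measurement amounts to an orthogonal Procrustes alignment; a minimal sketch (our own helper, assuming a single uniform scale is fitted):

```python
import numpy as np

def aligned_mse(Y, U):
    # Align the learned embedding Y to the known coordinates U with the best
    # translation, rotation, and uniform scale, then report the mean squared error.
    Yc, Uc = Y - Y.mean(0), U - U.mean(0)          # translation: center both sets
    A, s, Bt = np.linalg.svd(Yc.T @ Uc)            # SVD of the cross-covariance
    R = A @ Bt                                     # optimal rotation (may include a flip)
    scale = s.sum() / (Yc ** 2).sum()              # optimal uniform scale
    return ((scale * (Yc @ R) - Uc) ** 2).mean()
```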
Figure 4: The mean squared error of points from an S-Curve manifold for four algorithms with a
varying number of data points. Manifold Sculpting shows a trend of increasing accuracy with an
increasing number of points. This experiment was performed with 20 neighbors. Results are shown
on a logarithmic scale.
4.2 Varying sample densities.
A similar experiment was performed with an S-Curve manifold. We defined the S-Curve points in 3D space with n points (xi, yi, zi) for each 0 ≤ i < n, such that xi = t, yi = sin(t), and zi is a random number 0 ≤ zi < 2, where t = (2.2i − 0.1)π/n. In 2D manifold coordinates, the point is (ui, vi), such that $u_i = \int_0^t \sqrt{\cos^2(w)+1}\,dw$ and vi = yi.
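The coordinate u_i has no elementary closed form, but is easy to evaluate by numerical integration; a sketch (trapezoidal rule, with a grid size of our choosing):

```python
import numpy as np

def s_curve(n, seed=0):
    # Generate the S-Curve as defined above; u_i is the arc length of the curve
    # (t, sin t), evaluated by cumulative trapezoidal integration on a dense grid.
    rng = np.random.default_rng(seed)
    t = (2.2 * np.arange(n) - 0.1) * np.pi / n
    X = np.column_stack([t, np.sin(t), rng.uniform(0.0, 2.0, n)])   # 3D points
    w = np.linspace(min(t.min(), 0.0), t.max(), 4096)               # integration grid
    f = np.sqrt(np.cos(w) ** 2 + 1.0)
    F = np.concatenate([[0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(w))])
    u = np.interp(t, w, F) - np.interp(0.0, w, F)                   # integral from 0 to t
    return X, np.column_stack([u, np.sin(t)])                       # (u_i, v_i), v_i = y_i
```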
Figure 4 shows the mean squared error of the transformed points from their expected values using
the same regression technique described for the experiment with the Swiss Roll problem. We varied
the sampling density to show how this affects each algorithm. A trend can be observed in this data
that as the number of sample points increases, the quality of results from Manifold Sculpting also
increases. This trend does not appear in the results from other algorithms.
One drawback to the Manifold Sculpting algorithm is that convergence may take longer when the
value for k is too small. This experiment was also performed with 6 neighbors, but Manifold Sculpting did not always converge within a reasonable time when so few neighbors were used. The other
three algorithms do not have this limitation, but the quality of their results still tend to be poor when
very few neighbors are used.
4.3
Entwined spirals manifold.
A test was also performed with an Entwined Spirals manifold. In this case, Isomap was able to
produce better results than Manifold Sculpting (see Figure 5), even though Isomap yielded the worst
accuracy in previous problems. This can be attributed to the nature of the Isomap algorithm. In cases
where the manifold has an intrinsic dimensionality of exactly 1, a path from neighbor to neighbor
provides an accurate estimate of isolinear distance. Thus an algorithm that seeks to globally optimize isolinear distances will be less susceptible to the noise from cutting across local corners. When
the intrinsic dimensionality is higher than 1, however, paths that follow from neighbor to neighbor
produce a zig-zag pattern that introduces excessive noise into the isolinear distance measurement. In
these cases, preserving local neighborhood relationships with precision yields better overall results
than globally optimizing an error-prone metric. Consistent with this intuition, Isomap is the closest
competitor to Manifold Sculpting in other experiments that involved a manifold with a single intrinsic dimension, and yields the poorest results of the four algorithms when the intrinsic dimensionality
is larger than one.
Figure 5: Mean squared error for four algorithms with an Entwined Spirals manifold.
4.4 Image-based manifolds.
The accuracy of Manifold Sculpting is not limited to generated manifolds in three dimensional
space. Unfortunately, the manifold structure represented by most real-world problems is not known
a priori. The accuracy of a manifold learner, however, can still be estimated when the problem
involves a video sequence by simply counting the percentage of frames that are sorted into the same
order as the video sequence. Figure 6 shows several frames from a video sequence of a person
turning his head while gradually smiling. Each image was encoded as a vector of 1,634 pixel
intensity values. This data was then reduced to a single dimension. (Results are shown on three
separate lines in order to fit the page.) The one preserved dimension could then characterize each
frame according to the high-level concepts that were previously encoded in many dimensions. The
dot below each image corresponds to the single-dimensional value in the preserved dimension for
that image. In this case, the ordering of every frame was consistent with the video sequence.
4.5 Controlled manifold topologies.
Figure 7 shows a comparison of results obtained from a manifold generated by translating an image
over a background of random noise. Nine of the 400 input images are shown as a sample, and
results with each algorithm are shown as a mesh. Each vertex is placed at a position corresponding
to the two values obtained from one of the 400 images. For increased visibility of the inherent
structure, the vertexes are connected with their nearest input space neighbors. Because two variables
(horizontal position and vertical position) were used to generate the dataset, this data creates a
manifold with an intrinsic dimensionality of two in a space with an extrinsic dimensionality of
2,401 (the total number of pixels in each image). Because the background is random, the average
distance between neighboring points in the input space is uniform, so the ideal result is known to
be a square. The distortions produced by Manifold Sculpting tend to be local in nature, while the
distortions produced by other algorithms tend to be more global. Note that the points are spread
nearly uniformly across the manifold in the results from Manifold Sculpting. This explains why the
results from Manifold Sculpting tend to fit the ideal results with much lower total error (as shown in
Figure 6: Images of a face reduced by Manifold Sculpting into a single dimension. The values are shown here on three wrapped lines in order to fit the page. The original image is shown above
each point.
Figure 7: A comparison of results with a manifold generated by translating an image over a background of noise. Manifold Sculpting tends to produce less global distortion, while other algorithms
tend to produce less local distortion. Each point represents an image. This experiment was done
in each case with 8 neighbors. (LLE fails to yield results with these parameters, but [13] reports a
similar experiment in which LLE produces results. In that case, as with Isomap and HLLE as shown
here, distortion is clearly visible near the edges.)
Figure 3 and Figure 4). Perhaps more significantly, it also tends to keep the intrinsic variables in the
dataset more linearly separable. This is particularly important when the dimensionality reduction is
used as a pre-processing step for a supervised learning algorithm.
We created four video sequences designed to show various types of manifold topologies and measured the accuracy of each manifold learning algorithm. These results (and sample frames from each
video) are shown in Figure 8. The first video shows a rotating stuffed animal. Since the background
pixels remain nearly constant while the pixels on the rotating object change in value, the manifold
corresponding to the vector encoding of this video will contain both smooth and changing areas.
The second video was made by moving a camera down a hallway. This produces a manifold with a
continuous range of variability, since pixels near the center of the frame change slowly while pixels
near the edges change rapidly. The third video pans across a scene. Unlike the video of the rotating
stuffed animal, there are no background pixels that remain constant. The last video shows another
rotating stuffed animal. Unlike the first video, however, the high-contrast texture of the object used
in this video results in a topology with much more variation. As the black spots shift across the
pixels, a manifold is created that swings wildly in the respective dimensions. Due to the large hills
and valleys in the topology of this manifold, the nearest neighbors of a frame frequently create paths
that cut across the manifold. In all four cases, Manifold Sculpting produced results competitive
with Isomap, which does particularly well with manifolds that have an intrinsic dimensionality of
Figure 8: Four video sequences were created with varying properties in the corresponding manifolds.
Dimensionality was reduced to one with each of four manifold learning algorithms. The percentage
of frames that were correctly ordered by each algorithm is shown.
one, but Manifold Sculpting is not limited by the intrinsic dimensionality as shown in the previous
experiments.
5 Discussion
The experiments tested in this paper show that Manifold Sculpting yields more accurate results
than other well-known manifold learning algorithms. Manifold Sculpting is robust to holes in the
sampled area. Manifold Sculpting is more accurate than other algorithms when the manifold is
sparsely sampled, and the gap is even wider with higher sampling densities. Manifold Sculpting
has difficulty when the selected number of neighbors is too small but consistently outperforms other
algorithms when it is larger.
Due to the iterative nature of Manifold Sculpting, it's difficult to produce a valid complexity analysis.
Consequently, we measured the scalability of Manifold Sculpting empirically and compared it with
that of HLLE, L-Isomap, and LLE. Due to space constraints these results are not included here, but
they indicate that Manifold Sculpting scales better than the other algorithms when the number
of data points is much larger than the number of input dimensions.
Manifold Sculpting benefits significantly when the data is pre-processed with the transformation step of PCA. The transformation step of any algorithm may be used in place of this step.
Current research seeks to identify which algorithms work best with Manifold Sculpting to efficiently produce high quality results. (An implementation of Manifold Sculpting is included at
http://waffles.sourceforge.net.)
References
[1] Joshua B. Tenenbaum, Vin de Silva, and John C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290:2319–2323, 2000.
[2] Sam T. Roweis and Lawrence K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323–2326, 2000.
[3] Vin de Silva and Joshua B. Tenenbaum. Global versus local methods in nonlinear dimensionality reduction. In NIPS, pages 705–712, 2002.
[4] Bernhard Schölkopf, Alexander J. Smola, and Klaus-Robert Müller. Kernel principal component analysis. Advances in kernel methods: support vector learning, pages 327–352, 1999.
[5] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Advances in Neural Information Processing Systems, 14, pages 585–591, 2001.
[6] Matthew Brand. Charting a manifold. In Advances in Neural Information Processing Systems, 15, pages 961–968. MIT Press, Cambridge, MA, 2003.
[7] Pascal Vincent and Yoshua Bengio. Manifold parzen windows. In Advances in Neural Information Processing Systems 15, pages 825–832. MIT Press, Cambridge, MA, 2003.
[8] D. Donoho and C. Grimes. Hessian eigenmaps: locally linear embedding techniques for high dimensional data. Proc. of National Academy of Sciences, 100(10):5591–5596, 2003.
[9] Yoshua Bengio and Martin Monperrus. Non-local manifold tangent learning. In Advances in Neural Information Processing Systems 17, pages 129–136. MIT Press, Cambridge, MA, 2005.
[10] Elizaveta Levina and Peter J. Bickel. Maximum likelihood estimation of intrinsic dimension. In NIPS, 2004.
[11] Zhenyue Zhang and Hongyuan Zha. A domain decomposition method for fast manifold learning. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18. MIT Press, Cambridge, MA, 2006.
[12] Sam Roweis. EM algorithms for PCA and SPCA. In Michael I. Jordan, Michael J. Kearns, and Sara A. Solla, editors, Advances in Neural Information Processing Systems, volume 10, 1998.
[13] Lawrence K. Saul and Sam T. Roweis. Think globally, fit locally: Unsupervised learning of low dimensional manifolds. Journal of Machine Learning Research, 4:119–155, 2003.
2,472 | 3,242 | Comparison of objective functions for estimating
linear-nonlinear models
Tatyana O. Sharpee
Computational Neurobiology Laboratory,
the Salk Institute for Biological Studies, La Jolla, CA 92037
[email protected]
Abstract
This paper compares a family of methods for characterizing neural feature selectivity with natural stimuli in the framework of the linear-nonlinear model. In this
model, the neural firing rate is a nonlinear function of a small number of relevant
stimulus components. The relevant stimulus dimensions can be found by maximizing one of a family of objective functions, Rényi divergences of different orders [1, 2]. We show that maximizing one of them, Rényi divergence of order 2, is equivalent to least-square fitting of the linear-nonlinear model to neural data. Next, we derive reconstruction errors in relevant dimensions found by maximizing Rényi divergences of arbitrary order in the asymptotic limit of large spike numbers. We find that the smallest errors are obtained with Rényi divergence of order 1, also known as Kullback-Leibler divergence. This corresponds to finding
relevant dimensions by maximizing mutual information [2]. We numerically test
how these optimization schemes perform in the regime of low signal-to-noise ratio (small number of spikes and increasing neural noise) for model visual neurons.
We find that optimization schemes based on either least square fitting or information maximization perform well even when number of spikes is small. Information
maximization provides slightly, but significantly, better reconstructions than least
square fitting. This makes the problem of finding relevant dimensions, together
with the problem of lossy compression [3], one of examples where informationtheoretic measures are no more data limited than those derived from least squares.
1 Introduction
The application of system identification techniques to the study of sensory neural systems has a
long history. One family of approaches employs the dimensionality reduction idea: while inputs
are typically very high-dimensional, not all dimensions are equally important for eliciting a neural
response [4, 5, 6, 7, 8]. The aim is then to find a small set of dimensions {ê1, ê2, . . .} in the stimulus space that are relevant for neural response, without imposing, however, a particular functional dependence between the neural response and the stimulus components {s1, s2, . . .} along the relevant dimensions:
$$P(\mathrm{spike}|\mathbf{s}) = P(\mathrm{spike})\, g(s_1, s_2, \ldots, s_K). \qquad (1)$$
If the inputs are Gaussian, the last requirement is not important, because relevant dimensions can be
found without knowing a correct functional form for the nonlinear function g in Eq. (1). However,
for non-Gaussian inputs a wrong assumption for the form of the nonlinearity g will lead to systematic
errors in the estimate of the relevant dimensions themselves [9, 5, 1, 2]. The larger the deviations of
the stimulus distribution from a Gaussian, the larger will be the effect of errors in the presumed form
of the nonlinearity function g on estimating the relevant dimensions. Because inputs derived from a
natural environment, either visual or auditory, have been shown to be strongly non-Gaussian [10], we
will concentrate here on system identification methods suitable for either Gaussian or non-Gaussian
stimuli.
To find the relevant dimensions for neural responses probed with non-Gaussian inputs, Hunter and
Korenberg proposed an iterative scheme [5] where the relevant dimensions are first found by assuming that the input–output function g is linear. Its functional form is then updated given the current
estimate of the relevant dimensions. The inverse of g is then used to improve the estimate of the
relevant dimensions. This procedure can be improved not to rely on inverting the nonlinear function
g by formulating optimization problem exclusively with respect to relevant dimensions [1, 2], where
the nonlinear function g is taken into account in the objective function to be optimized. A family of
objective functions suitable for finding relevant dimensions with natural stimuli have been proposed
based on Rényi divergences [1] between the probability distributions of stimulus components along the candidate relevant dimensions computed with respect to all inputs and those associated with spikes. Here we show that the optimization problem based on the Rényi divergence of order 2 corresponds to least-square fitting of the linear-nonlinear model to neural spike trains. The Kullback-Leibler divergence also belongs to this family and is the Rényi divergence of order 1. It quantifies the amount of mutual information between the neural response and the stimulus components along the relevant dimension [2]. The optimization scheme based on information maximization has been previously proposed and implemented on model [2] and real cells [11]. Here we derive asymptotic errors for optimization strategies based on Rényi divergences of arbitrary order, and show that relevant dimensions found by maximizing Kullback-Leibler divergence have the smallest errors in the limit of large spike numbers compared to maximizing other Rényi divergences, including the one
which implements least squares. We then show in numerical simulations on model cells that this
trend persists even for very low spike numbers.
2 Variance as an Objective Function
One way of selecting a low-dimensional model of neural response is to minimize a χ²-difference between spike probabilities measured and predicted by the model after averaging across all inputs s:
$$\chi^2[\mathbf{v}] = \int d\mathbf{s}\, P(\mathbf{s})\left[\frac{P(\mathrm{spike}|\mathbf{s})}{P(\mathrm{spike})} - \frac{P(\mathrm{spike}|\mathbf{s}\cdot\mathbf{v})}{P(\mathrm{spike})}\right]^{2}, \qquad (2)$$
where dimension v is the relevant dimension for a given model described by Eq. (1) [multiple dimensions could also be used, see below]. Using the Bayes' rule and rearranging terms, we get:
$$\chi^2[\mathbf{v}] = \int d\mathbf{s}\, P(\mathbf{s})\left[\frac{P(\mathbf{s}|\mathrm{spike})}{P(\mathbf{s})} - \frac{P(\mathbf{s}\cdot\mathbf{v}|\mathrm{spike})}{P(\mathbf{s}\cdot\mathbf{v})}\right]^{2} = \int d\mathbf{s}\,\frac{[P(\mathbf{s}|\mathrm{spike})]^{2}}{P(\mathbf{s})} - \int dx\,\frac{[P_{\mathbf{v}}(x|\mathrm{spike})]^{2}}{P_{\mathbf{v}}(x)}. \qquad (3)$$
In the last integral the averaging has been carried out with respect to all stimulus components except for those along the trial direction v, so that the integration variable is x = s·v. The probability distributions Pv(x) and Pv(x|spike) represent the result of this averaging across all presented stimuli and those that lead to a spike, respectively:
$$P_{\mathbf{v}}(x) = \int d\mathbf{s}\, P(\mathbf{s})\,\delta(x-\mathbf{s}\cdot\mathbf{v}), \qquad P_{\mathbf{v}}(x|\mathrm{spike}) = \int d\mathbf{s}\, P(\mathbf{s}|\mathrm{spike})\,\delta(x-\mathbf{s}\cdot\mathbf{v}), \qquad (4)$$
where δ(x) is a delta-function. In practice, both of the averages (4) are calculated by binning the range of projection values x and computing histograms normalized to unity. Note that if multiple spikes are sometimes elicited, the probability distribution P(x|spike) can be constructed by weighting the contribution from each stimulus according to the number of spikes it elicited.
If neural spikes are indeed based on one relevant dimension, then this dimension will explain all of the variance, leading to χ² = 0. For all other dimensions v, χ²[v] > 0. Based on Eq. (3), in order to minimize χ² we need to maximize
$$F[\mathbf{v}] = \int dx\, P_{\mathbf{v}}(x)\left[\frac{P_{\mathbf{v}}(x|\mathrm{spike})}{P_{\mathbf{v}}(x)}\right]^{2}, \qquad (5)$$
which is a Rényi divergence of order 2 between the probability distributions Pv(x|spike) and Pv(x). It belongs to the family of f-divergence measures that are based on a convex function of the ratio of the two probability distributions (instead of a power α in a Rényi divergence of order α) [12, 13, 1].
For an optimization strategy based on Rényi divergences of order α, the relevant dimensions are found by maximizing:
$$F^{(\alpha)}[\mathbf{v}] = \frac{1}{\alpha-1}\int dx\, P_{\mathbf{v}}(x)\left[\frac{P_{\mathbf{v}}(x|\mathrm{spike})}{P_{\mathbf{v}}(x)}\right]^{\alpha}. \qquad (6)$$
By comparison, when the relevant dimension(s) are found by maximizing information [2], the goal is to maximize the Kullback-Leibler divergence, which can be obtained by taking the formal limit α → 1:
$$I[\mathbf{v}] = \int dx\, P_{\mathbf{v}}(x)\,\frac{P_{\mathbf{v}}(x|\mathrm{spike})}{P_{\mathbf{v}}(x)}\ln\frac{P_{\mathbf{v}}(x|\mathrm{spike})}{P_{\mathbf{v}}(x)} = \int dx\, P_{\mathbf{v}}(x|\mathrm{spike})\ln\frac{P_{\mathbf{v}}(x|\mathrm{spike})}{P_{\mathbf{v}}(x)}. \qquad (7)$$
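In practice all of these objectives are evaluated from binned estimates of the distributions in Eq. (4). A minimal NumPy sketch (our own function; the bin count and shared bin edges are assumptions):

```python
import numpy as np

def renyi_objective(x_all, x_spike, alpha, bins=21):
    # Histogram estimate of F^(alpha)[v] from Eqs. (5)-(7), given projections of
    # all stimuli (x_all) and of spike-associated stimuli (x_spike) onto a
    # candidate dimension v. alpha=2 gives the variance objective of Eq. (5);
    # alpha -> 1 gives the mutual information of Eq. (7).
    edges = np.histogram_bin_edges(np.concatenate([x_all, x_spike]), bins=bins)
    p, _ = np.histogram(x_all, edges)
    q, _ = np.histogram(x_spike, edges)
    p, q = p / p.sum(), q / q.sum()
    good = (p > 0) & (q > 0)                 # skip empty bins
    r = q[good] / p[good]                    # P_v(x|spike) / P_v(x)
    if np.isclose(alpha, 1.0):               # Kullback-Leibler limit, Eq. (7)
        return np.sum(q[good] * np.log(r))
    return np.sum(p[good] * r ** alpha) / (alpha - 1.0)
```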
Returning to the variance optimization, the maximal value of F[v] that can be achieved by any dimension v is:
$$F_{\max} = \int d\mathbf{s}\,\frac{[P(\mathbf{s}|\mathrm{spike})]^{2}}{P(\mathbf{s})}. \qquad (8)$$
It corresponds to the variance in the firing rate averaged across different inputs (see Eq. (9) below). Computation of the mutual information carried by the individual spike about the stimulus relies on similar integrals. Following the procedure outlined for computing mutual information [14], one can use the Bayes' rule and the ergodic assumption to compute Fmax as a time-average:
$$F_{\max} = \frac{1}{T}\int dt\left[\frac{r(t)}{\bar{r}}\right]^{2}, \qquad (9)$$
where the firing rate r(t) = P(spike|s)/Δt is measured in time bins of width Δt using multiple repetitions of the same stimulus sequence. The stimulus ensemble should be diverse enough to justify the ergodic assumption [this could be checked by computing Fmax for increasing fractions of the overall dataset size]. The average firing rate r̄ = P(spike)/Δt is obtained by averaging r(t) in time.
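A sketch of this time-average, assuming spike counts from repeated presentations are arranged as a (trials × time bins) array:

```python
import numpy as np

def f_max_from_trials(spike_counts, dt):
    # Eq. (9): r(t) is the trial-averaged firing rate; F_max is the time
    # average of (r(t) / mean rate)^2.
    r = spike_counts.mean(axis=0) / dt
    return np.mean((r / r.mean()) ** 2)
```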
The fact that F[v] ≤ Fmax can be seen either by simply noting that χ²[v] ≥ 0, or from the data processing inequality, which applies not only to the Kullback-Leibler divergence, but also to Rényi divergences [12, 13, 1]. In other words, the variance in the firing rate explained by a given dimension
F [v] cannot be greater than the overall variance in the firing rate Fmax . This is because we have
averaged over all of the variations in the firing rate that correspond to inputs with the same projection
value on the dimension v and differ only in projections onto other dimensions.
Optimization schemes based on Rényi divergences of different orders have a very similar structure. In particular, the gradient can be evaluated in a similar way:
$$\nabla_{\mathbf{v}} F^{(\alpha)} = \frac{\alpha}{\alpha-1}\int dx\, P_{\mathbf{v}}(x|\mathrm{spike})\,\big[\langle\mathbf{s}|x,\mathrm{spike}\rangle - \langle\mathbf{s}|x\rangle\big]\,\frac{d}{dx}\left[\frac{P_{\mathbf{v}}(x|\mathrm{spike})}{P_{\mathbf{v}}(x)}\right]^{\alpha-1}, \qquad (10)$$
where ⟨s|x, spike⟩ = ∫ ds s δ(x − s·v) P(s|spike)/P(x|spike), and similarly for ⟨s|x⟩. The gradient is thus given by a weighted sum of spike-triggered averages ⟨s|x, spike⟩ − ⟨s|x⟩ conditional upon projection values of stimuli onto the dimension v for which the gradient of information is being evaluated. The similarity of the structure of both the objective functions and their gradients for different Rényi divergences means that the same numeric algorithms can be used for optimization of Rényi divergences of different orders. Examples of possible algorithms have been described [1, 2, 11] and include a combination of gradient ascent and simulated annealing.
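As an illustration, a histogram-based sketch of the gradient of Eq. (10) for α ≠ 1 (the α → 1 limit replaces the bracketed power with its logarithm; the function name, binning, and finite-difference derivative are our choices, not the authors'):

```python
import numpy as np

def renyi_gradient(S, spikes, v, alpha=2.0, bins=21):
    # Eq. (10) estimated from data. S: (samples, D) stimuli; spikes: per-sample
    # spike counts; v: current candidate dimension.
    x = S @ v
    edges = np.histogram_bin_edges(x, bins=bins)
    b = np.digitize(x, edges[1:-1])                                     # bin of each sample
    Pv = np.bincount(b, minlength=bins) / len(x)                        # P_v(x)
    Qv = np.bincount(b, weights=spikes, minlength=bins) / spikes.sum()  # P_v(x|spike)
    g = np.divide(Qv, Pv, out=np.zeros(bins), where=Pv > 0)             # gain P_v(x|spike)/P_v(x)
    d_pow = np.gradient(g ** (alpha - 1.0), edges[1] - edges[0])        # d/dx of the bracket
    grad = np.zeros(S.shape[1])
    for kbin in range(bins):
        m = b == kbin
        if Qv[kbin] == 0:
            continue
        s_spk = np.average(S[m], axis=0, weights=spikes[m])             # <s|x, spike>
        grad += Qv[kbin] * (s_spk - S[m].mean(axis=0)) * d_pow[kbin]
    return (alpha / (alpha - 1.0)) * grad
```

Since the merit function is independent of the length of v (as noted below), one would typically project this gradient onto the subspace orthogonal to v before taking a step.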
Here are a few facts common to this family of optimization schemes. First, as was proved in the case
of information maximization based on Kullback-Leibler divergence [2], the merit function F^(α)[v] does not change with the length of the vector v. Therefore v · ∇vF = 0, as can also be seen directly from Eq. (10), because v · ⟨s|x, spike⟩ = x and v · ⟨s|x⟩ = x. Second, the gradient is 0 when evaluated along the true receptive field. This is because for the true relevant dimension according to which spikes were generated, ⟨s|s1, spike⟩ = ⟨s|s1⟩, a consequence of the fact that relevant projections completely determine the spike probability. Third, merit functions, including variance and information, can be computed with respect to multiple dimensions by keeping track of stimulus projections on all the relevant dimensions when forming probability distributions (4). For example, in the case of two dimensions v1 and v2, we would use
$$P_{\mathbf{v}_1,\mathbf{v}_2}(x_1,x_2|\mathrm{spike}) = \int d\mathbf{s}\,\delta(x_1-\mathbf{s}\cdot\mathbf{v}_1)\,\delta(x_2-\mathbf{s}\cdot\mathbf{v}_2)\,P(\mathbf{s}|\mathrm{spike}),$$
$$P_{\mathbf{v}_1,\mathbf{v}_2}(x_1,x_2) = \int d\mathbf{s}\,\delta(x_1-\mathbf{s}\cdot\mathbf{v}_1)\,\delta(x_2-\mathbf{s}\cdot\mathbf{v}_2)\,P(\mathbf{s}), \qquad (11)$$
to compute the variance with respect to the two dimensions as F[v1, v2] = ∫ dx1 dx2 [P(x1, x2|spike)]²/P(x1, x2).
If multiple stimulus dimensions are relevant for eliciting the neural response, they can always be
found (provided sufficient number of responses have been recorded) by optimizing the variance
according to Eq. (11) with the correct number of dimensions. In practice this involves finding
a single relevant dimension first, and then iteratively increasing the number of relevant dimensions
considered while adjusting the previously found relevant dimensions. The amount by which relevant
dimensions need to be adjusted is proportional to the contribution of subsequent relevant dimensions
to neural spiking (the corresponding expression has the same functional form as that for relevant
dimensions found by maximizing information, cf. Appendix B [2]). If stimuli are either uncorrelated
or correlated but Gaussian, then the previously found dimensions do not need to be adjusted when
additional dimensions are introduced. All of the relevant dimensions can be found one by one, by
always searching only for a single relevant dimension in the subspace orthogonal to the relevant
dimensions already found.
3 Illustration for a model simple cell
Here we illustrate how relevant dimensions can be found by maximizing variance (equivalent to least
square fitting), and compare this scheme with that of finding relevant dimensions by maximizing
information, as well as with those that are based upon computing the spike-triggered average. Our
goal is to reconstruct relevant dimensions of neurons probed with inputs of arbitrary statistics. We
used stimuli derived from a natural visual environment [11] that are known to strongly deviate from
a Gaussian distribution. All of the studies have been carried out with respect to model neurons.
An advantage of doing so is that the relevant dimensions are known. The example model neuron is taken to mimic properties of simple cells found in the primary visual cortex. It has a single relevant dimension, which we will denote as ê1. As can be seen in Fig. 1(a), it is phase and orientation sensitive. In this model, a given stimulus s leads to a spike if the projection s1 = s · ê1 reaches a threshold value θ in the presence of noise: P(spike|s)/P(spike) ∝ g(s1) = ⟨H(s1 − θ + ξ)⟩, where a Gaussian random variable ξ with variance σ² models additive noise, and the function H(x) = 1 for x > 0, and zero otherwise. The parameter θ for threshold and the noise variance σ² determine the input–output function. In what follows we will measure these parameters in units of the standard deviation of stimulus projections along the relevant dimension. In these units, the signal-to-noise ratio is set by the noise standard deviation σ.
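A sketch of spike generation from this model cell (our own helper; the mean spike probability per frame is a free parameter here, set to match the ≈ 0.05 quoted in the next paragraph):

```python
import numpy as np
from scipy.stats import norm

def simulate_model_cell(S, e1, theta, sigma, p_spike=0.05, seed=0):
    # g(s1) = <H(s1 - theta + xi)> with xi ~ N(0, sigma^2), which integrates to
    # g(s1) = Phi((s1 - theta) / sigma). S: (frames, D) stimuli; e1: unit vector.
    rng = np.random.default_rng(seed)
    s1 = S @ e1
    g = norm.cdf((s1 - theta) / sigma)                 # nonlinear input-output function
    rate = np.clip(p_spike * g / g.mean(), 0.0, 1.0)   # rescale to the target spike rate
    return (rng.random(len(s1)) < rate).astype(int)    # 0/1 spike per frame
```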
Figure 1 shows that it is possible to obtain a good estimate of the relevant dimension ê1 by maximizing either information, as shown in panel (b), or variance, as shown in panel (c). The final value of the projection depends on the size of the dataset, as will be discussed below. In the example shown in Fig. 1 there were ≈ 50,000 spikes with an average probability of spike ≈ 0.05 per frame, and the reconstructed vector has a projection v̂max · ê1 = 0.98 when maximizing either information or variance. Having estimated the relevant dimension, one can proceed to sample the nonlinear input–output function. This is done by constructing histograms for P(s · v̂max) and P(s · v̂max|spike) of projections onto the vector v̂max found by maximizing either information or variance, and taking their ratio. Because of the Bayes' rule, this yields the nonlinear input–output function g of Eq. (1). In Fig. 1(d) the spike probability of the reconstructed neuron P(spike|s · v̂max) (crosses) is compared with the probability P(spike|s1) used in the model (solid line). A good match is obtained.
In actuality, reconstructing even just one relevant dimension from neural responses to correlated non-Gaussian inputs, such as those derived from the real world, is not an easy problem. This fact can
be appreciated by considering the estimates of relevant dimension obtained from the spike-triggered
average (STA) shown in panel (e). Correcting the STA by second-order correlations of the input
ensemble through a multiplication by the inverse covariance matrix results in a very noisy estimate,
[Figure 1 panels (a)–(h): the true dimension, the maximally informative dimension, the dimension of maximal variance, the STA, the decorrelated STA, the regularized decorrelated STA, and spike probability as a function of the filtered stimulus (sd = 1); see the caption below.]
Figure 1: Analysis of a model visual neuron with one relevant dimension shown in (a). Panels (b) and (c) show the normalized vectors v̂max found by maximizing information and variance, respectively; (d) the probability of a spike P(spike|s · v̂max) (blue crosses: information maximization; red crosses: variance maximization) is compared to P(spike|s1) used in generating spikes (solid line). Parameters of the model are σ = 0.5 and θ = 2, both given in units of the standard deviation of s1, which is also the unit for the x-axis in panels (d) and (h). The spike-triggered average (STA) is shown in (e). An attempt to remove correlations according to the reverse correlation method, C⁻¹(a priori) vsta (decorrelated STA), is shown in panel (f) and in panel (g) with regularization (see text). In panel (h), the spike probabilities as a function of stimulus projections onto the dimensions obtained as the decorrelated STA (blue crosses) and regularized decorrelated STA (red crosses) are compared to the spike probability used to generate spikes (solid line).
shown in panel (f). It has a projection value of 0.25. An attempt to regularize the inverse of the covariance
matrix results in a closer match to the true relevant dimension [15, 16, 17, 18, 19] and has a projection
value of 0.8, as shown in panel (g). While it appears to be less noisy, the regularized decorrelated
STA can have systematic deviations from the true relevant dimensions [9, 20, 2, 11]. Preferred
orientation is less susceptible to distortions than the preferred spatial frequency [19]. In this case
regularization was performed by setting aside 1/4 of the data as a test dataset, and choosing a cutoff
on the eigenvalues of the input covariance matrix that would give the maximal information value
on the test dataset [16, 19].
4 Comparison of Performance with Finite Data
In the limit of infinite data the relevant dimensions can be found by maximizing variance, information, or other objective functions [1]. In a real experiment, with a dataset of finite size, the optimal
vector found by any of the Rényi divergences, v̂, will deviate from the true relevant dimension ê1. In this section we compare the robustness of optimization strategies based on Rényi divergences of various orders, including least-squares fitting (α = 2) and information maximization (α = 1), as the dataset size decreases and/or neural noise increases.
The deviation from the true relevant dimension, δv = v̂ − ê1, arises because the probability distributions (4) are estimated from experimental histograms and differ from the distributions found in the limit of infinite data size. The effects of noise on the reconstruction can be characterized by taking the dot product between the relevant dimension and the optimal vector for a particular data sample: v̂ · ê1 = 1 − ½δv², where both v̂ and ê1 are normalized, and δv is by definition orthogonal to ê1. Assuming that the deviation δv is small, we can use a quadratic approximation to expand the objective function (obtained with finite data) near its maximum. This leads to an expression δv = −[H^(α)]⁻¹∇F^(α), which relates the deviation δv to the gradient and Hessian of the objective function evaluated at the vector ê1. The superscript (α) denotes the order of the Rényi divergence used as an objective function. Similarly to the case of optimizing information [2], the Hessian of the Rényi divergence of arbitrary order when evaluated along the optimal dimension ê1 is given by
$$H_{ij}^{(\alpha)} = -\alpha\int dx\, P(x|\mathrm{spike})\, C_{ij}(x)\left[\frac{P(x|\mathrm{spike})}{P(x)}\right]^{\alpha-3}\left(\frac{d}{dx}\frac{P(x|\mathrm{spike})}{P(x)}\right)^{2}, \qquad (12)$$
where Cij(x) = ⟨si sj|x⟩ − ⟨si|x⟩⟨sj|x⟩ are covariance matrices of inputs sorted by their projection x along the optimal dimension.
When averaged over possible outcomes of N trials, the gradient is zero for the optimal direction. In
other words, there is no specific direction towards which the deviations δv are biased. Next, in order to measure the expected spread of optimal dimensions around the true one ê1, we need to evaluate ⟨δv²⟩ = Tr[⟨∇F^(α)∇F^(α)T⟩(H^(α))⁻²], and therefore need to know the variance of the gradient of F averaged across different equivalent datasets. Assuming that the probability of generating a spike is independent for different bins, we find that ⟨∇Fi^(α)∇Fj^(α)⟩ = Bij^(α)/Nspike, where
$$B_{ij}^{(\alpha)} = \alpha^{2}\int dx\, P(x|\mathrm{spike})\, C_{ij}(x)\left[\frac{P(x|\mathrm{spike})}{P(x)}\right]^{2\alpha-4}\left(\frac{d}{dx}\frac{P(x|\mathrm{spike})}{P(x)}\right)^{2}. \qquad (13)$$
Therefore the expected error in the reconstruction of the optimal filter by maximizing variance is inversely proportional to the number of spikes:
$$\hat{\mathbf{v}}\cdot\hat{\mathbf{e}}_1 \approx 1 - \tfrac{1}{2}\langle\delta v^2\rangle = 1 - \frac{\mathrm{Tr}'[BH^{-2}]}{2N_{\mathrm{spike}}}, \qquad (14)$$
where we omitted the superscripts (α) for clarity. Tr′ denotes the trace taken in the subspace orthogonal to the relevant dimension (deviations along the relevant dimension have no meaning [2], which mathematically manifests itself in the dimension ê1 being an eigenvector of the matrices H and B with zero eigenvalue). Note that when α = 1, which corresponds to Kullback-Leibler divergence and information maximization, A ≡ H^(α=1) = B^(α=1). The asymptotic errors in this case are completely determined by the trace of the Hessian of information, ⟨δv²⟩ ∝ Tr′[A⁻¹], reproducing the previously published result for maximally informative dimensions [2]. Qualitatively, the expected error ∝ D/(2Nspike) increases in proportion to the dimensionality D of inputs and decreases as more
spikes are collected. This dependence is in common with expected errors of relevant dimensions
found by maximizing information [2], as well as methods based on computing the spike-triggered
average both for white noise [1, 21, 22] and correlated Gaussian inputs [2].
Next we examine which of the Rényi divergences provides the smallest asymptotic error (14) for estimating relevant dimensions. Representing the covariance matrix as Cij(x) = σik(x)σjk(x) (an exact expression for the matrices σ will not be needed), we can express the Hessian matrix H and the covariance matrix for the gradient B as averages with respect to the probability distribution P(x|spike):
$$B = \int dx\, P(x|\mathrm{spike})\, b(x)b^{T}(x), \qquad H = \int dx\, P(x|\mathrm{spike})\, a(x)b^{T}(x), \qquad (15)$$
where the gain function g(x) = P(x|spike)/P(x), and the matrices are bij(x) = ασij(x)g′(x)[g(x)]^(α−2) and aij(x) = σij(x)g′(x)/g(x). The Cauchy-Schwarz identity for scalar quantities states that ⟨b²⟩/⟨ab⟩² ≥ 1/⟨a²⟩, where the average is taken with respect to some probability distribution. A similar result can also be proven for matrices under a Tr operation as in Eq. (14). Applying the matrix version of the Cauchy-Schwarz identity to Eq. (14), we find that the smallest error is obtained when
$$\mathrm{Tr}'[BH^{-2}] = \mathrm{Tr}'[A^{-1}], \quad \mathrm{with} \quad A = \int dx\, P(x|\mathrm{spike})\, a(x)a^{T}(x). \qquad (16)$$
Matrix A corresponds to the Hessian of the merit function for α = 1: A = H^(α=1). Thus, among the various optimization strategies based on Rényi divergences, Kullback-Leibler divergence (α = 1) has the smallest asymptotic errors. Least-square fitting corresponds to optimization based on the Rényi divergence with α = 2, and is expected to have larger errors than optimization based on the Kullback-Leibler divergence (α = 1) implementing information maximization. This result agrees
with recent findings that Kullback-Leibler divergence is the best distortion measure for performing
lossy compression [3].
Below we use numerical simulations with model cells to compare the performance of the information (α = 1) and variance (α = 2) maximization strategies in the regime of relatively small numbers of spikes. We are interested in the range 0.1 ≲ D/Nspike ≲ 1, where the asymptotic results do not necessarily apply. The results of simulations are shown in Fig. 2 as a function of D/Nspike, as well as with varying neural noise levels. To estimate sharper (less noisy) input/output functions with σ = 1.5, 1.0, 0.5, 0.25, we used a larger number of bins (16, 21, 32, 64), respectively. Identical numerical algorithms, including the number of bins, were used for maximizing variance and information. The relevant dimension for each simulated spike train was obtained as an average of 4 jackknife estimates computed by setting aside 1/4 of the data as a test set. Results are shown after 1000 line optimizations (D = 900), and performance on the test set was checked after every line optimization. As can be seen, generally good reconstructions with projection values ≳ 0.7 can be obtained by maximizing either information or variance, even in the severely undersampled regime D ≲ Nspike. We find that reconstruction errors are comparable for both information and variance maximization strategies, and are better than, or at very low spike numbers equal to, those of STA-based methods. Information maximization achieves significantly smaller errors than the least-square fitting, when we analyze results for all simulations for four different model cells and spike numbers (p < 10⁻⁴, paired t-test).
[Figure 2 panels: projection on the true dimension as a function of D/Nspike for maximizing information, maximizing variance, STA, decorrelated STA, and regularized decorrelated STA; groups A–D. See the caption below.]
Figure 2: Projection of the vector v̂max obtained by maximizing information (red filled symbols) or variance (blue open symbols) on the true relevant dimension ê1 is plotted as a function of the ratio between the stimulus dimensionality D and the number of spikes Nspike, with D = 900. Simulations were carried out for model visual neurons with one relevant dimension from Fig. 1(a) and the input–output function Eq. (1) described by threshold θ = 2.0 and noise standard deviation σ = 1.5, 1.0, 0.5, 0.25 for groups labeled A, B, C, and D, respectively. The left panel also shows results obtained using the spike-triggered average (STA, gray) and decorrelated STA (dSTA, black). In the right panel, we replot results for information and variance optimization together with those for the regularized decorrelated STA (RdSTA, green open symbols). All error bars show standard deviations.
5 Conclusions
In this paper we compared the accuracy of a family of optimization strategies, based on Rényi divergences, for analyzing neural responses to natural stimuli. Finding relevant dimensions by maximizing one of the merit functions, the Rényi divergence of order 2, corresponds to fitting the linear-nonlinear model in the least-square sense to neural spike trains. An advantage of this approach over the standard least-square fitting procedure is that it does not require the nonlinear gain function to be invertible. We derived the errors expected for relevant dimensions computed by maximizing Rényi divergences of arbitrary order in the asymptotic regime of large spike numbers. The smallest errors were achieved not in the case of (nonlinear) least-square fitting of the linear-nonlinear model to the neural spike trains (Rényi divergence of order 2), but with information maximization (based on the Kullback-Leibler divergence). Numeric simulations on the performance of both information and variance maximization strategies showed that both algorithms performed well even when the number of spikes is very small. With small numbers of spikes, reconstructions based on information maximization had also slightly, but significantly, smaller errors than those of least-square fitting. This makes the problem of finding relevant dimensions, together with the problem of lossy compression [23, 3], one of the examples where information-theoretic measures are no more data limited than those derived from least squares. It
remains possible, however, that other merit functions based on non-polynomial divergence measures
could provide even smaller reconstruction errors than information maximization.
References
[1] L. Paninski. Convergence properties of three spike-triggered average techniques. Network: Comput. Neural Syst., 14:437-464, 2003.
[2] T. Sharpee, N.C. Rust, and W. Bialek. Analyzing neural responses to natural signals: Maximally informative dimensions. Neural Computation, 16:223-250, 2004. See also physics/0212110, and a preliminary account in Advances in Neural Information Processing 15, edited by S. Becker, S. Thrun, and K. Obermayer, pp. 261-268 (MIT Press, Cambridge, 2003).
[3] Peter Harremoës and Naftali Tishby. The information bottleneck revisited or how to choose a good distortion measure. Proc. of the IEEE Int. Symp. on Information Theory (ISIT), 2007.
[4] E. de Boer and P. Kuyper. Triggered correlation. IEEE Trans. Biomed. Eng., 15:169-179, 1968.
[5] I. W. Hunter and M. J. Korenberg. The identification of nonlinear biological systems: Wiener and Hammerstein cascade models. Biol. Cybern., 55:135-144, 1986.
[6] R. R. de Ruyter van Steveninck and W. Bialek. Real-time performance of a movement-sensitive neuron in the blowfly visual system: coding and information transfer in short spike sequences. Proc. R. Soc. Lond. B, 265:259-265, 1988.
[7] V. Z. Marmarelis. Modeling methodology for nonlinear physiological systems. Ann. Biomed. Eng., 25:239-251, 1997.
[8] W. Bialek and R. R. de Ruyter van Steveninck. Features and dimensions: Motion estimation in fly vision. q-bio/0505003, 2005.
[9] D. L. Ringach, G. Sapiro, and R. Shapley. A subspace reverse-correlation technique for the study of visual neurons. Vision Res., 37:2455-2464, 1997.
[10] D. L. Ruderman and W. Bialek. Statistics of natural images: scaling in the woods. Phys. Rev. Lett., 73:814-817, 1994.
[11] T. O. Sharpee, H. Sugihara, A. V. Kurgansky, S. P. Rebrik, M. P. Stryker, and K. D. Miller. Adaptive filtering enhances information transmission in visual cortex. Nature, 439:936-942, 2006.
[12] S. M. Ali and S. D. Silvey. A general class of coefficients of divergence of one distribution from another. J. R. Statist. Soc. B, 28:131-142, 1966.
[13] I. Csiszár. Information-type measures of difference of probability distributions and indirect observations. Studia Sci. Math. Hungar., 2:299-318, 1967.
[14] N. Brenner, S. P. Strong, R. Koberle, W. Bialek, and R. R. de Ruyter van Steveninck. Synergy in a neural code. Neural Computation, 12:1531-1552, 2000. See also physics/9902067.
[15] F. E. Theunissen, K. Sen, and A. J. Doupe. Spectral-temporal receptive fields of nonlinear auditory neurons obtained using natural sounds. J. Neurosci., 20:2315-2331, 2000.
[16] F. E. Theunissen, S. V. David, N. C. Singh, A. Hsu, W. E. Vinje, and J. L. Gallant. Estimating spatio-temporal receptive fields of auditory and visual neurons from their responses to natural stimuli. Network, 3:289-316, 2001.
[17] K. Sen, F. E. Theunissen, and A. J. Doupe. Feature analysis of natural sounds in the songbird auditory forebrain. J. Neurophysiol., 86:1445-1458, 2001.
[18] D. Smyth, B. Willmore, G. E. Baker, I. D. Thompson, and D. J. Tolhurst. The receptive-field organization of simple cells in the primary visual cortex of ferrets under natural scene stimulation. J. Neurosci., 23:4746-4759, 2003.
[19] G. Felsen, J. Touryan, F. Han, and Y. Dan. Cortical sensitivity to visual features in natural scenes. PLoS Biol., 3:1819-1828, 2005.
[20] D. L. Ringach, M. J. Hawken, and R. Shapley. Receptive field structure of neurons in monkey visual cortex revealed by stimulation with natural image sequences. Journal of Vision, 2:12-24, 2002.
[21] N. C. Rust, O. Schwartz, J. A. Movshon, and E. P. Simoncelli. Spatiotemporal elements of macaque V1 receptive fields. Neuron, 46:945-956, 2005.
[22] O. Schwartz, J. W. Pillow, N. C. Rust, and E. P. Simoncelli. Spike-triggered neural characterization. Journal of Vision, 176:484-507, 2006.
[23] N. Tishby, F. C. Pereira, and W. Bialek. The information bottleneck method. In B. Hajek and R. S. Sreenivas, editors, Proceedings of the 37th Allerton Conference on Communication, Control and Computing, pp. 368-377. University of Illinois, 1999. See also physics/0004057.
| 3242 |@word [bag-of-words feature vector for this row omitted]
2,473 | 3,243 | A Risk Minimization Principle
for a Class of Parzen Estimators
Kristiaan Pelckmans, Johan A.K. Suykens, Bart De Moor
Department of Electrical Engineering (ESAT) - SCD/SISTA
K.U.Leuven University
Kasteelpark Arenberg 10, Leuven, Belgium
[email protected]
Abstract
This paper1 explores the use of a Maximal Average Margin (MAM) optimality
principle for the design of learning algorithms. It is shown that the application
of this risk minimization principle results in a class of (computationally) simple
learning machines similar to the classical Parzen window classifier. A direct relation with the Rademacher complexities is established, as such facilitating analysis
and providing a notion of certainty of prediction. This analysis is related to Support Vector Machines by means of a margin transformation. The power of the
MAM principle is illustrated further by application to ordinal regression tasks,
resulting in an O(n) algorithm able to process large datasets in reasonable time.
1 Introduction
The quest for efficient machine learning techniques which (a) have favorable generalization capacities, (b) are flexible for adaptation to a specific task, and (c) are cheap to implement is a pervasive
theme in literature, see e.g. [14] and references therein. This paper introduces a novel concept for
designing a learning algorithm, namely the Maximal Average Margin (MAM) principle. It closely
resembles the classical notion of maximal margin lying at the basis of perceptrons, Support Vector Machines (SVMs) and boosting algorithms; see a.o. [14, 11]. It however optimizes the average margin of points to the (hypothesis) hyperplane, instead of the worst-case margin as is traditional. The
full margin distribution was studied earlier in e.g. [13], and theoretical results were extended and
incorporated in a learning algorithm in [5].
The contribution of this paper is twofold. On a methodological level, we relate (i) results in structural
risk minimization, (ii) data-dependent (but dimension-independent) Rademacher complexities [8, 1,
14] and a new concept of 'certainty of prediction', (iii) the notion of margin (as central in most state-of-the-art learning machines), and (iv) statistical estimators such as Parzen windows and Nadaraya-Watson kernel estimators. In [10], the principle was already shown to underlie the approach of
mincuts for transductive inference over a weighted undirected graph. Further, consider the model class consisting of all models with a bounded average margin (or classes with a fixed Rademacher complexity, as we will indicate later on). The set of such classes is clearly nested, enabling structural risk minimization [8].
On a practical level, we show how the optimality principle can be used for designing a computationally fast approach to (large-scale) classification and ordinal regression tasks, much along the same
1
Acknowledgements - K. Pelckmans is supported by an FWO PDM. J.A.K. Suykens and B. De Moor are a
(full) professor at the Katholieke Universiteit Leuven, Belgium. Research supported by Research Council KUL:
GOA AMBioRICS, CoE EF/05/006 OPTEC, IOF-SCORES4CHEM, several PhD/postdoc & fellow grants;
Flemish Government: FWO: PhD/postdoc grants, projects G.0452.04, G.0499.04, G.0211.05, G.0226.06,
G.0321.06, G.0302.07, (ICCoS, ANMMM, MLDM); IWT: PhD Grants, McKnow-E, Eureka-Flite+ Belgian
Federal Science Policy Office: IUAP P6/04, EU: ERNSI;
lines as Parzen classifiers and Nadaraya-Watson estimators. It becomes clear that this result enables
researchers on Parzen windows to benefit directly from recent advances in kernel machines, two
fields which have evolved mostly separately. It must be emphasized that the resulting learning rules
were already studied in different forms and motivated by asymptotic and geometric arguments, as
e.g. the Parzen window classifier [4], the 'simple classifier' as in [12], chap. 1, and probabilistic neural networks [15], while in this paper we show how an (empirical) risk-based optimality criterion underlies this approach. A number of experiments confirm the use of the resulting cheap learning rules
for providing a reasonable (baseline) performance in a small time-window.
The following notational conventions are used throughout the paper. Let the random vector (X, Y) ∈ R^d × {−1, 1} obey a (fixed but unknown) joint distribution P_XY from a probability space (R^d × {−1, 1}, P). Let D_n = {(X_i, Y_i)}_{i=1}^n be sampled i.i.d. according to P_XY. Let y ∈ R^n be defined as y = (Y_1, ..., Y_n)^T ∈ {−1, 1}^n and X = (X_1, ..., X_n)^T ∈ R^{n×d}. This paper is organized as follows. The next section illustrates the principle of maximal average margin for classification problems. Section 3 investigates the close relationship with Rademacher complexities, Section 4 develops the maximal average margin principle for ordinal regression, and Section 5 reports experimental results of applying the MAM principle to classification and ordinal regression tasks.
2 Maximal Average Margin for Classifiers
2.1 The Linear Case
Let the class of hypotheses be defined as

    H = { f : R^d → R | ∃ w ∈ R^d, ∀x ∈ R^d : f(x) = w^T x, ||w||_2 = 1 }.    (1)

Consequently, the signed distance of a sample (X, Y) to the hyperplane w^T x = 0, or the margin M(w) ∈ R, can be defined as

    M(w) = Y (w^T X) / ||w||_2.    (2)
SVMs maximize the worst-case margin. We instead focus on the first moment of the margin distribution. Maximizing the expected (average) margin follows from solving

    M* = max_w E[ Y (w^T X) / ||w||_2 ] = max_{f ∈ H} E[Y f(X)].    (3)

Remark that the non-separable case does not require slack variables. The empirical counterpart becomes

    M̂ = max_w (1/n) Σ_{i=1}^n Y_i (w^T X_i) / ||w||_2,    (4)
which can be written as a constrained convex problem: min_w −(1/n) Σ_{i=1}^n Y_i (w^T X_i) s.t. ||w||_2 ≤ 1. The Lagrangian with multiplier λ ≥ 0 becomes L(w, λ) = −(1/n) Σ_{i=1}^n Y_i (w^T X_i) + (λ/2)(w^T w − 1). By switching the minimax problem to a maximin problem (application of Slater's condition), the first-order condition for optimality ∂L(w, λ)/∂w = 0 gives

    w_n = (1/(λn)) Σ_{i=1}^n Y_i X_i = (1/(λn)) X^T y,    (5)

where w_n ∈ R^d denotes the optimum of (4). The corresponding parameter λ can be found by substituting (5) in the constraint w^T w = 1, or λ = (1/n) || Σ_{i=1}^n Y_i X_i ||_2 = (1/n) sqrt(y^T X X^T y), since the optimum obviously occurs when w^T w = 1. It becomes clear that the above derivations remain valid as n → ∞, resulting in the following theorem.
Theorem 1 (Explicit Actual Optimum for the MAMC) The function f(x) = w^T x in H maximizing the expected margin satisfies

    arg max_w E[ Y (w^T X) / ||w||_2 ] = (1/Ω) E[XY] =: w*,    (6)

where Ω is a normalization constant such that ||w*||_2 = 1.
2.2 Kernel-based Classifier and Parzen Window
It becomes straightforward to recast the resulting classifier as a kernel classifier by mapping the input data samples X into a feature space φ : R^d → R^{d_φ}, where d_φ is possibly infinite. In particular, we do not have to resort to Lagrange duality in a context of convex optimization (see e.g. [14, 9] for an overview) or to functional analysis in a Reproducing Kernel Hilbert Space. Specifically,

    w_n^T φ(X) = (1/(λn)) Σ_{i=1}^n Y_i K(X_i, X),    (7)

where K : R^d × R^d → R is defined as the inner product such that φ(X)^T φ(X') = K(X, X') for any X, X'. Conversely, any function K corresponds with the inner product of a valid map φ if the function K is positive definite. As previously, the term λ becomes λ = (1/n) sqrt(y^T Ω y) with kernel matrix Ω ∈ R^{n×n}, where Ω_ij = K(X_i, X_j) for all i, j = 1, ..., n. Now the class of positive definite Mercer kernels can be used, as they induce a proper mapping φ. A classical choice is the use of a linear kernel (K(X, X') = X^T X'), a polynomial kernel of degree p ∈ N_0 (K(X, X') = (X^T X' + b)^p), an RBF kernel (K(X, X') = exp(−||X − X'||_2^2 / σ)), or a dedicated kernel for a specific application (e.g. a string kernel or a Fisher kernel; see e.g. [14] and references therein). Figure 1.a depicts an example of a nonlinear classifier based on the well-known Ripley dataset, where the contour lines score the 'certainty of prediction' as explained in the next section.
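A minimal sketch of the learning rule (7), assuming an RBF kernel and with class and argument names chosen here for illustration (this is not the authors' implementation):

    import numpy as np

    def rbf_kernel(A, B, sigma=1.0):
        """K(x, x') = exp(-||x - x'||^2 / sigma), for all pairs of rows."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / sigma)

    class MAMClassifier:
        """Maximal-average-margin / Parzen-window classifier of Eq. (7):
        f(x) = (1/(lam*n)) * sum_i Y_i K(X_i, x),
        with lam = sqrt(y' Omega y) / n."""
        def fit(self, X, y, sigma=1.0):
            self.X, self.y, self.sigma = X, y, sigma
            Omega = rbf_kernel(X, X, sigma)
            self.lam = np.sqrt(y @ Omega @ y) / len(y)
            return self

        def decision_function(self, Xnew):
            K = rbf_kernel(self.X, Xnew, self.sigma)   # shape (n, m)
            return (self.y @ K) / (self.lam * len(self.y))

        def predict(self, Xnew):
            return np.sign(self.decision_function(Xnew))

Note that λ is the only quantity requiring the full kernel matrix (an O(n^2) computation done once at training time); each subsequent prediction costs O(n) kernel evaluations.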
The expression (7) is similar (proportional) to the classical Parzen window for classification, but differs in the use of a positive definite (Mercer) kernel K instead of a pdf ψ((X − ·)/h) with bandwidth h > 0, and in the form of the denominator. The classical motivation of statistical kernel estimators is based on asymptotic theory in low dimensions (i.e. d = O(1)); see e.g. [4], chap. 10 and references. The functional form of the optimal rule (7) is similar to the 'simple classifier' described in [12], chap. 1. Thirdly, this estimator was also termed, and empirically validated as, a probabilistic neural network in [15]. The novel element in the above result is the derivation of a clear (both theoretical and empirical) optimality principle for the rule, as opposed to the asymptotic results of [4] and the geometric motivations in [12, 15]. As a direct byproduct, it becomes straightforward to extend the Parzen window classifier with an additional intercept term or other parametric parts, or towards additive (structured) models as in [9].
3 Analysis and Rademacher Complexities
The quantity of interest in the analysis of the generalization performance is the probability of predicting a mistake (the risk R(w; P_XY)), or

    R(w; P_XY) = P_XY( Y (w^T φ(X)) ≤ 0 ) = E[ I(Y (w^T φ(X)) ≤ 0) ],    (8)

where I(z) equals one if z is true, and zero otherwise.
3.1 Rademacher Complexity
Let {σ_i}_{i=1}^n, taken from the set {−1, 1}^n, be Bernoulli random variables with P(σ = 1) = P(σ = −1) = 1/2. The empirical Rademacher complexity is then defined [8, 1] as

    R̂_n(H) ≜ E_σ [ sup_{f ∈ H} (2/n) Σ_{i=1}^n σ_i f(X_i) | X_1, ..., X_n ],    (9)

where the expectation is taken over the choice of the binary vector σ = (σ_1, ..., σ_n)^T ∈ {−1, 1}^n. It is observed that the empirical Rademacher complexity defines a natural complexity measure for studying the maximal average margin classifier, as the definitions of the empirical Rademacher complexity and the maximal average margin closely resemble each other (see also [8]). The following result was given in [1], Lemma 22, but we give an alternative proof by exploiting the structure of the optimal estimate explicitly.
Lemma 1 (Trace bound for the Empirical Rademacher Complexity for H) Let Ω ∈ R^{n×n} be defined as Ω_ij = K(X_i, X_j) for all i, j = 1, ..., n. Then

    R̂_n(H) ≤ (2/n) sqrt(tr(Ω)).    (10)
Proof: The proof goes along the same lines as the classical bound on the empirical Rademacher complexity for kernel machines outlined in [1], Lemma 22. Specifically, once a vector σ ∈ {−1, 1}^n is fixed, it is immediately seen that max_{f ∈ H} (1/n) Σ_{i=1}^n σ_i f(X_i) is attained by the solution as in (7), or

    max_w Σ_{i=1}^n σ_i (w^T φ(X_i)) = σ^T Ω σ / sqrt(σ^T Ω σ) = sqrt(σ^T Ω σ).

Now, application of the expectation operator E_σ over the choice of the Rademacher variables gives

    R̂_n(H) = (2/n) E_σ[ sqrt(σ^T Ω σ) ] ≤ (2/n) ( E_σ[σ^T Ω σ] )^{1/2}
            = (2/n) ( Σ_{i,j} E[σ_i σ_j] K(X_i, X_j) )^{1/2}
            = (2/n) ( Σ_{i=1}^n K(X_i, X_i) )^{1/2} = (2/n) sqrt(tr(Ω)),    (11)

where the inequality is based on application of Jensen's inequality. This proves the Lemma.
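The closed form in the proof makes Lemma 1 easy to sanity-check numerically. The following sketch (illustrative names, simple Monte Carlo, not from the paper) estimates R̂_n(H) and compares it to the trace bound (10):

    import numpy as np

    def empirical_rademacher(Omega, n_draws=2000, rng=None):
        """Monte Carlo estimate of R_hat_n(H) = E_sigma[(2/n) sqrt(s' Omega s)]
        for the unit-norm linear class, using the closed form above."""
        rng = rng or np.random.default_rng(0)
        n = Omega.shape[0]
        sigma = rng.choice([-1.0, 1.0], size=(n_draws, n))
        vals = np.sqrt(np.einsum('ki,ij,kj->k', sigma, Omega, sigma))
        return 2.0 * vals.mean() / n

    # Example with a linear-kernel Gram matrix: the Monte Carlo estimate
    # should not exceed the trace bound (2/n) * sqrt(tr(Omega)).
    X = np.random.default_rng(1).normal(size=(50, 3))
    Omega = X @ X.T
    est = empirical_rademacher(Omega)
    bound = 2.0 * np.sqrt(np.trace(Omega)) / 50
    print(f"MC estimate {est:.3f} vs trace bound {bound:.3f}")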
Remark that in the case of a kernel with constant trace (as e.g. for the RBF kernel, where tr(Ω) = n), it follows from this result that also the (expected) Rademacher complexity obeys E[R̂_n(H)] ≤ (2/n) sqrt(tr(Ω)) = 2/sqrt(n). In general, one has that E[K(X, X)] equals the trace of the integral operator T_K defined on L_2(P_X) as T_K(f) = ∫ K(X, Y) f(X) dP_X(X), as in [1]. Application of McDiarmid's inequality to the variable Z = sup_{f ∈ H} E[Y (w^T φ(X))] − (1/n) Σ_{i=1}^n Y_i (w^T φ(X_i)) gives, as in [8, 1]:
Lemma 2 (Deviation Inequality) Let 0 < B_φ < ∞ be a fixed constant such that sup_z ||φ(z)||_2 = sup_z sqrt(K(z, z)) ≤ B_φ, so that |w^T φ(z)| ≤ B_φ, and let δ ∈ R_0^+ be fixed. Then with probability exceeding 1 − δ, one has for any w ∈ R^d that

    E[Y (w^T φ(X))] ≥ (1/n) Σ_{i=1}^n Y_i (w^T φ(X_i)) − R̂_n(H) − 3 B_φ sqrt( 2 ln(2/δ) / n ).    (12)
Therefore it follows that one maximizes the expected margin by maximizing the empirical average margin, while controlling the empirical Rademacher complexity through the choice of the model class (kernel). In the case of RBF kernels, B_φ = 1, resulting in a reasonably tight bound. It is now illustrated how one can obtain a practical upper bound on the 'certainty of prediction' using f(x) = w_n^T x.
Theorem 2 (Occurrence of Mistakes) Given an i.i.d. sample D_n = {(X_i, Y_i)}_{i=1}^n, a constant B_φ ∈ R such that sup_z sqrt(K(z, z)) ≤ B_φ, and a fixed δ ∈ R_0^+. Then, with probability exceeding 1 − δ, one has for all w ∈ R^d that

    P( Y (w^T φ(X)) ≤ 0 ) ≤ ( B_φ − E[Y (w^T φ(X))] ) / B_φ
                          ≤ 1 − sqrt(y^T Ω y)/(n B_φ) + R̂_n(H)/B_φ + 3 sqrt( 2 ln(2/δ) / n ).    (13)
Proof: The proof follows directly from application of Markov's inequality to the positive random variable B_φ − Y (w^T φ(X)), with expectation B_φ − E[Y (w^T φ(X))], estimated accurately by the sample average as in the previous theorem.
More generally, one obtains that for any w ∈ R^d and for any ρ such that −B_φ < ρ < B_φ,

    P( Y (w^T φ(X)) ≤ −ρ ) ≤ B_φ/(B_φ + ρ) − sqrt(y^T Ω y)/(n(B_φ + ρ)) + R̂_n(H)/(B_φ + ρ)
                            + (3 B_φ/(B_φ + ρ)) sqrt( 2 ln(2/δ) / n ),    (14)

with probability exceeding 1 − δ < 1. This results in a practical assessment of the 'certainty' of a prediction as follows. At first, note that the random variable Y (w_n^T φ(x)) for a fixed X = x can take two values: either −|w_n^T φ(x)| or |w_n^T φ(x)|. Therefore P(Y (w_n^T φ(x)) ≤ 0) = P(Y (w_n^T φ(x)) =
[Figure 1, panels (a) and (b): class-1 and class-2 predictions in the (X1, X2) plane with certainty-of-prediction contour lines; see caption below.]
Figure 1: Example of (a) the MAM classifier and (b) the SVM on the Ripley dataset. The contour lines represent the estimate of the certainty of prediction ('scores') as derived in Theorem 2 for the MAM classifier in (a), and as in Corollary 1 for the case of SVMs with g(z) = min(1, max(−1, z)), where |z| < 1 corresponds with the inner part of the margin of the SVM (b). While the contours in (a) give an overall score of the predictions, the scores given in (b) focus towards the margin of the SVM.
−|w_n^T φ(x)|) ≤ P(Y (w_n^T φ(x)) ≤ −|w_n^T φ(x)|), as Y can only take the two values −1 or 1. Thus the event 'Y ≠ sign(w^T x*)' for a sample X = x* occurs with probability lower than the right-hand side of (13) with ρ = |w^T x*|. When asserting this for a number n_v ∈ N of samples X ∼ P_X with n_v → ∞, mispredictions would occur in less than the corresponding fraction of the n_v samples. In this sense, one can use the latent variable w^T φ(x*) as an indication of how 'certain' the prediction is. Figure 1.a gives an example of the MAM classifier, together with the level plots indicating the certainty of prediction. Remark however that the described 'certainty of prediction' statement differs from a conditional statement of the risk given as P(Y (w^T φ(X)) < 0 | X = x*). The essential difference with probabilistic estimates based on the density estimates resulting from the Parzen window estimator is that the results become independent of the data dimension, as one avoids estimating the joint distribution.
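A hedged sketch of such a certainty score, evaluating the right-hand side of (14) with ρ = |f(x*)| for an RBF kernel (so B_φ = 1) and substituting the trace bound of Lemma 1 for R̂_n(H), is given below. It reuses the MAMClassifier sketched in Section 2.2 and is illustrative rather than the authors' procedure:

    import numpy as np

    def certainty_bound(model, x_star, Omega, delta=0.05):
        """Upper bound on P(Y f(x*) <= 0), Eq. (14) with rho = |f(x*)|.
        `model` is the MAMClassifier sketch; a smaller value means a
        more 'certain' prediction. The Rademacher term is replaced by
        the trace bound (10), which only loosens the estimate."""
        n = Omega.shape[0]
        B = 1.0                                    # sup_z sqrt(K(z, z)) for RBF
        rho = abs(model.decision_function(x_star[None, :])[0])
        rad = 2.0 * np.sqrt(np.trace(Omega)) / n   # Lemma 1 in place of R_hat_n
        emp = np.sqrt(model.y @ Omega @ model.y) / n   # empirical average margin
        dev = 3.0 * B * np.sqrt(2.0 * np.log(2.0 / delta) / n)
        return min(1.0, (B - emp + rad + dev) / (B + rho))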
3.2 Transforming the Margin Distribution
Consider the case where the assumption of a reasonable constant B such that P(||X||_2 < B) = 1 is unrealistic. Then a transformation of the random variable Y (w^T X) can be fruitful, using a monotonically increasing function g : R → R with a constant B'_φ ≤ B such that |g(z)| ≤ B'_φ and g(0) = 0. In the choice of a proper transformation, two counteracting effects should be traded off properly. At first, a small choice of B improves the bound, as e.g. described in Lemma 2. On the other hand, such a transformation would make the expected value E[g(Y (w^T φ(X)))] smaller than E[Y (w^T φ(X))]. Modifying Theorem 2 gives
Corollary 1 (Occurrence of Mistakes, bis) Given i.i.d. samples D_n = {(X_i, Y_i)}_{i=1}^n and a fixed δ ∈ R_0^+. Let g : R → R be a monotonically increasing function with Lipschitz constant 0 < L_g < ∞, let B'_φ ∈ R be such that |g(z)| ≤ B'_φ for all z, and let g(0) = 0. Then with probability exceeding 1 − δ, one has for any ρ such that −B'_φ ≤ ρ ≤ B'_φ and any w ∈ R^d that

    P( g(Y (w_n^T φ(X))) ≤ −ρ ) ≤ B'_φ/(B'_φ + ρ)
        − [ (1/n) Σ_{i=1}^n g(Y_i (w^T φ(X_i))) − L_g R̂_n(H) − 3 B'_φ sqrt( 2 log(2/δ) / n ) ] / (B'_φ + ρ).    (15)
This result follows straightforwardly from Theorem 2, using the property that R̂_n(g ∘ H) ≤ L_g R̂_n(H); see e.g. [1]. When ρ = 0, one has P( g(Y (w_n^T φ(X))) ≤ 0 ) ≤ 1 − E[g(Y (w^T φ(X)))] / B'_φ. Similarly as in the previous section, Corollary 1 can be used to score the certainty of prediction by considering for each X = x* the values of g(w^T x*) and g(−w^T x*). Figure 1.b gives an example, considering the clipping transformation g(z) = min(1, max(−1, z)) ∈ [−1, 1], such that B'_φ = 1. Note that this a priori choice of the function g does not depend on the (empirical) optimality criterion at hand.
3.3 Soft-margin SVMs and MAM classifiers
Apart from the margin-based mechanism, the MAM classifier shares other properties with the soft-margin maximal margin classifier (SVM) as well. Consider the saturation function g(z) = (1 − z)_+, where (·)_+ is defined as (z)_+ = z if z ≥ 0 and zero otherwise. Application of this function to the MAM formulation of (4) gives, for a C > 0,

    max_w − Σ_{i=1}^n ( 1 − Y_i (w^T φ(X_i)) )_+   s.t.  w^T w = C,    (16)

which is similar to the support vector machine (see e.g. [14]). To make this equivalence more explicit, consider the following formulation of (16):

    min_{w,ξ} Σ_{i=1}^n ξ_i   s.t.  w^T w ≤ C and Y_i (w^T φ(X_i)) ≥ 1 − ξ_i, ξ_i ≥ 0, ∀i = 1, ..., n,    (17)

which is similar to the SVM. Consider the following modification:

    min_{w,ξ} Σ_{i=1}^n ξ_i   s.t.  w^T w ≤ C and Y_i (w^T φ(X_i)) ≥ 1 − ξ_i, ∀i = 1, ..., n,    (18)

which is equivalent to (4) since at the optimum Y_i (w^T φ(X_i)) = 1 − ξ_i for all i. Thus, omission of the slack constraints ξ_i ≥ 0 in the SVM formulation results in the Parzen window classifier.
4 Maximal Average Margin for Ordinal Regression
Along the same lines as [6], the maximal average margin principle can be applied to ordinal regression tasks. Let (X, Y) ∈ R^d × {1, ..., m} with distribution P_XY. The w ∈ R^d maximizing P( w^T (φ(X) − φ(X'))(Y − Y') > 0 ) can be found by solving for the maximal average margin between pairs as follows:

    M* = max_w E[ sign(Y − Y') w^T (φ(X) − φ(X')) / ||w||_2 ].    (19)

Given n i.i.d. samples {(X_i, Y_i)}_{i=1}^n, empirical risk minimization is obtained by solving

    min_w − (1/n) Σ_{i,j=1}^n sign(Y_j − Y_i) w^T (φ(X_j) − φ(X_i))   s.t.  ||w||_2 ≤ 1.    (20)
The Lagrangian with multiplier λ ≥ 0 becomes L(w, λ) = −(1/n) Σ_{i,j} sign(Y_j − Y_i) w^T (φ(X_j) − φ(X_i)) + (λ/2)(w^T w − 1). Let there be n* couples (i, j). Let D_y ∈ {−1, 0, 1}^{n×n*} be such that D_{y,ik} = 1 and D_{y,jk} = −1 if the k-th couple equals (i, j). Then, by switching the minimax problem to a maximin problem, the first-order condition for optimality ∂L(w, λ)/∂w = 0 gives the expression

    w_n = (1/(λn)) Σ_{Y_i < Y_j} ( φ(X_j) − φ(X_i) ) = (1/(λn)) X^T D_y 1_{n*}.

Now the parameter λ can be found by substituting this expression, as in (5), in the constraint w^T w = 1, or λ = (1/n) sqrt( 1_{n*}^T D_y^T X X^T D_y 1_{n*} ). Now the key element is the computation of d_y = D_y 1_{n*}. Note that

    d_y(i) = Σ_{j=1}^n sign(Y_j − Y_i) ≜ r_y(i),    (21)

with r_Y denoting the (centered) ranks of all Y_i in y. This simplifies the expression for w_n to w_n = (1/(λn)) X^T d_y. It is seen that, using kernels as before, the resulting estimator of the order of the responses corresponding to x and x' becomes

    f̂_K(x, x') = sign( m(x) − m(x') ),  where  m(x) = (1/(λn)) Σ_{i=1}^n K(X_i, x) r_Y(i).    (22)
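A minimal sketch of the O(n) scoring function m(·) of (22), assuming an RBF kernel and implementing the centered-rank weights r_y(i) = Σ_j sign(Y_j − Y_i) directly; the normalization 1/(λn) is omitted since it does not affect the induced ordering:

    import numpy as np

    def ordinal_mam_scores(X_train, y_train, X_new, sigma=1.0):
        """Scoring function m(x) of Eq. (22): a Nadaraya-Watson-style sum
        of kernel values weighted by the centered ranks of the responses.
        Predicted order follows from comparing m(x) and m(x')."""
        r = np.array([np.sign(y_train - yi).sum() for yi in y_train])
        d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
        K = np.exp(-d2 / sigma)       # RBF kernel values K(X_i, x)
        return K @ r                  # proportional to m(x)

The centered ranks can also be computed in O(n log n) via a sorting-based rank transform; the O(n^2) loop above is kept only for readability.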
[Figure 2(a): histograms of Kendall's τ (0.5 to 1) over Monte Carlo runs for oMAM, LS-SVM, oSVM, and oGP; see caption below.]
(b)

Data (train/test)          oMAM   LS-SVM   oSVM   oGP
Bank(1) (100/8.092)        0.37   0.43     0.46   0.41
Bank(1) (500/7.629)        0.49   0.51     0.55   0.50
Bank(1) (5.000/3.192)      0.56   0.56     -      -
Bank(1) (7.500/692)        0.57   -        -      -
Bank(2) (100/8.092)        0.81   0.84     0.87   0.80
Bank(2) (500/7.629)        0.83   0.86     0.87   0.81
Bank(2) (5.000/3.192)      0.86   0.88     -      -
Bank(2) (7.500/692)        0.88   -        -      -
Cpu(1) (100/20.540)        0.44   0.62     0.64   0.63
Cpu(1) (500/20.140)        0.50   0.66     0.66   0.65
Cpu(1) (5.000/15.640)      0.57   0.68     -      -
Cpu(1) (7.500/13.140)      0.60   -        -      -
Cpu(1) (15.000/5.640)      0.69   -        -      -
Figure 2: Results on ordinal regression tasks using oMAM (22) of O(n), a regression on the rank-transformed responses using LS-SVMs [16] of O(n^2)–O(n^3), and ordinal SVMs and ordinal Gaussian Processes for preferential learning of O(n^4)–O(n^6). The results are expressed as Kendall's τ (with −1 ≤ τ ≤ 1) computed on the validation datasets. Panel (a) reports the numerical results on the artificially generated data; Table (b) gives the results on a number of large-scale datasets described in [2], whenever the computation took less than 5 minutes.
Remark that the estimator m : R^d → R equals (except for the normalization term) the Nadaraya-Watson kernel estimator based on the rank transform r_Y of the responses. This observation suggests the application of standard regression tools based on the rank-transformed responses, as in [7]. Experiments confirm the use of the proposed ranking estimator, and also motivate the use of more involved function approximation tools, e.g. LS-SVMs [16], based on the rank-transformed responses.
5 Illustrative Example
Table 2.b provides numerical results on the 13 classification benchmark datasets (including 100 randomizations) described in [11]. The choice of an appropriate kernel parameter was obtained by cross-validation over a range of bandwidths from σ = 1e−2 to σ = 1e15. The results illustrate that the Parzen window classifier performs in general slightly (but not significantly) worse than the other methods, but obviously reduces the required amount of memory and computation time (i.e. O(n) versus O(n^2)–O(n^3)). Hence, it is advisable to use the Parzen classifier as a cheap baseline method, or to use it in a context where time or memory requirements are stringent. The first artificial dataset for testing the ordinal regression scheme is constructed as follows. A training set {(X_i, Y_i)}_{i=1}^n ⊂ R^5 × R with n = 100 and a validation set {(X_i^v, Y_i^v)}_{i=1}^{n_v} ⊂ R^5 × R with n_v = 250 are constructed such that Z_i = (w*^T X_i)^3 + e_i and Z_i^v = (w*^T X_i^v)^3 + e_i^v with w* ∼ N(0, 1), X, X^v ∼ N(0, I_5), and e, e^v ∼ N(0, 0.25). Now Y (and Y^v) are generated preserving the order implied by {Z_i}_{i=1}^{100} (and {Z_i^v}_{i=1}^{250}), with the intervals χ²-distributed with 5 degrees of freedom. Figure 2.a shows the results of a Monte Carlo experiment comparing the proposed O(n) estimator (22), an LS-SVM regressor of O(n^2)–O(n^3) on the rank-transformed responses {(X_i, r_Y(i))}, the O(n^4)–O(n^6) SVM approach proposed in [3], and the Gaussian Process approach of O(n^4)–O(n^6) given in [2]. The performance of the different algorithms is expressed in terms of Kendall's τ computed on the validation data. Table 2.b reports the results on some large-scale datasets described in [2], imposing a maximal computation time of 5 minutes. Both tests suggest the competitive nature of the proposed O(n) procedure, while clearly showing the benefit of using function estimation (e.g. LS-SVMs) based on the rank-transformed responses.
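For completeness, a simple O(n^2) sketch of the evaluation criterion, Kendall's τ between true labels and predicted scores (pairs tied in the true labels are skipped here, which is one of several possible conventions):

    import numpy as np

    def kendall_tau(y_true, y_pred_scores):
        """(concordant - discordant) / comparable pairs; in [-1, 1]."""
        n = len(y_true)
        num, den = 0.0, 0.0
        for i in range(n):
            for j in range(i + 1, n):
                s = np.sign(y_true[i] - y_true[j])
                if s != 0:
                    num += s * np.sign(y_pred_scores[i] - y_pred_scores[j])
                    den += 1
        return num / den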
6 Conclusion
This paper discussed the use of the MAM risk optimality principle for designing learning machines for classification and ordinal regression. The relation with classical methods, including Parzen windows and Nadaraya-Watson estimators, is established, while the relation with the empirical Rademacher complexity is used to provide a measure of 'certainty of prediction'. Empirical experiments show the applicability of the O(n) algorithms on real-world problems, trading some performance for computational efficiency with respect to state-of-the-art learning algorithms.
References
[1] P.L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463-482, 2002.
[2] W. Chu and Z. Ghahramani. Gaussian processes for ordinal regression. Journal of Machine Learning Research, 6:1019-1041, 2006.
[3] W. Chu and S. S. Keerthi. New approaches to support vector ordinal regression. In Proc. of the International Conference on Machine Learning, pages 145-152, 2005.
[4] L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer-Verlag, 1996.
[5] A. Garg and D. Roth. Margin distribution and learning algorithms. In Proceedings of the Fifteenth International Conference on Machine Learning (ICML), pages 210-217. Morgan Kaufmann Publishers, 2003.
[6] R. Herbrich, T. Graepel, and K. Obermayer. Large margin rank boundaries for ordinal regression. Advances in Large Margin Classifiers, pages 115-132, 2000. MIT Press, Cambridge, MA.
[7] R.L. Iman and W.J. Conover. The use of the rank transform in regression. Technometrics, 21(4):499-509, 1979.
[8] V. Koltchinski. Rademacher penalties and structural risk minimization. IEEE Transactions on Information Theory, 47(5):1902-1914, 1999.
[9] K. Pelckmans. Primal-Dual Kernel Machines. PhD thesis, Faculty of Engineering, K.U.Leuven, May 2005. 280 p., TR 05-95.
[10] K. Pelckmans, J. Shawe-Taylor, J.A.K. Suykens, and B. De Moor. Margin based transductive graph cuts using linear programming. In Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics (AISTATS 2007), pp. 360-367, San Juan, Puerto Rico, 2007.
[11] G. Rätsch, T. Onoda, and K.-R. Müller. Soft margins for AdaBoost. Machine Learning, 42(3):287-320, 2001.
[12] B. Schölkopf and A. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
[13] J. Shawe-Taylor and N. Cristianini. Further results on the margin distribution. In Proceedings of the Twelfth Annual Conference on Computational Learning Theory (COLT), pages 278-285. ACM Press, 1999.
[14] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
[15] D.F. Specht. Probabilistic neural networks. Neural Networks, 3:110-118, 1990.
[16] J.A.K. Suykens, T. van Gestel, J. De Brabanter, B. De Moor, and J. Vandewalle. Least Squares Support Vector Machines. World Scientific, Singapore, 2002.
| 3243 |@word [bag-of-words feature vector for this row omitted]
2,474 | 3,244 | Optimal models of sound localization by barn owls
Brian J. Fischer
Division of Biology
California Institute of Technology
Pasadena, CA
[email protected]
Abstract
Sound localization by barn owls is commonly modeled as a matching procedure
where localization cues derived from auditory inputs are compared to stored templates. While the matching models can explain properties of neural responses, no
model explains how the owl resolves spatial ambiguity in the localization cues to
produce accurate localization for sources near the center of gaze. Here, I examine two models for the barn owl's sound localization behavior. First, I consider a maximum likelihood estimator in order to further evaluate the cue matching model. Second, I consider a maximum a posteriori estimator to test whether a Bayesian model with a prior that emphasizes directions near the center of gaze can reproduce the owl's localization behavior. I show that the maximum likelihood estimator cannot reproduce the owl's behavior, while the maximum a posteriori estimator is able to match the behavior. This result suggests that the standard cue matching model will not be sufficient to explain sound localization behavior in the barn owl. The Bayesian model provides a new framework for analyzing sound localization in the barn owl and leads to predictions about the owl's localization behavior.
1 Introduction
Barn owls, the champions of sound localization, show systematic errors when localizing sounds.
Owls localize broadband noise signals with great accuracy for source directions near the center of
gaze [1]. However, localization errors increase as source directions move to the periphery, consistent
with an underestimate of the source direction [1]. Behavioral experiments show that the barn owl
uses the interaural time difference (ITD) for localization in the horizontal dimension and the interaural level difference (ILD) for localization in the vertical dimension [2]. Direct measurements of the
sounds received at the ears for sources at different locations in space show that disparate directions
are associated with very similar localization cues. Specifically, there is a similarity between ILD
and ITD cues for directions near the center of gaze and directions with eccentric elevations on the
vertical plane. How does the owl resolve this ambiguity in the localization cues to produce accurate
localization for sound sources near the center of gaze?
Theories regarding the use of localization cues by the barn owl are drawn from the extensive knowledge of processing in the barn owl's auditory system. Neurophysiological and anatomical studies show that the barn owl's auditory system contains specialized circuitry that is devoted to extracting
spectral ILD and ITD cues and processing them to derive source direction information [2]. It has
been suggested that a spectral matching operation between ILD and ITD cues computed from auditory inputs and preferred ILD and ITD spectra associated with spatially selective auditory neurons
underlies the derivation of spatial information from the auditory cues [3-6]. The spectral matching
models reproduce aspects of neural responses, but none reproduces the sound localization behavior
of the barn owl. In particular, the spectral matching models do not describe how the owl resolves ambiguities in the localization cues. In addition to spectral matching of localization cues, it is possible
that the owl incorporates prior experience or beliefs into the process of deriving direction estimates
from the auditory input signals. These two approaches to sound localization can be formalized using
the language of estimation theory as maximum likelihood (ML) and Bayesian solutions, respectively.
Here, I examine two models for the barn owl's sound localization behavior in order to further evaluate the spectral matching model and to test whether a Bayesian model with a prior that emphasizes directions near the center of gaze can reproduce the owl's localization behavior. I begin by viewing
the sound localization problem as a statistical estimation problem. Maximum likelihood and maximum a posteriori (MAP) solutions to the estimation problem are compared with the localization
behavior of a barn owl in a head turning task.
2 Observation model
To define the localization problem, we must specify an observation model that describes the information the owl uses to produce a direction estimate. Neurophysiological and behavioral experiments
suggest that the barn owl derives direction estimates from ILD and ITD cues that are computed at an
array of frequencies [2, 7, 8]. Note that when computed as a function of frequency, the ITD is given
by an interaural phase difference (IPD).
Here I consider a model where the observation made by the owl is given by the ILD and IPD spectra
derived from barn owl head-related transfer functions (HRTFs) after corruption with additive noise.
For a source direction (θ, φ), the observation vector r is expressed mathematically as

    r = [ r_ILD ; r_IPD ] = [ ILD_{θ,φ} ; IPD_{θ,φ} ] + [ η_ILD ; η_IPD ],    (1)
where the ILD spectrum ILD_{θ,φ} = [ILD_{θ,φ}(ω_1), ILD_{θ,φ}(ω_2), ..., ILD_{θ,φ}(ω_{N_f})] and the IPD spectrum IPD_{θ,φ} = [IPD_{θ,φ}(ω_1), IPD_{θ,φ}(ω_2), ..., IPD_{θ,φ}(ω_{N_f})] are specified at a finite number of frequencies. The ILD and IPD cues are computed directly from the HRTFs as
    ILD_{θ,φ}(ω) = 20 log10 ( |ĥ_{R(θ,φ)}(ω)| / |ĥ_{L(θ,φ)}(ω)| )    (2)

and

    IPD_{θ,φ}(ω) = ψ_{R(θ,φ)}(ω) − ψ_{L(θ,φ)}(ω),    (3)

where the left and right HRTFs are written as ĥ_{L(θ,φ)}(ω) = |ĥ_{L(θ,φ)}(ω)| e^{iψ_{L(θ,φ)}(ω)} and ĥ_{R(θ,φ)}(ω) = |ĥ_{R(θ,φ)}(ω)| e^{iψ_{R(θ,φ)}(ω)}, respectively.
The noise corrupting the ILD spectrum is modeled as a Gaussian random vector with independent and identically distributed (i.i.d.) components, η_ILD(ω_j) ∼ N(0, σ). The IPD spectrum noise vector is assumed to have i.i.d. components, where each element has a von Mises distribution with parameter κ. The von Mises distribution can be viewed as a 2π-periodic Gaussian distribution for large κ and is a uniform distribution for κ = 0 [9]. I assume that the ILD and IPD noise terms are mutually independent.
With this noise model, the likelihood function has the form

    p_{r|θ,φ}(r | θ, φ) = p_{r_ILD|θ,φ}(r_ILD | θ, φ) p_{r_IPD|θ,φ}(r_IPD | θ, φ),    (4)

where the ILD likelihood function is given by

    p_{r_ILD|θ,φ}(r_ILD | θ, φ) = (1 / (2πσ²)^{N_f/2}) exp[ −(1/(2σ²)) Σ_{j=1}^{N_f} ( r_ILD(ω_j) − ILD_{θ,φ}(ω_j) )² ]    (5)

and the IPD likelihood function is given by

    p_{r_IPD|θ,φ}(r_IPD | θ, φ) = (1 / (2π I_0(κ))^{N_f}) exp[ κ Σ_{j=1}^{N_f} cos( r_IPD(ω_j) − IPD_{θ,φ}(ω_j) ) ],    (6)

where I_0(κ) is a modified Bessel function of the first kind of order 0. The likelihood function will have peaks at directions where the expected spectral cues ILD_{θ,φ} and IPD_{θ,φ} are near the observed values r_ILD and r_IPD.
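A minimal sketch of the resulting log-likelihood, combining the Gaussian term (5) and the von Mises term (6) for one candidate direction (array and argument names are assumptions made here, not taken from the paper):

    import numpy as np
    from scipy.special import i0  # modified Bessel function of order 0

    def log_likelihood(r_ild, r_ipd, ild_model, ipd_model, sigma, kappa):
        """Log of Eqs. (4)-(6): Gaussian likelihood of the observed ILD
        spectrum and von Mises likelihood of the observed IPD spectrum,
        given the HRTF-derived template spectra for one direction.
        All arrays have one entry per frequency omega_j."""
        n_f = len(r_ild)
        ll_ild = (-0.5 * np.sum((r_ild - ild_model) ** 2) / sigma ** 2
                  - 0.5 * n_f * np.log(2 * np.pi * sigma ** 2))
        ll_ipd = (kappa * np.sum(np.cos(r_ipd - ipd_model))
                  - n_f * np.log(2 * np.pi * i0(kappa)))
        return ll_ild + ll_ipd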
3 Model performance measure
I evaluate maximum likelihood and maximum a posteriori methods for estimating the source direction from the observed ILD and IPD cues by computing an expected localization error and comparing the results to an owl's behavior. The performance of each estimation procedure at a given source direction is quantified by the expected absolute angular error E[ |θ̂(r) − θ| + |φ̂(r) − φ| | θ, φ ]. This measure of estimation error is directly compared to the behavioral performance of a barn owl in a head turning localization task [1]. The expected absolute angular error is approximated through Monte Carlo simulation as

    E[ |θ̂(r) − θ| + |φ̂(r) − φ| | θ, φ ] ≈ μ( {|θ̂(r_i) − θ|}_{i=1}^N ) + μ( {|φ̂(r_i) − φ|}_{i=1}^N ),    (7)

where the r_i are drawn from p_{r|θ,φ}(r | θ, φ) and μ({α_i}_{i=1}^N) is the circular mean of the angles {α_i}_{i=1}^N. The error is computed using HRTFs for two barn owls [10] and is calculated for directions in the frontal hemisphere with 5° increments in azimuth and elevation, as defined using double polar coordinates.
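A hedged sketch of this Monte Carlo evaluation, with `sample_obs` and `estimate` standing in for the observation model and for either estimator (both names are illustrative):

    import numpy as np

    def circular_mean(angles_deg):
        """Mean direction of a set of angles, in degrees."""
        a = np.deg2rad(angles_deg)
        return np.rad2deg(np.arctan2(np.sin(a).mean(), np.cos(a).mean()))

    def expected_angular_error(theta, phi, sample_obs, estimate, n_mc=500):
        """Monte Carlo approximation of Eq. (7). `sample_obs(theta, phi)`
        draws one noisy observation r; `estimate(r)` returns the pair
        (theta_hat, phi_hat)."""
        az_err, el_err = [], []
        for _ in range(n_mc):
            r = sample_obs(theta, phi)
            th_hat, ph_hat = estimate(r)
            az_err.append(abs(th_hat - theta))
            el_err.append(abs(ph_hat - phi))
        return circular_mean(az_err) + circular_mean(el_err)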
4 Maximum likelihood estimate
The maximum likelihood direction estimate is derived from the observed noisy ILD and IPD cues by finding the source direction that maximizes the likelihood function, yielding

    ( θ̂_ML(r), φ̂_ML(r) ) = arg max_{(θ,φ)} p_{r|θ,φ}(r | θ, φ).    (8)

This procedure amounts to a spectral cue matching operation. Each direction in space is associated with a particular ILD and IPD spectrum, as derived from the HRTFs. The direction whose associated cues are closest to the observed cues is designated as the estimate. This estimator is of particular interest because of the claim that salience in the neural map of auditory space in the barn owl can be described by a spectral cue matching operation [3, 4, 6].
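A minimal grid-search sketch of (8), reusing the `log_likelihood` function sketched in Section 2 and assuming the HRTF-derived templates are stored per candidate direction (the dictionary layout is an assumption made here):

    import numpy as np

    def ml_estimate(r_ild, r_ipd, hrtf_ild, hrtf_ipd, sigma, kappa):
        """Grid-search ML estimate of Eq. (8). `hrtf_ild`/`hrtf_ipd` map
        each candidate direction (theta, phi) to its template spectra."""
        best, best_ll = None, -np.inf
        for (theta, phi), ild_model in hrtf_ild.items():
            ll = log_likelihood(r_ild, r_ipd, ild_model,
                                hrtf_ipd[(theta, phi)], sigma, kappa)
            if ll > best_ll:
                best, best_ll = (theta, phi), ll
        return best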
The maximum likelihood estimator was unable to reproduce the owl's localization behavior. The performance of the maximum likelihood estimator depends on the two likelihood function parameters σ and κ, which determine the ILD and IPD noise variances, respectively. For noise variances large enough that the error increased at peripheral directions, in accordance with the barn owl's behavior, the error also increased significantly for directions near the center of the interaural coordinate system (Figure 1). This pattern of error as a function of eccentricity, with a large central peak, is not consistent with the performance of the owl in the head turning task [1]. Additionally, directions near the center of gaze were often confused with directions in the periphery, leading to a high variability in the direction estimates, which is not seen in the owl's behavior.
5 Maximum a posteriori estimate
In the Bayesian framework, the direction estimate depends on both the likelihood function and the prior distribution over source directions through the posterior distribution. Using Bayes' rule, the posterior density is proportional to the product of the likelihood function and the prior,

    p_{θ,φ|r}(θ, φ | r) ∝ p_{r|θ,φ}(r | θ, φ) p_{θ,φ}(θ, φ).    (9)

The prior distribution is used to summarize the owl's belief about the most likely source directions before an observation of ILD and IPD cues is made. Based on the barn owl's tendency to underestimate source directions [1], I use a prior that emphasizes directions near the center of gaze. The prior is given by a product of two one-dimensional von Mises distributions, yielding the probability density function

    p_{θ,φ}(θ, φ) = exp[ κ_1 cos(θ) + κ_2 cos(φ) ] / ( (2π)² I_0(κ_1) I_0(κ_2) ),    (10)

where I_0(·) is a modified Bessel function of the first kind of order 0. The maximum a posteriori source direction estimate is computed for a given observation by finding the source direction that maximizes the posterior density, yielding

    ( θ̂_MAP(r), φ̂_MAP(r) ) = arg max_{(θ,φ)} p_{θ,φ|r}(θ, φ | r).    (11)
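A sketch of the corresponding MAP search, adding the log of the von Mises prior (10) to the log-likelihood sketched earlier; with κ_1 = κ_2 = 0 the prior is uniform and the ML estimator of (8) is recovered (angles are assumed to be in degrees here):

    import numpy as np
    from scipy.special import i0

    def map_estimate(r_ild, r_ipd, hrtf_ild, hrtf_ipd,
                     sigma, kappa, kappa1, kappa2):
        """Grid-search MAP estimate of Eq. (11)."""
        best, best_lp = None, -np.inf
        for (theta, phi), ild_model in hrtf_ild.items():
            log_prior = (kappa1 * np.cos(np.deg2rad(theta))
                         + kappa2 * np.cos(np.deg2rad(phi))
                         - np.log((2 * np.pi) ** 2 * i0(kappa1) * i0(kappa2)))
            lp = log_prior + log_likelihood(r_ild, r_ipd, ild_model,
                                            hrtf_ipd[(theta, phi)],
                                            sigma, kappa)
            if lp > best_lp:
                best, best_lp = (theta, phi), lp
        return best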
Figure 1: Estimation error in the model for the maximum likelihood (ML) and maximum a posteriori
(MAP) estimates. HRTFs were used from owls 884 (top) and 880 (bottom). Left column: Estimation
error at 685 locations in the frontal hemisphere plotted in double polar coordinates. Center column:
Estimation error on the horizontal plane along with the estimation error of a barn owl in a head
turning task [1]. Right column: Estimation error on the vertical plane along with the estimation
error of a barn owl in a head turning task. Note that each plot uses a unique scale.
Figure 2: Estimates for the MAP estimator on the horizontal plane (left) and the vertical plane (right)
using HRTFs from owl 880. The box extends from the lower quartile to the upper quartile of the
sample. The solid line is the identity line. Like the owl, the MAP estimator underestimates the
source direction.
In the MAP case, the estimate depends on spectral matching of observations with expected cues for
each direction, but with a penalty on the selection of peripheral directions.
It was possible to find a MAP estimator that was consistent with the owl's localization behavior (Figures 1, 2). For the example MAP estimators shown in Figures 1 and 2, the error was smallest in the central region of space and increased at the periphery. The largest errors occurred at the vertical extremes. This pattern of error qualitatively matches the pattern of error displayed by the owl in a head turning localization task [1].
The parameters that produced a behaviorally consistent MAP estimator correspond to a likelihood and prior with large variances. For the estimators shown in Figure 1, the likelihood function parameters were given by σ = 11.5 dB and κ = 0.75 for owl 880 and σ = 10.75 dB and κ = 0.8 for owl 884. For comparison, the range of ILD values normally experienced by the barn owl falls between ±30 dB [10]. The prior parameters correspond to an azimuthal width parameter κ_1 of 0.25 for owl 880 and 0.2 for owl 884, and an elevational width parameter κ_2 of 0.25 for owl 880 and 0.18 for owl 884.
The implication of this model for implementation in the owl's auditory system is that the spectral localization cues ILD and IPD need not be computed with great accuracy, and the emphasis on central directions need not be large, in order to produce the barn owl's behavior.
6 Discussion
6.1 A new approach to modeling sound localization in the barn owl
The simulation results show that the maximum likelihood model considered here can not reproduce
the owl's behavior, while the maximum a posteriori solution is able to match the behavior. This
result suggests that the standard spectral matching model will not be sufficient to explain sound localization behavior in the barn owl. Previously, suggestions have been made that sound localization
by the barn owl can be described using the Bayesian framework [11, 12], but no specific models
have been proposed. This paper demonstrates that a Bayesian model can qualitatively match the
owl's localization behavior. The Bayesian approach described here provides a new framework for
analyzing sound localization in the owl.
6.2 Failure of the maximum likelihood model
The maximum likelihood model fails because of the nature of spatial ambiguity in the ILD and
IPD cues. The existence of spatial ambiguity has been noted in previous descriptions of barn owl
HRTFs [3, 10, 13]. As expected, directions near each other have similar cues. In addition to similarity of cues between proximal directions, distant directions can have similar ILD and IPD cues.
Most significantly, there is a similarity between the ILD and IPD cues at the center of gaze and at
peripheral directions on the vertical plane. The consequence of such ambiguity between distant directions is that noise in measuring localization cues can lead to large errors in direction estimation,
as seen in the ML estimate. The results of the simulations suggest that a behaviorally accurate solution to the sound localization problem must include a mechanism that chooses between disparate
directions which are associated with similar localization cues in such a way as to limit errors for
source directions near the center of gaze. This work shows that a possible mechanism for choosing
between such directions is to incorporate a bias towards directions at the center of gaze through a
prior distribution and utilize the Bayesian estimation framework. The use of a prior that emphasizes
directions near the center of gaze is similar to the use of central weighting functions in models of
human lateralization [14].
6.3 Predictions of the Bayesian model
The MAP estimator predicts the underestimation of peripheral source directions on the horizontal
and vertical planes (Figure 2). The pattern of error displayed by the MAP estimator qualitatively
matches the owl's behavioral performance by showing increasing error as a function of eccentricity.
Our evaluation of the model performance is limited, however, because there is little behavioral data
for directions outside ±70 deg [15, 16]. For the owl whose performance is displayed in Figure 1, the
largest errors on the vertical and horizontal planes were less than 20 deg and 11 deg, respectively.
The model produces much larger errors for directions beyond 70 deg, especially on the vertical plane.
The large errors in elevation result from the ambiguity in the localization cues on the vertical plane
and the shape of the prior distribution. As discussed above, for broadband noise stimuli, there is a
similarity between the ILD and IPD cues for central and peripheral directions on the vertical plane
[3, 10, 13]. The presence of a prior distribution that emphasizes central directions causes direction
estimates for both central and peripheral directions to be concentrated near zero deg. Therefore,
estimation errors are minimal for sources at the center of gaze, but approach the magnitude of the
source direction for peripheral source directions. Behavioral data shows that localization accuracy
is the greatest near the center of gaze [1], but there is no data for localization performance at the
most eccentric directions on the vertical plane. Further behavioral experiments must be performed
to determine if the owl's error increases greatly at the most peripheral directions.
There is a significant spatial ambiguity in the localization cues when target sounds are narrowband. It is well known that spatial ambiguity arises from the way that interaural time differences are
processed at each frequency [17–19]. The owl measures the interaural time difference for each frequency of the input sound as an interaural phase difference. Therefore, multiple directions in space
that differ in their associated interaural time difference by the period of a tone at that frequency are
consistent with the same interaural phase difference and can not be distinguished. Behavioral experiments show that the owl may localize a phantom source in the horizontal dimension when the signal
is a tone [20]. Based on the presence of a prior that emphasizes directions near the center of gaze,
I predict that for low frequency tones where phase equivalent directions lie near the center of gaze
and at directions greater than 80 deg, confusion will always lead to an estimate of a source direction
near zero degrees. This prediction can not be evaluated from available data because localization of
tonal signals has only been systematically studied using 5 kHz tones with target directions at ±20
deg [19]. Because the prior is broad, the target direction of ±20 deg and the phantom direction of
∓50 deg may both be considered central.
The ILD cue also displays a significant ambiguity at high frequencies. At frequencies above 7 kHz,
the ILD is non-monotonically related to the vertical position of a sound source [3, 10] (Figure 3).
Therefore, for narrowband sounds, the owl can not uniquely determine the direction of a sound
source from the ITD and ILD cues. I predict that for tonal signals above 7 kHz, there will be
multiple directions on the vertical plane that are confused with directions near zero deg. I predict
that confusion between source directions near zero deg and eccentric directions will always lead to
estimates of directions near zero deg. There is no available data to evaluate this prediction.
6
Figure 3: Model predictions for localization of tones on the vertical plane. (A) ILD as a function of
elevation at 8 kHz, computed from HRTFs of owl 880 recorded by Keller et al. (1998). (B) Given
an ILD of 0 dB, a likelihood function (dots) based on matching cues to expected values would be
multimodal with three equal peaks. If the target is at any of the three directions, there will be large
localization errors because of confusion with the other directions. If a prior emphasizing frontal
space (dashed) is included, a posterior density equal to the product of the likelihood and the prior
would have a main peak at 0 deg elevation. Using a maximum a posteriori estimate, large errors
would be made if the target is above or below. However, few errors would be observed when the
target is near 0 deg.
6.4 Testing the Bayesian model
Further head turning localization experiments with barn owls must be performed to test predictions
generated by the Bayesian hypothesis and to provide constraints on a model of sound localization.
Experiments should test the localization accuracy of the owl for broadband noise sources and tonal
signals at directions covering the frontal hemisphere. The Bayesian model will be supported if, first,
localization accuracy is high for both tonal and broadband noise sources near the center of gaze
and, second, peripherally located sources are confused for targets near the center of gaze, leading
to large localization errors. Additionally, a Bayesian model should be fit to the data, including
points away from the horizontal and vertical planes, using a nonparametric prior [21, 22]. While the
model presented here, using a von Mises prior, qualitatively matches the performance of the owl, the
performance of the Bayesian model may be improved by removing assumptions about the structure
of the prior distribution.
6.5 Implications for neural processing
The analysis presented here does not directly address the neural implementation of the solution
to the localization problem. However, our abstract analysis of the sound localization problem has
implications for neural processing. Several models exist that reproduce the basic properties of ILD,
ITD, and space selectivity in ICx and OT neurons using a spectral matching procedure [3, 5, 6].
These results suggest that a Bayesian model is not necessary to describe the responses of individual
ICx and OT neurons. It may be necessary to look in the brainstem motor targets of the optic tectum
to find neurons that resolve the ambiguity present in sound stimuli and show responses that reflect
the MAP solution. This implies that the prior distribution is not employed until the final stage of
processing. The prior may correspond to the distribution of best directions of space-specific neurons
in ICx and OT, which emphasizes directions near the center of gaze [23].
6.6 Conclusion
This analysis supports the Bayesian model of the barn owl's solution to the localization problem
over the maximum likelihood model. This result suggests that the standard spectral matching model
will not be sufficient to explain sound localization behavior in the barn owl. The Bayesian model
provides a new framework for analyzing sound localization in the owl. The simulation results using
the MAP estimator lead to testable predictions that can be used to evaluate the Bayesian model of
sound localization in the barn owl.
Acknowledgments
I thank Kip Keller, Klaus Hartung, and Terry Takahashi for providing the head-related transfer
functions and Mark Konishi and José Luis Peña for comments and support.
References
[1] E.I. Knudsen, G.G. Blasdel, and M. Konishi. Sound localization by the barn owl (Tyto alba) measured
with the search coil technique. J. Comp. Physiol., 133:1–11, 1979.
[2] M. Konishi. Coding of auditory space. Annu. Rev. Neurosci., 26:31–55, 2003.
[3] M.S. Brainard, E.I. Knudsen, and S.D. Esterly. Neural derivation of sound source location: Resolution of
spatial ambiguities in binaural cues. J. Acoust. Soc. Am., 91(2):1015–1027, 1992.
[4] B.J. Arthur. Neural computations leading to space-specific auditory responses in the barn owl. Ph.D.
thesis, Caltech, 2001.
[5] B.J. Fischer. A model of the computations leading to a representation of auditory space in the midbrain
of the barn owl. D.Sc. thesis, Washington University in St. Louis, 2005.
[6] C.H. Keller and T.T. Takahashi. Localization and identification of concurrent sounds in the owl's auditory
space map. J. Neurosci., 25:10446–10461, 2005.
[7] I. Poganiatz and H. Wagner. Sound-localization experiments with barn owls in virtual space: influence of
broadband interaural level difference on head-turning behavior. J. Comp. Physiol. A, 187:225–233, 2001.
[8] D.R. Euston and T.T. Takahashi. From spectrum to space: The contribution of level difference cues to
spatial receptive fields in the barn owl inferior colliculus. J. Neurosci., 22(1):284–293, Jan. 2002.
[9] M. Evans, N. Hastings, and B. Peacock. von Mises Distribution. In Statistical Distributions, 3rd ed., pages
189–191. Wiley, New York, 2000.
[10] C.H. Keller, K. Hartung, and T.T. Takahashi. Head-related transfer functions of the barn owl: measurement and neural responses. Hearing Research, 118:13–34, 1998.
[11] R.O. Duda. Elevation dependence of the interaural transfer function, chapter 3 in Binaural and Spatial
Hearing in Real and Virtual Environments, pages 49–75. New Jersey: Lawrence Erlbaum Associates,
1997.
[12] I.B. Witten and E.I. Knudsen. Why seeing is believing: Merging auditory and visual worlds. Neuron,
48:489–496, 2005.
[13] J.F. Olsen, E.I. Knudsen, and S.D. Esterly. Neural maps of interaural time and intensity differences in the
optic tectum of the barn owl. J. Neurosci., 9:2591–2605, 1989.
[14] R.M. Stern and H.S. Colburn. Theory of binaural interaction based on auditory-nerve data. IV. A model
for subjective lateral position. J. Acoust. Soc. Am., 64:127–140, 1978.
[15] H. Wagner. Sound-localization deficits induced by lesions in the barn owl's auditory space map. J.
Neurosci., 13:371–386, 1993.
[16] I. Poganiatz, I. Nelken, and H. Wagner. Sound-localization experiments with barn owls in virtual space:
influence of interaural time difference on head-turning behavior. J. Assoc. Res. Otolaryngol., 2:1–21, 2001.
[17] T. Takahashi and M. Konishi. Selectivity for interaural time difference in the owl's midbrain. J. Neurosci.,
6(12):3413–3422, 1986.
[18] J.A. Mazer. How the owl resolves auditory coding ambiguity. Proc. Natl. Acad. Sci. USA, 95:10932–10937, 1998.
[19] K. Saberi, Y. Takahashi, H. Farahbod, and M. Konishi. Neural bases of an auditory illusion and its
elimination in owls. Nature Neurosci., 2(7):656–659, 1999.
[20] E.I. Knudsen and M. Konishi. Mechanisms of sound localization in the barn owl (Tyto alba) measured
with the search coil technique. J. Comp. Phys. A, (133):13–21, 1979.
[21] Liam Paninski. Nonparametric inference of prior probabilities from Bayes-optimal behavior. In Y. Weiss,
B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18, pages 1067–1074. MIT Press, Cambridge, MA, 2006.
[22] A.A. Stocker and E.P. Simoncelli. Noise characteristics and prior expectations in human visual speed
perception. Nature Neurosci., 9(4):578–585, 2006.
[23] E.I. Knudsen and M. Konishi. A neural map of auditory space in the owl. Science, 200:795–797, 1978.
Learning Monotonic Transformations for
Classification
Andrew G. Howard
Department of Computer Science
Columbia University
New York, NY 10027
[email protected]
Tony Jebara
Department of Computer Science
Columbia University
New York, NY 10027
[email protected]
Abstract
A discriminative method is proposed for learning monotonic transformations of the training data while jointly estimating a large-margin classifier.
In many domains such as document classification, image histogram classification and gene microarray experiments, fixed monotonic transformations
can be useful as a preprocessing step. However, most classifiers only explore
these transformations through manual trial and error or via prior domain
knowledge. The proposed method learns monotonic transformations automatically while training a large-margin classifier without any prior knowledge of the domain. A monotonic piecewise linear function is learned which
transforms data for subsequent processing by a linear hyperplane classifier.
Two algorithmic implementations of the method are formalized. The first
solves a convergent alternating sequence of quadratic and linear programs
until it obtains a locally optimal solution. An improved algorithm is then
derived using a convex semidefinite relaxation that overcomes initialization issues in the greedy optimization problem. The effectiveness of these
learned transformations on synthetic problems, text data and image data
is demonstrated.
1 Introduction
Many fields have developed heuristic methods for preprocessing data to improve performance. This often takes the form of applying a monotonic transformation prior to using
a classification algorithm. For example, when the bag of words representation is used in
document classification, it is common to take the square root of the term frequency [6, 5].
Monotonic transforms are also used when classifying image histograms. In [3], transformations of the form x^a where 0 ≤ a ≤ 1 are demonstrated to improve performance. When
classifying genes from various microarray experiments it is common to take the logarithm of
the gene expression ratio [2]. Monotonic transformations can also capture crucial properties
of the data such as threshold and saturation effects.
In this paper, we propose to simultaneously learn a hyperplane classifier and a monotonic
transformation. The solution produced by our algorithm is a piecewise linear monotonic
function and a maximum margin hyperplane classifier similar to a support vector machine
(SVM) [4]. By allowing for a richer class of transforms learned at training time (as opposed
to a rule of thumb applied during preprocessing), we improve classification accuracy. The
learned transform is specifically tuned to the classification task. The main contributions
of this paper include a novel framework for estimating a monotonic transformation and
a hyperplane classifier simultaneously at training time, an efficient method for finding a
Figure 1: Monotonic transform applied to each dimension followed by a hyperplane classifier.
locally optimal solution to the problem, and a convex relaxation to find a globally optimal
approximate solution.
The paper is organized as follows. In section 2, we present our formulation for learning a
piecewise linear monotonic function and a hyperplane. We show how to learn this combined
model through an iterative coordinate ascent optimization using interleaved quadratic and
linear programs to find a local minimum. In section 3, we derive a convex relaxation based
on Lasserre's method [8]. In section 4, synthetic experiments as well as document and image
classification problems demonstrate the diverse utility of our method. We conclude with a
discussion and future work.
2 Learning Monotonic Transformations
For an unknown distribution P(x, y) over inputs x ∈ ℝ^d and labels y ∈ {−1, 1}, we assume
that there is an unknown nuisance monotonic transformation Φ(x) and an unknown hyperplane
parameterized by w and b such that predicting with f(x) = sign(w^T Φ(x) + b) yields a low
expected test error R = ½ ∫ |y − f(x)| dP(x, y). We would like to recover Φ(x), w, and b from a
labeled training set S = {(x_1, y_1), . . . , (x_N, y_N)} which is sampled i.i.d. from P(x, y). The
transformation acts elementwise as can be seen in Figure 1.
We propose to learn both a maximum margin hyperplane and the unknown transform Φ(x)
simultaneously. In our formulation, Φ(x) is a piecewise linear function that we parameterize
with a set of K knots {z_1, . . . , z_K} and associated positive weights {m_1, . . . , m_K} where
z_j ∈ ℝ and m_j ∈ ℝ_+. The transformation can be written as Φ(x) = Σ_{j=1}^K m_j φ_j(x) where
φ_j(x) are truncated ramp functions acting on vectors and matrices elementwise as follows:
         ⎧ 0                           x ≤ z_j
φ_j(x) = ⎨ (x − z_j)/(z_{j+1} − z_j)   z_j < x < z_{j+1}    (1)
         ⎩ 1                           z_{j+1} ≤ x
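As a concrete sketch of Eq. (1) and the resulting transform, the following Python fragment (function names and the use of K+1 breakpoints are our own choices) implements the truncated ramps and places the knots at empirical quantiles, as the text below describes.

```python
import numpy as np

def truncated_ramp(x, z, j):
    """phi_j of Eq. (1): 0 below knot z[j], linear up to z[j+1], then 1."""
    return np.clip((x - z[j]) / (z[j + 1] - z[j]), 0.0, 1.0)

def monotone_transform(x, z, m):
    """Phi(x) = sum_j m_j * phi_j(x), applied elementwise to x."""
    return sum(m[j] * truncated_ramp(x, z, j) for j in range(len(m)))

# Knots at empirical quantiles so they are evenly spaced in the data.
data = np.random.rand(1000)
K = 5
z = np.quantile(data, np.linspace(0.0, 1.0, K + 1))  # K ramps, K+1 breakpoints
m = np.full(K, 1.0 / K)   # m_j >= 0 and sum_j m_j <= 1; roughly linear here
transformed = monotone_transform(data, z, m)
```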
This is a less common way to parameterize piecewise linear functions. The positivity constraints enforce monotonicity on Φ(x) for all x. A more common method is to parameterize
the function value Φ(z) at each knot z and apply order constraints between subsequent knots
to enforce monotonicity. Values in between knots are found through linear interpolation.
This is the method used in isotonic regression [10], but in practice, these are equivalent
formulations. Using truncated ramp functions is preferable for numerous reasons. They can
be easily precomputed and are sparse. Once precomputed, most calculations can be done
via sparse matrix multiplications. The positivity constraints on the weights m will also yield
a simpler formulation than order constraints and interpolation which becomes important in
subsequent relaxation steps.
Figure 2a shows the truncated ramp function associated with knot z_1. Figure 2b shows
a conic combination of truncated ramps that builds a piecewise linear monotonic function.
Combining this with the support vector machine formulation leads us to the following learning problem:
Figure 2: Building blocks for piecewise linear functions. a) Truncated ramp function φ_1(x). b) Φ(x) = Σ_{j=1}^5 m_j φ_j(x).
min_{w, ξ, b, m}   ||w||_2^2 + C Σ_{i=1}^N ξ_i    (2)
subject to   y_i ( ⟨ w, Σ_{j=1}^K m_j φ_j(x_i) ⟩ + b ) ≥ 1 − ξ_i,   ξ_i ≥ 0,   m_j ≥ 0,   Σ_j m_j ≤ 1   ∀ i, j
where ξ are the standard SVM slack variables, and w and b are the maximum margin solution
for the training set that has been transformed via Φ(x) with learned weights m. Before
training, the knot locations are chosen at the empirical quantiles so that they are evenly
spaced in the data.
This problem is nonconvex due to the quadratic term involving w and m in the classification
constraints. Although it is difficult to find a globally optimal solution, the structure of the
problem suggests a simple method for finding a locally optimal solution. We can divide the
problem into two convex subproblems. This amounts to solving a support vector machine
for w and b with a fixed Φ(x) and alternately solving for Φ(x) as a linear program with
the SVM solution fixed. In both subproblems, we optimize over ξ as it is part of the hinge
loss. This yields an efficient convergent optimization method. However, this method can
get stuck in local minima. In practice, we initialize it with a linear Φ(x) and iterate from
there. Alternative initializations do not help much. This leads us to look for a method
to efficiently find global solutions.
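A minimal sketch of the alternating procedure is given below. It is our illustration, not the authors' code: scikit-learn's LinearSVC (whose default squared-hinge loss differs slightly from the hinge loss above) stands in for the SVM step, scipy's linprog solves the LP step, labels y are assumed to be in {-1, +1}, and error handling is omitted.

```python
import numpy as np
from scipy.optimize import linprog
from sklearn.svm import LinearSVC

def ramp_features(X, z):
    """Truncated-ramp features phi_j(X), stacked: shape (K, N, D)."""
    return np.stack([np.clip((X - z[j]) / (z[j + 1] - z[j]), 0.0, 1.0)
                     for j in range(len(z) - 1)])

def alternate(X, y, z, n_iters=10, C=1.0):
    Phi = ramp_features(X, z)                 # (K, N, D)
    K, N, _ = Phi.shape
    m = np.full(K, 1.0 / K)                   # start near a linear transform
    for _ in range(n_iters):
        # Convex subproblem 1: SVM in (w, b) with the transform fixed.
        svm = LinearSVC(C=C).fit(np.tensordot(m, Phi, axes=1), y)
        w, b = svm.coef_.ravel(), svm.intercept_[0]
        # Convex subproblem 2: LP in (m, xi) with (w, b) fixed.
        A = (Phi @ w).T                       # A[i, j] = w . phi_j(x_i)
        c = np.concatenate([np.zeros(K), np.ones(N)])     # minimize sum xi
        A_ub = np.vstack([np.hstack([-y[:, None] * A, -np.eye(N)]),
                          np.concatenate([np.ones(K), np.zeros(N)])])
        b_ub = np.concatenate([-(1.0 - y * b), [1.0]])    # margins, sum m <= 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (K + N))
        m = res.x[:K]
    return m, w, b
```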
3 Convex Relaxation
When faced with a nonconvex quadratic problem, an increasingly popular technique is to
relax it into a convex one. Lasserre [8] proposed a sequence of convex relaxations for
these types of nonconvex quadratic programs. This method replaces all quadratic terms
in the original optimization problem with entries in a matrix. In its simplest form this
matrix corresponds to the outer product of the original variables with rank one and
semidefinite constraints. The relaxation comes from dropping the rank one constraint on
the outer product matrix. Lasserre proposed more elaborate relaxations using higher order
moments of the variables. However, we mainly use the first moment relaxation along with
a few of the second order moment constraints that do not require any additional variables
beyond the outer product matrix.
A convex relaxation could be derived directly from the primal formulation of our problem.
Both w and m would be relaxed as they interact in the nonconvex quadratic terms. Unfortunately, this yields a semidefinite constraint that scales with both the number of knots
and the dimensionality of the data. This is troublesome because we wish to work with high
dimensional data such as a bag of words representation for text. However, if we first find
the dual formulation for w, we only have to relax m, b, and ξ, which yields both a tighter
relaxation and a less computationally intensive problem. Finding the dual leaves us with the
following min max saddle point problem that will be subsequently relaxed and transformed
into a semidefinite program:
min_m max_α   2 α^T 1 − α^T Y ( Σ_{i,j} m_i m_j φ_i(X)^T φ_j(X) ) Y α    (3)
subject to   0 ≤ α_i ≤ C,   α^T y = 0,   m_j ≥ 0,   Σ_j m_j ≤ 1   ∀ i, j
where 1 is a vector of ones, y is a vector of the labels, Y = diag(y) is a matrix with the
labels on its diagonal and zeros elsewhere, and X is a matrix with x_i in the ith column.
We introduce the relaxation via the substitution M = m̂ m̂^T and the constraint M ⪰ 0, where
m̂ is constructed by concatenating 1 with m. We can then transform the relaxed min max
problem into a semidefinite program similar to the multiple kernel learning framework [7]
by finding the dual with respect to α and using the Schur complement lemma to generate
a linear matrix inequality [1]:
min_{M, t, λ, ν, δ}   t    (4)
subject to
( Y Σ_{i,j} M_{i,j} φ_i(X)^T φ_j(X) Y     1 + ν − δ + λ y )
( (1 + ν − δ + λ y)^T                     t − 2C δ^T 1    )  ⪰ 0
M ⪰ 0,   M ≥ 0,   M 1̄ ≥ 0,   M_{0,0} = 1,   ν ≥ 0,   δ ≥ 0
where 0 is a vector of zeros and 1̄ is a vector with −1 in the first dimension and ones in the
rest. The variables λ, ν, δ arise from the dual transformation. This relaxation is exact if M
is a rank one matrix.
The above can be seen as a generalization of the multiple kernel learning framework. Instead
of learning a kernel from a combination of kernels, we are learning a combination of inner
products of different functions applied to our data. In our case, these are truncated ramp
functions. The terms φ_i(X)^T φ_j(X) are not Mercer kernels except when i = j. This more
general combination requires the stricter constraint that the mixing weights M form a
positive semidefinite matrix, a constraint which is introduced via the relaxation. This is
a sufficient condition for the resulting matrix Σ_{i,j} M_{i,j} φ_i(X)^T φ_j(X) to also be positive
semidefinite.
When using this relaxation, we can recover the monotonic transform by using the first
column (row) as the mixing weights, m, of the truncated ramp functions. In practice,
however, we use the learned kernel in our predictions: k(x, x′) = Σ_{i,j} M_{i,j} φ_i(x)^T φ_j(x′).
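In code, this prediction kernel is a weighted sum of Gram matrices between ramp features. The sketch below is our illustration (it reuses ramp_features from the earlier sketch, and takes M to be the K x K block of mixing weights, dropping the homogenizing first row and column of the relaxation's moment matrix).

```python
import numpy as np

def learned_kernel(X1, X2, z, M):
    """k(x, x') = sum_{i,j} M_ij phi_i(x)^T phi_j(x') for rows of X1, X2."""
    P1 = ramp_features(X1, z)                 # (K, N1, D)
    P2 = ramp_features(X2, z)                 # (K, N2, D)
    G = np.einsum('iad,jbd->ijab', P1, P2)    # phi_i(x_a)^T phi_j(x'_b)
    return np.einsum('ij,ijab->ab', M, G)     # (N1, N2) Gram matrix
```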
4 Experiments
4.1 Synthetic Experiment
In this experiment we will demonstrate our method's ability to recover a monotonic transformation from data. We sampled data near a linear decision boundary and generated labels
based on this boundary. We then applied a strictly monotonic function to this sampled data.
The training set is made up of the transformed points and the original labels. A linear algorithm will have difficulty because the mapped data is not linearly separable. However,
Figure 3: a) Original data. b) Data transformed by a logarithm. c) Data transformed
by a quadratic function. d-f) The transformation functions learned using the nonconvex
algorithm. g-i) The transformation functions learned using the convex algorithm.
if we could recover the inverse monotonic function, then a linear decision boundary would
perform well.
Figure 3a shows the original data and decision boundary. Figure 3b shows the data and
hyperplane transformed with a normalized logarithm. Figure 3c depicts a quadratic transform. 600 data points were sampled, and then transformed. 200 were used for training, 200
for cross validation and 200 for testing. We compared our locally optimal method (L mono),
our convex relaxation (C mono) and a linear SVM (linear). The linear SVM struggled on
all of the transformed data while the other methods performed well as reported in Figure 4.
The learned transforms for L mono are plotted in Figure 3(d-f). The solid blue line is the
mean over 10 experiments, and the dashed blue is the standard deviation. The black line
is the true target function. The learned functions for C mono are in Figure 3(g-i). Both
algorithms performed quite well on the task of classification and recover nearly the exact
monotonic transform. The local method outperformed the relaxation slightly because this
was an easy problem with few local minima.
4.2 Document Classification
In this experiment we used the four universities WebKB dataset. The data is made up of
web pages from four universities plus an additional larger set from miscellaneous universities.
              Linear   L Mono   C Mono
linear        0.0005   0.0020   0.0025
exponential   0.0375   0.0005   0.0075
square root   0.0685   0.0020   0.0025
total         0.0355   0.0015   0.0042
Figure 4: Testing error rates for the synthetic experiments.
         Linear   TFIDF    Sqrt     Poly     RBF      L Mono   C Mono
1 vs 2   0.0509   0.0428   0.0363   0.0499   0.0514   0.0338   0.0322
1 vs 3   0.0879   0.0891   0.0667   0.0861   0.0836   0.0739   0.0776
1 vs 4   0.1381   0.1623   0.0996   0.1389   0.1356   0.0854   0.0812
2 vs 3   0.0653   0.0486   0.0456   0.0599   0.0641   0.0511   0.0501
2 vs 4   0.1755   0.1910   0.1153   0.1750   0.1755   0.1060   0.0973
3 vs 4   0.0941   0.1096   0.0674   0.0950   0.0981   0.0602   0.0584
total    0.1025   0.1059   0.0711   0.1009   0.1024   0.0683   0.0657
Figure 5: Testing error rates for WebKB.
These web pages are then categorized. We will be working with the largest four categories:
student, faculty, course, and project. The task is to solve all six pairwise classification
problems. In [6, 5] preprocessing the data with a square root was demonstrated to yield
good results. We will compare our nonconvex method (L mono), and our convex relaxation
(C mono) to a linear SVM with and without the square root, with TFIDF features and also
a kernelized SVM with both the polynomial kernel and the RBF kernel. We will follow the
setup of [6] by training on three universities and the miscellaneous university set and testing
on web pages from the fourth university. We repeated this four fold experiment five times.
For each fold, we use a subset of 200 points for training, 200 to cross validate the parameter
settings, and all of the fourth university's points for testing.
Our two methods outperform the competition on average as reported in Figure 5. The
convex relaxation chooses a step function nearly every time. This outputs a 1 if a word is
in the training vector and 0 if it is absent. The nonconvex greedy algorithm does not end
up recovering this solution as reliably and seems to get stuck in local minima. This leads to
slightly worse performance than the convex version.
4.3 Image Histogram Classification
In this experiment, we used the Corel image dataset. In [3], it was shown that monotonic
transforms of the form x^a for 0 ≤ a ≤ 1 worked well. The Corel image dataset is made up
of various categories, each containing 100 images. We chose four categories of animals: 1)
eagles, 2) elephants, 3) horses, and 4) tigers. Images were transformed into RGB histograms
following the binning strategy of [3, 5]. We ran a series of six pairwise experiments where the
data was randomly split into 80 percent training, 10 percent cross validation, and 10 percent
testing. These six experiments were repeated 10 times. We compared our two methods to
a linear support vector machine, as well as an SVM with RBF and polynomial kernels. We
also compared to the set of transforms x^a for 0 ≤ a ≤ 1 where we cross validated over
a ∈ {0, .125, .25, .5, .625, .75, .875, 1}. This set includes linear a = 1 at one end, a binary
threshold a = 0 at the other (choosing 0^0 = 0), and the square root transform in the middle.
The convex relaxation performed best or tied for best on 4 out of 6 of the experiments and
was the best overall as reported in Figure 6. The nonconvex version also performed well
but ended up with a lower accuracy than the cross validated family of x^a transforms. The
key to this dataset is that most of the data is very close to zero due to few pixels being in a
given bin. Cross validation over x^a most often chose low nonzero a values. Our method had
many knots in these extremely low values because that was where the data support was.
Plots of our learned functions on these small values can be found in Figure 7(a-f). Solid
blue is the mean for the nonconvex algorithm and dashed blue is the standard deviation.
Similarly, the convex relaxation is in red.
         Linear   Sqrt     Poly     RBF      x^a      L Mono   C Mono
1 vs 2   0.08     0.03     0.07     0.06     0.08     0.05     0.04
1 vs 3   0.10     0.05     0.10     0.08     0.04     0.06     0.03
1 vs 4   0.28     0.09     0.28     0.22     0.03     0.04     0.03
2 vs 3   0.11     0.12     0.11     0.10     0.03     0.05     0.04
2 vs 4   0.14     0.08     0.15     0.13     0.09     0.13     0.06
3 vs 4   0.26     0.20     0.23     0.23     0.06     0.05     0.05
total    0.1617   0.0950   0.1567   0.1367   0.0550   0.0633   0.0417
Figure 6: Testing error rates on Corel dataset.
Figure 7: The learned transformation functions for 6 Corel problems.
4.4 Gender classification
In this experiment we try to differentiate between images of males and females. We have
1755 labelled images from the FERET dataset processed as in [9]. Each processed image
is a 21 by 12 pixel, 256 gray level image that is rasterized to form training vectors.
There are 1044 male images and 711 female images. We randomly split the data into 80
percent training, 10 percent cross validation, and 10 percent testing. We then compare
a linear SVM to our two methods on 5 random splits of the data. The learned monotonic
functions from L Mono and C Mono are similar to a sigmoid function, which indicates that
useful saturation and threshold effects were uncovered by our methods. Figure 8a shows
examples of training images before and after they have been transformed by our learned
function. Figure 8b summarizes the results. Our learned transformation outperforms the
linear SVM with the convex relaxation performing best.
5 Discussion
A data driven framework was presented for jointly learning monotonic transformations of
input data and a discriminative linear classifier. The joint optimization improves classification accuracy and produces interesting transformations that otherwise would require a
priori domain knowledge. Two implementations were discussed. The first is a fast greedy
algorithm for finding a locally optimal solution. Subsequently, a semidefinite relaxation of
the original problem was presented which does not suffer from local minima. The greedy
algorithm has similar scaling properties as a support vector machine yet has local minima
to contend with. The semidefinite relaxation is more computationally intensive yet ensures
a reliable global solution. Nevertheless, both implementations were helpful in synthetic and
real experiments including text and image classification and improved over standard support
vector machine tools.
Algorithm   Error
Linear      .0909
L Mono      .0818
C Mono      .0648
Figure 8: a) Original and transformed gender images. b) Error rates for gender classification.
A natural next step is to explore faster (convex) algorithms that take advantage of the
specific structure of the problem. These faster algorithms will help us explore extensions
such as learning transformations across multiple tasks. We also hope to explore applications
to other domains such as gene expression data to refine the current logarithmic transforms
necessary to compensate for well-known saturation effects in expression level measurements.
We are also interested in looking at fMRI and audio data where monotonic transformations
are useful.
6 Acknowledgements
This work was supported in part by NSF Award IIS-0347499 and ONR Award
N000140710507.
References
[1] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press,
2004.
[2] M. Brown, W. Grundy, D. Lin, N. Cristianini, C. Sugnet, M. Ares Jr., and D. Haussler.
Support vector machine classification of microarray gene expression data, 1999.
[3] O. Chapelle, P. Haffner, and V.N. Vapnik. Support vector machines for histogram-based
classification. Neural Networks, IEEE Transactions on, 10:1055–1064, 1999.
[4] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273–297,
1995.
[5] M. Hein and O. Bousquet. Hilbertian metrics and positive definite kernels on probability
measures. In Proceedings of Artificial Intelligence and Statistics, 2005.
[6] T. Jebara, R. Kondor, and A. Howard. Probability product kernels. Journal of Machine
Learning Research, 5:819–844, 2004.
[7] G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan. Learning the
kernel matrix with semidefinite programming. Journal of Machine Learning Research,
5:27–72, 2004.
[8] J.B. Lasserre. Convergent LMI relaxations for nonconvex quadratic programs. In
Proceedings of 39th IEEE Conference on Decision and Control, 2000.
[9] B. Moghaddam and M.H. Yang. Sex with support vector machines. In Todd K. Leen,
Thomas G. Dietterich, and Volker Tresp, editors, Advances in Neural Information Processing 13, pages 960–966. MIT Press, 2000.
[10] T. Robertson, F.T. Wright, and R.L. Dykstra. Order Restricted Statistical Inference.
Wiley, 1988.
Agreement-Based Learning
Percy Liang
Computer Science Division
University of California
Berkeley, CA 94720
Dan Klein
Computer Science Division
University of California
Berkeley, CA 94720
Michael I. Jordan
Computer Science Division
University of California
Berkeley, CA 94720
[email protected]
[email protected]
[email protected]
Abstract
The learning of probabilistic models with many hidden variables and nondecomposable dependencies is an important and challenging problem. In contrast
to traditional approaches based on approximate inference in a single intractable
model, our approach is to train a set of tractable submodels by encouraging them
to agree on the hidden variables. This allows us to capture non-decomposable
aspects of the data while still maintaining tractability. We propose an objective
function for our approach, derive EM-style algorithms for parameter estimation,
and demonstrate their effectiveness on three challenging real-world learning tasks.
1 Introduction
Many problems in natural language, vision, and computational biology require the joint modeling of
many dependent variables. Such models often include hidden variables, which play an important role
in unsupervised learning and general missing data problems. The focus of this paper is on models
in which the hidden variables have natural problem domain interpretations and are the object of
inference.
Standard approaches for learning hidden-variable models involve integrating out the hidden variables and working with the resulting marginal likelihood. However, this marginalization can be intractable. An alternative is to develop procedures that merge the inference results of several tractable
submodels. An early example of such an approach is the use of pseudolikelihood [1], which deals
with many conditional models of single variables rather than a single joint model. More generally,
composite likelihood permits a combination of the likelihoods of subsets of variables [7]. Another
approach is piecewise training [10, 11], which has been applied successfully to several large-scale
learning problems.
All of the above methods, however, focus on fully-observed models. In the current paper, we develop
techniques in this spirit that work for hidden-variable models. The basic idea of our approach is to
create several tractable submodels and train them jointly to agree on their hidden variables. We
present an intuitive objective function and efficient EM-style algorithms for training a collection of
submodels. We refer to this general approach as agreement-based learning.
Sections 2 and 3 present the general theory for agreement-based learning. In some applications, it
is infeasible computationally to optimize the objective function; Section 4 provides two alternative
objectives that lead to tractable algorithms. Section 5 demonstrates that our methods can be applied successfully to large datasets in three real-world problem domains: grammar induction, word
alignment, and phylogenetic hidden Markov modeling.
2 Agreement-based learning of multiple submodels
Assume we have M (sub)models p_m(x, z; θ_m), m = 1, . . . , M, where each submodel specifies a
distribution over the observed data x ∈ X and some hidden state z ∈ Z. The submodels could be
parameterized in completely different ways as long as they are defined on the common event space
X × Z. Intuitively, each submodel should capture a different aspect of the data in a tractable way.
To learn these submodels, the simplest approach is to train them independently by maximizing the
sum of their log-likelihoods:
O_indep(θ) := log Π_m Σ_z p_m(x, z; θ_m) = Σ_m log p_m(x; θ_m),    (1)

where θ = (θ_1, . . . , θ_M) is the collective set of parameters and p_m(x; θ_m) = Σ_z p_m(x, z; θ_m)
is the likelihood under submodel p_m.¹ Given an input x, we can then produce an output z by
combining the posteriors p_m(z | x; θ_m) of the trained submodels.
If we view each submodel as trying to solve the same task of producing the desired posterior over
z, then it seems advantageous to train the submodels jointly to encourage "agreement on z." We
propose the following objective which realizes this insight:
O_agree(θ) := log Σ_z Π_m p_m(x, z; θ_m) = Σ_m log p_m(x; θ_m) + log Σ_z Π_m p_m(z | x; θ_m).    (2)

The last term rewards parameter values θ for which the submodels assign probability mass to the
same z (conditioned on x); the summation over z reflects the fact that we do not know what z is.
O_agree has a natural probabilistic interpretation. Imagine defining a joint distribution over M independent copies of the data and hidden state, (x_1, z_1), . . . , (x_M, z_M), which are each generated
by a different submodel: p((x_1, z_1), . . . , (x_M, z_M); θ) = Π_m p_m(x_m, z_m; θ_m). Then O_agree is the
probability that the submodels all generate the same observed data x and the same hidden state:
p(x_1 = · · · = x_M = x, z_1 = · · · = z_M; θ).
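The decomposition in Eq. (2) is easy to verify by brute force on a tiny discrete example; the following check uses made-up numbers and is only meant to show that O_agree equals the independent likelihoods plus a posterior-agreement term.

```python
import numpy as np

# p_m(x, z; theta_m) over z in {0, 1, 2} for one fixed observed x (made up).
p1 = np.array([0.10, 0.25, 0.05])
p2 = np.array([0.20, 0.15, 0.05])

O_indep = np.log(p1.sum()) + np.log(p2.sum())    # Eq. (1)
O_agree = np.log(np.sum(p1 * p2))                # Eq. (2), left-hand form
post1, post2 = p1 / p1.sum(), p2 / p2.sum()      # p_m(z | x; theta_m)
# Eq. (2), right-hand form: likelihood terms plus the agreement term.
assert np.isclose(O_agree, O_indep + np.log(np.sum(post1 * post2)))
```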
O_agree is also related to the likelihood of a proper probabilistic model p_norm, obtained by normalizing
the product of the submodels, as is done in [3]. Our objective O_agree is then a lower bound on the
likelihood under p_norm:

p_norm(x; θ) := [Σ_z Π_m p_m(x, z; θ_m)] / [Σ_{x,z} Π_m p_m(x, z; θ_m)]
            ≥ [Σ_z Π_m p_m(x, z; θ_m)] / [Π_m Σ_{x,z} p_m(x, z; θ_m)] = O_agree(θ).    (3)
The inequality holds because the denominator of the lower bound contains additional cross terms.
The bound is generally loose, but becomes tighter as each p_m becomes more deterministic. Note
that p_norm is distinct from the product-of-experts model [3], in which each "expert" model p_m has
its own set of (nuisance) hidden variables: p_poe(x) ∝ Π_m Σ_z p_m(x, z; θ_m). In contrast, p_norm has
one set of hidden variables z common to all submodels, which is what provides the mechanism for
agreement-based learning.
2.1 The product EM algorithm
We now derive the product EM algorithm to maximize O_agree. Product EM bears many striking
similarities to EM: both are coordinate-wise ascent algorithms on an auxiliary function and both
increase the original objective monotonically. By introducing an auxiliary distribution q(z) and
applying Jensen's inequality, we can lower bound O_agree with an auxiliary function L:

O_agree(θ) = log Σ_z q(z) [Π_m p_m(x, z; θ_m) / q(z)] ≥ E_{q(z)} log [Π_m p_m(x, z; θ_m) / q(z)] =: L(θ, q)    (4)
The product EM algorithm performs coordinate-wise ascent on L(θ, q). In the (product) E-step, we
optimize L with respect to q. Simple algebra reveals that this optimization is equivalent to minimizing
a KL-divergence: L(θ, q) = −KL(q(z) || Π_m p_m(x, z; θ_m)) + constant, where the constant
does not depend on q. This quantity is minimized by setting q(z) ∝ Π_m p_m(x, z; θ_m). In the
(product) M-step, we optimize L with respect to θ, which decomposes into M independent objectives:
L(θ, q) = Σ_m E_q log p_m(x, z; θ_m) + constant, where this constant does not depend on θ. Each
term corresponds to an independent M-step, just as in EM for maximizing O_indep.

¹ To simplify notation, we consider one data point x. Extending to a set of i.i.d. points is straightforward.
Thus, our product EM algorithm differs from independent EM only in the E-step, in which the
submodels are multiplied together to produce one posterior over z rather than M separate ones.
Assuming that there is an efficient EM algorithm for each submodel p_m, there is no difficulty in
performing the product M-step. In our applications (Section 5), each p_m is composed of multinomial
distributions, so the M-step simply involves computing ratios of expected counts. On the other hand,
the product E-step can become intractable and we must develop approximations (Section 4).
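For a fully enumerable z, the product E-step is a one-line change to standard EM, as this minimal sketch (our own, not the paper's code) shows: multiply the submodels' joint scores and renormalize, then feed the result to M independent M-steps.

```python
import numpy as np

def product_e_step(joints):
    """joints[m][z] = p_m(x, z; theta_m) for the observed x and each z.

    Returns q(z) proportional to prod_m p_m(x, z; theta_m)."""
    q = np.ones_like(joints[0])
    for p in joints:
        q = q * p
    return q / q.sum()

q = product_e_step([np.array([0.10, 0.25, 0.05]),
                    np.array([0.20, 0.15, 0.05])])
# q then plays the role of the posterior in M ordinary M-steps.
```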
3 Exponential family formulation
Thus far, we have placed no restrictions on the form of the submodels. To develop a richer understanding and provide a framework for making approximations, we now assume that each submodel
p_m is an exponential family distribution:

p_m(x, z; θ_m) = exp{ θ_m^T φ_m(x, z) − A_m(θ_m) }   for x ∈ X, z ∈ Z_m, and 0 otherwise,    (5)

where φ_m are sufficient statistics (features) and A_m(θ_m) = log Σ_{x∈X, z∈Z_m} exp{ θ_m^T φ_m(x, z) } is
the log-partition function,² defined on θ_m ∈ Θ_m ⊆ ℝ^J. We can think of all the submodels p_m as
being defined on a common space Z_∪ = ∪_m Z_m, but the support of q(z) as computed in the E-step is
only the intersection Z_∩ = ∩_m Z_m. Controlling this support will be essential in developing tractable
approximations (Section 4.1).
In the general formulation, we required only that the submodels share the same event space X × Z.
Now we make explicit the possibility of the submodels sharing features, which gives us more
structure for deriving approximations. In particular, suppose each feature j of submodel p_m can be
decomposed into a part that depends on x (which is specific to that particular submodel) and a part
that depends on z (which is the same for all submodels):

φ_{mj}(x, z) = Σ_{i=1}^I φ^X_{mji}(x) φ^Z_i(z),   or in matrix notation,   φ_m(x, z) = Φ^X_m(x) φ^Z(z),    (6)
where Φ^X_m(x) is a J × I matrix and φ^Z(z) is an I × 1 vector. When z is discrete, such a decomposition always exists by defining φ^Z(z) to be a |Z_∪|-dimensional indicator vector which is 1 on the
component corresponding to z. Fortunately, we can usually obtain more compact representations of
φ^Z(z). We can now express our objective L(θ, q) (4) using (5) and (6):
L(θ, q) = Σ_m θ_m^T Φ^X_m(x) (E_{q(z)} φ^Z(z)) + H(q) − Σ_m A_m(θ_m)   for q ∈ Q(Z_∩),    (7)

where Q(Z′) := {q : q(z) = 0 for z ∉ Z′} is the set of distributions with support Z′. For
convenience, define b_m^T := θ_m^T Φ^X_m(x) and b := Σ_m b_m, which summarize the parameters θ for the
E-step. Note that for any θ, the q maximizing L always has the following exponential family form:
q(z; γ) = exp{ γ^T φ^Z(z) − A_{Z_∩}(γ) }   for z ∈ Z_∩ and 0 otherwise,    (8)

where A_{Z_∩}(γ) = log Σ_{z∈Z_∩} exp{ γ^T φ^Z(z) } is the log-partition function. In a minor abuse of
notation, we write L(θ, γ) = L(θ, q(·; γ)). Specifically, L(θ, γ) is maximized by setting γ = b.
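When Z_∩ is small enough to enumerate, A_{Z_∩}(γ) and the mean parameters discussed next can be computed directly; the sketch below (our own names and setup) does so for a feature matrix whose rows are φ^Z(z).

```python
import numpy as np

def log_partition_and_mean(gamma, feats):
    """feats: (|Z_cap|, I) matrix with rows phi^Z(z) for z in Z_cap.

    Returns A_{Z_cap}(gamma) and mu = E_{q(z; gamma)} phi^Z(z), per Eq. (8)."""
    scores = feats @ gamma                    # gamma^T phi^Z(z) for each z
    A = np.logaddexp.reduce(scores)           # log-partition function
    q = np.exp(scores - A)                    # q(z; gamma)
    mu = q @ feats                            # mean parameters
    return A, mu
```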
It will be useful to express (7) using convex duality [12]. The key idea of convex duality is the
existence of a mapping between the canonical exponential parameters γ ∈ ℝ^I of an exponential
family distribution q(z; γ) and the mean parameters defined by μ = E_{q(z;γ)} φ^Z(z) ∈ M(Z_∩) ⊆ ℝ^I,
where M(Z′) = {μ : ∃q ∈ Q(Z′) : E_q φ^Z(z) = μ} is the set of realizable mean parameters. The
Fenchel-Legendre conjugate of the log-partition function A_{Z_∩}(γ) is

A*_{Z_∩}(μ) := sup_{γ ∈ ℝ^I} { γ^T μ − A_{Z_∩}(γ) }   for μ ∈ M(Z_∩),    (9)
2
Our applications use directed graphical models, which correspond to curved exponential families where
each ?m is defined by local normalization constraints and Am (?m ) = 0.
which is also equal to −H(q(z; η)), the negative entropy of any distribution q(z; η) corresponding to μ. Substituting μ and A*_{Z_∩}(μ) into (7), we obtain an objective in terms of the dual variables μ:

L*(θ, μ) def= ∑_m θ_m^T Φ^X_m(x) μ − A*_{Z_∩}(μ) − ∑_m A_m(θ_m) for μ ∈ M(Z_∩).  (10)

Note that the two objectives are equivalent: sup_{η∈R^I} L(θ, η) = sup_{μ∈M(Z_∩)} L*(θ, μ) for each θ. The mean parameters μ are exactly the z-specific expected sufficient statistics computed in the product E-step. The dual is an attractive representation because it allows us to form convex combinations of different μ, an operation that does not have a direct correlate in the primal formulation. The product EM algorithm is summarized below:
Product EM
E-step:  μ = argmax_{μ′∈M(Z_∩)} {b^T μ′ − A*_{Z_∩}(μ′)}
M-step:  θ_m = argmax_{θ′_m∈Θ_m} {θ′_m^T Φ^X_m(x) μ − A_m(θ′_m)}
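As a concrete illustration (not from the paper), the following is a minimal runnable sketch of these two updates for M multinomial submodels over a shared discrete hidden state; the data layout and all names (X as a list of symbol ids, n_states, priors, emits) are our own assumptions.

```python
# Minimal product EM sketch for M multinomial submodels sharing a discrete z.
# Each submodel is p_m(x, z) = priors[m][z] * emits[m][z, x]; all names here
# are illustrative assumptions, not notation from the paper.
import numpy as np

def product_em(X, n_states, n_symbols, M=2, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    priors = [rng.dirichlet(np.ones(n_states)) for _ in range(M)]
    emits = [rng.dirichlet(np.ones(n_symbols), size=n_states) for _ in range(M)]
    for _ in range(iters):
        # Product E-step: a single posterior q(z | x) proportional to the
        # product of all submodels -- the only change from independent EM.
        q = np.zeros((len(X), n_states))
        for i, x in enumerate(X):
            log_q = sum(np.log(priors[m]) + np.log(emits[m][:, x])
                        for m in range(M))
            q[i] = np.exp(log_q - log_q.max())
            q[i] /= q[i].sum()
        # Product M-step: M independent updates (ratios of expected counts),
        # all driven by the same shared posterior q.
        for m in range(M):
            priors[m] = q.mean(axis=0)
            counts = np.full((n_states, n_symbols), 1e-12)  # tiny smoothing
            for i, x in enumerate(X):
                counts[:, x] += q[i]
            emits[m] = counts / counts.sum(axis=1, keepdims=True)
    return priors, emits
```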
4 Approximations
The product M-step is tractable provided that the M-step for each submodel is tractable, which
is generally the case. The corresponding statement is not true for the E-step, which in general
requires explicitly summing over all possible z ∈ Z_∩, often an exponentially large set. We will thus consider alternative E-steps, so it will be convenient to succinctly characterize an E-step. An E-step is specified by a vector b′ (which depends on θ and x) and a set Z′ (which we sum z over):

E(b′, Z′) computes μ = argmax_{μ′∈M(Z′)} {b′^T μ′ − A*_{Z′}(μ′)}.  (11)
Using this notation, E(b_m, Z_m) is the E-step for training the m-th submodel independently using EM and E(b, Z_∩) is the E-step of product EM. Though we write E-steps in the dual formulation, in practice, we compute μ as an expectation over all z ∈ Z′, perhaps leveraging dynamic programming. If E(b_m, Z_m) is tractable and all submodels have the same dynamic programming structure (e.g., if z is a tree and all features are local with respect to that tree), then E(b, Z_∩) is also tractable: we can incorporate all the features into the same dynamic program and simply run product EM (see Section 5.1 for an example).
However, E(b, Z_∩) is intractable in general, owing to two complications: (1) we can sum over each Z_m efficiently but not the intersection Z_∩; and (2) each b_m corresponds to a decomposable graphical model, but the combined b = ∑_m b_m corresponds to a loopy graph. In the sequel, we describe two
approximate objective functions addressing each complication, whose maximization can be carried
out by performing M independent tractable E-steps.
4.1 Domain-approximate product EM
Assume that for each submodel p_m, E(b, Z_m) is tractable (see Section 5.2 for an example). We propose maximizing the following objective:

L*_dom(θ, μ_1, …, μ_M) def= (1/M) ∑_m [ ∑_{m′} θ_{m′}^T Φ^X_{m′}(x) μ_m − A*_{Z_m}(μ_m) ] − ∑_m A_m(θ_m),  (12)
with each μ_m ∈ M(Z_m). This objective can be maximized via coordinate-wise ascent:

Domain-approximate product EM
E-step:  μ_m = argmax_{μ′_m∈M(Z_m)} {b^T μ′_m − A*_{Z_m}(μ′_m)}   [= E(b, Z_m)]
M-step:  θ_m = argmax_{θ′_m∈Θ_m} {θ′_m^T Φ^X_m(x) (1/M) ∑_{m′} μ_{m′} − A_m(θ′_m)}
The product E-step consists of M separate E-steps, which are each tractable because each involves the respective Z_m instead of Z_∩. The resulting expected sufficient statistics are averaged and used
in the product M-step, which breaks down into M separate M-steps.
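Schematically, one iteration of this procedure could look as follows; the submodel interface used here (natural_params, e_step, m_step) is purely hypothetical, introduced for illustration only.

```python
# One domain-approximate product EM iteration over a hypothetical interface:
#   m.natural_params(x) -> b_m,  m.e_step(b) -> E(b, Z_m),  m.m_step(mu, x).
def domain_approx_iteration(submodels, x):
    b = sum(m.natural_params(x) for m in submodels)   # b = sum_m b_m
    mus = [m.e_step(b) for m in submodels]            # M tractable E-steps
    mu_bar = sum(mus) / len(submodels)                # average the statistics
    for m in submodels:
        m.m_step(mu_bar, x)                           # M independent M-steps
```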
While we have not yet established any relationship between our approximation L*_dom and the original objective L*, we can, however, relate L*_dom to L*_∪, which is defined as an analogue of L* by replacing Z_∩ with Z_∪ in (10).
Proposition 1. L*_dom(θ, μ_1, …, μ_M) ≤ L*_∪(θ, μ̄) for all θ and μ_m ∈ M(Z_m), where μ̄ = (1/M) ∑_m μ_m.

Proof. First, since M(Z_m) ⊆ M(Z_∪) and M(Z_∪) is a convex set, μ̄ ∈ M(Z_∪), so L*_∪(θ, μ̄) is well-defined. Subtracting the L*_∪ version of (10) from (12), we obtain L*_dom(θ, μ_1, …, μ_M) − L*_∪(θ, μ̄) = A*_{Z_∪}(μ̄) − (1/M) ∑_m A*_{Z_m}(μ_m). It suffices to show A*_{Z_∪}(μ̄) ≤ (1/M) ∑_m A*_{Z_∪}(μ_m) ≤ (1/M) ∑_m A*_{Z_m}(μ_m). The first inequality follows from convexity of A*_{Z_∪}(·). For the second inequality: since Z_m ⊆ Z_∪, A_{Z_∪}(η) ≥ A_{Z_m}(η); by inspecting (9), it follows that A*_{Z_∪}(μ_m) ≤ A*_{Z_m}(μ_m).
4.2 Parameter-approximate product EM
Now suppose that for each submodel p_m, E(b_m, Z_∩) is tractable (see Section 5.3 for an example). We propose maximizing the following objective:

L*_par(θ, μ_1, …, μ_M) def= (1/M) ∑_m [ (M θ_m^T Φ^X_m(x)) μ_m − A*_{Z_∩}(μ_m) ] − ∑_m A_m(θ_m),  (13)
with each μ_m ∈ M(Z_∩). This objective can be maximized via coordinate-wise ascent, which again consists of M separate E-steps E(M b_m, Z_∩) and the same M-step as before:

Parameter-approximate product EM
E-step:  μ_m = argmax_{μ′_m∈M(Z_∩)} {(M b_m)^T μ′_m − A*_{Z_∩}(μ′_m)}   [= E(M b_m, Z_∩)]
M-step:  θ_m = argmax_{θ′_m∈Θ_m} {θ′_m^T Φ^X_m(x) (1/M) ∑_{m′} μ_{m′} − A_m(θ′_m)}
We can show that the maximum value of L*_par is at least that of L*, which leaves us maximizing an upper bound of L*. Although less logical than maximizing a lower bound, in Section 5.3, we show that our approach is nonetheless a reasonable approximation which importantly is tractable.

Proposition 2. max_{μ_1∈M(Z_∩),…,μ_M∈M(Z_∩)} L*_par(θ, μ_1, …, μ_M) ≥ max_{μ∈M(Z_∩)} L*(θ, μ).

Proof. From the definitions of L*_par (13) and L* (10), it is easy to see that L*_par(θ, μ, …, μ) = L*(θ, μ) for all μ ∈ M(Z_∩). If we maximize L*_par with M distinct arguments, we cannot end up with a smaller value.
The product E-step could also be approximated by mean-field or loopy belief propagation variants. These methods and the two we propose all fall under the general variational framework for approximate inference [12]. The two approximations we developed have the advantage of permitting exact tractable solutions without resorting to expensive iterative methods which are only guaranteed to converge to local optima.

While we still lack a complete theory relating our approximations L*_dom and L*_par to the original objective L*, we can give some intuitions. Since we are operating in the space of expected sufficient statistics μ_m, most of the information about the full posterior p_m(z | x) must be captured in these statistics alone. Therefore, we expect our approximations to be accurate when each submodel has enough capacity to represent the posterior p_m(z | x; θ_m) as a low-variance unimodal distribution.
5 Applications
We now empirically validate our algorithms on three concrete applications: grammar induction using
product EM (Section 5.1), unsupervised word alignment using domain-approximate product EM
(Section 5.2), and prediction of missing nucleotides in DNA sequences using parameter-approximate
product EM (Section 5.3).
Figure 1: The two instances of IBM model 1 for word alignment are shown in (a) Submodel p1 and (b) Submodel p2 [diagrams: HMM-style lattices over English words e_i, alignment variables a_j, and French words f_j]. The graph shows gains from agreement-based learning [plot: alignment error rate vs. iteration for Independent EM and Domain-approximate product EM].
5.1 Grammar induction
Grammar induction is the problem of inducing latent syntactic structures given a set of observed
sentences. There are two common types of syntactic structure (one based on word dependencies and
the other based on constituent phrases), which can each be represented as a submodel. [5] proposed
an algorithm to train these two submodels. Their algorithm is a special case of our product EM
algorithm, although they did not state an objective function. Since the shared hidden state is a tree
structure, product EM is tractable. They show that training the two submodels to agree significantly
improves accuracy over independent training. See [5] for more details.
5.2 Unsupervised word alignment
Word alignment is an important component of machine translation systems. Suppose we have a set
of sentence pairs. Each pair consists of two sentences, one in a source language (say, English) and
its translation in a target language (say, French). The goal of unsupervised word alignment is to
match the words in a source sentence to the words in the corresponding target sentence. Formally,
let x = (e, f) be an observed pair of sentences, where e = (e_1, …, e_|e|) and f = (f_1, …, f_|f|); z is a set of alignment edges between positions in the English sentence and positions in the French sentence.
Classical models for word alignment include IBM models 1 and 2 [2] and the HMM model [8].
These are asymmetric models, which means that they assign non-zero probability only to alignments in which each French word is aligned to at most one English word; we denote this set Z_1. An element z ∈ Z_1 can be parameterized by a vector a = (a_1, …, a_|f|), with a_j ∈ {NULL, 1, …, |e|}, corresponding to the English word (if any) that French word f_j is aligned to. We define the first submodel on X × Z_1 as follows (specializing to IBM model 1 for simplicity):

p_1(x, z; θ_1) = p_1(e, f, a; θ_1) = p_1(e) ∏_{j=1}^{|f|} p_1(a_j) p_1(f_j | e_{a_j}; θ_1),  (14)

where p_1(e) and p_1(a_j) are constant and the canonical exponential parameters θ_1 are the translation log-probabilities {log t_{1;ef}} for each English word e (including NULL) and French word f.
Written in exponential family form, φ^Z(z) is an (|e| + 1)(|f| + 1)-dimensional vector whose components are {φ^Z_ij(z) ∈ {0, 1} : i = NULL, 1, …, |e|, j = NULL, 1, …, |f|}. We have φ^Z_ij(z) = 1 if and only if English word e_i is aligned to French word f_j, and φ^Z_{NULL,j}(z) = 1 if and only if f_j is not aligned to any English word. Also, Φ^X_{ef;ij}(x) = 1 if and only if e_i = e and f_j = f. The mean parameters associated with an E-step are {μ_{1;ij}}, the posterior probabilities of e_i aligning to f_j; these can be computed independently for each j. We can define a second submodel p_2(x, z; θ_2) on X × Z_2 by reversing the roles of English and French. Figure 1(a)-(b) shows the two models.
We cannot use the product EM algorithm to train p_1 and p_2 because summing over all alignments in Z_∩ = Z_1 ∩ Z_2 is NP-hard. However, we can use domain-approximate product EM because E(b_1 + b_2, Z_m) is tractable: the tractability here does not depend on decomposability of b but on the asymmetric alignment structure of Z_m. The concrete change from independent EM is slight: we need only change the E-step of each p_m to use the product of translation probabilities t_{1;ef} t_{2;fe} and change the M-step to use the average of the edge posteriors obtained from the two E-steps.
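A hedged sketch of this modified E-step for an IBM-model-1-style pair, assuming t1 is an array with t1[e, f] = p1(f | e) and t2 an array with t2[f, e] = p2(e | f); the names here are ours, not the paper's.

```python
# Posterior over the English position each French word aligns to, computed
# independently per position j from the PRODUCT t1 * t2 of the two models.
import numpy as np

def agreement_posteriors(e_ids, f_ids, t1, t2):
    posts = np.zeros((len(f_ids), len(e_ids)))
    for j, f in enumerate(f_ids):
        scores = np.array([t1[e, f] * t2[f, e] for e in e_ids])
        posts[j] = scores / scores.sum()
    return posts  # the M-step then averages these with the reversed model's
```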
Figure 2: The two phylogenetic HMM models, one for the even slices, the other for the odd ones [(a) Submodel p1 and (b) Submodel p2; diagrams: lattices of nucleotide variables d_{A1}, …, d_{E4} across species A-E and sequence positions 1-4].
[6] proposed an alternative method to train two models to agree. Their E-step computes μ_1 = E(b_1, Z_1) and μ_2 = E(b_2, Z_2), whereas our E-steps incorporate the parameters of both models in b_1 + b_2. Their M-step uses the elementwise product of μ_1 and μ_2, whereas we use the average ½(μ_1 + μ_2). Finally, while their algorithm appears to be very stable and is observed to converge empirically, no objective function has been developed; in contrast, our algorithm maximizes (12).
In practice, both algorithms perform comparably.
We conducted our experiments according to the setup of [6]. We used 100K unaligned sentences
for training and 137 for testing from the English-French Hansards data of the NAACL 2003 Shared
Task. Alignments are evaluated using alignment error rate (AER); see [6] for more details. We
trained two instances of the HMM model [8] (English-to-French and French-to-English) using 10
iterations of domain-approximate product EM, initializing with independently trained IBM model 1
parameters. For prediction, we output alignment edges with sufficient posterior probability: {(i, j) : ½(μ_{1;ij} + μ_{2;ij}) ≥ δ}. Figure 1 shows how agreement-based training improves the error rate over
independent training for the HMM models.
5.3 Phylogenetic HMM models
Suppose we have a set of species s ∈ S arranged in a fixed phylogeny (i.e., S are the nodes of a directed tree). Each species s is associated with a length-L sequence of nucleotides d_s = (d_{s1}, …, d_{sL}). Let d = {d_s : s ∈ S} denote all the nucleotides, which consist of some observed ones x and unobserved ones z.
A good phylogenetic model should take into consideration both the relationship between nucleotides
of the different species at the same site and the relationship between adjacent nucleotides in the same
species. However, such a model would have high tree-width and be intractable to train. Past work
has focused on traditional variational inference in a single intractable model [9, 4]. Our approach is
to instead create two tractable submodels and train them to agree. Define one submodel to be
p_1(x, z; θ_1) = p_1(d; θ_1) = ∏_{j odd} ∏_{s∈S} ∏_{s′∈CH(s)} p_1(d_{s′j} | d_{sj}; θ_1) p_1(d_{s′,j+1} | d_{s′j}, d_{s,j+1}; θ_1),  (15)

where CH(s) is the set of children of s in the tree. The second submodel p_2 is defined similarly, only with the product taken over j even. The parameters θ_m consist of first-order mutation log-probabilities and second-order mutation log-probabilities. Both submodels permit the same set of assignments of hidden nucleotides (Z_∩ = Z_1 = Z_2). Figure 2(a)-(b) shows the two submodels.
Exact product EM is not tractable since b = b1 + b2 corresponds to a graph with high tree-width.
We can apply parameter-approximate product EM, in which the E-step only involves computing μ_m = E(2b_m, Z_∩). This can be done via dynamic programming along the tree for each two-nucleotide slice of the sequence. In the M-step, the average ½(μ_1 + μ_2) is used for each model, which has a closed-form solution.
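A minimal sketch of that averaged, closed-form M-step, assuming mu1 and mu2 are arrays of expected mutation counts returned by the two E-steps (names are illustrative):

```python
import numpy as np

def shared_m_step(mu1, mu2):
    counts = 0.5 * (mu1 + mu2)          # average the two models' statistics
    return counts / counts.sum(axis=-1, keepdims=True)  # multinomial MLE
```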
Our experiments used a multiple alignment consisting of L = 20,000 consecutive sites belonging
to the L1 transposons in the Cystic Fibrosis Transmembrane Conductance Regulator (CFTR) gene
(chromosome 7). Eight eutherian species were arranged in the phylogeny shown in Figure 3. The
data we used is the same as that of [9]. Some nucleotides in the sequences were already missing. In
addition, we held out some fraction of the observed ones for evaluation. We trained two models using
30 iterations of parameter-approximate product EM.³ (³We initialized with a small amount of noise around uniform parameters plus a small bias towards identity mutations.) For prediction, the posteriors over heldout
Figure 3: The tree is the phylogeny topology used in experiments [leaves: baboon, chimp, human, cow, pig, cat, dog, mouse, rat; internal nodes hidden]. The graphs show the prediction accuracy of independent versus agreement-based training (parameter-approximate product EM) when 20% and 50% of the observed nodes are held out [plots: accuracy vs. iteration].
nucleotides under each model are averaged and the one with the highest posterior is chosen. Figure 3
shows the prediction accuracy. Though independent and agreement-based training eventually obtain
the same accuracy, agreement-based training converges much faster. This gap grows as the amount
of heldout data increases.
6 Conclusion
We have developed a general framework for agreement-based learning of multiple submodels. Viewing these submodels as components of an overall model, our framework permits the submodels to be
trained jointly without paying the computational cost associated with an actual jointly-normalized
probability model. We have presented an objective function for agreement-based learning and three
EM-style algorithms that maximize this objective or approximations to this objective. We have also
demonstrated the applicability of our approach to three important real-world tasks. For grammar induction, our approach yields the existing algorithm of [5], providing an objective for that algorithm.
For word alignment and phylogenetic HMMs, our approach provides entirely new algorithms.
Acknowledgments We would like to thank Adam Siepel for providing the phylogenetic data
and acknowledge the support of the Defense Advanced Research Projects Agency under contract
NBCHD030010.
References
[1] J. Besag. The analysis of non-lattice data. The Statistician, 24:179–195, 1975.
[2] P. F. Brown, S. A. D. Pietra, V. J. D. Pietra, and R. L. Mercer. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19:263–311, 1993.
[3] G. Hinton. Products of experts. In International Conference on Artificial Neural Networks, 1999.
[4] V. Jojic, N. Jojic, C. Meek, D. Geiger, A. Siepel, D. Haussler, and D. Heckerman. Efficient approximations for learning phylogenetic HMM models from data. Bioinformatics, 20:161–168, 2004.
[5] D. Klein and C. D. Manning. Corpus-based induction of syntactic structure: Models of dependency and constituency. In Association for Computational Linguistics (ACL), 2004.
[6] P. Liang, B. Taskar, and D. Klein. Alignment by agreement. In Human Language Technology and North American Association for Computational Linguistics (HLT/NAACL), 2006.
[7] B. Lindsay. Composite likelihood methods. Contemporary Mathematics, 80:221–239, 1988.
[8] H. Ney and S. Vogel. HMM-based word alignment in statistical translation. In International Conference on Computational Linguistics (COLING), 1996.
[9] A. Siepel and D. Haussler. Combining phylogenetic and hidden Markov models in biosequence analysis. Journal of Computational Biology, 11:413–428, 2004.
[10] C. Sutton and A. McCallum. Piecewise training of undirected models. In Uncertainty in Artificial Intelligence (UAI), 2005.
[11] C. Sutton and A. McCallum. Piecewise pseudolikelihood for efficient CRF training. In International Conference on Machine Learning (ICML), 2007.
[12] M. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Technical report, Department of Statistics, University of California at Berkeley, 2003.
Boosting the Area Under the ROC Curve
Philip M. Long
[email protected]
Rocco A. Servedio
[email protected]
Abstract
We show that any weak ranker that can achieve an area under the ROC curve
slightly better than 1/2 (which can be achieved by random guessing) can be efficiently boosted to achieve an area under the ROC curve arbitrarily close to 1. We
further show that this boosting can be performed even in the presence of independent misclassification noise, given access to a noise-tolerant weak ranker.
1 Introduction
Background. Machine learning is often used to identify members of a given class from a list of
candidates. This can be formulated as a ranking problem, where the algorithm takes as input a list of
examples of members and non-members of the class, and outputs a function that can be used to rank
candidates. The goal is to have the top of the list enriched for members of the class of interest.
ROC curves [12, 3] are often used to evaluate the quality of a ranking function. A point on an ROC
curve is obtained by cutting off the ranked list, and checking how many items above the cutoff are
members of the target class ("true positives"), and how many are not ("false positives").
The AUC [1, 10, 3] (area under the ROC curve) is often used as a summary statistic. It is obtained
by rescaling the axes so the true positives and false positives vary between 0 and 1, and, as the name
implies, examining the area under the resulting curve.
The AUC measures the ability of a ranker to identify regions in feature space that are unusually
densely populated with members of a given class. A ranker can succeed according to this criterion
even if positive examples are less dense than negative examples everywhere, but, in order to succeed,
it must identify where the positive examples tend to be. This is in contrast with classification, where,
if Pr[y = 1|x] is less than 1/2 everywhere, just predicting y = −1 everywhere would suffice.
Our Results. It is not hard to see that an AUC of 1/2 can be achieved by random guessing (see [3]),
thus it is natural to define a "weak ranker" to be an algorithm that can achieve AUC slightly above
1/2. We show that any weak ranker can be boosted to a strong ranker that achieves AUC arbitrarily
close to the best possible value of 1.
We also consider the standard independent classification noise model, in which the label of each example is flipped with probability η. We show that in this setting, given a noise-tolerant weak ranker (that achieves nontrivial AUC in the presence of noisy data as described above), we can boost to a strong ranker that achieves AUC at least 1 − ε, for any η < 1/2 and any ε > 0.
Related work. Freund, Iyer, Schapire and Singer [4] introduced RankBoost, which performs ranking with more fine-grained control over preferences between pairs of items than we consider here.
They performed an analysis that implies a bound on the AUC of the boosted ranking function in
terms of a different measure of the quality of weak rankers. Cortes and Mohri [2] theoretically analyzed the "typical" relationship between the error rate of a classifier based on thresholding a scoring function and the AUC obtained through the scoring function; they also pointed out the close relationship between the loss function optimized by RankBoost and the AUC. Rudin, Cortes, Mohri, and Schapire [11] showed that, when the two classes are equally likely, the loss function optimized by AdaBoost coincides with the loss function of RankBoost. Noise-tolerant boosting has previously been studied for classification. Kalai and Servedio [7] showed that, if data is corrupted with noise at a rate η, it is possible to boost the accuracy of any noise-tolerant weak learner arbitrarily close to 1 − η, and they showed that it is impossible to boost beyond 1 − η. In contrast, we show that, in the presence of noise at a rate arbitrarily close to 1/2, the AUC can be boosted arbitrarily close to 1. Our noise-tolerant boosting algorithm uses as a subroutine the "martingale booster" for classification of Long and Servedio [9].
Methods. The key observation is that a weak ranker can be used to find a "two-sided" weak classifier
(Lemma 4), which achieves accuracy slightly better than random guessing on both positive and
negative examples. Two-sided weak classifiers can be boosted to obtain accuracy arbitrarily close
to 1, also on both the positive examples and the negative examples; a proof of this is implicit in the
analysis of [9]. Such a two-sided strong classifier is easily seen to lead to AUC close to 1.
Why is it possible to boost the AUC past the noise rate, when this is provably not possible for
classification? Known approaches to noise-tolerant boosting [7, 9] force the weak learner to provide
a two-sided weak hypothesis by balancing the distributions that are constructed so that both classes
are equally likely. However, this balancing skews the distributions so that it is no longer the case that
the event that an example is corrupted with noise is independent of the instance; randomization was
used to patch this up in [7, 9], and the necessary slack was only available if the desired accuracy was
coarser than the noise rate. (We note that the lower bound from [7] is proved using a construction in
which the class probability of positive examples is less than the noise rate; the essence of that proof
is to show that in that situation it is impossible to balance the distribution given access to noisy
examples.) In contrast, having a weak ranker provides enough leverage to yield a two-sided weak
classifier without needing any rebalancing.
Outline. Section 2 gives some definitions. In Section 3, we analyze boosting the AUC when there
is no noise in an abstract model where the weak learner is given a distribution and returns a weak
ranker, and sampling issues are abstracted away. In Section 4, we consider boosting in the presence
of noise in a similarly abstract model. We address sampling issues in Section 5.
2 Preliminaries
Rankings and AUC. Throughout this work we let X be a domain, c : X → {−1, 1} be a classifier, and D be a probability distribution over labeled examples (x, c(x)). We say that D is nontrivial (for c) if D assigns nonzero probability to both positive and negative examples. We write D⁺ to denote the marginal distribution over positive examples and D⁻ to denote the marginal distribution over negative examples, so D is a mixture of the distributions D⁺ and D⁻.
As has been previously pointed out, we may view any function h : X → R as a ranking of X. Note that if h(x_1) = h(x_2) then the ranking does not order x_1 relative to x_2. Given a ranking function h : X → R, for each value θ ∈ R there is a point (α_θ, β_θ) on the ROC curve of h, where α_θ is the false positive rate and β_θ is the true positive rate of the classifier obtained by thresholding h at θ: α_θ = D⁻[h(x) ≥ θ] and β_θ = D⁺[h(x) ≥ θ]. Every ROC curve contains the points (0, 0) and (1, 1) corresponding to θ = ∞ and −∞ respectively.

Given h : X → R and D, the AUC can be defined as AUC(h; D) = Pr_{u∼D⁺,v∼D⁻}[h(u) > h(v)] + ½ Pr_{u∼D⁺,v∼D⁻}[h(u) = h(v)]. It is well known (see e.g. [2, 6]) that the AUC as defined above is equal to the area under the ROC curve for h.
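A direct sanity-check implementation of this definition (our own illustration), where pos and neg are lists of ranker scores for positive and negative examples:

```python
def auc(pos, neg):
    wins = sum(1.0 for u in pos for v in neg if u > v)
    ties = sum(1.0 for u in pos for v in neg if u == v)
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

assert auc([2, 3], [0, 1]) == 1.0  # perfect ranking
assert auc([1, 1], [1, 1]) == 0.5  # constant ranker achieves AUC 1/2
```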
Weak Rankers. Fix any distribution D. It is easy to see that any constant function h achieves AUC(h; D) = 1/2, and also that for X finite and σ a random permutation of X, the expected AUC of h(σ(·)) is 1/2 for any function h. This motivates the following definition:

Definition 1 A weak ranker with advantage γ is an algorithm that, given any nontrivial distribution D, returns a function h : X → R that has AUC(h; D) ≥ 1/2 + γ.
In the rest of the paper we show how boosting algorithms originally designed for classification can
be adapted to convert weak rankers into "strong" rankers (that achieve AUC at least 1 − ε) in a range
of different settings.
3 From weak to strong AUC
The main result of this section is a simple proof that the AUC can be boosted. We achieve this in a
relatively straightforward way by using the standard AdaBoost algorithm for boosting classifiers.
As in previous work [9], to keep the focus on the main ideas we will use an abstract model in which
the booster successively passes distributions D1 , D2 , ... to a weak ranker which returns ranking
functions h1 , h2 , .... When the original distribution D is uniform over a training set, as in the usual
analysis of AdaBoost, this is easy to do. In this model we prove the following:
Theorem 2 There is an algorithm AUCBoost that, given access to a weak ranker with advantage γ as an oracle, for any nontrivial distribution D, outputs a ranking function with AUC at least 1 − ε. The AUCBoost algorithm makes T = O(log(1/ε)/γ²) many calls to the weak ranker. If D has finite support of size m, AUCBoost takes O(mT log m) time.
As can be seen from the observation that it does not depend on the relative frequency of positive
and negative examples, the AUC requires a learner to perform well on both positive and negative
examples. When such a requirement is imposed on a base classifier, it has been called two-sided
weak learning. The key to boosting the AUC is the observation (Lemma 4 below) that a weak
ranker can be used to generate a two-sided weak learner.
Definition 3 A γ two-sided weak learner is an algorithm that, given a nontrivial distribution D, outputs a hypothesis h that satisfies both Pr_{x∼D⁺}[h(x) = 1] ≥ ½ + γ and Pr_{x∼D⁻}[h(x) = −1] ≥ ½ + γ. We say that such an h has two-sided advantage γ with respect to D.

Lemma 4 Let A be a weak ranking algorithm with advantage γ. Then there is a γ/4 two-sided weak learner A′ based on A that always returns classifiers with equal error rate on positive and negative examples.
Proof: Algorithm A′ first runs A to get a real-valued ranking function h : X → R. Consider the ROC curve corresponding to h. Since the AUC is at least ½ + γ, there must be some point (u, v) on the curve such that v ≥ u + γ. Recall that, by the definition of the ROC curve, this means that there is a threshold θ such that D⁺[h(x) ≥ θ] ≥ D⁻[h(x) ≥ θ] + γ. Thus, for the classifier obtained by thresholding h at θ, the class conditional error rates p₊ def= D⁺[h(x) < θ] and p₋ def= D⁻[h(x) ≥ θ] satisfy p₊ + p₋ ≤ 1 − γ. This in turn means that either p₊ ≤ ½ − γ/2 or p₋ ≤ ½ − γ/2.

Suppose that p₋ ≤ p₊, so that p₋ ≤ ½ − γ/2 (the other case can be handled symmetrically). Consider the randomized classifier g that behaves as follows: given input x, (a) if h(x) < θ, it flips a biased coin, and with probability κ ≥ 0, predicts 1, and with probability 1 − κ, predicts −1, and (b) if h(x) ≥ θ, it predicts 1. Let g(x, r) be the output of g on input x and with randomization r, and let ε₋ def= Pr_{x∼D⁻,r}[g(x, r) = 1] and ε₊ def= Pr_{x∼D⁺,r}[g(x, r) = −1]. We have ε₊ = (1 − κ)p₊ and ε₋ = p₋ + κ(1 − p₋). Let us choose κ so that ε₋ = ε₊; that is, we choose κ = (p₊ − p₋)/(1 + p₊ − p₋). This yields

ε₋ = ε₊ = p₊ / (1 + p₊ − p₋).  (1)

For any fixed value of p₋ the RHS of (1) increases with p₊. Recalling that we have p₊ + p₋ ≤ 1 − γ, the maximum of (1) is achieved at p₊ = 1 − γ − p₋, in which case we have (defining ε def= ε₋ = ε₊) ε = (1 − γ − p₋)/(1 + (1 − γ − p₋) − p₋) = ((1 − γ) − p₋)/(2 − γ − 2p₋). The RHS of this expression is nonincreasing in p₋, and is therefore maximized at p₋ = 0, when it takes the value ½ − γ/(2(2 − γ)) ≤ ½ − γ/4. This completes the proof.
Figure 1 gives an illustration of the proof of the previous lemma; since the y-coordinate of (a) is at least γ more than the x-coordinate and (b) lies closer to (a) than to (1, 1), the y-coordinate of (b) is at least γ/2 more than the x-coordinate, which means that the advantage is at least γ/4.
We will also need the following simple lemma which shows that a classifier that is good on both the
positive and the negative examples, when viewed as a ranking function, achieves a good AUC.
Figure 1: The curved line represents the ROC curve for ranking function h [plot: true positive rate vs. false positive rate]. The lower black dot (a) corresponds to the value θ and is located at (p₋, 1 − p₊). The straight line connecting (0, 0) and (1, 1), which corresponds to a completely random ranking, is given for reference. The dashed line (covered by the solid line for 0 ≤ x ≤ .16) represents the ROC curve for a ranker h′ which agrees with h on those x for which h(x) ≥ θ but randomly ranks those x for which h(x) < θ. The upper black dot (b) is at the point of intersection between the ROC curve for h′ and the line y = 1 − x; its coordinates are (ε, 1 − ε). The randomized classifier g is equivalent to thresholding h′ with a value θ′ corresponding to this point.
Lemma 5 Let h : X → {−1, 1} and suppose that Pr_{x∼D⁺}[h(x) = 1] = 1 − ε₊ and Pr_{x∼D⁻}[h(x) = −1] = 1 − ε₋. Then we have AUC(h; D) = 1 − (ε₊ + ε₋)/2.

Proof: We have

AUC(h; D) = (1 − ε₊)(1 − ε₋) + (ε₊(1 − ε₋) + ε₋(1 − ε₊))/2 = 1 − (ε₊ + ε₋)/2.
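A quick numeric check of Lemma 5, reusing the auc() helper sketched after the AUC definition in Section 2 (the example values are ours):

```python
pos_scores = [1] * 9 + [-1] * 1   # eps_plus = 0.1
neg_scores = [-1] * 7 + [1] * 3   # eps_minus = 0.3
print(auc(pos_scores, neg_scores))  # 0.8
print(1 - (0.1 + 0.3) / 2)          # 0.8, matching the lemma
```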
Proof of Theorem 2: AUCBoost works by running AdaBoost on ½D⁺ + ½D⁻. In round t, AdaBoost passes its reweighted distribution D_t to the weak ranker, and then uses the process of Lemma 4 to convert the resulting weak ranking function to a classifier h_t with two-sided advantage γ/4. Since h_t has two-sided advantage γ/4, no matter how D_t decomposes into a mixture of D_t⁺ and D_t⁻, it must be the case that Pr_{(x,y)∼D_t}[h_t(x) ≠ y] ≤ ½ − γ/4.

The analysis of AdaBoost (see [5]) shows that T = O(log(1/ε)/γ²) rounds are sufficient for H to have error rate at most ε under ½D⁺ + ½D⁻. Lemma 5 now gives that the classifier H(x) is a ranking function with AUC at least 1 − ε.

For the final assertion of the theorem, note that at each round, in order to find the value of θ that defines h_t the algorithm needs to minimize the sum of the error rates on the positive and negative examples. This can be done by sorting the examples using the weak ranking function (in O(m log m) time steps) and processing the examples in the resulting order, keeping running counts of the number of errors of each type.
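A hedged sketch of that per-round threshold search (names are ours; w encodes the current boosting distribution over a finite sample):

```python
# Pick theta minimizing p_+ + p_- for a weak ranker's scores under weights w.
import numpy as np

def best_threshold(scores, labels, w):
    order = np.argsort(-scores)                  # scan thresholds high to low
    s, y, w = scores[order], labels[order], w[order]
    wp, wn = w[y == 1].sum(), w[y == -1].sum()
    tp = fp = 0.0
    best_sum, best_theta = 1.0, np.inf           # theta = +inf predicts all -1
    for i in range(len(s)):
        if y[i] == 1:
            tp += w[i]
        else:
            fp += w[i]
        p_plus, p_minus = 1.0 - tp / wp, fp / wn
        if p_plus + p_minus < best_sum:
            best_sum, best_theta = p_plus + p_minus, s[i]
    return best_theta, best_sum                  # best_sum <= 1 - gamma
```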
4 Boosting weak rankers in the presence of misclassification noise
The noise model: independent misclassification noise. The model of independent misclassification noise has been widely studied in computational learning theory. In this framework there is a noise rate η < 1/2, and each example (positive or negative) drawn from distribution D has its true label c(x) independently flipped with probability η before it is given to the learner. We write D^η to denote the resulting distribution over (noise-corrupted) labeled examples (x, y).

Boosting weak rankers in the presence of independent misclassification noise. We now show how the AUC can be boosted arbitrarily close to 1 even if the data given to the booster is corrupted with independent misclassification noise, using weak rankers that are able to tolerate independent misclassification noise. We note that this is in contrast with known results for boosting the accuracy of binary classifiers in the presence of noise; Kalai and Servedio [7] show that no "black-box" boosting algorithm can be guaranteed to boost the accuracy of an arbitrary noise-tolerant weak learner beyond 1 − η in the presence of independent misclassification noise at rate η.
Figure 2: The branching program produced by the boosting algorithm. Each node v_{i,t} is labeled with a weak classifier h_{i,t}; left edges correspond to −1 and right edges to 1. [diagram: nodes v_{0,1}; v_{0,2}, v_{1,2}; v_{0,3}, …, v_{2,3}; …; v_{0,T+1}, …, v_{T,T+1}; terminal nodes with index below T/2 output −1, the rest output 1.]
As in the previous section we begin by abstracting away sampling issues and using a model in which
the booster passes a distribution to a weak ranker. Sampling issues will be treated in Section 5.
Definition 6 A noise-tolerant weak ranker with advantage γ is an algorithm with the following property: for any noise rate η < 1/2, given a noisy distribution D^η, the algorithm outputs a ranking function h : X → R such that AUC(h; D) ≥ ½ + γ.
Our algorithm for boosting the AUC in the presence of noise uses the Basic MartiBoost algorithm
(see Section 4 of [9]). This algorithm boosts any two-sided weak learner to arbitrarily high accuracy
and works in a series of rounds. Before round t the space of labeled examples is partitioned into a series of bins B_{0,t}, …, B_{t−1,t}. (The original bin B_{0,1} consists of the entire space.) In the t-th round the algorithm first constructs distributions D_{0,t}, …, D_{t−1,t} by conditioning the original distribution D on membership in B_{0,t}, …, B_{t−1,t} respectively. It then calls a two-sided weak learner t times using each of D_{0,t}, …, D_{t−1,t}, getting weak classifiers h_{0,t}, …, h_{t−1,t} respectively. Having done this, it creates t + 1 bins for the next round by assigning each element (x, y) of B_{i,t} to B_{i,t+1} if h_{i,t}(x) = −1 and to B_{i+1,t+1} otherwise. Training proceeds in this way for a given number T of
rounds, which is an input parameter of the algorithm.
The output of Basic MartiBoost is a layered branching program defined as follows. There is a node v_{i,t} for each round 1 ≤ t ≤ T + 1 and each index 0 ≤ i < t (that is, for each bin constructed during training). An item x is routed through the branching program the same way a labeled example (x, y) would have been routed during the training phase: it starts in node v_{0,1}, and from each node v_{i,t} it goes to v_{i,t+1} if h_{i,t}(x) = −1, and to v_{i+1,t+1} otherwise. When the item x arrives at a terminal node of the branching program in layer T + 1, it is at some node v_{j,T+1}. The prediction is 1 if j ≥ T/2 and is −1 if j < T/2; in other words, the prediction is according to the majority vote of the weak classifiers that were encountered along the path through the branching program that the example followed. See Figure 2.
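A short sketch of this routing and majority-vote prediction, assuming h[t][i] holds the weak classifier (returning ±1) at the node reached after t rounds with i right-moves so far (the indexing is ours):

```python
def route_and_predict(x, h, T):
    i = 0                              # start at node v_{0,1}
    for t in range(T):
        if h[t][i](x) == 1:            # right edge on +1 ...
            i += 1
        # ... left edge on -1 keeps the index i unchanged
    return 1 if i >= T / 2 else -1     # majority vote along the path
```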
The following lemma is proved in [9]. (The crux of the proof is the observation that positive (respectively, negative) examples are routed through the branching program according to a random walk that is biased to the right (respectively, left); hence the name "martingale boosting.")

Lemma 7 ([9]) Suppose that Basic MartiBoost is provided with a hypothesis h_{i,t} with two-sided advantage γ w.r.t. D_{i,t} at each node v_{i,t}. Then for T = O(log(1/ε)/γ²), Basic MartiBoost constructs a branching program H such that D⁺[H(x) = −1] ≤ ε and D⁻[H(x) = 1] ≤ ε.
We now describe our noise-tolerant AUC boosting algorithm, which we call Basic MartiRank.
Given access to a noise-tolerant weak ranker A with advantage γ, at each node v_{i,t} the Basic MartiRank algorithm runs A and proceeds as described in Lemma 4 to obtain a weak classifier h_{i,t}. Basic MartiRank runs Basic MartiBoost with T = O(log(1/ε)/γ²) and simply uses the resulting classifier H as its ranking function. The following theorem shows that Basic MartiRank is an effective AUC booster in the presence of independent misclassification noise:

Theorem 8 Fix any η < 1/2 and any ε > 0. Given access to D^η and a noise-tolerant weak ranker A with advantage γ, Basic MartiRank outputs a branching program H such that AUC(H; D) ≥ 1 − ε.
Proof: Fix any node v_{i,t} in the branching program. The crux of the proof is the following simple observation: for a labeled example (x, y), the route through the branching program that is taken by (x, y) is determined completely by the predictions of the base classifiers, i.e. only by x, and is unaffected by the value of y. Consequently if D_{i,t} denotes the original noiseless distribution D conditioned on reaching v_{i,t}, then the noisy distribution conditioned on reaching v_{i,t}, i.e. (D^η)_{i,t}, is simply D_{i,t} corrupted with independent misclassification noise, i.e. (D_{i,t})^η. So each time the noise-tolerant weak ranker A is invoked at a node v_{i,t}, it is indeed the case that the distribution that it is given is an independent misclassification noise distribution. Consequently A does construct weak rankers with AUC at least 1/2 + γ, and the conversion of Lemma 4 yields weak classifiers that have advantage γ/4 with respect to the underlying distribution D_{i,t}. Given this, Lemma 7 implies that the final classifier H has error at most ε on both positive and negative examples drawn from the original distribution D, and Lemma 5 then implies that H, viewed as a ranker, achieves AUC at least 1 − ε.
In [9], a more complex variant of Basic MartiBoost, called Noise-Tolerant SMartiBoost, is presented
and is shown to boost any noise-tolerant weak learning algorithm to any accuracy less than 1 − η in the presence of independent misclassification noise. In contrast, here we are using just the Basic MartiBoost algorithm itself, and can achieve any AUC value 1 − ε even for ε < η.
5 Implementing MartiRank with a distribution oracle
In this section we analyze learning from random examples. Formally, we assume that the weak
ranker is given access to an oracle for the noisy distribution D^η. We thus now view a noise-tolerant weak ranker with advantage γ as an algorithm A with the following property: for any noise rate η < 1/2, given access to an oracle for D^η, the algorithm outputs a ranking function h : X → R such that AUC(h; D) ≥ ½ + γ.
We let m_A denote the number of examples from each class that suffice for A to construct a ranking function as described above. In other words, if A is provided with a sample of draws from D^η such that each class, positive and negative, has at least m_A points in the sample with that true label, then algorithm A outputs a γ-advantage weak ranking function. (Note that for simplicity we are assuming here that the weak ranker always constructs a weak ranking function with the desired advantage, i.e. we gloss over the usual confidence parameter δ; this can be handled with an entirely standard analysis.)
In order to achieve a computationally efficient algorithm in this setting we must change the MartiRank algorithm somewhat; we call the new variant Sampling MartiRank, or SMartiRank. We prove
that SMartiRank is computationally efficient, has moderate sample complexity, and efficiently generates a high-accuracy final ranking function with respect to the underlying distribution D.
Our approach follows the same general lines as [9] where an oracle implementation is presented
for the MartiBoost algorithm. The main challenge in [9] is the following: for each node v_{i,t} in the branching program, the boosting algorithm considered there must simulate a balanced version of the induced distribution D_{i,t} which puts equal weight on positive and negative examples. If only a tiny fraction of examples drawn from D are (say) positive and reach v_{i,t}, then it is very inefficient to simulate this balanced distribution (and in a noisy scenario, as discussed earlier, if the noise rate is high relative to the frequency of the desired class then it may in fact be impossible to simulate the balanced distribution). The solution in [9] is to "freeze" any such node and simply classify any
example that reaches it as negative; the analysis argues that since only a tiny fraction of positive
examples reach such nodes, this freezing only mildly degrades the accuracy of the final hypothesis.
In the ranking scenario that we now consider, we do not need to construct balanced distributions, but
we do need to obtain a non-negligible number of examples from each class in order to run the weak
learner at a given node. So as in [9] we still freeze some nodes, but with a twist: we now freeze
nodes which have the property that for some class label (positive or negative), only a tiny fraction of
examples from D with that class label reach the node. With this criterion for freezing we can prove
that the final classifier constructed has high accuracy both on positive and negative examples, which
is what we need to achieve good AUC. We turn now to the details.
Given a node v_{i,t} and a bit b ∈ {−1, 1}, let p^b_{i,t} denote D[x reaches v_{i,t} and c(x) = b]. The SMartiRank algorithm is like Basic MartiBoost but with the following difference: for each node v_{i,t} and each value b ∈ {−1, 1}, if

p^b_{i,t} < ε · D[c(x) = b] / (T(T + 1))  (2)

then the node v_{i,t} is "frozen," i.e. it is labeled with the bit −b and is established as a terminal node with no outgoing edges. (If this condition holds for both values of b at a particular node v_{i,t} then the node is frozen and either output value may be used as the label.) The following theorem establishes that if SMartiRank is given weak classifiers with two-sided advantage at each node that is not frozen, it will construct a hypothesis with small error rate on both positive and negative examples:
Theorem 9 Suppose that the SMartiRank algorithm as described above is provided with a hypothesis h_{i,t} that has two-sided advantage γ with respect to D_{i,t} at each node v_{i,t} that is not frozen. Then for T = O(log(1/ε)/γ²), the final branching program hypothesis H that SMartiRank constructs will have D⁺[H(x) = −1] ≤ ε and D⁻[H(x) = 1] ≤ ε.
Proof: We analyze D⁺[h(x) = −1]; the other case is symmetric.

Given an unlabeled instance x ∈ X, we say that x freezes at node v_{i,t} if x's path through the branching program causes it to terminate at a node v_{i,t} with t < T + 1 (i.e. at a node v_{i,t} which was frozen by SMartiRank). We have D[x freezes and c(x) = 1] = ∑_{i,t} D[x freezes at v_{i,t} and c(x) = 1] < ∑_{i,t} ε · D[c(x) = 1]/(T(T + 1)) ≤ (ε/2) · D[c(x) = 1]. Consequently we have

D⁺[x freezes] = D[x freezes and c(x) = 1] / D[c(x) = 1] < ε/2.  (3)

Naturally, D⁺[h(x) = −1] = D⁺[(h(x) = −1) & (x freezes)] + D⁺[(h(x) = −1) & (x does not freeze)]. By (3), this is at most ε/2 + D⁺[(h(x) = −1) & (x does not freeze)]. Arguments identical to those in the last two paragraphs of the proof of Theorem 3 in [9] show that D⁺[(h(x) = −1) & (x does not freeze)] ≤ ε/2, and we are done.
We now describe how SMartiRank can be run given oracle access to D^η and sketch the analysis of the required sample complexity (some details are omitted because of space limits). For simplicity of presentation we shall assume that the booster is given the value p def= min{D[c(x) = −1], D[c(x) = 1]}; we note that if p is not given a priori, a standard "guess and halve" technique can be used to efficiently obtain a value that is within a multiplicative factor of two of p, which is easily seen to suffice. We also make the standard assumption (see [7, 9]) that the noise rate η is known; this assumption can similarly be removed by having the algorithm "guess and check" the value to sufficiently fine granularity. Also, the confidence can be analyzed using the standard appeal to the union bound; details are omitted.

SMartiRank will replace (2) with a comparison of sample estimates of the two quantities. To allow for the fact that they are just estimates, it will be more conservative, and freeze when the estimate of p^b_{i,t} is at most ε/(4T(T + 1)) times the estimate of D[c(x) = b].
We first observe that for any distribution D and any bit b, we have Pr_{(x,y)∼D^η}[y = b] = η + (1 − 2η) Pr_{(x,c(x))∼D}[c(x) = b], which is equivalent to D[c(x) = b] = (D^η[y = b] − η)/(1 − 2η). Consequently, given an empirical estimate of D^η[y = b] that is accurate to within an additive ±p(1 − 2η)/10 (which can easily be obtained from O(1/(p²(1 − 2η)²)) draws from D^η), it is possible to estimate D[c(x) = b] to within an additive ±p/10, and thus to estimate the RHS of (2) to within an additive ±εp/(10T(T + 1)).
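As a quick illustration of the de-biasing identity above (assuming the noise rate eta < 1/2 is known; the example values are ours):

```python
def denoise_class_prob(p_noisy, eta):
    # Invert Pr[y = b] = eta + (1 - 2*eta) * Pr[c(x) = b].
    return (p_noisy - eta) / (1 - 2 * eta)

print(denoise_class_prob(0.38, 0.3))  # recovers Pr[c(x) = b] = 0.2
```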
Now in order to determine whether node v_{i,t} should be frozen, we must compare this estimate with a similarly accurate estimate of p^b_{i,t} (arguments similar to those of, e.g., Section 6.3 of [9] can be used to show that it suffices to run the algorithm using these estimated values). We have

p^b_{i,t} = D[x reaches v_{i,t}] · D[c(x) = b | x reaches v_{i,t}] = D^η[x reaches v_{i,t}] · D_{i,t}[c(x) = b]
         = D^η[x reaches v_{i,t}] · ((D^η_{i,t}[y = b] − η)/(1 − 2η)).

A standard analysis (see e.g. Chapter 5 of [8]) shows that this quantity can be estimated to additive accuracy ±τ using poly(1/τ, 1/(1 − 2η)) many calls to D^η (briefly, if D^η[x reaches v_{i,t}] is less than τ(1 − 2η) then an estimate of 0 is good enough, while if it is greater than τ(1 − 2η) then a τ-accurate estimate of the second multiplicand can be obtained using O(1/(τ³(1 − 2η)³)) draws from D^η, since at least a τ(1 − 2η) fraction of draws will reach v_{i,t}). Thus for each v_{i,t}, we can determine whether to freeze it in the execution of SMartiRank using poly(T, 1/ε, 1/p, 1/(1 − 2η)) draws from D^η.
For each of the nodes that are not frozen, we must run the noise-tolerant weak ranker A using the distribution D^η_{i,t}. As discussed at the beginning of this section, this requires that we obtain a sample from D^η_{i,t} containing at least m_A examples whose true label belongs to each class. The expected number of draws from D^η that must be made in order to receive an example from a given class is 1/p, and since v_{i,t} is not frozen, the expected number of draws from D^η belonging to a given class that must be made in order to simulate a draw from D^η_{i,t} belonging to that class is O(T²/ε). Thus, O(T² m_A/(εp)) many draws from D^η are required in order to run the weak learner A at any particular node. Since there are O(T²) many nodes overall, we have that all in all O(T⁴ m_A/(εp)) many draws from D^η are required, in addition to the poly(T, 1/ε, 1/p, 1/(1 − 2η)) draws required to identify which nodes to freeze. Recalling that T = O(log(1/ε)/γ²), all in all we have:
Theorem 10 Let D be a nontrivial distribution over X, p = min{D[c(x) = −1], D[c(x) = 1]}, and η < ½. Given access to an oracle for D^η and a noise-tolerant weak ranker A with advantage γ, the SMartiRank algorithm makes m_A · poly(1/ε, 1/γ, 1/(1 − 2η), 1/p) calls to D^η, and with probability 1 − δ outputs a branching program H such that AUC(H; D) ≥ 1 − ε.
Acknowledgement
We are very grateful to Naoki Abe for suggesting the problem of boosting the AUC.
References
[1] A. P. Bradley. Use of the area under the ROC curve in the evaluation of machine learning algorithms. Pattern Recognition, 30:1145–1159, 1997.
[2] C. Cortes and M. Mohri. AUC optimization vs. error rate minimization. In NIPS 2003, 2003.
[3] T. Fawcett. ROC graphs: Notes and practical considerations for researchers. Technical Report HPL-2003-4, HP, 2003.
[4] Y. Freund, R. Iyer, R. E. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 4(6):933–970, 2004.
[5] Y. Freund and R. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997.
[6] J. Hanley and B. McNeil. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology, 143(1):29–36, 1982.
[7] A. Kalai and R. Servedio. Boosting in the presence of noise. Journal of Computer & System Sciences, 71(3):266–290, 2005. Preliminary version in Proc. STOC'03.
[8] M. Kearns and U. Vazirani. An introduction to computational learning theory. MIT Press, Cambridge, MA, 1994.
[9] P. Long and R. Servedio. Martingale boosting. In Proceedings of the Eighteenth Annual Conference on Computational Learning Theory (COLT), pages 79–94, 2005.
[10] F. Provost, T. Fawcett, and R. Kohavi. The case against accuracy estimation for comparing induction algorithms. ICML, 1998.
[11] C. Rudin, C. Cortes, M. Mohri, and R. E. Schapire. Margin-based ranking meets boosting in the middle. COLT, 2005.
[12] J. A. Swets. Signal detection theory and ROC analysis in psychology and diagnostics: Collected papers. Lawrence Erlbaum Associates, 1995.
2,478 | 3,248 | Direct Importance Estimation with Model Selection
and Its Application to Covariate Shift Adaptation
Masashi Sugiyama
Tokyo Institute of Technology
[email protected]
Hisashi Kashima
IBM Research
[email protected]
Shinichi Nakajima
Nikon Corporation
[email protected]
Paul von Bünau
Technical University Berlin
[email protected]
Motoaki Kawanabe
Fraunhofer FIRST
[email protected]
Abstract
A situation where training and test samples follow different input distributions is
called covariate shift. Under covariate shift, standard learning methods such as
maximum likelihood estimation are no longer consistent; weighted variants according to the ratio of test and training input densities are consistent. Therefore,
accurately estimating the density ratio, called the importance, is one of the key issues in covariate shift adaptation. A naive approach to this task is to first estimate
training and test input densities separately and then estimate the importance by
taking the ratio of the estimated densities. However, this naive approach tends to
perform poorly since density estimation is a hard task particularly in high dimensional cases. In this paper, we propose a direct importance estimation method that
does not involve density estimation. Our method is equipped with a natural cross
validation procedure and hence tuning parameters such as the kernel width can be
objectively optimized. Simulations illustrate the usefulness of our approach.
1 Introduction
A common assumption in supervised learning is that training and test samples follow the same
distribution. However, this basic assumption is often violated in practice and then standard machine
learning methods do not work as desired. A situation where the input distribution P (x) is different
in the training and test phases but the conditional distribution of output values, P (y|x), remains
unchanged is called covariate shift [8]. In many real-world applications such as robot control [10],
bioinformatics [1], spam filtering [3], brain-computer interfacing [9], or econometrics [5], covariate
shift is conceivable and thus learning under covariate shift is gathering a lot of attention these days.
The influence of covariate shift could be alleviated by weighting the log likelihood terms according
to the importance [8]: w(x) = p_te(x)/p_tr(x), where p_te(x) and p_tr(x) are the test and training input
densities. Since the importance is usually unknown, the key issue of covariate shift adaptation is
how to accurately estimate the importance.
A naive approach to importance estimation would be to first estimate the training and test densities
separately from training and test input samples, and then estimate the importance by taking the ratio
of the estimated densities. However, density estimation is known to be a hard problem particularly
in high-dimensional cases. Therefore, this naive approach may not be effective?directly estimating
the importance without estimating the densities would be more promising.
Following this spirit, the kernel mean matching (KMM) method has been proposed recently [6],
which directly gives importance estimates without going through density estimation. KMM is shown
to work well, given that tuning parameters such as the kernel width are chosen appropriately. Intuitively, model selection of importance estimation algorithms (such as KMM) is straightforward
by cross validation (CV) over the performance of subsequent learning algorithms. However, this is
highly unreliable since the ordinary CV score is heavily biased under covariate shift; for unbiased
estimation of the prediction performance of subsequent learning algorithms, the CV procedure itself
needs to be importance-weighted [9]. Since the importance weight has to have been fixed when
model selection is carried out by importance weighted CV, it can not be used for model selection of
importance estimation algorithms.
The above fact implies that model selection of importance estimation algorithms should be performed within the importance estimation step in an unsupervised manner. However, since KMM
can only estimate the values of the importance at training input points, it can not be directly applied
in the CV framework; an out-of-sample extension is needed, but this seems to be an open research
issue currently.
In this paper, we propose a new importance estimation method which can overcome the above
problems, i.e., the proposed method directly estimates the importance without density estimation
and is equipped with a natural model selection procedure. Our basic idea is to find an importance
estimate w(x)
b
such that the Kullback-Leibler divergence from the true test input density p te (x)
to its estimate pbte (x) = w(x)p
b
tr (x) is minimized. We propose an algorithm that can carry out
this minimization without explicitly modeling ptr (x) and pte (x). We call the proposed method the
Kullback-Leibler Importance Estimation Procedure (KLIEP). The optimization problem involved in
KLIEP is convex, so the unique global solution can be obtained. Furthermore, the solution tends to
be sparse, which contributes to reducing the computational cost in the test phase.
Since KLIEP is based on the minimization of the Kullback-Leibler divergence, its model selection
can be naturally carried out through a variant of likelihood CV, which is a standard model selection
technique in density estimation. A key advantage of our CV procedure is that, not the training
samples, but the test input samples are cross-validated. This highly contributes to improving the
model selection accuracy since the number of training samples is typically limited while test input
samples are abundantly available.
The simulation studies show that KLIEP tends to outperform existing approaches in importance
estimation including the logistic regression based method [2], and it contributes to improving the
prediction performance in covariate shift scenarios.
2 New Importance Estimation Method
In this section, we propose a new importance estimation method.
2.1 Formulation and Notation
Let D (⊂ R^d) be the input domain and suppose we are given i.i.d. training input samples {x_i^tr}_{i=1}^{n_tr} from a training input distribution with density p_tr(x), and i.i.d. test input samples {x_j^te}_{j=1}^{n_te} from a test input distribution with density p_te(x). We assume that p_tr(x) > 0 for all x ∈ D. Typically, the number n_tr of training samples is rather small, while the number n_te of test input samples is very large. The goal of this paper is to develop a method of estimating the importance w(x) from {x_i^tr}_{i=1}^{n_tr} and {x_j^te}_{j=1}^{n_te}:

    w(x) = p_te(x) / p_tr(x).

Our key restriction is that we avoid estimating the densities p_te(x) and p_tr(x) when estimating the importance w(x).
2.2 Kullback-Leibler Importance Estimation Procedure (KLIEP)
Let us model the importance w(x) by the following linear model:
    ŵ(x) = Σ_{ℓ=1}^{b} α_ℓ φ_ℓ(x),                                          (1)

where {α_ℓ}_{ℓ=1}^{b} are parameters to be learned from data samples and {φ_ℓ(x)}_{ℓ=1}^{b} are basis functions such that

    φ_ℓ(x) ≥ 0 for all x ∈ D and for ℓ = 1, 2, . . . , b.
Note that b and {φ_ℓ(x)}_{ℓ=1}^{b} could be dependent on the samples {x_i^tr}_{i=1}^{n_tr} and {x_j^te}_{j=1}^{n_te}, i.e., kernel models are also allowed; we explain how the basis functions {φ_ℓ(x)}_{ℓ=1}^{b} are chosen in Section 2.3. Using the model ŵ(x), we can estimate the test input density p_te(x) by

    p̂_te(x) = ŵ(x) p_tr(x).

We determine the parameters {α_ℓ}_{ℓ=1}^{b} in the model (1) so that the Kullback-Leibler divergence from p_te(x) to p̂_te(x) is minimized:

    KL[p_te(x) ‖ p̂_te(x)] = ∫_D p_te(x) log( p_te(x) / (ŵ(x) p_tr(x)) ) dx
                           = ∫_D p_te(x) log( p_te(x) / p_tr(x) ) dx − ∫_D p_te(x) log ŵ(x) dx.
Since the first term in the last equation is independent of {α_ℓ}_{ℓ=1}^{b}, we ignore it and focus on the second term. We denote it by J:

    J = ∫_D p_te(x) log ŵ(x) dx                                             (2)
      ≈ (1/n_te) Σ_{j=1}^{n_te} log ŵ(x_j^te)
      = (1/n_te) Σ_{j=1}^{n_te} log( Σ_{ℓ=1}^{b} α_ℓ φ_ℓ(x_j^te) ),
where the empirical approximation based on the test input samples {x_j^te}_{j=1}^{n_te} is used from the first line to the second line above. This is our objective function to be maximized with respect to the parameters {α_ℓ}_{ℓ=1}^{b}, which is concave [4]. Note that the above objective function only involves the test input samples {x_j^te}_{j=1}^{n_te}, i.e., we did not use the training input samples {x_i^tr}_{i=1}^{n_tr} yet. As shown below, {x_i^tr}_{i=1}^{n_tr} will be used in the constraint.
ŵ(x) is an estimate of the importance w(x), which is non-negative by definition. Therefore, it is natural to impose ŵ(x) ≥ 0 for all x ∈ D, which can be achieved by restricting

    α_ℓ ≥ 0 for ℓ = 1, 2, . . . , b.

In addition to the non-negativity, ŵ(x) should be properly normalized since p̂_te(x) (= ŵ(x) p_tr(x)) is a probability density function:

    1 = ∫_D p̂_te(x) dx = ∫_D ŵ(x) p_tr(x) dx                               (3)
      ≈ (1/n_tr) Σ_{i=1}^{n_tr} ŵ(x_i^tr)
      = (1/n_tr) Σ_{i=1}^{n_tr} Σ_{ℓ=1}^{b} α_ℓ φ_ℓ(x_i^tr),

where the empirical approximation based on the training input samples {x_i^tr}_{i=1}^{n_tr} is used from the first line to the second line above.
Now our optimization criterion is summarized as follows.
    maximize_{ {α_ℓ}_{ℓ=1}^{b} }   Σ_{j=1}^{n_te} log( Σ_{ℓ=1}^{b} α_ℓ φ_ℓ(x_j^te) )

    subject to   Σ_{i=1}^{n_tr} Σ_{ℓ=1}^{b} α_ℓ φ_ℓ(x_i^tr) = n_tr  and  α_1, α_2, . . . , α_b ≥ 0.

This is a convex optimization problem and the global solution can be obtained, e.g., by simply performing gradient ascent and feasibility satisfaction iteratively. A pseudo code is described in Figure 1-(a). Note that the solution {α̂_ℓ}_{ℓ=1}^{b} tends to be sparse [4], which contributes to reducing the computational cost in the test phase. We refer to the above method as the Kullback-Leibler Importance Estimation Procedure (KLIEP).
(a) KLIEP main code:

    Input: m = {φ_ℓ(x)}_{ℓ=1}^{b}, {x_i^tr}_{i=1}^{n_tr}, and {x_j^te}_{j=1}^{n_te}
    Output: ŵ(x)

    A_{j,ℓ} ← φ_ℓ(x_j^te);
    b_ℓ ← (1/n_tr) Σ_{i=1}^{n_tr} φ_ℓ(x_i^tr);
    Initialize α (> 0) and ε (0 < ε ≪ 1);
    Repeat until convergence
        α ← α + ε Aᵀ(1./Aα);
        α ← α + (1 − bᵀα) b/(bᵀb);
        α ← max(0, α);
        α ← α/(bᵀα);
    end
    ŵ(x) ← Σ_{ℓ=1}^{b} α_ℓ φ_ℓ(x);

(b) KLIEP with model selection:

    Input: M = {m_k | m_k = {φ_ℓ^(k)(x)}_{ℓ=1}^{b}}, {x_i^tr}_{i=1}^{n_tr}, and {x_j^te}_{j=1}^{n_te}
    Output: ŵ(x)

    Split {x_j^te}_{j=1}^{n_te} into R disjoint subsets {X_r^te}_{r=1}^{R};
    for each model m ∈ M
        for each split r = 1, . . . , R
            ŵ_r(x) ← KLIEP(m, {x_i^tr}_{i=1}^{n_tr}, {X_j^te}_{j≠r});
            Ĵ_r(m) ← (1/|X_r^te|) Σ_{x∈X_r^te} log ŵ_r(x);
        end
        Ĵ(m) ← (1/R) Σ_{r=1}^{R} Ĵ_r(m);
    end
    m̂ ← argmax_{m∈M} Ĵ(m);
    ŵ(x) ← KLIEP(m̂, {x_i^tr}_{i=1}^{n_tr}, {x_j^te}_{j=1}^{n_te});

Figure 1: KLIEP algorithm in pseudo code. "./" indicates the element-wise division and ᵀ denotes the transpose. Inequalities and the "max" operation for a vector are applied element-wise.
2.3 Model Selection by Likelihood Cross Validation
The performance of KLIEP depends on the choice of basis functions {φ_ℓ(x)}_{ℓ=1}^{b}. Here we explain
how they can be appropriately chosen from data samples.
Since KLIEP is based on the maximization of the score J (see Eq. (2)), it would be natural to select the model such that J is maximized. The expectation over p_te(x) involved in J can be numerically approximated by likelihood cross validation (LCV) as follows: First, divide the test samples {x_j^te}_{j=1}^{n_te} into R disjoint subsets {X_r^te}_{r=1}^{R}. Then obtain an importance estimate ŵ_r(x) from {X_j^te}_{j≠r} and approximate the score J using X_r^te as

    Ĵ_r = (1/|X_r^te|) Σ_{x∈X_r^te} log ŵ_r(x).

We repeat this procedure for r = 1, 2, . . . , R, compute the average of Ĵ_r over all r, and use the average Ĵ as an estimate of J:
    Ĵ = (1/R) Σ_{r=1}^{R} Ĵ_r.                                              (4)

For model selection, we compute Ĵ for all model candidates (the basis functions {φ_ℓ(x)}_{ℓ=1}^{b} in the current setting) and choose the one that maximizes Ĵ. A pseudo code of the LCV procedure is summarized in Figure 1-(b).
One of the potential limitations of CV in general is that it is not reliable in small sample cases
since data splitting by CV further reduces the sample size. On the other hand, in our CV procedure,
the data splitting is performed over the test input samples, not over the training samples. Since we
typically have a large number of test input samples, our CV procedure does not suffer from the small
sample problem.
A good model may be chosen by the above CV procedure, given that a set of promising model
candidates is prepared. As model candidates, we propose using a Gaussian kernel model centered at
the test input points {x_j^te}_{j=1}^{n_te}, i.e.,

    ŵ(x) = Σ_{ℓ=1}^{n_te} α_ℓ K_σ(x, x_ℓ^te),

where K_σ(x, x′) is the Gaussian kernel with kernel width σ:

    K_σ(x, x′) = exp( −‖x − x′‖² / (2σ²) ).                                 (5)
The reason why we chose the test input points {x_j^te}_{j=1}^{n_te} as the Gaussian centers, not the training input points {x_i^tr}_{i=1}^{n_tr}, is as follows. By definition, the importance w(x) tends to take large values if the training input density p_tr(x) is small and the test input density p_te(x) is large; conversely, w(x) tends to be small (i.e., close to zero) if p_tr(x) is large and p_te(x) is small. When a function is approximated by a Gaussian kernel model, many kernels may be needed in the region where the output of the target function is large; on the other hand, only a small number of kernels would be enough in the region where the output of the target function is close to zero. Following this heuristic, we decided to allocate many kernels at high test input density regions, which can be achieved by setting the Gaussian centers at the test input points {x_j^te}_{j=1}^{n_te}.

Alternatively, we may locate (n_tr + n_te) Gaussian kernels at both {x_i^tr}_{i=1}^{n_tr} and {x_j^te}_{j=1}^{n_te}. However, in our preliminary experiments, this did not further improve the performance, but slightly increased the computational cost. Since n_te is typically very large, just using all the test input points {x_j^te}_{j=1}^{n_te} as Gaussian centers is already computationally rather demanding. To ease this problem, we practically propose using a subset of {x_j^te}_{j=1}^{n_te} as Gaussian centers for computational efficiency, i.e.,

    ŵ(x) = Σ_{ℓ=1}^{b} α_ℓ K_σ(x, c_ℓ),                                     (6)

where c_ℓ is a template point randomly chosen from {x_j^te}_{j=1}^{n_te} and b (≤ n_te) is a prefixed number. In the rest of this paper, we fix the number of template points at

    b = min(100, n_te),

and optimize the kernel width σ by the above CV procedure.
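To illustrate the full procedure of Figure 1-(b) with the Gaussian kernel model (6), the sketch below cross-validates the kernel width over the test sample; kliep_alpha is the sketch given after Figure 1, and the fold count, number of centers, and random seed are illustrative choices, not values prescribed by the paper.

    def gauss_basis(X, C, sigma):
        # phi_l(x) = exp(-||x - c_l||^2 / (2 sigma^2)); returns a (len(X), b) matrix
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def kliep_lcv(x_tr, x_te, sigmas, R=5, n_centers=100, seed=0):
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(x_te), size=min(n_centers, len(x_te)), replace=False)
        C = x_te[idx]                                    # template points c_l
        folds = np.array_split(rng.permutation(len(x_te)), R)
        best_sigma, best_J = None, -np.inf
        for sigma in sigmas:
            J = 0.0
            for r in range(R):
                rest = np.concatenate([folds[s] for s in range(R) if s != r])
                alpha = kliep_alpha(gauss_basis(x_te[rest], C, sigma),
                                    gauss_basis(x_tr, C, sigma))
                w_out = gauss_basis(x_te[folds[r]], C, sigma) @ alpha
                J += np.mean(np.log(w_out)) / R          # held-out estimate of J
            if J > best_J:
                best_sigma, best_J = sigma, J
        alpha = kliep_alpha(gauss_basis(x_te, C, best_sigma),
                            gauss_basis(x_tr, C, best_sigma))
        return best_sigma, C, alpha    # w_hat(x) = gauss_basis(x, C, best_sigma) @ alpha

Note that the data splitting here is over the abundant test inputs, which is why the held-out log score remains a reliable estimate of J even when n_tr is small.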
3 Experiments
In this section, we compare the experimental performance of KLIEP and existing approaches.
3.1 Importance Estimation for Artificial Data Sets
Let p_tr(x) be the d-dimensional Gaussian density with mean (0, 0, . . . , 0)ᵀ and covariance identity, and p_te(x) be the d-dimensional Gaussian density with mean (1, 0, . . . , 0)ᵀ and covariance identity. The task is to estimate the importance at the training input points:

    w_i = w(x_i^tr) = p_te(x_i^tr) / p_tr(x_i^tr)   for i = 1, 2, . . . , n_tr.
We compare the following methods:

KLIEP(σ): {w_i}_{i=1}^{n_tr} are estimated by KLIEP with the Gaussian kernel model (6). Since the performance of KLIEP is dependent on the kernel width σ, we test several different values of σ.

KLIEP(CV): The kernel width σ in KLIEP is chosen based on 5-fold LCV (see Section 2.3).

KDE(CV): {w_i}_{i=1}^{n_tr} are estimated through the kernel density estimator (KDE) with the Gaussian kernel. The kernel widths for the training and test densities are chosen separately based on 5-fold likelihood cross-validation.

KMM(σ): {w_i}_{i=1}^{n_tr} are estimated by kernel mean matching (KMM) [6]. The performance of KMM is dependent on tuning parameters such as B, ε, and σ. We set B = 1000 and ε = (√n_tr − 1)/√n_tr following the paper [6], and test several different values of σ. We used the CPLEX software for solving quadratic programs in the experiments.

LogReg(σ): Importance weights are estimated by logistic regression (LogReg) [2]. The Gaussian kernels are used as basis functions. Since the performance of LogReg is dependent on the kernel width σ, we test several different values of σ. We used the LIBLINEAR implementation of logistic regression for the experiments [7].

LogReg(CV): The kernel width σ in LogReg is chosen based on 5-fold CV.
[Two-panel figure: (a) average NMSE versus the input dimension d; (b) average NMSE versus the number of training samples n_tr; one curve per method, y-axis in log scale.]
Figure 2: NMSEs averaged over 100 trials in log scale.
We fixed the number of test input points at n_te = 1000 and consider the following two settings for the number n_tr of training samples and the input dimension d:

(a) n_tr = 100 and d = 1, 2, . . . , 20,
(b) d = 10 and n_tr = 50, 60, . . . , 150.

We run the experiments 100 times for each d, each n_tr, and each method, and evaluate the quality of the importance estimates {ŵ_i}_{i=1}^{n_tr} by the normalized mean squared error (NMSE):

    NMSE = (1/n_tr) Σ_{i=1}^{n_tr} ( ŵ_i / Σ_{i′=1}^{n_tr} ŵ_{i′} − w_i / Σ_{i′=1}^{n_tr} w_{i′} )².
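A direct transcription of this error measure (an illustrative helper; both weight vectors are normalized to sum to one before the squared differences are averaged):

    def nmse(w_hat, w_true):
        # normalized mean squared error between estimated and true importance values
        p = w_hat / w_hat.sum()
        q = w_true / w_true.sum()
        return np.mean((p - q) ** 2)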
NMSEs averaged over 100 trials are plotted in log scale in Figure 2. Figure 2(a) shows that the error
of KDE(CV) sharply increases as the input dimension grows, while KLIEP, KMM, and LogReg
with appropriate kernel widths tend to give smaller errors than KDE(CV). This would be the fruit
of directly estimating the importance without going through density estimation. The graph also shows that the performance of KLIEP, KMM, and LogReg is dependent on the kernel width σ; the
results of KLIEP(CV) and LogReg(CV) show that model selection is carried out reasonably well
and KLIEP(CV) works significantly better than LogReg(CV).
Figure 2(b) shows that the errors of all methods tend to decrease as the number of training samples
grows. Again, KLIEP, KMM, and LogReg with appropriate kernel widths tend to give smaller
errors than KDE(CV). Model selection in KLIEP(CV) and LogReg(CV) works reasonably well and
KLIEP(CV) tends to give significantly smaller errors than LogReg(CV).
Overall, KLIEP(CV) is shown to be a useful method in importance estimation.
3.2 Covariate Shift Adaptation with Regression and Classification Benchmark Data Sets
Here we employ importance estimation methods for covariate shift adaptation in regression and
classification benchmark problems (see Table 1).
Each data set consists of input/output samples {(x_k, y_k)}_{k=1}^{n}. We normalize all the input samples {x_k}_{k=1}^{n} into [0, 1]^d and choose the test samples {(x_j^te, y_j^te)}_{j=1}^{n_te} from the pool {(x_k, y_k)}_{k=1}^{n} as follows. We randomly choose one sample (x_k, y_k) from the pool and accept this with probability min(1, 4(x_k^(c))²), where x_k^(c) is the c-th element of x_k and c is randomly determined and fixed in each trial of experiments; then we remove x_k from the pool regardless of its rejection or acceptance, and repeat this procedure until we accept n_te samples. We choose the training samples {(x_i^tr, y_i^tr)}_{i=1}^{n_tr} uniformly from the rest. Intuitively, in this experiment, the test input density tends to be lower than the training input density when x_k^(c) is small. We set the number of samples at n_tr = 100 and n_te = 500 for all data sets. Note that we only use {(x_i^tr, y_i^tr)}_{i=1}^{n_tr} and {x_j^te}_{j=1}^{n_te} for training regressors or classifiers; the test output values {y_j^te}_{j=1}^{n_te} are used only for evaluating the generalization performance.
We use the following kernel model for regression or classification:

    f̂(x; θ) = Σ_{ℓ=1}^{t} θ_ℓ K_h(x, m_ℓ),

where K_h(x, x′) is the Gaussian kernel (5) and m_ℓ is a template point randomly chosen from {x_j^te}_{j=1}^{n_te}. We set the number of kernels at t = 50. We learn the parameter θ by importance-weighted regularized least squares (IWRLS) [9]:

    θ̂_IWRLS ≡ argmin_θ [ Σ_{i=1}^{n_tr} ŵ(x_i^tr) ( f̂(x_i^tr; θ) − y_i^tr )² + λ‖θ‖² ].       (7)
The solution θ̂_IWRLS is analytically given by

    θ̂ = (Kᵀ Ŵ K + λI)⁻¹ Kᵀ Ŵ y,

where I is the identity matrix and

    y = (y_1^tr, y_2^tr, . . . , y_{n_tr}^tr)ᵀ,   K_{i,ℓ} = K_h(x_i^tr, m_ℓ),   Ŵ = diag(ŵ_1, ŵ_2, . . . , ŵ_{n_tr}).
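The closed-form solution takes a few lines of NumPy; the sketch below is illustrative and uses a linear solve instead of an explicit matrix inverse:

    def iwrls(K, y, w, lam):
        # K: (n_tr, t) design matrix with K[i, l] = K_h(x_tr_i, m_l)
        # y: training outputs; w: importance estimates at the training inputs
        # lam: ridge regularization parameter
        KtW = K.T * w                        # equals K^T W-hat without forming diag(w)
        theta = np.linalg.solve(KtW @ K + lam * np.eye(K.shape[1]), KtW @ y)
        return theta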
The kernel width h and the regularization parameter λ in IWRLS (7) are chosen by 5-fold importance-weighted CV (IWCV) [9]. We compute the IWCV score by

    (1/|Z_r^tr|) Σ_{(x,y)∈Z_r^tr} ŵ(x) L( f̂_r(x), y ),

where

    L(ŷ, y) = (ŷ − y)²                (Regression),
    L(ŷ, y) = (1/2)(1 − sign{ŷ y})    (Classification).
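One way to compute this score is sketched below; the fold interface is our own illustrative convention (each entry carries a model f_r trained with fold r held out), not an API from the paper:

    def iwcv_score(folds, w_hat, loss):
        # folds: list of (f_r, x_val, y_val); w_hat: importance estimates at inputs;
        # loss: squared loss for regression, 0/1 loss for classification
        return np.mean([np.mean(w_hat(x) * loss(f(x), y)) for f, x, y in folds])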
We run the experiments 100 times for each data set and evaluate the mean test error:
    (1/n_te) Σ_{j=1}^{n_te} L( f̂(x_j^te), y_j^te ).
The results are summarized in Table 1, where "Uniform" denotes uniform weights, i.e., no importance weight is used. The table shows that KLIEP(CV) compares favorably with Uniform, implying
that the importance weighted methods combined with KLIEP(CV) are useful for improving the prediction performance under covariate shift. KLIEP(CV) works much better than KDE(CV); actually
KDE(CV) tends to be worse than Uniform, which may be due to high dimensionality. We tested
10 different values of the kernel width σ for KMM and described three representative results in the
table. KLIEP(CV) is slightly better than KMM with the best kernel width. Finally, LogReg(CV)
works reasonably well, but it sometimes performs poorly.
Overall, we conclude that the proposed KLIEP(CV) is a promising method for covariate shift adaptation.
4 Conclusions
In this paper, we addressed the problem of estimating the importance for covariate shift adaptation.
The proposed method, called KLIEP, does not involve density estimation so it is more advantageous
than a naive KDE-based approach particularly in high-dimensional problems. Compared with KMM
Table 1: Mean test error averaged over 100 trials. The numbers in the brackets are the standard deviation. All the error values are normalized so that the mean error by "Uniform" (uniform weighting, or equivalently no importance weighting) is one. For each data set, the best method and comparable ones based on the Wilcoxon signed rank test at the significance level 5% are described in bold face. The upper half are regression data sets taken from DELVE and the lower half are classification data sets taken from IDA. "KMM(σ)" denotes KMM with kernel width σ.
Data      Dim   Uniform      KLIEP(CV)    KDE(CV)      KMM(0.01)    KMM(0.3)     KMM(1)       LogReg(CV)
kin-8fh    8    1.00(0.34)   0.95(0.31)   1.22(0.52)   1.00(0.34)   1.12(0.37)   1.59(0.53)   1.30(0.40)
kin-8fm    8    1.00(0.39)   0.86(0.35)   1.12(0.57)   1.00(0.39)   0.98(0.46)   1.95(1.24)   1.29(0.58)
kin-8nh    8    1.00(0.26)   0.99(0.22)   1.09(0.20)   1.00(0.27)   1.04(0.17)   1.16(0.25)   1.06(0.17)
kin-8nm    8    1.00(0.30)   0.97(0.25)   1.14(0.26)   1.00(0.30)   1.09(0.23)   1.20(0.22)   1.13(0.25)
abalone    7    1.00(0.50)   0.94(0.67)   1.02(0.41)   1.01(0.51)   0.96(0.70)   0.93(0.39)   0.92(0.41)
image     18    1.00(0.51)   0.94(0.44)   0.98(0.45)   0.97(0.50)   0.97(0.45)   1.09(0.54)   0.99(0.48)
ringnorm  20    1.00(0.04)   0.99(0.06)   0.87(0.04)   1.00(0.04)   0.87(0.05)   0.87(0.05)   0.95(0.08)
twonorm   20    1.00(0.58)   0.91(0.52)   1.16(0.71)   0.99(0.50)   0.86(0.55)   0.99(0.70)   0.94(0.59)
waveform  21    1.00(0.45)   0.93(0.34)   1.05(0.47)   1.00(0.44)   0.93(0.32)   0.98(0.31)   0.95(0.34)
Average         1.00(0.38)   0.94(0.35)   1.07(0.40)   1.00(0.36)   0.98(0.37)   1.20(0.47)   1.06(0.37)
which also directly gives importance estimates, KLIEP is practically more useful since it is equipped
with a model selection procedure. Our experiments highlighted these advantages and therefore
KLIEP is shown to be a promising method for covariate shift adaptation.
In KLIEP, we modeled the importance function by a linear (or kernel) model, which resulted in a
convex optimization problem with a sparse solution. However, our framework allows the use of any
models. An interesting future direction to pursue would be to search for a class of models which has
additional advantages.
Finally, the range of application of importance weights is not limited to covariate shift adaptation.
For example, the density ratio could be used for novelty detection. Exploring possible application
areas will be important future directions.
Acknowledgments
This work was supported by MEXT (17700142 and 18300057), the Okawa Foundation, the Microsoft CORE3 Project, and the IBM Faculty Award.
References
[1] P. Baldi and S. Brunak. Bioinformatics: The Machine Learning Approach. MIT Press, Cambridge, 1998.
[2] S. Bickel, M. Brückner, and T. Scheffer. Discriminative learning for differing training and test distributions. In Proceedings of the 24th International Conference on Machine Learning, 2007.
[3] S. Bickel and T. Scheffer. Dirichlet-enhanced spam filtering based on biased samples. In B. Schölkopf,
J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19. MIT Press,
Cambridge, MA, 2007.
[4] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, Cambridge, 2004.
[5] J. J. Heckman. Sample selection bias as a specification error. Econometrica, 47(1):153?162, 1979.
[6] J. Huang, A. Smola, A. Gretton, K. M. Borgwardt, and B. Schölkopf. Correcting sample selection bias
by unlabeled data. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information
Processing Systems 19, pages 601?608. MIT Press, Cambridge, MA, 2007.
[7] C.-J. Lin, R. C. Weng, and S. S. Keerthi. Trust region Newton method for large-scale logistic regression.
Technical report, Department of Computer Science, National Taiwan University, 2007.
[8] H. Shimodaira. Improving predictive inference under covariate shift by weighting the log-likelihood
function. Journal of Statistical Planning and Inference, 90(2):227?244, 2000.
[9] M. Sugiyama, M. Krauledat, and K.-R. Müller. Covariate shift adaptation by importance weighted cross
validation. Journal of Machine Learning Research, 8:985?1005, May 2007.
[10] R. S. Sutton and G. A. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA,
1998.
| 3248 |
2,479 | 3,249 | Computational Equivalence of Fixed Points and No
Regret Algorithms, and Convergence to Equilibria
Satyen Kale
Computer Science Department,
Princeton University
35 Olden St.
Princeton, NJ 08540
[email protected]
Elad Hazan
IBM Almaden Research Center
650 Harry Road
San Jose, CA 95120
[email protected]
Abstract
We study the relation between notions of game-theoretic equilibria which are
based on stability under a set of deviations, and empirical equilibria which are
reached by rational players. Rational players are modeled by players using no
regret algorithms, which guarantee that their payoff in the long run is close to
the maximum they could hope to achieve by consistently deviating from the algorithm's suggested action.
We show that for a given set of deviations over the strategy set of a player, it is
possible to efficiently approximate fixed points of a given deviation if and only if
there exist efficient no regret algorithms resistant to the deviations. Further, we
show that if all players use a no regret algorithm, then the empirical distribution
of their plays converges to an equilibrium.
1
Introduction
We consider a setting where a number of agents need to repeatedly make decisions in the face of
uncertainty. In each round, the agent obtains a payoff based on the decision she chose. Each agent
would like to be able to maximize her payoff. While this might seem like a natural objective, it
may be impossible to achieve without placing restrictions on the kind of payoffs that can arise. For
instance, if the payoffs were adversarially chosen, then the agent's task would become essentially
hopeless.
In such a situation, one way for the agent to cope with the uncertainty is to aim for a relative
benchmark rather than an absolute one. The notion of regret minimization captures this intuition. We
imagine that the agent has a choice of several well-defined ways to change her decision, and now
the agent aims to maximize her payoff relative to what she could have obtained had she changed her
decisions in a consistent manner. As an example of what we mean by consistent changes, a possible
objective could be to maximize her payoff relative to the most she could have achieved by choosing
some fixed decision in all the rounds. The difference between these payoffs is known as external
regret in the game theory literature. Another notion is that of internal regret, which arises when the
possible ways to change are the ones that switch from some decision i to another, j, whenever the
agent chose decision i, leaving all other decisions unchanged.
A learning algorithm for an agent is said to have no regret with respect to an associated set of decision
modifiers (also called deviations) Φ if the average payoff of an agent using the algorithm converges
to the largest average payoff she would have achieved had she changed her decisions using a fixed
decision modifier in all the rounds. Based on what set of decision modifiers are under consideration,
various no regret algorithms are known (e.g., Hannan [10] gave algorithms to minimize external
regret, and Hart and Mas-Collel [11] give algorithms to minimize internal regret).
The reason no regret algorithms are so appealing, apart from the fact that they model rational behavior of agents in the face of uncertainty, is that in various cases it can be shown that using no regret
algorithms guides the overall play towards a game theoretic equilibrium. For example, Freund and
Schapire [7] show that in a zero-sum game, if all agents use a no external regret algorithm, then
the empirical distribution of the play converges to the set of minimax equilibria. Similarly, Hart
and Mas-Collel [11] show that if all agents use a no internal regret algorithm, then the empirical
distribution of the play converges to the set of correlated equilibria.
In general, given a set of decision modifiers Φ, we can define a notion of game theoretic equilibrium that is based on the property of being stable under deviations specified by Φ. This is a joint distribution on the agents' decisions that ensures that the expected payoff to any agent is no less than the most she could achieve if she unilaterally (and consistently) decided to deviate from her suggested action using any decision modifier in Φ. One can then show that if all agents use a Φ-no regret algorithm, then the empirical distribution of the play converges to the set of Φ-equilibria.
This brings us to the question of whether it is possible to design no regret algorithms for various sets
of decision modifiers Φ. In this paper, we design algorithms which achieve no regret with respect to Φ for a very general setting of arbitrary convex compact decision spaces, arbitrary concave payoff functions, and arbitrary continuous decision modifiers. Our method works as long as it is possible to compute approximate fixed points for (convex combinations of) decision modifiers in Φ. Our algorithms are based on a connection to the framework of Online Convex Optimization (see, e.g. [18]) and we show how to apply known learning algorithms to obtain Φ-no regret algorithms. The
generality of our connection allows us to use various sophisticated Online Convex Optimization
algorithms which can exploit various structural properties of the utility functions and guarantee a
faster rate of convergence to the equilibrium.
Previous work by Greenwald and Jafari [9] gave algorithms for the case when the decision space is
the simplex of probability distributions over the agents? decisions, the payoff functions are linear,
and the decision modifiers are also linear. Their algorithm, based on the work of Hart and Mas-Collel [11], uses a version of Blackwell's Approachability Theorem, and also needs to compute
fixed points of the decision modifiers. Since these modifiers are linear, it is possible to compute
fixed points for them by computing the stationary distribution of an appropriate stochastic matrix
(say, by computing its top eigenvector).
Computing Brouwer fixed points of continuous functions is in general a very hard problem (it is
PPAD-complete, as shown by Papadimitriou [15]). Fixed points are ubiquitous in game theory.
Most common notions of equilibria in game theory are defined as the set of fixed points of a certain
mapping. For example, Nash Equilibria (NE) are the set of fixed points of the best response mapping
(appropriately defined to avoid ambiguity). The fact that Brouwer fixed points are hard to compute in
general is no reason why computing specific fixed points should be hard (for instance, as mentioned
earlier, computing fixed points of linear functions is easy via eigenvector computations). More
specifically, could it be the case that the NE, being a fixed point of some well-specified mapping,
is easy to compute? These hopes were dashed by the work of [6, 3] who showed that computing
NE is as computationally difficult as finding fixed points in a general mapping: they show that
computing NE in a two-player game is PPAD-complete. Further work showed that even computing
an approximate NE is PPAD-complete [4].
Since our algorithms (and all previous ones as well) depend on computing (approximate) fixed points
of various decision modifiers, the above discussion leads us to question whether this is necessary.
We show in this paper that indeed it is: a Φ-no-regret algorithm can be efficiently used to compute approximate fixed points of any convex combination of decision modifiers. This establishes an equivalence theorem, which is the main contribution of this paper: there exist efficient Φ-no-regret algorithms if and only if it is possible to efficiently compute fixed points of convex combinations of decision modifiers in Φ. This equivalence theorem allows us to translate complexity theoretic
lower bounds on computing fixed points to designing no regret algorithms. For instance, a Nash
equilibrium can be obtained by applying Brouwer's fixed point theorem to an appropriately defined continuous mapping from the compact convex set of pairs of the players' mixed strategies to itself. Thus, if Φ contains this mapping, then it is PPAD-hard to design Φ-no-regret algorithms.
It was recently brought to our attention that Stolz and Lugosi [17], building on the work of Hart and
Schmeidler [12], have also considered Φ-no-regret algorithms. They also show how to design them from fixed-point oracles, and proved convergence to equilibria under even more general conditions than we consider. Gordon, Greenwald, Marks, and Zinkevich [8] have also considered similar notions of regret and showed convergence to equilibria, in the special case when the deviations in Φ
can be represented as the composition of a fixed embedding into a higher dimensional space and
an adjustable linear transformation. The focus of our results is on the computational aspect of such
reductions, and the equivalence of fixed-points computation and no-regret algorithms.
2 Preliminaries

2.1 Games and Equilibria
We consider the following kinds of games. First, the set of strategies for the players of the game is
a convex compact set. Second, the utility functions for the players are concave over their strategy
sets. To avoid cumbersome notation, we restrict ourselves to two player games, although all of our
results naturally extend to multi-player games.
Formally, for i = 1, 2, player i plays points from a convex compact set K_i ⊆ R^{n_i}. Her payoff is given by a function u_i : K_1 × K_2 → R, i.e. if x_1, x_2 is the pair of strategies played by the two players, then the payoff to player i is given by u_i(x_1, x_2). We assume that u_1 is a concave function of x_1 for any fixed x_2, and similarly u_2 is a concave function of x_2 for any fixed x_1.
We now define a notion of game theoretic equilibrium based on the property of being stable with
respect to consistent deviations. By this, we mean an online game-playing strategy for the players
that will guarantee that neither stands to gain if they decided to unilaterally, and consistently, deviate
from their suggested moves.
To model this, assume that each player i has a set of possible deviations Φ_i, which is a finite¹ set of continuous mappings φ_i : K_i → K_i. Let Φ = (Φ_1, Φ_2). Let π be a joint distribution on K_1 × K_2. If it is the case that for any deviation φ_1 ∈ Φ_1, player 1's expected payoff obtained by sampling x_1 using π is always larger than her expected payoff obtained by deviating to φ_1(x_1), then we call π stable under deviations in Φ_1. The distribution π is said to be a Φ-equilibrium if π is stable under deviations in Φ_1 and Φ_2. A similar definition appears in [12] and [17].
Definition 1 (Φ-equilibrium). A joint distribution π over K_1 × K_2 is called a Φ-equilibrium if the following holds, for any φ_1 ∈ Φ_1, and for any φ_2 ∈ Φ_2:

    ∫ u_1(x_1, x_2) dπ(x_1, x_2) ≥ ∫ u_1(φ_1(x_1), x_2) dπ(x_1, x_2)
    ∫ u_2(x_1, x_2) dπ(x_1, x_2) ≥ ∫ u_2(x_1, φ_2(x_2)) dπ(x_1, x_2)

We say that π is an ε-approximate Φ-equilibrium if the inequalities above are satisfied up to an additive error of ε.
Intuitively, we imagine a repeated game between the two players, where at equilibrium, the players'
moves are correlated by a signal, which could be the past history of the play, and various external
factors. This signal samples a pair of moves from an equilibrium joint distribution over all pairs
of moves, and suggests to each player individually only the move she is supposed to play. If no
player stands to gain if she unilaterally, but consistently, used a deviation from her suggested move,
then the distribution of the correlating signal is stable under the set of deviations, and is hence an
equilibrium.
Example 1: Correlated Equilibria. A standard 2-player game is obtained when the K_i are the simplices of distributions over some base sets of actions A_i and the utility functions u_i are bilinear in x_1, x_2. If the sets Φ_i consist of the maps ρ_{a,b} : K_i → K_i for every pair a, b ∈ A_i, defined as

    ρ_{a,b}(x)[c] = 0          if c = a
                  = x_a + x_b   if c = b                                     (1)
                  = x_c         otherwise,

then it can be shown that any Φ-equilibrium can be equivalently viewed as a correlated equilibrium of the game, and vice-versa.

¹It is highly plausible that the results in this paper extend to the case where Φ is infinite (indeed, our results hold for any set of mappings Φ which is obtained by taking all convex combinations of finitely many mappings), but we restrict to finite Φ in this paper for simplicity.
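For intuition, the map ρ_{a,b} of Eq. (1) simply moves the probability mass that x places on action a onto action b; a minimal sketch (an illustrative helper written for this text):

    import numpy as np

    def rho(x, a, b):
        # rho_{a,b}(x): shift the mass on action a onto action b, leave the rest unchanged
        y = x.copy()
        y[b] += y[a]
        y[a] = 0.0
        return y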
Example 2: The Stock Market game. Consider the following setting: there are two investors
(the generalization to many investors is straightforward), who invest their wealth in n stocks. In
each period, they choose portfolios x1 and x2 over the n stocks, and observe the stock returns. We
model the stock returns as a function r of the portfolios x1 , x2 chosen by the investors, and it maps
the portfolios to the vector of stock returns. We make the assumption that each player has a small
influence on the market, and thus the function r is insensitive to the small perturbations in the input.
The wealth gain for each investor i is r(x_1, x_2) · x_i. The standard way to measure performance of an investment strategy is the logarithmic growth rate, viz. log(r(x_1, x_2) · x_i). We can now define the utility functions as u_i(x_1, x_2) = log(r(x_1, x_2) · x_i). Intuitively, this game models the setting in
which the market prices are affected by the investments of the players.
A natural goal for a good investment strategy would be to compare the wealth gain to that of the
best fixed portfolio, i.e. Φ_i is the set of all constant maps. This was considered by Cover in his Universal Portfolio Framework [5]. Another possible goal would be to compare the wealth gained to that achievable by modifying the portfolios using the ρ_{a,b} maps above, as considered by [16]. In Section 3, we show that the stock market game admits algorithms that converge to an ε-equilibrium in O((1/ε) log(1/ε)) rounds, whereas all previous algorithms need O(1/ε²) rounds.
2.2 No regret algorithms
The online learning framework we consider is called online convex optimization [18], in which there
is a fixed convex compact feasible set K ⊆ R^n and an arbitrary, unknown sequence of concave payoff functions f^(1), f^(2), . . . : K → R. The decision maker must make a sequence of decisions, where the t-th decision is a selection of a point x^(t) ∈ K and obtains a payoff of f^(t)(x^(t)) on period t. The decision maker can only use the previous points x^(1), . . . , x^(t−1), and the previous payoff functions f^(1), . . . , f^(t−1) to choose the point x^(t).
The performance measure we use to evaluate online algorithms is regret, defined as follows. The
decision maker has a finite set of N decision modifiers Φ which, as before, is a set of continuous mappings from K → K. Then the regret for not using some deviation φ ∈ Φ is the excess payoff the decision maker could have obtained if she had changed her points in each round by applying φ.

Definition 2 (Φ-Regret). Let Φ be a set of continuous functions from K → K. Given a set of T concave utility functions f^(1), . . . , f^(T), define the Φ-regret as

    Regret_Φ(T) = max_{φ∈Φ} Σ_{t=1}^{T} f^(t)(φ(x^(t))) − Σ_{t=1}^{T} f^(t)(x^(t)).
Two specific examples of Φ-regret deserve mention. The first one is "external regret", which is defined when Φ is the set of all constant mappings from K to itself. The second one is "internal regret", which is defined when K is the simplex of distributions over some base set of actions A, and Φ is the set of the ρ_{a,b} functions (defined in (1)) for all pairs a, b ∈ A.
A desirable property of an algorithm for Online Convex Optimization is Hannan consistency: the
regret, as a function of the number of rounds T , is sublinear. This implies that the average per
iteration payoff of the algorithm converges to the average payoff of a clairvoyant algorithm that uses
the best deviation in hindsight to change the point in every round. For the purpose of this paper, we
require a slightly stronger property for an algorithm, viz. that the regret is polynomially sublinear as
a function of T .
Definition 3 (No Φ-regret algorithm). A no Φ-regret algorithm is one which, given any sequence of concave payoff functions f^(1), f^(2), . . ., generates a sequence of points x^(1), x^(2), . . . ∈ K such that for all T = 1, 2, . . ., Regret_Φ(T) = O(T^{1−c}) for some constant c > 0. Such an algorithm will be called efficient if it computes x^(t) in poly(n, N, t, L) time.
In the above definition, L is a description length parameter for K, defined appropriately depending
on how the set K is represented. For instance, if K is the n-dimensional probability simplex, then
L = n. If K is specified by means of a separation oracle and inner and outer radii r and R, then
L = log(R/r), and we allow poly(n, N, t, L) calls to the separation oracle in each iteration.
The relatively new framework of Online Convex Optimization (OCO) has received much attention
recently in the machine learning community. Our no ?-regret algorithms can use any of wide variety
of algorithms for OCO. In this paper, we will use Exponentiated Gradient (EG) algorithm ([14], [1]),
which has the following (external) regret bound:
Theorem 1. Let the domain K be the simplex of distributions over a base set of size n. Let G_∞ be an upper bound on the L_∞ norm of the gradients of the payoff functions, i.e. G_∞ ≥ sup_{x∈K} ‖∇f^(t)(x)‖_∞. Then the EG algorithm generates points x^(1), . . . , x^(T) such that

    max_{x∈K} Σ_{t=1}^{T} f^(t)(x) − Σ_{t=1}^{T} f^(t)(x^(t)) ≤ O(G_∞ √(log(n) T)).
If the utility functions are strictly concave rather than linear, even stronger regret bounds, which depend on log(T) rather than √T, are known [13].
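One EG update on the simplex is a multiplicative step followed by renormalization; a minimal sketch for maximizing concave payoffs (the learning rate eta is a tuning parameter, of order √(log n / T)/G_∞ for the bound above):

    def eg_step(x, grad, eta):
        # x: current point on the simplex; grad: gradient of the round's payoff at x
        y = x * np.exp(eta * grad)
        return y / y.sum()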
While most of the literature on online convex optimization focuses on external regret, it was observed that any Online Convex Optimization algorithm for external regret can be converted to an
internal regret algorithm (for example, see [2], [16]).
2.3 Fixed Points
As mentioned in the introduction, our no regret algorithms depend on computing fixed points of
the relevant mappings. For a given set of deviations Φ, denote by CH(Φ) the set of all convex combinations of deviations in Φ, i.e.

    CH(Φ) = { Σ_{φ∈Φ} λ_φ φ : λ_φ ≥ 0 and Σ_{φ∈Φ} λ_φ = 1 }.

Since each map ψ ∈ CH(Φ) is a continuous function from K → K, and K is a convex compact domain, by Brouwer's fixed point theorem, ψ has a fixed point in K, i.e. there exists a point x ∈ K such that ψ(x) = x. We consider algorithms which approximate fixed points for a given map in the following sense.

Definition 4 (FPTAS for fixed points of deviations). Let Φ be a set of N continuous functions from K → K. A fully polynomial time approximation scheme (FPTAS) for fixed points of Φ is an algorithm, which, given any function ψ ∈ CH(Φ) and an error parameter ε > 0, computes a point x ∈ K such that ‖ψ(x) − x‖ ≤ ε in poly(n, N, L, 1/ε) time.
3 Convergence of no Φ-regret algorithms to Φ-equilibria
In this section we prove that if the players use no Φ-regret algorithms, then the empirical distribution of the moves converges to a Φ-equilibrium. [11] shows that if players use no internal regret algorithms, then the empirical distribution of the moves converges to a correlated equilibrium. This was generalized by [9] to any set of linear transformations Φ. The more general setting of this paper
also follows easily from the definitions. A similar theorem was also proved in [17].
The advantage of this general setting is that the connection to online convex optimization allows for
faster rates of convergence using recent online learning techniques. We give an example of a natural
game theoretic setting with faster convergence rate below.
Theorem 2. If each player i chooses moves using a no Φ_i-regret algorithm, then the empirical game distribution of the players' moves converges to a Φ-equilibrium. Further, an ε-approximate Φ-equilibrium is reached after T iterations for the first T which satisfies (1/T) Regret_Φ(T) ≤ ε.
Proof. Consider the first player. In each game iteration t, let (x_1^(t), x_2^(t)) be the pair of moves played by the two players. From player 1's point of view, the payoff function she obtains, f^(t), is the following:

    ∀x ∈ K_1 :  f^(t)(x) ≜ u_1(x, x_2^(t)).
Note that this function is concave by assumption. Then we have, by Definition 3,

    Regret_{Φ_1}(T) = max_{φ∈Φ_1} Σ_t f^(t)(φ(x_1^(t))) − Σ_t f^(t)(x_1^(t)).

Rewriting this in terms of the original utility function, and scaling by the number of iterations, we get

    (1/T) Σ_{t=1}^{T} u_1(x_1^(t), x_2^(t)) ≥ (1/T) Σ_{t=1}^{T} u_1(φ(x_1^(t)), x_2^(t)) − (1/T) Regret_{Φ_1}(T).

Denote by π^(T) the empirical distribution of the played strategies till iteration T, i.e. the distribution which puts a probability mass of 1/T on all pairs (x_1^(t), x_2^(t)) for t = 1, 2, . . . , T. Then, the above inequality can be rewritten as

    ∫ u_1(x_1, x_2) dπ^(T)(x_1, x_2) ≥ ∫ u_1(φ(x_1), x_2) dπ^(T)(x_1, x_2) − (1/T) Regret_{Φ_1}(T).
A similar inequality holds for player 2 as well. Now assume that both players use no regret algorithms, which ensure that Regret_{Φ_i}(T) = O(T^{1−c}) for some constant c > 0. Hence as T → ∞, we have (1/T) Regret_{Φ_i}(T) → 0. Thus π^(T) converges to a Φ-equilibrium. Also, π^(T) is an ε-approximate equilibrium as soon as T is large enough so that (1/T) Regret_{Φ_1}(T) and (1/T) Regret_{Φ_2}(T) are less than ε, i.e. T = Ω(1/ε^{1/c}).
A corollary of Theorem 2 is that we can obtain faster rates of convergence using recent online
learning techniques, when the payoff functions are non-linear. This is natural in many situations,
since risk aversion is associated with the concavity of utility functions.
Corollary 3. For the stock market game as defined in Section 2.1, there exist no regret algorithms which guarantee convergence to an ε-equilibrium in O((1/ε) log(1/ε)) iterations.
Proof sketch. The utility functions observed by investor i in the stock market game are of the form u_i(x_1, x_2) = log(r(x_1, x_2) · x_i). This logarithmic utility function is exp-concave, by the assumption on the insensitivity of the function r to small perturbations in the input. Thus the online algorithm of [5], or the more efficient algorithms of [13], can be applied. In the full version of this paper, we show that Lemma 6 can be modified to obtain algorithms with Regret_{Φ_i}(T) = O(log T). By Theorem 2 above, the investors reach an ε-equilibrium in O((1/ε) log(1/ε)) iterations.
4 Computational Equivalence of Fixed Points and No Regret Algorithms
In this section we prove our main result on the computational equivalence of computing fixed points
and designing no regret algorithms. By the result of the previous section, players using no regret
algorithms converge to equilibria.
We assume that the payoff functions f^(t) are scaled so that the (L_2) norm of their gradients is bounded by 1, i.e. ‖∇f^(t)‖ ≤ 1. Our main theorem is the following:

Theorem 4. Let Φ be a given finite set of deviations. Then there is an FPTAS for fixed points of Φ if and only if there exists an efficient no Φ-regret algorithm.
The first direction of the theorem is proved by designing utility functions for which the no regret
property will imply convergence to an approximate fixed point of the corresponding transformations.
The proof crucially depends on the fact that no regret algorithms have the stringent requirement that
their worst case regret, against arbitrary adversarially chosen payoff functions, is sublinear as a
function of the number of rounds.
Lemma 5. If there exists a no Φ-regret algorithm then there exists an FPTAS for fixed points of Φ.

Proof. Let φ_0 ∈ CH(Φ) be a given mapping whose fixed point we wish to compute. Let ε be a given error parameter.
6
At iteration $t$, let $x^{(t)}$ be the point chosen by $\mathcal{A}$. If $\|\phi_0(x^{(t)}) - x^{(t)}\| \le \epsilon$, we can stop, because we have found an approximate fixed point. Otherwise, supply $\mathcal{A}$ with the following payoff function:
$$f^{(t)}(x) \;\triangleq\; \frac{(\phi_0(x^{(t)}) - x^{(t)})^\top}{\|\phi_0(x^{(t)}) - x^{(t)}\|}\,(x - x^{(t)}).$$
This is a linear function, with $\|\nabla f^{(t)}(x)\| = 1$. Also, $f^{(t)}(x^{(t)}) = 0$, and $f^{(t)}(\phi_0(x^{(t)})) = \|\phi_0(x^{(t)}) - x^{(t)}\| \ge \epsilon$. After $T$ iterations, since $\phi_0$ is a convex combination of functions in $\Phi$, and since all the $f^{(t)}$ are linear functions, we have
$$\max_{\phi \in \Phi} \sum_{t=1}^{T} f^{(t)}(\phi(x^{(t)})) \;\ge\; \sum_{t=1}^{T} f^{(t)}(\phi_0(x^{(t)})) \;\ge\; \epsilon T.$$
Thus,
$$\mathrm{Regret}^{\Phi}(T) = \max_{\phi \in \Phi} \sum_t f^{(t)}(\phi(x^{(t)})) - \sum_t f^{(t)}(x^{(t)}) \;\ge\; \epsilon T. \tag{2}$$
Since $\mathcal{A}$ is a no-regret algorithm, assume that $\mathcal{A}$ ensures that $\mathrm{Regret}^{\Phi}(T) = O(T^{1-c})$ for some constant $c > 0$. Thus, when $T = \Omega(1/\epsilon^{1/c})$, the lower bound (2) on the regret cannot hold unless we have already found an $\epsilon$-approximate fixed point of $\phi_0$.
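The proof is constructive, and the loop it describes is easy to state in code. Below is a minimal Python sketch, assuming a hypothetical no-regret learner `A` that exposes `next_point()` and `feed_payoff()`, and `phi0` as the mapping whose fixed point we seek; it illustrates the reduction, not a production implementation.

    import numpy as np

    def fixed_point_via_no_regret(A, phi0, eps, max_T):
        """Lemma 5 reduction: drive a no-regret learner A toward an
        eps-approximate fixed point of phi0 (sketch, hypothetical API)."""
        for t in range(max_T):
            x = A.next_point()                    # learner's play x^(t)
            gap = phi0(x) - x
            if np.linalg.norm(gap) <= eps:
                return x                          # eps-approximate fixed point
            d = gap / np.linalg.norm(gap)         # unit direction toward phi0(x)
            # Linear payoff f(y) = d . (y - x); worth >= eps at phi0(x), 0 at x.
            A.feed_payoff(lambda y, d=d, x=x: float(d @ (y - x)))
        raise RuntimeError("no-regret guarantee violated within max_T rounds")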
The second direction is along the lines of the algorithms of [2] and [16], which use fixed-point computations to obtain no-internal-regret algorithms.
Lemma 6. If there is an FPTAS for fixed points of $\Phi$, then there is an efficient no $\Phi$-regret algorithm. In fact, the algorithm guarantees that $\mathrm{Regret}^{\Phi}(T) = O(\sqrt{T})$.²
Proof. We reduce the given OCO problem to an "inner" OCO problem. The "outer" OCO problem is the original one. We use a no-external-regret algorithm for the inner OCO problem to generate points in $K$ for the outer one, and use the payoff functions obtained in the outer OCO problem to generate appropriate payoff functions for the inner one.
Let $\Phi = \{\phi_1, \phi_2, \ldots, \phi_N\}$. The domain for the inner OCO problem is the simplex of all distributions on $\Phi$, denoted $\Delta_N$. For a distribution $\pi \in \Delta_N$, let $\pi_i$ be the probability mass assigned to $\phi_i$ in the distribution $\pi$. There is a natural mapping from $\Delta_N \to CH(\Phi)$: for any $\pi \in \Delta_N$, denote by $\phi_\pi$ the function $\sum_{i=1}^{N} \pi_i \phi_i \in CH(\Phi)$.
Let $x^{(t)} \in K$ be the point used in the outer OCO problem in the $t$th round, and let $f^{(t)}$ be the obtained payoff function. Then the payoff function for the inner OCO problem is the function $g^{(t)}: \Delta_N \to \mathbb{R}$ defined as follows:
$$\forall \pi \in \Delta_N: \quad g^{(t)}(\pi) \triangleq f^{(t)}(\phi_\pi(x^{(t)})).$$
We now apply the Exponentiated Gradient (EG) algorithm (see Section 2.2) to the inner OCO problem. To analyze the algorithm, we bound $\|\nabla g^{(t)}\|_\infty$ as follows. Let $x_0$ be an arbitrary point in $K$. We can rewrite $g^{(t)}$ as $g^{(t)}(\pi) = f^{(t)}(x_0 + \sum_i \pi_i(\phi_i(x^{(t)}) - x_0))$, because $\sum_i \pi_i = 1$. Then, $\nabla g^{(t)} = X^{(t)} \nabla f^{(t)}(\phi_\pi(x^{(t)}))$, where $X^{(t)}$ is an $N \times n$ matrix whose $i$th row is $(\phi_i(x^{(t)}) - x_0)^\top$. Thus,
$$\|\nabla g^{(t)}\|_\infty = \max_i |(\phi_i(x^{(t)}) - x_0)^\top \nabla f^{(t)}(\phi_\pi(x^{(t)}))| \le \|\phi_i(x^{(t)}) - x_0\|\,\|\nabla f^{(t)}(\phi_\pi(x^{(t)}))\| \le 1.$$
The last inequality follows because we assumed that the diameter of $K$ is bounded by 1, and the norm of the gradient of $f^{(t)}$ is also bounded by 1.
Let $\pi^{(t)}$ be the distribution on $\Phi$ produced by the EG algorithm at time $t$. Now, the point $x^{(t)}$ is computed by running the FPTAS for computing a $\frac{1}{\sqrt{t}}$-approximate fixed point of the function $\phi_{\pi^{(t)}}$, i.e. we have $\|\phi_{\pi^{(t)}}(x^{(t)}) - x^{(t)}\| \le \frac{1}{\sqrt{t}}$.
² In the full version of the paper, we improve the regret bound to $O(\log T)$ under some stronger concavity assumptions on the payoff functions.
Now, using the definition of the $g^{(t)}$ functions, and by the regret bound for the EG algorithm, we have that for any fixed distribution $\pi \in \Delta_N$,
$$\sum_{t=1}^{T} f^{(t)}(\phi_\pi(x^{(t)})) - \sum_{t=1}^{T} f^{(t)}(\phi_{\pi^{(t)}}(x^{(t)})) = \sum_{t=1}^{T} g^{(t)}(\pi) - \sum_{t=1}^{T} g^{(t)}(\pi^{(t)}) \le O(\sqrt{\log(N)\,T}). \tag{3}$$
Since $\|\nabla f^{(t)}\| \le 1$,
$$f^{(t)}(\phi_{\pi^{(t)}}(x^{(t)})) - f^{(t)}(x^{(t)}) \;\ge\; -\|\phi_{\pi^{(t)}}(x^{(t)}) - x^{(t)}\| \;\ge\; -\frac{1}{\sqrt{t}}. \tag{4}$$
Summing (4) from $t = 1$ to $T$, and adding to (3), we get that for any distribution $\pi$ over $\Phi$,
$$\sum_{t=1}^{T} f^{(t)}(\phi_\pi(x^{(t)})) - \sum_{t=1}^{T} f^{(t)}(x^{(t)}) \;\le\; O(\sqrt{\log(N)\,T}) + \sum_{t=1}^{T} \frac{1}{\sqrt{t}} \;=\; O(\sqrt{\log(N)\,T}).$$
In particular, by concentrating $\pi$ on any given $\phi_i$, the above inequality implies that $\sum_{t=1}^{T} f^{(t)}(\phi_i(x^{(t)})) - \sum_{t=1}^{T} f^{(t)}(x^{(t)}) \le O(\sqrt{\log(N)\,T})$, and thus we have a no $\Phi$-regret algorithm.
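As a concrete companion to this proof, here is a short Python sketch of the inner EG learner combined with the fixed-point oracle. The oracle `fptas_fixed_point` and the deviation maps `phis` are hypothetical placeholders; the step uses the standard multiplicative EG update, which is valid here because the proof bounds the inner gradients by 1 in infinity norm.

    import numpy as np

    def eg_step(pi, grad_g, eta):
        """One Exponentiated Gradient update on the simplex over Phi.
        grad_g: gradient of the inner payoff g^(t) at pi (||.||_inf <= 1)."""
        w = pi * np.exp(eta * grad_g)      # multiplicative-weights ascent step
        return w / w.sum()

    def outer_point(pi, phis, fptas_fixed_point, t):
        """Outer point x^(t): approximate fixed point of phi_pi (Lemma 6)."""
        phi_pi = lambda x: sum(p * phi(x) for p, phi in zip(pi, phis))
        return fptas_fixed_point(phi_pi, tol=1.0 / np.sqrt(t))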
References
[1] S. Arora, E. Hazan, and S. Kale. The multiplicative weights update method: a meta algorithm and applications. Manuscript, 2005.
[2] A. Blum and Y. Mansour. From external to internal regret. In COLT, pages 621-636, 2005.
[3] X. Chen and X. Deng. Settling the complexity of two-player Nash equilibrium. In 47th FOCS, pages 261-272, 2006.
[4] X. Chen, X. Deng, and S.-H. Teng. Computing Nash equilibria: approximation and smoothed complexity. In FOCS, pages 603-612, 2006.
[5] T. Cover. Universal portfolios. Math. Finance, 1:1-19, 1991.
[6] C. Daskalakis, P. W. Goldberg, and C. H. Papadimitriou. The complexity of computing a Nash equilibrium. In 38th STOC, pages 71-78, 2006.
[7] Y. Freund and R. E. Schapire. Adaptive game playing using multiplicative weights. Games and Economic Behavior, 29:79-103, 1999.
[8] G. Gordon, A. Greenwald, C. Marks, and M. Zinkevich. No-regret learning in convex games. Brown University Tech Report CS-07-10, 2007.
[9] A. Greenwald and A. Jafari. A general class of no-regret learning algorithms and game-theoretic equilibria, 2003.
[10] J. Hannan. Approximation to Bayes risk in repeated play. In M. Dresher, A. W. Tucker, and P. Wolfe, editors, Contributions to the Theory of Games, volume III, pages 97-139, 1957.
[11] S. Hart and A. Mas-Colell. A simple adaptive procedure leading to correlated equilibrium. Econometrica, 68(5):1127-1150, 2000.
[12] S. Hart and D. Schmeidler. Existence of correlated equilibria. Mathematics of Operations Research, 14(1):18-25, 1989.
[13] E. Hazan, A. Kalai, S. Kale, and A. Agarwal. Logarithmic regret algorithms for online convex optimization. In 19th COLT, 2006.
[14] J. Kivinen and M. K. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Inf. Comput., 132(1):1-63, 1997.
[15] C. H. Papadimitriou. On the complexity of the parity argument and other inefficient proofs of existence. J. Comput. Syst. Sci., 48(3):498-532, 1994.
[16] G. Stoltz and G. Lugosi. Internal regret in on-line portfolio selection. Machine Learning, 59:125-159, 2005.
[17] G. Stoltz and G. Lugosi. Learning correlated equilibria in games with compact sets of strategies. Games and Economic Behavior, 59:187-208, 2007.
[18] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In 20th ICML, pages 928-936, 2003.
2,480 | 325 | Learning by Combining Memorization
and Gradient Descent
John C. Platt
Synaptics, Inc.
2860 Zanker Road, Suite 206
San Jose, CA 95134
ABSTRACT
We have created a radial basis function network that allocates a
new computational unit whenever an unusual pattern is presented
to the network. The network learns by allocating new units and
adjusting the parameters of existing units. If the network performs
poorly on a presented pattern, then a new unit is allocated which
memorizes the response to the presented pattern. If the network
performs well on a presented pattern, then the network parameters
are updated using standard LMS gradient descent. For predicting
the Mackey Glass chaotic time series, our network learns much
faster than do those using back-propagation and uses a comparable
number of synapses.
1
INTRODUCTION
Currently, networks that perform function interpolation tend to fall into one of two
categories: networks that use gradient descent for learning (e.g., back-propagation),
and constructive networks that use memorization for learning (e.g., k-nearest neighbors).
Networks that use gradient descent for learning tend to form very compact representations, but use many learning cycles to find that representation. Networks that
memorize their inputs need to only be exposed to examples once, but grow linearly
in the training set size.
The network presented here strikes a compromise between memorization and gradient descent. It uses gradient descent for the "easy" input vectors and memorization
for the "hard" input vectors. If the network performs well on a particular input
vector, or the particular input vector is already close to a stored vector, then the
network adjusts its parameters using gradient descent. Otherwise, it memorizes the
input vector and the corresponding output vector by allocating a new unit. The explicit storage of an input-output pair means that this pair can be used immediately
to improve the performance of the system, instead of merely using that information
for gradient descent.
The network, called the resource-allocation network (RAN), uses units whose response is localized in input space. A unit with a non-local response needs to undergo
gradient descent, because it has a non-zero output for a large fraction of the training
data.
Because RAN is a constructive network, it automatically adjusts the number of
units to reflect the complexity of the function that is being interpolated. Fixed-size
networks either use too few units, in which case the network memorizes poorly,
or too many, in which case the network generalizes poorly. Parzen windows and
K-nearest neighbors both require a number of stored patterns that grow linearly
with the number of presented patterns. With RAN, the number of stored patterns
grows sublinearly, and eventually reaches a maximum.
1.1
PREVIOUS WORK
Previous workers have used networks with localized basis functions (Broomhead &
Lowe, 1988) (Moody & Darken, 1988 & 89) (Poggio & Girosi, 1990). Moody has
further extended his work by incorporating a hash table lookup (Moody, 1989). The
hash table is a resource-allocating network where the values in the hash table only
become non-zero if the entry in the hash table is activated by the corresponding
presence of non-zero input probability.
The RAN adjusts the centers of the Gaussian units based on the error at the output,
like (Poggio & Girosi, 1990). Networks with centers placed on a high-dimensional
grid, such as (Broomhead & Lowe, 1988) and (Moody, 1989), or networks that use
unsupervised clustering for center placement, such as (Moody & Darken, 1988 &
89) generate larger networks than RAN, because they cannot move the centers to
increase the accuracy.
Previous workers have created function interpolation networks that allocate fewer
units than the size of training set. Cascade-correlation (Fahlman & Lebiere, 1990),
SONN (Tenorio & Lee, 1989), and MARS (Friedman, 1988) all construct networks
by adding additional units. These algorithms work well. The RAN algorithm
improves on these algorithms by making the addition of a unit as simple as possible.
RAN uses simple algebra to find the parameters of a new unit, while cascadecorrelation and MARS use gradient descent and SONN uses simulated annealing.
2
THE ALGORITHM
This section describes a resource-allocating network (RAN), which consists of a
network, a strategy for allocating new units, and a learning rule for refining the
network.
2.1
THE NETWORK
The RAN is a two-layer radial-basis-function network. The first layer consists of
units that respond to only a local region of the space of input values. The second
layer linearly aggregates outputs from these units and creates the function that
approximates the input-output mapping over the entire space.
A simple function that implements a locally tuned unit is a Gaussian:
$$z_j = \sum_k (c_{jk} - I_k)^2, \qquad x_j = \exp(-z_j / w_j^2). \tag{1}$$
We use a $C^1$ continuous polynomial approximation to speed up the algorithm, without loss of network accuracy:
$$x_j = \begin{cases} \left(1 - z_j/(q\,w_j^2)\right)^2 & \text{if } z_j < q\,w_j^2, \\ 0 & \text{otherwise;} \end{cases} \tag{2}$$
where $q = 2.67$ is chosen empirically to make the best fit to a Gaussian.
Each output of the network $y_i$ is a sum of the outputs $x_j$, each weighted by the synaptic strength $h_{ij}$, plus a global polynomial. The $x_j$ represent information about local parts of the space, while the polynomial represents global information:
$$y_i = \sum_j h_{ij} x_j + \sum_k L_{ik} I_k + \gamma_i. \tag{3}$$
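The forward pass of Equations 1-3 is compact enough to state directly. The following Python sketch is illustrative (array shapes and variable names are my own, not from the paper), using the polynomial unit of Equation 2.

    import numpy as np

    def ran_forward(I, C, w, H, L, gamma, q=2.67):
        """RAN forward pass (sketch). I: input (d,); C: centers (m, d);
        w: widths (m,); H: second-layer weights (k, m);
        L: linear term (k, d); gamma: offsets (k,)."""
        z = ((C - I) ** 2).sum(axis=1)             # Eq. 1: squared distances
        u = z / (q * w ** 2)
        x = np.where(u < 1.0, (1.0 - u) ** 2, 0.0) # Eq. 2: C^1 polynomial unit
        return H @ x + L @ I + gamma               # Eq. 3: bumps + global linear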
The $h_{ij} x_j$ term can be thought of as a bump that is added or subtracted to the polynomial term $\sum_k L_{ik} I_k + \gamma_i$ to yield the desired function.
The linear term is useful when the function has a strong linear component. In the results section, the Mackey-Glass equation was predicted with only a constant term.
2.2
THE LEARNING ALGORITHM
The network starts with a blank slate: no patterns are yet stored. As patterns are
presented to it, the network chooses to store some of them. At any given point
the network has a current state, which reflects the patterns that have been stored
previously.
The allocator may allocate a new unit to memorize a pattern. After the new unit is allocated, the network output is equal to the desired output $T$. Let the index of this new unit be $n$.
The peak of the response of the newly allocated unit is set to the memorized input vector,
$$c_n = I. \tag{4}$$
The linear synapses on the second layer are set to the difference between the output of the network and the novel output,
$$h_{in} = T_i - y_i(I). \tag{5}$$
The width of the response of the new unit is proportional to the distance from the nearest stored vector to the novel input vector,
$$w_n = \kappa\,\|I - c_{\mathrm{nearest}}\|, \tag{6}$$
where $\kappa$ is an overlap factor. As $\kappa$ grows larger, the responses of the units overlap more and more.
The RAN uses a two-part memorization condition. An input-output pair $(I, T)$ should be memorized if the input is far away from existing centers,
$$\|I - c_{\mathrm{nearest}}\| > \delta(t), \tag{7}$$
and if the difference between the desired output and the output of the network is large,
$$\|T - y(I)\| > \epsilon. \tag{8}$$
Typically, $\epsilon$ is a desired accuracy of output of the network. Errors larger than $\epsilon$ are immediately corrected by the allocation of a new unit, while errors smaller than $\epsilon$ are gradually repaired using gradient descent. The distance $\delta(t)$ is the scale of resolution that the network is fitting at the $t$th input presentation. The learning starts with $\delta(t) = \delta_{\max}$, which is the largest length scale of interest, typically the size of the entire input space of non-zero probability density. The distance $\delta(t)$ shrinks until it reaches $\delta_{\min}$, which is the smallest length scale of interest. The network will average over features that are smaller than $\delta_{\min}$. We used the function
$$\delta(t) = \max(\delta_{\max}\exp(-t/\tau),\; \delta_{\min}), \tag{9}$$
where $\tau$ is a decay constant.
At first, the system creates a coarse representation of the function, then refines the
representation by allocating units with smaller and smaller widths. Finally, when
the system has learned the entire function to the desired accuracy and length scale,
it stops allocating new units altogether.
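A minimal sketch of the allocation test (Equations 4-9) follows; names are illustrative and the network state is kept in plain arrays, as in the forward-pass sketch above.

    import numpy as np

    def maybe_allocate(I, T, y, C, w, H, delta_t, eps, kappa=0.87):
        """Two-part novelty test (Eqs. 7-8); allocate a unit if both hold."""
        dists = np.linalg.norm(C - I, axis=1)
        err = T - y
        if dists.min() > delta_t and np.linalg.norm(err) > eps:
            C = np.vstack([C, I])                  # Eq. 4: memorize the input
            H = np.hstack([H, err[:, None]])       # Eq. 5: memorize the error
            w = np.append(w, kappa * dists.min())  # Eq. 6: width from overlap
            return True, C, w, H
        return False, C, w, H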
The two-part memorization condition is necessary for creating a compact network.
If only condition (7) is used, then the network will allocate units instead of using
gradient descent to correct small errors. If only condition (8) is used, then fine-scale
units may be allocated in order to represent coarse-scale features, which is wasteful.
By allocating new units the RAN eventually represents the desired function ever
more closely as the network is trained. Fewer units are needed for a given accuracy
if the first-layer synapses $c_{jk}$, the second-layer synapses $h_{ij}$, and the parameters for the global polynomial, $\gamma_i$ and $L_{ik}$, are adjusted to decrease the error $\|y - T\|^2$ (Widrow & Hoff, 1960). We use gradient descent on the second-layer synapses to decrease the error whenever a new unit is not allocated:
$$\Delta h_{ij} = \alpha (T_i - y_i)\, x_j, \qquad \Delta \gamma_i = \alpha (T_i - y_i), \qquad \Delta L_{ik} = \alpha (T_i - y_i)\, I_k. \tag{10}$$
In addition, we adjust the centers of the responses of units to decrease the error:
$$\Delta c_{jk} = \frac{2\alpha}{w_j^2}\,(I_k - c_{jk})\, x_j \sum_i (T_i - y_i)\, h_{ij}. \tag{11}$$
Equation (11) is derived from gradient descent and Equation (1): differentiating (1) gives $\partial x_j / \partial c_{jk} = 2 x_j (I_k - c_{jk}) / w_j^2$, which yields the update above. Empirically, Equation (11) also works for the polynomial approximation (2).
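For completeness, here is a sketch of the gradient-descent branch (Equations 10-11), again with illustrative names and array shapes matching the earlier sketches:

    import numpy as np

    def gradient_step(I, T, y, x, C, w, H, L, gamma, alpha):
        """LMS updates when no unit is allocated (Eqs. 10-11, sketch)."""
        err = T - y                                   # output error (k,)
        H += alpha * np.outer(err, x)                 # Eq. 10: second layer
        gamma += alpha * err
        L += alpha * np.outer(err, I)
        back = H.T @ err                              # per-unit error signal
        C += (2 * alpha / w[:, None] ** 2) * (I - C) * (x * back)[:, None]  # Eq. 11
        return C, H, L, gamma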
3
RESULTS
One application of an interpolating RAN is to predict complex time series. As a test case, a chaotic time series can be generated with a nonlinear algebraic or differential equation. Such a series has some short-range time coherence, but long-term prediction is very difficult.
The RAN was tested on a particular chaotic time series created by the Mackey-Glass delay-difference equation:
$$x(t+1) = (1-b)\,x(t) + a\,\frac{x(t-\tau)}{1 + x(t-\tau)^{10}}, \tag{12}$$
for $a = 0.2$, $b = 0.1$, and $\tau = 17$.
We trained the network to predict the value $x(T + \Delta T)$, given the values $x(T)$, $x(T-6)$, $x(T-12)$, and $x(T-18)$ as inputs. The network was tested using two different learning modes: off-line learning with a limited amount of data, and on-line learning with a large amount of data. The Mackey-Glass equation has been learned off-line, by other workers, using the back-propagation algorithm (Lapedes & Farber, 1987), and radial basis functions (Moody & Darken, 1989). We used RAN to predict the Mackey-Glass equation with the following parameters: $\alpha = 0.02$, 400 learning epochs, $\delta_{\max} = 0.7$, $\kappa = 0.87$, and $\delta_{\min} = 0.07$ reached after 100 epochs. RAN was simulated using $\epsilon = 0.02$ and $\epsilon = 0.05$. In all cases, $\Delta T = 85$.
Figure 1 shows the efficiency of the various learning algorithms: the smallest, most accurate algorithms are towards the lower left. When optimized for size of network ($\epsilon = 0.05$), the RAN has about as many weights as back-propagation and is just as accurate. The efficiency of RAN is roughly the same as back-propagation, but requires much less computation: RAN takes approximately 8 minutes of SUN-4 CPU time to reach the accuracy listed in figure 4, while back-propagation took approximately 30-60 minutes of Cray X-MP time.
The Mackey-Glass equation has been learned using on-line techniques by hashing B-splines (Moody, 1989). We used on-line RAN with the following parameters: $\alpha = 0.05$, $\delta_{\max} = 0.7$, $\delta_{\min} = 0.07$, $\kappa = 0.87$, and $\delta_{\min}$ reached after 5000 input presentations. Table 1 compares the on-line error versus the size of network for both RAN and the hashing B-spline (Moody, personal communication). In both cases, $\Delta T = 50$. The RAN algorithm has similar accuracy to the hashing B-splines, but the number of units allocated is between a factor of 2 and 8 smaller.
For more detailed results on the Mackey-Glass equation, see (Platt, 1991).
[Figure 1 plot: test-set error versus number of weights (log scale, 100 to 100000) for RAN, hashing B-spline, standard RBF, K-means RBF, and back-propagation.]
Figure 1: The error on a test set versus the size of the network. Back-propagation
stores the prediction function very compactly and accurately, but takes a large
amount of computation to form the compact representation. RAN is as compact
and accurate as back-propagation, but uses much less computation to form its
representation.
Table 1: Comparison between RAN and hashing B-splines

  Method                                     Number of Units   Normalized RMS Error
  RAN, ε = 0.05                              50                0.071
  RAN, ε = 0.02                              143               0.054
  Hashing B-spline, 1 level of hierarchy     284               0.074
  Hashing B-spline, 2 levels of hierarchy    1166              0.044

4
CONCLUSIONS
There are various desirable attributes for a network that learns: it should learn
quickly, it should learn accurately, and it should form a compact representation.
Formation of a compact representation is particularly important for networks that
are implemented in hardware, because silicon area is at a premium. A compact
representation is also important for statistical reasons: a network that has too
many parameters can overfit data and generalize poorly.
Many previous network algorithms either learned quickly at the expense of a compact representation, or formed a compact representation only after laborious computation. The RAN is a network that can find a compact representation with a
reasonable amount of computation.
Acknowledgements
Thanks to Carver Mead, Carl Ruoff, and Fernando Pineda for useful comments on
the paper. Special thanks to John Moody who not only provided useful comments
on the paper, but also provided data on the hashing B-splines.
References
Broomhead, D., Lowe, D., 1988, Multivariable function interpolation and adaptive
networks, Complex Systems, 2, 321-355.
Fahlman, S. E., Lebiere, C., 1990, The Cascade-Correlation Learning Architecture, In: Advances in Neural Information Processing Systems 2, D. Touretzky, ed., 524-532, Morgan-Kaufmann, San Mateo.
Friedman, J. H., 1988, Multivariate Adaptive Regression Splines, Department of
Statistics, Stanford University, Tech. Report LCSI02.
Lapedes, A., Farber, R., 1987, Nonlinear Signal Processing Using Neural Networks:
Prediction and System Modeling, Technical Report LA-UR-87-2662, Los Alamos
National Laboratory, Los Alamos, NM.
Moody, J, Darken, C., 1988, Learning with Localized Receptive Fields, In: Proceedings of the 1988 Connectionist Models Summer School, D. Touretzky, G. Hinton,
T. Sejnowski, eds., 133-143, Morgan-Kaufmann, San Mateo.
Moody, J, Darken, C., 1989, Fast Learning in Networks of Locally-Tuned Processing
Units, Neural Computation, 1(2), 281-294.
Moody, J., 1989, Fast Learning in Multi-Resolution Hierarchies, In: Advances
in Neural Information Processing Systems 1, D. Touretzky, ed., 29-39, MorganKaufmann, San Mateo.
Platt., J., 1991, A Resource-Allocating Network for Function Interpolation, Neural
Computation, 3(2), to appear.
Poggio, T., Girosi, F., 1990, Regularization Algorithms for Learning that are Equivalent to Multilayer Networks, Science, 247, 978-982.
Powell, M. J. D., 1987, Radial Basis Functions for Multivariable Interpolation: A
Review, In: Algorithms for Approximation, J. C. Mason, M. G. Cox, eds., Clarendon Press, Oxford.
Tenorio, M. F., Lee, W., 1989, Self-Organizing Neural Networks for the Identification Problem, In: Advances in Neural Information Processing Systems 1, D.
Touretzky, ed., 57-64, Morgan-Kaufmann, San Mateo.
Widrow, B., Hoff, M., 1960, Adaptive Switching Circuits, In: 1960 IRE WESCON
Convention Record, 96-104, IRE, New York.
2,481 | 3,250 | Theoretical Analysis of Heuristic Search Methods for
Online POMDPs
Stéphane Ross
McGill University
Montréal, Qc, Canada
[email protected]
Joelle Pineau
McGill University
Montréal, Qc, Canada
[email protected]
Brahim Chaib-draa
Laval University
Québec, Qc, Canada
[email protected]
Abstract
Planning in partially observable environments remains a challenging problem, despite significant recent advances in offline approximation techniques. A few online methods have also been proposed recently, and proven to be remarkably scalable, but without the theoretical guarantees of their offline counterparts. Thus it
seems natural to try to unify offline and online techniques, preserving the theoretical properties of the former, and exploiting the scalability of the latter. In this
paper, we provide theoretical guarantees on an anytime algorithm for POMDPs
which aims to reduce the error made by approximate offline value iteration algorithms through the use of an efficient online searching procedure. The algorithm
uses search heuristics based on an error analysis of lookahead search, to guide the
online search towards reachable beliefs with the most potential to reduce error. We
provide a general theorem showing that these search heuristics are admissible, and
lead to complete and ?-optimal algorithms. This is, to the best of our knowledge,
the strongest theoretical result available for online POMDP solution methods. We
also provide empirical evidence showing that our approach is also practical, and
can find (provably) near-optimal solutions in reasonable time.
1
Introduction
Partially Observable Markov Decision Processes (POMDPs) provide a powerful model for sequential decision making under state uncertainty. However exact solutions are intractable in most domains featuring more than a few dozen actions and observations. Significant efforts have been
devoted to developing approximate offline algorithms for larger POMDPs [1, 2, 3, 4]. Most of these
methods compute a policy over the entire belief space. This is both an advantage and a liability.
On the one hand, it allows good generalization to unseen beliefs, and this has been key to solving
relatively large domains. Yet it makes these methods impractical for problems where the state space
is too large to enumerate. A number of compression techniques have been proposed, which handle large state spaces by projecting into a sub-dimensional representation [5, 6]. Alternately online
methods are also available [7, 8, 9, 10, 11]. These achieve scalability by planning only at execution
time, thus allowing the agent to only consider belief states that can be reached over some (small)
finite planning horizon. However despite good empirical performance, both classes of approaches
lack theoretical guarantees on the approximation. So it would seem we are constrained to either
solving small to mid-size problems (near-)optimally, or solving large problems possibly badly.
This paper suggests otherwise, arguing that by combining offline and online techniques, we can
preserve the theoretical properties of the former, while exploiting the scalability of the latter. In
previous work [11], we introduced an anytime algorithm for POMDPs which aims to reduce the
error made by approximate offline value iteration algorithms through the use of an efficient online
searching procedure. The algorithm uses search heuristics based on an error analysis of lookahead
search, to guide the online search towards reachable beliefs with the most potential to reduce error. In
this paper, we derive formally the heuristics from our error minimization point of view and provide
theoretical results showing that these search heuristics are admissible, and lead to complete and ?optimal algorithms. This is, to the best of our knowledge, the strongest theoretical result available
for online POMDP solution methods. Furthermore the approach works well with factored state
representations, thus further enhancing scalability, as suggested by earlier work [2]. We also provide
empirical evidence showing that our approach is computationally practical, and can find (provably)
near-optimal solutions within a smaller overall time than previous online methods.
2
Background: POMDP
A POMDP is defined by a tuple $(S, A, \Omega, T, R, O, \gamma)$ where $S$ is the state space, $A$ is the action set, $\Omega$ is the observation set, $T: S \times A \times S \to [0,1]$ is the state-to-state transition function, $R: S \times A \to \mathbb{R}$ is the reward function, $O: \Omega \times A \times S \to [0,1]$ is the observation function, and $\gamma$ is the discount factor. In a POMDP, the agent often does not know the current state with full certainty, since observations provide only a partial indicator of state. To deal with this uncertainty, the agent maintains a belief state $b(s)$, which expresses the probability that the agent is in each state at a given time step. After each step, the belief state $b$ is updated using Bayes' rule. We denote the belief update function $b' = \tau(b, a, o)$, defined as $b'(s') = \eta\, O(o, a, s') \sum_{s \in S} T(s, a, s')\, b(s)$, where $\eta$ is a normalization constant ensuring $\sum_{s' \in S} b'(s') = 1$.
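As a sketch, the belief update is a few lines of Python over tabular POMDP arrays; the array-based representation here is my own choice for illustration, not the paper's.

    import numpy as np

    def belief_update(b, a, o, T, O):
        """tau(b, a, o): Bayes update of belief b after action a, observation o.
        T: (|S|, |A|, |S|) transition tensor; O: (|Omega|, |A|, |S|) obs. tensor."""
        pred = b @ T[:, a, :]            # sum_s T(s, a, s') b(s)
        unnorm = O[o, a, :] * pred       # multiply by O(o, a, s')
        return unnorm / unnorm.sum()     # eta normalizes to a distribution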
Solving a POMDP consists in finding an optimal policy, $\pi^*: \Delta S \to A$, which specifies the best action $a$ to do in every belief state $b$, that maximizes the expected return (i.e., expected sum of discounted rewards over the planning horizon) of the agent. We can find the optimal policy by computing the optimal value of a belief state over the planning horizon. For the infinite horizon, the optimal value function is defined as $V^*(b) = \max_{a \in A}\big[R(b,a) + \gamma \sum_{o \in \Omega} P(o|b,a)\, V^*(\tau(b,a,o))\big]$, where $R(b,a)$ represents the expected immediate reward of doing action $a$ in belief state $b$ and $P(o|b,a)$ is the probability of observing $o$ after doing action $a$ in belief state $b$. This probability can be computed according to $P(o|b,a) = \sum_{s' \in S} O(o,a,s') \sum_{s \in S} T(s,a,s')\, b(s)$. We also denote the value $Q^*(b,a)$ of a particular action $a$ in belief state $b$ as the return we will obtain if we perform $a$ in $b$ and then follow the optimal policy: $Q^*(b,a) = R(b,a) + \gamma \sum_{o \in \Omega} P(o|b,a)\, V^*(\tau(b,a,o))$. Using this, we can define the optimal policy $\pi^*(b) = \operatorname{argmax}_{a \in A} Q^*(b,a)$.
While any POMDP problem has infinitely many belief states, it has been shown that the optimal value function of a finite-horizon POMDP is piecewise linear and convex. Thus we can define the optimal value function and policy of a finite-horizon POMDP using a finite set of $|S|$-dimensional hyperplanes, called $\alpha$-vectors, over the belief state space. As a result, exact offline value iteration algorithms are able to compute $V^*$ in a finite amount of time, but the complexity can be very high. Most approximate offline value iteration algorithms achieve computational tractability by selecting a small subset of belief states, and keeping only those $\alpha$-vectors which are maximal at the selected belief states [1, 3, 4]. The precision of these algorithms depends on the number of belief points and their location in the space of beliefs.
3
Online Search in POMDPs
Contrary to offline approaches, which compute a complete policy determining an action for every
belief state, an online algorithm takes as input the current belief state and returns the single action
which is the best for this particular belief state. The advantage of such an approach is that it only
needs to consider belief states that are reachable from the current belief state. This naturally provides
a small set of beliefs, which could be exploited as in offline methods. But in addition, since online
planning is done at every step (and thus generalization between beliefs is not required), it is sufficient
to calculate only the maximal value for the current belief state, not the full optimal $\alpha$-vector. A
lookahead search algorithm can compute this value in two simple steps.
First we build a tree of reachable belief states from the current belief state. The current belief is the top node in the tree. Subsequent belief states (as calculated by the $\tau(b,a,o)$ function) are represented using OR-nodes (at which we must choose an action) and actions are included in between each layer of belief nodes using AND-nodes (at which we must consider all possible observations). Note that in general the belief MDP could have a graph structure with cycles. Our algorithm simply handles such structure by unrolling the graph into a tree. Hence, if we reach a belief that is already elsewhere in the tree, it will be duplicated.¹
Second, we estimate the value of the current belief state by propagating value estimates up from the
fringe nodes, to their ancestors, all the way to the root. An approximate value function is generally
used at the fringe of the tree to approximate the infinite-horizon value. We are particularly interested
in the case where a lower bound and an upper bound on the value of the fringe belief states are available, as this allows us to get a bound on the error at any specific node. The lower and upper bounds can be propagated to parent nodes according to:
$$U_T(b) = \begin{cases} U(b) & \text{if } b \text{ is a leaf in } T, \\ \max_{a \in A} U_T(b,a) & \text{otherwise;} \end{cases} \tag{1}$$
$$U_T(b,a) = R(b,a) + \gamma \sum_{o \in \Omega} P(o|b,a)\, U_T(\tau(b,a,o)); \tag{2}$$
$$L_T(b) = \begin{cases} L(b) & \text{if } b \text{ is a leaf in } T, \\ \max_{a \in A} L_T(b,a) & \text{otherwise;} \end{cases} \tag{3}$$
$$L_T(b,a) = R(b,a) + \gamma \sum_{o \in \Omega} P(o|b,a)\, L_T(\tau(b,a,o)); \tag{4}$$
where $U_T(b)$ and $L_T(b)$ represent the upper and lower bounds on $V^*(b)$ associated to belief state $b$ in the tree $T$, $U_T(b,a)$ and $L_T(b,a)$ represent the corresponding bounds on $Q^*(b,a)$, and $L(b)$ and $U(b)$ are the bounds on fringe nodes, typically computed offline.
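These recursions translate directly into code. A sketch follows, with tree nodes as plain Python objects; the field names are mine, not the paper's.

    def propagate_bounds(node, gamma):
        """Recompute U_T and L_T at an internal OR-node from its children
        (Eqs. 1-4). Each action child holds P(o|b,a) and successor nodes."""
        if node.is_leaf:
            return                                    # keep offline U(b), L(b)
        for act in node.actions:
            exp_U = sum(p * child.U for p, child in act.successors)
            exp_L = sum(p * child.L for p, child in act.successors)
            act.U = act.reward + gamma * exp_U        # Eq. 2
            act.L = act.reward + gamma * exp_L        # Eq. 4
        node.U = max(act.U for act in node.actions)   # Eq. 1
        node.L = max(act.L for act in node.actions)   # Eq. 3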
Performing a complete $k$-step lookahead search multiplies the error bound on the approximate value function used at the fringe by $\gamma^k$ [13], and thus ensures better value estimates. However, it has
complexity exponential in k, and may explore belief states that have very small probabilities of occurring (and an equally small impact on the value function) as well as exploring suboptimal actions
(which have no impact on the value function). We would evidently prefer to have a more efficient
online algorithm, which can guarantee equivalent or better error bounds. In particular, we believe
that the best way to achieve this is to have a search algorithm which uses estimates of error reduction
as a criterion to guide the search over the reachable beliefs.
4
Anytime Error Minimization Search
In this section, we review the Anytime Error Minimization Search (AEMS) algorithm we had first introduced in [11] and present a novel mathematical derivation of the heuristics that we had suggested. We also provide new theoretical results describing sufficient conditions under which the heuristics are guaranteed to yield $\epsilon$-optimal solutions.
Our approach uses a best-first search of the belief reachability tree, where error minimization (at the root node) is used as the search criterion to select which fringe nodes to expand next. Thus we need a way to express the error on the current belief (i.e. root node) as a function of the error at the fringe nodes. This is provided in Theorem 1. Let us denote (i) $\mathcal{F}(T)$, the set of fringe nodes of a tree $T$; (ii) $e_T(b) = V^*(b) - L_T(b)$, the error function for node $b$ in the tree $T$; (iii) $e(b) = V^*(b) - L(b)$, the error at a fringe node $b \in \mathcal{F}(T)$; (iv) $h_T^{b_0,b}$, the unique action/observation sequence that leads from the root $b_0$ to belief $b$ in tree $T$; (v) $d(h)$, the depth of an action/observation sequence $h$ (number of actions); and (vi) $P(h|b_0, \pi^*) = \prod_{i=1}^{d(h)} P(h_o^i | b_{i-1}^h, h_a^i)\, \pi^*(b_{i-1}^h, h_a^i)$, the probability of executing the action/observation sequence $h$ if we follow the optimal policy $\pi^*$ from the root node $b_0$ (where $h_a^i$ and $h_o^i$ refer to the $i$th action and observation in the sequence $h$, and $b_i^h$ is the belief obtained after taking the first $i$ actions and observations from belief $b_0$; $\pi^*(b,a)$ is the probability that the optimal policy chooses action $a$ in belief $b$).
By abuse of notation, we will use $b$ to represent both a belief node in the tree and its associated belief.²
¹ We are considering using a technique proposed in the LAO* algorithm [12] to handle cycles, but we have not investigated this fully, especially in terms of how it affects the heuristic value presented below.
² E.g., $\sum_{b \in \mathcal{F}(T)}$ should be interpreted as a sum over all fringe nodes in the tree, while $e(b)$ is the error associated to the belief in fringe node $b$.
Theorem 1. In any tree $T$, $e_T(b_0) \le \sum_{b \in \mathcal{F}(T)} \gamma^{d(h_T^{b_0,b})}\, P(h_T^{b_0,b} | b_0, \pi^*)\, e(b)$.
Proof. Consider an arbitrary parent node $b$ in tree $T$ and let us denote $\hat{a}_T^b = \operatorname{argmax}_{a \in A} L_T(b,a)$. We have $e_T(b) = V^*(b) - L_T(b)$. If $\hat{a}_T^b = \pi^*(b)$, then $e_T(b) = \gamma \sum_{o \in \Omega} P(o|b,\pi^*(b))\, e_T(\tau(b,\pi^*(b),o))$. On the other hand, when $\hat{a}_T^b \ne \pi^*(b)$, then we know that $L_T(b,\pi^*(b)) \le L_T(b,\hat{a}_T^b)$ and therefore $e_T(b) \le \gamma \sum_{o \in \Omega} P(o|b,\pi^*(b))\, e_T(\tau(b,\pi^*(b),o))$. Consequently, we have the following:
$$e_T(b) \le \begin{cases} e(b) & \text{if } b \in \mathcal{F}(T), \\ \gamma \sum_{o \in \Omega} P(o|b,\pi^*(b))\, e_T(\tau(b,\pi^*(b),o)) & \text{otherwise.} \end{cases}$$
Then $e_T(b_0) \le \sum_{b \in \mathcal{F}(T)} \gamma^{d(h_T^{b_0,b})} P(h_T^{b_0,b}|b_0,\pi^*)\, e(b)$ can be easily shown by induction.
4.1
Search Heuristics
From Theorem 1, we see that the contribution of each fringe node to the error in $b_0$ is simply the term $\gamma^{d(h_T^{b_0,b})} P(h_T^{b_0,b}|b_0,\pi^*)\, e(b)$. Consequently, if we want to minimize $e_T(b_0)$ as quickly as possible, we should expand the fringe nodes reached by the optimal policy $\pi^*$ that maximize the term $\gamma^{d(h_T^{b_0,b})} P(h_T^{b_0,b}|b_0,\pi^*)\, e(b)$, as they offer the greatest potential to reduce $e_T(b_0)$. This suggests a sound heuristic to explore the tree in a best-first-search way. Unfortunately we know neither $V^*$ nor $\pi^*$, which are required to compute the terms $e(b)$ and $P(h_T^{b_0,b}|b_0,\pi^*)$; nevertheless, we can approximate them. First, the term $e(b)$ can be estimated by the difference between the lower and upper bound. We define $\hat{e}(b) = U(b) - L(b)$ as an estimate of the error introduced by our bounds at fringe node $b$. Clearly, $\hat{e}(b) \ge e(b)$ since $U(b) \ge V^*(b)$.
To approximate $P(h_T^{b_0,b}|b_0,\pi^*)$, we can view the term $\pi^*(b,a)$ as the probability that action $a$ is optimal in belief $b$. Thus, we consider an approximate policy $\tilde{\pi}_T$ that represents the probability that action $a$ is optimal in belief state $b$ given the bounds $L_T(b,a)$ and $U_T(b,a)$ that we have on $Q^*(b,a)$ in tree $T$. More precisely, to compute $\tilde{\pi}_T(b,a)$, we consider $Q^*(b,a)$ as a random variable and make some assumptions about its underlying probability distribution. Once cumulative distribution functions $F_T^{b,a}$, s.t. $F_T^{b,a}(x) = P(Q^*(b,a) \le x)$, and their associated density functions $f_T^{b,a}$ are determined for each $(b,a)$ in tree $T$, we can compute the probability $\tilde{\pi}_T(b,a) = P(Q^*(b,a') \le Q^*(b,a)\ \forall a' \ne a) = \int_{-\infty}^{\infty} f_T^{b,a}(x) \prod_{a' \ne a} F_T^{b,a'}(x)\, dx$. Computing this integral may not be computationally efficient, depending on how we define the functions $f_T^{b,a}$. We consider two approximations.
One possible approximation is to simply compute the probability that the Q-value of a given action is higher than its parent belief state value (instead of all actions' Q-values). In this case, we get $\tilde{\pi}_T(b,a) = \int_{-\infty}^{\infty} f_T^{b,a}(x)\, F_T^b(x)\, dx$, where $F_T^b$ is the cumulative distribution function for $V^*(b)$, given bounds $L_T(b)$ and $U_T(b)$ in tree $T$. Hence by considering both $Q^*(b,a)$ and $V^*(b)$ as random variables with uniform distributions between their respective lower and upper bounds, we get:
$$\tilde{\pi}_T(b,a) = \begin{cases} \eta\,\dfrac{(U_T(b,a) - L_T(b))^2}{U_T(b,a) - L_T(b,a)} & \text{if } U_T(b,a) > L_T(b), \\ 0 & \text{otherwise,} \end{cases} \tag{5}$$
where $\eta$ is a normalization constant such that $\sum_{a \in A} \tilde{\pi}_T(b,a) = 1$. Notice that if the density function is 0 outside the interval between the lower and upper bound, then $\tilde{\pi}_T(b,a) = 0$ for dominated actions; thus they are implicitly pruned from the search tree by this method.
A second practical approximation is:
$$\tilde{\pi}_T(b,a) = \begin{cases} 1 & \text{if } a = \operatorname{argmax}_{a' \in A} U_T(b,a'), \\ 0 & \text{otherwise,} \end{cases} \tag{6}$$
which simply selects the action that maximizes the upper bound. This restricts exploration of the search tree to those fringe nodes that are reached by sequences of actions that maximize the upper bound of their parent belief state, as done in the AO* algorithm [14]. The nice property of this approximation is that these fringe nodes are the only nodes that can potentially reduce the upper bound in $b_0$.
Using either of these two approximations for $\tilde{\pi}_T$, we can estimate the error contribution $\hat{e}_T(b_0, b)$ of a fringe node $b$ on the value of root belief $b_0$ in tree $T$ as $\hat{e}_T(b_0,b) = \gamma^{d(h_T^{b_0,b})}\, P(h_T^{b_0,b}|b_0,\tilde{\pi}_T)\, \hat{e}(b)$. Using this as a heuristic, the next fringe node $\tilde{b}(T)$ to expand in tree $T$ is defined as $\tilde{b}(T) = \operatorname{argmax}_{b \in \mathcal{F}(T)} \gamma^{d(h_T^{b_0,b})} P(h_T^{b_0,b}|b_0,\tilde{\pi}_T)\, \hat{e}(b)$. We use AEMS1³ to denote the heuristic that uses $\tilde{\pi}_T$ as defined in Equation 5, and AEMS2⁴ to denote the heuristic that uses $\tilde{\pi}_T$ as defined in Equation 6.
4.2
Algorithm
Algorithm 1 presents the anytime error minimization search. Since the objective is to provide a near-optimal action within a finite allowed online planning time, the algorithm accepts two input parameters: $t$, the online search time allowed per action, and $\epsilon$, the desired precision on the value function.
Algorithm 1 AEMS: Anytime Error Minimization Search
Function SEARCH(t, ε)
Static: T: an AND-OR tree representing the current search tree.
t0 ← TIME()
while TIME() - t0 ≤ t and not SOLVED(ROOT(T), ε) do
    b* ← b̃(T)
    EXPAND(b*)
    UPDATEANCESTORS(b*)
end while
return argmax_{a∈A} L_T(ROOT(T), a)
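The selection rule $\tilde{b}(T)$ is cheap to evaluate for AEMS2 in particular. A Python sketch follows (node fields are illustrative): the value is $\gamma^d$ times the product of observation probabilities along the path, times $\hat{e}(b)$, and it is zero unless every action on the path maximized its parent's upper bound.

    def aems2_heuristic(node, gamma):
        """Approximate error contribution of a fringe node (AEMS2 variant).
        node.path: list of (or_node, action, obs_prob) from the root."""
        value = node.U - node.L                      # e_hat(b) = U(b) - L(b)
        for or_node, action, obs_prob in node.path:
            if action is not max(or_node.actions, key=lambda a: a.U):
                return 0.0                           # pi_tilde = 0 off the
            value *= gamma * obs_prob                # greedy-upper-bound path
        return value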
The EXPAND function expands the tree one level under the node $b^*$ by adding the next action and belief nodes to the tree $T$ and computing their lower and upper bounds according to Equations 1-4. After a node is expanded, the UPDATEANCESTORS function simply recomputes the bounds of its ancestors according to Equations 1-4. It also recomputes the probabilities $\tilde{\pi}_T(b,a)$ and the best actions for each ancestor node. To find quickly the node that maximizes the heuristic in the whole tree, each node in the tree contains a reference to the best node to expand in its subtree. These references are updated by the UPDATEANCESTORS function without adding more complexity, such that when this function terminates, we always know immediately which node to expand next, as its reference is stored in the root node. The search terminates whenever there is no more time available, or we have found an $\epsilon$-optimal solution (verified by the SOLVED function). After an action is executed in the environment, the tree $T$ is updated such that our new current belief state becomes the root of $T$; all nodes under this new root can be reused at the next time step.
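Putting the pieces together, the anytime loop of Algorithm 1 can be sketched as follows. The `expand` and `update_ancestors` routines would implement the bound recursions and heuristic-reference updates described above; all names are illustrative.

    import time

    def aems_search(tree, t_budget, eps, gamma):
        """Anytime Error Minimization Search (sketch of Algorithm 1)."""
        t0 = time.time()
        root = tree.root
        while time.time() - t0 <= t_budget and root.U - root.L > eps:
            node = tree.best_fringe_node()     # b* = b_tilde(T), cached at root
            expand(node, gamma)                # add children, init offline bounds
            update_ancestors(node, gamma)      # Eqs. 1-4 + refresh heuristic refs
        return max(root.actions, key=lambda a: a.L).action_id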
4.3
Completeness and Optimality
We now provide some sufficient conditions under which our heuristic search is guaranteed to converge to an $\epsilon$-optimal policy after a finite number of expansions. We show that the heuristics proposed in Section 4.1 satisfy those conditions, and therefore are admissible. Before we present the main theorems, we provide some useful preliminary lemmas.
Lemma 1. In any tree $T$, the approximate error contribution $\hat{e}_T(b_0, b_d)$ of a belief node $b_d$ at depth $d$ is bounded by $\hat{e}_T(b_0, b_d) \le \gamma^d \sup_b \hat{e}(b)$.
Proof. $P(h_T^{b_0,b}|b_0,\tilde{\pi}_T) \le 1$ and $\hat{e}(b) \le \sup_{b'} \hat{e}(b')$ for all $b$. Thus $\hat{e}_T(b_0, b_d) \le \gamma^d \sup_b \hat{e}(b)$.
For the following lemma and theorem, we will denote $P(h_o|b_0, h_a) = \prod_{i=1}^{d(h)} P(h_o^i | b_{i-1}^h, h_a^i)$ the probability of observing the sequence of observations $h_o$ in some action/observation sequence $h$, given that the sequence of actions $h_a$ in $h$ is performed from current belief $b_0$, and $\hat{\mathcal{F}}(T) \subseteq \mathcal{F}(T)$ the set of all fringe nodes in $T$ such that $P(h_T^{b_0,b}|b_0,\tilde{\pi}_T) > 0$, for $\tilde{\pi}_T$ defined as in Equation 6 (i.e. the set of fringe nodes reached by a sequence of actions in which each action maximizes $U_T(b,a)$ in its respective belief state).
³ This heuristic is slightly different from the AEMS1 heuristic we had introduced in [11].
⁴ This is the same as the AEMS2 heuristic we had introduced in [11].
Lemma 2. For any tree $T$, $\epsilon > 0$, and $D$ such that $\gamma^D \sup_b \hat{e}(b) \le \epsilon$, if for all $b \in \hat{\mathcal{F}}(T)$, either $d(h_T^{b_0,b}) \ge D$ or there exists an ancestor $b'$ of $b$ such that $\hat{e}_T(b') \le \epsilon$, then $\hat{e}_T(b_0) \le \epsilon$.
Proof. Let us denote $\hat{a}_T^b = \operatorname{argmax}_{a \in A} U_T(b,a)$. Notice that for any tree $T$ and parent belief $b \in T$, $\hat{e}_T(b) = U_T(b) - L_T(b) \le U_T(b,\hat{a}_T^b) - L_T(b,\hat{a}_T^b) = \gamma \sum_{o \in \Omega} P(o|b,\hat{a}_T^b)\, \hat{e}_T(\tau(b,\hat{a}_T^b,o))$. Consequently, the following recurrence is an upper bound on $\hat{e}_T(b)$:
$$\hat{e}_T(b) \le \begin{cases} \hat{e}(b) & \text{if } b \in \mathcal{F}(T), \\ \epsilon & \text{if } \hat{e}_T(b) \le \epsilon, \\ \gamma \sum_{o \in \Omega} P(o|b,\hat{a}_T^b)\, \hat{e}_T(\tau(b,\hat{a}_T^b,o)) & \text{otherwise.} \end{cases}$$
By unfolding the recurrence for $b_0$, we get $\hat{e}_T(b_0) \le \sum_{b \in A(T)} \gamma^{d(h_T^{b_0,b})} P(h_{T,o}^{b_0,b}|b_0, h_{T,a}^{b_0,b})\, \hat{e}(b) + \epsilon \sum_{b \in B(T)} \gamma^{d(h_T^{b_0,b})} P(h_{T,o}^{b_0,b}|b_0, h_{T,a}^{b_0,b})$, where $B(T)$ is the set of parent nodes $b$ having a descendant in $\hat{\mathcal{F}}(T)$ such that $\hat{e}_T(b) \le \epsilon$, and $A(T)$ is the set of fringe nodes in $\hat{\mathcal{F}}(T)$ not having an ancestor in $B(T)$. Hence if for all $b \in \hat{\mathcal{F}}(T)$, $d(h_T^{b_0,b}) \ge D$ or there exists an ancestor $b'$ of $b$ such that $\hat{e}_T(b') \le \epsilon$, then this means that for all $b$ in $A(T)$, $d(h_T^{b_0,b}) \ge D$, and therefore $\hat{e}_T(b_0) \le \gamma^D \sup_b \hat{e}(b) \sum_{b' \in A(T)} P(h_{T,o}^{b_0,b'}|b_0, h_{T,a}^{b_0,b'}) + \epsilon \sum_{b' \in B(T)} P(h_{T,o}^{b_0,b'}|b_0, h_{T,a}^{b_0,b'}) \le \epsilon \sum_{b' \in A(T) \cup B(T)} P(h_{T,o}^{b_0,b'}|b_0, h_{T,a}^{b_0,b'}) \le \epsilon$.
Theorem 2. For any tree $T$ and $\epsilon > 0$, if $\tilde{\pi}_T$ is defined such that $\inf_{b,T\,|\,\hat{e}_T(b)>\epsilon} \tilde{\pi}_T(b, \hat{a}_T^b) > 0$ for $\hat{a}_T^b = \operatorname{argmax}_{a \in A} U_T(b,a)$, then Algorithm 1 using $\tilde{b}(T)$ is complete and $\epsilon$-optimal.
Proof. If $\gamma = 0$, then the proof is immediate. Consider now the case where $\gamma \in (0,1)$. Clearly, since $U$ is bounded above and $L$ is bounded below, $\hat{e}$ is bounded above. Now using $\gamma \in (0,1)$, we can find a positive integer $D$ such that $\gamma^D \sup_b \hat{e}(b) \le \epsilon$. Let us denote $\mathcal{A}_T^b$ the set of ancestor belief states of $b$ in the tree $T$, and given a finite set $A$ of belief nodes, let us define $\hat{e}_T^{\min}(A) = \min_{b \in A} \hat{e}_T(b)$. Now let us define $\mathcal{T}_b = \{T \mid T \text{ finite},\ b \in \hat{\mathcal{F}}(T),\ \hat{e}_T^{\min}(\mathcal{A}_T^b) > \epsilon\}$ and $B = \{b \mid \hat{e}(b) \inf_{T \in \mathcal{T}_b} P(h_T^{b_0,b}|b_0,\tilde{\pi}_T) > 0,\ d(h_T^{b_0,b}) < D\}$. Clearly, by the assumption that $\inf_{b,T\,|\,\hat{e}_T(b)>\epsilon} \tilde{\pi}_T(b,\hat{a}_T^b) > 0$, $B$ contains all belief states $b$ within depth $D$ such that $\hat{e}(b) > 0$, $P(h_{T,o}^{b_0,b}|b_0,h_{T,a}^{b_0,b}) > 0$ and there exists a finite tree $T$ where $b \in \hat{\mathcal{F}}(T)$ and all ancestors $b'$ of $b$ have $\hat{e}_T(b') > \epsilon$. Furthermore, $B$ is finite since there are only finitely many belief states within depth $D$. Hence there exists $E_{\min} = \min_{b \in B} \gamma^{d(h_T^{b_0,b})}\, \hat{e}(b) \inf_{T \in \mathcal{T}_b} P(h_T^{b_0,b}|b_0,\tilde{\pi}_T)$. Clearly, $E_{\min} > 0$ and we know that for any tree $T$, all beliefs $b$ in $B \cap \hat{\mathcal{F}}(T)$ have an approximate error contribution $\hat{e}_T(b_0,b) \ge E_{\min}$. Since $E_{\min} > 0$ and $\gamma \in (0,1)$, there exists a positive integer $D'$ such that $\gamma^{D'} \sup_b \hat{e}(b) < E_{\min}$. Hence by Lemma 1, this means that Algorithm 1 cannot expand any node at depth $D'$ or beyond before expanding a tree $T$ where $B \cap \hat{\mathcal{F}}(T) = \emptyset$. Because there are only finitely many nodes within depth $D'$, it is clear that Algorithm 1 will reach such a tree $T$ after a finite number of expansions. Furthermore, for this tree $T$, since $B \cap \hat{\mathcal{F}}(T) = \emptyset$, we have that for all beliefs $b \in \hat{\mathcal{F}}(T)$, either $d(h_T^{b_0,b}) \ge D$ or $\hat{e}_T^{\min}(\mathcal{A}_T^b) \le \epsilon$. Hence by Lemma 2, this implies that $\hat{e}_T(b_0) \le \epsilon$, and consequently Algorithm 1 will terminate after a finite number of expansions (SOLVED($b_0, \epsilon$) will evaluate to true) with an $\epsilon$-optimal solution (since $e_T(b_0) \le \hat{e}_T(b_0)$).
From this last theorem, we notice that we can potentially develop many different admissible heuristics for Algorithm 1; the main sufficient condition is that $\tilde{\pi}_T(b,a) > 0$ for $a = \operatorname{argmax}_{a' \in A} U_T(b,a')$. It also follows from this theorem that the two heuristics described above, AEMS1 and AEMS2, are admissible. The following corollaries prove this:
Corollary 1. Algorithm 1, using $\tilde{b}(T)$, with $\tilde{\pi}_T$ as defined in Equation 6, is complete and $\epsilon$-optimal.
Proof. Immediate by Theorem 2 and the fact that $\tilde{\pi}_T(b, \hat{a}_T^b) = 1$ for all $b, T$.
Corollary 2. Algorithm 1, using $\tilde{b}(T)$, with $\tilde{\pi}_T$ as defined in Equation 5, is complete and $\epsilon$-optimal.
Proof. We first notice that $(U_T(b,a) - L_T(b))^2/(U_T(b,a) - L_T(b,a)) \le \hat{e}_T(b,a)$, since $L_T(b) \ge L_T(b,a)$ for all $a$. Furthermore, $\hat{e}_T(b,a) \le \sup_{b'} \hat{e}(b')$. Therefore the normalization constant $\eta \ge (|A| \sup_b \hat{e}(b))^{-1}$. For $\hat{a}_T^b = \operatorname{argmax}_{a \in A} U_T(b,a)$, we have $U_T(b,\hat{a}_T^b) = U_T(b)$, and therefore $U_T(b,\hat{a}_T^b) - L_T(b) = \hat{e}_T(b)$. Hence this means that $\tilde{\pi}_T(b,\hat{a}_T^b) = \eta\,(\hat{e}_T(b))^2/\hat{e}_T(b,\hat{a}_T^b) \ge (|A|(\sup_{b'} \hat{e}(b'))^2)^{-1}(\hat{e}_T(b))^2$ for all $T, b$. Hence, for any $\epsilon > 0$, $\inf_{b,T\,|\,\hat{e}_T(b)>\epsilon} \tilde{\pi}_T(b,\hat{a}_T^b) \ge (|A|(\sup_b \hat{e}(b))^2)^{-1}\epsilon^2 > 0$. Hence, the corollary follows from Theorem 2.
5
Experiments
In this section we present a brief experimental evaluation of Algorithm 1, showing that in addition to its useful theoretical properties, the empirical performance matches, and in some cases exceeds, that of other online approaches. The algorithm is evaluated in three large POMDP environments: Tag [1], RockSample [3] and FieldVisionRockSample (FVRS) [11]; all are implemented using a factored state representation. In each environment we compute the Blind policy⁵ to get a lower bound and the FIB algorithm [15] to get an upper bound. We then compare the performance of Algorithm 1 with both heuristics (AEMS1 and AEMS2) to the performance achieved by other online approaches (Satia [7], BI-POMDP [8], RTBSS [10]). For all approaches we impose a real-time constraint of 1 sec/action, and measure the following metrics: average return, average error bound reduction⁶ (EBR), average lower bound improvement⁷ (LBI), number of belief nodes explored at each time step, percentage of belief nodes reused in the next time step, and the average online time per action (< 1 s means the algorithm found an ε-optimal action)⁸. Satia, BI-POMDP, AEMS1 and AEMS2 were all implemented using the same algorithm, since they differ only in their choice of search heuristic used to guide the search. RTBSS served as a baseline for a complete k-step lookahead search using branch & bound pruning. All results were obtained on a Xeon 2.4 GHz with 4 GB of RAM, but the processes were limited to use a maximum of 1 GB of RAM.
Table 1 shows the average value (over 1000+ runs) of the different statistics. As we can see from
these results, AEMS2 provides the best average return, average error bound reduction and average
lower bound improvement in all considered environments. The higher error bound reduction and
lower bound improvement obtained by AEMS2 indicates that it can guarantee performance closer
to the optimal. We can also observe that AEMS2 has the best average reuse percentage, which
indicates that AEMS2 is able to guide the search toward the most probable nodes and allows it to
generally maintain a higher number of belief nodes in the tree. Notice that AEMS1 did not perform
very well, except in FVRS[5,7]. This could be explained by the fact that our assumption that the
values of the actions are uniformly distributed between the lower and upper bounds is not valid in
the considered environments.
Finally, we also examined how fast the lower and upper bounds converge if we let the algorithm run
up to 1000 seconds on the initial belief state. This gives an indication of which heuristic would be
the best if we extended online planning time past 1sec. Results for RockSample[7,8] are presented
in Figure 2, showing that the bounds converge much more quickly for the AEMS2 heuristic.
6
Conclusion
In this paper we examined theoretical properties of online heuristic search algorithms for POMDPs.
To this end, we described a general online search framework, and examined two admissible heuristics to guide the search. The first assumes that $Q^*(b, a)$ is distributed uniformly at random between the bounds (Heuristic AEMS1); the second favors an optimistic point of view and assumes
that $Q^*(b, a)$ is equal to the upper bound (Heuristic AEMS2). We provided a general theorem that
shows that AEMS1 and AEMS2 are admissible and lead to complete and $\epsilon$-optimal algorithms. Our
experimental work supports the theoretical analysis, showing that AEMS2 is able to outperform other online approaches. Yet it is equally interesting to note that AEMS1 did not perform nearly as well.
This highlights the fact that not all admissible heuristics are equally useful. Thus it will be interesting in the future to develop further guidelines and theoretical results describing which subclasses of
heuristics are most appropriate.
5. The policy obtained by taking the combination of the $|A|$ $\alpha$-vectors that each represents the value of a policy performing the same action in every belief state.
6. The error bound reduction is defined as $1 - \frac{U_T(b_0) - L_T(b_0)}{U(b_0) - L(b_0)}$, when the search process terminates on $b_0$.
7. The lower bound improvement is defined as $L_T(b_0) - L(b_0)$, when the search process terminates on $b_0$.
8. For RTBSS, the maximum search depth under the 1 sec time constraint is shown in parentheses.
Table 1: Comparison of different online search algorithms in different environments.

Heuristic / Algorithm | Return | EBR (%) | LBI  | Belief Nodes | Reuse (%) | Time (ms)

Tag (|S| = 870, |A| = 5, |Ω| = 30)
RTBSS(5)      | -10.30 | 22.3 | 3.03 | 45067 | 0    | 580
Satia & Lave  | -8.35  | 22.9 | 2.47 | 36908 | 10.0 | 856
AEMS1         | -6.73  | 49.0 | 3.92 | 43693 | 25.1 | 814
BI-POMDP      | -6.22  | 76.2 | 7.81 | 79508 | 54.6 | 622
AEMS2         | -6.19  | 76.3 | 7.81 | 80250 | 54.8 | 623

RockSample[7,8] (|S| = 12545, |A| = 13, |Ω| = 2)
Satia & Lave  | 7.35   | 3.6  | 0    | 509   | 8.9  | 900
AEMS1         | 10.30  | 9.5  | 0.90 | 579   | 5.3  | 916
RTBSS(2)      | 10.30  | 9.7  | 1.00 | 439   | 0    | 896
BI-POMDP      | 18.43  | 33.3 | 4.33 | 2152  | 29.9 | 953
AEMS2         | 20.75  | 52.4 | 5.30 | 3145  | 36.4 | 859

FVRS[5,7] (|S| = 3201, |A| = 5, |Ω| = 128)
RTBSS(1)      | 20.57  | 7.7  | 2.07 | 516   | 0    | 254
BI-POMDP      | 22.75  | 11.1 | 2.08 | 4457  | 0.4  | 923
Satia & Lave  | 22.79  | 11.1 | 2.05 | 3683  | 0.4  | 947
AEMS1         | 23.31  | 12.4 | 2.24 | 3856  | 1.4  | 942
AEMS2         | 23.39  | 13.3 | 2.35 | 4070  | 1.6  | 944

Figure 2: Evolution of the upper / lower bounds on the initial belief state in RockSample[7,8]. [Plot omitted: V(b0) versus time (s) on a log scale for AEMS2, AEMS1, BI-POMDP and Satia.]
Acknowledgments
This research was supported by the Natural Sciences and Engineering Research Council of Canada
(NSERC) and the Fonds Québécois de la Recherche sur la Nature et les Technologies (FQRNT).
References
[1] J. Pineau. Tractable planning under uncertainty: exploiting structure. PhD thesis, Carnegie Mellon
University, Pittsburgh, PA, 2004.
[2] P. Poupart. Exploiting structure to efficiently solve large scale partially observable Markov decision
processes. PhD thesis, University of Toronto, 2005.
[3] T. Smith and R. Simmons. Point-based POMDP algorithms: improved analysis and implementation. In
UAI, 2005.
[4] M. T. J. Spaan and N. Vlassis. Perseus: randomized point-based value iteration for POMDPs. JAIR, 24:195–220, 2005.
[5] N. Roy and G. Gordon. Exponential family PCA for belief compression in POMDPs. In NIPS, 2003.
[6] P. Poupart and C. Boutilier. Value-directed compression of POMDPs. In NIPS, 2003.
[7] J. K. Satia and R. E. Lave. Markovian decision processes with probabilistic observation of states. Management Science, 20(1):1–13, 1973.
[8] R. Washington. BI-POMDP: bounded, incremental partially observable Markov model planning. In 4th Eur. Conf. on Planning, pages 440–451, 1997.
[9] D. McAllester and S. Singh. Approximate Planning for Factored POMDPs using Belief State Simplification. In UAI, 1999.
[10] S. Paquet, L. Tobin, and B. Chaib-draa. An online POMDP algorithm for complex multiagent environments. In AAMAS, 2005.
[11] S. Ross and B. Chaib-draa. AEMS: an anytime online search algorithm for approximate policy refinement
in large POMDPs. In IJCAI, 2007.
[12] E. A. Hansen and S. Zilberstein. LAO*: A heuristic search algorithm that finds solutions with loops. Artificial Intelligence, 129(1–2):35–62, 2001.
[13] M. L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley &
Sons, Inc., New York, NY, USA, 1994.
[14] N.J. Nilsson. Principles of Artificial Intelligence. Tioga Publishing, 1980.
[15] M. Hauskrecht. Value-function approximations for POMDPs. JAIR, 13:33–94, 2000.
Managing Power Consumption and Performance of
Computing Systems Using Reinforcement Learning
Gerald Tesauro, Rajarshi Das, Hoi Chan, Jeffrey O. Kephart,
Charles Lefurgy? , David W. Levine and Freeman Rawson?
IBM Watson and Austin? Research Laboratories
{gtesauro,rajarshi,hychan,kephart,lefurgy,dwl,frawson}@us.ibm.com
Abstract
Electrical power management in large-scale IT systems such as commercial datacenters is an application area of rapidly growing interest from both an economic
and ecological perspective, with billions of dollars and millions of metric tons of
CO2 emissions at stake annually. Businesses want to save power without sacrificing performance. This paper presents a reinforcement learning approach to
simultaneous online management of both performance and power consumption.
We apply RL in a realistic laboratory testbed using a Blade cluster and dynamically varying HTTP workload running on a commercial web applications middleware platform. We embed a CPU frequency controller in the Blade servers'
firmware, and we train policies for this controller using a multi-criteria reward
signal depending on both application performance and CPU power consumption.
Our testbed scenario posed a number of challenges to successful use of RL, including multiple disparate reward functions, limited decision sampling rates, and
pathologies arising when using multiple sensor readings as state variables. We
describe innovative practical solutions to these challenges, and demonstrate clear
performance improvements over both hand-designed policies as well as obvious
?cookbook? RL implementations.
1 Introduction
Energy consumption is a major and growing concern throughout the IT industry as well as for
customers and for government regulators concerned with energy and environmental matters. To cite
a prominent example, the US Congress recently mandated a study of the power efficiency of servers,
including a feasibility study of an Energy Star standard for servers and data centers [16]. Growing
interest in power management is also apparent in the formation of the Green Grid, a consortium of
systems and other vendors dedicated to improving data center power efficiency [7]. Recent trade
press articles also make it clear that computer purchasers and data center operators are eager to
reduce power consumption and the heat densities being experienced with current systems.
In response to these concerns, researchers are tackling intelligent power control of processors, memory chips and whole systems, using technologies such as processor throttling, frequency and voltage
manipulation, low-power DRAM states, feedback control using measured power values, and packing
and virtualization to reduce the number of machines that need to be powered on to run a workload.
This paper presents a reinforcement learning (RL) approach to developing effective control policies for real-time management of power consumption in application servers. Such power management policies must make intelligent tradeoffs between power and performance, as running servers
in low-power modes inevitably degrades the application performance. Our approach to this entails
designing a multi-criteria objective function $U_{pp}$ taking both power and performance into account, and using it to give reward signals in reinforcement learning. We let $U_{pp}$ be a function of mean application response time $RT$, and total power $Pwr$ consumed by the servers in a decision interval. Specifically, $U_{pp}$ subtracts a linear power cost from a performance-based utility $U(RT)$:

$$U_{pp}(RT, Pwr) = U(RT) - \epsilon \cdot Pwr \qquad (1)$$

where $\epsilon$ is a tunable coefficient expressing the relative value of power and performance objectives. This approach admits other objective functions such as 'performance value per watt' $U_{pp} = U(RT)/Pwr$, or a simple performance-based utility $U_{pp} = U(RT)$ coupled with a constraint on total power.
The problem of jointly managing performance and power in IT-systems was only recently studied in
the literature [5, 6, 17]. Existing approaches use knowledge-intensive and labor-intensive modeling,
such as developing queuing-theoretic or control-theoretic performance models. RL methods can
potentially avoid such knowledge bottlenecks, by automatically learning high-quality management
policies using little or no built-in system specific knowledge. Moreover, as we discuss later, RL may
have the merit of properly handling complex dynamic and delayed consequences of decisions.
In Section 2 we give details of our laboratory testbed, while Section 3 describes our RL approach.
Results are presented in Section 4, and the final section discusses next steps in our ongoing research
and ties to related work.
2 Experimental Testbed
Figure 1 provides a high-level overview of our experimental testbed. In brief, a Workload Generator produces an HTTP-based workload of dynamically varying intensity that is routed to a blade
cluster, i.e., a collection of blade servers contained in a single chassis. (Specifically, we use an IBM
BladeCenter containing xSeries HS20 blade servers.) A commercial performance manager and our
RL-based power manager strive to optimize a joint power-performance objective cooperatively as
load varies, each adjusting its control parameters individually while sharing certain information
with the other manager. RL techniques (described subsequently) are used to train a state-action
value function which defines the power manager's control policy. The 'state' is characterized by a set of observable performance, power and load intensity metrics collected in our data collection module as detailed below. The 'action' is a throttling of CPU frequency1 that is achieved by setting a 'powercap' on each blade that provides an upper limit on the power that the blade may consume.
Given this limit, a feedback controller embedded in the server's firmware [11] continuously monitors
the power consumption, and continuously regulates the CPU clock speed so as to keep the power
consumption close to, but not over, the powercap limit. The CPU throttling affects both application
performance as well as power consumption, and the goal of learning is to achieve the optimal level
of throttling in any given state that maximizes cumulative discounted values of joint reward Upp .
We control workload intensity by varying the number of clients $n_c$ sending HTTP requests. We varied $n_c$ in a range from 1 to 50 using a statistical time-series model of web traffic derived from
observations of a highly accessed Olympics web site [14]. Clients behave according to a closed-loop
model [12] with exponentially distributed think times of mean 125 msec.
The commercial performance manager is WebSphere Extended Deployment (WXD)[18], a multinode webserver environment providing extensive data collection and performance management
functionality. WXD manages the routing policy of the Workload Distributer as well as control
parameters on individual blades, such as the maximum workload concurrency.
Our data collector receives several streams of data and provides a synchronized report to the power policy evaluator on a time scale $\tau_l$ (typically set to 5 seconds). Data generated on much faster time scales than $\tau_l$ are time-averaged over the interval; otherwise the most recent values are reported.
Among the aggregated data are several dozen performance metrics collected by a daemon running
on the WXD data server, such as mean response time, queue length and number of CPU cycles per
transaction; CPU utilization and effective frequency collected by local daemons on each blade; and
current power and temperature measurements collected by the firmware on each blade, which are
polled using IPMI commands sent from the BladeCenter management module.
1. An alternative technique with different power/performance trade-offs is Dynamic Voltage and Frequency Scaling (DVFS).
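As a rough sketch of the data collector's aggregation rule described above (the field names are hypothetical; the real module polls WXD daemons and IPMI sensors):

def aggregate_window(samples, latest_only=("powercap", "num_clients")):
    # samples: dicts of metric readings gathered during one reporting interval.
    # Fast-changing metrics are time-averaged; slowly updated ones report the
    # most recent value, matching the collector's rule in the text.
    report = {}
    for key in samples[-1]:
        values = [s[key] for s in samples if key in s]
        report[key] = values[-1] if key in latest_only else sum(values) / len(values)
    return report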
Figure 1: Overview of testbed environment. [Diagram omitted: a Workload Generator issues HTTP requests to a Workload Distributor, which routes them to the Blade systems in the chassis; the Performance Manager (WebSphere XD) and the Power Manager exchange performance and power data through manager-to-manager interactions and apply their control policies and power assignments to the blades.]
2.1 Utility function definition
Our specific performance-based utility $U(RT)$ in Eq. 1 is a piecewise linear function of response time $RT$ which returns a maximum value of 1.0 when $RT$ is less than a specified threshold $RT_0$, and which drops linearly when $RT$ exceeds $RT_0$, i.e.,

$$U(RT/RT_0) = \begin{cases} 1.0 & \text{if } RT \leq RT_0 \\ 2.0 - RT/RT_0 & \text{otherwise} \end{cases} \qquad (2)$$
Such a utility function reflects the common assumptions in customer service level agreements that
there is no incentive to improve the performance once it reaches the target threshold, and that there
is always a constant incentive to improve performance if it violates the threshold. In all of our experiments, we set $RT_0 = 1000$ msec, and we also set the power scale factor $\epsilon = 0.01$ in Eq. 1. At this value of $\epsilon$ the power-performance tradeoff is strongly biased in favor of performance, as is commonly desired in today's data centers. However, larger values of $\epsilon$ could be appropriate in future scenarios where power is much more costly, in which case the optimal policies would tolerate more
frequent performance threshold violations in order to save more aggressively on power consumption.
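For concreteness, Eqs. (1) and (2) under these settings can be transcribed directly; this is a minimal sketch for illustration (in the testbed, RT and Pwr are measured quantities):

RT0 = 1000.0  # response time threshold (msec)
EPS = 0.01    # power scale factor epsilon

def perf_utility(rt):
    # Eq. (2): maximal utility below the threshold, linear penalty above it.
    return 1.0 if rt <= RT0 else 2.0 - rt / RT0

def joint_utility(rt, pwr):
    # Eq. (1): U_pp(RT, Pwr) = U(RT) - epsilon * Pwr.
    return perf_utility(rt) - EPS * pwr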
2.2 Baseline Powercap Policies
To assess the effectiveness of our RL-based power management policies, we compare with two different benchmark policies: 'UN' (unmanaged) and 'HC' (hand-crafted). The unmanaged policy
always sets the powercap to a maximal value of 120W; we verified that the CPU runs at the highest
frequency under all load conditions with this setting.
The hand-crafted policy was created as follows. We measured power consumption on a blade server at extremely low ($n_c = 1$) and high ($n_c = 50$) loads, finding that in all cases the power consumption ranged between 75 and 120 watts. Given this range, we established a grid of sample points, with $\bar p$ running from 75 watts to 120 watts in increments of 5 watts, and the number of clients running from 0 to 50 in increments of 5. For each of the 10 possible settings of $\bar p$, we held $n_c$ fixed at 50 for 45 minutes to permit WXD to adapt to the workload, and then decremented $n_c$ by 5 every 5 minutes. Finally, the models $RT(\bar p, n_c)$ and $Pwr(\bar p, n_c)$ were derived by linearly interpolating for the $RT$ and $Pwr$ between the sampled grid points.

We substitute these models into our utility function $U_{pp}(RT, Pwr)$ to obtain an equivalent utility function $U'$ depending on $\bar p$ and $n_c$, i.e., $U'(\bar p, n_c) = U_{pp}(RT(\bar p, n_c), Pwr(\bar p, n_c))$. We can then choose the optimal powercap for any workload intensity $n_c$ by optimizing $U'$: $\bar p^*(n_c) = \arg\max_{\bar p} U'(\bar p, n_c)$.
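A sketch of this construction is below. The grid measurements are replaced by synthetic placeholder arrays of plausible shape (the paper's grids were measured on the live testbed), and SciPy's RegularGridInterpolator is one reasonable stand-in for the linear interpolation described:

import numpy as np
from scipy.interpolate import RegularGridInterpolator

caps = np.arange(75, 121, 5)   # powercap grid (watts), 75..120
clients = np.arange(0, 51, 5)  # workload grid (number of clients), 0..50

# Synthetic placeholder measurements: response time grows with load and with
# throttling; power grows with the cap and with load.
RT_grid = 200.0 + 0.5 * np.outer(120 - caps, clients)
PWR_grid = np.add.outer(0.9 * caps, 0.3 * clients)

rt_model = RegularGridInterpolator((caps, clients), RT_grid)
pwr_model = RegularGridInterpolator((caps, clients), PWR_grid)

def u_prime(p, n_c):
    # Equivalent utility U'(p, n_c) = U_pp(RT(p, n_c), Pwr(p, n_c)),
    # with RT0 = 1000 msec and epsilon = 0.01 as in Section 2.1.
    rt = float(rt_model((p, n_c)))
    pwr = float(pwr_model((p, n_c)))
    u = 1.0 if rt <= 1000.0 else 2.0 - rt / 1000.0
    return u - 0.01 * pwr

def handcrafted_powercap(n_c):
    # Optimal powercap for workload n_c: grid search over U'(p, n_c).
    candidates = np.linspace(75, 120, 181)
    return float(candidates[int(np.argmax([u_prime(p, n_c) for p in candidates]))])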
3 Reinforcement Learning Approach
One may naturally question whether RL could be capable of learning effective control policies for
systems as complex as a population of human users interacting with a commercial web application.
Such systems are surely far from full observability in the MDP sense. Without even considering whether the behavior of users is 'Markovian,' we note that the state of a web application may
depend, for example, on the states of the underlying middleware and Java Virtual Machines (JVMs),
and these states are not only unobservable, they also have complex historical dependencies on prior
load and performance levels over multiple time scales. Despite such complexities, we have found in
our earlier work [15, 9] that RL can in fact learn decent policies when using severely limited state
descriptions, such as a single state variable representing current load intensity. The focus of our
work in this paper is to examine empirically whether RL may obtain better policies by including
more observable metrics in the state description.
Another important question is whether current decisions have long-range effects, or if it suffices to
simply learn policies that optimize immediate reward. The answer appears to vary in an interesting way: under low load conditions, the system response to a decision is fairly immediate, whereas
under conditions of high queue length (which may result from poor throttling decisions), the responsiveness to decisions may become sluggish and considerably delayed.
Our reinforcement learning approach leverages our recent 'Hybrid RL' approach [15], which originally was applied to autonomic server allocation. Hybrid RL is a form of offline (batch) RL that
entails devising an initial control policy, running the initial policy in the live system and logging a set
of (state, action, reward) tuples, and then using a standard RL/function approximator combination
to learn a value function $V(s, a)$ estimating cumulative expected reward of taking action $a$ in state $s$. (The term 'Hybrid' refers to the fact that expert domain knowledge can be engineered into the initial policy without needing explicit engineering or interfacing into the RL module.) The learned value function $V$ then implies a policy of selecting the action $a^*$ in state $s$ with highest expected value, i.e., $a^* = \arg\max_a V(s, a)$.
For technical reasons detailed below, we use the Sarsa(0) update rule rather than Q-Learning (note that unlike textbook Sarsa, decisions are made by an external fixed policy). Following [15], we set the discount parameter $\gamma = 0.5$; we found some preliminary evidence that this is superior to setting $\gamma = 0.0$ but haven't been able to systematically study the effect of varying $\gamma$. We also
perform standard direct gradient training of neural net weights: we train a multilayer perceptron
with 12 sigmoidal hidden units, using backprop to compute the weight changes. Such an approach
is appealing, as it is simple to implement and has a proven track record of success in many practical
applications. There is a theoretical risk that the approach could produce value function divergence.
However, we have not seen such divergence in our application. Were it to occur, it would not
entail any live performance costs, since we train offline. Additionally, we note that instead of direct
gradient training, we can use Baird's residual gradient method [4], which guarantees convergence to local Bellman error minima. In practice we find that direct gradient training yields good convergence to Bellman error minima in ≈5–10K training epochs, requiring only a few CPU minutes on a 3GHz
workstation.
In implementing an initial policy to be used with Hybrid RL, one would generally want to exploit
the best available human-designed policy, combined with sufficient randomized exploration needed
by RL, in order to achieve the best possible learned policy. However, in view of the difficulty
expected in designing such initial policies, it would be advantageous to be able to learn effective
policies starting from simplistic initial policies. We have therefore trained our RL policies using an extremely simple performance-biased random walk policy for setting the powercap, which operates as follows: at every decision point, $\bar p$ either is increased by 1 watt with probability $p_+$, or decreased by 1 watt with probability $p_- = (1 - p_+)$. The upward bias $p_+$ depends on the ratio $r = RT/RT_0$ of current mean response time to response time threshold according to $p_+ = r/(1 + r)$. Note that this rule implies an unbiased random walk when $r = 1$, that $p_+ \to 1$ for $r \gg 1$, and that $p_+ \to 0$ when $r \ll 1$. This simple rule seems to strike a good balance between keeping the performance near the desired threshold, while providing plenty of exploration needed by RL, as can be seen in Figure 2.
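A sketch of this initial policy follows; the clamping to the measured 75–120 W range is our assumption, as the paper does not say how boundary values are handled:

import random

def random_walk_powercap(p, rt, rt0=1000.0):
    # Performance-biased random walk: step the powercap up with probability
    # p_plus = r / (1 + r), where r = RT / RT0, else step it down.
    r = rt / rt0
    p_plus = r / (1.0 + r)
    step = 1 if random.random() < p_plus else -1
    return min(120, max(75, p + step))  # clamp to the measured range (assumed)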
Having collected training data during the execution of an initial policy, the next step of Hybrid RL is
to design an (input, output) representation and functional form of the value function approximator.
Figure 2: Traces of (a) workload intensity, (b) mean response time, and (c) powercap and consumed
power of the random-walk (RW) powercap policy.
We have initially used the basic input representation studied in [15], in which the state $s$ is represented using a single metric of workload intensity (number of clients $n_c$), and the action $a$ is a single scalar variable, the powercap $\bar p$. This scheme robustly produces decent learned policies, with little
the state representation to a much larger set of 14 state variables, and find that substantial improvements in learned policies can be obtained, provided that certain data pre-processing techniques are
used, as detailed below.
3.1 System-specific innovations
In our research in this application domain, we have devised several innovative 'tricks' enabling us to achieve substantially improved RL performance. Such tricks are worth mentioning as they are
likely to be of more general use in other problem domains with similar characteristics.
First, to represent and learn $V$, we could employ a single output unit, trained on the total utility (reward) using Q-Learning. However, we can take advantage of the fact that total utility $U_{pp}$ in equation 1 is a linear combination of performance utility $U$ and power cost $-\epsilon \cdot Pwr$. Since the separate reward components are generally observable, and since these should have completely different functional forms relying on different state variables, we propose training two separate function approximators estimating future discounted reward components $V_{perf}$ and $V_{pwr}$ respectively. This type of 'decompositional reward' problem has been studied for tabular RL in [13], where it is shown that learning the value function components using Sarsa provably converges to the correct total value function. (Note that Q-Learning cannot be used to train the value function components, as it incorrectly assumes that the optimal policy optimizes each individual component function.)
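A sketch of the decomposed targets is below; perf_net and pwr_net stand in for the paper's neural networks, and the update shown is the standard one-step Sarsa(0) regression target with the paper's gamma = 0.5:

GAMMA = 0.5

def sarsa_targets(batch, perf_net, pwr_net):
    # batch: logged (s, a, r_perf, r_pwr, s_next, a_next) tuples; each network
    # is regressed toward its own component's one-step Sarsa(0) target.
    targets = []
    for s, a, r_perf, r_pwr, s_next, a_next in batch:
        targets.append((r_perf + GAMMA * perf_net(s_next, a_next),
                        r_pwr + GAMMA * pwr_net(s_next, a_next)))
    return targets

def greedy_powercap(s, candidate_caps, perf_net, pwr_net):
    # Learned policy: maximize V = V_perf + V_pwr over candidate actions.
    return max(candidate_caps, key=lambda a: perf_net(s, a) + pwr_net(s, a))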
Second, we devised a new type of neuronal output unit to learn $V_{perf}$. This is motivated by the shape of $U$, which is a piecewise linear function of $RT$, with constant value for low $RT$ and linearly decreasing for large $RT$. This functional form is not naturally approximated by either a linear or a sigmoidal transfer function. However, by noting that the derivative of $U$ is a step function (changing from 0 to -1 at the threshold), and that sigmoids give a good approximation to step functions, this suggests using an output transfer function that behaves as the integral of a sigmoid function. Specifically, our transfer function has the form $Y(x) = 1 - \Phi(x)$ where $\Phi(x) = \int \sigma(x)\,dx + C$, where $\sigma(x) = 1/(1 + \exp(-x))$ is the standard sigmoid function, and the integration constant $C$ is chosen so that $\Phi \to 0$ as $x \to -\infty$. We find that this type of output unit is easily trained by standard backprop and provides quite a good approximation to the true expected rewards.
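Note that the indefinite integral of the sigmoid is the softplus, ln(1 + e^x), which already tends to 0 as x tends to minus infinity, so C = 0 suffices. A numerically stable sketch of the unit and of the derivative needed for backprop:

import math

def softplus(x):
    # Stable ln(1 + e^x): equals Phi(x) with the integration constant C = 0.
    return x + math.log1p(math.exp(-x)) if x > 0 else math.log1p(math.exp(x))

def output_unit(x):
    # Y(x) = 1 - Phi(x): flat near 1 for very negative x, linear decay for large x.
    return 1.0 - softplus(x)

def output_unit_grad(x):
    # dY/dx = -sigmoid(x): a smooth step from 0 to -1, matching U's derivative.
    if x >= 0:
        return -1.0 / (1.0 + math.exp(-x))
    e = math.exp(x)
    return -e / (1.0 + e)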
We have also trained separate neural networks to estimate $V_{pwr}$ using a similar hidden layer architecture and a standard linear output unit. However, we found only a slight improvement in Bellman error over a simple estimator of predicted power $\widehat{Pwr} = \bar p$ (although this is not always a good estimate). Hence for simplicity we used $V_{pwr} = -\epsilon \cdot \bar p$ in computing the overall learned policy maximizing $V = V_{perf} + V_{pwr}$.
Thirdly, we devised a data pre-processing technique to address a specific rate limitation in our system: the powercap decision $\bar p$ as well as the number of clients $n_c$ can only be changed every 30 seconds, whereas we collect state data from the system every 5 seconds. This limitation was
imposed because faster variations in effective CPU speed or in load disrupt WXD's functionality, as
its internal models estimate parameters on much slower time scales, and in particular, it assumes that
CPU speed is a constant. As a result, we cannot do standard RL on the 5 second interval data, since
this would presume the policy's ability to make a new decision every 5 seconds. A simple way to
address this would be to discard data points where a decision was not made (5/6 of the data), but this
would make the training set much smaller, and we would lose valuable state transition information
contained in the discarded samples. As an alternative, we divide the entire training set into six subsets according to line number mod-6, so that within each subset, adjacent data points are separated
by 30 second intervals. We then concatenate the subsets to form one large training set, with no loss
of data, where all adjacent intervals are 30 seconds long. In effect, a sweep through such a dataset
replays the experiment six times, corresponding to the six different 5-second phases within the 30-second decision cycle. As we shall see in the following section, such rearranged datasets result in
substantially more stable policies.
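The reordering itself is tiny; a sketch assuming chronologically ordered 5-second records:

def reorder_mod6(samples):
    # Partition by phase within the 30-second decision cycle, then concatenate,
    # so that adjacent records in the result are always 30 seconds apart.
    return [rec for phase in range(6) for rec in samples[phase::6]]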
Finally, we realized that in the artificially constructed dataset described above, there is an inaccuracy in training on samples in the five non-decision phases: standard RL would presume that the powercap decision is held constant over the full 30 seconds until the next recorded sample, whereas we know that the decision actually changes somewhere in the middle of the interval, depending on the phase. To obtain the best approximation to a constant decision over such intervals, we compute an equally weighted average $\bar p_{avg}$ of the recorded decisions at times $\{t, t+5, t+10, t+15, t+20, t+25\}$ and train on $\bar p_{avg}$ as the effective decision that was made at time $t$. This change results in a significant reduction (≈40%) in Bellman error, and the combination of this with the mod-6 data reordering enables us to obtain substantial improvements in policy performance.
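The corresponding training-time fix replaces the recorded action at time t with the mean of the six decisions covering the 30-second span, e.g.:

def effective_decision(powercaps, i):
    # powercaps: per-5-second recorded decisions; i indexes time t.
    window = powercaps[i:i + 6]  # decisions at t, t+5, ..., t+25
    return sum(window) / len(window)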
4 Results

Figure 3: Comparison of mean metrics (a) response time, (b) power consumed, (c) temperature and (d) utility for six different power management policies: 'UN' (unmanaged), 'HC' (hand-crafted), 'RW' (random walk), '2NN' (2-input neural net), '15NN' (15-input neural net, no pre-processing), '15NNp' (15-input neural net with pre-processing).
While we have conducted experiments in other work involving multiple blade servers, in this section
we focus on experiments involving a single blade. Fig. 3 plots various mean performance metrics in
identical six-hour test runs using identical workload traces for six different power management policies: 'UN' and 'HC' denote the unmanaged and hand-crafted policies described in Sec. 2.2; 'RW' is the random-walk policy of Sec. 3; '2NN' denotes a two-input (single state variable) neural net; '15NN' refers to a 15-input neural net without any data pre-processing as described in Sec. 3.1; and '15NNp' indicates a 15-input neural net using said pre-processing. In the figure, the performance metrics plotted are: (a) mean response time, (b) mean power consumed, (c) mean temperature, and most importantly, (d) mean utility. Standard errors in estimates of these mean values are quite small, as indicated by error bars which lie well within the diamond-shaped data points. Since the runs use identical workload traces, we can also assess significance of the differences in means across policies via paired T-tests; exhaustive pairwise comparisons show that in all cases, the null hypothesis of no difference in mean metrics is rejected at the 1% significance level with P-value $\leq 10^{-6}$.
We see in Fig. 3 that all RL-based policies, after what is effectively a single round of policy iteration, significantly outperform the original random walk policy which generated the training data.
Using only load intensity as a state variable, 2NN achieves utility close to (but not matching) the
hand-crafted policy. 15NN is disappointing in that its utility is actually worse than 2NN, for reasons that we discuss below. Comparing 15NNp with 15NN shows that pre-processing yields great
improvements; 15NNp is clearly the best of the six policies. Breaking down overall utility into separate power and performance components, we note that all RL-based policies achieve greater power
savings than HC at the price of somewhat higher mean response times. An additional side benefit
of this is lower mean temperatures, as shown in the lower left plot; this implies both lower cooling
costs as well as prolonged machine life.
Figure 4: Traces of the five non-random policies: (a) workload intensity; (b) UN response time; (c)
UN powercap; (d) HC response time; (e) HC powercap; (f) 2NN response time; (g) 2NN powercap;
(h) 15NN response time; (i) 15NN powercap; (j) 15NNp response time; (k) 15NNp powercap.
Fig. 4 shows the actual traces of response time, powercap and power consumed in all experiments
except the random walk, which was plotted earlier. The most salient points to note are that 15NNp
exhibits the steadiest response time, keeping closest to the response time goal, and that the powercap decisions of 15NN show quite large short-term fluctuations. We attribute the latter behavior to 'over-reacting' to response time fluctuations above or below the target value. Such behavior may well
be correct if the policy could reset every 5 seconds, as 15NN presumes. In this case, the policy
could react to a response time fluctuation by setting an extreme powercap value in an attempt to
quickly drive the response time back to the goal value, and then backing off to a less extreme value
5 seconds later. However, such behavior would be quite poor in the actual system, in which the
extreme powercap setting is held fixed for 30 seconds.
5 Summary and related work
This paper presented a successful application of batch RL combined with nonlinear function approximation in a new and challenging domain of autonomic management of power and performance
in web application servers. We addressed challenges arising both from operating in real hardware,
and from limitations imposed by interoperating with commercial middleware. By training on data from a simple random-walk initial policy, we achieved high-quality management policies that outperformed the best available hand-crafted policy. Such policies save more than 10% on server power
while keeping performance close to a desired target.
In our ongoing and future work, we are aiming to scale the approach to an entire Blade cluster, and
to achieve much greater levels of power savings. With the existing approach it appears that power
savings closer to 20% could be obtained simply by using more realistic web workload profiles in
which high-intensity spikes are brief, and the ratio of peak-to-mean workload is much higher than
in our current traffic model. It also appears that savings of ?30% are plausible when using multicore processors [8]. Finally, we are also aiming to learn policies for powering machines off when
feasible; this offers the potential to achieve power savings of 50% or more. In order to scale our
approach to larger systems, we can leverage the fact that Blade clusters usually have sets of identical
machines. All servers within such a homogeneous set can be managed in an identical fashion by the
performance and power managers, thereby making the size of the overall state space and the action
space more tractable for RL.
An important component of our future work is also to improve our current RL methodology. Beyond Hybrid RL, there has been much recent research in offline RL methods, including LSPI [10],
Apprenticeship Learning [2], Differential Dynamic Programming [1], and fitted policy iteration
minimizing Bellman residuals [3]. These methods are of great interest to us, as they typically have
stronger theoretical guarantees than Hybrid RL, and have delivered impressive performance in applications such as helicopter aerobatics. For powering machines on and off, we are especially interested
in offline model-based RL approaches: as the number of training samples that can be acquired is
likely to be severely limited, it will be important to reduce sample complexity by learning explicit
state-transition models.
References
[1] P. Abbeel, A. Coates, M. Quigley, and A. Y. Ng. An application of reinforcement learning to aerobatic
helicopter flight. In Proc. of NIPS-06, 2006.
[2] P. Abbeel and A. Y. Ng. Exploration and apprenticeship learning in reinforcement learning. In Proc. of
ICML-05, 2005.
[3] A. Antos, C. Szepesvari, and R. Munos. Learning near-optimal policies with bellman-residual minimization based fitted policy iteration and a single sample path. In Proc. of COLT-06, 2006.
[4] L. Baird. Residual algorithms: Reinforcement learning with function approximation. In Proc. of ICML95, 1995.
[5] Y. Chen et al. Managing server energy and operational costs in hosting centers. In Proc. of SIGMETRICS,
2005.
[6] M. Femal and V. Freeh. Boosting data center performance through non-uniform power allocation. In
Second Intl. Conf. on Autonomic Computing, 2005.
[7] Green Grid Consortium. Green grid. http://www.thegreengrid.org, 2006.
[8] J. Chen et al. Datacenter power modeling and prediction. UC Berkeley RAD Lab presentation, 2007.
[9] J. O. Kephart, H. Chan, R. Das, D. Levine, G. Tesauro, F. Rawson, and C. Lefurgy. Coordinating multiple
autonomic managers to achieve specified power-performance tradeoffs. In Proc. of ICAC-07, 2007.
[10] M. G. Lagoudakis and R. Parr. Least-squares policy iteration. J. of Machine Learning Research, 4:1107–1149, 2003.
[11] C. Lefurgy, X. Wang, and M. Ware. Server-level power control. In Proc. of ICAC-07, 2007.
[12] D. Menasce and V. A. F. Almeida. Capacity Planning for Web Performance: Metrics, Models, and
Methods. Prentice Hall, 1998.
[13] S. Russell and A. L. Zimdars. Q-decomposition for reinforcement learning agents. In Proc. of ICML-03, pages 656–663, 2003.
[14] M. S. Squillante, D. D. Yao, and L. Zhang. Internet traffic: Periodicity, tail behavior and performance
implications. In System Performance Evaluation: Methodologies and Applications, 1999.
[15] G. Tesauro, N. K. Jong, R. Das, and M. N. Bennani. A hybrid reinforcement learning approach to autonomic resource allocation. In Proc. of ICAC-06, pages 65–73, 2006.
[16] United States Environmental Protection Agency. Letter to Enterprise Server Manufacturers and Other
Stakeholders. http://www.energystar.gov, 2006.
[17] M. Wang et al. Adaptive Performance Control of Computing Systems via Distributed Cooperative Control: Application to Power Management in Computer Clusters. In Proc. of ICAC-06, 2006.
[18] WebSphere Extended Deployment. http://www.ibm.com/software/webservers/appserv/extend/, 2007.
Multiple-Instance Active Learning
Burr Settles Mark Craven
University of Wisconsin
Madison, WI 5713 USA
{bsettles@cs,craven@biostat}.wisc.edu
Soumya Ray
Oregon State University
Corvallis, OR 97331 USA
[email protected]
Abstract
We present a framework for active learning in the multiple-instance (MI) setting.
In an MI learning problem, instances are naturally organized into bags and it is
the bags, instead of individual instances, that are labeled for training. MI learners
assume that every instance in a bag labeled negative is actually negative, whereas
at least one instance in a bag labeled positive is actually positive. We consider
the particular case in which an MI learner is allowed to selectively query unlabeled instances from positive bags. This approach is well motivated in domains
in which it is inexpensive to acquire bag labels and possible, but expensive, to
acquire instance labels. We describe a method for learning from labels at mixed
levels of granularity, and introduce two active query selection strategies motivated by the MI setting. Our experiments show that learning from instance labels
can significantly improve performance of a basic MI learning algorithm in two
multiple-instance domains: content-based image retrieval and text classification.
1 Introduction
A limitation of supervised learning is that it requires a set of instance labels which are often difficult
or expensive to obtain. The multiple-instance (MI) learning framework [3] can, in some cases, address this handicap by relaxing the granularity at which labels are given. In the MI setting, instances
are grouped into bags (i.e., multi-sets) which may contain any number of instances. A bag is labeled
negative if and only if it contains all negative instances. A bag is labeled positive, however, if at
least one of its instances is positive. Note that positive bags may also contain negative instances.
The MI setting was formalized by Dietterich et al. in the context of drug activity prediction [3], and
has since been applied to a wide variety of tasks including content-based image retrieval [1, 6, 8],
text classification [1, 9], stock prediction [6], and protein family modeling [10].
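As a minimal illustration of this labeling rule (the boolean instance labels here are hypothetical; in MI learning only the bag labels are observed):

def bag_label(instance_labels):
    # MI assumption: a bag is positive iff it contains at least one positive
    # instance; negative bags are all-negative.
    return any(instance_labels)

assert bag_label([False, True, False])       # positive bag, mostly negatives
assert not bag_label([False, False, False])  # negative bag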
Figure 1 illustrates how the MI representation can be applied to (a) content-based image retrieval
(CBIR) and (b) text classification tasks. For the CBIR task, images are represented as bags and instances correspond to segmented regions of the image. A bag representing a given image is labeled
positive if the image contains some object of interest. The multiple-instance paradigm is well suited
to this task because only a few regions of an image may represent the object of interest, such as the
gold medal in Figure 1(a). An advantage of the MI representation here is that it is significantly easier
to label an entire image than it is to label each segment. For text classification, documents are represented as bags and instances correspond to short passages (e.g., paragraphs) in the documents. This
formulation is useful in classification tasks for which document labels are freely available or cheaply
obtained, but the target concept is represented by only a few passages. For example, consider the
task of classifying articles according whether or not they contain information about the sub-cellular
location of proteins. The article in Figure 1(b) is labeled by the Mouse Genome Database [4] as a
citation for the protein catalase that specifies its sub-cellular location. However, the text that states
this is only a short passage on the second page of the article. The MI approach is therefore compelling because document labels can be cheaply obtained (say from the Mouse Genome Database),
but the labeling is not readily available at the most appropriate level of granularity (passages).
[Figure content: (a) bag: image = { instances: segments }; (b) bag: document = { instances: passages }, e.g. the passage 'The catalase-containing structures represent peroxisomes as concluded from the co-localization with the peroxisomal membrane marker, PMP70.']
Figure 1: Motivating examples for multiple-instance active learning. (a) In content-based image retrieval, images are represented as bags and instances correspond to segmented image regions. An active MI learner may query which segments belong to the object of interest, such as the gold medal shown in this image. (b) In text classification, documents are bags and the instances represent passages of text. In MI active learning, the learner may query specific passages to determine if they are representative of the positive class at hand.
The main challenge of multiple-instance learning is that, to induce an accurate model of the target concept, the learner must determine which instances in positive bags are actually positive, even
though the ratio of negatives to positives in these bags can be arbitrarily high. For many MI problems, such as the tasks illustrated in Figure 1, it is possible to obtain labels both at the bag level
and directly at the instance level. Fully labeling all instances, however, is expensive. As mentioned
above, the rationale for formulating the learning task as an MI problem is that it allows us to take
advantage of coarse labelings that may be available at low cost, or even for free. The approach
that we consider here is one that involves selectively obtaining the labels of certain instances in the
context of MI learning. In particular, we consider obtaining labels for selected instances in positive
bags, since the labels for instances in negative bags are known.
In active learning [2], the learner is allowed to ask queries about unlabeled instances. In this way, the
oracle (or human annotator) is required to label only instances that are assumed to be most valuable
for training. In the standard supervised setting, pool-based active learning typically begins with an
initial learner trained with a small set of labeled instances. Then the learner can query instances
from a large pool of unlabeled instances, re-train, and repeat. The goal is to reduce the total amount
of labeling effort required for the learner to achieve a certain level of accuracy.
We argue that whereas multiple-instance learning reduces the burden of labeling data by getting
labels at a coarse level of granularity, we may also benefit from selectively labeling some part of
the training data at a finer level of granularity. Hence, we explore the approach of multiple-instance
active learning as a way to efficiently overcome the ambiguity of the MI framework while keeping
labeling costs low.
There are several MI active learning scenarios we might consider. The first, which is analogous to
standard supervised active learning, is simply to allow the learner to query for the labels of unlabeled
bags. A second scenario is one in which all bags in the training set are labeled and the learner is
allowed to query for the labels of selected instances from positive bags. For example, the learner
might query on particular image segments or passages of text in the CBIR and text classification
domains, respectively. If an instance-query result is positive, the learner now has direct evidence
for the positive class. If the query result is negative, the learner knows to focus its attention to
other instances from that bag, also reducing ambiguity. A third scenario involves querying selected
positive bags rather than instances, and obtaining labels for any (or all) instances in such bags. For
example, the learner might query a positive image in the CBIR domain, and ask the oracle to label as
many segments as desired. A final scenario would assume that some bags are labeled and some are
not, and the learner would be able to query on (i) unlabeled bags, (ii) unlabeled instances in positive
bags, or (iii) some combination thereof. In the present work, we focus on the second formulation
above, where the learner queries selected unlabeled instances from labeled, positive bags.
The rest of this paper is organized as follows. First, we describe the algorithms we use to train
MI classifiers and select instance queries for active learning. Then, we describe our experiments to
evaluate these approaches on two data sets in the CBIR and text classification domains. Finally, we
discuss the results of our experiments and offer some concluding remarks.
2
Algorithms
MI Logistic Regression. We train probabilistic models for multiple-instance tasks using a generalization of the Diverse Density framework [6]. For MI classification, we seek the conditional
probability that the label yi is positive for bag Bi given n constituent instances: P (yi = 1|Bi =
{Bi1 , Bi2 , . . . , Bin }). If a classifier can provide an equivalent probability P (yij = 1|Bij ) for instance Bij , we can use a combining function (such as softmax or noisy-or) to combine posterior
probabilities of all the instances in a bag and estimate its posterior probability P (yi = 1|Bi ). The
combining function here explicitly encodes the MI assumption. If the model finds an instance likely
to be positive, the output of the combining function should find its corresponding bag likely to be
positive as well.
In our work, we train classifiers using multiple-instance logistic regression (MILR) which has been
shown to be a state-of-the-art MI learning algorithm, and appears to be a competitive method for
text classification and CBIR tasks [9]. MILR uses logistic regression with parameters θ = (w, b) to
estimate conditional probabilities for each instance:
oij = P(yij = 1 | Bij) = 1 / (1 + e^{−(w·Bij + b)}).
Here Bij represents a vector of feature values representing the jth instance in the ith bag, and w
is a vector of weights associated with the features. In order to combine these class probabilities for
instances into a class probability for a bag, MILR uses the softmax function:
oi = P(yi = 1 | Bi) = softmax_α(oi1, . . . , oin) = ( Σ_{j=1}^n oij e^{α oij} ) / ( Σ_{j=1}^n e^{α oij} ),
where α is a constant that determines the extent to which softmax approximates a hard max function.
In the general MI setting we do not know the labels of instances in positive bags. Because the equations above represent smooth functions of the model parameters θ, however, we can learn parameter
values using a gradient-based optimization method and an appropriate objective function. In the
present work, we minimize squared error over the bags E(θ) = (1/2) Σ_i (yi − oi)², where yi ∈ {0, 1}
is the known label of bag Bi . While we describe our MI active learning methods below in terms
of this formulation of MILR, it is important to note that they generalize to any classifier that outputs instance-level probabilities used with differentiable combining and objective functions. Diverse
Density [6], for example, couples a Gaussian instance model with a noisy-or combining function.
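As a concrete illustration of the MILR formulation above, the following is a minimal NumPy sketch of the forward computation and bag-level objective. It is our own rendering, not the authors' code; the function names are ours, and the gradient computation (handled by L-BFGS in the paper's experiments) is omitted:

```python
import numpy as np

def instance_probs(w, b, bag):
    """Logistic instance probabilities o_ij = P(y_ij = 1 | B_ij) for one bag,
    where `bag` is an (n_instances x n_features) array."""
    return 1.0 / (1.0 + np.exp(-(bag @ w + b)))

def softmax_combine(o, alpha=2.5):
    """Softmax combining function: bag probability o_i from instance probs."""
    e = np.exp(alpha * o)
    return float(np.sum(o * e) / np.sum(e))

def squared_bag_loss(w, b, bags, labels, alpha=2.5):
    """E(theta) = 1/2 * sum_i (y_i - o_i)^2 over all bags."""
    return sum(0.5 * (y - softmax_combine(instance_probs(w, b, bag), alpha)) ** 2
               for bag, y in zip(bags, labels))
```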
Learning from Labels at Mixed Granularities. Suppose our active MI learner queries instance
Bij and the corresponding instance label yij is provided by the oracle. We would like to include a
direct training signal for this instance in the optimization procedure above. However, E(θ) is defined
in terms of bag-level error, not instance-level error. Consider, though, that in MI learning a labeled
instance is effectively the same as a labeled bag that contains only that instance. So when the label
for instance Bij is known, we transform the training set for each query by adding a new training tuple
⟨{Bij}, yij⟩, where {Bij} is a new singleton bag containing only a copy of the queried instance,
and yij is the corresponding label. A copy of the query instance Bij also remains in the original bag
Bi , enabling the learner to compute the remaining instance gradients as described below.
Since the objective function will guide the learner toward classifying the singleton query instance
Bij in the positive tuple ⟨{Bij}, 1⟩ as positive, it will tend to classify the original bag Bi positive as
well. Conversely, if we add the negative tuple ⟨{Bij}, 0⟩, the learner will tend to classify the instance
negative in the original bag, which will affect the other instance gradients via the combining function
and guide the learner to focus on other potentially positive instances in that bag.
It may seem that this effect on the original bag could be achieved by clamping the instance output oij
to yij during training, but this has the undesirable property of eliminating the training signal for the
bag and the instance. If yij = 1, the combining function output would be extremely high, making
bag error nearly zero, thus minimizing the objective function without any actual parameter updates.
If yij = 0, the instance would output nothing to the combining function, thus the learner would get
no training signal for this instance (though in this case the learner can still focus on other instances
in the bag). It is possible to combine clamped instance outputs with our singleton bag approach to
overcome this problem, but our experiments indicate that this has no practical advantage over adding
singleton bags alone.
Also note that simply adding singleton bags will alter the objective function by adding weight, albeit
indirectly, to bags that have been queried more often. To control this effect, we uniformly weight
each bag and all its queried singleton bags to sum to 1 when computing the value and gradient for
the objective function during training. For example, an unqueried bag has weight 1, a bag with one
instance query and its derived singleton bag each have weight 0.5, and so on.
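A small sketch of this transformation and re-weighting follows; the dictionary-based bag representation is our own assumption, since the paper does not specify data structures:

```python
def add_instance_query(train_bags, bag_idx, inst_idx, queried_label):
    """Record a queried instance as a new singleton bag <{B_ij}, y_ij>.
    Each bag is a dict: {'X': instances, 'y': bag label, 'singletons': [...]}.
    A copy of the queried instance also remains in its original bag."""
    bag = train_bags[bag_idx]
    bag['singletons'].append((bag['X'][inst_idx], queried_label))

def bag_weight(bag):
    """Share unit weight uniformly between a bag and its singleton bags, so a
    bag with one query and its derived singleton each get weight 0.5."""
    return 1.0 / (1.0 + len(bag['singletons']))
```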
Uncertainty Sampling. Now we turn our attention to strategies for selecting query instances for
labeling. A common approach to active learning in the standard supervised setting is uncertainty
sampling [5]. For probabilistic classifiers, this involves applying the classifier to each unlabeled
instance and querying those with most uncertainty about the class label. Recall that the learned
model estimates oij = P (yij = 1|Bij ), the probability that instance Bij is positive. We represent
the uncertainty U (Bij ) by the Gini measure:
U(Bij) = 2 oij (1 − oij).
Note that the particular measure we use here is not critical; the important properties are that its
minima are at zero and one, its maximum is at 0.5, and it is symmetric about 0.5.
MI Uncertainty (MIU). We argue that when doing active learning in a multiple-instance setting,
the selection criterion should take into account not just uncertainty about a given instance's class
label, but also the extent to which the learner can adequately ?explain? the bag to which the instance
belongs. For example, the instance that the learner finds most uncertain may belong to the same
bag as the instance it finds most positive. In this case, the learned model will have a high value of
P (yi = 1|Bi ) for the bag because the value computed by the combining function will be dominated
by the output of the positive-looking instance. We propose an uncertainty-based query strategy that
weights the uncertainty of Bij in terms of how much it contributes to the classification of bag Bi .
As such, we define the MI Uncertainty (MIU) of an instance to be the derivative of bag output with
respect to instance output (i.e., the derivative of the softmax combining function) times instance
uncertainty:
MIU(Bij) = (∂oi/∂oij) · U(Bij).
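Because the softmax combining function has a closed form, ∂oi/∂oij can be computed analytically. The sketch below is our own derivation of that derivative combined with the Gini measure; it assumes the instance probabilities for one bag are given as a NumPy array:

```python
import numpy as np

def miu_scores(o, alpha=2.5):
    """MIU(B_ij) = (d o_i / d o_ij) * U(B_ij) for every instance in one bag.
    For o_i = sum_j o_j e^{alpha o_j} / sum_j e^{alpha o_j}, the derivative is
    d o_i/d o_ij = e^{alpha o_ij} (1 + alpha o_ij - alpha o_i) / sum_k e^{alpha o_ik}."""
    e = np.exp(alpha * o)
    o_i = np.sum(o * e) / np.sum(e)
    d_oi_d_oij = e * (1.0 + alpha * o - alpha * o_i) / np.sum(e)
    gini = 2.0 * o * (1.0 - o)            # U(B_ij) = 2 o_ij (1 - o_ij)
    return d_oi_d_oij * gini
```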
Expected Gradient Length (EGL). Another query strategy we consider is to identify the instance
that would impart the greatest change to the current model if we knew its label. Since we train
MILR with gradient descent, this involves querying the instance which, if ⟨{Bij}, yij⟩ is added to
the training set, would create the greatest change in the gradient of the objective function (i.e., the
largest gradient vector used to re-estimate values for θ). Let ∇E(θ) be the gradient of E with respect
to θ, which is a vector whose components are the partial derivatives of E with respect to each model
parameter: ∇E(θ) = [∂E/∂θ1, ∂E/∂θ2, . . . , ∂E/∂θm].
Now let ∇E⁺ij(θ) be the new gradient obtained by adding the positive tuple ⟨{Bij}, 1⟩ to the training
set, and likewise let ∇E⁻ij(θ) be the new gradient if a query results in the negative tuple ⟨{Bij}, 0⟩
being added. Since we do not know which label the oracle will provide in advance, we instead
calculate the expected length of the gradient based on the learner's current belief oij in each outcome.
More precisely, we define the Expected Gradient Length (EGL) to be:
EGL(Bij) = oij ‖∇E⁺ij(θ)‖ + (1 − oij) ‖∇E⁻ij(θ)‖.
Note that this selection strategy does not explicitly encode the MI bias. Instead, it employs class
probabilities to determine the expected label for candidate queries, with the goal of maximizing
parameter changes to what happens to be an MI learning algorithm. This strategy can be generalized
to query for other properties in non-MI active learning as well. For example, Zhu et al. [11] use
a related approach to determine the expected label of candidate query instances when combining
active learning with graph-based semi-supervised learning. Rather than trying to maximize the
expected change in the learning model, however, they select for the expected reduction in estimated
error over unlabeled instances.
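A sketch of the EGL computation follows. The gradient function `grad_fn` is a placeholder for the gradient of E(θ) used by the trainer; the bag representation matches the earlier sketch and is our own assumption:

```python
import numpy as np

def egl_score(theta, grad_fn, train_bags, x_query, o_ij):
    """Expected Gradient Length for one candidate instance query: add the
    singleton bag under each possible label, and weight the resulting
    gradient norms by the learner's current belief o_ij."""
    pos_bag = {'X': [x_query], 'y': 1, 'singletons': []}  # tuple <{B_ij}, 1>
    neg_bag = {'X': [x_query], 'y': 0, 'singletons': []}  # tuple <{B_ij}, 0>
    g_pos = np.linalg.norm(grad_fn(theta, train_bags + [pos_bag]))
    g_neg = np.linalg.norm(grad_fn(theta, train_bags + [neg_bag]))
    return o_ij * g_pos + (1.0 - o_ij) * g_neg
```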
3
Data and Experiments
Since no MI data sets with instance-level labels previously existed, we augmented an existing MI
data set by manually adding instance labels. SIVAL¹ is a collection for content-based image retrieval
that includes 1500 images, each labeled with one of 25 class labels. The images contain complex
objects photographed in a variety of positions, orientations, locations, and lighting conditions. The
images (bags) have been transformed and segmented into approximately 30 segments (instances)
each. Each segment is represented by a 30-dimensional feature vector describing color and texture
attributes of the segment and its neighbors. For more details, see Rahmani & Goldman [8]. We
modified the collection by manually annotating the instance segments that belong to the labeled
object for each image using a graphical interface we developed.
We also created a semi-synthetic MI data set for text classification, using the 20 Newsgroups² corpus
as a base. This corpus was chosen because it is an established benchmark for text classification,
and because the source texts (Usenet posts from the early 1990s) are relatively short (in the MI
setting, instances are usually paragraphs or short passages [1, 9]). For each of the 20 news categories,
we generate artificial bags of approximately 50 posts (instances) each by randomly sampling from
the target class (i.e., newsgroup category) at a rate of 3% for positive bags, with remaining instances
(and all instances for negative bags) drawn uniformly from the other classes. The texts are processed
with stemming, stop-word removal, and information-gain ranked feature selection. The TFIDF
values of the top 200 features are used to represent the instance texts. We construct a data set of 100
bags (50 positives and 50 negatives) for each class.
We compare our MI Uncertainty (MIU) and Expected Gradient Length (EGL) selection strategies
from Section 2 against two baselines: Uncertainty (using only the instance-model?s uncertainty), and
instances chosen uniformly at Random from positive bags (to evaluate the advantage of "passively"
labeling instances). The MILR model uses α = 2.5 for the softmax function and is trained by
minimizing squared loss via L-BFGS [7]. The instance-labeled MI data sets and MI learning source
code used in these experiments are available online³.
We evaluate our methods by constructing learning curves that plot the area under the ROC curve
(AUROC) as a function of instances queried for each data set and selection strategy. The initial point
in all experiments is the AUROC for a model trained on labeled bags from the training set without
any instance queries. Following previous work on the CBIR problem [8], we average results for
SIVAL over 20 independent runs for each image class, where the learner begins with 20 randomly
drawn positive bags (from which instances may be queried) and 20 random negative bags. The
model is then evaluated on the remainder of the unlabeled bags, and labeled query instances are
added to the training set in batches of size q = 2. For 20 Newsgroups, we average results using
10-fold cross-validation for each newsgroup category, using a query batch size of q = 5.
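The evaluation protocol can be summarized by the following loop, a sketch under the assumption that the training, query-selection, oracle, and scoring components are supplied as functions (all names here are ours; the singleton-bag structure matches the earlier sketch):

```python
def active_learning_curve(train_bags, eval_bags, fit_fn, select_fn,
                          oracle_fn, auroc_fn, q=2, n_batches=10):
    """Fit on labeled bags, then alternate between querying a batch of q
    instances and re-fitting, recording AUROC after each batch."""
    model = fit_fn(train_bags)
    curve = [auroc_fn(model, eval_bags)]
    for _ in range(n_batches):
        for bag_idx, inst_idx in select_fn(model, train_bags, q):
            label = oracle_fn(bag_idx, inst_idx)      # instance annotation
            bag = train_bags[bag_idx]
            bag['singletons'].append((bag['X'][inst_idx], label))
        model = fit_fn(train_bags)
        curve.append(auroc_fn(model, eval_bags))
    return curve
```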
Due to lack of space, we cannot show learning curves for every task. Figure 2 shows three representative learning curves for each of the two data sets. In Table 1 we summarize all curves by reporting
the average improvement made by each query selection strategy over the initial MILR model (before
any instance queries) for various points along the learning curve. Table 2 presents a more detailed
comparison of the initial model against each query selection method at a fixed point early on in
active learning (10 query batches).
¹ http://www.cs.wustl.edu/accio/
² http://people.csail.mit.edu/jrennie/20Newsgroups/
³ http://pages.cs.wisc.edu/~bsettles/amil/
[Figure 2 plots: AUROC versus number of instance queries for the MIU, EGL, Uncertainty, and Random strategies on the wd40can, spritecan, and translucentbowl SIVAL tasks (top row) and the rec.autos, sci.crypt, and talk.politics.misc 20 Newsgroups tasks (bottom row).]
Figure 2: Sample learning curves from SIVAL (top row) and 20 Newsgroups (bottom row) tasks.
Table 1: Summary of learning curves. The average AUROC improvement over the initial MI model (before
any instance queries) is reported for each selection strategy. Numbers are averaged across all tasks in each data
set at various points during active learning. The winning algorithm at each point is indicated with a box.
Instance |            SIVAL Tasks            |       20 Newsgroups Tasks
Queries  | Random  Uncert.  EGL     MIU      | Random  Uncert.  EGL     MIU
10       | +0.023  +0.043   +0.039  +0.050   | -0.001  +0.002   +0.002  +0.009
20       | +0.033  +0.065   +0.063  +0.070   | -0.002  +0.015   +0.015  +0.029
50       | +0.057  +0.084   +0.085  +0.087   | +0.002  +0.046   +0.045  +0.051
80       | +0.065  +0.088   +0.093  +0.090   | +0.003  +0.052   +0.056  +0.056
100      | +0.068  +0.092   +0.095  +0.090   | +0.008  +0.055   +0.055  +0.058
4
Discussion of Results
We can draw several interesting conclusions from these results. First and most germane to MI
active learning is that MI learners benefit from instance-level labels. With the exception of random
selection on 20 Newsgroups data, instance-level labels almost always improve the accuracy of the
learner, often with statistical significance after only a few queries.
Second, we see that active query strategies (e.g., Uncertainty, EGL, and MIU) perform better than
passive (random) instance labeling. On SIVAL tasks, random querying steadily improves accuracy,
but very slowly. As Table 1 shows, random selection at 100 queries fails to be competitive with the
three active query strategies after half as many queries. On 20 Newsgroups tasks, random selection
has a slight negative effect (if any) early on, possibly because it lacks a focused search for positive
instances (of which there are only one or two per bag). All three active selection methods, on the
other hand, show significant gains fairly quickly on both data sets.
Finally, MIU appears to be a well-suited query strategy for this formulation of MI active learning.
On both data sets, it consistently improves the initial MI learner, usually with statistical significance,
and often approaches the asymptotic level of accuracy with fewer labeled instances than the other
two active methods. Uncertainty and EGL seem to perform quite comparably, with EGL performing
slightly better between the two. MIU's gains over these other query strategies are not usually statistically significant, however, and in the long run it is generally matched or slightly surpassed by them.
MIU shows the greatest advantage early in the active instance-querying process, perhaps because it
is the only method we tested that explicitly encodes the MI assumption by taking advantage of the
combining function in its estimation of value to the learner.
Table 2: Detailed comparison of the initial MI learner against various query strategies after 10 query batches
(20 instances for SIVAL, 50 instances for 20 Newsgroups). Average AUROC values are shown for each algorithm on each task. Statistically significant gains over the initial learner (using a two-tailed t-test at 95%)
are shown in bold. The winning algorithm for each task is indicated with a box, and a tally of wins for each
algorithm is reported below each column.
Task                      | Initial | Random | Uncert. | EGL   | MIU
ajaxorange                | 0.547   | 0.564  | 0.633   | 0.638 | 0.627
apple                     | 0.431   | 0.418  | 0.469   | 0.455 | 0.459
banana                    | 0.440   | 0.463  | 0.514   | 0.511 | 0.507
bluescrunge               | 0.410   | 0.426  | 0.508   | 0.470 | 0.491
candlewithholder          | 0.623   | 0.662  | 0.646   | 0.656 | 0.677
cardboardbox              | 0.430   | 0.437  | 0.451   | 0.442 | 0.454
checkeredscarf            | 0.662   | 0.749  | 0.765   | 0.772 | 0.765
cokecan                   | 0.668   | 0.727  | 0.693   | 0.713 | 0.736
dataminingbook            | 0.445   | 0.480  | 0.505   | 0.522 | 0.519
dirtyrunningshoe          | 0.620   | 0.701  | 0.703   | 0.697 | 0.708
dirtyworkgloves           | 0.455   | 0.497  | 0.491   | 0.496 | 0.497
fabricsoftenerbox         | 0.417   | 0.534  | 0.617   | 0.594 | 0.634
feltflowerrug             | 0.743   | 0.754  | 0.794   | 0.799 | 0.792
glazedwoodpot             | 0.444   | 0.464  | 0.528   | 0.515 | 0.526
goldmedal                 | 0.496   | 0.544  | 0.622   | 0.602 | 0.605
greenteabox               | 0.563   | 0.595  | 0.614   | 0.619 | 0.639
juliespot                 | 0.479   | 0.490  | 0.571   | 0.580 | 0.564
largespoon                | 0.436   | 0.403  | 0.406   | 0.394 | 0.408
rapbook                   | 0.478   | 0.455  | 0.463   | 0.454 | 0.457
smileyfacedoll            | 0.556   | 0.612  | 0.675   | 0.640 | 0.655
spritecan                 | 0.670   | 0.711  | 0.749   | 0.746 | 0.750
stripednotebook           | 0.477   | 0.478  | 0.486   | 0.519 | 0.489
translucentbowl           | 0.548   | 0.614  | 0.678   | 0.665 | 0.702
wd40can                   | 0.599   | 0.658  | 0.687   | 0.700 | 0.707
woodrollingpin            | 0.416   | 0.435  | 0.420   | 0.426 | 0.429
alt.atheism               | 0.812   | 0.836  | 0.863   | 0.839 | 0.877
comp.graphics             | 0.720   | 0.690  | 0.789   | 0.783 | 0.819
comp.os.ms-windows.misc   | 0.772   | 0.768  | 0.764   | 0.742 | 0.714
comp.sys.ibm.pc.hardware  | 0.716   | 0.690  | 0.687   | 0.694 | 0.707
comp.sys.mac.hardware     | 0.716   | 0.728  | 0.861   | 0.855 | 0.878
comp.windows.x            | 0.835   | 0.827  | 0.888   | 0.894 | 0.882
misc.forsale              | 0.769   | 0.748  | 0.758   | 0.777 | 0.771
rec.autos                 | 0.768   | 0.785  | 0.872   | 0.872 | 0.860
rec.motorcycles           | 0.844   | 0.844  | 0.871   | 0.879 | 0.883
rec.sport.baseball        | 0.838   | 0.846  | 0.871   | 0.869 | 0.899
rec.sport.hockey          | 0.918   | 0.918  | 0.966   | 0.962 | 0.964
sci.crypt                 | 0.770   | 0.770  | 0.887   | 0.893 | 0.913
sci.electronics           | 0.719   | 0.751  | 0.731   | 0.733 | 0.725
sci.med                   | 0.827   | 0.819  | 0.837   | 0.845 | 0.862
sci.space                 | 0.822   | 0.824  | 0.901   | 0.905 | 0.893
soc.religion.christian    | 0.768   | 0.780  | 0.769   | 0.771 | 0.789
talk.politics.guns        | 0.847   | 0.855  | 0.860   | 0.870 | 0.858
talk.politics.mideast     | 0.791   | 0.793  | 0.874   | 0.880 | 0.876
talk.politics.misc        | 0.789   | 0.797  | 0.878   | 0.866 | 0.856
talk.religion.misc        | 0.759   | 0.773  | 0.785   | 0.773 | 0.793
TOTAL NUMBER OF WINS      | 4       | 3      | 9       | 12    | 19
It is also interesting to note that in an earlier version of our learning algorithm, we did not normalize
weights for bags and instance-query singleton bags when learning with labels at mixed granularities.
Instead, all such bags were weighted equally and the objective function was slightly altered. In
those experiments, MIU's accuracy was roughly equivalent to the figures reported here, although
the improvements for all other query strategies (especially random selection) were lower.
5
Conclusion
We have presented multiple-instance active learning, a novel framework for reducing the labeling
burden by obtaining labels at a coarse granularity, and then selectively labeling at finer levels. This
approach is useful when bag labels are easily acquired, and instance labels can be obtained but are
expensive. In the present work, we explored the case where an MI learner may query unlabeled
instances from positively labeled bags in order to reduce the inherent ambiguity of the MI representation, while keeping label costs low. We also described a simple method for learning from labels at
both the bag-level and instance-level, and showed that querying instance-level labels through active
learning is beneficial in content-based image retrieval and text categorization problems. In addition,
we introduced two active query selection strategies motivated by this work, MI Uncertainty and
Expected Gradient Length, and demonstrated that they are well-suited to MI active learning.
In future work, we plan to investigate the other MI active learning scenarios mentioned in Section 1.
Of particular interest is the setting where, initially, some bags are labeled and others are not, and
the learner is allowed to query on (i) unlabeled bags, (ii) unlabeled instances from positively labeled
bags, or (iii) some combination thereof. We also plan to investigate other selection methods for
different query formats, such as "label any or all positive instances in this bag," which may be more
natural for some MI learning problems.
Acknowledgments
This research was supported by NSF grant IIS-0093016 and NIH grants T15-LM07359 and R01LM07050-05.
References
[1] S. Andrews, I. Tsochantaridis, and T. Hofmann. Support vector machines for multiple-instance learning.
In Advances in Neural Information Processing Systems (NIPS), pages 561–568. MIT Press, 2003.
[2] D. Cohn, L. Atlas, and R. Ladner. Improving generalization with active learning. Machine Learning,
15(2):201–221, 1994.
[3] T. Dietterich, R. Lathrop, and T. Lozano-Perez. Solving the multiple-instance problem with axis-parallel
rectangles. Artificial Intelligence, 89:31–71, 1997.
[4] J.T. Eppig, C.J. Bult, J.A. Kadin, J.E. Richardson, J.A. Blake, and the members of the Mouse Genome
Database Group. The Mouse Genome Database (MGD): from genes to mice – a community resource for
mouse biology. Nucleic Acids Research, 33:D471–D475, 2005. http://www.informatics.jax.org.
[5] D. Lewis and J. Catlett. Heterogeneous uncertainty sampling for supervised learning. In Proceedings of
the International Conference on Machine Learning (ICML), pages 148–156. Morgan Kaufmann, 1994.
[6] O. Maron and T. Lozano-Perez. A framework for multiple-instance learning. In Advances in Neural
Information Processing Systems (NIPS), pages 570–576. MIT Press, 1998.
[7] J. Nocedal and S.J. Wright. Numerical Optimization. Springer, 1999.
[8] R. Rahmani and S.A. Goldman. MISSL: Multiple-instance semi-supervised learning. In Proceedings of
the International Conference on Machine Learning (ICML), pages 705–712. ACM Press, 2006.
[9] S. Ray and M. Craven. Supervised versus multiple instance learning: An empirical comparison. In
Proceedings of the International Conference on Machine Learning (ICML), pages 697–704. ACM Press,
2005.
[10] Q. Tao, S.D. Scott, and N.V. Vinodchandran. SVM-based generalized multiple-instance learning via
approximate box counting. In Proceedings of the International Conference on Machine Learning (ICML),
pages 779–806. Morgan Kaufmann, 2004.
[11] X. Zhu, J. Lafferty, and Z. Ghahramani. Combining active learning and semi-supervised learning using
gaussian fields and harmonic functions. In Proceedings of the ICML Workshop on the Continuum from
Labeled to Unlabeled Data, pages 58–65, 2003.
Hierarchical Apprenticeship Learning, with
Application to Quadruped Locomotion
J. Zico Kolter, Pieter Abbeel, Andrew Y. Ng
Department of Computer Science
Stanford University
Stanford, CA 94305
{kolter, pabbeel, ang}@cs.stanford.edu
Abstract
We consider apprenticeship learning (learning from expert demonstrations) in
the setting of large, complex domains. Past work in apprenticeship learning
requires that the expert demonstrate complete trajectories through the domain.
However, in many problems even an expert has difficulty controlling the system,
which makes this approach infeasible. For example, consider the task of teaching a quadruped robot to navigate over extreme terrain; demonstrating an optimal
policy (i.e., an optimal set of foot locations over the entire terrain) is a highly
non-trivial task, even for an expert. In this paper we propose a method for hierarchical apprenticeship learning, which allows the algorithm to accept isolated
advice at different hierarchical levels of the control task. This type of advice is
often feasible for experts to give, even if the expert is unable to demonstrate complete trajectories. This allows us to extend the apprenticeship learning paradigm
to much larger, more challenging domains. In particular, in this paper we apply
the hierarchical apprenticeship learning algorithm to the task of quadruped locomotion over extreme terrain, and achieve, to the best of our knowledge, results
superior to any previously published work.
1
Introduction
In this paper we consider apprenticeship learning in the setting of large, complex domains. While
most reinforcement learning algorithms operate under the Markov decision process (MDP) formalism (where the reward function is typically assumed to be given a priori), past work [1, 13, 11]
has noted that often the reward function itself is difficult to specify by hand, since it must quantify
the trade-off between many features. Apprenticeship learning is based on the insight that often it
is easier for an "expert" to demonstrate the desired behavior than it is to specify a reward function
that induces this behavior. However, when attempting to apply apprenticeship learning to large domains, several challenges arise. First, past algorithms for apprenticeship learning require the expert
to demonstrate complete trajectories in the domain, and we are specifically concerned with domains
that are sufficiently complex so that even this task is not feasible. Second, these past algorithms
require the ability to solve the "easier" problem of finding a nearly optimal policy given some candidate reward function, and even this is challenging in large domains. Indeed, such domains often
necessitate hierarchical control in order to reduce the complexity of the control task [2, 4, 15, 12].
As a motivating application, consider the task of navigating a quadruped robot (shown in Figure
1(a)) over challenging, irregular terrain (shown in Figure 1(b,c)). In a naive approach, the dimensionality of the state space is prohibitively large: the robot has 12 independently actuated joints, and
the state must also specify the current three-dimensional position and orientation of the robot, leading to an 18-dimensional state space that is well beyond the capabilities of standard RL algorithms.
Fortunately, this control task succumbs very naturally to a hierarchical decomposition: we first plan
a general path over the terrain, then plan footsteps along this path, and finally plan joint movements
Figure 1: (a) LittleDog robot, designed and built by Boston Dynamics, Inc. (b) Typical terrain. (c) Height
map of the depicted terrain. (Black = 0cm altitude, white = 12cm altitude.)
to achieve these footsteps. However, it is very challenging to specify a proper reward, specifically
for the higher levels of control, as this requires quantifying the trade-off between many features,
including progress toward a goal, the height differential between feet, the slope of the terrain underneath its feet, etc. Moreover, consider the apprenticeship learning task of specifying a complete set
of foot locations, across an entire terrain, that properly captures all the trade-offs above; this itself is
a highly non-trivial task.
Motivated by these difficulties, we present a unified method for hierarchical apprenticeship learning. Our approach is based on the insight that, while it may be difficult for an expert to specify
entire optimal trajectories in a large domain, it is much easier to "teach hierarchically": that is, if we
employ a hierarchical control scheme to solve our problem, it is much easier for the expert to give
advice independently at each level of this hierarchy. At the lower levels of the control hierarchy,
our method only requires that the expert be able to demonstrate good local behavior, rather than
behavior that is optimal for the entire task. This type of advice is often feasible for the expert to give
even when the expert is entirely unable to give full trajectory demonstrations. Thus the approach
allows for apprenticeship learning in extremely complex, previously intractable domains.
The contributions of this paper are twofold. First, we introduce the hierarchical apprenticeship
learning algorithm. This algorithm extends the apprenticeship learning paradigm to complex, highdimensional control tasks by allowing an expert to demonstrate desired behavior at multiple levels of
abstraction. Second, we apply the hierarchical apprenticeship approach to the quadruped locomotion
problem discussed above. By applying this method, we achieve performance that is, to the best of
our knowledge, well beyond any published results for quadruped locomotion.¹
The remainder of this paper is organized as follows. In Section 2 we discuss preliminaries and
notation. In Section 3 we present the general formulation of the hierarchical apprenticeship learning
algorithm. In Section 4 we present experimental results, both on a hierarchical multi-room grid
world, and on the real-world quadruped locomotion task. Finally, in Section 5 we discuss related
work and conclude the paper.
2
Preliminaries and Notation
A Markov decision process (MDP) is a tuple (S, A, T, H, D, R), where S is a set of states; A is a
set of actions; T = {Psa} is a set of state transition probabilities (here, Psa is the state transition
distribution upon taking action a in state s); H is the horizon which corresponds to the number of
time-steps considered; D is a distribution over initial states; and R : S → ℝ is a reward function.
As we are often concerned with MDPs for which no reward function is given, we use the notation
MDP\R to denote an MDP minus the reward function. A policy π is a mapping from states to a
probability distribution over actions. The value of a policy π is given by V(π) = E[Σ_{t=0}^H R(st) | π],
where the expectation is taken with respect to the random state sequence s0, s1, . . . , sH drawn by
starting from the state s0 (drawn from distribution D) and picking actions according to π.
1
There are several other institutions working with the LittleDog robot, and many have developed (unpublished) systems that are also very capable. As of the date of submission, we believe that the controller presented
in this paper is on par with the very best controllers developed at other institutions. For instance, although direct comparison is difficult, the fastest running time that any team achieved during public evaluations was 39
seconds. In Section 4 we present results crossing terrain of comparable difficulty and distance in 30-35 seconds.
Often the reward function R can be represented more compactly as a function of the state. Let
φ : S → ℝⁿ be a mapping from states to a set of features. We consider the case where the reward
function R is a linear combination of the features: R(s) = wᵀφ(s) for parameters w ∈ ℝⁿ. Then
we have that the value of a policy π is linear in the reward function weights:
V(π) = E[Σ_{t=0}^H R(st) | π] = E[Σ_{t=0}^H wᵀφ(st) | π] = wᵀ E[Σ_{t=0}^H φ(st) | π] = wᵀ μφ(π)    (1)
where we used linearity of expectation to bring w outside of the expectation. The last quantity
defines the vector of feature expectations μφ(π) = E[Σ_{t=0}^H φ(st) | π].
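When the transition model can only be sampled, the feature expectations can be estimated by Monte Carlo rollouts, as in the sketch below (our own illustration; the sampling functions are placeholders for the MDP at hand):

```python
import numpy as np

def estimate_feature_expectations(policy, sample_s0, step, phi, H,
                                  n_rollouts=100):
    """Monte Carlo estimate of mu_phi(pi) = E[sum_{t=0}^H phi(s_t) | pi].
    sample_s0() draws s_0 ~ D, step(s, a) samples s' ~ P_sa, and policy(s)
    returns an action."""
    total = None
    for _ in range(n_rollouts):
        s = sample_s0()
        acc = np.asarray(phi(s), dtype=float).copy()
        for _ in range(H):
            s = step(s, policy(s))
            acc = acc + np.asarray(phi(s), dtype=float)
        total = acc if total is None else total + acc
    return total / n_rollouts
```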
3
The Hierarchical Apprenticeship Learning Algorithm
We now present our hierarchical apprenticeship learning algorithm (hereafter HAL). For simplicity,
we present a two level hierarchical formulation of the control task, referred to generically as the
low-level and high-level controllers. The extension to higher order hierarchies poses no difficulties.
3.1 Reward Decomposition in HAL
At the heart of the HAL algorithm is a simple decomposition of the reward function that links the
two levels of control. Suppose that we are given a hierarchical decomposition of a control task in the
form of two MDP\Rs (a low-level and a high-level MDP\R, denoted Mℓ = (Sℓ, Aℓ, Tℓ, Hℓ, Dℓ)
and Mh = (Sh, Ah, Th, Hh, Dh) respectively) and a partitioning function ψ : Sℓ → Sh that maps
low-level states to high-level states (the assumption here is that |Sh| ≪ |Sℓ| so that this hierarchical
decomposition actually provides a computational gain).² For example, in the case of the quadruped
locomotion problem the low-level MDP\R describes the state of all four feet, while the high-level
MDP\R describes only the position of the robot's center of mass. As is standard in apprenticeship
learning, we suppose that the rewards in the low-level MDP\R can be represented as a linear function
of state features, R(sℓ) = wᵀφ(sℓ). The HAL algorithm assumes that the reward of a high-level
state is equal to the average reward over all its corresponding low-level states. Formally
R(sh) = (1/N(sh)) Σ_{sℓ ∈ ψ⁻¹(sh)} R(sℓ) = (1/N(sh)) Σ_{sℓ ∈ ψ⁻¹(sh)} wᵀφ(sℓ) = wᵀ (1/N(sh)) Σ_{sℓ ∈ ψ⁻¹(sh)} φ(sℓ)    (2)
where ψ⁻¹(sh) denotes the inverse image of the partitioning function and N(sh) = |ψ⁻¹(sh)|.
While this may not always be the ideal decomposition of the reward function (for example, we may
want to let the reward of a high-level state be the maximum of its low-level state rewards, to capture
the fact that an ideal agent would always seek to maximize reward at the lower level, or alternatively
the minimum of its low-level state rewards, to be robust to worst-case
outcomes), it captures the idea that in the absence of other prior information, it seems reasonable
to assume a uniform distribution over the low-level states corresponding to a high-level state. An
important consequence of (2) is that the high level reward is now also linear in the low-level reward
weights w. This will enable us in the subsequent sections to formulate a unified hierarchical apprenticeship learning algorithm that is able to incorporate expert advice at both the high level and the
low level simultaneously.
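Equation (2) also implies that high-level feature vectors can be precomputed by averaging low-level features over each partition cell, so that R(sh) = wᵀφh(sh) stays linear in the same weights w. A minimal sketch of this aggregation (our own illustration; it assumes states are hashable):

```python
from collections import defaultdict
import numpy as np

def high_level_features(low_states, psi, phi):
    """phi_h(s_h) = (1 / N(s_h)) * sum of phi(s_l) over s_l in psi^{-1}(s_h),
    where psi maps each low-level state to its high-level state."""
    groups = defaultdict(list)
    for s_l in low_states:
        groups[psi(s_l)].append(np.asarray(phi(s_l), dtype=float))
    return {s_h: np.mean(feats, axis=0) for s_h, feats in groups.items()}
```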
3.2 Expert Advice at the High Level
Similar to past apprenticeship learning methods, expert advice at the high level consists of full
policies demonstrated by the expert. However, because the high-level MDP\R can be significantly
simpler than the low-level MDP\R, this task can be substantially easier. If the expert suggests that
π_h,E^(i) is an optimal policy for some given MDP\R M_h^(i), then this corresponds to the following
constraint, which states that the expert's policy outperforms all other policies:
V^(i)(π_h,E^(i)) ≥ V^(i)(π_h^(i))   ∀ π_h^(i).
Equivalently, using (1), we can formulate this constraint as follows:
wᵀ μφ^(i)(π_h,E^(i)) ≥ wᵀ μφ^(i)(π_h^(i))   ∀ π_h^(i).
While we may not be able to obtain the exact feature expectations of the expert's policy if the high-level transitions are stochastic, observing a single expert demonstration corresponds to receiving
2
As with much work in reinforcement learning, it is the assumption of this paper that the hierarchical
decomposition of a control task is given by a system designer. While there has also been recent work on the
automated discovery of state abstractions [5], we have found that there is often a very natural decomposition of
control tasks into multiple levels (as we will discuss for the specific case of quadruped locomotion).
a sample from these feature expectations, so we simply use the observed expert feature counts
μ̂φ^(i)(π_h,E^(i)) in lieu of the true expectations. By standard sample complexity arguments [1], it can be
shown that a sufficient number of observed feature counts will converge to the true expectation. To
resolve the ambiguity in w, and to allow the expert to provide noisy advice, we use regularization and
slack variables (similar to standard SVM formulations),
which results in the following formulation:
min_{w,ξ}  (1/2)‖w‖₂² + C_h Σ_{i=1}^n ξ^(i)
s.t.  wᵀ μ̂φ^(i)(π_h,E^(i)) ≥ wᵀ μφ^(i)(π_h^(i)) + 1 − ξ^(i)   ∀ π_h^(i), i
where π_h^(i) indexes over all high-level policies, i indexes over all MDPs, and C_h is a regularization
constant.³ Despite the fact that there are an exponential number of possible policies, there are well-known algorithms that are able to solve this optimization problem; however, we defer this discussion
until after presenting our complete formulation.
3.3 Expert Advice at the Low Level
Our approach differs from standard apprenticeship learning when we consider advice at the low
level. Unlike the apprenticeship learning paradigm where an expert specifies full trajectories in the
target domain, we allow for an expert to specify single, greedy actions in the low-level domain.
Specifically, if the agent is in state sℓ and the expert suggests that the best greedy action would move
to state sℓ′, this corresponds directly to a constraint on the reward function, namely that
R(sℓ′) ≥ R(sℓ′′)
for all other states sℓ′′ that can be reached from the current state (we say that sℓ′′ is "reachable" from
the current state sℓ if ∃a s.t. P_{sℓ a}(sℓ′′) > ε for some 0 < ε ≪ 1).⁴ This results in the following
constraints on the reward function parameters w,
wᵀφ(sℓ′) ≥ wᵀφ(sℓ′′)
for all sℓ′′ reachable from sℓ. As before, to resolve the ambiguity in w and to allow for the expert to
provide noisy advice, we use regularization and slack variables. This gives:
min_{w,ξ}  (1/2)‖w‖₂² + C_ℓ Σ_{j=1}^m ξ^(j)
s.t.  wᵀφ(sℓ′^(j)) ≥ wᵀφ(sℓ′′^(j)) + 1 − ξ^(j)   ∀ sℓ′′^(j), j
where sℓ′′^(j) indexes over all states reachable from sℓ′^(j) and j indexes over all low-level demonstrations provided by the expert.
3.4 The Unified HAL Algorithm
From (2) we see the high level and low level rewards are a linear combination of the same set of
reward weights w. This allows us to combine both types of expert advice presented above to obtain
the following unified optimization problem:
min_{w,ξ,γ}  (1/2)‖w‖₂² + C_ℓ Σ_{j=1}^m ξ^(j) + C_h Σ_{i=1}^n γ^(i)
s.t.  wᵀφ(sℓ′^(j)) ≥ wᵀφ(sℓ′′^(j)) + 1 − ξ^(j)   ∀ sℓ′′^(j), j    (3)
      wᵀ μ̂φ^(i)(π_h,E^(i)) ≥ wᵀ μφ^(i)(π_h^(i)) + 1 − γ^(i)   ∀ π_h^(i), i.
This optimization problem is convex, and can be solved efficiently. In particular, even though the
optimization problem has an exponentially large number of constraints (one constraint per policy),
the optimum can be found efficiently (i.e., in polynomial time) using, for example, the ellipsoid
method, since we can efficiently identify a constraint that is violated.⁵ However, in practice we
found the following constraint generation method more efficient:
3
This formulation is not entirely correct by itself, due to the fact that it is impossible to separate a policy
from all policies (including itself) by a margin of one, and so the exact solution to this problem will be w = 0.
To deal with this, one typically scales the margin or slack by some loss function that quantifies how different
two policies are [16, 17], and this is the approach taken by Ratliff, et al. [13] in their maximum margin planning
algorithm. Alternatively, Abbeel & Ng [1] solve the optimization problem without any slack, and notice that
as soon as the problem becomes infeasible, the expert's policy lies in the convex hull of the generated policies.
However, in our full formulation with low-level advice also taken into account, this becomes less of an issue,
and so we present the above formulation for simplicity. In all experiments where we use only the high-level
constraints, we employ margin scaling as in [13].
4
Alternatively, one can interpret low-level advice at the level of actions, and interpret the expert picking action a
as the constraint that Σ_{s′} P_{sa}(s′)R(s′) ≥ Σ_{s′} P_{sa′}(s′)R(s′) ∀ a′ ≠ a. However, in the domains we consider,
where there is a clear set of "reachable" states from each state, the formalism above seems more natural.
5
Similar techniques are employed by [17] to solve structured prediction problems. Alternatively, Ratliff, et
al. [13] take a different approach, and move the constraints into the objective by eliminating the slack variables,
then employ a subgradient method.
[Figure 2 plots: suboptimality of policy versus number of training samples (comparing HAL with flat apprenticeship learning) and versus number of training MDPs (comparing HAL with using only high-level or only low-level constraints).]
Figure 2: (a) Picture of the multi-room gridworld environment. (b) Performance versus number of training
samples for HAL and flat apprenticeship learning. (c) Performance versus number of training MDPs for HAL
versus using only low-level or only high-level constraints.
1. Begin with no expert path constraints.
2. Find the current reward weights by solving the current optimization problem.
3. Solve the reinforcement learning problem at the high level of the hierarchy to find the
optimal (high-level) policies for the current reward for each MDP\R, i. If the optimal
policy violates the current (high level) constraints, then add this constraint to the current
optimization problem and goto Step (2). Otherwise, no constraints are violated and the
current reward weights are the solution of the optimization problem.
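The constraint generation loop above can be sketched as follows. This is our own illustration: we use the cvxpy modeling library for the QP (the paper does not say which solver was used), we name the high-level slacks γ as in (3), and as a simplification we stop when the newly planned policy no longer violates the expert's margin constraint:

```python
import numpy as np
import cvxpy as cp

def solve_hal_qp(low_pairs, high_pairs, C_l, C_h, n_features):
    """Solve (3) for a finite set of generated constraints.
    low_pairs:  (phi(s'_l), phi(s''_l)) feature pairs from low-level advice.
    high_pairs: (mu_expert, mu_policy) feature-expectation pairs generated
                so far at the high level."""
    w = cp.Variable(n_features)
    xi = cp.Variable(max(len(low_pairs), 1), nonneg=True)
    gam = cp.Variable(max(len(high_pairs), 1), nonneg=True)
    cons = [(a - b) @ w >= 1 - xi[j] for j, (a, b) in enumerate(low_pairs)]
    cons += [(mu_e - mu_p) @ w >= 1 - gam[i]
             for i, (mu_e, mu_p) in enumerate(high_pairs)]
    obj = 0.5 * cp.sum_squares(w) + C_l * cp.sum(xi) + C_h * cp.sum(gam)
    cp.Problem(cp.Minimize(obj), cons).solve()
    return w.value

def hal_constraint_generation(low_pairs, mu_expert, plan_fn, C_l, C_h,
                              n_features, max_iters=50, tol=1e-6):
    """Steps 1-4: alternate between solving the QP and re-planning at the
    high level. plan_fn(w) is a placeholder returning the feature
    expectations of the optimal high-level policy under reward weights w
    (e.g. computed via value iteration)."""
    high_pairs = []
    w = solve_hal_qp(low_pairs, high_pairs, C_l, C_h, n_features)
    for _ in range(max_iters):
        mu_opt = plan_fn(w)
        if float((mu_expert - mu_opt) @ w) >= 1 - tol:
            break                      # no violated constraint remains
        high_pairs.append((mu_expert, mu_opt))
        w = solve_hal_qp(low_pairs, high_pairs, C_l, C_h, n_features)
    return w
```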
4
Experimental Results
4.1 Gridworld
In this section we present results on a multi-room gridworld domain with unknown cost. While this
is not meant to be a challenging control task, it allows us to compare the performance of HAL to
traditional "flat" (non-hierarchical) apprenticeship learning methods, as these algorithms are feasible
in such domains. The grid world domain has a very natural hierarchical decomposition: if we
average the cost over each room, we can form a "high-level" approximation of the grid world. Our
hierarchical controller first plans in this domain to choose a path over the rooms. Then for each
room along this path we plan a low-level path to the desired exit.
Figure 2(b) shows the performance versus number of training examples provided to the algorithm
(where one training example equals one action demonstrated by the expert).⁶ As expected, the flat
apprenticeship learning algorithm eventually converges to a superior policy, since it employs full
value iteration to find the optimal policy, while HAL uses the (non-optimal) hierarchical controller.
However, for small amounts of training data, HAL outperforms the flat method, since it is able to
leverage the small amount of data provided by the expert at both levels of the hierarchy. Figure 2(c)
shows performance versus number of MDPs in the training set for HAL as well as for algorithms
which receive the same training data as HAL (that is, both high level and low level expert demonstrations), but which make use of only one or the other. Here we see that HAL performs substantially
better. This is not meant to be a direct comparison of the different methods, since HAL obtains more
training data per MDP than the single-level approaches. Rather, this experiment illustrates that in
situations where one has access to both high-level and low-level advice, it is advantageous to use
6
Experimental details: We consider a 111x111 grid world, evenly divided into 100 rooms of size 10x10
each. There are walls around each room, except for a door of size 2 that connects a room to each of its
neighbors (a picture of the domain is shown in Figure 2(a)). Each state has 40 binary features, sampled from
a distribution particular to that room, and the reward function is chosen randomly to have 10 "small" [-0.75,
-0.25] negative rewards, 20 "medium" [-1.0, -2.0] negative rewards, and 10 "high" [-3.0, -5.0] negative rewards.
In all cases we generated multiple training MDPs, which differ in which features are active at each state and we
provided the algorithm with one expert demonstration for each sampled MDP. After training on each MDP we
tested on 25 holdout MDPs generated by the same process. In all cases the results were averaged over 10 runs.
For all our experiments, we fixed the ratio of Ch/Cℓ so that both constraints were equally weighted (i.e., if
it typically took t low-level actions to accomplish one high-level action, then we used a ratio of Ch/Cℓ = t).
Given this fixed scaling, we found that the algorithm was generally insensitive (in terms of the resulting policy's
suboptimality) to scaling of the slack penalties. In the comparison of HAL with flat apprenticeship learning
in Figure 2(b), one training example corresponds to one expert action. Concretely, for HAL the number of
training examples for a given training MDP corresponds to the number of high level actions in the high level
demonstration plus the (equal) number of low level expert actions provided. For flat apprenticeship learning
the number of training examples for a given training MDP corresponds to the number of expert actions in the
expert's full trajectory demonstration.
Figure 3: (a) High-level (path) expert demonstration. (b) Low-level (footstep) expert demonstration.
both. This will be especially important in domains such as the quadruped locomotion task, where
we have access to very few training MDPs (i.e., different terrains).
4.2 Quadruped Robot
In this section we present the primary experimental result of this paper, a successful application of
hierarchical apprenticeship learning to the task of quadruped locomotion. Videos of the results in
this section are available at http://cs.stanford.edu/~kolter/nips07videos.
4.2.1
Hierarchical Control for Quadruped Locomotion
The LittleDog robot, shown in Figure 1, is designed and built by Boston Dynamics, Inc. The robot
consists of 12 independently actuated servo motors, three on each leg, with two at the hip and one at
the knee. It is equipped with an internal IMU and foot force sensors. We estimate the robot's state
using a motion capture system that tracks reflective markers on the robot's body. We perform all
computation on a desktop computer, and send commands to the robot via a wireless connection.
As mentioned in the introduction, we employ a hierarchical control scheme for navigating the
quadruped over the terrain. Due to space constraints, we describe the complete control system
briefly, but a much more detailed description can be found in [8]. The high level controller is a body
path planner that plans an approximate trajectory for the robot's center of mass over the terrain;
the low-level controller is a footstep planner that, given a path for the robot's center, plans a set of
footsteps that follow this path. The footstep planner uses a reward function that specifies the relative
trade-off between several different features of the robot's state, including (i) several features
capturing the roughness and slope of the terrain at several different spatial scales around the robot's
feet, (ii) distance of the foot location from the robot's desired center, (iii) the area and inradius of the
support triangle formed by the three stationary feet, and other similar features. Kinematic feasibility
is required for all candidate foot locations and collision of the legs with obstacles is forbidden. To
form the high-level cost, we aggregate features from the footstep planner. In particular, for each
foot we consider all the footstep features within a 3 cm radius of the foot's "home" position (the
desired position of the foot relative to the center of mass in the absence of all other discriminating
features), and aggregate these features to form the features for the body path planner. While this is
an approximation, we found that it performed very well in practice, possibly due to its ability to account for stochasticity of the domain. After forming the cost function for both levels, we used value
iteration to find the optimal policy for the body path planner, and a five-step lookahead receding
horizon search to find a good set of footsteps for the footstep planner.
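A simplified sketch of the feature aggregation just described appears below. The feature layout, the handling of empty neighborhoods, and the single shared high-level weight vector are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def aggregate_foot_features(cand_feats, cand_pos, home, radius=0.03):
        """Average the footstep-level features of all candidate foot locations
        within `radius` (3 cm) of one foot's home position, giving that foot's
        contribution to the body path planner's features.
        cand_feats: (n, d) footstep features, cand_pos: (n, 2), home: (2,)."""
        near = np.linalg.norm(cand_pos - home, axis=1) <= radius
        if not near.any():                        # no candidates nearby (assumed fallback)
            return np.zeros(cand_feats.shape[1])
        return cand_feats[near].mean(axis=0)

    def body_path_cost(w_high, per_foot_features):
        # Linear high-level cost over the aggregated features of the four feet;
        # value iteration is then run on the resulting body-path MDP.
        return sum(float(w_high @ f) for f in per_foot_features)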
4.2.2 Hierarchical Apprenticeship Learning for Quadruped Locomotion
All experiments were carried out on two terrains: a relatively easy terrain for training, and a significantly more challenging terrain for testing. To give advice at the high level, we specified complete
body trajectories for the robot's center of mass, as shown in Figure 3(a). To give advice for the
low level we looked for situations in which the robot stepped in a suboptimal location, and then
indicated the correct greedy foot placement, as shown in Figure 3(b).
Figure 4: Snapshots of quadruped while traversing the testing terrain.
Figure 5: Body and footstep plans for different constraints on the training (left) and testing (right) terrains:
(Red) No Learning, (Green) HAL, (Blue) Path Only, (Yellow) Footstep Only.
The entire training set consisted of a single high-level path demonstration across the training terrain, and 20 low-level footstep
demonstrations on this terrain; it took about 10 minutes to collect the data.
Even from this small amount of training data, the learned system achieved excellent performance,
not only on the training board, but also on the much more difficult testing board. Figure 4 shows
snapshots of the quadruped crossing the testing board. Figure 5 shows the resulting footsteps taken
for each of the different types of constraints, which shows a very large qualitative difference between the footsteps chosen before and after training. Table 1 shows the crossing times for each of
the different types of constraints. As shown, the HAL algorithm outperforms all the intermediate
methods. Using only footstep constraints does quite well on the training board, but on the testing
board the lack of high-level training leads the robot to take a very roundabout route, and it performs
much worse. The quadruped fails at crossing the testing terrain when learning from the path-level
demonstration only or when not learning at all.
Finally, prior to undertaking our work on hierarchical apprenticeship learning, we invested several
weeks attempting to hand-tune a controller capable of picking good footsteps across challenging terrain. However, none of our previous efforts could significantly outperform the controller presented
here, learned from about 10 minutes worth of data, and many of our previous efforts performed
substantially worse.
5 Related Work and Discussion
The work presented in this paper relates to many areas of reinforcement learning, including apprenticeship learning and hierarchical reinforcement learning, and to a large body of past work in
quadruped locomotion. In the introduction and in the formulation of our algorithm we discussed the
connection to the inverse reinforcement learning algorithm of [1] and the maximum margin planning algorithm of [13]. In addition, there has been subsequent work [14] that extends the maximum
margin planning framework to allow for the automated addition of new features through a boosting
procedure. There has also been much recent work on hierarchical reinforcement learning; a recent survey is [2]. However, all the work in this area that we are aware of
deals with the more standard reinforcement learning formulation where known rewards are given
to the agent as it acts in a (possibly unknown) environment. In contrast, our work follows the apprenticeship learning paradigm where the model, but not the rewards, is known to the agent. Prior
work on legged locomotion has mostly focused on generating gaits for stably traversing fairly flat terrain (see, among many others, [10], [7]).
                      HAL     Feet Only   Path Only   No Learning
Training Time (sec)   31.03   33.46       -           40.25
Testing Time (sec)    35.25   45.70       -           -
Table 1: Execution times for different constraints on training and testing terrains. Dashes indicate that the
robot fell over and did not reach the goal.
Only very few learning algorithms, which attempt to
generalize to previously unseen terrains, have been successfully applied before [6, 3, 9]. The terrains
considered in this paper go well beyond the difficulty level considered in prior work.
6 Acknowledgements
We gratefully acknowledge the anonymous reviewers for helpful suggestions. This work was supported by the DARPA Learning Locomotion program under contract number FA8650-05-C-7261.
References
[1] Pieter Abbeel and Andrew Y. Ng. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the International Conference on Machine Learning, 2004.
[2] Andrew G. Barto and Sridhar Mahadevan. Recent advances in hierarchical reinforcement learning. Discrete Event Dynamic Systems: Theory and Applications, 13:41–77, 2003.
[3] Joel Chestnutt, James Kuffner, Koichi Nishiwaki, and Satoshi Kagami. Planning biped navigation strategies in complex environments. In Proceedings of the International Conference on Humanoid Robotics,
2003.
[4] Thomas G. Dietterich. Hierarchical reinforcement learning with the MAXQ value function decomposition. Journal of Artificial Intelligence Research, 13:227–303, 2000.
[5] Nicholas K. Jong and Peter Stone. State abstraction discovery from irrelevant state variables. In Proceedings of the International Joint Conference on Artificial Intelligence, 2005.
[6] H. Kim, T. Kang, V. G. Loc, and H. R. Choi. Gait planning of quadruped walking and climbing robot
for locomotion in 3D environment. In Proceedings of the International Conference on Robotics and
Automation, 2005.
[7] Nate Kohl and Peter Stone. Machine learning for fast quadrupedal locomotion. In Proceedings of AAAI,
2004.
[8] J. Zico Kolter, Mike P. Rodgers, and Andrew Y. Ng. A complete control architecture for quadruped locomotion over rough terrain. In Proceedings of the International Conference on Robotics and Automation
(to appear), 2008.
[9] Honglak Lee, Yirong Shen, Chih-Han Yu, Gurjeet Singh, and Andrew Y. Ng. Quadruped robot obstacle
negotiation via reinforcement learning. In Proceedings of the International Conference on Robotics and
Automation, 2006.
[10] Jun Morimoto and Christopher G. Atkeson. Minimax differential dynamic programming: An application
to robust biped walking. In Neural Information Processing Systems 15, 2002.
[11] Gergely Neu and Csaba Szepesvári. Apprenticeship learning using inverse reinforcement learning and
gradient methods. In Proceedings of Uncertainty in Artificial Intelligence, 2007.
[12] Ronald Parr and Stuart Russell. Reinforcement learning with hierarchies of machines. In Neural Information Processing Systems 10, 1998.
[13] Nathan Ratliff, J. Andrew Bagnell, and Martin Zinkevich. Maximum margin planning. In Proceedings of
the International Conference on Machine Learning, 2006.
[14] Nathan Ratliff, David Bradley, J. Andrew Bagnell, and Joel Chestnutt. Boosting structured prediction for
imitation learning. In Neural Information Processing Systems 19, 2007.
[15] Richard S. Sutton, Doina Precup, and Satinder Singh. Between mdps and semi-mdps: A framework for
temporal abstraction in reinforcement learning. Artificial Intelligence, 112:181–211, 1999.
[16] Ben Taskar, Vassil Chatalbashev, Daphne Koller, and Carlos Guestrin. Learning structured prediction
models: A large margin approach. In Proceedings of the International Conference on Machine Learning,
2005.
[17] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and
interdependent output variables. Journal of Machine Learning Research, 6:1453–1484, 2005.
Object Recognition by Scene Alignment
Bryan C. Russell Antonio Torralba Ce Liu Rob Fergus William T. Freeman
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139 USA
{brussell,torralba,celiu,fergus,billf}@csail.mit.edu
Abstract
Current object recognition systems can only recognize a limited number of object
categories; scaling up to many categories is the next challenge. We seek to build
a system to recognize and localize many different object categories in complex
scenes. We achieve this through a simple approach: by matching the input image, in an appropriate representation, to images in a large training set of labeled
images. Due to regularities in object identities across similar scenes, the retrieved
matches provide hypotheses for object identities and locations. We build a probabilistic model to transfer the labels from the retrieval set to the input image. We
demonstrate the effectiveness of this approach and study algorithm component
contributions using held-out test sets from the LabelMe database.
1 Introduction
The recognition of objects in a scene often consists of matching representations of image regions
to an object model while rejecting background regions. Recent examples of this approach include
aligning pictorial cues [4], shape correspondence [1], and modeling the constellation of parts [5].
Other models, exploiting knowledge of the scene context in which the objects reside, have proven
successful in boosting object recognition performance [18, 20, 15, 7, 13]. These methods model the
relationship between scenes and objects and allow information transfer across the two.
Here, we exploit scene context using a different approach: we formulate the object detection problem as one of aligning elements of the entire scene to a large database of labeled images. The
background, instead of being treated as a set of outliers, is used to guide the detection process. Our
approach relies on the observation that when we have a large enough database of labeled images, we
can find with high probability some images in the database that are very close to the query image
in appearance, scene contents, and spatial arrangement [6, 19]. Since the images in the database
are partially labeled, we can transfer the knowledge of the labeling to the query image. Figure 1
illustrates this idea. With these assumptions, the problem of object detection in scenes becomes a
problem of aligning scenes. The main issues are: (1) Can we find a big enough dataset to span the
required large number of scene configurations? (2) Given an input image, how do we find a set of
images that aligns well with the query image? (3) How do we transfer the knowledge about objects
contained in the labels?
The LabelMe dataset [14] is well-suited for this task, having a large number of images and labels
spanning hundreds of object categories. Recent studies using non-parametric methods for computer
vision and graphics [19, 6] show that when a large number of images are available, simple indexing
techniques can be used to retrieve images with object arrangements similar to those of a query image.
The core part of our system is the transfer of labels from the images that best match the query image.
We assume that there are commonalities amongst the labeled objects in the retrieved images and we
cluster them to form candidate scenes. These scene clusters give hints as to what objects are depicted
Figure 1: Overview of our system. Panels: (a) input image; (b) images with similar scene configuration; (c) output image with transferred object labels (screen, desk, mousepad, keyboard, mouse). Given an input image, we search for images having a similar scene
configuration in a large labeled database. The knowledge contained in the object labels for the best matching
images is then transferred onto the input image to detect objects. Additional information, such as depth-ordering
relationships between the objects, can also be transferred.
Figure 2: Retrieval set images. Each of the two rows depicts an input image (on the left) and 30 images from
the LabelMe dataset [14] that best match the input image using the gist feature [12] and L1 distance (the images
are sorted by their distances in raster order). Notice that the retrieved images generally belong to similar scene
categories. Also the images contain mostly the same object categories, with the larger objects often matching
in spatial location within the image. Many of the retrieved images share similar geometric perspective.
in the query image and their likely location. We describe a relatively simple generative model for
determining which scene cluster best matches the query image and use this to detect objects.
The remaining sections are organized as follows: In Section 2, we describe our representation for
scenes and objects. We formulate a model that integrates the information in the object labels with
object detectors in Section 3. In Section 4, we extend this model to allow clustering of the retrieved
images based on the object labels. We show experimental results of our system output in Section 5,
and conclude in Section 6.
2 Matching Scenes and Objects with the Gist Feature
We describe the gist feature [12], which is a low dimensional representation of an image region
and has been shown to achieve good performance for the scene recognition task when applied to an
entire image. To construct the gist feature, an image region is passed through a Gabor filter bank
comprising 4 scales and 8 orientations. The image region is divided into a 4x4 non-overlapping grid
and the output energy of each filter is averaged within each grid cell. The resulting representation
is a 4 × 8 × 16 = 512 dimensional vector. Note that the gist feature preserves spatial structure
information and is similar to applying the SIFT descriptor [9] to the image region.
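A rough sketch of this construction is given below; the particular Gabor frequencies and the absolute-value energy measure are illustrative choices rather than the exact filter bank used by the authors.

    import numpy as np
    from scipy.signal import fftconvolve
    from skimage.filters import gabor_kernel

    def gist_descriptor(image, n_scales=4, n_orients=8, grid=4):
        """Sketch of the gist feature for a 2-D grayscale image: filter with a
        Gabor bank (4 scales x 8 orientations) and average each filter's output
        energy over a 4x4 grid, giving a 4*8*16 = 512-dimensional vector."""
        h, w = image.shape
        feats = []
        for s in range(n_scales):
            freq = 0.25 / (2 ** s)                        # assumed scale spacing
            for o in range(n_orients):
                k = gabor_kernel(freq, theta=np.pi * o / n_orients)
                energy = np.abs(fftconvolve(image, k, mode='same'))
                for rows in np.array_split(np.arange(h), grid):
                    for cols in np.array_split(np.arange(w), grid):
                        feats.append(energy[np.ix_(rows, cols)].mean())
        return np.asarray(feats)                          # length 512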
We consider the task of retrieving a set of images (which we refer to as the retrieval set) that closely
matches the scene contents and geometrical layout of an input image. Figure 2 shows retrieval sets
for two typical input images using the gist feature. We show the top 30 closest matching images
from the LabelMe database based on the L1-norm distance, which is robust to outliers. Notice that
the gist feature retrieves images that match the scene type of the input image. Furthermore, many
of the objects depicted in the input image appear in the retrieval set, with the larger objects residing
in approximately the same spatial location relative to the image. Also, the retrieval set has many
images that share a similar geometric perspective. Of course, not every retrieved image matches
well and we account for outliers in Section 4.
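Retrieval itself then reduces to a nearest-neighbor lookup under the L1 norm; a minimal sketch (with stand-in data) is:

    import numpy as np

    def retrieve(query_gist, database_gists, k=30):
        """Return indices of the k database images whose gist descriptors are
        closest to the query under the L1 distance, as in Figure 2."""
        dists = np.abs(database_gists - query_gist).sum(axis=1)
        return np.argsort(dists)[:k]

    db = np.random.rand(10000, 512)        # stand-in for precomputed descriptors
    matches = retrieve(db[0], db)          # the query image itself ranks first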
3 Utilizing Retrieval Set Images for Object Detection
We evaluate the ability of the retrieval
set to predict the presence of objects in
the input image. For this, we found a
retrieval set of 200 images and formed
a normalized histogram (the histogram
entries sum to one) of the object categories that were labeled. We compute
performance for object categories with
at least 200 training examples and that
appear in at least 15 test images. We
compute the area under the ROC curve
for each object category. As a comparison, we evaluate the performance
of an SVM applied to gist features by
using the maximal score over a set of
bounding boxes extracted from the image. The area under ROC performance
of the retrieval set versus the SVM is
shown in Figure 3 as a scatter plot, with
each point corresponding to a tested object category. As a guide, a diagonal
line is displayed; those points that reside above the diagonal indicate better
SVM performance (and vice versa). Notice that the retrieval set predicts well
the objects present in the input image
and outperforms the detectors based on
local appearance information (the SVM)
for most object classes.
[Figure 3: scatter plot of the area under the ROC curve for the retrieval-set classifier (x-axis) versus the SVM on local appearance (y-axis), one point per tested object category.]
Figure 3: Evaluation of the goodness of the retrieval set by
how well it predicts which objects are present in the input image. We build a simple classifier based on object counts in the
retrieval set as provided by their associated LabelMe object labels. We compare this to detection based on local appearance
alone using an SVM applied to bounding boxes in the input image (the maximal score is used). The area under the ROC curve
is computed for many object categories for the two classifiers.
Performance is shown as a scatter plot where each point represents an object category. Notice that the retrieval set predicts
well object presence and in a majority of cases outperforms the
SVM output, which is based only on local appearance.
In Section 2, we observed that the set of labels corresponding to images that best match an input
image predicts well the contents of the input image. In this section, we will describe a model that
integrates local appearance with object presence and spatial likelihood information given by the
object labels belonging to the retrieval set.
We wish to model the relationship between object categories o, their spatial location x within an
image, and their appearance g. For a set of N images, each having Mi object proposals over L
object categories, we assume a joint model that factorizes as follows:
p(o, x, g \mid \theta, \phi, \beta) = \prod_{i=1}^{N} \prod_{j=1}^{M_i} \sum_{h_{i,j}=0}^{1} p(o_{i,j} \mid h_{i,j}, \theta)\, p(x_{i,j} \mid o_{i,j}, h_{i,j}, \phi)\, p(g_{i,j} \mid o_{i,j}, h_{i,j}, \beta)    (1)
We assume that the joint model factorizes as a product of three terms: (i) p(o_{i,j} | h_{i,j} = m, θ_m), the likelihood of which object categories will appear in the image, (ii) p(x_{i,j} | o_{i,j} = l, h_{i,j} = m, φ_{m,l}), the likely spatial locations of observing object category l in the image, and (iii) p(g_{i,j} | o_{i,j} = l, h_{i,j} = m, β_{m,l}), the appearance likelihood of object category l. We let h_{i,j} = 1 indicate whether object category o_{i,j} is actually present in location x_{i,j} (h_{i,j} = 0 indicates absence). Figure 4 depicts the above as a graphical model. We use plate notation, where the variable nodes inside a plate are duplicated based on the counts depicted in the top-left corner of the plate.
We instantiate the model as follows. The spatial location of objects is parameterized as a bounding box x_{i,j} = (c^x_{i,j}, c^y_{i,j}, c^w_{i,j}, c^h_{i,j}), where (c^x_{i,j}, c^y_{i,j}) is the centroid and (c^w_{i,j}, c^h_{i,j}) is the width and height (bounding boxes are extracted from object labels by tightly cropping the polygonal annotation). Each component of x_{i,j} is normalized with respect to the image to lie in [0, 1]. We assume θ_m are multinomial parameters and φ_{m,l} = (μ_{m,l}, Λ_{m,l}) are Gaussian means and covariances over the bounding box parameters. Finally, we assume g_{i,j} is the output of a trained SVM applied to a gist feature ĝ_{i,j}. We let β_{m,l} parameterize the logistic function (1 + exp(−β_{m,l} [1 g_{i,j}]^T))^{−1}.
The parameters β_{m,l} are learned offline by first training SVMs for each object class on the set of all labeled examples of object class l and a set of distractors. We then fit logistic functions to the positive and negative examples of each class. We learn the parameters θ_m and φ_{m,l} online using the object labels corresponding to the retrieval set. These are learned by simply counting the object class occurrences and fitting Gaussians to the bounding boxes corresponding to the object labels.
Figure 4: Graphical model that integrates information about which objects are likely to be present in the image o, their appearance g, and their likely spatial location x. The parameters for object appearance β are learned offline using positive and negative examples for each object class. The parameters for object presence likelihood θ and spatial location φ are learned online from the retrieval set. For all possible bounding boxes in the input image, we wish to infer h, which indicates whether an object is present or absent.
For the input image, we wish to infer the latent variables h_{i,j} corresponding to a dense sampling of all possible bounding box locations x_{i,j} and object classes o_{i,j} using the learned parameters θ_m, φ_{m,l}, and β_{m,l}. For this, we compute the posterior distribution p(h_{i,j} = m | o_{i,j} = l, x_{i,j}, g_{i,j}, θ_m, φ_{m,l}, β_{m,l}), which is proportional to the product of the three learned distributions, for m ∈ {0, 1}.
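As a rough numeric sketch of this step for a single candidate box (the number of categories, all parameter values, and the use of the logistic complement as the appearance term under h = 0 are illustrative assumptions, not values from the paper):

    import numpy as np
    from scipy.stats import multivariate_normal

    def posterior_h1(theta_l, mu, cov, beta, x_box, g_score, L=10):
        """Sketch: P(h = 1) for one candidate box of class l, proportional to the
        product of the presence, spatial, and appearance terms. For h = 0 we use
        the noninformative terms described in the text: o ~ Uniform(1/L),
        x ~ Uniform(1), and (assumed) the logistic complement for appearance."""
        sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
        g1 = sigmoid(beta @ np.array([1.0, g_score]))   # (1 + exp(-beta [1 g]^T))^-1
        p1 = theta_l * multivariate_normal.pdf(x_box, mean=mu, cov=cov) * g1
        p0 = (1.0 / L) * 1.0 * (1.0 - g1)
        return p1 / (p0 + p1)

    # Hypothetical usage for a 4-d box (cx, cy, w, h); all values are placeholders:
    mu, cov = np.array([0.5, 0.6, 0.3, 0.2]), 0.05 * np.eye(4)
    p = posterior_h1(0.2, mu, cov, beta=np.array([-1.0, 4.0]),
                     x_box=np.array([0.5, 0.55, 0.25, 0.2]), g_score=0.8)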
The procedure outlined here allows for significant computational savings over naive application of
an object detector. Without finding similar images that match the input scene configuration, we
would need to apply an object detector densely across the entire image for all object categories. In
contrast, our model can constrain which object categories to look for and where. More precisely,
we only need to consider object categories with relatively high probability in the scene model and
bounding boxes within the range of the likely search locations. These can be decided based on
thresholds. Also note that the conditional independences implied by the graphical model allows us
to fit the parameters from the retrieval set and train the object detectors separately.
Note that for tractability, we assume Dirichlet and Normal-Inverse-Wishart conjugate prior distributions over θ_m and φ_{m,l} with hyperparameters α and λ = (μ, κ, ν, Λ) (expected mean μ, κ pseudocounts on the scale of the spatial observations, ν degrees of freedom, and sample covariance Λ). Furthermore, we assume a Bernoulli prior distribution over h_{i,j} with parameter 0.5. We hand-tuned the remaining parameters in the model. For h_{i,j} = 0, we assume the noninformative distributions o_{i,j} ~ Uniform(1/L) and each component of x_{i,j} ~ Uniform(1).
4 Clustering Retrieval Set Images for Robustness to Mismatches
While many images in the retrieval set match the input image scene configuration and contents,
there are also outliers. Typically, most of the labeled objects in the outlier images are not present
in the input image or in the set of correctly matched retrieval images. In this section, we describe
a process to organize the retrieval set images into consistent clusters based on the co-occurrence of
the object labels within the images. The clusters will typically correspond to different scene types
and/or viewpoints. The task is to then automatically choose the cluster of retrieval set images that
will best assist us in detecting objects in the input image.
We augment the model of Section 3 by assigning each image to a latent cluster s_i. The cluster assignments are distributed according to the mixing weights π. We depict the model in Figure 5(a). Intuitively, the model finds clusters using the object labels o_{i,j} and their spatial location x_{i,j} within the retrieved set of images. To automatically infer the number of clusters, we use a Dirichlet Process prior on the mixing weights π ~ Stick(γ), where Stick(γ) is the stick-breaking process of
Figure 5: (a) Graphical model for clustering retrieval set images using their object labels. We extend the
model of Figure 4 to allow each image to be assigned to a latent cluster si , which is drawn from mixing weights
π. We use a Dirichlet process prior to automatically infer the number of clusters. We illustrate the clustering
process for the retrieval set corresponding to the input image in (b). (c) Histogram of the number of images
assigned to the five clusters with highest likelihood. (d) Montages of retrieval set images assigned to each
cluster, along with their object labels (colors show spatial extent), shown in (e). (f) The likelihood of an object
category being present in a given cluster (the top nine most likely objects are listed). (g) Spatial likelihoods for
the objects listed in (f). Note that the montage cells are sorted in raster order.
Griffiths, Engen, and McCloskey [8, 11, 16], with concentration parameter γ. In the Chinese restaurant
analogy, the different clusters correspond to tables, and the parameters for object presence θ_k and spatial location φ_k are the dishes served at a given table. An image (along with its object labels)
corresponds to a single customer that is seated at a table.
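A compact sketch of one collapsed Gibbs sweep under this prior follows; the per-cluster marginal likelihood `loglik` is left abstract (under the conjugate priors of Section 3 it has a closed form, which is what the Rao-Blackwellized sampler used later in this section exploits), and its name is ours.

    import numpy as np

    def crp_gibbs_sweep(assign, images, loglik, gamma=100.0, rng=None):
        """One Gibbs sweep over cluster assignments s_i under a CRP prior with
        concentration gamma. loglik(x, members) = log marginal likelihood of
        image x's labels joining a cluster currently containing `members`."""
        rng = rng or np.random.default_rng(0)
        n = len(images)
        for i in range(n):
            assign[i] = -1                              # unseat customer i
            tables = sorted(set(assign) - {-1})
            logp = []
            for c in tables:                            # join an existing table
                members = [images[j] for j in range(n) if assign[j] == c]
                logp.append(np.log(len(members)) + loglik(images[i], members))
            logp.append(np.log(gamma) + loglik(images[i], []))   # open a new table
            logp = np.asarray(logp)
            p = np.exp(logp - logp.max()); p /= p.sum()
            k = rng.choice(len(p), p=p)
            assign[i] = tables[k] if k < len(tables) else (max(tables) + 1 if tables else 0)
        return assign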
We illustrate the clustering process for a retrieval set belonging to the input image in Figure 5(b).
The five clusters with highest likelihood are visualized in the columns of Figure 5(d)-(g). Figure 5(d)
shows montages of retrieval images with highest likelihood that were assigned to each cluster. The
total number of retrieval images that were assigned to each cluster are shown as a histogram in
Figure 5(c). The number of images assigned to each cluster is proportional to the cluster mixing
weights, π. Figure 5(e) depicts the object labels that were provided for the images in Figure 5(d),
with the colors showing the spatial extent of the object labels. Notice that the images and labels
belonging to each cluster share approximately the same object categories and geometrical configuration. Also, the cluster that best matches the input image tends to have the highest number of
retrieval images assigned to it. Figure 5(f) shows the likelihood of objects that appear in the cluster
(the nine objects with highest likelihood are shown). This corresponds to θ_k in the model. Figure 5(g)
depicts the spatial distribution of the object centroid within the cluster. The montage of nine cells
correspond to the nine objects listed in Figure 5(f), sorted in raster order. The spatial distributions
illustrate φ_k. Notice that typically at least one cluster predicts well the objects contained in the input
image, in addition to their location, via the object likelihoods and spatial distributions.
To learn θ_k and φ_k, we use a Rao-Blackwellized Gibbs sampler to draw samples from the posterior
distribution over si given the object labels belonging to the set of retrieved images. We ran the
Gibbs sampler for 100 iterations. Empirically, we observed relatively fast convergence to a stable
solution. Note that improved performance may be achieved with variational inference for Dirichlet
Processes [10, 17]. We manually tuned all hyperparameters using a validation set of images, with concentration parameter γ = 100 and spatial location hyperparameters κ = 0.1, μ = 0.5, ν = 3, and Λ = 0.01 across all bounding box parameters (with the exception of Λ = 0.1 for the horizontal centroid location, which reflects less certainty a priori about the horizontal location of objects). We used a symmetric Dirichlet hyperparameter with α_l = 0.1 across all object categories l.
For final object detection, we use the learned parameters θ, φ, and β to infer h_{i,j}. Since s_i and h_{i,j} are latent random variables for the input image, we perform hard EM by marginalizing over h_{i,j} to infer the best cluster ŝ_i. We then in turn fix ŝ_i and infer h_{i,j}, as outlined in Section 3.
5 Experimental Results
In this section we show qualitative and quantitative results for our model. We use a subset of the
LabelMe dataset for our experiments, discarding spurious and unlabeled images. The dataset is
split into training and test sets. The training set has 15691 images and 105034 annotations. The
test set has 560 images and 3571 annotations. The test set comprises images of street scenes and
indoor office scenes. To avoid overfitting, we used street scene images that were photographed in
a different city from the images in the training set. To overcome the diverse object labels provided
by users of LabelMe, we used WordNet [3] to resolve synonyms. For object detection, we extracted
3809 bounding boxes per image. For the final detection results, we used non-maximal suppression.
Example object detections from our system are shown in Figure 6(b),(d),(e). Notice that our system
can find many different objects embedded in different scene type configurations. When mistakes
are made, the proposed object location typically makes sense within the scene. In Figure 6(c), we
compare against a baseline object detector using only appearance information and trained with a
linear kernel SVM. Thresholds for both detectors were set to yield a 0.5 false positive rate per image
for each object category (≈1.3e-4 false positives per window). Notice that our system produces
more detections and rejects objects that do not belong to the scene. In Figure 6(e), we show typical
failures of the system, which usually occurs when the retrieval set is not correct or an input image is
outside of the training set.
In Figure 7, we show quantitative results for object detection for a number of object categories.
We show ROC curves (plotted on log-log axes) for the local appearance detector, the detector from
Section 3 (without clustering), and the full system with clustering. We scored detections using the
PASCAL VOC 2006 criteria [2], where the outputs are sorted from most confident to least and the
ratio of intersection area to union area is computed between an output bounding box and groundtruth bounding box. If the ratio exceeds 0.5, then the output is deemed correct and the ground-truth
label is removed. While this scoring criterion is good for some objects, other objects are not well
represented by bounding boxes (e.g. buildings and sky).
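A small sketch of this scoring rule, assuming boxes are (x1, y1, x2, y2) tuples and detections are (box, confidence) pairs:

    def iou(a, b):
        """Ratio of intersection area to union area of two boxes."""
        ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union > 0 else 0.0

    def score_detections(detections, truths, thresh=0.5):
        """PASCAL VOC 2006-style scoring: walk detections from most to least
        confident; a detection is correct if its best-overlapping unused
        ground-truth box exceeds the IoU threshold, and that box is removed."""
        remaining, correct = list(truths), []
        for box, conf in sorted(detections, key=lambda d: -d[1]):
            overlaps = [iou(box, t) for t in remaining]
            if overlaps and max(overlaps) > thresh:
                remaining.pop(overlaps.index(max(overlaps)))
                correct.append(True)
            else:
                correct.append(False)
        return correct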
Notice that the detectors that take into account context typically outperform the detector using local
appearance only. Also, clustering does as well and in some cases outperforms no clustering. Finally,
the overall system sometimes performs worse for indoor scenes. This is due to poor retrieval set
matching, which causes a poor context model to be learned.
6 Conclusion
We presented a framework for object detection in scenes based on transferring knowledge about
objects from a large labeled image database. We have shown that a relatively simple parametric
Figure 6: (a) Input images. (b) Object detections from our system combining scene alignment with local
detection. (c) Object detections using appearance information only with an SVM. Notice that our system
detects more objects and rejects out-of-context objects. (d) More outputs from our system. Notice that many
different object categories are detected across different scenes. (e) Failure cases for our system. These often
occur when the retrieval set is incorrect.
model, trained on images loosely matching the spatial configuration of the input image, is capable
of accurately inferring which objects are depicted in the input image along with their location. We
showed that we can successfully detect a wide range of objects depicted in a variety of scene types.
7 Acknowledgments
This work was supported by the National Science Foundation Grant No. 0413232, the National
Geospatial-Intelligence Agency NEGI-1582-04-0004, and the Office of Naval Research MURI
Grant N00014-06-1-0734.
References
[1] A. Berg, T. Berg, and J. Malik. Shape matching and object recognition using low distortion correspondence. In CVPR, volume 1, pages 26–33, June 2005.
[2] M. Everingham, A. Zisserman, C.K.I. Williams, and L. Van Gool. The pascal visual object classes
challenge 2006 (voc 2006) results. Technical report, September 2006. The PASCAL2006 dataset can be
downloaded at http://www.pascal-network.org/challenges/VOC/voc2006/.
[Figure 7: per-category detection plots on log-log axes for tree (531), car (138), road (232), screen (268), sky (144), bookshelf (47), motorbike (40), keyboard (154), person (113), sidewalk (196), building (547), and wall (69); legend: SVM, No clustering, Clustering.]
Figure 7: Comparison of full system against local appearance only detector (SVM). Detection rate for a
number of object categories tested at a fixed false positive per window rate of 2e-04 (0.8 false positives per
image per object class). The number of test examples appear in parenthesis next to the category name. We
plot performance for a number of classes for the baseline SVM object detector (blue), the detector of Section 3
using no clustering (red), and the full system (green). Notice that detectors taking into account context perform
better in most cases than using local appearance alone. Also, clustering does as well, and sometimes exceeds no
clustering. Notable exceptions are for some indoor object categories. This is due to poor retrieval set matching,
which causes a poor context model to be learned.
[3] C. Fellbaum. Wordnet: An Electronic Lexical Database. Bradford Books, 1998.
[4] P. Felzenszwalb and D. Huttenlocher. Pictorial structures for object recognition. Intl. J. Computer Vision,
61(1), 2005.
[5] R. Fergus, P. Perona, and A. Zisserman. Object class recognition by unsupervised scale-invariant learning.
In CVPR, 2003.
[6] James Hays and Alexei Efros. Scene completion using millions of photographs. In SIGGRAPH, 2007.
[7] D. Hoiem, A. Efros, and M. Hebert. Putting objects in perspective. In CVPR, 2006.
[8] H. Ishwaran and M. Zarepour. Exact and approximate sum-representations for the dirichlet process.
Canadian Journal of Statistics, 30:269–283, 2002.
[9] David G. Lowe. Distinctive image features from scale-invariant keypoints. Intl. J. Computer Vision,
60(2):91–110, 2004.
[10] J. McAuliffe, D. Blei, and M. Jordan. Nonparametric empirical bayes for the Dirichlet process mixture
model. Statistics and Computing, 16:5–14, 2006.
[11] R. M. Neal. Density modeling and clustering using Dirichlet diffusion trees. In Bayesian Statistics,
7:619–629, 2003.
[12] A. Oliva and A. Torralba. Modeling the shape of the scene: a holistic representation of the spatial
envelope. Intl. J. Computer Vision, 42(3):145–175, 2001.
[13] A. Rabinovich, A. Vedaldi, C. Galleguillos, E. Wiewiora, and S. Belongie. Objects in context. In IEEE
Intl. Conf. on Computer Vision, 2007.
[14] B. C. Russell, A. Torralba, K. P. Murphy, and W. T. Freeman. Labelme: a database and web-based tool
for image annotation. Technical Report AIM-2005-025, MIT AI Lab Memo, September, 2005.
[15] E. Sudderth, A. Torralba, W. T. Freeman, and W. Willsky. Learning hierarchical models of scenes, objects,
and parts. In IEEE Intl. Conf. on Computer Vision, 2005.
[16] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the
American Statistical Association, 2006.
[17] Y. W. Teh, D. Newman, and Welling M. A collapsed variational bayesian inference algorithm for latent
dirichlet allocation. In Advances in Neural Info. Proc. Systems, 2006.
[18] A. Torralba. Contextual priming for object detection. Intl. J. Computer Vision, 53(2):153–167, 2003.
[19] A. Torralba, R. Fergus, and W.T. Freeman. Tiny images. Technical Report AIM-2005-025, MIT AI Lab
Memo, September, 2005.
[20] A. Torralba, K. Murphy, W. Freeman, and M. Rubin. Context-based vision system for place and object
recognition. In Intl. Conf. Computer Vision, 2003.
Adaptive Bayesian Inference
Umut A. Acar∗
Toyota Tech. Inst.
Chicago, IL
[email protected]
Alexander T. Ihler
U.C. Irvine
Irvine, CA
[email protected]
Ramgopal R. Mettu†
Univ. of Massachusetts
Amherst, MA
[email protected]
Özgür Sümer
Univ. of Chicago
Chicago, IL
[email protected]
Abstract
Motivated by stochastic systems in which observed evidence and conditional dependencies between states of the network change over time, and certain quantities
of interest (marginal distributions, likelihood estimates etc.) must be updated, we
study the problem of adaptive inference in tree-structured Bayesian networks. We
describe an algorithm for adaptive inference that handles a broad range of changes
to the network and is able to maintain marginal distributions, MAP estimates, and
data likelihoods in all expected logarithmic time. We give an implementation of
our algorithm and provide experiments that show that the algorithm can yield up
to two orders of magnitude speedups on answering queries and responding to dynamic changes over the sum-product algorithm.
1 Introduction
Graphical models [14, 8] are a powerful tool for probabilistic reasoning over sets of random variables. Problems of inference, including marginalization and MAP estimation, form the basis of
statistical approaches to machine learning. In many applications, we need to perform inference under dynamically changing conditions, such as the acquisition of new evidence or an alteration of
the conditional relationships which make up the model. Such changes arise naturally in the experimental setting, where the model quantities are empirically estimated and may change as more data
are collected, or in which the goal is to assess the effects of a large number of possible interventions. Motivated by such applications, Delcher et al. [6] identify dynamic Bayesian inference as the
problem of performing Bayesian inference on a dynamically changing graphical model. Dynamic
changes to the graphical model may include changes to the observed evidence, to the structure of the
graph itself (such as edge or node insertions/deletions), and changes to the conditional relationships
among variables.
To see why adapting to dynamic changes is difficult, consider the simple problem of Bayesian inference in a Markov chain with n variables. Suppose that all marginal distributions have been computed
in O(n) time using the sum-product algorithm, and that some conditional distribution at a node u
is subsequently updated. One way to update the marginals would be to recompute the messages
computed by sum-product from u to other nodes in the network. This can take Ω(n) time because regardless of where u is in the network, there is always another node v at distance Ω(n) from u.
A similar argument holds for general tree-structured networks. Thus, simply updating sum-product
messages can be costly in applications where marginals must be adaptively updated after changes to
the model (see Sec. 5 for further discussion).
In this paper, we present a technique for efficient adaptive inference on graphical models. For a tree-structured graphical model with n nodes, our approach supports the computation of any marginal, updates to conditional probability distributions (including observed evidence) and edge insertions
∗ U. A. Acar is supported by a gift from Intel.
† R. R. Mettu is supported by a National Science Foundation CAREER Award (IIS-0643768).
and deletions in expected O(log n) time. As an example of where adaptive inference can be effective, consider a computational biology application that requires computing the state of the active site
in a protein as the user modifies the protein (e.g., mutagenesis). In this application, we can represent
the protein with a graphical model and use marginal computations to determine the state of the active site. We reflect the modifications to the protein by updating the graphical model representation
and performing marginal queries to obtain the state of the active site. We show in Sec. 5 that our
approach can achieve a speedup of one to two orders of magnitude over the sum-product algorithm
in such applications.
Our approach achieves logarithmic update and query times by mapping an arbitrary tree-structured graphical model into a balanced representation that we call a cluster tree (Sec. 3–4). We perform
an O(n)-time preprocessing step to compute the cluster tree using a technique known as tree contraction [13]. We ensure that for an input network with n nodes, the cluster tree has an expected
depth of O(log n) and expected size O(n). We show that the nodes in the cluster tree can be tagged
with partial computations (corresponding to marginalizations of subtrees of the input network) in a
way that allows marginal computations and changes to the network to be performed in O(log n) expected time. We give simulation results (Sec. 5) that show that our algorithm can achieve a speedup
of one to two orders of magnitude over the sum-product algorithm. Although we focus primarily
on the problem of answering marginal queries, it is straightforward to generalize our algorithms to
other, similar inference goals, such as MAP estimation and evaluating the likelihood of evidence.
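To make the flavor of this data structure concrete in the simplest possible setting, the sketch below handles a Markov chain: a balanced binary tree caches products of the chain's pairwise potential matrices, so updating one factor or querying a forward message touches only O(log n) tree nodes. This chain-only illustration is ours; the paper's cluster tree handles arbitrary trees via tree contraction.

    import numpy as np

    class ChainMarginals:
        """Balanced tree over the pairwise potentials of a chain x_1 - ... - x_n.
        Internal nodes cache matrix products, so a factor update or a prefix
        (forward-message) query costs O(log n) matrix multiplies."""
        def __init__(self, factors):                  # factors: list of (k x k) arrays
            self.n, self.size, self.k = len(factors), 1, factors[0].shape[0]
            while self.size < self.n:
                self.size *= 2
            self.tree = [np.eye(self.k) for _ in range(2 * self.size)]
            for i, f in enumerate(factors):
                self.tree[self.size + i] = np.asarray(f, dtype=float)
            for v in range(self.size - 1, 0, -1):
                self.tree[v] = self.tree[2 * v] @ self.tree[2 * v + 1]

        def update_factor(self, i, f):                # adapt to a changed factor
            v = self.size + i
            self.tree[v] = np.asarray(f, dtype=float)
            v //= 2
            while v >= 1:                             # recompute O(log n) ancestors
                self.tree[v] = self.tree[2 * v] @ self.tree[2 * v + 1]
                v //= 2

        def prefix(self, m):                          # product of factors 0..m-1
            left, right = [], []
            lo, hi = self.size, self.size + m
            while lo < hi:                            # iterative range product;
                if lo & 1: left.append(self.tree[lo]); lo += 1   # order preserved
                if hi & 1: hi -= 1; right.append(self.tree[hi])
                lo //= 2; hi //= 2
            out = np.eye(self.k)
            for M in left + right[::-1]:
                out = out @ M
            return out

        def forward_marginal(self, m, init):
            """Normalized forward message into node m (0-based) given a row
            vector over x_1; a symmetric suffix product would supply the
            backward message for the full marginal."""
            msg = init @ self.prefix(m)
            return msg / msg.sum()

    # Usage: an 8-node chain with 3-state variables.
    rng = np.random.default_rng(1)
    cm = ChainMarginals([rng.random((3, 3)) for _ in range(7)])
    before = cm.forward_marginal(5, np.ones(3))
    cm.update_factor(2, rng.random((3, 3)))           # dynamic change: O(log n)
    after = cm.forward_marginal(5, np.ones(3))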
We note that although tree-structured graphs provide a relatively restrictive class of models, junction
trees [14] can be used to extend some of our results to more general graphs. In particular, we can
still support changes to the parameters of the distribution (evidence and conditional relationships),
although changes to the underlying graph structure become more difficult. Additionally, a number
of more sophisticated graphical models require efficient inference over trees at their core, including learning mixtures of trees [12] and tree-reparameterized max-product [15]. Both these methods
involve repeatedly performing a message passing algorithm over a set of trees with changing parameters or evidence, making efficient updates and recomputations a significant issue.
Related Work. It is important to contrast our notion of adapting to dynamic updates to the graphical model (due to changes in the evidence, or alterations of the structure and distribution) with
the potentially more general definition of dynamic Bayes nets (DBNs) [14]. Specifically, a DBN typically refers to a Bayes net in which the variables have an explicit notion of time, and past observations have some influence on estimates about the present and future. Marginalizing over unobserved variables at time t−1 typically produces increased complexity in the model of variables
at time t. However, in both [6] and this work, the emphasis is on performing inference with current
information only, and efficiency is obtained by leveraging the similarity between the previous and
newly updated models.
Our work builds on previous work by Delcher, Grove, Kasif and Pearl [6]; they give an algorithm to
update Bayesian networks dynamically as the observed variables in the network change and compute belief queries of hidden variables in logarithmic time. The key difference between their work
and ours is that their algorithm only supports updates to observed evidence, and does not support dynamic changes to the graph structure (i.e., insertion/deletion of edges) or to conditional probabilities.
In many applications it is important to consider the effect of changes to conditional relationships between variables; for example, to study protein structure (see Sec. 5 for further discussion). In fact,
Delcher et al. cite structural updates to the given network as an open problem. Another difference
includes the use of tree contraction: they use tree contractions to answer queries and perform updates. We use tree contractions to construct a cluster tree, which we then use to perform queries and
all other updates (except for insertions/deletions). We provide an implementation and show that this
approach yields significant speedups.
Our approach to clustering factor graphs using tree contractions is based on previous results that
show that tree contractions can be updated in expected logarithmic time under certain dynamic
changes by using a general-purpose change-propagation algorithm [2]. The approach has also been
applied to a number of basic problems on trees [3] but has not been considered in the context of
statistical inference. The change-propagation approach used in this work has also been extended
to provide a general-purpose technique for updating computations under changes to their data and
applied to a number of applications (e.g. [1]).
2 Background
Graphical models provide a convenient formalism for describing the structure of a function g defined over a set of variables x1 , . . . , xn (most commonly a joint probability distribution over the
xi ). Graphical models use this structure to organize computations and create efficient algorithms
for many inference tasks over g, such as finding a maximum a-posteriori (MAP) configuration,
marginalization, or computing data likelihood. For the purposes of this paper, we assume that
each variable x_i takes on values from some finite set, denoted A_i. We write the operation of marginalizing over x_i as ∑_{x_i}, and let X_j represent an index-ordered subset of variables and f(X_j) a function defined over those variables, so that for example if X_j = {x_2, x_3, x_5}, then the function f(X_j) = f(x_2, x_3, x_5). We use X to indicate the index-ordered set of all {x_1, . . . , x_n}.
Factor Graphs. A factor graph [10] is one type of graphical model, similar to a Bayes net [14] or Markov random field [5], used to represent the factorization structure of a function g(x_1, . . . , x_n). In particular, suppose that g decomposes into a product of simpler functions, g(X) = ∏_j f_j(X_j),
for some collection of real-valued functions fj , called factors, whose arguments are (index-ordered)
sets X_j ⊆ X. A factor graph consists of a graph-theoretic abstraction of g's factorization, with
vertices of the graph representing variables xi and factors fj . Because of the close correspondence
between these quantities, we abuse notation slightly and use xi to indicate both the variable and its
associated vertex, and fj to indicate both the factor and its vertex.
Definition 2.1. A factor graph is a bipartite graph G = (X + F, E) where X = {x_1, x_2, . . . , x_n} is a set of variables, F = {f_1, f_2, . . . , f_m} is a set of factors, and E ⊆ X × F. A factor tree is a factor graph G where G is a tree. The neighbor set N(v) of a vertex v is the (index-ordered) set of vertices adjacent to vertex v. The graph G represents the function g(X) = ∏_j f_j(X_j) if, for each factor f_j, the arguments of f_j are its neighbors in G, i.e., N(f_j) = X_j.
Other types of graphical models, such as Bayes nets [14], can be easily converted into a factor graph representation. When the Bayes net is a polytree (a singly connected directed acyclic graph), the resulting factor graph is a factor tree.
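To make this structure concrete, the following minimal Python sketch stores a factor tree in the bipartite form of Definition 2.1; the class and field names are our own illustrative choices, not anything from the original implementation.

from dataclasses import dataclass, field

@dataclass
class FactorTree:
    variables: set = field(default_factory=set)   # the variables x_i
    factors: dict = field(default_factory=dict)   # name -> (args, table)
    edges: set = field(default_factory=set)       # (variable, factor) pairs

    def add_factor(self, name, args, table):
        # N(f) = X_j: a factor is adjacent to exactly its arguments
        self.factors[name] = (tuple(args), table)
        for x in args:
            self.variables.add(x)
            self.edges.add((x, name))

g = FactorTree()
g.add_factor("f1", ["x1"], [0.7, 0.3])
g.add_factor("f2", ["x1", "x2"], [[0.9, 0.1], [0.2, 0.8]])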
The Sum-Product Algorithm. The factorization of g(X) and its structure as represented by the graph G can be used to organize various computations about g(X) efficiently. For example, the marginals of g(X), defined for each i by g^i(x_i) = ∑_{X\{x_i}} g(X), can be computed using the sum-product algorithm.
Sum-product is best described in terms of messages sent between each pair of adjacent vertices in the factor graph. For every pair of neighboring vertices (x_i, f_j) ∈ E, the vertex x_i sends a message μ_{x_i→f_j} as soon as it receives the messages from all of its neighbors except for f_j, and similarly for the message from f_j to x_i. The messages between these vertices take the form of a real-valued function of the variable x_i; for discrete-valued x_i this can be represented as a vector of length |A_i|.
The message μ_{x_i→f_j} sent from a variable vertex x_i to a neighboring factor vertex f_j, and the message μ_{f_j→x_i} from factor f_j to variable x_i, are given by

    μ_{x_i→f_j}(x_i) = ∏_{f∈N(x_i)\{f_j}} μ_{f→x_i}(x_i),        μ_{f_j→x_i}(x_i) = ∑_{X_j\{x_i}} f_j(X_j) ∏_{x∈X_j\{x_i}} μ_{x→f_j}(x).

Once all the messages (2|E| in total) are sent, we can calculate the marginal g^i(x_i) by simply multiplying all the incoming messages, i.e., g^i(x_i) = ∏_{f∈N(x_i)} μ_{f→x_i}(x_i). Sum-product can be
thought of as selecting an efficient elimination ordering of variables (leaf to root) and marginalizing
in that order.
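As a concrete illustration of these message updates, the short Python (numpy) computation below runs sum-product on the two-variable factor tree x1–f1, x1–f2–x2 and checks the marginal against brute-force marginalization; the factor tables are invented purely for this example.

import numpy as np

# factor tables: f1(x1), f2(x1, x2), both over binary variables
f1 = np.array([0.7, 0.3])
f2 = np.array([[0.9, 0.1],
               [0.2, 0.8]])

# messages for the tree  x1 -- f1,  x1 -- f2 -- x2
mu_f1_to_x1 = f1                                  # leaf factor -> variable
mu_x2_to_f2 = np.ones(2)                          # leaf variable -> factor
mu_f2_to_x1 = (f2 * mu_x2_to_f2[None, :]).sum(1)  # sum over x2
g_x1 = mu_f1_to_x1 * mu_f2_to_x1                  # unnormalized marginal g^1(x1)

# check against brute-force marginalization of g(x1,x2) = f1(x1) f2(x1,x2)
g_full = f1[:, None] * f2
assert np.allclose(g_x1, g_full.sum(1))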
Other Inferences. Although in this paper we focus on marginal computations using sum-product, similar message passing operations can be generalized to other tasks. For example, the operations of sum-product can be used to compute the data likelihood of any observed evidence; such computations are an inherent part of learning and model comparisons (e.g., [12]). More generally, similar algorithms can be defined to compute functions over any semi-ring possessing the distributive property [11]. Most commonly, the max operation produces a dynamic programming algorithm ("max-product") to compute joint MAP configurations [15].
[Figure 1: four panels showing Rounds 1–4 of clustering the example factor graph.]
Figure 1: Clustering a factor graph with rake, compress, finalize operations.
3 Constructing the Cluster Tree
In this section, we describe an algorithm for constructing a balanced representation of the input
graphical model, which we call a cluster tree. Given the input graphical model, we first apply a clustering algorithm that hierarchically clusters the graphical model, and then apply a labeling algorithm
that labels the clusters with cluster functions that can be used to compute marginal queries.
Clustering Algorithm. Given a factor graph as input, we first tag each node v with a unary cluster
that consists of v and each edge (u, v) with a binary cluster that consists of the edge (u, v). We then
cluster the tree hierarchically by applying the rake, compress, and finalize operations. When applied
to a leaf node v with neighbor u, the rake operation deletes v and the edge (u, v), and forms a unary cluster by combining the clusters which tag either v or (u, v); u is tagged with the resulting
cluster. When applied to a degree-two node v with neighbors u and w, a compress operation deletes
v and the edges (u, v) and (v, w), inserts the edge (u, w), and forms a binary cluster by combining
the clusters which tag the deleted node and edges; (u, w) is then tagged with the resulting cluster.
A finalize operation is applied when the tree consists of a single node (when no edges remain); it
constructs a final cluster that consists of all the clusters with which the final node is tagged.
We cluster a tree T by applying rake
and compress operations in rounds. Each
round consists of the following two steps
until no more edges remain: (1) Apply the
rake operation to each leaf; (2) Apply the
compress operation to an independent set
of degree-two nodes. We choose a random independent set: we flip a coin for
each node in each round and apply compress to a degree-two node only if it flips
heads and its two neighbors flip tails. This
ensures that no two adjacent nodes apply
compress simultaneously. When all edges
are deleted, we complete the clustering by
applying the finalize operation.
Figure 2: A cluster tree.
Fig. 1 shows a four-round clustering of a
factor graph and Fig. 2 shows the corresponding cluster tree. In round 1, nodes f1 , f2 , f5 are raked and f4 is compressed. In round 2,
x1 , x2 , x4 are raked. In round 3, f3 is raked. A finalize operation is applied in round 4 to produce
the final cluster. The leaves of the cluster tree correspond to the nodes and the edges of the factor
graph. Each internal node v̂ corresponds to a unary or binary cluster formed by deleting v. The children of an internal node are the edges and the nodes deleted during the operation that forms the cluster. For example, the cluster f̂1 is formed by the rake operation applied to f1 in round 1. The children of f̂1 are node f1 and edge (f1, x1), which are deleted during that operation.
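The sketch below implements one rake/compress round on an adjacency-dict representation of an undirected tree; cluster bookkeeping is omitted, and all names are our own. The coin flips select an independent set of degree-two nodes for compress, exactly as described above.

import random

def contract_round(adj):
    # adj: {node: set of neighbors}; modified in place
    # (1) rake every leaf
    for v in [u for u in adj if len(adj[u]) == 1]:
        if len(adj[v]) != 1:        # neighbor may already have been raked
            continue
        (u,) = adj[v]
        adj[u].discard(v)
        del adj[v]
    # (2) compress an independent set of degree-two nodes
    flips = {u: random.random() < 0.5 for u in adj}
    for v in [u for u in adj if len(adj[u]) == 2]:
        u, w = adj[v]
        if flips[v] and not flips[u] and not flips[w]:
            adj[u].discard(v); adj[w].discard(v)
            adj[u].add(w); adj[w].add(u)
            del adj[v]

adj = {v: set() for v in "abcde"}
for u, v in [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")]:
    adj[u].add(v); adj[v].add(u)
while any(adj.values()):            # repeat rounds until no edges remain
    contract_round(adj)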
Labeling Algorithm. After building the cluster tree, we compute cluster functions along with a notion of orientation for neighboring clusters in a second pass, which we call labeling.¹ The cluster function at a node v̂ in the tree is computed recursively using the cluster functions at v̂'s child clusters, which we denote S_v̂ = {v̂_1, . . . , v̂_k}. Intuitively, each cluster function corresponds to a partial marginalization of the factors contained in cluster v̂.
Since each cluster function is defined over a subset of the variables in the original graph, we require some additional notation to represent these sets. Specifically, for a cluster v̂, let A(v̂) be the arguments of its cluster function, and let V(v̂) be the set of all arguments of its children, V(v̂) = ∪_i A(v̂_i). In a slight abuse of notation, we let A(v) be the arguments of the node v in the original graph, so that if v is a variable node A(v) = v and if v is a factor node A(v) = N(v).
The cluster functions c_v̂(·) and their arguments are then defined recursively, as follows. For the base case, the leaf nodes of the cluster tree correspond to nodes v in the original graph, and we define c_v using the original variables and factors. If v is a factor node f_j, we take c_v(A(v)) = f_j(X_j), and if v is a variable node x_i, A(v) = x_i and c_v = 1. For nodes of the cluster tree corresponding to edges (u, v) of the original graph, we simply take A(u, v) = ∅ and c_{u,v} = 1.
The cluster function at an internal node of the cluster tree is then given by combining the cluster functions of its children and marginalizing over as many variables as possible. Let v̂ be the internal node corresponding to the removal of v in the original graph. If v̂ is a binary cluster (u, w), that is, at v's removal it had two neighbors u and w, then c_v̂ is given by

    c_v̂(A(v̂)) = ∑_{V(v̂)\A(v̂)} ∏_{v̂_i∈S_v̂} c_{v̂_i}(A(v̂_i)),

where the arguments A(v̂) = V(v̂) ∩ (A(u) ∪ A(w)). For a unary cluster v̂, where v had a single neighbor u at its removal, c_v̂(·) is calculated in the same way with A(w) = ∅.
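As a brute-force sanity check of this recursion, the Python sketch below combines child cluster functions (stored as dicts from assignment tuples to values over binary domains) and sums out V(v̂)\A(v̂); every name here is an invented illustration, not the paper's data structure.

from itertools import product as iproduct
from collections import defaultdict

def combine(children, keep):
    # children: list of (args, table); args is a tuple of variable names,
    # table maps value tuples over args to floats
    all_vars = sorted({x for args, _ in children for x in args})
    out = defaultdict(float)
    for vals in iproduct([0, 1], repeat=len(all_vars)):
        env = dict(zip(all_vars, vals))
        p = 1.0
        for args, table in children:
            p *= table[tuple(env[x] for x in args)]
        out[tuple(env[x] for x in keep)] += p
    return dict(out)

# a binary cluster with children f(x1, x2) and c(x2, x3), keeping A = (x1, x3)
f = {(a, b): [[0.9, 0.1], [0.2, 0.8]][a][b] for a in (0, 1) for b in (0, 1)}
c = {(b, d): 1.0 for b in (0, 1) for d in (0, 1)}
print(combine([(("x1", "x2"), f), (("x2", "x3"), c)], keep=("x1", "x3")))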
We also compute an orientation for each cluster's neighbors based on their proximity to the cluster tree's root. This is also calculated recursively using the orientations of each node's ancestors. For a unary cluster v̂ with parent û in the cluster tree, we define in(v̂) = û. For a binary cluster v̂ with neighbors u, w at v's removal, define in(v̂) = ŵ and out(v̂) = û if ŵ = in(û); otherwise in(v̂) = û and out(v̂) = ŵ.
We now describe the efficiency of our clustering and labeling algorithms and show that the resulting
cluster tree is linear in the size of the input factor graph.
Theorem 1 (Hierarchical Clustering). A factor tree of n nodes with maximum degree k can be clustered and labeled in expected O(d^{k+2} n) time, where d is the domain size of each variable in the factor tree. The resulting cluster tree has exactly 2n − 1 leaves and n internal clusters (nodes) and expected O(log n) depth, where the expectation is taken over internal randomization (over the coin flips). Furthermore, the cluster tree has the following properties: (1) each cluster has at most k + 1 children, and (2) if v̂ = (u, w) is a binary cluster, then û and ŵ are ancestors of v̂, and one of them is the parent of v̂.
Proof. Consider first the construction of the cluster tree. The time and depth bounds follow from previous work [2]. The bound on the number of nodes holds because the leaves of the cluster tree correspond to the n − 1 edges and n nodes of the factor graph. To see that each cluster has at most k + 1 children, note that a rake or compress operation deletes one node and the at most k edges incident on that node. Every edge appearing in any level of the tree contraction algorithm is represented as a binary cluster v̂ = (u, w) in the cluster tree. Whenever an edge is removed, one of the nodes incident to that edge, say u, is also removed, making û the parent of v̂. The fact that ŵ is also an ancestor of v̂ follows from an induction argument on the levels.
Consider the labeling step. By inspection of the labeling algorithm, the computation of the arguments A(·) and V(·) requires O(k) time. To bound the time for computing a cluster function, observe that A(v̂) is always a singleton set if v̂ is a unary cluster, and A(v̂) has at most two variables if v̂ is a binary cluster. Therefore, |V(v̂)| ≤ k + 2. The number of operations required to compute the cluster function at v̂ is bounded by O(|S_v̂| d^{|V(v̂)|}), where S_v̂ are the children of v̂. Since each cluster can appear only once as a child, ∑|S_v̂| is O(n) and thus the labeling step takes O(d^{k+2} n) time. Although the running time may appear large, note that the representation of the factor graph takes O(d^k n) space if functions associated with factors are given explicitly.
¹Although presented here as a separate labeling operation, the cluster functions can alternatively be computed at the time of the rake or compress operation, as they depend only on the children of v̂, and the orientations can be computed during the query operation, since they depend only on the ancestors of v̂.
4 Queries and Dynamic Changes
We give algorithms for computing marginal queries on the cluster trees and restructuring the cluster
tree with respect to changes in the underlying graphical model. For all of these operations, our
algorithms require expected logarithmic time in the size of the graphical model.
Queries. We answer marginal queries at a vertex v of the graphical model by using the cluster tree. At a high level, the idea is to find the leaf of the cluster tree corresponding to v and compute the messages along the path from the root of the cluster tree to v. Using the orientations computed
during the tagging pass, for each cluster v̂ we define the following messages:

    m_{û→v̂} = ∑_{V(û)\A(v̂)} m_{in(û)→û} ∏_{û_i∈S_û\{v̂}} c_{û_i}(A(û_i)),    if û = in(v̂),
    m_{û→v̂} = ∑_{V(û)\A(v̂)} m_{out(û)→û} ∏_{û_i∈S_û\{v̂}} c_{û_i}(A(û_i)),   if û = out(v̂),

where S_û is the set of the children of û. Note that for unary clusters, out(·) is undefined; we define the message in this case to be 1.
Theorem 2 (Query). Given a factor tree with n nodes, maximum degree k, domain size d, and its cluster tree, the marginal at any x_i can be computed in O(k d^{k+2} log n) time with the formula

    g^i(x_i) = m_{out(x̂_i)→x̂_i} · m_{in(x̂_i)→x̂_i} · ∑_{V(x̂_i)\{x_i}} ∏_{v̂_i∈S_{x̂_i}} c_{v̂_i}(A(v̂_i)),

where S_{x̂_i} is the set of children of x̂_i.
Messages are computed only at the ancestors of the query node x_i and downward along the path to x_i; there are at most O(log n) nodes in this path by Theorem 1. Computing each message requires at most O(k d^{k+2}) time, and thus any marginal query takes O(k d^{k+2} log n) time.
Dynamic Updates. Given a factor graph and its cluster tree, we can change the function of a factor
and update the cluster tree by starting at the leaf of the cluster tree that corresponds to the factor and
relabeling all the clusters on the path to the root. Updating these labels suffices, because the label of
a cluster is a function of its children only. Since relabeling a cluster takes O(k d^{k+2}) time and the cluster tree has expected O(log n) depth, any update requires O(k d^{k+2} log n) time.
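A sketch of this update path in Python follows; the Node fields and the placeholder combine step are our own inventions (the real recompute performs the marginalization of Sec. 3).

class Node:
    def __init__(self, parent=None):
        self.parent, self.children, self.fn = parent, [], 0.0
    def recompute(self):
        # placeholder for combining the children's cluster functions
        self.fn = sum(c.fn for c in self.children)

def update_factor(leaf, new_fn):
    leaf.fn = new_fn                 # overwrite the factor at the leaf
    node = leaf.parent
    while node is not None:          # expected O(log n) cluster relabels
        node.recompute()
        node = node.parent

root = Node(); mid = Node(root); leaf = Node(mid)
root.children, mid.children = [mid], [leaf]
update_factor(leaf, 2.0)
assert root.fn == 2.0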
To allow changes to the factor graph itself by insertion/deletion of edges, we maintain a forest of
factor trees and the corresponding cluster trees (obtained by clustering the trees one by one). We
also maintain the sequence of operations used to construct each cluster tree, i.e., a data structure
which represents the state of the clustering at each round. Note that this structure is also size O(n),
since at each round a constant fraction of nodes are removed (raked or compressed) in expectation.
Suppose now that the user inserts an edge that connects two trees, or deletes an edge connecting two
subtrees. It turns out that both operations have only a limited effect on the sequence of clustering
operations performed during construction, affecting only a constant number of nodes at each round
of the process. Using a general-purpose change propagation technique (detailed in previous work [2,
1]) the necessary alterations can be made to the cluster tree in expected O(log n) time. Change
propagation gives us a new cluster tree that corresponds to the cluster tree that we would have
obtained by re-clustering from scratch, conditioned on the same internal randomization process.
In addition to changing the structure of the cluster tree via change propagation, we must also change
the labeling information (cluster functions and orientation) of the affected nodes, which can be done
using the same process described in Sec. 3. It is a property of the tree contraction process that all such
affected clusters form a subtree of the cluster tree that includes the root. Since change propagation
affects an expected O(log n) clusters, and since each cluster can be labeled in O(k d^{k+2}) time, the new labels can be computed in O(k d^{k+2} log n) time.
For dynamic updates, we thus have the following theorem.
Theorem 3 (Dynamic Updates). For a factor forest F of n vertices with maximum degree k and domain size d, the forest of cluster trees can be updated in expected O(k d^{k+2} log n) time under edge insertions/deletions and changes to factors.
5 Implementation and Experimental Results
We have implemented our algorithm in Matlab² and compared its performance against the standard two-pass sum-product algorithm (used to recompute marginals after dynamic changes). Fig. 3 shows the results of a simulation experiment in which we considered randomly generated factor trees with between 100 and 1000 nodes, with each variable having 5^1, 5^2, or 5^3 states, so that each factor has size between 5^2 and 5^6. These factor trees correspond roughly to the junction trees of models with between 200 and 6000 nodes, where each node has up to 5 states. Our results show that the time
required to build the cluster tree is comparable to one run of sum-product. Furthermore, the query
and update operations in the cluster tree incur relatively small constant factors in their asymptotic
running time, and are between one to two orders of magnitude faster than recomputing from scratch.
A particularly compelling application area, and one of the original motivations for developing our algorithm, is in the analysis of protein structure. Graphical models constructed from protein structures
have recently been used to successfully predict structural properties [17] as well as free energy [9].
These models are typically constructed by taking each node as an amino acid whose states represent
their most common conformations, known as rotamers [7], and basing conditional probabilities on
proximity, and a physical energy function (e.g., [16]) and/or empirical data [4].
Our algorithm is a natural choice for problems where various aspects of protein structure are allowed
to change. One such application is computational mutagenesis, in which functional amino acids in
a protein structure are identified by examining systematic amino acid mutations in the protein structure (i.e., to characterize when a protein "loses" function). In this setting, performing updates to the model (i.e., mutations) and queries (i.e., the free energy or maximum-likelihood set of rotameric states) to determine the effect of updates would likely be far more efficient than standard methods.
Thus, our algorithm has the potential to substantially speed up computational studies that examine
each of a large number local changes to protein structure, such as in the study of protein flexibility
and dynamics. Interestingly, [6] actually give a sample application in computational biology, although their model is a simple sequence-based HMM in which they consider the effect of changing
observed sequence on secondary structure only.
The simulation results given in Fig. 3 validate the use of our algorithm for these applications, since
protein-structure based graphical models have similar complexity to the inputs we consider: proteins
range in size from hundreds to thousands of amino acids, and each amino acid typically has relatively
few rotameric states and local interactions. As in prior work [17], our simulation considers a small
number of rotamers per amino acid, but the one to two order-of-magnitude speedups obtained by
our algorithm indicate that it may be possible also to handle higher-resolution models (e.g., with
more rotamer states, or degrees of freedom in the protein backbone).
6 Conclusion
We give an algorithm for adaptive inference in dynamically changing tree-structured Bayesian networks. Given n nodes in the network, our algorithm can support updates to the observed evidence,
conditional probability distributions, as well as updates to the network structure (as long as they
keep the network tree-structured) with O(n) preprocessing time and O(log n) time for queries on
any marginal distribution. Our algorithm can easily be modified to maintain MAP estimates as well
as compute data likelihoods dynamically, with the same time bounds. We implement the algorithm
and show that it can speed up Bayesian inference by orders of magnitude after small updates to
the network. Applying our algorithm on the junction tree representation of a graph yields an inference algorithm that can handle updates on conditional distributions and observed evidence in
general Bayesian networks (e.g., with cycles); an interesting open question is whether updates to the
network structure (i.e., edge insertions/deletions) can also be supported.
²Available for download at http://www.ics.uci.edu/~ihler/code/.
[Figure 3 plot: run time (sec) versus number of nodes on log-log axes, with curves for naive sum-product, build, query, update, and restructure.]
Figure 3: Log-log plot of run time for naive sum-product, building the cluster tree, computing
queries, updating factors, and restructuring (adding and deleting edges). Although building the cluster tree is slightly more expensive than sum-product, each subsequent update and query is between
10 and 100 times more efficient than recomputing from scratch.
References
[1] Umut A. Acar, Guy E. Blelloch, Matthias Blume, and Kanat Tangwongsan. An experimental analysis of self-adjusting computation. In Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), 2006.
[2] Umut A. Acar, Guy E. Blelloch, Robert Harper, Jorge L. Vittes, and Maverick Woo. Dynamizing static algorithms with applications to dynamic trees and history independence. In ACM-SIAM Symposium on Discrete Algorithms (SODA), 2004.
[3] Umut A. Acar, Guy E. Blelloch, and Jorge L. Vittes. An experimental analysis of change propagation in dynamic trees. In Workshop on Algorithm Engineering and Experimentation (ALENEX), 2005.
[4] H. M. Berman, J. Westbrook, Z. Feng, G. Gilliland, T. N. Bhat, H. Weissig, I. N. Shindyalov, and P. E. Bourne. The protein data bank. Nucl. Acids Res., 28:235–242, 2000.
[5] P. Clifford. Markov random fields in statistics. In G. R. Grimmett and D. J. A. Welsh, editors, Disorder in Physical Systems, pages 19–32. Oxford University Press, Oxford, 1990.
[6] A. L. Delcher, A. J. Grove, S. Kasif, and J. Pearl. Logarithmic-time updates and queries in probabilistic networks. Journal of Artificial Intelligence Research, 4:37–59, 1995.
[7] R. L. Dunbrack Jr. Rotamer libraries in the 21st century. Curr Opin Struct Biol, 12(4):431–440, 2002.
[8] M. I. Jordan. Graphical models. Statistical Science, 19:140–155, 2004.
[9] H. Kamisetty, E. P. Xing, and C. J. Langmead. Free energy estimates of all-atom protein structures using generalized belief propagation. In Proceedings of the 11th Annual International Conference on Research in Computational Molecular Biology, 2007. To appear.
[10] F. Kschischang, B. Frey, and H.-A. Loeliger. Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory, 47:498–519, 2001.
[11] R. McEliece and S. M. Aji. The generalized distributive law. IEEE Transactions on Information Theory, 46(2):325–343, March 2000.
[12] Marina Meilă and Michael I. Jordan. Learning with mixtures of trees. Journal of Machine Learning Research, 1(1):1–48, October 2000.
[13] Gary L. Miller and John H. Reif. Parallel tree contraction and its application. In Proceedings of the 26th Annual IEEE Symposium on Foundations of Computer Science, pages 487–489, 1985.
[14] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Francisco, 1988.
[15] M. J. Wainwright, T. Jaakkola, and A. S. Willsky. Tree consistency and bounds on the performance of the max-product algorithm and its generalizations. Statistics and Computing, 14:143–166, April 2004.
[16] S. J. Weiner, P. A. Kollman, D. A. Case, U. C. Singh, G. Alagona, S. Profeta Jr., and P. Weiner. A new force field for the molecular mechanical simulation of nucleic acids and proteins. J. Am. Chem. Soc., 106:765–784, 1984.
[17] C. Yanover and Y. Weiss. Approximate inference and protein folding. In Proceedings of Neural Information Processing Systems Conference, pages 84–86, 2002.
Neural characterization in partially observed
populations of spiking neurons
Jonathan W. Pillow
Peter Latham
Gatsby Computational Neuroscience Unit, UCL
17 Queen Square, London WC1N 3AR, UK
[email protected]
[email protected]
Abstract
Point process encoding models provide powerful statistical methods for understanding the responses of neurons to sensory stimuli. Although these models have
been successfully applied to neurons in the early sensory pathway, they have fared
less well capturing the response properties of neurons in deeper brain areas, owing in part to the fact that they do not take into account multiple stages of processing. Here we introduce a new twist on the point-process modeling approach:
we include unobserved as well as observed spiking neurons in a joint encoding
model. The resulting model exhibits richer dynamics and more highly nonlinear
response properties, making it more powerful and more flexible for fitting neural
data. More importantly, it allows us to estimate connectivity patterns among neurons (both observed and unobserved), and may provide insight into how networks
process sensory input. We formulate the estimation procedure using variational
EM and the wake-sleep algorithm, and illustrate the model?s performance using a
simulated example network consisting of two coupled neurons.
1 Introduction
A central goal of computational neuroscience is to understand how the brain transforms sensory
input into spike trains, and considerable effort has focused on the development of statistical models
that can describe this transformation. One of the most successful of these is the linear-nonlinear-Poisson (LNP) cascade model, which describes a cell's response in terms of a linear filter (or receptive field), an output nonlinearity, and an instantaneous spiking point process [1–5]. Recent efforts have generalized this model to incorporate spike-history and multi-neuronal dependencies, which greatly enhances the model's flexibility, allowing it to capture non-Poisson spiking statistics and joint responses of an entire population of neurons [6–10].
Point process models accurately describe the spiking responses of neurons in the early visual pathway to light, and of cortical neurons to injected currents. However, they perform poorly both in
higher visual areas and in auditory cortex, and often do not generalize well to stimuli whose statistics differ from those used for fitting. Such failings are in some ways not surprising: the cascade
model's stimulus sensitivity is described with a single linear filter, whereas responses in the brain
reflect multiple stages of nonlinear processing, adaptation on multiple timescales, and recurrent
feedback from higher-level areas. However, given its mathematical tractability and its accuracy in
capturing the input-output properties of single neurons, the model provides a useful building block
for constructing richer and more complex models of neural population responses.
Here we extend the point-process modeling framework to incorporate a set of unobserved or "hidden" neurons, whose spike trains are unknown and treated as hidden or latent variables. The unobserved neurons respond to the stimulus and to synaptic inputs from other neurons, and their spiking
activity can in turn affect the responses of the observed neurons. Consequently, their functional
properties and connectivity can be inferred from data [11–18]. However, the idea is not to simply
build a more powerful statistical model, but to develop a model that can help us learn something
about the underlying structure of networks deep in the brain.
Although this expanded model offers considerably greater flexibility in describing an observed set
of neural responses, it is more difficult to fit to data. Computing the likelihood of an observed set
of spike trains requires integrating out the probability distribution over hidden activity, and we need
sophisticated algorithms to find the maximum likelihood estimate of model parameters. Here we
introduce a pair of estimation procedures based on variational EM (expectation maximization) and
the wake-sleep algorithm. Both algorithms make use of a novel proposal density to capture the
dependence of hidden spikes on the observed spike trains, which allows for fast sampling of hidden
neurons? activity. In the remainder of this paper we derive the basic formalism and demonstrate
its utility on a toy problem consisting of two neurons, one of which is observed and one which
is designated ?hidden?. We show that a single-cell model used to characterize the observed neuron
performs poorly, while a coupled two-cell model estimated using the wake-sleep algorithm performs
much more accurately.
2 Multi-neuronal point-process encoding model
We begin with a description of the encoding model, which generalizes the LNP model to incorporate
non-Poisson spiking and coupling between neurons. We refer to this as a generalized linear point-process (glpp) model¹ [8, 9]. For simplicity, we formulate the model for a pair of neurons, although it can be tractably applied to data from a moderate-sized population (roughly 10–100 neurons). In this
section we do not distinguish between observed and unobserved spikes, but will do so in the next.
Let x_t denote the stimulus at time t, and y_t and z_t denote the number of spikes elicited by two neurons at t, where t ∈ [0, T] is an index over time. Note that x_t is a vector containing all elements of the stimulus that are causally related to the (scalar) responses y_t and z_t at time t. Furthermore, let us assume t takes on a discrete set of values, with bin size Δ, i.e., t ∈ {0, Δ, 2Δ, . . . , T}. Typically Δ is sufficiently small that we observe only zero or one spike in every bin: y_t, z_t ∈ {0, 1}.
The conditional intensity (or instantaneous spike rate) of each cell depends on both the stimulus and the recent spiking history via a bank of linear filters. Let y_[t−τ,t) and z_[t−τ,t) denote the (vector) spike train histories at time t. Here [t − τ, t) refers to times between t − τ and t − Δ, so y_[t−τ,t) ≡ (y_{t−τ}, y_{t−τ+Δ}, . . . , y_{t−2Δ}, y_{t−Δ}), and similarly for z_[t−τ,t). The conditional intensities for the two cells are then given by

    λ_{y_t} = f(k_y · x_t + h_yy · y_[t−τ,t) + h_yz · z_[t−τ,t))
    λ_{z_t} = f(k_z · x_t + h_zz · z_[t−τ,t) + h_zy · y_[t−τ,t))        (1)
where k_y and k_z are linear filters representing each cell's receptive field, h_yy and h_zz are filters operating on each cell's own spike-train history (capturing effects like refractoriness and bursting), and h_zy and h_yz are filters coupling the spike train history of each neuron to the other (allowing the model to capture statistical correlations and functional interactions between neurons). The "·" notation represents the standard dot product (performing a summation over either index or time):

    k · x_t ≡ ∑_i k_i x_{it},        h · y_[t−τ,t) ≡ ∑_{t′=t−τ}^{t−Δ} h_{t′} y_{t′},
where the index i runs over the components of the stimulus (which typically are time points extending into the past). The second expression generalizes to h · z_[t−τ,t).
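As a concrete (and entirely invented) illustration of Eq. 1, the Python sketch below simulates the coupled model with an exponential nonlinearity and Bernoulli spiking per small bin, a common discrete approximation to the point process; all filter shapes, sizes, and values are assumptions made for this example only.

import numpy as np

rng = np.random.default_rng(0)
T, D, H, dt = 1000, 5, 20, 0.001       # time bins, stim dims, history bins
b = 3.0                                # baseline log-rate (assumed)
X = rng.normal(size=(T, D))            # stimulus x_t, one row per bin
ky, kz = rng.normal(size=D), rng.normal(size=D)
hyy = hzz = -np.linspace(0.0, 3.0, H)  # self-history: refractory suppression
hyz = np.linspace(0.0, 2.0, H)         # coupling from hidden z into y (Eq. 1)
hzy = np.zeros(H)                      # no coupling from y into z

y, z = np.zeros(T), np.zeros(T)
for t in range(T):                     # must run forward in time (causal)
    yh, zh = y[max(0, t - H):t], z[max(0, t - H):t]
    lam_y = np.exp(b + ky @ X[t] + hyy[H - len(yh):] @ yh + hyz[H - len(zh):] @ zh)
    lam_z = np.exp(b + kz @ X[t] + hzz[H - len(zh):] @ zh + hzy[H - len(yh):] @ yh)
    y[t] = rng.random() < 1 - np.exp(-lam_y * dt)   # P(spike) in a bin
    z[t] = rng.random() < 1 - np.exp(-lam_z * dt)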
The nonlinear function, f, maps the input to the instantaneous spike rate of each cell. We assume here that f is exponential, although any monotonic convex function that grows no faster than exponentially is suitable [9].
¹We adapt this terminology from "generalized linear model" (glm), a much more general class of models from the statistics literature [19]; this model is a glm whose distribution function is Poisson.
[Figure 1 panels: a, point-process model (stimulus filter, post-spike filter, coupling filters, exponential nonlinearity, stochastic spiking); b, equivalent model diagram for neuron y; c, causal structure.]
Figure 1: Schematic of generalized linear point-process (glpp) encoding model. a, Diagram of model
parameters for a pair of coupled neurons. For each cell, the parameters consist of a stimulus filter
(e.g., ky ), a spike-train history filter (hyy ), and a filter capturing coupling from the spike train history
of the other cell (hzy ). The filter outputs are summed, pass through an exponential nonlinearity, and
drive spiking via an instantaneous point process. b, Equivalent diagram showing just the parameters
of the neuron y, as used for drawing a sample yt . Gray boxes highlight the stimulus vector xt and
spike train history vectors that form the input to the model on this time step. c, Simplified graphical
model of the glpp causal structure, which allows us to visualize how the likelihood factorizes. Arrows
between variables indicate conditional dependence. For visual clarity, temporal dependence is depicted
as extending only two time bins, though in real data extends over many more. Red arrows highlight the
dependency structure for a single time bin of the response y3 .
Equation 1 is equivalent to f applied to a linear convolution of the stimulus and spike trains with their respective filters; a schematic is shown in Figure 1.
The probability of observing y_t spikes in a bin of size Δ is given by a Poisson distribution with rate parameter λ_{y_t}Δ,

    P(y_t | λ_{y_t}) = ((λ_{y_t}Δ)^{y_t} / y_t!) e^{−λ_{y_t}Δ},        (2)
and likewise for P(z_t | λ_{z_t}). The likelihood of the full set of spike times is the product of conditionally independent terms,

    P(Y, Z | X, θ) = ∏_t P(y_t | λ_{y_t}) P(z_t | λ_{z_t}),        (3)
where Y and Z represent the full spike trains, X denotes the full set of stimuli, and θ ≡ {k_y, k_z, h_yy, h_zy, h_zz, h_yz} denotes the model parameters. This factorization is possible because λ_{y_t} and λ_{z_t} depend only on the process history up to time t, making y_t and z_t conditionally independent given the stimulus and spike histories up to t (see Fig. 1c). If the response at time t were to depend on both the past and future response, we would have a causal loop, preventing factorization and making both sampling and likelihood evaluation very difficult.
The model parameters can be tractably fit to spike-train data using maximum likelihood. Although
the parameter space may be high-dimensional (incorporating spike-history dependence over many
time bins and stimulus dependence over a large region of time and space), the negative log-likelihood
is convex with respect to the model parameters, making fast convex optimization methods feasible
for finding the global maximum [9]. We can write the log-likelihood simply as

    log P(Y, Z | X, θ) = ∑_t (y_t log λ_{y_t} + z_t log λ_{z_t} − Δλ_{y_t} − Δλ_{z_t}) + c,        (4)

where c is a constant that does not depend on θ.
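For a single cell, the corresponding negative log-likelihood can be written in a few lines of Python; the design-matrix formulation and names are our own illustrative choices, with constants (e.g., log y_t!) absorbed into c as above.

import numpy as np

def neg_log_lik(k, Xd, y, dt):
    # Xd: (T, D) rows of stimulus and spike-history covariates per bin,
    # k: (D,) stacked filter weights, y: (T,) spike counts
    log_lam = Xd @ k                     # log conditional intensity
    return -(y @ log_lam - dt * np.exp(log_lam).sum())

# convex in k, so a generic optimizer suffices, e.g.
# scipy.optimize.minimize(neg_log_lik, np.zeros(D), args=(Xd, y, dt))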
3 Generalized Expectation-Maximization and Wake-Sleep
Maximizing log P(Y, Z | θ) is straightforward if both Y and Z are observed, but here we are interested in the case where Y is observed and Z is "hidden". Consequently, we have to average
over Z. The log-likelihood of the observed data is given by

    L(θ) ≡ log P(Y | θ) = log ∑_Z P(Y, Z | θ),        (5)
where we have dropped X to simplify notation (all probabilities can henceforth be taken to also
depend on X). This sum over Z is intractable in many settings, motivating the use of approximate
methods for maximizing likelihood. Variational expectation-maximization (EM) [20, 21] and the
wake-sleep algorithm [22] are iterative algorithms for solving this problem by introducing a tractable
approximation to the conditional probability over hidden variables,
    Q(Z | Y, φ) ≈ P(Z | Y, θ),        (6)

where φ denotes the parameter vector determining Q.
The idea behind variational EM can be described as follows. Concavity of the log implies a lower bound on the log-likelihood:

    L(θ) ≥ ∑_Z Q(Z | Y, φ) log [P(Y, Z | θ) / Q(Z | Y, φ)]
         = log P(Y | θ) − D_KL(Q(Z | Y, φ), P(Z | Y, θ)),        (7)

where Q is any probability distribution over Z and D_KL is the Kullback-Leibler (KL) divergence between Q and P (using P as shorthand for P(Z | Y, θ)), which is always ≥ 0. In standard EM, Q takes the same functional form as P, so that by setting φ = θ (the E-step), D_KL is 0 and the bound is tight, since the right-hand side of Eq. 7 equals L(θ). Fixing φ, we then maximize the r.h.s. for θ (the M-step), which is equivalent to maximizing the expected complete-data log-likelihood (expectation taken w.r.t. Q), given by

    E_{Q(Z|Y,φ)}[log P(Y, Z | θ)] ≡ ∑_Z Q(Z | Y, φ) log P(Y, Z | θ).        (8)
Each step increases a lower bound on the log-likelihood, which can always be made tight, so the algorithm converges to a fixed point that is a maximum of L(θ). The variational formulation differs in allowing Q to take a different functional form than P (i.e., one for which Eq. 8 is easier to maximize). The variational E-step involves minimizing D_KL(Q, P) with respect to φ, which remains positive if Q does not approximate P exactly; the variational M-step is unchanged from the standard algorithm.
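The bound itself is easy to estimate by sampling from Q, as the toy Python check below illustrates with a Gaussian stand-in for the spiking models; the densities and numbers here are invented purely for illustration.

import numpy as np
rng = np.random.default_rng(3)

# joint: z ~ N(0,1), y|z ~ N(z,1); recognition model Q(z|y) = N(y/2, 1)
log_p = lambda y, z: -np.log(2 * np.pi) - 0.5 * (z**2 + (y - z)**2)
log_q = lambda z, y: -0.5 * np.log(2 * np.pi) - 0.5 * (z - y / 2)**2

y = 1.0
zs = y / 2 + rng.normal(size=2000)            # z ~ Q(z|y)
bound = np.mean(log_p(y, zs) - log_q(zs, y))  # Monte Carlo lower bound
exact = -0.5 * np.log(4 * np.pi) - y**2 / 4   # log P(y) for this toy model
assert bound <= exact + 0.05                  # Eq. 7 holds, up to MC noise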
In certain cases it is easier to minimize the KL divergence D_KL(P, Q) than D_KL(Q, P), and doing so in place of the variational E-step above results in the wake-sleep algorithm [22]. In this algorithm, we fit φ by minimizing D_KL(P, Q) averaged over Y, which is equivalent to maximizing the expectation

    E_{P(Y,Z|θ)}[log Q(Z | Y, φ)] ≡ ∑_{Y,Z} P(Y, Z | θ) log Q(Z | Y, φ),        (9)
which bears an obvious symmetry to Eq. 8. Thus, both steps of the wake-sleep algorithm involve maximizing an expected log-probability. In the "wake" step (identical to the M-step), we fit the true model parameters θ by maximizing (an approximation to) the log-probability of the observed data Y. In the "sleep" step, we fit φ by trying to find a distribution Q that best approximates the conditional dependence of Z on Y, averaged over the joint distribution P(Y, Z | θ). We can therefore think of the wake phase as learning a model of the data (parametrized by θ), and the sleep phase as learning a consistent internal description of that model (parametrized by φ).
Both variational-EM and the wake-sleep algorithm work well when Q closely approximates P , but
may fail to converge to a maximum of the likelihood if there is a significant mismatch. Therefore,
the efficiency of these methods depends on choosing a good approximating distribution Q(Z | Y, φ), one that closely matches P(Z | Y, θ). In the next section we show that considerations of the spike
generation process can provide us with a good choice for Q.
[Figure 2 panels: a, acausal model schematic (stimulus filter k_z, self-history filter, and acausal filter from observed to hidden spikes); b, causal structure.]
Figure 2: Schematic diagram of the (acausal) model for the proposal density Q(Z | Y, φ), the conditional density on hidden spikes given the observed spike data. a, Conditional model schematic, which allows z_t to depend on the observed response both before and after t. b, Graphical model showing the causal structure of the acausal model, with arrows indicating dependency. The observed spike responses (gray circles) are no longer dependent variables, but regarded as fixed, external data, which is necessary for computing Q(z_t | Y, φ). Red arrows illustrate the dependency structure for a single bin of the hidden response, z_3.
4 Estimating the model with partially observed data
To understand intuitively why the true P(Z | Y, θ) is difficult to sample, and to motivate a reasonable choice for Q(Z | Y, φ), let us consider a simple example: suppose a single hidden neuron (whose full response is Z) makes a strong excitatory connection to an observed neuron (whose response is Y), so that if z_t = 1 (i.e., the hidden neuron spikes at time t), it is highly likely that y_{t+1} = 1 (i.e., the observed neuron spikes at time t + 1). Consequently, under the true P(Z | Y, θ), which is the probability over Z in all time bins given Y in all time bins, if y_{t+1} = 1 there is a high probability that z_t = 1. In other words, z_t exhibits an acausal dependence on y_{t+1}. But this acausal dependence is not captured in Equation 3, which expresses the probability over z_t as depending only on past events at time t, ignoring the future event y_{t+1} = 1.
Based on this observation (essentially, that the effect of future observed spikes on the probability of unobserved spikes depends on the connection strength between the two neurons), we approximate P(Z | Y, θ) using a separate point-process model Q(Z | Y, φ), which contains a set of acausal linear filters from Y to Z. Thus we have

    λ̃_{z_t} = exp(k̃_z · x_t + h̃_zz · z_[t−τ,t) + h̃_zy · y_[t−τ,t+τ)).        (10)

As above, k̃_z, h̃_zz, and h̃_zy are linear filters; the important difference is that h̃_zy · y_[t−τ,t+τ) is a sum over past and future time: from t − τ to t + τ − Δ. For this model, the parameters are φ = (k̃_z, h̃_zz, h̃_zy). Figure 2 illustrates the model architecture.
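A small Python sketch of the acausal covariate that feeds h̃_zy follows; the zero-padding convention is an assumption made for illustration.

import numpy as np

def acausal_window(y, t, H):
    # returns (y_{t-H}, ..., y_{t-1}, y_t, ..., y_{t+H-1}), zero-padded
    padded = np.concatenate([np.zeros(H), y, np.zeros(H)])
    return padded[t : t + 2 * H]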
We now have a straightforward way to implement the wake-sleep algorithm, using samples from Q to perform the wake phase (estimating θ), and samples from P(Y, Z | θ) to perform the sleep phase (estimating φ). The algorithm works as follows:
• Wake: Draw samples {Z_i} ~ Q(Z | Y, φ), where Y are the observed spike trains and φ is the current set of parameters for the acausal point-process model Q. Evaluate the expected complete-data log-likelihood (Eq. 8) using Monte Carlo integration:

    E_Q[log P(Y, Z | θ)] = lim_{N→∞} (1/N) ∑_{i=1}^{N} log P(Y, Z_i | θ).        (11)

  This is log-concave in θ, meaning that we can efficiently find its global maximum to fit θ.
• Sleep: Draw samples {Y_j, Z_j} ~ P(Y, Z | θ), the true encoding distribution with current parameters θ. (Note these samples are pure "fantasy" data, drawn without reference to the observed Y.) As above, compute the expected log-probability (Eq. 9) using these samples:

    E_{P(Y,Z|θ)}[log Q(Z | Y, φ)] = lim_{N→∞} (1/N) ∑_{j=1}^{N} log Q(Z_j | Y_j, φ),        (12)

  which is also log-concave and thus efficiently maximized for φ. (A toy sketch of both Monte Carlo objectives is given below.)
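The self-contained Python sketch below illustrates the two Monte Carlo objectives with scalar Bernoulli models standing in for the full point-process models; everything here is an invented illustration of the wake/sleep computations, not the models fit in this paper.

import numpy as np
rng = np.random.default_rng(1)

def log_p(y, z, theta):               # joint model: z ~ Bern(theta),
    pz = theta if z else 1 - theta    # y | z ~ Bern(0.9 if z else 0.1)
    py = (0.9 if z else 0.1) if y else (0.1 if z else 0.9)
    return np.log(pz) + np.log(py)

# wake: with y fixed and z_i ~ Q(z|y, phi), estimate E_Q[log P(y, Z|theta)]
y, phi = 1, {0: 0.2, 1: 0.8}
zs = rng.random(1000) < phi[y]
wake_objective = lambda theta: np.mean([log_p(y, z, theta) for z in zs])

# sleep: draw fantasy (y, z) pairs from P, then refit phi by maximizing
# E_P[log Q(Z|Y, phi)]; for a Bernoulli Q the maximizer is a conditional mean
theta = 0.5
z_f = rng.random(1000) < theta
y_f = rng.random(1000) < np.where(z_f, 0.9, 0.1)
phi = {v: z_f[y_f == v].mean() for v in (0, 1)}
print(wake_objective(0.7), phi)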
One advantage of the wake-sleep algorithm is that each complete iteration can be performed using only a single set of samples drawn from Q and P. A theoretical drawback to wake-sleep, however, is that the sleep step is not guaranteed to increase a lower bound on the log-likelihood, as in variational EM (wake-sleep minimizes the "wrong" KL divergence). We can implement variational EM using the same approximating point-process model Q, but we now require multiple steps of sampling for a complete E-step. To perform a variational E-step, we draw samples (as above) from Q and use them to evaluate both the KL divergence D_KL(Q(Z|Y, φ) || P(Z|Y, θ)) and its gradient with respect to φ. We can then perform noisy gradient descent to find a minimum, drawing a new set of samples for each evaluation of D_KL(Q, P). The M-step is equivalent to the wake phase of wake-sleep, achievable with a single set of samples.
One additional use for the approximating point-process model Q is as a "proposal" distribution for Metropolis-Hastings sampling of the true P(Z | Y, θ). Such samples can be used to evaluate the true log-likelihood, for comparison with the variational lower bound, and for noisy gradient ascent of the likelihood, to examine how closely these approximate methods converge to the true ML estimate. For fully observed data, such samples also provide a useful means for measuring how much the entropy of one neuron's response is reduced by knowing the responses of its neighbors.
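An independence Metropolis-Hastings sketch of this idea follows, with a one-dimensional Gaussian standing in for P(Z | Y, θ) and the proposal standing in for Q; the densities are invented for illustration only.

import numpy as np
rng = np.random.default_rng(2)

log_p = lambda z: -0.5 * (z - 1.0) ** 2   # unnormalized target
log_q = lambda z: -0.5 * z ** 2           # proposal density (up to a constant)
sample_q = lambda: rng.normal()

z, samples = sample_q(), []
for _ in range(5000):
    z_new = sample_q()
    log_alpha = (log_p(z_new) - log_p(z)) + (log_q(z) - log_q(z_new))
    if np.log(rng.random()) < log_alpha:
        z = z_new
    samples.append(z)
print(np.mean(samples))                   # should be near the target mean, 1.0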
5 Simulations: a two-neuron example
To verify the method, we applied it to a pair of neurons (as depicted in Fig. 1), simulated using a stimulus consisting of a long presentation of white noise. We denoted one of the neurons "observed" and the other "hidden". The parameters used for the simulation are depicted in Fig. 3. The cells have similarly shaped biphasic stimulus filters with opposite sign, like those commonly observed in ON and OFF retinal ganglion cells. We assume that the ON-like cell is observed, while the OFF-like cell is hidden. Both cells have spike-history filters that induce a refractory period following a spike, with a small peak during the relative refractory period that elicits burst-like responses. The hidden cell has a strong positive coupling filter h_zy onto the observed cell, which allows spiking activity in the hidden cell to excite the observed cell (despite the fact that the two cells receive opposite-sign stimulus input). For simplicity, we assume no coupling from the observed to the hidden cell.² Both types of filters were defined in a linear basis consisting of four raised cosines, meaning that each filter is specified by four parameters, and the full model contains 20 parameters (i.e., 2 stimulus filters and 3 spike-train filters).
Fig. 3b shows rasters of the two cells' responses to repeated presentations of a 1 s Gaussian white-noise stimulus with a frame rate of 100 Hz. Note that the temporal structure of the observed cell's response is strongly correlated with that of the hidden cell, due to the strong coupling from hidden to observed (and the fact that the hidden cell receives slightly stronger stimulus drive).
Our first task is to examine whether a standard, single-cell glpp model can capture the mapping from stimuli to spike responses. Fig. 3c shows the parameters obtained from such a fit to the observed data, using 10 s of the response to a non-repeating white noise stimulus (1000 samples, 251 spikes). Note that the estimated stimulus filter (red) has much lower amplitude than the stimulus filter of the true model (gray). Fig. 3d shows the parameters obtained for an observed and a hidden neuron, estimated using wake-sleep as described in Section 4. Fig. 3e-f shows a comparison of the performance of the two models, indicating that the coupled model estimated with wake-sleep does a much better job of capturing the temporal structure of the observed neuron's response, accounting for 60% vs. 15% of the PSTH variance.
²Although the stimulus and spike-history filters bear a rough similarity to those observed in retinal ganglion cells, the coupling used here is unlike coupling filters observed (to our knowledge) between ON and OFF cells in retinal data; it is assumed purely for demonstration purposes.
[Figure 3 panels: true parameters (k_y, k_z, h_zy, h_yy, h_zz) with the coupled-model estimate (via variational EM) and single-cell model estimate (a); GWN stimulus with hidden and observed rasters (b); raster comparison for the true, coupled, and single-cell models (e); and PSTH comparison, rate (Hz) versus time (s) (f).]
Figure 3: Simulation results. a, Parameters used for generating simulated responses. The top row shows the filters determining the input to the observed cell, while the bottom row shows those influencing the hidden cell. b, Raster of spike responses of observed and hidden cells to a repeated, 1 s Gaussian white noise stimulus (top). c, Parameter estimates for a single-cell glpp model fit to the observed cell's response, using just the stimulus and observed data (estimates in red; true observed-cell filters in gray). d, Parameters obtained using wake-sleep to estimate a coupled glpp model, again using only the stimulus and observed spike times. e, Response raster of the true observed cell (obtained by simulating the true two-cell model), the estimated single-cell model, and the estimated coupled model. f, Peri-stimulus time histogram (PSTH) of the above rasters, showing that the coupled model gives much higher accuracy predicting the true response.
The single-cell model, by contrast, exhibits much worse performance, which
is unsurprising given that the standard glpp encoding model can capture only quasi-linear stimulus
dependencies.
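The "percent of PSTH variance" metric used above can be computed by smoothing trial-averaged spike counts and comparing model and true rates. A minimal sketch, using synthetic Bernoulli rasters as stand-ins for the repeated-trial data (the smoothing width and toy rates are our choices):

```python
import numpy as np

def psth(raster, bin_width=5):
    """Trial-averaged spike rate, boxcar-smoothed over bin_width bins."""
    rate = raster.mean(axis=0)
    kernel = np.ones(bin_width) / bin_width
    return np.convolve(rate, kernel, mode="same")

def variance_explained(true_psth, model_psth):
    """Fraction of PSTH variance captured by the model prediction."""
    resid = true_psth - model_psth
    return 1.0 - resid.var() / true_psth.var()

rng = np.random.default_rng(1)
p_true = 0.1 + 0.08 * np.sin(np.linspace(0, 4 * np.pi, 100))            # true rate profile
true_raster = rng.random((50, 100)) < p_true                            # 50 trials x 100 bins
model_raster = rng.random((50, 100)) < 0.5 * (p_true + p_true.mean())   # blurred model

frac = variance_explained(psth(true_raster), psth(model_raster))
print("model accounts for %.0f%% of PSTH variance" % (100 * frac))
```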
6 Discussion
Although most statistical models of spike trains posit a direct pathway from sensory stimuli to neuronal responses, neurons are in fact embedded in highly recurrent networks that exhibit dynamics
on a broad range of time-scales. To take into account the fact that neural responses are driven by
both stimuli and network activity, and to understand the role of network interactions, we proposed
a model incorporating both hidden and observed spikes. We regard the observed spike responses
as those recorded during a typical experiment, while the responses of unobserved neurons are modeled as latent variables (unrecorded, but exerting influence on the observed responses). The resulting
model is tractable, as the latent variables can be integrated out using approximate sampling methods,
and optimization using variational EM or wake-sleep provides an approximate maximum likelihood
estimate of the model parameters. As shown by a simple example, certain settings of model parameters necessitate the incorporation unobserved spikes, as the standard single-stage encoding model
does not accurately describe the data.
In future work, we plan to examine the quantitative performance of the variational-EM and wake-sleep algorithms, to explore their tractability in scaling to larger populations, and to apply them to
real neural data. The model offers a promising tool for analyzing network structure and network-based computations carried out in higher sensory areas, particularly in the context where data are
only available from a restricted set of neurons recorded within a larger population.
Convex Relaxations of Latent Variable Training
Yuhong Guo and Dale Schuurmans
Department of Computing Science
University of Alberta
{yuhong, dale}@cs.ualberta.ca
Abstract
We investigate a new, convex relaxation of an expectation-maximization (EM)
variant that approximates a standard objective while eliminating local minima.
First, a cautionary result is presented, showing that any convex relaxation of EM
over hidden variables must give trivial results if any dependence on the missing
values is retained. Although this appears to be a strong negative outcome, we then
demonstrate how the problem can be bypassed by using equivalence relations instead of value assignments over hidden variables. In particular, we develop new
algorithms for estimating exponential conditional models that only require equivalence relation information over the variable values. This reformulation leads to
an exact expression for EM variants in a wide range of problems. We then develop
a semidefinite relaxation that yields global training by eliminating local minima.
1 Introduction
Few algorithms are better known in machine learning and statistics than expectation-maximization
(EM) [5]. One reason is that EM solves a common problem, learning from incomplete data, that occurs in almost every area of applied statistics. Equally well known as the algorithm itself, however, is the fact that EM suffers from shortcomings. Here it is important to distinguish between
occurs in almost every area of applied statistics. Equally well known to the algorithm itself, however, is the fact that EM suffers from shortcomings. Here it is important to distinguish between
the EM algorithm (essentially a coordinate descent procedure [10]) and the objective it optimizes
(marginal observed or conditional hidden likelihood). Only one problem is due to the algorithm
itself: since it is a simple coordinate descent, EM suffers from slow (linear) convergence and therefore can require a large number of iterations to reach a solution. Standard optimization algorithms
such as quasi-Newton methods can, in principle, require exponentially fewer iterations to achieve the
same accuracy (once close enough to a well behaved solution) [2, 11]. Nevertheless, EM converges
quickly in many circumstances [12, 13]. The main problems attributed to EM are not problems
with the algorithm per se, but instead are properties of the objective it optimizes. In particular, the
standard objectives tackled by EM are not convex in any standard probability model (e.g. the exponential family). Non-convexity immediately creates the risk of local minima, which unfortunately
is not just a theoretical concern: EM often does not produce very good results in practice, and can
sometimes fail to improve significantly upon initial parameter settings [9]. For example, the field of
unsupervised grammar induction [8] has been thwarted in its attempts to use EM for decades and is
still unable to infer useful syntactic models of natural language from raw unlabeled text.
We present a convex relaxation of EM for a standard training criterion and a general class of models in an attempt to understand whether local minima are really a necessary aspect of unsupervised
learning. Convex relaxations have been a popular topic in machine learning recently [4, 16]. In this
paper, we propose a convex relaxation of EM that can be applied to a general class of directed graphical models, including mixture models and Bayesian networks, in the presence of hidden variables.
There are some technical barriers to overcome in achieving an effective convex relaxation however.
First, as we will show, any convex relaxation of EM must produce trivial results if it maintains any
dependence on the values of hidden variables. Although this result suggests that any convex relaxation of EM cannot succeed, we subsequently show that the problem can be overcome by working
with equivalence relations over the values of the hidden variables, rather than the missing values
themselves. Although equivalence relations provide an easy way to solve the symmetry collapsing
problem, they do not immediately yield a convex EM formulation, because the underlying estimation principles for directed graphical models have not been formulated in these terms. Our main
technical contribution therefore is a reformulation of standard estimation principles for exponential conditional models in terms of equivalence relations on variable values, rather than the variable
values themselves. Given an adequate reformulation of the core estimation principle, developing a
useful convex relaxation of EM becomes possible.
1.1 EM Variants
Before proceeding, it is important to first clarify the precise EM variant we address. In fact, there are
many EM variants that optimize different criteria. Let z = (x, y) denote a complete observation,
where x refers to the observed part of the data and y refers to the unobserved part; and let w
refer to the parameters of the underlying probability model, P (x, y|w). (Here we consider discrete
probability distributions just for simplicity of the discussion.) Joint and conditional EM algorithms
are naive ?self-supervised? training procedures that alternate between inferring the values of the
missing variables and optimizing the parameters of the model
$$\text{(joint EM update)}\quad y^{(k+1)} = \arg\max_{y} P(x, y \mid w^{(k)}), \qquad w^{(k+1)} = \arg\max_{w} P(x, y^{(k+1)} \mid w)$$
$$\text{(conditional EM update)}\quad y^{(k+1)} = \arg\max_{y} P(y \mid x, w^{(k)}), \qquad w^{(k+1)} = \arg\max_{w} P(y^{(k+1)} \mid x, w)$$
These are clearly coordinate descent procedures that make monotonic progress in their objectives,
$P(x, y \mid w)$ and $P(y \mid x, w)$. Moreover, the criteria being optimized are in fact well motivated objectives for unsupervised training: joint EM is frequently used in statistical natural language processing
(where it is referred to as "Viterbi EM" [3, 7]); the conditional form has been used in [16]. The primary problem with these iterations is not that they optimize approximate or unjustified criteria, but
rather that they rapidly get stuck in poor local maxima due to the extreme updates made on y.
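For concreteness, here is a minimal sketch of the joint ("Viterbi") EM iteration for a two-component Gaussian mixture with known unit variances; the data and initialization are illustrative choices of ours, not an example from the literature cited above:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 100), rng.normal(2, 1, 100)])  # toy data
mu = np.array([-0.5, 0.5])                                           # initial means

for _ in range(20):
    # Hard E-step: y = argmax_y P(x, y | w), i.e., assign each point to its nearest mean.
    y = np.argmin((x[:, None] - mu[None, :]) ** 2, axis=1)
    # M-step: w = argmax_w P(x, y | w), i.e., refit each component mean.
    mu = np.array([x[y == k].mean() for k in range(2)])   # assumes both clusters nonempty

print("estimated means:", np.round(mu, 3))
```

The extreme (hard) assignment in the E-step is exactly what makes this update prone to poor local maxima.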
By far, the more common form of EM (contributing the very name expectation-maximization) is given by

$$\text{(marginal EM update)}\quad q_y^{(k+1)} = P(y \mid x, w^{(k)}), \qquad w^{(k+1)} = \arg\max_{w} \sum_{y} q_y^{(k+1)} \log P(x, y \mid w)$$

where $q_y$ is a distribution over possible missing values. Although it is not immediately obvious what this iteration optimizes, it has long been known that it monotonically improves the
marginal likelihood $P(x \mid w)$ [5]. [10] later showed that the E-step could be generalized to
$\max_{q_y} \sum_y q_y \log\left( P(x, y \mid w^{(k)}) / q_y \right)$. Due to the softer $q_y$ update, the standard EM update does
not converge as rapidly to a local maximum as the joint and conditional variants; however, as a
result, it tends to find better local maxima. Marginal EM has subsequently become the dominant
form of EM algorithm in the literature (although joint EM is still frequently used in statistical NLP
[3, 7]). Nevertheless, none of the training criteria are jointly convex in the optimization variables,
thus these iterations are only guaranteed to find local maxima.
Independent of the updates, the three training criteria are neither equivalent nor equally well motivated. In fact, for most applications we are more interested in acquiring an accurate conditional
P (y|x, w), rather than optimizing the marginal P (x|w) [16]. Of the three training criteria therefore
(joint, conditional and marginal), marginal likelihood appears to be the least relevant to learning
predictive models. Nevertheless, the convex relaxation techniques we propose can be applied to all
three objectives. For simplicity we will focus on maximizing joint likelihood in this paper, since
it incorporates aspects of both marginal and conditional training. Conveniently, joint and marginal
EM pose nearly identical optimization problems:
$$\text{(joint EM objective)}\quad \arg\max_{w} \max_{y} P(x, y \mid w) \;=\; \arg\max_{w} \max_{q_y} \sum_{y} q_y \log P(x, y \mid w)$$
$$\text{(marginal EM objective)}\quad \arg\max_{w} \sum_{y} P(x, y \mid w) \;=\; \arg\max_{w} \max_{q_y} \sum_{y} q_y \log P(x, y \mid w) + H(q_y)$$
where $q_y$ is a distribution over possible missing values [10]. Therefore, much of the analysis we
provide for joint EM also applies to marginal EM, leaving only a separate convex relaxation of
the entropy term that can be conducted independently. We will also primarily consider the hidden
variable case and assume a fixed set of random variables $Y_1, \ldots, Y_\ell$ is always unobserved, and a fixed
set of variables $X_{\ell+1}, \ldots, X_n$ is always observed. The technique remains extendable to the general
missing value case, however.
2 A Cautionary Result for Convexity
Our focus in this paper will be to develop a jointly convex relaxation to the minimization problem
posed by joint EM
$$\min_{y} \min_{w} \; -\sum_{i} \log P(x_i, y_i \mid w) \qquad (1)$$
One obvious issue we must face is to relax the discrete constraints on the assignments y. However,
the challenge is deeper than this. In the hidden variable case, when the same variables are missing in each observation, there is a complete symmetry between the missing values. In particular,
for any optimal solution $(y, w)$ there must be other, equivalent solutions $(y', w')$ corresponding
to a permutation of the hidden variable values. Unfortunately, this form of solution symmetry has
devastating consequences for any convex relaxation: Assume one attempts to use any jointly convex relaxation $f(q_y, w)$ of the standard log-likelihood objective (1), where the missing variable
assignment y has been relaxed into a continuous probabilistic assignment $q_y$ (like standard EM).
Lemma 1 If f is strictly convex and invariant to permutations of unobserved variable values, then
the global minimum of f, $(q_y^*, w^*)$, must satisfy $q_y^* = \text{uniform}$.

Proof: Assume $(q_y, w)$ is a global minimum of f but $q_y \neq \text{uniform}$. Then there must be some
permutation of the missing values, $\sigma$, such that the alternative $(q'_y, w') = (\sigma(q_y), \sigma(w))$ satisfies
$q'_y \neq q_y$. But by the permutation invariance of f, this implies $f(q_y, w) = f(q'_y, w')$. By the strict
convexity of f, we then have $f\left(\alpha(q_y, w) + (1-\alpha)(q'_y, w')\right) < \alpha f(q_y, w) + (1-\alpha) f(q'_y, w') = f(q_y, w)$, for $0 < \alpha < 1$, contradicting the global optimality of $f(q_y, w)$.
Therefore, any convex relaxation of (1) that uses a distribution qy over missing values and does
not make arbitrary distinctions can never do anything but produce a uniform distribution over the
hidden variable values. (The same is true for marginal and conditional versions of EM.) Moreover,
any non-strictly convex relaxation must admit the uniform distribution as a possible solution. This
trivialization is perhaps the main reason why standard EM objectives have not been previously convexified. (Note that standard coordinate descent algorithms simply break the symmetry arbitrarily
and descend into some local solution.) This negative result seems to imply that no useful convex
relaxation of EM is possible in the hidden variable case. However, our key observation is that a
convex relaxation expressed in terms of an equivalence relation over the missing values avoids this
symmetry breaking problem. In particular, equivalence relations exactly collapse the unresolvable
symmetries in this context, while still representing useful structure over the hidden assignments.
Representations based on equivalence relations are a useful tool for unsupervised learning that has
largely been overlooked (with some exceptions [4, 15]). Our goal in this paper, therefore, will be to
reformulate standard training objectives to use only equivalence relations on hidden variable values.
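To see concretely why equivalence relations avoid the symmetry problem, note that the matrix $M = YY^\top$ is unchanged by any relabeling of the hidden values. A quick numerical check (the labels and the permutation below are arbitrary):

```python
import numpy as np

def indicator(y, v):
    """t x v indicator matrix Y with Y[i, y[i]] = 1."""
    Y = np.zeros((len(y), v))
    Y[np.arange(len(y)), y] = 1
    return Y

y = np.array([0, 1, 1, 0, 2])                    # one hidden-value assignment
perm = {0: 2, 1: 0, 2: 1}                        # an arbitrary relabeling of the values
y_perm = np.array([perm[val] for val in y])

M = indicator(y, 3) @ indicator(y, 3).T          # equivalence relation M = Y Y^T
M_perm = indicator(y_perm, 3) @ indicator(y_perm, 3).T
print(np.array_equal(M, M_perm))                 # True: M ignores the labeling
```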
3 Directed Graphical Models
We will derive a convex relaxation framework for a general class of probability models, namely
directed models, that includes mixture models and discrete Bayesian networks as special cases. A
directed model defines a joint probability distribution over a set of random variables $Z_1, \ldots, Z_n$ by
exploiting the chain rule of probability to decompose the joint into a product of locally normalized
conditional distributions $P(z \mid w) = \prod_{j=1}^{n} P(z_j \mid z_{\pi(j)}, w_j)$. Here, $\pi(j) \subseteq \{1, \ldots, j-1\}$, and $w_j$ is
the set of parameters defining conditional distribution j. Furthermore, we will assume an exponential
family representation for the conditional distributions

$$P(z_j \mid z_{\pi(j)}, w_j) = \exp\left( w_j^\top \phi_j(z_j, z_{\pi(j)}) - A(w_j, z_{\pi(j)}) \right), \quad \text{where}$$
$$A(w_j, z_{\pi(j)}) = \log \sum_{a} \exp\left( w_j^\top \phi_j(a, z_{\pi(j)}) \right)$$
and $\phi_j(z_j, z_{\pi(j)})$ denotes a vector of features evaluated on the value of the child and its parents.
For simplicity, we will initially restrict our discussion to discrete Bayesian networks, but then reintroduce continuous random variables later. A discrete Bayesian network is just a directed model
where the conditional distributions are represented by a sparse feature vector indicating the identity
of the child-parent configuration: $\phi_j(z_j, z_{\pi(j)}) = (\ldots\, \mathbf{1}_{(z_j = a,\, z_{\pi(j)} = b)} \ldots)^\top$. That is, there is a single
indicator feature for each local configuration (a, b).
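These indicator representations are easy to construct from data. The sketch below (variable names and toy data are ours) builds Y for a binary child and $\Phi$ for the joint configuration of two binary parents:

```python
import numpy as np

def indicator(values, n_vals):
    """Rows are one-hot codes of the given values."""
    I = np.zeros((len(values), n_vals), dtype=int)
    I[np.arange(len(values)), values] = 1
    return I

child = np.array([0, 1, 1, 0])                # child values z_j
par_a = np.array([0, 0, 1, 1])                # parent variable A
par_b = np.array([1, 0, 1, 0])                # parent variable B

Y = indicator(child, 2)                       # t x v child indicator
config = par_a * 2 + par_b                    # enumerate joint parent configurations
Phi = indicator(config, 4)                    # t x c parent-configuration indicator
print(Y.sum(axis=1), Phi.sum(axis=1))         # each row contains exactly one 1
```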
A particularly convenient property of directed models is that the complete data likelihood decomposes into an independent sum of local log-likelihoods

$$\sum_{i} \log P(z^i \mid w) = \sum_{j} \sum_{i} \left[ w_j^\top \phi_j(z_j^i, z_{\pi(j)}^i) - A(w_j, z_{\pi(j)}^i) \right] \qquad (2)$$

Thus the problem of solving for a maximum likelihood set of parameters, given complete training
data, amounts to solving a set of independent log-linear regression problems, one for each variable
$Z_j$. To simplify notation, consider one of the log-linear regression problems in (2) and drop the
subscript j. Then, using matrix notation, we can rewrite the jth local optimization problem as

$$\min_{W} \sum_{i} A(W, \Phi_{i:}) - \mathrm{tr}(\Phi W Y^\top)$$

where $W \in \mathbb{R}^{c \times v}$, $\Phi \in \{0,1\}^{t \times c}$, and $Y \in \{0,1\}^{t \times v}$, such that t is the number of training
examples, v is the number of possible values for the child variable, c is the number of possible
configurations for the parent variables, and tr is the matrix trace. To explain this notation, note that
Y and $\Phi$ are indicator matrices that have a single 1 in each row, where Y indicates the value of
the child variable, and $\Phi$ indicates the specific configuration of the parent values, respectively; i.e.
$Y\mathbf{1} = \mathbf{1}$ and $\Phi\mathbf{1} = \mathbf{1}$, where $\mathbf{1}$ denotes the vector of all 1s. (This matrix notation greatly streamlines
the presentation below.) We also use the notation $\Phi_{i:}$ to denote the ith row vector in $\Phi$. Here, the
log normalization factor is given by $A(W, \Phi_{i:}) = \log \sum_{a} \exp(\Phi_{i:} W 1_a)$, where $1_a$ denotes a sparse
vector with a single 1 in position a.
Below, we will consider a regularized form of the objective, and thereby work with the maximum a
posteriori (MAP) form of the problem

$$\min_{W} \sum_{i} A(W, \Phi_{i:}) - \mathrm{tr}(\Phi W Y^\top) + \frac{\alpha}{2} \mathrm{tr}(W^\top W) \qquad (3)$$
This provides the core estimation principle at the heart of Bayesian network parameter learning.
However, for our purposes it suffers from a major drawback: (3) is not expressed in terms of equivalence relations between the variable values. Rather, it is expressed in terms of direct indicators of
specific variable values in specific examples, which will lead to a trivial outcome if we attempt
any convex relaxation. Instead, we require a fundamental reformulation of (3) to remove the value
dependence and replace it with a dependence only on equivalence relationships.
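Since (3) is a smooth convex problem in W, it can also be minimized directly; the following gradient-descent sketch on synthetic indicator data uses our own arbitrary choices of step size and regularizer (alpha = 0.1):

```python
import numpy as np

rng = np.random.default_rng(0)
t, c, v, alpha = 40, 4, 3, 0.1
Phi = np.eye(c)[rng.integers(0, c, t)]        # t x c parent-configuration indicators
Y = np.eye(v)[rng.integers(0, v, t)]          # t x v child-value indicators
W = np.zeros((c, v))

for _ in range(500):
    logits = Phi @ W                          # t x v matrix of Phi_i: W
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)         # row softmax: model P(child | parents)
    grad = Phi.T @ (P - Y) + alpha * W        # gradient of objective (3)
    W -= 0.5 * grad

print("final negative log-likelihood per example:",
      -np.mean(np.log((P * Y).sum(axis=1))))
```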
4 Log-linear Regression on Equivalence Relations
The first step in reformulating (3) in terms of equivalence relations is to derive its dual.

Lemma 2 An equivalent optimization problem to (3) is

$$\max_{\Lambda} \; -\mathrm{tr}(\Lambda \log \Lambda^\top) - \frac{1}{2\alpha} \mathrm{tr}\left( (Y - \Lambda)^\top \Phi \Phi^\top (Y - \Lambda) \right) \quad \text{subject to} \quad \Lambda \ge 0,\; \Lambda \mathbf{1} = \mathbf{1} \qquad (4)$$
Proof: The proof follows a standard derivation, which we sketch; see e.g. [14]. First, by considering
the Fenchel conjugate of A it can be shown that

$$A(W, \Phi_{i:}) = \max_{\Lambda_{i:}} \; \mathrm{tr}(\Lambda_{i:}^\top \Phi_{i:} W) - \Lambda_{i:} \log \Lambda_{i:}^\top \quad \text{subject to} \quad \Lambda_{i:} \ge 0,\; \Lambda_{i:} \mathbf{1} = 1$$

Substituting this in (3) and then invoking the strong minimax property [1] allows one to show that
(3) is equivalent to

$$\max_{\Lambda} \min_{W} \; -\mathrm{tr}(\Lambda \log \Lambda^\top) - \mathrm{tr}\left( (Y - \Lambda)^\top \Phi W \right) + \frac{\alpha}{2} \mathrm{tr}(W^\top W) \quad \text{subject to} \quad \Lambda \ge 0,\; \Lambda \mathbf{1} = \mathbf{1}$$

Finally, the inner minimization can be solved by setting $W = \frac{1}{\alpha} \Phi^\top (Y - \Lambda)$, yielding (4).
Interestingly, deriving the dual has already achieved part of the desired result: the parent configurations now only enter the problem through the kernel matrix $K = \Phi\Phi^\top$. For Bayesian networks this
kernel matrix is in fact an equivalence relation between parent configurations: $\Phi$ is a 0-1 indicator
matrix with a single 1 in each row, implying that $K_{ij} = 1$ iff $\Phi_{i:} = \Phi_{j:}$, and $K_{ij} = 0$ otherwise.
But more importantly, K can be re-expressed as a function of the individual equivalence relations on
each of the parent variables. Let $Y^p \in \{0,1\}^{t \times v_p}$ indicate the value of a parent variable $Z_p$ for each
training example. That is, $Y^p_{i:}$ is a $1 \times v_p$ sparse row vector with a single 1 indicating the value of
variable $Z_p$ in example i. Then $M^p = Y^p Y^{p\top}$ defines an equivalence relation over the assignments
to variable $Z_p$, since $M^p_{ij} = 1$ if $Y^p_{i:} = Y^p_{j:}$ and $M^p_{ij} = 0$ otherwise. It is not hard to see that the
equivalence relation over complete parent configurations, $K = \Phi\Phi^\top$, is equal to the componentwise (Hadamard) product of the individual equivalence relations for each parent variable. That is,
$K = \Phi\Phi^\top = M^1 \circ M^2 \circ \cdots \circ M^p$, since $K_{ij} = 1$ iff $M^1_{ij} = 1$ and $M^2_{ij} = 1$ and ... $M^p_{ij} = 1$.
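The identity $K = M^1 \circ \cdots \circ M^p$ is easy to verify numerically; a sketch with two toy parent variables (all data arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
t = 6
p1 = rng.integers(0, 2, t)                    # values of parent variable 1
p2 = rng.integers(0, 3, t)                    # values of parent variable 2

def eq_rel(vals):
    """M[i, j] = 1 iff vals[i] == vals[j]."""
    return (vals[:, None] == vals[None, :]).astype(int)

joint = p1 * 3 + p2                           # unique code for each joint configuration
K_direct = eq_rel(joint)                      # equivalence relation on configurations
K_hadamard = eq_rel(p1) * eq_rel(p2)          # componentwise (Hadamard) product
print(np.array_equal(K_direct, K_hadamard))   # True
```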
Unfortunately, the dual problem (4) is still expressed in terms of the indicator matrix Y over child
variable values, which is still not acceptable. We still need to reformulate (4) in terms of the equivalence relation matrix $M = YY^\top$. Consider an alternative dual parameterization $\Omega \in \mathbb{R}^{t \times t}$ such
that $\Omega \ge 0$, $\Omega\mathbf{1} = \mathbf{1}$, and $\Omega Y = \Lambda$. (Note that $\Lambda \in \mathbb{R}^{t \times v}$, for $v < t$, and therefore $\Omega$ is larger than
$\Lambda$. Also note that as long as every child value occurs at least once in the training set, Y has full rank
v. If not, then the child variable effectively has fewer values, and we could simply reduce Y until
it becomes full rank again without affecting the objective (3).) Therefore, since Y is full rank, for
any $\Lambda$, some $\Omega$ must exist that achieves $\Omega Y = \Lambda$. Then we can relate the primal parameters to this
larger set of dual parameters by the relation $W = \frac{1}{\alpha}\Phi^\top(I - \Omega)Y$. (Even though $\Omega$ is larger than $\Lambda$,
they can only express the same realizable set of parameters W.) To simplify notation, let $B = I - \Omega$
and note the relation $W = \frac{1}{\alpha}\Phi^\top B Y$. If we reparameterize the original problem using this relation,
then it is possible to show that an equivalent optimization problem to (3) is given by

$$\min_{B} \sum_{i} A(B, \Phi_{i:}) - \mathrm{tr}(KBM) + \frac{1}{2\alpha}\mathrm{tr}(B^\top K B M) \quad \text{subject to} \quad B \le I,\; B\mathbf{1} = 0 \qquad (5)$$
where $K = \Phi\Phi^\top$ and $M = YY^\top$ are equivalence relations on the parent configurations and
child values respectively. The formulation (5) is now almost completely expressed in terms of
equivalence relations over the data, except for one subtle problem: the log normalization factor
$A(B, \Phi_{i:}) = \log \sum_{a} \exp\left(\frac{1}{\alpha}\Phi_{i:}\Phi^\top B Y 1_a\right)$ still directly depends on the label indicator matrix Y.
Our key technical lemma is that this log normalization factor can be re-expressed to depend on the
equivalence relation matrix M alone.
Lemma 3 $A(B, \Phi_{i:}) = \log \sum_{j} \exp\left( \frac{1}{\alpha} K_{i:} B M_{:j} - \log \mathbf{1}^\top M_{:j} \right)$

Proof: The main observation is that an equivalence relation over value indicators, $M = YY^\top$,
consists of columns copied from Y. That is, for all j, $M_{:j} = Y_{:a}$ for some a corresponding to the
child value in example j. Let y(j) denote the child value in example j and let $\theta_{i:} = \frac{1}{\alpha} K_{i:} B$. Then

$$\sum_{a} \exp\left( \tfrac{1}{\alpha}\Phi_{i:}\Phi^\top B Y 1_a \right) = \sum_{a} \exp(\theta_{i:} Y_{:a}) = \sum_{a} \sum_{j: y(j)=a} \tfrac{1}{|\{\ell : y(\ell)=a\}|} \exp(\theta_{i:} M_{:j})$$
$$= \sum_{j} \tfrac{1}{|\{\ell : y(\ell)=y(j)\}|} \exp(\theta_{i:} M_{:j}) = \sum_{j} \tfrac{1}{\mathbf{1}^\top M_{:j}} \exp(\theta_{i:} M_{:j}) = \sum_{j} \exp\left( \theta_{i:} M_{:j} - \log \mathbf{1}^\top M_{:j} \right)$$
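Lemma 3 can also be checked numerically: summing $\exp(\theta \cdot Y_{:a})$ over child values a agrees with summing $\exp(\theta \cdot M_{:j} - \log \mathbf{1}^\top M_{:j})$ over examples j, provided every value occurs at least once (the vector theta below is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.array([0, 1, 2, 0, 1, 2, 1])           # every child value occurs at least once
Y = np.eye(3)[y]                              # t x v indicator
M = Y @ Y.T                                   # t x t equivalence relation
theta = rng.standard_normal(len(y))           # stands in for (1/alpha) K_i: B

lhs = np.log(np.sum(np.exp(theta @ Y)))                      # sum over values a
counts = M.sum(axis=0)                                       # 1^T M_:j for each column
rhs = np.log(np.sum(np.exp(theta @ M - np.log(counts))))     # sum over examples j
print(np.isclose(lhs, rhs))                                  # True
```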
Using Lemma 3 one can show that the dual problem to (5) is given by the following.

Theorem 1 An equivalent optimization problem to (3) is

$$\max_{\Omega \ge 0,\, \Omega\mathbf{1}=\mathbf{1}} \; -\mathrm{tr}(\Omega \log \Omega^\top) - \mathbf{1}^\top \Omega \log(M\mathbf{1}) - \frac{1}{2\alpha}\mathrm{tr}\left( (I - \Omega)^\top K (I - \Omega) M \right) \qquad (6)$$

where $K = M^1 \circ \cdots \circ M^p$ for parent variables $Z_1, \ldots, Z_p$.
Proof: This follows the same derivation as Lemma 2, modified by taking into account the extra
term introduced by Lemma 3. First, considering the Fenchel conjugate of A, it can be shown that

$$A(B, \Phi_{i:}) = \max_{\Omega_{i:} \ge 0,\, \Omega_{i:}\mathbf{1}=1} \; \frac{1}{\alpha} K_{i:} B M \Omega_{i:}^\top - \Omega_{i:} \log \Omega_{i:}^\top - \Omega_{i:} \log(M\mathbf{1})$$

Substituting this in (5) and then invoking the strong minimax property [1] allows one to show that
(5) is equivalent to

$$\max_{\Omega \ge 0,\, \Omega\mathbf{1}=\mathbf{1}} \; \min_{B \le I,\, B\mathbf{1}=0} \; -\mathrm{tr}(\Omega \log \Omega^\top) - \mathbf{1}^\top \Omega \log(M\mathbf{1}) - \frac{1}{\alpha}\mathrm{tr}\left( (I - \Omega)^\top K B M \right) + \frac{1}{2\alpha}\mathrm{tr}(B^\top K B M)$$

Finally, the inner minimization on B can be solved by setting $B = I - \Omega$, yielding (6).
This gives our key result: the log-linear regression (3) is equivalent to (6), which is now expressed
strictly in terms of equivalence relations over the parent configurations and child values. That is, the
value indicators, $\Phi$ and Y, have been successfully eliminated from the formulation. Given a solution
$\Omega^*$ to (6), the optimal model parameters $W^*$ for (3) can be recovered via $W^* = \frac{1}{\alpha}\Phi^\top(I - \Omega^*)Y$.
5 Convex Relaxation of Joint EM
The equivalence relation form of log-linear regression can be used to derive useful relaxations of
EM variants for directed models. In particular, by exploiting Theorem 1, we can now re-express
the regularized form of the joint EM objective (1) strictly in terms of equivalence relations over the
hidden variable values

$$\min_{\{Y^h\}} \sum_{j} \min_{w_j} \; -\sum_{i} \log P(z_j^i \mid z_{\pi(j)}^i, w_j) + \frac{\alpha}{2} w_j^\top w_j \qquad (7)$$

$$= \min_{\{M^h\}} \sum_{j} \max_{\Omega_j \ge 0,\, \Omega_j\mathbf{1}=\mathbf{1}} \; -\mathrm{tr}(\Omega_j \log \Omega_j^\top) - \mathbf{1}^\top \Omega_j \log(M^j\mathbf{1}) - \frac{1}{2\alpha}\mathrm{tr}\left( (I - \Omega_j)^\top K^j (I - \Omega_j) M^j \right) \qquad (8)$$

$$\text{subject to} \quad M^h = Y^h Y^{h\top}, \quad Y^h \in \{0,1\}^{t \times v_h}, \quad Y^h \mathbf{1} = \mathbf{1} \qquad (9)$$

where h ranges over the hidden variables, and $K^j = M^{j_1} \circ \cdots \circ M^{j_p}$ for the parent variables
$Z_{j_1}, \ldots, Z_{j_p}$ of $Z_j$.
Note that (8) is an exact reformulation of the joint EM objective (7); no relaxation has yet been
introduced. Another nice property of the objective in (8) is that it is concave in each $\Omega_j$ and convex
in each $M^h$ individually (a maximum of convex functions is convex [2]). Therefore, (8) appears
as though it might admit an efficient algorithmic solution. However, one difficulty in solving the
resulting optimization problem is the constraints. Although the constraints imposed in (9) are not
convex, there is a natural convex relaxation suggested by the following.

Lemma 4 (9) is equivalent to: $M \in \{0,1\}^{t \times t}$, $\mathrm{diag}(M) = \mathbf{1}$, $M = M^\top$, $M \succeq 0$, $\mathrm{rank}(M) = v$.

A natural convex relaxation of (9) can therefore be obtained by relaxing the discreteness constraint
and dropping the nonconvex rank constraint, yielding

$$M^h \in [0,1]^{t \times t}, \quad \mathrm{diag}(M^h) = \mathbf{1}, \quad M^h = M^{h\top}, \quad M^h \succeq 0 \qquad (10)$$

Optimizing the exact objective in (8) subject to the relaxed convex constraints (10) provides the
foundation for our approach to convexifying EM. Note that since (8) and (10) are expressed solely
in terms of equivalence relations, and do not depend on the specific values of hidden variables in
any way, this formulation is not subject to the triviality result of Lemma 1.
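The relaxed feasible set (10) is a standard semidefinite constraint set that off-the-shelf solvers handle directly. The sketch below, assuming the CVXPY package is available, maximizes a placeholder linear objective over the constraints (10); it is meant only to show the constraint encoding, not the full saddle-point objective (8):

```python
import cvxpy as cp
import numpy as np

t = 5
rng = np.random.default_rng(0)
C = rng.standard_normal((t, t))
C = C + C.T                                    # arbitrary symmetric cost matrix

M = cp.Variable((t, t), symmetric=True)
constraints = [M >> 0,                         # M positive semidefinite
               cp.diag(M) == 1,               # unit diagonal
               M >= 0, M <= 1]                # entries in [0, 1]
prob = cp.Problem(cp.Maximize(cp.trace(C @ M)), constraints)
prob.solve()
print(np.round(M.value, 2))
```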
However, there are still some details left to consider. First, if there is only a single hidden variable,
then (8) is convex with respect to the single matrix variable $M^h$. This result immediately provides
a convex EM training algorithm for various applications, such as for mixture models for example
(see the note regarding continuous random variables below). Second, if there are multiple hidden
variables that are separated from each other (none are neighbors, nor share a common child), then the
formulation (8) remains convex and can be directly applied. On the other hand, if hidden variables
are connected in any way, either by sharing a parent-child relationship or having a common child,
then (8) is no longer jointly convex, because the trace term is no longer linear in the matrix variables
$\{M^h\}$. In this case, we can restore convexity by further relaxing the problem: To illustrate, if there
are multiple hidden parents $Z_{p_1}, \ldots, Z_{p_\ell}$ for a given child, then the combined equivalence relation
$M^{p_1} \circ \cdots \circ M^{p_\ell}$ is a Hadamard product of the individual matrices. A convex formulation can be
recovered by introducing an auxiliary matrix variable $\tilde{M}$ to replace $M^{p_1} \circ \cdots \circ M^{p_\ell}$ in (8) and adding
the set of linear constraints $\tilde{M}_{ij} \le M^p_{ij}$ for $p \in \{p_1, \ldots, p_\ell\}$, and $\tilde{M}_{ij} \ge M^{p_1}_{ij} + \cdots + M^{p_\ell}_{ij} - \ell + 1$,
to approximate the componentwise "and". A similar relaxation can also be applied when a child is
hidden concurrently with hidden parent variables.
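On binary inputs these linear constraints pin $\tilde{M}_{ij}$ to exactly the componentwise "and", as a quick enumeration confirms (helper names are ours; we also use $\tilde{M}_{ij} \ge 0$, which holds for equivalence-matrix entries):

```python
from itertools import product

def and_bounds(bits):
    """Feasible interval for the auxiliary entry given the linear constraints."""
    lo = max(sum(bits) - len(bits) + 1, 0)   # m >= b_1 + ... + b_l - l + 1, and m >= 0
    hi = min(bits)                            # m <= b_p for every p
    return lo, hi

for bits in product([0, 1], repeat=3):
    lo, hi = and_bounds(bits)
    assert lo == hi == min(bits)              # interval collapses to the exact AND
print("linearization is exact on binary inputs")
```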
Continuous Variables The formulation in (8) can be applied to directed models with continuous
random variables, provided that all hidden variables remain discrete. If every continuous random
variable is observed, then the subproblems on these variables can be kept in their natural formulations, and hence still solved. This extension is sufficient to allow the formulation to handle Gaussian
mixture models, for example. Unfortunately, the techniques developed in this paper do not apply to
the situation where there are continuous hidden variables.
Recovering the Model Parameters Once the relaxed equivalence relation matrices $\{M^h\}$ have
been obtained, the parameters of the underlying probability model need to be recovered. At an
Bayesian      Fully Supervised          Viterbi EM                Convex EM
networks      Train        Test         Train        Test         Train        Test
Synth1        7.23 ±.06    7.90 ±.04    11.29 ±.44   11.73 ±.38   8.96 ±.24    9.16 ±.21
Synth2        4.24 ±.04    4.50 ±.03     6.02 ±.20    6.41 ±.23   5.27 ±.18    5.55 ±.19
Synth3        4.93 ±.02    5.32 ±.05     7.81 ±.35    8.18 ±.33   6.23 ±.18    6.41 ±.14
Diabetes      5.23 ±.04    5.53 ±.04     6.70 ±.27    7.07 ±.23   6.51 ±.35    6.50 ±.28
Pima          5.07 ±.03    5.32 ±.03     6.74 ±.34    6.93 ±.21   5.81 ±.07    6.03 ±.09
Cancer        2.18 ±.05    2.31 ±.02     3.90 ±.31    3.94 ±.29   2.98 ±.19    3.06 ±.16
Alarm        10.23 ±.16   12.30 ±.06    11.94 ±.32   13.75 ±.17  11.74 ±.25   13.62 ±.20
Asian         2.17 ±.05    2.33 ±.02     2.21 ±.05    2.36 ±.03   2.70 ±.14    2.78 ±.12

Table 1: Results on synthetic and real-world Bayesian networks: average loss ± standard deviation
optimal solution to (8), one not only obtains $\{M^h\}$, but also the associated set of dual parameters
$\{\Omega_j\}$. Therefore, we can recover the primal parameters $W_j$ from the dual parameters $\Omega_j$ by using
the relationship $W_j = \frac{1}{\alpha}\Phi_j^\top(I - \Omega_j)Y^j$ established above, which only requires availability of a label
assignment matrix $Y^j$. For observed variables, $Y^j$ is known, and therefore the model parameters
can be immediately recovered. For hidden variables, we first need to compute a rank-$v_h$ factorization
of $M^h$. Let $V = U\Sigma^{1/2}$, where U and $\Sigma$ are the top-$v_h$ eigenvector and eigenvalue matrices of the
centered matrix $HM^hH$, such that $H = I - \frac{1}{t}\mathbf{1}\mathbf{1}^\top$. One simple idea to recover $\hat{Y}^h$ from V is to run
k-means on the rows of V and construct the indicator matrix. A more elegant approach would be to
use a randomized rounding scheme [6], which also produces a deterministic $\hat{Y}^h$, but provides some
guarantees about how well $\hat{Y}^h\hat{Y}^{h\top}$ approximates $M^h$. Note however that V is an approximation
of $Y^h$ where the row vectors have been re-centered on the origin in a rotated coordinate system.
Therefore, a simpler approach is just to map the rows of V back onto the simplex by translating the
mean back to the simplex center and rotating the coordinates back into the positive orthant.
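A sketch of this recovery step: factor the centered relaxed matrix and cluster the rows of V with a few Lloyd (k-means) iterations to produce a discrete $\hat{Y}^h$. All names are ours, and the mildly perturbed M below is a toy stand-in for a relaxation solution:

```python
import numpy as np

def round_equivalence(M, v, iters=20, seed=0):
    """Recover a t x v indicator from a relaxed equivalence matrix M."""
    t = M.shape[0]
    H = np.eye(t) - np.ones((t, t)) / t                          # centering matrix
    evals, evecs = np.linalg.eigh(H @ M @ H)
    V = evecs[:, -v:] * np.sqrt(np.clip(evals[-v:], 0, None))    # rank-v factor
    rng = np.random.default_rng(seed)
    centers = V[rng.choice(t, v, replace=False)]
    for _ in range(iters):                                       # Lloyd (k-means) steps
        labels = np.argmin(((V[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(v):
            if np.any(labels == k):
                centers[k] = V[labels == k].mean(axis=0)
    return np.eye(v)[labels]

y = np.array([0, 0, 1, 1, 2, 2])
M = np.eye(3)[y] @ np.eye(3)[y].T + 0.05 * np.random.default_rng(1).random((6, 6))
M = (M + M.T) / 2                                        # keep the perturbed M symmetric
print(round_equivalence(M, 3).argmax(axis=1))            # a partition of the 6 rows
```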
6 Experimental Results
An important question to ask is whether the relaxed, convex objective (8) is in fact over-relaxed, and
whether important structure in the original marginal likelihood objective has been lost as a result. To
investigate this question, we conducted a set of experiments to evaluate our convex approach compared to the standard Viterbi (i.e. joint) EM algorithm, and to supervised training on fully observed
data. Our experiments are conducted using both synthetic Bayesian networks and real networks,
while measuring the trained models by the log-loss produced on the fully observed training data
and testing data. All the results reported in this paper are averages over 10 repeated runs. The test
size for the experiments is 1000, and the training size is 100 unless otherwise specified. For a fair comparison,
we used 10 random restarts for Viterbi EM to help avoid poor local optima.
For the synthetic experiments, we constructed three Bayesian networks: (1) Bayesian network 1
(Synth1) is a three-layer network with 9 variables, where the two nodes in the middle layer are
picked as hidden variables; (2) Bayesian network 2 (Synth2) is a network with 6 variables and
6 edges, where a node with 2 parents and 2 children is picked as the hidden variable; (3) Bayesian
network 3 (Synth3) is a Naive Bayes model with 7 variables, where the parent node is selected as
the hidden variable. The parameters are generated in a discriminative way to produce models with
apparent causal relations between the connected nodes. We performed experiments on these three
synthetic networks using varying training sizes: 50, 100 and 150. Due to space limits, we only
report the results for training size 100 in Table 1. Besides these three synthetic Bayesian networks,
we also ran experiments using real UCI data, where we used Naive Bayes as the model structure,
and set the class variables to be hidden. The middle two rows of Table 1 show the results on two
UCI data sets.
Here we can see that the convex relaxation was successful at preserving structure in the EM objective, and in fact, generally performed much better than the Viterbi EM algorithm, particularly
in the case (Synth1) where there were two hidden variables. Not surprisingly, supervised training
on the complete data performed better than the EM methods, but generally demonstrated a larger
gap between training and test losses than the EM methods. Similar results were obtained for both
larger and smaller training sample sizes. For the UCI experiments, the results are very similar to the
synthetic networks, showing good results again for the convex EM relaxation.
Finally, we conducted additional experiments on three real-world Bayesian networks: Alarm, Cancer,
and Asian (downloaded from http://www.norsys.com/networklibrary.html). We picked one well-connected node from each model to serve as the hidden variable, and generated data by sampling
from the models. Table 1 shows the experimental results for these three Bayesian networks. Here
we can see that the convex EM relaxation performed well on the Cancer and Alarm networks. Since
we only picked one hidden variable from the 37 variables in Alarm, it is understandable that any
potential advantage for the convex approach might not be large. Nevertheless, a slight advantage is
still detected here. Much weaker results are obtained on the Asian network however. We are still
investigating what aspects of the problem are responsible for the poorer approximation in this case.
7 Conclusion
We have presented a new convex relaxation of EM that obtains generally effective results in simple
experimental comparisons to a standard joint EM algorithm (Viterbi EM), on both synthetic and
real problems. This new approach was facilitated by a novel reformulation of log-linear regression
that refers only to equivalence relation information on the data, and thereby allows us to avoid
the symmetry breaking problem that blocks naive convexification strategies from working. One
shortcoming of the proposed technique, however, is that it cannot handle continuous hidden variables;
this remains a direction for future research. In one experiment, weaker approximation quality was
obtained, and this too is the subject of further investigation.
References
[1] J. Borwein and A. Lewis. Convex Analysis and Nonlinear Optimization. Springer, 2000.
[2] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge U. Press, 2004.
[3] S. Chen. Models for grapheme-to-phoneme conversion. In Eurospeech, 2003.
[4] T. De Bie and N. Cristianini. Fast SDP relaxations of graph cut clustering, transduction, and other combinatorial problems. Journal of Machine Learning Research, 7, 2006.
[5] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38, 1977.
[6] M. Goemans and D. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. JACM, 42(6):1115–1145, 1995.
[7] S. Goldwater and M. Johnson. Bias in learning syllable structure. In Proc. CoNLL, 2005.
[8] D. Klein and C. Manning. Corpus-based induction of syntactic structure: Models of dependency and constituency. In Proceedings ACL, 2004.
[9] B. Merialdo. Tagging text with a probabilistic model. Computational Linguistics, 20(2):155–171, 1994.
[10] R. Neal and G. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In M. Jordan, editor, Learning in Graphical Models. Kluwer, 1998.
[11] J. Nocedal and S. Wright. Numerical Optimization. Springer, 1999.
[12] R. Salakhutdinov, S. Roweis, and Z. Ghahramani. Optimization with EM and expectation-conjugate-gradient. In Proceedings ICML, 2003.
[13] N. Srebro, G. Shakhnarovich, and S. Roweis. An investigation of computational and informational limits in Gaussian mixture clustering. In Proceedings ICML, 2006.
[14] M. Wainwright and M. Jordan. Graphical models, exponential families, and variational inference. Technical Report TR-649, UC Berkeley, Dept. Statistics, 2003.
[15] L. Xu, J. Neufeld, B. Larson, and D. Schuurmans. Max margin clustering. In NIPS 17, 2004.
[16] L. Xu, D. Wilkinson, F. Southey, and D. Schuurmans. Discriminative unsupervised learning of structured predictors. In Proceedings ICML, 2006.
Incremental Natural Actor-Critic Algorithms
Shalabh Bhatnagar
Department of Computer Science & Automation, Indian Institute of Science, Bangalore, India
Richard S. Sutton, Mohammad Ghavamzadeh, Mark Lee
Department of Computing Science, University of Alberta, Edmonton, Alberta, Canada
Abstract
We present four new reinforcement learning algorithms based on actor-critic and
natural-gradient ideas, and provide their convergence proofs. Actor-critic reinforcement learning methods are online approximations to policy iteration in which
the value-function parameters are estimated using temporal difference learning
and the policy parameters are updated by stochastic gradient descent. Methods
based on policy gradients in this way are of special interest because of their compatibility with function approximation methods, which are needed to handle large
or infinite state spaces. The use of temporal difference learning in this way is of
interest because in many applications it dramatically reduces the variance of the
gradient estimates. The use of the natural gradient is of interest because it can
produce better conditioned parameterizations and has been shown to further reduce variance in some cases. Our results extend prior two-timescale convergence
results for actor-critic methods by Konda and Tsitsiklis by using temporal difference learning in the actor and by incorporating natural gradients, and they extend
prior empirical studies of natural actor-critic methods by Peters, Vijayakumar and
Schaal by providing the first convergence proofs and the first fully incremental
algorithms.
1 Introduction
Actor-critic (AC) algorithms are based on the simultaneous online estimation of the parameters of
two structures, called the actor and the critic. The actor corresponds to a conventional action-selection policy, mapping states to actions in a probabilistic manner. The critic corresponds to a
conventional value function, mapping states to expected cumulative future reward. Thus, the critic
addresses a problem of prediction, whereas the actor is concerned with control. These problems are
separable, but are solved simultaneously to find an optimal policy, as in policy iteration. A variety
of methods can be used to solve the prediction problem, but the ones that have proved most effective
in large applications are those based on some form of temporal difference (TD) learning (Sutton,
1988) in which estimates are updated on the basis of other estimates. Such bootstrapping methods
can be viewed as a way of accelerating learning by trading bias for variance.
Actor-critic methods were among the earliest to be investigated in reinforcement learning (Barto
et al., 1983; Sutton, 1984). They were largely supplanted in the 1990s by methods that estimate
action-value functions and use them directly to select actions without an explicit policy structure.
This approach was appealing because of its simplicity, but when combined with function approximation was found to have theoretical difficulties including in some cases a failure to converge. These
problems led to renewed interest in methods with an explicit representation of the policy, which
came to be known as policy gradient methods (Marbach, 1998; Sutton et al., 2000; Konda & Tsitsiklis, 2000; Baxter & Bartlett, 2001). Policy gradient methods without bootstrapping can be easily
proved convergent, but converge slowly because of the high variance of their gradient estimates.
Combining them with bootstrapping is a promising avenue toward a more effective method.
Another approach to speeding up policy gradient algorithms was proposed by Kakade (2002) and
then refined and extended by Bagnell and Schneider (2003) and by Peters et al. (2003). The idea
was to replace the policy gradient with the so-called natural policy gradient. This was motivated by
the intuition that a change in the policy parameterization should not influence the result of the policy
update. In terms of the policy update rule, the move to the natural gradient amounts to linearly
transforming the gradient using the inverse Fisher information matrix of the policy.
In this paper, we introduce four new AC algorithms, three of which incorporate natural gradients. All
the algorithms are for the average reward setting and use function approximation in the state-value
function. For all four methods we prove convergence of the parameters of the policy and state-value
function to a local maximum of a performance function that corresponds to the average reward plus
a measure of the TD error inherent in the function approximation. Due to space limitations, we
do not present the convergence analysis of our algorithms here; it can be found, along with some
empirical results using our algorithms, in the extended version of this paper (Bhatnagar et al., 2007).
Our results extend prior AC methods, especially those of Konda and Tsitsiklis (2000) and of Peters
et al. (2005). We discuss these relationships in detail in Section 6. Our analysis does not cover the
use of eligibility traces but we believe the extension to that case would be straightforward.
2 The Policy Gradient Framework
We consider the standard reinforcement learning framework (e.g., see Sutton & Barto, 1998), in
which a learning agent interacts with a stochastic environment and this interaction is modeled as a
discrete-time Markov decision process. The state, action, and reward at each time t ∈ {0, 1, 2, ...} are denoted s_t ∈ S, a_t ∈ A, and r_t ∈ R, respectively. We assume the reward is random, real-valued, and uniformly bounded. The environment's dynamics are characterized by state-transition probabilities p(s′|s, a) = Pr(s_{t+1} = s′|s_t = s, a_t = a) and single-stage expected rewards r(s, a) = E[r_{t+1}|s_t = s, a_t = a], ∀s, s′ ∈ S, ∀a ∈ A. The agent selects an action at each time t using a randomized stationary policy π(a|s) = Pr(a_t = a|s_t = s). We assume
(B1) The Markov chain induced by any policy is irreducible and aperiodic.
The long-term average reward per step under policy π is defined as

    J(π) = lim_{T→∞} (1/T) E[ Σ_{t=0}^{T−1} r_{t+1} | π ] = Σ_{s∈S} d^π(s) Σ_{a∈A} π(a|s) r(s, a),

where d^π(s) is the stationary distribution of state s under policy π. The limit here is well-defined under (B1). Our aim is to find a policy π* that maximizes the average reward, i.e., π* = arg max_π J(π). In the average reward formulation, a policy π is assessed according to the expected differential reward associated with states s or state–action pairs (s, a). For all states s ∈ S and actions a ∈ A, the differential action-value function and the differential state-value function under policy π are defined as¹

    Q^π(s, a) = Σ_{t=0}^{∞} E[ r_{t+1} − J(π) | s_0 = s, a_0 = a, π ],    V^π(s) = Σ_{a∈A} π(a|s) Q^π(s, a).    (1)
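To make these definitions concrete, the following sketch (a hypothetical illustration with made-up transition and reward values, not taken from the paper) computes d^π and J(π) exactly for a small MDP by solving for the stationary distribution of the Markov chain induced by π:

```python
import numpy as np

# Toy 3-state, 2-action MDP (made-up numbers, for illustration only).
P = np.array([[[0.9, 0.1, 0.0], [0.1, 0.8, 0.1]],   # p(s'|s=0, a)
              [[0.2, 0.7, 0.1], [0.0, 0.3, 0.7]],   # p(s'|s=1, a)
              [[0.5, 0.0, 0.5], [0.3, 0.3, 0.4]]])  # p(s'|s=2, a)
R = np.array([[1.0, 0.0], [0.5, 2.0], [0.0, 1.0]])  # r(s, a)
pi = np.array([[0.5, 0.5], [0.2, 0.8], [0.7, 0.3]]) # pi(a|s)

# Transition matrix of the Markov chain induced by pi: P_pi[s, s'].
P_pi = np.einsum('sa,sat->st', pi, P)

# Stationary distribution d^pi: left eigenvector of P_pi for eigenvalue 1.
evals, evecs = np.linalg.eig(P_pi.T)
d = np.real(evecs[:, np.argmax(np.real(evals))])
d /= d.sum()

# Average reward: J(pi) = sum_s d(s) sum_a pi(a|s) r(s, a).
J = d @ (pi * R).sum(axis=1)
print("d^pi =", d, " J(pi) =", J)
```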
In policy gradient methods, we define a class of parameterized stochastic policies {π(·|s; θ), s ∈ S, θ ∈ Θ}, estimate the gradient of the average reward with respect to the policy parameters θ from the observed states, actions, and rewards, and then improve the policy by adjusting its parameters in the direction of the gradient. Since in this setting a policy π is represented by its parameters θ, policy-dependent functions such as J(π), d^π(·), V^π(·), and Q^π(·, ·) can be written as J(θ), d(·; θ), V(·; θ), and Q(·, ·; θ), respectively. We assume

(B2) For any state–action pair (s, a), the policy π(a|s; θ) is continuously differentiable in the parameters θ.

Previous works (Marbach, 1998; Sutton et al., 2000; Baxter & Bartlett, 2001) have shown that the gradient of the average reward for parameterized policies that satisfy (B1) and (B2) is given by²

    ∇J(θ) = Σ_{s∈S} d^π(s) Σ_{a∈A} ∇π(a|s) Q^π(s, a).    (2)
¹ From now on in the paper, we use the terms state-value function and action-value function instead of differential state-value function and differential action-value function.
² Throughout the paper, we use the notation ∇ to denote ∇_θ, the gradient w.r.t. the policy parameters.
Observe that if b(s) is any given function of s (also called a baseline), then

    Σ_{s∈S} d^π(s) Σ_{a∈A} ∇π(a|s) b(s) = Σ_{s∈S} d^π(s) b(s) ∇( Σ_{a∈A} π(a|s) ) = Σ_{s∈S} d^π(s) b(s) ∇(1) = 0,

and thus, for any baseline b(s), the gradient of the average reward can be written as

    ∇J(θ) = Σ_{s∈S} d^π(s) Σ_{a∈A} ∇π(a|s) (Q^π(s, a) − b(s)).    (3)
The baseline can be chosen in such a way that the variance of the gradient estimates is minimized (Greensmith et al., 2004).

The natural gradient, denoted ∇̃J(θ), can be calculated by linearly transforming the regular gradient, using the inverse Fisher information matrix of the policy: ∇̃J(θ) = G⁻¹(θ) ∇J(θ). The Fisher information matrix G(θ) is positive definite and symmetric, and is given by

    G(θ) = E_{s∼d^π, a∼π}[ ∇log π(a|s) ∇log π(a|s)ᵀ ].    (4)
3 Policy Gradient with Function Approximation

Now consider the case in which the action-value function for a fixed policy π, Q^π, is approximated by a learned function approximator. If the approximation is sufficiently good, we might hope to use it in place of Q^π in Eqs. 2 and 3, and still point roughly in the direction of the true gradient. Sutton et al. (2000) showed that if the approximation Q̂_w with parameters w is compatible, i.e., ∇_w Q̂_w(s, a) = ∇log π(a|s), and minimizes the mean squared error

    E^π(w) = Σ_{s∈S} d^π(s) Σ_{a∈A} π(a|s) [Q^π(s, a) − Q̂_w(s, a)]²    (5)

for parameter value w*, then we can replace Q^π with Q̂_{w*} in Eqs. 2 and 3. Thus, we work with a linear approximation Q̂_w(s, a) = wᵀψ(s, a), in which the ψ(s, a) are compatible features defined according to ψ(s, a) = ∇log π(a|s). Note that compatible features are well defined under (B2). The Fisher information matrix of Eq. 4 can be written using the compatible features as

    G(θ) = E_{s∼d^π, a∼π}[ ψ(s, a) ψ(s, a)ᵀ ].    (6)
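For a Gibbs (softmax) policy over a finite action set, π(a|s; θ) ∝ exp(θᵀφ(s, a)), the compatible features have the closed form ψ(s, a) = φ(s, a) − Σ_b π(b|s) φ(s, b). The sketch below is my own illustration, with an assumed state–action feature map φ (not code from the paper); it also checks the identity Σ_a π(a|s) ψ(s, a) = 0 that is used repeatedly in the lemmas below.

```python
import numpy as np

def softmax_policy(theta, phi_s):
    """phi_s: (num_actions, d) rows phi(s, a); returns pi(.|s; theta)."""
    logits = phi_s @ theta
    logits -= logits.max()                 # numerical stability
    p = np.exp(logits)
    return p / p.sum()

def compatible_features(theta, phi_s, a):
    """psi(s, a) = grad_theta log pi(a|s) = phi(s, a) - E_{b~pi}[phi(s, b)]."""
    pi = softmax_policy(theta, phi_s)
    return phi_s[a] - pi @ phi_s

# Sanity check: E_{a~pi}[psi(s, a)] = 0.
rng = np.random.default_rng(0)
theta, phi_s = rng.normal(size=4), rng.normal(size=(3, 4))
pi = softmax_policy(theta, phi_s)
mean_psi = sum(pi[a] * compatible_features(theta, phi_s, a) for a in range(3))
assert np.allclose(mean_psi, 0.0)
```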
Suppose E^π(w) denotes the mean squared error

    E^π(w) = Σ_{s∈S} d^π(s) Σ_{a∈A} π(a|s) [Q^π(s, a) − wᵀψ(s, a) − b(s)]²    (7)

of our compatible linear parameterized approximation wᵀψ(s, a) and an arbitrary baseline b(s). Let w* = arg min_w E^π(w) denote the optimal parameter. Lemma 1 shows that the value of w* does not depend on the given baseline b(s); as a result, the mean squared error problems of Eqs. 5 and 7 have the same solutions. Lemma 2 shows that if the parameter is set equal to w*, then the resulting mean squared error E^π(w*) (now treated as a function of the baseline b(s)) is further minimized when b(s) = V^π(s). In other words, the variance in the action-value-function estimator is minimized if the baseline is chosen to be the state-value function itself.³

Lemma 1 The optimum weight parameter w* for any given θ (policy π) satisfies⁴

    w* = G⁻¹(θ) E_{s∼d^π, a∼π}[ Q^π(s, a) ψ(s, a) ].
Proof Note that

    ∇_w E^π(w) = −2 Σ_{s∈S} d^π(s) Σ_{a∈A} π(a|s) [Q^π(s, a) − wᵀψ(s, a) − b(s)] ψ(s, a).    (8)

Equating the above to zero, one obtains

    Σ_{s∈S} d^π(s) Σ_{a∈A} π(a|s) ψ(s, a) ψ(s, a)ᵀ w* = Σ_{s∈S} d^π(s) Σ_{a∈A} π(a|s) Q^π(s, a) ψ(s, a) − Σ_{s∈S} d^π(s) Σ_{a∈A} π(a|s) b(s) ψ(s, a).

The last term on the right-hand side equals zero because Σ_{a∈A} π(a|s) ψ(s, a) = Σ_{a∈A} ∇π(a|s) = 0 for any state s. Now, from Eq. 8, the Hessian ∇²_w E^π(w) evaluated at w* can be seen to be 2G(θ). The claim follows because G(θ) is positive definite for any θ.

³ It is important to note that Lemma 2 is not about the minimum variance baseline for gradient estimation. It is about the minimum variance baseline of the action-value-function estimator.
⁴ This lemma is similar to Kakade's (2002) Theorem 1.
Next, given the optimum weight parameter w*, we obtain the minimum variance baseline in the action-value-function estimator corresponding to policy π. Thus we now consider E^π(w*) as a function of the baseline b, and obtain b* = arg min_b E^π(w*).

Lemma 2 For any given policy π, the minimum variance baseline b*(s) in the action-value-function estimator corresponds to the state-value function V^π(s).

Proof For any s ∈ S, let E^{π,s}(w*) = Σ_{a∈A} π(a|s) [Q^π(s, a) − w*ᵀψ(s, a) − b(s)]². Then E^π(w*) = Σ_{s∈S} d^π(s) E^{π,s}(w*). Note that by (B1), the Markov chain corresponding to any policy π is positive recurrent because the number of states is finite. Hence, d^π(s) > 0 for all s ∈ S. Thus, one needs to find the baseline b(s) that minimizes E^{π,s}(w*) for each s ∈ S. For any s ∈ S,

    ∂E^{π,s}(w*)/∂b(s) = −2 Σ_{a∈A} π(a|s) [Q^π(s, a) − w*ᵀψ(s, a) − b(s)].

Equating the above to zero, we obtain

    b*(s) = Σ_{a∈A} π(a|s) Q^π(s, a) − Σ_{a∈A} π(a|s) w*ᵀψ(s, a).

The rightmost term equals zero because Σ_{a∈A} π(a|s) ψ(s, a) = 0. Hence b*(s) = Σ_{a∈A} π(a|s) Q^π(s, a) = V^π(s). The second derivative of E^{π,s}(w*) w.r.t. b(s) equals 2. The claim follows.
From Lemmas 1 and 2, w*ᵀψ(s, a) is a least-squared optimal parametric representation for the advantage function A^π(s, a) = Q^π(s, a) − V^π(s) as well as for the action-value function Q^π(s, a). However, because E_{a∼π}[wᵀψ(s, a)] = Σ_{a∈A} π(a|s) wᵀψ(s, a) = 0, ∀s ∈ S, it is better to think of wᵀψ(s, a) as an approximation of the advantage function rather than of the action-value function.
The TD error δ_t is a random quantity defined according to δ_t = r_{t+1} − Ĵ_{t+1} + V̂(s_{t+1}) − V̂(s_t), where V̂ and Ĵ are consistent estimates of the state-value function and the average reward, respectively. Thus, these estimates satisfy E[V̂(s_t)|s_t, π] = V^π(s_t) and E[Ĵ_{t+1}|s_t, π] = J(π), for any t ≥ 0. The next lemma shows that δ_t is a consistent estimate of the advantage function A^π.

Lemma 3 Under a given policy π, we have E[δ_t|s_t, a_t, π] = A^π(s_t, a_t).

Proof Note that

    E[δ_t|s_t, a_t, π] = E[r_{t+1} − Ĵ_{t+1} + V̂(s_{t+1}) − V̂(s_t)|s_t, a_t, π] = r(s_t, a_t) − J(π) + E[V̂(s_{t+1})|s_t, a_t, π] − V^π(s_t).

Now

    E[V̂(s_{t+1})|s_t, a_t, π] = E[E[V̂(s_{t+1})|s_{t+1}, π]|s_t, a_t, π] = E[V^π(s_{t+1})|s_t, a_t] = Σ_{s_{t+1}∈S} p(s_{t+1}|s_t, a_t) V^π(s_{t+1}).

Also r(s_t, a_t) − J(π) + Σ_{s_{t+1}∈S} p(s_{t+1}|s_t, a_t) V^π(s_{t+1}) = Q^π(s_t, a_t). The claim follows.

By setting the baseline b(s) equal to the value function V^π(s), Eq. 3 can be written as ∇J(θ) = Σ_{s∈S} d^π(s) Σ_{a∈A} π(a|s) ψ(s, a) A^π(s, a). From Lemma 3, δ_t is a consistent estimate of the advantage function A^π(s, a). Thus, ∇̂J(θ) = δ_t ψ(s_t, a_t) is a consistent estimate of ∇J(θ). However, calculating δ_t requires having estimates, Ĵ, V̂, of the average reward and the value function. While an average reward estimate is simple enough to obtain given the single-stage reward function, the same is not necessarily true for the value function. We use function approximation for the value function as well. Suppose f(s) is a feature vector for state s. One may then approximate V^π(s) with vᵀf(s), where v is a parameter vector that can be tuned (for a fixed policy π) using a TD algorithm. In our algorithms, we use δ_t = r_{t+1} − Ĵ_{t+1} + v_tᵀf(s_{t+1}) − v_tᵀf(s_t) as an estimate of the TD error, where v_t corresponds to the value-function parameter at time t.

Let V̄^π(s) = Σ_{a∈A} π(a|s) [r(s, a) − J(π) + Σ_{s′∈S} p(s′|s, a) v^πᵀf(s′)], where v^πᵀf(s′) is an estimate of the value function V^π(s′) that is obtained upon convergence, viz., lim_{t→∞} v_t = v^π with probability one. Also, let δ_t^π = r_{t+1} − Ĵ_{t+1} + v^πᵀf(s_{t+1}) − v^πᵀf(s_t), where δ_t^π corresponds to a stationary estimate of the TD error with function approximation under policy π.

Lemma 4 E[δ_t^π ψ(s_t, a_t)|θ] = ∇J(θ) + Σ_{s∈S} d^π(s) [∇V̄^π(s) − ∇v^πᵀf(s)].

The proof of this lemma can be found in the extended version of this paper (Bhatnagar et al., 2007). Note that E[δ_t ψ(s_t, a_t)|θ] = ∇J(θ), provided δ_t is defined as δ_t = r_{t+1} − Ĵ_{t+1} + V̂(s_{t+1}) − V̂(s_t) (as was considered in Lemma 3). For the case with function approximation that we study, from Lemma 4, the quantity Σ_{s∈S} d^π(s) [∇V̄^π(s) − ∇v^πᵀf(s)] may be viewed as the error or bias in the estimate of the gradient of average reward that results from the use of function approximation.
4 Actor-Critic Algorithms

We present four new AC algorithms in this section. These algorithms are in the general form shown in Table 1. They update the policy parameters along the direction of the average-reward gradient. While estimates of the regular gradient are used for this purpose in Algorithm 1, natural-gradient estimates are used in Algorithms 2–4. While the critic updates in our algorithms can be easily extended to the case of TD(λ), λ > 0, we restrict our attention to the case λ = 0. In addition to assumptions (B1) and (B2), we make the following assumption:

(B3) The step-size schedules for the critic {α_t} and the actor {β_t} satisfy

    Σ_t α_t = Σ_t β_t = ∞,    Σ_t α_t², Σ_t β_t² < ∞,    lim_{t→∞} β_t/α_t = 0.    (9)

As a consequence of Eq. 9, β_t → 0 faster than α_t. Hence the critic has uniformly higher increments than the actor beyond some t₀, and thus it converges faster than the actor.
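For concreteness, one admissible pair of schedules (the exponents and constants below are arbitrary choices for illustration, not values from the paper) is α_t ∝ 1/(t+1)^0.6 and β_t ∝ 1/(t+1): both sums diverge, both squared sums converge, and β_t/α_t = O((t+1)^{−0.4}) → 0.

```python
# Illustrative two-timescale step sizes satisfying (B3); constants are arbitrary.
def alpha(t, a=0.1):
    return a / (t + 1) ** 0.6   # critic step size (faster timescale)

def beta(t, b=0.01):
    return b / (t + 1)          # actor step size (slower timescale)

for t in (0, 100, 10_000):
    print(t, alpha(t), beta(t), beta(t) / alpha(t))   # ratio -> 0
```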
Table 1: A Template for Incremental AC Algorithms.

1:  Input:
      - Randomized parameterized policy π(·|·; θ),
      - Value-function feature vector f(s).
2:  Initialization:
      - Policy parameters θ = θ₀,
      - Value-function weight vector v = v₀,
      - Step sizes α = α₀, β = β₀, ξ = cα₀,
      - Initial state s₀.
3:  for t = 0, 1, 2, . . . do
4:    Execution:
        - Draw action a_t ∼ π(a_t|s_t; θ_t),
        - Observe next state s_{t+1} ∼ p(s_{t+1}|s_t, a_t),
        - Observe reward r_{t+1}.
5:    Average Reward Update: Ĵ_{t+1} = (1 − ξ_t) Ĵ_t + ξ_t r_{t+1}
6:    TD Error: δ_t = r_{t+1} − Ĵ_{t+1} + v_tᵀ f(s_{t+1}) − v_tᵀ f(s_t)
7:    Critic Update: algorithm specific (see the text)
8:    Actor Update: algorithm specific (see the text)
9:  endfor
10: return policy and value-function parameters θ, v
We now present the critic and the actor updates of our four AC algorithms.

Algorithm 1 (Regular-Gradient AC):
    Critic Update:  v_{t+1} = v_t + α_t δ_t f(s_t),
    Actor Update:   θ_{t+1} = θ_t + β_t δ_t ψ(s_t, a_t).

This is the only AC algorithm presented in the paper that is based on the regular gradient estimate. This algorithm stores two parameter vectors, θ and v. Its per-time-step computational cost is linear in the number of policy and value-function parameters.
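As a concrete sketch of how the pieces fit together, the function below performs one time step of Algorithm 1 within the Table 1 template. The environment interface env.step, the feature maps f and ψ, the action sampler, and the step-size schedules α, β, ξ are assumed placeholders for illustration; the three update equations follow the text.

```python
def regular_gradient_ac_step(env, s, theta, v, J_hat, t,
                             f, psi, sample_action, alpha, beta, xi):
    """One step of Algorithm 1 (Regular-Gradient AC); interfaces assumed.
    f(s): value-function features; psi(s, a): compatible features;
    sample_action(s, theta): draws a ~ pi(.|s; theta)."""
    a = sample_action(s, theta)
    s_next, r = env.step(s, a)

    # Average reward update (step size xi_t = c * alpha_t, as in Table 1).
    J_hat = (1.0 - xi(t)) * J_hat + xi(t) * r

    # TD error with linear value-function approximation.
    delta = r - J_hat + v @ f(s_next) - v @ f(s)

    # Critic and actor updates of Algorithm 1.
    v = v + alpha(t) * delta * f(s)
    theta = theta + beta(t) * delta * psi(s, a)
    return s_next, theta, v, J_hat
```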
The next algorithm is based on the natural-gradient estimate ∇̃J(θ_t) = G⁻¹(θ_t) δ_t ψ(s_t, a_t) in place of the regular-gradient estimate in Algorithm 1. We derive a procedure for recursively estimating G⁻¹(θ) and show in Lemma 5 that our estimate G_t⁻¹ converges to G⁻¹(θ) as t → ∞ with probability one. This is required for proving convergence of this algorithm. The Fisher information matrix can be estimated in an online manner as

    G_{t+1} = (1/(t+1)) Σ_{i=0}^{t} ψ(s_i, a_i) ψᵀ(s_i, a_i).

One may obtain recursively G_{t+1} = (1 − 1/(t+1)) G_t + (1/(t+1)) ψ(s_t, a_t) ψᵀ(s_t, a_t), or more generally

    G_{t+1} = (1 − α_t) G_t + α_t ψ(s_t, a_t) ψᵀ(s_t, a_t).    (10)
Using the Sherman–Morrison matrix inversion lemma, one obtains

    G_{t+1}⁻¹ = (1/(1 − α_t)) [ G_t⁻¹ − α_t (G_t⁻¹ ψ(s_t, a_t)) (G_t⁻¹ ψ(s_t, a_t))ᵀ / (1 − α_t + α_t ψ(s_t, a_t)ᵀ G_t⁻¹ ψ(s_t, a_t)) ].    (11)
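Eq. 11 can be transcribed directly; maintaining G_t⁻¹ this way costs O(d²) per step instead of the O(d³) of a fresh inversion. The sketch below is my own illustration (variable names assumed) and numerically checks the recursion against directly inverting the Eq. 10 iterate.

```python
import numpy as np

def fisher_inverse_update(G_inv, psi, alpha_t):
    """Eq. 11: recursive update of G_{t+1}^{-1} from G_t^{-1}, O(d^2) per step."""
    Gp = G_inv @ psi
    denom = 1.0 - alpha_t + alpha_t * psi @ Gp
    return (G_inv - alpha_t * np.outer(Gp, Gp) / denom) / (1.0 - alpha_t)

# Consistency check against the direct update of Eq. 10.
rng = np.random.default_rng(1)
d, alpha_t = 4, 0.05
G = np.eye(d)            # G_0 = k I with k = 1
G_inv = np.eye(d)
for _ in range(100):
    psi = rng.normal(size=d)
    G = (1 - alpha_t) * G + alpha_t * np.outer(psi, psi)   # Eq. 10
    G_inv = fisher_inverse_update(G_inv, psi, alpha_t)     # Eq. 11
assert np.allclose(G_inv, np.linalg.inv(G))
```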
For our Algorithms 2 and 4, we require the following additional assumption for the convergence analysis:

(B4) The iterates G_t and G_t⁻¹ satisfy sup_{t,θ,s,a} ‖G_t‖ < ∞ and sup_{t,θ,s,a} ‖G_t⁻¹‖ < ∞.

Lemma 5 For any given parameter θ, G_t⁻¹ in Eq. 11 satisfies G_t⁻¹ → G⁻¹(θ) as t → ∞ with probability one.

Proof It is easy to see from Eq. 10 that G_t → G(θ) as t → ∞ with probability one, for any given θ held fixed. For a fixed θ,

    ‖G_t⁻¹ − G⁻¹(θ)‖ = ‖G⁻¹(θ)(G(θ) − G_t)G_t⁻¹‖ = ‖G⁻¹(θ)(G(θ)G_t⁻¹ − I)‖
                     ≤ sup_{t,θ,s,a} ‖G⁻¹(θ)‖ · sup_{t,θ,s,a} ‖G_t⁻¹‖ · ‖G(θ) − G_t‖ → 0   as t → ∞

by assumption (B4). The claim follows.
Our second algorithm stores a matrix G⁻¹ and two parameter vectors, θ and v. Its per-time-step computational cost is linear in the number of value-function parameters and quadratic in the number of policy parameters.

Algorithm 2 (Natural-Gradient AC with Fisher Information Matrix):
    Critic Update:  v_{t+1} = v_t + α_t δ_t f(s_t),
    Actor Update:   θ_{t+1} = θ_t + β_t G_{t+1}⁻¹ δ_t ψ(s_t, a_t),

with the estimate of the inverse Fisher information matrix updated according to Eq. 11. We let G_0⁻¹ = kI, where k is a positive constant. Thus G_0 and G_0⁻¹ are positive definite and symmetric matrices. From Eq. 10, G_t, t > 0, can be seen to be positive definite and symmetric because these are convex combinations of positive definite and symmetric matrices. Hence, G_t⁻¹, t > 0, are positive definite and symmetric as well.
As mentioned in Section 3, it is better to think of the compatible approximation wᵀψ(s, a) as an approximation of the advantage function rather than of the action-value function. In our next algorithm we tune the parameters w in such a way as to minimize an estimate of the least-squared error E^π(w) = E_{s∼d^π, a∼π}[(wᵀψ(s, a) − A^π(s, a))²]. The gradient of E^π(w) is thus ∇_w E^π(w) = 2 E_{s∼d^π, a∼π}[(wᵀψ(s, a) − A^π(s, a)) ψ(s, a)], which can be estimated as ∇̂_w E^π(w) = 2 [ψ(s_t, a_t) ψ(s_t, a_t)ᵀ w − δ_t ψ(s_t, a_t)]. Hence, we update advantage parameters w along with value-function parameters v in the critic update of this algorithm. As with Peters et al. (2005), we use the natural-gradient estimate ∇̃J(θ_t) = w_{t+1} in the actor update of Algorithm 3. This algorithm stores three parameter vectors, v, w, and θ. Its per-time-step computational cost is linear in the number of value-function parameters and quadratic in the number of policy parameters.

Algorithm 3 (Natural-Gradient AC with Advantage Parameters):
    Critic Update:  v_{t+1} = v_t + α_t δ_t f(s_t),
                    w_{t+1} = [I − α_t ψ(s_t, a_t) ψ(s_t, a_t)ᵀ] w_t + α_t δ_t ψ(s_t, a_t),
    Actor Update:   θ_{t+1} = θ_t + β_t w_{t+1}.
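A matching sketch of one time step of Algorithm 3, under the same assumed interfaces as the Algorithm 1 sketch above; the only changes are the advantage-parameter recursion in the critic and the actor stepping along w_{t+1}.

```python
def natural_gradient_ac_step(env, s, theta, v, w, J_hat, t,
                             f, psi, sample_action, alpha, beta, xi):
    """One step of Algorithm 3 (Natural-Gradient AC with Advantage Parameters)."""
    a = sample_action(s, theta)
    s_next, r = env.step(s, a)

    J_hat = (1.0 - xi(t)) * J_hat + xi(t) * r
    delta = r - J_hat + v @ f(s_next) - v @ f(s)

    psi_t = psi(s, a)
    # Critic: value-function weights and advantage parameters.
    v = v + alpha(t) * delta * f(s)
    # w_{t+1} = [I - alpha_t psi psi^T] w_t + alpha_t delta_t psi.
    w = w - alpha(t) * (psi_t @ w) * psi_t + alpha(t) * delta * psi_t
    # Actor follows the natural-gradient estimate w_{t+1}.
    theta = theta + beta(t) * w
    return s_next, theta, v, w, J_hat
```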
Although an estimate of G⁻¹(θ) is not explicitly computed and used in Algorithm 3, the convergence analysis of this algorithm shows that the overall scheme still moves in the direction of the natural gradient of average reward. In Algorithm 4, however, we explicitly estimate G⁻¹(θ) (as in Algorithm 2) and use it in the critic update for w. The overall scheme is again seen to follow the direction of the natural gradient of average reward. Here, we let ∇̃_w E^π(w) = 2 G_t⁻¹ [ψ(s_t, a_t) ψ(s_t, a_t)ᵀ w − δ_t ψ(s_t, a_t)] be the estimate of the natural gradient of the least-squared error E^π(w). This also simplifies the critic update for w. Algorithm 4 stores a matrix G⁻¹ and three parameter vectors, v, w, and θ. Its per-time-step computational cost is linear in the number of value-function parameters and quadratic in the number of policy parameters.

Algorithm 4 (Natural-Gradient AC with Advantage Parameters and Fisher Information Matrix):
    Critic Update:  v_{t+1} = v_t + α_t δ_t f(s_t),
                    w_{t+1} = (1 − α_t) w_t + α_t G_{t+1}⁻¹ δ_t ψ(s_t, a_t),
    Actor Update:   θ_{t+1} = θ_t + β_t w_{t+1},

where the estimate of the inverse Fisher information matrix is updated according to Eq. 11.
5 Convergence of Our Actor-Critic Algorithms

Since our algorithms are gradient-based, one cannot expect to prove convergence to a globally optimal policy. The best that one could hope for is convergence to a local maximum of J(θ). However, because the critic will generally converge to an approximation of the desired projection of the value function (defined by the value-function features f) in these algorithms, the corresponding convergence results are necessarily weaker, as indicated by the following theorem.

Theorem 1 For the parameter iterations in Algorithms 1–4,⁵ we have (Ĵ_t, v_t, θ_t) → {(J(θ*), v^{θ*}, θ*) | θ* ∈ Z} as t → ∞ with probability one, where the set Z corresponds to the set of local maxima of a performance function whose gradient is E[δ_t^π ψ(s_t, a_t)|θ] (cf. Lemma 4).

For the proof of this theorem, please refer to Section 6 (Convergence Analysis) of the extended version of this paper (Bhatnagar et al., 2007). This theorem indicates that the policy and state-value-function parameters converge to a local maximum of a performance function that corresponds to the average reward plus a measure of the TD error inherent in the function approximation.
6 Relation to Previous Algorithms

Actor-Critic Algorithm of Konda and Tsitsiklis (2000): Unlike our Algorithms 2–4, their algorithm does not use estimates of the natural gradient in its actor's update. Their algorithm is similar to our Algorithm 1, but with some key differences. 1) Konda's algorithm uses the Markov process of state–action pairs, and thus its critic update is based on an action-value function. Algorithm 1 uses the state process, and therefore its critic update is based on a state-value function. 2) Whereas Algorithm 1 uses a TD error in both critic and actor recursions, Konda's algorithm uses a TD error only in its critic update. The actor recursion in Konda's algorithm uses an action-value estimate instead. Because the TD error is a consistent estimate of the advantage function (Lemma 3), the actor recursion in Algorithm 1 uses estimates of advantages instead of action-values, which may result in lower variance. 3) The convergence analysis of Konda's algorithm is based on the martingale approach and aims at bounding error terms and directly showing convergence; convergence to a local optimum is shown when a TD(1) critic is used. For the case where λ < 1, they show that given an ε > 0, there exists λ close enough to one such that when a TD(λ) critic is used, one gets lim inf_t |∇J(θ_t)| < ε with probability one. Unlike Konda and Tsitsiklis, we primarily use the ordinary differential equation (ODE) based approach for our convergence analysis. Though we use martingale arguments in our analysis, these are restricted to showing that the noise terms asymptotically diminish; the resulting scheme can be viewed as an Euler discretization of the associated ODE.

⁵ The proof of this theorem requires another assumption, viz., (A3) in the extended version of this paper (Bhatnagar et al., 2007), in addition to (B1)–(B3) (resp. (B1)–(B4)) for Algorithms 1 and 3 (resp. for Algorithms 2 and 4). This was not included in this paper due to space limitations.

Natural Actor-Critic Algorithm of Peters et al. (2005): Our Algorithms 2–4 extend their algorithm by being fully incremental, and we provide convergence proofs. Peters's algorithm uses a least-squares TD method in its critic's update, whereas all our algorithms are fully incremental. It is not clear how to satisfactorily incorporate least-squares TD methods in a context in which the policy is changing, and our proof techniques do not immediately extend to this case.
7 Conclusions and Future Work

We have introduced and analyzed four AC algorithms utilizing both linear function approximation and bootstrapping, a combination which seems essential to large-scale applications of reinforcement learning. All of the algorithms are based on existing ideas such as TD learning, natural policy gradients, and two-timescale stochastic approximation, but combined in new ways. The main contribution of this paper is proving convergence of the algorithms to a local maximum in the space of policy and value-function parameters. Our Algorithms 2–4 are explorations of the use of natural gradients within an AC architecture. The way we use natural gradients is distinctive in that it is totally incremental: the policy is changed on every time step, yet the gradient computation is never reset as it is in the algorithm of Peters et al. (2005). Algorithm 3 is perhaps the most interesting of the three natural-gradient algorithms. It never explicitly stores an estimate of the inverse Fisher information matrix and, as a result, it requires less computation. In empirical experiments using our algorithms (not reported here), we observed that it is easier to find good parameter settings for Algorithm 3 than for the other natural-gradient algorithms and, perhaps because of this, it converged more rapidly than the others and than Konda's algorithm. All our algorithms performed better than Konda's algorithm.

There are a number of ways in which our results are limited and suggest future work. 1) It is important to characterize the quality of the converged solutions, either by bounding the performance loss due to bootstrapping and approximation error, or through a thorough empirical study. 2) The algorithms can be extended to incorporate eligibility traces and least-squares methods. As discussed earlier, the former seems straightforward whereas the latter requires more fundamental extensions. 3) Application of the algorithms to real-world problems is needed to assess their ultimate utility.
References

Bagnell, J., & Schneider, J. (2003). Covariant policy search. Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence.
Barto, A. G., Sutton, R. S., & Anderson, C. (1983). Neuron-like elements that can solve difficult learning control problems. IEEE Transactions on Systems, Man and Cybernetics, 13, 835–846.
Baxter, J., & Bartlett, P. (2001). Infinite-horizon policy-gradient estimation. JAIR, 15, 319–350.
Bhatnagar, S., Sutton, R. S., Ghavamzadeh, M., & Lee, M. (2007). Natural actor-critic algorithms. Submitted to Automatica.
Greensmith, E., Bartlett, P., & Baxter, J. (2004). Variance reduction techniques for gradient estimates in reinforcement learning. Journal of Machine Learning Research, 5, 1471–1530.
Kakade, S. (2002). A natural policy gradient. Proceedings of NIPS 14.
Konda, V., & Tsitsiklis, J. (2000). Actor-critic algorithms. Proceedings of NIPS 12 (pp. 1008–1014).
Marbach, P. (1998). Simulation-based methods for Markov decision processes. Doctoral dissertation, MIT.
Peters, J., Vijayakumar, S., & Schaal, S. (2003). Reinforcement learning for humanoid robotics. Proceedings of the Third IEEE-RAS International Conference on Humanoid Robots.
Peters, J., Vijayakumar, S., & Schaal, S. (2005). Natural actor-critic. Proceedings of the Sixteenth European Conference on Machine Learning (pp. 280–291).
Sutton, R. S. (1984). Temporal credit assignment in reinforcement learning. Doctoral dissertation, UMass Amherst.
Sutton, R. S. (1988). Learning to predict by the methods of temporal differences. Machine Learning, 3, 9–44.
Sutton, R. S., & Barto, A. G. (1998). Reinforcement learning: An introduction. MIT Press.
Sutton, R. S., McAllester, D., Singh, S., & Mansour, Y. (2000). Policy gradient methods for reinforcement learning with function approximation. Proceedings of NIPS 12 (pp. 1057–1063).
2,490 | 3,259 | The Noisy-Logical Distribution and its Application to
Causal Inference
Alan Yuille
Department of Statistics
University of California at Los Angeles
Los Angeles, CA 90095
[email protected]
Hongjing Lu
Department of Psychology
University of California at Los Angeles
Los Angeles, CA 90095
[email protected]
Abstract
We describe a novel noisy-logical distribution for representing the distribution of
a binary output variable conditioned on multiple binary input variables. The distribution is represented in terms of noisy-ors and noisy-and-nots of causal features which are conjunctions of the binary inputs. The standard noisy-or and noisy-and-not models, used in causal reasoning and artificial intelligence, are special cases
of the noisy-logical distribution. We prove that the noisy-logical distribution is
complete in the sense that it can represent all conditional distributions provided a
sufficient number of causal factors are used. We illustrate the noisy-logical distribution by showing that it can account for new experimental findings on how
humans perform causal reasoning in complex contexts. We speculate on the use
of the noisy-logical distribution for causal reasoning and artificial intelligence.
1 Introduction
The noisy-or and noisy-and-not conditional probability distributions are frequently studied in cognitive science for modeling causal reasoning [1], [2],[3] and are also used as probabilistic models
for artificial intelligence [4]. It has been shown, for example, that human judgments of the power of
causal cues in experiments involving two cues [1] can be interpreted in terms of maximum likelihood
estimation and model selection using these types of models [3].
But the noisy-or and noisy-and-not distributions are limited in the sense that they can only represent
a restricted set of all possible conditional distributions. This restriction is sometimes an advantage
because there may not be sufficient data to determine the full conditional distribution. Nevertheless it
would be better to have a representation that can expand to represent the full conditional distribution,
if sufficient data is available, but can be reduced to simpler forms (e.g. standard noisy-or) if there is
only limited data.
This motivates us to define the noisy-logical distribution. This is defined in terms of noisy-ors and noisy-and-nots of causal features which are conjunctions of the basic input variables (inspired
by the use of conjunctive features in [2] and the extensions in [5]). By restricting the choice of
causal features we can obtain the standard noisy-or and noisy-and-not models. We prove that the
noisy-logical distribution is complete in the sense that it can represent any conditional distribution
provided we use all the causal features. Overall, it gives a distribution whose complexity can be
adjusted by restricting the number of causal features.
To illustrate the noisy-logical distribution we apply it to modeling some recent human experiments
on causal reasoning in complex environments [6]. We show that noisy-logical distributions involving causal factors are able to account for human performance. By contrast, an alternative linear
model gives predictions which are the opposite of the observed trends in human causal judgments.
Section (2) presents the noisy-logical distribution for the case with two input causes (the case commonly studied in causal reasoning). In section (3) we specify the full noisy-logical distribution and
we prove its completeness in section (4). Section (5) illustrates the noisy-logical distribution by
showing that it accounts for recent experimental findings in causal reasoning.
2 The Case with N = 2 Causes

In this section we study the simple case in which the binary output effect E depends only on two binary-valued causes C1, C2. This covers most of the work reported in the cognitive science literature [1],[3]. In this case, the probability distribution is specified by the four numbers P(E = 1|C1, C2), for C1 ∈ {0, 1}, C2 ∈ {0, 1}.
To define the noisy-logical distribution over two variables P(E = 1|C1, C2), we introduce three concepts. Firstly, we define four binary-valued causal features Ψ0(·), Ψ1(·), Ψ2(·), Ψ3(·) which are functions of the input state C⃗ = (C1, C2). They are defined by Ψ0(C⃗) = 1, Ψ1(C⃗) = C1, Ψ2(C⃗) = C2, Ψ3(C⃗) = C1 ∧ C2, where ∧ denotes the logical-and operation (i.e. C1 ∧ C2 = 1 if C1 = C2 = 1 and C1 ∧ C2 = 0 otherwise). Ψ3(C⃗) is the conjunction of C1 and C2. Secondly, we introduce binary-valued hidden states E0, E1, E2, E3 which are caused by the corresponding features Ψ0, Ψ1, Ψ2, Ψ3. We define P(Ei = 1|Ψi; ωi) = ωi Ψi with ωi ∈ [0, 1], for i = 0, ..., 3, with ω⃗ = (ω0, ω1, ω2, ω3). Thirdly, we define the output effect E to be a logical combination of the states E0, E1, E2, E3, which we write in the form δ_{E, f(E0,E1,E2,E3)}, where f(·, ·, ·, ·) is a logic function formed by a combination of the three logic operations AND, OR, NOT. This induces the noisy-logical distribution

    P_nl(E|C⃗; ω⃗) = Σ_{E0,...,E3} δ_{E, f(E0,E1,E2,E3)} Π_{i=0}^{3} P(Ei|Ψi(C⃗); ωi).

The noisy-logical distribution is characterized by the parameters ω0, ..., ω3 and the choice of the logic function f(·, ·, ·, ·). We can represent the distribution by a circuit diagram where the output E is a logical function of the hidden states E0, ..., E3 and each state is caused probabilistically by the corresponding causal features Ψ0, ..., Ψ3, as shown in Figure (1).
Figure 1: Circuit diagram in the case with N = 2 causes.
The noisy-logical distribution includes the commonly known distributions, noisy-or and noisy-and-not, as special cases. To obtain the noisy-or, we set E = E1 ∨ E2 (i.e. E1 ∨ E2 = 0 if E1 = E2 = 0 and E1 ∨ E2 = 1 otherwise). A simple calculation shows that the noisy-logical distribution reduces to the noisy-or P_nor(E|C1, C2; ω1, ω2) [4], [1]:

    P_nl(E = 1|C1, C2; ω1, ω2) = Σ_{E1,E2} δ_{1, E1∨E2} P(E1|Ψ1(C⃗); ω1) P(E2|Ψ2(C⃗); ω2)
                               = ω1 C1 (1 − ω2 C2) + (1 − ω1 C1) ω2 C2 + ω1 ω2 C1 C2
                               = ω1 C1 + ω2 C2 − ω1 ω2 C1 C2 = P_nor(E = 1|C1, C2; ω1, ω2).    (1)

To obtain the noisy-and-not, we set E = E1 ∧ ¬E2 (i.e. E1 ∧ ¬E2 = 1 if E1 = 1, E2 = 0 and E1 ∧ ¬E2 = 0 otherwise). The noisy-logical distribution reduces to the noisy-and-not P_n-and-not(E|C1, C2; ω1, ω2) [4]:

    P_nl(E = 1|C1, C2; ω1, ω2) = Σ_{E1,E2} δ_{1, E1∧¬E2} P(E1|Ψ1(C⃗); ω1) P(E2|Ψ2(C⃗); ω2)
                               = ω1 C1 {1 − ω2 C2} = P_n-and-not(E = 1|C1, C2; ω1, ω2).    (2)
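As a quick illustration (my own code, not from the paper), the closed forms of Eqs. 1 and 2 can be checked against the defining sum over the hidden states E1, E2:

```python
import itertools

def noisy_or(c1, c2, w1, w2):
    # Eq. 1: P(E=1 | C1, C2) with E = E1 v E2.
    return w1 * c1 + w2 * c2 - w1 * w2 * c1 * c2

def noisy_and_not(c1, c2, w1, w2):
    # Eq. 2: P(E=1 | C1, C2) with E = E1 ^ not-E2.
    return w1 * c1 * (1.0 - w2 * c2)

def noisy_logical(c1, c2, w1, w2, logic):
    # Defining sum over hidden states E1, E2 with P(Ei=1) = wi * Ci.
    p1, p2 = w1 * c1, w2 * c2
    total = 0.0
    for e1, e2 in itertools.product([0, 1], repeat=2):
        pe = (p1 if e1 else 1 - p1) * (p2 if e2 else 1 - p2)
        total += pe * logic(e1, e2)
    return total

w1, w2 = 0.7, 0.4
for c1, c2 in itertools.product([0, 1], repeat=2):
    assert abs(noisy_or(c1, c2, w1, w2)
               - noisy_logical(c1, c2, w1, w2, lambda a, b: a | b)) < 1e-12
    assert abs(noisy_and_not(c1, c2, w1, w2)
               - noisy_logical(c1, c2, w1, w2, lambda a, b: a & (1 - b))) < 1e-12
```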
We claim that noisy-logical distributions of this form can represent any conditional distribution P(E|C⃗). The logical function f(E0, E1, E2, E3) will be expressed as a combination of the logic operations AND-NOT, OR. The parameters of the distribution are given by ω0, ω1, ω2, ω3.

The proof of this claim will be given for the general case in the next section. To get some insight, we consider the special case where we only know the values P(E|C1 = 1, C2 = 0) and P(E|C1 = 1, C2 = 1). This situation is studied in cognitive science where C1 is considered to be a background cause which always takes value 1, see [1],[3]. In this case, the only causal features considered are Ψ1(C⃗) = C1 and Ψ2(C⃗) = C2.

Result. The noisy-or and the noisy-and-not models, given by equations (1,2), are sufficient to fit any values of P(E = 1|1, 0) and P(E = 1|1, 1). (In this section we use P(E = 1|1, 0) to denote P(E = 1|C1 = 1, C2 = 0) and P(E = 1|1, 1) to denote P(E = 1|C1 = 1, C2 = 1).) The noisy-or and the noisy-and-not fit the cases P(E = 1|1, 1) ≥ P(E = 1|1, 0) and P(E = 1|1, 1) ≤ P(E = 1|1, 0), respectively. (In Cheng's terminology [1], C2 is respectively a generative or preventative cause.)

Proof. We can fit both the noisy-or and noisy-and-not models to P(E|1, 0) by setting ω1 = P(E = 1|1, 0), so it remains to fit the models to P(E|1, 1). There are three cases to consider: (i) P(E = 1|1, 1) > P(E = 1|1, 0), (ii) P(E = 1|1, 1) < P(E = 1|1, 0), and (iii) P(E = 1|1, 1) = P(E = 1|1, 0). It follows directly from equations (1,2) that P_nor(E = 1|1, 1) ≥ P_nor(E = 1|1, 0) and P_n-and-not(E = 1|1, 1) ≤ P_n-and-not(E = 1|1, 0), with equality only if P(E = 1|1, 1) = P(E = 1|1, 0). Hence we must fit a noisy-or and a noisy-and-not model to cases (i) and (ii) respectively. For case (i), this requires solving P(E = 1|1, 1) = ω1 + ω2 − ω1 ω2 to obtain ω2 = {P(E = 1|1, 1) − P(E = 1|1, 0)}/{1 − P(E = 1|1, 0)} (note that the condition P(E = 1|1, 1) > P(E = 1|1, 0) ensures that ω2 ∈ [0, 1]). For case (ii), we must solve P(E = 1|1, 1) = ω1 − ω1 ω2, which gives ω2 = {P(E = 1|1, 0) − P(E = 1|1, 1)}/P(E = 1|1, 0) (the condition P(E = 1|1, 1) < P(E = 1|1, 0) ensures that ω2 ∈ [0, 1]). For case (iii), we can fit either model by setting ω2 = 0.
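This fitting procedure is easy to mechanize. The sketch below (a hypothetical helper of mine, following the three cases of the proof verbatim) returns the model type and parameters from the two conditional probabilities:

```python
def fit_two_cause_model(p10, p11):
    """Fit P(E=1|1,0) = p10 and P(E=1|1,1) = p11 per the Result above.
    Returns (model, w1, w2) with model in {'noisy-or', 'noisy-and-not'}."""
    w1 = p10
    if p11 > p10:                        # case (i): C2 generative
        w2 = (p11 - p10) / (1.0 - p10)
        return 'noisy-or', w1, w2
    elif p11 < p10:                      # case (ii): C2 preventative
        w2 = (p10 - p11) / p10
        return 'noisy-and-not', w1, w2
    return 'noisy-or', w1, 0.0           # case (iii): either model, w2 = 0

# Example: p10 = 0.5, p11 = 0.8 gives a noisy-or with w2 = 0.6.
print(fit_two_cause_model(0.5, 0.8))
```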
3 The Noisy-Logical Distribution for N Causes

We next consider representing probability distributions of the form P(E|C⃗), where E ∈ {0, 1} and C⃗ = (C1, ..., CN) with Ci ∈ {0, 1}, ∀i = 1, ..., N. These distributions can be characterized by the values of P(E = 1|C⃗) for all possible 2^N values of C⃗.

We define the set of 2^N binary-valued causal features {Ψi(C⃗) : i = 0, ..., 2^N − 1}. These features are ordered so that Ψ0(C⃗) = 1, Ψi(C⃗) = Ci for i = 1, ..., N, Ψ_{N+1}(C⃗) = C1 ∧ C2 is the conjunction of C1 and C2, and so on. The feature Ψ(C⃗) = Ca ∧ Cb ∧ ... ∧ Cg takes value 1 if Ca = Cb = ... = Cg = 1 and value 0 otherwise.

We define binary variables {Ei : i = 0, ..., 2^N − 1} which are related to the causal features {Ψi : i = 0, ..., 2^N − 1} by distributions P(Ei = 1|Ψi; ωi) = ωi Ψi, specified by parameters {ωi : i = 0, ..., 2^N − 1}.

Then we define the output variable E to be a logical (i.e. deterministic) function of the {Ei : i = 0, ..., 2^N − 1}. This can be thought of as a circuit diagram. In particular, we define E = f(E0, ..., E_{2^N−1}) = ((((E1 ⊗ E2) ⊗ E3) ⊗ E4) ...), where E1 ⊗ E2 can be E1 ∨ E2 or E1 ∧ ¬E2 (and ¬E denotes logical negation). This gives the general noisy-logical distribution, as shown in Figure (2):

    P(E = 1|C⃗; ω⃗) = Σ_{E⃗} δ_{E, f(E0,...,E_{2^N−1})} Π_{i=0}^{2^N−1} P(Ei = 1|Ψi; ωi).    (3)

4 The Completeness Result
This section proves that the noisy-logical distribution is capable of representing any conditional
distribution. This is the main theoretical result of this paper.
Figure 2: Circuit diagram in the case with N causes. All conditional distributions can be represented in this form if we use all possible 2^N causal features Ψ, choose the correct parameters ω, and select the correct logical combinations ⊗.
Result We can represent any conditional distribution P(E|C⃗) defined on binary variables in terms of a noisy-logical distribution given by equation (3).

Proof. The proof is constructive. We show that any distribution P(E|C⃗) can be expressed as a noisy-logical distribution.

We order the states C⃗_0, ..., C⃗_{2^N−1}. This ordering must obey Ψi(C⃗_i) = 1 and Ψi(C⃗_j) = 0, ∀j < i. This ordering can be obtained by setting C⃗_0 = (0, ..., 0), then selecting the terms with a single conjunction (i.e. only one Ci is non-zero), then those with two conjunctions (i.e. two Ci's are non-zero), then those with three conjunctions, and so on.

The strategy is to use induction to build a noisy-logical distribution which agrees with P(E|C⃗) for all values of C⃗. We loop over the states and incrementally construct the logical function f(E0, ..., E_{2^N−1}) and estimate the parameters ω0, ..., ω_{2^N−1}. It is convenient to recursively define a variable E^{i+1} = E^i ⊗ Ei, so that f(E0, ..., E_{2^N−1}) = E^{2^N−1}.

We start the induction using feature Ψ0(C⃗) = 1. Set E^0 = E0 and ω0 = P(E|0, ..., 0). Then P(E^0|C⃗_0; ω0) = P(E|C⃗_0), so the noisy-logical distribution fits the data for input C⃗_0.
Now proceed by induction to determine E^{M+1} and ω_{M+1}, assuming that we have determined E^M and ω0, ..., ωM such that P(E^M = 1|C⃗_i; ω0, ..., ωM) = P(E = 1|C⃗_i), for i = 0, ..., M. There are three cases to consider, which are analogous to the cases considered in the section with two causes.

Case 1. If P(E = 1|C⃗_{M+1}) > P(E^M = 1|C⃗_{M+1}; ω0, ..., ωM), we need Ψ_{M+1}(C⃗) to be a generative feature. Set E^{M+1} = E^M ∨ E_{M+1} with P(E_{M+1} = 1|Ψ_{M+1}; ω_{M+1}) = ω_{M+1} Ψ_{M+1}. Then we obtain:

    P(E^{M+1} = 1|C⃗_{M+1}; ω0, ..., ω_{M+1}) = P(E^M = 1|C⃗_{M+1}; ω0, ..., ωM) + P(E_{M+1} = 1|Ψ_{M+1}(C⃗); ω_{M+1}) − P(E^M = 1|C⃗_{M+1}; ω0, ..., ωM) P(E_{M+1} = 1|Ψ_{M+1}(C⃗); ω_{M+1})
    = P(E^M = 1|C⃗_{M+1}; ω0, ..., ωM) + ω_{M+1} Ψ_{M+1}(C⃗) − P(E^M = 1|C⃗_{M+1}; ω0, ..., ωM) ω_{M+1} Ψ_{M+1}(C⃗).

In particular, we see that P(E^{M+1} = 1|C⃗_i; ω0, ..., ω_{M+1}) = P(E^M = 1|C⃗_i; ω0, ..., ωM) = P(E = 1|C⃗_i) for i < M + 1 (using Ψ_{M+1}(C⃗_i) = 0, ∀i < M + 1). To determine the value of ω_{M+1}, we must solve P(E = 1|C⃗_{M+1}) = P(E^M = 1|C⃗_{M+1}; ω0, ..., ωM) + ω_{M+1} − P(E^M = 1|C⃗_{M+1}; ω0, ..., ωM) ω_{M+1} (using Ψ_{M+1}(C⃗_{M+1}) = 1). This gives ω_{M+1} = {P(E = 1|C⃗_{M+1}) − P(E^M = 1|C⃗_{M+1}; ω0, ..., ωM)}/{1 − P(E^M = 1|C⃗_{M+1}; ω0, ..., ωM)} (the conditions ensure that ω_{M+1} ∈ [0, 1]).

Case 2. If P(E = 1|C⃗_{M+1}) < P(E^M = 1|C⃗_{M+1}; ω0, ..., ωM), we need Ψ_{M+1}(C⃗) to be a preventative feature. Set E^{M+1} = E^M ∧ ¬E_{M+1} with P(E_{M+1} = 1|Ψ_{M+1}; ω_{M+1}) = ω_{M+1} Ψ_{M+1}. Then we obtain:

    P(E^{M+1} = 1|C⃗_{M+1}; ω0, ..., ω_{M+1}) = P(E^M = 1|C⃗_{M+1}; ω0, ..., ωM) {1 − ω_{M+1} Ψ_{M+1}(C⃗)}.    (4)

As for the first case, P(E^{M+1} = 1|C⃗_i; ω0, ..., ω_{M+1}) = P(E^M = 1|C⃗_i; ω0, ..., ωM) = P(E = 1|C⃗_i) for i < M + 1 (because Ψ_{M+1}(C⃗_i) = 0, ∀i < M + 1). To determine the value of ω_{M+1}, we must solve P(E = 1|C⃗_{M+1}) = P(E^M = 1|C⃗_{M+1}; ω0, ..., ωM) {1 − ω_{M+1}} (using Ψ_{M+1}(C⃗_{M+1}) = 1). This gives ω_{M+1} = {P(E^M = 1|C⃗_{M+1}; ω0, ..., ωM) − P(E = 1|C⃗_{M+1})}/P(E^M = 1|C⃗_{M+1}; ω0, ..., ωM) (the conditions ensure that ω_{M+1} ∈ [0, 1]).

Case 3. If P(E = 1|C⃗_{M+1}) = P(E^M = 1|C⃗_{M+1}; ω0, ..., ωM), then we do nothing.
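The induction translates directly into code. The sketch below (my own illustration, not the authors' implementation) enumerates the 2^N states in conjunction order, fits each ω_i by the three cases above, and verifies that the constructed noisy-logical model reproduces an arbitrary conditional distribution exactly.

```python
import itertools
import numpy as np

def build_noisy_logical(p_target, N):
    """p_target: dict mapping each C in {0,1}^N to P(E=1|C).
    Returns (states, params, model_prob); params[i] = (omega_i, op_i)."""
    # States ordered by number of active causes, so Psi_i(C_i) = 1 and
    # Psi_i(C_j) = 0 for all j < i, as required by the proof.
    states = sorted(itertools.product([0, 1], repeat=N), key=lambda c: (sum(c), c))
    psis = [lambda C, s=s: int(all(C[k] for k in range(N) if s[k]))
            for s in states]            # Psi_i = conjunction of causes active in state i

    def model_prob(C, params):          # P(E^M = 1 | C) under current parameters
        p = 0.0
        for (omega, op), psi in zip(params, psis):
            q = omega * psi(C)
            p = q if op is None else (p + q - p * q if op == 'or' else p * (1 - q))
        return p

    params = [(p_target[states[0]], None)]           # omega_0, E^0 = E_0
    for i in range(1, 2 ** N):
        cur, tgt = model_prob(states[i], params), p_target[states[i]]
        if tgt > cur:   params.append(((tgt - cur) / (1 - cur), 'or'))      # case 1
        elif tgt < cur: params.append(((cur - tgt) / cur, 'and-not'))       # case 2
        else:           params.append((0.0, 'or'))                         # case 3
    return states, params, model_prob

# Check on a random conditional distribution over N = 3 causes.
rng = np.random.default_rng(2)
N = 3
p_target = {c: rng.uniform(0.05, 0.95) for c in itertools.product([0, 1], repeat=N)}
states, params, model_prob = build_noisy_logical(p_target, N)
assert all(abs(model_prob(c, params) - p_target[c]) < 1e-12 for c in p_target)
```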
5 Cognitive Science Human Experiments
We illustrate noisy-logical distributions by applying them to model two recent cognitive science
experiments by Liljeholm and Cheng which involve causal reasoning in complex environments [6].
In these experiments, the participants are asked questions about the causal structure of the data.
But the participants are not given enough data to determine the full distribution (i.e. not enough
to determine the causal structure with certainty). Instead the experimental design forces them to
choose between two different causal structures.
We formulate this as a model selection problem [3]. Formally, we specify distributions P(D|ω⃗, Graph) for generating the data D from a causal model specified by Graph and parameterized by ω⃗. These distributions will be of simple noisy-logical form. We set the prior distributions P(ω⃗|Graph) on the parameter values to be the uniform distribution. The evidence for the causal model is given by:

    P(D|Graph) = ∫ dω⃗ P(D|ω⃗, Graph) P(ω⃗|Graph).    (5)

We then evaluate the log-likelihood ratio log{P(D|Graph1)/P(D|Graph2)} between two causal models Graph1 and Graph2, called the causal support [3], and use this to predict the performance of the participants. This gives good fits to the experimental results.
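Since the prior is uniform, the evidence integral of Eq. 5 can be approximated by simple Monte Carlo. The sketch below is my own hypothetical encoding of the two-cue noisy-or case (the cell/cause encoding and sample size are assumptions, not the authors' code); it estimates causal support as a difference of log evidences.

```python
import numpy as np

def log_evidence_noisy_or(n_effect, n_total, cause_matrix,
                          n_samples=200_000, seed=0):
    """Monte Carlo estimate of log P(D|Graph) under a uniform prior on omega.
    cause_matrix[j]: binary cause vector for contingency cell j;
    n_effect[j] of n_total[j] trials in cell j produced the effect."""
    rng = np.random.default_rng(seed)
    w = rng.uniform(size=(n_samples, cause_matrix.shape[1]))
    # Noisy-or: P(E=1|C) = 1 - prod_i (1 - w_i C_i).
    p = 1.0 - np.prod(1.0 - w[:, None, :] * cause_matrix[None, :, :], axis=2)
    p = np.clip(p, 1e-12, 1 - 1e-12)
    loglik = (n_effect * np.log(p) + (n_total - n_effect) * np.log(1 - p)).sum(axis=1)
    m = loglik.max()
    return m + np.log(np.exp(loglik - m).mean())   # log-mean-exp over samples

# Condition (1) of Table 1 below; cells (B1), (B1,A), (B2), (B2,A,B); counts /24.
n_eff, n_tot = np.array([16, 22, 0, 18]), np.array([24, 24, 24, 24])
# Columns: [B1, B2, A, B]; in Graph1 medicine B is inert (last column unused).
graph1 = np.array([[1, 0, 0, 0], [1, 0, 1, 0], [0, 1, 0, 0], [0, 1, 1, 0]])
graph2 = np.array([[1, 0, 0, 0], [1, 0, 1, 0], [0, 1, 0, 0], [0, 1, 1, 1]])
support = (log_evidence_noisy_or(n_eff, n_tot, graph1)
           - log_evidence_noisy_or(n_eff, n_tot, graph2))
print("causal support for Graph1 over Graph2:", support)
```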
As an alternative theoretical model, we consider the possibility that the participants use the same causal structures, specified by Graph1 and Graph2, but use a linear model to combine cues. Formally, this corresponds to a model P(E = 1|C1, ..., CN) = ω1 C1 + ... + ωN CN (with ωi ≥ 0, ∀i = 1, ..., N, and ω1 + ... + ωN ≤ 1). This model corresponds [1, 3] to the classic Rescorla–Wagner learning model [8]. It cannot be expressed in simple noisy-logical form. Our simulations show that this model does not account for human participant performance.

We note that previous attempts to model experiments with multiple causes and conjunctions by Novick and Cheng [2] can be interpreted as performing maximum likelihood estimation of the parameters of noisy-logical distributions (their paper helped inspire our work). Those experiments, however, were simpler than those described here and model selection was not used. The extensive literature on the two-cause case [1, 3] can also be interpreted in terms of noisy-logical models.
5.1 Experiment I: Multiple Causes
In Experiment 1 of [6], the cover story involves a set of allergy patients who either did or did not
have a headache, and either had or had not received allergy medicines A and B. The experimental
participants were informed that two independent studies had been conducted in different labs using different patient groups. In the first study, patients were administered medicine A, whereas in
the second study patients were administered both medicines A and B. A simultaneous presentation format [7] was used to display the specific contingency conditions used in both studies to the
experimental subjects. The participants were then asked whether medicine B caused the headache.
We represent this experiment as follows using binary-valued variables E, B1, B2, C1, C2. The variable E indicates whether a headache has occurred (E = 1) or not (E = 0). B1 = 1 and B2 = 1 denote background causes for the two studies (which are always present). C1 and C2 indicate whether medicine A and B are present, respectively (e.g. C1 = 1 if A is present, C1 = 0 otherwise). The data D shown to the subjects can be expressed as D = (D1, D2), where D1 is the contingency table Pd(E = 1|B1 = 1, C1 = 0, C2 = 0), Pd(E = 1|B1 = 1, C1 = 1, C2 = 0) for the first study and D2 is the contingency table Pd(E = 1|B2 = 1, C1 = 0, C2 = 0), Pd(E = 1|B2 = 1, C1 = 1, C2 = 1) for the second study.
The experimental design forces the participants to choose between the two causal models shown on the left of figure (3). These causal models differ by whether C2 (i.e. medicine B) can have an effect or not. We set P(D|ω⃗, Graph) = P(D1|ω⃗1, Graph) P(D2|ω⃗2, Graph), where Di = {(E^μ, C⃗_i^μ)} (for i = 1, 2) is the contingency data. We express these distributions in the form P(Di|ω⃗i, Graph) = Π_μ Pi(E^μ|C⃗_i^μ, ω⃗i, Graph). For Graph1, P1(·) and P2(·) are P(E|B1, C1, ωB1, ωC1) and P(E|B2, C1, ωB2, ωC1). For Graph2, P1(·) and P2(·) are P(E|B1, C1, ωB1, ωC1) and P(E|B2, C1, C2, ωB2, ωC1, ωC2). All these P(E|·) are noisy-or distributions.

For Experiment 1 there are two conditions [6], see table (1). In the first power-constant condition [6], the data is consistent with the causal structure for Graph1 (i.e. C2 has no effect) using noisy-or distributions. In the second ΔP-constant condition [6], the data is consistent with the causal structure for Graph1 but with noisy-or replaced by the linear distributions (e.g. P(E = 1|C1, ..., Cn) = ω1 C1 + ... + ωn Cn).
Table 1: Experimental conditions (1) and (2) for Experiment 1

(1) Pd(E = 1|B1 = 1, C1 = 0, C2 = 0), Pd(E = 1|B1 = 1, C1 = 1, C2 = 0):  16/24, 22/24
    Pd(E = 1|B2 = 1, C1 = 0, C2 = 0), Pd(E = 1|B2 = 1, C1 = 1, C2 = 1):  0/24, 18/24

(2) Pd(E = 1|B1 = 1, C1 = 0, C2 = 0), Pd(E = 1|B1 = 1, C1 = 1, C2 = 0):  0/24, 6/24
    Pd(E = 1|B2 = 1, C1 = 0, C2 = 0), Pd(E = 1|B2 = 1, C1 = 1, C2 = 1):  16/24, 22/24

5.2 Experiment I: Results
We compare Liljeholm and Cheng's experimental results with our theoretical simulations. These comparisons are shown on the right-hand side of figure (3). The left panel shows the proportion of participants who decide that medicine B causes a headache for the two conditions. The right panel shows the predictions of our model (labeled "noisy-logical") together with the predictions of a model that replaces the noisy-logical distributions by a linear model (labeled "linear"). The simulations show that the noisy-logical model correctly predicts that participants (on average) judge that medicine B has no effect in the first experimental condition, but that B does have an effect in the second condition. By contrast, the linear model makes the opposite (wrong) prediction. In summary, model selection comparing two noisy-logical models gives a good prediction of participant performance.

Figure 3: Causal model and results for Experiment I. Left panel: two alternative causal models for the two studies. Right panel: the experimental results (proportion of participants who think medicine B causes headaches) for the power-constant and ΔP-constant conditions [6]. Far right: the causal support for the noisy-logical and linear models.
5.3 Experiment II: Causal Interaction
Liljeholm and Cheng [6] also investigated causal interactions. The experimental design was identical
to that used in Experiment 1, except that participants were presented with three studies in which only
one medicine (A) was tested. Participants were asked to judge whether medicine A interacts with
background causes that vary across the three studies. We define the background causes as B1 ,B2 ,B3
for the three studies, and C1 for medicine A. This experiment was also run under two different
conditions, see table (2). The first power-constant condition [6] was consistent with a noisy-logical
model, but the second power-varying condition [6] was not.
Table 2: Experimental conditions (1) and (2) for Experiment 2

(1) P(E = 1|B1 = 1, C1 = 0), P(E = 1|B1 = 1, C1 = 1):  16/24, 22/24
    P(E = 1|B2 = 1, C1 = 0), P(E = 1|B2 = 1, C1 = 1):  8/24, 20/24
    P(E = 1|B3 = 1, C1 = 0), P(E = 1|B3 = 1, C1 = 1):  0/24, 18/24

(2) P(E = 1|B1 = 1, C1 = 0), P(E = 1|B1 = 1, C1 = 1):  0/24, 6/24
    P(E = 1|B2 = 1, C1 = 0), P(E = 1|B2 = 1, C1 = 1):  0/24, 12/24
    P(E = 1|B3 = 1, C1 = 0), P(E = 1|B3 = 1, C1 = 1):  0/24, 18/24
The experimental design caused participants to choose between the two causal models shown on the left panel of figure (4). The probability of generating the data is given by P(D|ω⃗, Graph) = P(D1|ω⃗1, Graph) P(D2|ω⃗2, Graph) P(D3|ω⃗3, Graph). For Graph1, the P(Di|·) are noisy-or distributions P(E|B1, C1, ωB1, ωC1), P(E|B2, C1, ωB2, ωC1), P(E|B3, C1, ωB3, ωC1). For Graph2, the P(Di|·) are P(E|B1, C1, ωB1, ωC1), P(E|B2, C1, B2∧C1, ωB2, ωC1, ωB2C1), and P(E|B3, C1, B3∧C1, ωB3, ωC1, ωB3C1).

All the distributions are noisy-or on the unary causal features (e.g. B, C1), but the nature of the conjunctive cause B ∧ C1 is unknown (i.e. not specified by the experimental design). Hence our theory considers the possibilities that it is a noisy-or (e.g. it can produce headaches) or a noisy-and-not (e.g. it can prevent headaches); see graph 2 of Figure (4).
5.4 Results of Experiment II
Figure (4) shows human and model performance for the two experimental conditions. Our noisy-logical model is in agreement with human performance: there is no interaction between causes in the power-constant condition, but there is interaction in the power-varying condition. By contrast, the linear model predicts interaction in both conditions and hence fails to model human performance.

Figure 4: Causal model and results for Experiment II. Left panel: two alternative causal models (one involving conjunctions) for the three studies. Right panel: the proportion of participants who think that there is an interaction (conjunction) between medicine A and the background for the power-constant and power-varying conditions [6]. Far right: the causal support for the noisy-logical and linear models.
6 Summary
The noisy-logical distribution gives a new way to represent conditional probability distributions
defined over binary variables. The complexity of the distribution can be adjusted by restricting
the set of causal factors. If all the causal factors are allowed, then the distribution can represent
any conditional distribution. But by restricting the set of causal factors we can obtain standard
distributions such as the noisy-or and noisy-and-not.
We illustrated the noisy-logical distribution by modeling experimental findings on causal reasoning.
Our results showed that this distribution fitted the experimental data and, in particular, accounted for
the major trends (unlike the linear model). This is consistent with the success of noisy-or and noisy-and-not models for accounting for experiments involving two causes [1], [2], [3]. This suggests that
humans may make use of noisy-logical representations for causal reasoning.
One attraction of the noisy-logical representation is that it helps clarify the relationship between logic and probabilities. Standard logical relationships between causes and effects arise in the limit as the ωi take values 0 or 1. We can, for example, bias the data towards a logical form by using a prior on the ω⃗. This may be useful, for example, when modeling human cognition: evidence suggests that humans first learn logical relationships and, only later, move to probabilities.
In summary, the noisy-logical distribution is a novel way to represent conditional probability distributions defined on binary variables. We hope this class of distributions will be useful for modeling
cognitive phenomena and for applications to artificial intelligence.
Acknowledgements
We thank Mimi Liljeholm, Patricia Cheng, Adnan Darwiche, Keith Holyoak, Iasonas Kokkinos, and
YingNian Wu for helpful discussions. Mimi and Patricia kindly gave us access to their experimental
data. We acknowledge funding support from the W.M. Keck foundation and from NSF 0413214.
References
[1] P. W. Cheng. From covariation to causation: A causal power theory. Psychological Review,
104, 367-405, 1997.
[2] L. R. Novick and P. W. Cheng. Assessing interactive causal influence. Psychological Review,
111, 455-485, 2004.
[3] T. L. Griffiths and J. B. Tenenbaum. Structure and strength in causal induction. Cognitive
Psychology, 51, 334-384, 2005.
[4] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988.
[5] C. N. Glymour. The Mind's Arrow: Bayes Nets and Graphical Causal Models in Psychology.
MIT Press, 2001.
[6] M. Liljeholm and P. W. Cheng. When is a Cause the "Same"? Coherent Generalization across
Contexts. Psychological Science, in press, 2007.
[7] M. J. Buehner, P. W. Cheng, and D. Clifford. From covariation to causation: A test of the
assumption of causal power. Journal of Experimental Psychology: Learning, Memory, and
Cognition, 29, 1119-1140, 2003.
[8] R. A. Rescorla and A. R. Wagner. A theory of Pavlovian conditioning: Variations in the
effectiveness of reinforcement and nonreinforcement. In A. H. Black and W. F. Prokasy (Eds.),
Classical Conditioning II: Current Theory and Research (pp. 64-99). New York: Appleton-Century-Crofts, 1972.
Flight Control in the Dragonfly:
A Neurobiological Simulation
William E. Faller and Marvin W. Luttges
Aerospace Engineering Sciences,
University of Colorado, Boulder, Colorado 80309-0429.
ABSTRACT
Neural network simulations of the dragonfly flight neurocontrol system
have been developed to understand how this insect uses complex,
unsteady aerodynamics. The simulation networks account for the
ganglionic spatial distribution of cells as well as the physiologic
operating range and the stochastic cellular firing history of each neuron.
In addition the motor neuron firing patterns, "flight command
sequences", were utilized. Simulation training was targeted against both
the cellular and flight motor neuron firing patterns. The trained
networks accurately resynthesized the intraganglionic cellular firing
patterns. These in turn controlled the motor neuron firing patterns that
drive wing musculature during flight. Such networks provide both
neurobiological analysis tools and first-generation controls for the use
of "unsteady" aerodynamics.
1 INTRODUCTION
Hebb (1949) proposed a theory of inter-neuronal learning. "Hebbian Learning", in which
cells acting together as assemblies alter the effIcacy of mutual interconnections. These
neural "cell assemblies" presumably comprise the information processing "units" of the
nervous system.
To provide one framework within which to perform detailed analyses of these cellular
organizational "rules" a new analytical technique based on neural networks is being
explored. The neurobiological data analyzed was obtained from the neural cells of the
dragonfly ganglia.
The dragonfly use of unsteady separated flows to generate highly maneuverable flight is
governed by the control sequences that originate in the thoracic ganglia flight motor
neurons (MN). To provide this control the roughly 2200 cells of the meso- and
metathoracic ganglia integrate environmental cues that include visual input. wind shear,
velocity and acceleration. The cellular firing patterns coupled with proprioceptive feedback
in turn drive elevator/depressor flight MNs which typically produce a 25-37 Hz wingbeat
depending on the flight mode (Luttges 1989; Kliss 1989).
The neural networks utilized in the analyses incorporate the spatial distribution of cells,
the physiologic operating range of each neuron and the stochastic history of the cellular
spike trains (Faller and Luttges 1990). The present work describes two neural networks.
The simultaneous single-unit firing patterns at time (t) were used to predict the cellular
firing patterns at time (t+Δ). And, the simultaneous single-unit firing patterns were used
to "drive" flight-MN firing patterns at a 37 Hz wingbeat frequency.
2 METHODS
2.1 BIOLOGICAL DATA
Recordings were obtained from the mesothoracic ganglion of the dragonfly Aeshna in the
ganglionic regions known to contain the cell bodies of flight MNs as well as small and
large cell bodies (Simmons 1977; Kliss 1989). Multiple-unit recordings from many cells
(~40-80) were systematically decomposed to yield simultaneously active single-unit firing
patterns. The technique has been described elsewhere (Faller and Luttges in press).
During the recording of neural activity spontaneous flight episodes commonly occurred.
These events were consistent with typical flight episodes (2-3 secs duration) observed in
the tethered dragonfly (Somps and Luttges 1985). For analysis, a 12 second record was
obtained from 58 single units, 26 rostral cells and 32 caudal cells. The continuous record
was separated into 4 second behavioral epochs: pre-flight, flight and post-flight.
A simplified model of one flight mode was assumed. Each forewing is driven by 3 main
elevator and 2 main depressor muscles, innervated by 11 and 14 MNs, respectively. A 37
Hz MN firing frequency, 3-5 spikes per output burst, and 180 degree phase shift between
antagonistic MNs was assumed. Given the symmetrical nature of the elevator/depressor
output patterns only the 11 elevator MNs were simulated.
Prior to analysis the ganglionic spatial distribution of neurons was reconstructed. The
importance of this is reserved for later discussion. A method has been described (Faller and
Luttges submitted:a) that resolves the spatial distribution based on two distancing criteria:
the amplitude ratio across electrodes and the spike angle (width) for each cell. Cells were
sorted along a rostral, cell 1, to caudal, cell 58 continuum based on this infonnation.
The middle 2 seconds of the flight data was simulated. This was consistent with the
known duration of spontaneous flight episodes. Within these 2 seconds, 44 cells remained
active, 19 rostral and 25 caudal. The cell numbering (1-58) derived for the biological data
was not altered. The remaining 14 inactive cells/units carry zeros in all analyses.
2.2 MIMICKING THE SINGLE CELLS
Each neuron was represented by a unique unit that mimicked both the mean firing
frequency and dynamic range of the physiologic cell. The activation value ranged from
zero to twice the normalized mean firing frequency for each cell. The dynamic range was
calculated as a unique thermodynamic profile for each sigmoidal activation function. The
technique has been described fully elsewhere (Faller and Luttges 1990).
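As a rough sketch of such a unit (our own illustration; the gain parameter stands in for the paper's per-cell thermodynamic profile and is an assumption):

```python
import numpy as np

def cell_unit(net_input, mean_rate, gain=1.0):
    """One simulated neuron: a sigmoid squashing of the net input,
    rescaled so the output spans zero to twice the cell's normalized
    mean firing frequency (mean_rate)."""
    return 2.0 * mean_rate / (1.0 + np.exp(-gain * net_input))
```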
2.3 SPIKE TRAIN REPRESENTATION
The spike trains and MN firing patterns were represented as iteratively continuous
"analog" gradients (Faller and Luttges 1990 & submitted:b). Briefly. each spike train was
represented in two-dimensions based on the following assumptions: (1) the mean fIring
frequency reflects the inherent physiology of each cell and (2) the interspike intervals
encode the information transferred to other cells. Exponential functions were mapped into
the intervals between consecutive spikes and these functions were then discretized to
provide the spike train inputs to the neural network. These functions retain the exact
spiking times and the temporal modulations (interval code) of cell fIring histories.
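A minimal sketch of this encoding, as we read the description above (the decay constant tau and the exact exponential form are assumptions, not taken from the paper):

```python
import numpy as np

def spike_train_gradient(spike_times, duration_ms, tau=10.0):
    """Encode a spike train as a discretized "analog" gradient: an
    exponential decay is laid down in each interspike interval, so the
    signal preserves the exact spike times and the interval code."""
    signal = np.zeros(int(duration_ms))          # 1 ms bins
    spikes = sorted(float(t) for t in spike_times)
    for start, end in zip(spikes, spikes[1:] + [float(duration_ms)]):
        t = np.arange(int(start), int(end))
        signal[t] = np.exp(-(t - start) / tau)   # decays until the next spike
    return signal
```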
2.4 ARCHITECTURE
The two simulation architectures were as follows:

                Simulation 1                   Simulation 2
  Input layer   1 cell : 1 unit (44 units)     1 cell : 1 unit (44 units)
  Hidden layer  1 cell : 2 units (88 units)    1 cell : 2 units (88 units)
  Output layer  1 cell : 1 unit (44 units)     11 main elevator MNs
The hidden units were recurrently connected and the interconnections between units were
based on a 1st order exponential rise and decay. The general architecture has been described
elsewhere (Faller and Luttges 1990).
For the cell-to-cell simulation no bias units were utilized. Since the MNs fire both
synchronously and infrequently bias units were incorporated in the MN simulation. These
units were constrained to function synchronously at the MN firing frequency. This
constraining technique permitted the network to be trained despite the sparsity of the MN
dataset.
Training was performed using a supervised backpropagation algorithm in time. All 44
cells, 2000 points per discretized gradient (Δ=1 msec real-time), were presented
synchronously to the network. The results were consistent for Δ=2-5 msec in all cases.
The simulation paradigms were as follows:

                 Simulation 1                     Simulation 2
  Input          Neural activity at time (t)      Neural activity at time (t)
  Output/Target  Neural activity at time (t+Δ)    MN activity at time (t)

Initial weights were random, -0.3 to 0.3, and the learning rate was η=0.2. Training was
performed until the temporal reproduction of cell spiking patterns was verified for all
cells. Following training, the network was "run" with η=0.
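A schematic of the cell-to-cell paradigm (Simulation 1), written as a one-step-ahead prediction loop; this is our own sketch of the setup described above, with `net` a placeholder for any recurrent model exposing forward/backward passes:

```python
import numpy as np

def train_one_epoch(net, gradients, lr=0.2, delta=1):
    """One supervised pass for Simulation 1: the network sees the 44-cell
    activity at time t and is trained toward the activity at t+delta
    (delta in 1 ms bins). gradients is a (T, 44) array of the
    discretized spike-train signals."""
    total_err = 0.0
    for t in range(gradients.shape[0] - delta):
        pred = net.forward(gradients[t])      # input: activity at time t
        err = gradients[t + delta] - pred     # target: activity at t+delta
        net.backward(err, lr)                 # backpropagation in time
        total_err += float(np.sum(err ** 2))
    return total_err
```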
Sum squared errors for all units were calculated and normalized to an activation value of 0
to 1. The temporal reproduction of the output patterns was verified by linear correlation
against the targeted spike trains. The "effective" contribution of each unit to the flight
pattern was then determined by "lesioning" individual cells from the network prior to
presenting the input pattern. The effects of lesioning were judged by the change in error
relative to the unlesioned network.
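The lesion test can be sketched as follows (again our own illustration, with run_error an assumed helper returning the total sum squared error of a forward pass):

```python
import numpy as np

def lesion_study(net, gradients, run_error):
    """Silence each of the 44 cells in turn and record the error increase
    over the intact ("unlesioned") network; large values mark primary units."""
    baseline = run_error(net, gradients)
    impacts = {}
    for cell in range(gradients.shape[1]):
        lesioned = gradients.copy()
        lesioned[:, cell] = 0.0               # zero one cell's channel
        impacts[cell] = run_error(net, lesioned) - baseline
    return impacts
```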
3 RESULTS
3.1 CELL-TO-CELL SIMULATION
Following training, the complete pattern set was presented to the network, and the sum
squared error was averaged over all units, Fig. 1. Clearly the network has a different
"interpretation" of the data at certain time steps. This is due both to the
omission/commission of spikes as well as timing errors. However, the data needed to
reproduce overall cell firing patterns is clearly available.
[Figure omitted: plot of sum squared error vs. time (milliseconds).]
Figure 1: The network error
Unit sum squared errors were also averaged over the 2 second simulation, Fig. 2. Clearly
the network predicted some unit/cell firing patterns easier than others.
[Figure omitted: bar plot of error by unit; x-axis "UNIT (SHOWN BY CELL NUMBER)".]
Figure 2: The unit errors
The temporal reproduction of the cell firing patterns was verified by linear correlation
between the network outputs and the biological spike train representations. If the network
accurately reproduces the temporal history of the spike trains these functions should be
identical, r=1 (Fig. 3). Clearly the network reproduces the temporal coding inherent within
each spike train. The lowest correlation of roughly 0.85 is highly significant (p<0.01).
Figure 3: The unit temporal errors
One way to measure the relative importance of each unit/cell to the network is to
omit/"lesion each unit prior to presenting the cell firing patterns to the trained network.
The data shown was collected by lesioning each unit individually, Fig. 4. The unlesioned
network error is shown as the "0" cell. Overall the degradation of the network was
minimal. Clearly some units provide more information to the network in reproducing the
cell firing histories. Units that caused relatively large errors when "lesioned" were defined
as primary units. The other units were defined as secondary units.
[Figure omitted: bar plot of lesion-induced error by unit; x-axis "UNIT (SHOWN BY CELL NUMBER)".]
Figure 4: Lesion studies
The primary units (cells) form what might classically be termed a central pattern
generator. These units can provide a relatively gross representation of both cellular and
MN firing patterns. The generation of dynamic cellular and MN firing patterns, however,
is apparently dependent on both primary and secondary units. It appears that the
generation of functional activity patterns within the ganglia is largely controlled by the
dynamic interactions between large groups of cells, i.e., the "whole" network. This is
consistent with other results derived from both neural network and statistical analyses of
the biological data (Faller and Luttges 1990 & submitted:b).
3.2 MOTOR NEURON FIRING PATTERNS
The 44 cellular firing patterns were then used to drive the MN firing patterns. Following
training, the cell firing pattern set was presented to the network and the sum squared error
was averaged over the output MNs, Fig. 5. The error in this case oscillates in time at the
wingbeat frequency of 37 Hz. As will be shown. however. this is an artifact and the
network does accurately drive the MNs.
[Figure omitted: plot of sum squared error vs. time (milliseconds), oscillating at the wingbeat frequency.]
Figure 5: The network error
For each MN the sum squared error was also averaged over the 2 second simulation, Fig. 6.
Clearly individual MNs contribute nearly equally to the network error.
[Figure omitted: bar plot of error by motor neuron; x-axis "MOTOR NEURON NUMBER".]
Figure 6: The unit errors
The temporal reproduction of the MN firing patterns was verified by linear correlation
between the output and targeted MN firing patterns of the network. This is shown in Fig.
7. Clearly the cell inputs to the network have the spiking characteristics needed for
driving the temporal firing sequences of the MNs innervating the wing musculature. All
correlations are roughly 0.80, highly significant (p<0.01). The output for one MN is
shown relative to the targeted MN output in Fig. 8. Clearly the network does drive the
MNs correctly.
[Figure omitted: bar plot of correlation by motor neuron; x-axis "MOTOR NEURON NUMBER".]
Figure 7: The unit temporal errors
[Figure omitted: targeted vs. network motor neuron output over time (milliseconds); legend "TARGETED MOTOR NEURON OUTPUT" vs. network output.]
Figure 8: The MN firing patterns
3.3 SUMMARY
The results indicate that synthetic networks can learn and then synthesize patterns of
neural spiking activity needed for biological function. In this case, cell and MN firing
patterns occurring in the dragonfly ganglia during a spontaneous flight episode.
4 DISCUSSION
Recordings from more than 50 spatially unique cells that reflect the complex network
characteristics of a small. intact neural tissue were used to successfully train two neural
networks. Unit sum squared errors were less than 0.003 and spike train temporal histories
were accurately reproduced. There was little evidence for unexpected "cellular behavior".
Functional lesioning of single units in the network caused minimal degradation of
network performance. however. some lesioned cells were more important than others to
overall network performance.
The capability to lesion cells permitted the contribution of individual cells to the
production of the flight rhythm to be detennined. The detection of primary and secondary
cells underlying the dynamic generation of both cellular and MN firing patterns is one
example. Such results may encourage neurobiologists to adopt neural networks as
effective analytical tools with which to study and analyze spike train data.
Clearly the solution arrived at is not the biological one. However, the networks do
accurately predict the future cell firing patterns based on past firing history information. It
is asserted that the network must therefore contain the majority of information required to
resolve biological cell interactions during flight in the dragonfly. A sample of 58
ganglionic cells was utilized; the remaining cells' functional contributions are presumably
statistically accounted for by this small sampling. The inherent "information" of the
biological network is presumably stored in the weight matrices as a generalized statistical
representation of the "rules" through which cells participate in biological assemblies.
Analyses of the weight matrices in turn may permit the operational "rules" of cell
assemblies to be defined. Questions about the effects of cell size. the spatial architecture
of the network and the temporal interactions between cells as they relate to cell assembly
function can be addressed. For this reason the individuality of cells. the spatial architecture
and the stochastic cellular firing histories of the individual cells were retained within the
network architectures utilized. Crucial to these analyses will be methods that permit
direct, time-incrementing evaluations of the weight matrices following training.
Biological nervous system function can now be analyzed from two points of view: direct
analyses of the biological data and indirect, but potentially more approachable, analyses of
the weight matrices from trained neural networks such as the ones described.
REFERENCES
Faller WE, Luttges MW (1990) A Neural Network Simulation of Simultaneous Single-Unit Activity Recorded from the Dragonfly Ganglia. ISA Paper #90-033
Faller WE, Luttges MW (in press) Recording of Simultaneous Single-Unit Activity in the Dragonfly Ganglia. J Neurosci Methods
Faller WE, Luttges MW (submitted:a) Spatiotemporal Analysis of Simultaneous Single-Unit Activity in the Dragonfly: I. Cellular Activity Patterns. Biol Cybern
Faller WE, Luttges MW (submitted:b) Spatiotemporal Analysis of Simultaneous Single-Unit Activity in the Dragonfly: II. Network Connectivity. Biol Cybern
Hebb DO (1949) The Organization of Behavior: A Neuropsychological Theory. Wiley, New York; Chapman and Hall, London
Kliss MH (1989) Neurocontrol Systems and Wing-Fluid Interactions Underlying Dragonfly Flight. Ph.D. Thesis, University of Colorado, Boulder, pp 70-80
Luttges MW (1989) Accomplished Insect Fliers. In: Gad-el-Hak M (ed) Frontiers in Experimental Fluid Mechanics. Springer-Verlag, Berlin Heidelberg, pp 429-456
Simmons P (1977) The Neuronal Control of Dragonfly Flight I. Anatomy. J Exp Biol 71:123-140
Somps C, Luttges MW (1985) Dragonfly flight: Novel uses of unsteady separated flows. Science 228:1326-1329
Bayesian Co-Training
Shipeng Yu, Balaji Krishnapuram, Romer Rosales, Harald Steck, R. Bharat Rao
CAD & Knowledge Solutions, Siemens Medical Solutions USA, Inc.
[email protected]
Abstract
We propose a Bayesian undirected graphical model for co-training, or more generally for semi-supervised multi-view learning. This makes explicit the previously
unstated assumptions of a large class of co-training type algorithms, and also clarifies the circumstances under which these assumptions fail. Building upon new insights from this model, we propose an improved method for co-training, which is
a novel co-training kernel for Gaussian process classifiers. The resulting approach
is convex and avoids local-maxima problems, unlike some previous multi-view
learning methods. Furthermore, it can automatically estimate how much each
view should be trusted, and thus accommodate noisy or unreliable views. Experiments on toy data and real world data sets illustrate the benefits of this approach.
1 Introduction
Data samples may sometimes be characterized in multiple ways, e.g., web-pages can be described
both in terms of the textual content in each page and the hyperlink structure between them. [1]
have shown that the error rate on unseen test samples can be upper bounded by the disagreement
between the classification-decisions obtained from independent characterizations (i.e., views) of the
data. Thus, in the web-page example, misclassification rate can be indirectly minimized by reducing the rate of disagreement between hyperlink-based and content-based classifiers, provided these
characterizations are independent conditional on the class.
In many application domains class labels can be expensive to obtain and hence scarce, whereas
unlabeled data are often cheap and abundantly available. Moreover, the disagreement between the
class labels suggested by different views can be computed even when using unlabeled data. Therefore, a natural strategy for using unlabeled data to minimize the misclassification rate is to enforce
consistency between the classification decisions based on several independent characterizations of
the unlabeled samples. For brevity, unless otherwise specified, we shall use the term co-training to
describe the entire genre of methods that rely upon this intuition, although strictly it should only
refer to the original algorithm of [2].
Some co-training algorithms jointly optimize an objective function which includes misclassification
penalties (loss terms) for classifiers from each view and a regularization term that penalizes lack
of agreement between the classification decisions of the different views. In recent times, this coregularization approach has become the dominant strategy for exploiting the intuition behind multiview consensus learning, rendering obsolete earlier alternating-optimization strategies.
We survey in Section 2 the major approaches to co-training, the theoretical guarantees that have
spurred interest in the topic, and the previously published concerns about the applicability to certain
domains. We analyze the precise assumptions that have been made and the optimization criteria to
better understand why these approaches succeed (or fail) in certain situations. Then in Section 3
we propose a principled undirected graphical model for co-training which we call Bayesian co-training, and show that co-regularization algorithms provide one way for maximum-likelihood (ML)
learning under this probabilistic model. By explicitly highlighting previously unstated assumptions,
Bayesian co-training provides a deeper understanding of the co-regularization framework, and we
are also able to discuss certain fundamental limitations of multi-view consensus learning. In Section 4, we show that even simple and visually illustrated 2-D problems are sometimes not amenable
to a co-training/co-regularization solution (no matter which specific model/algorithm is used, including ours). Empirical studies on two real world data sets are also presented.
Summarizing our algorithmic contributions, co-regularization is exactly equivalent to the use of a
novel co-training kernel for support vector machines (SVMs) and Gaussian processes (GP), thus
allowing one to leverage the large body of available literature for these algorithms. The kernel is intrinsically non-stationary, i.e., the level of similarity between any pair of samples depends on all the
available samples, whether labeled or unlabeled, thus promoting semi-supervised learning. Therefore, this approach is significantly simpler and more efficient than the alternating-optimization that
is used in previous co-regularization implementations. Furthermore, we can automatically estimate
how much each view should be trusted, and thus accommodate noisy or unreliable views.
2 Related Work
Co-Training and Theoretical Guarantees: The iterative, alternating co-training method originally
introduced in [2] works in a bootstrap mode, by repeatedly adding pseudo-labeled unlabeled samples into the pool of labeled samples, retraining the classifiers for each view, and pseudo-labeling
additional unlabeled samples where at least one view is confident about its decision. The paper provided PAC-style guarantees that if (a) there exist weakly useful classifiers on each view of the data,
and (b) these characterizations of the sample are conditionally independent given the class label,
then the co-training algorithm can utilize the unlabeled data to learn arbitrarily strong classifiers.
[1] proved PAC-style guarantees that if (a) sample sizes are large, (b) the different views are conditionally independent given the class label, and (c) the classification decisions based on multiple
views largely agree with each other, then with high probability the misclassification rate is upper
bounded by the rate of disagreement between the classifiers based on each view. [3] tried to reduce
the strong theoretical requirements. They showed that co-training would be useful if (a) there exist
low error rate classifiers on each view, (b) these classifiers never make mistakes in classification
when they are confident about their decisions, and (c) the two views are not too highly correlated,
in the sense that there would be at least some cases where one view makes confident classification
decisions while the classifier on the other view does not have much confidence in its own decision.
While each of these theoretical guarantees is intriguing and theoretically interesting, they are also
rather unrealistic in many application domains. The assumption that classifiers do not make mistakes
when they are confident and that of class conditional independence are rarely satisfied in practice.
Nevertheless empirical success has been reported.
Co-EM and Related Algorithms: The Co-EM algorithm of [4] extended the original bootstrap
approach of the co-training algorithm to operate simultaneously on all unlabeled samples in an iterative batch mode. [5] used this idea with SVMs as base classifiers and subsequently in unsupervised
learning by [6]. However, co-EM also suffers from local maxima problems, and while each iteration's optimization step is clear, co-EM is not really an expectation maximization algorithm
(i.e., it lacks a clearly defined overall log-likelihood that monotonically improves across iterations).
Co-Regularization: [7] proposed an approach for two-view consensus learning based on simultaneously learning multiple classifiers by maximizing an objective function which penalized misclassifications by any individual classifier, and included a regularization term that penalized a high level
of disagreement between different views. This co-regularization framework improves upon the cotraining and co-EM algorithms by maximizing a convex objective function; however the algorithm
still depends on an alternating optimization that optimizes one view at a time. This approach was
later adapted to two-view spectral clustering [8].
Relationship to Current Work: The present work provides a probabilistic graphical model for
multi-view consensus learning; alternating optimization based co-regularization is shown to be just
one algorithm that accomplishes ML learning in this model. A more efficient, alternative strategy is
proposed here for fully Bayesian classification under the same model. In practice, this strategy offers
several advantages: it is easily extended to multiple views, it accommodates noisy views which are
less predictive of class labels, and reduces run-time and memory requirements.
[Figure omitted: factor graphs; panel (a) links each yi to f(xi); panel (b) links each yi to fc(xi), which in turn connects to the per-view functions f1(xi(1)) and f2(xi(2)).]
Figure 1: Factor graph for (a) one-view and (b) two-view models.
3 Bayesian Co-Training
3.1 Single-View Learning with Gaussian Processes
A Gaussian Process (GP) defines a nonparametric prior over functions in Bayesian statistics [9]. A random real-valued function f : R^d → R follows a GP, denoted by GP(h, κ), if for every finite number of data points x_1, ..., x_n ∈ R^d, f = {f(x_i)}_{i=1}^n follows a multivariate Gaussian N(h, K) with mean h = {h(x_i)}_{i=1}^n and covariance K = {κ(x_i, x_j)}_{i,j=1}^n. Normally we fix the mean function h ≡ 0, and take a parametric (and usually stationary) form for the kernel function κ (e.g., the Gaussian kernel κ(x_k, x_ℓ) = exp(−η‖x_k − x_ℓ‖²) with η > 0 a free parameter).
In a single-view, supervised learning scenario, an output or target y_i is given for each observation x_i (e.g., y_i ∈ R for regression and y_i ∈ {−1, +1} for classification). In the GP model we assume there is a latent function f underlying the output, p(y_i|x_i) = ∫ p(y_i|f, x_i) p(f) df, with the GP prior p(f) = GP(h, κ). Given the latent function f, p(y_i|f, x_i) = p(y_i|f(x_i)) takes a Gaussian noise model N(f(x_i), σ²) for regression, and a sigmoid function λ(y_i f(x_i)) for classification.
The dependency structure of the single-view GP model can be shown as an undirected graph as in Fig. 1(a). The maximal cliques of the graphical model are the fully connected nodes (f(x_1), ..., f(x_n)) and the pairs (y_i, f(x_i)), i = 1, ..., n. Therefore, the joint probability of random variables f = {f(x_i)} and y = {y_i} is defined as p(f, y) = (1/Z) ψ(f) ∏_i ψ(y_i, f(x_i)), with potential functions¹

    ψ(f) = exp(−½ fᵀK⁻¹f),    ψ(y_i, f(x_i)) = { exp(−‖y_i − f(x_i)‖² / (2σ²))   for regression
                                               { λ(y_i f(x_i))                   for classification    (1)

and normalization factor Z (hereafter Z is defined such that the joint probability sums to 1).
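For concreteness, a small sketch of the single-view GP regression pieces just described (our own illustration; kernel width and noise level are arbitrary choices):

```python
import numpy as np

def gaussian_kernel(X, Y, eta=1.0):
    """K[i, j] = exp(-eta * ||x_i - y_j||^2), the stationary kernel above."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-eta * d2)

def gp_posterior_mean(X, y, X_test, eta=1.0, noise=0.1):
    """Posterior mean under the Gaussian noise model:
    E[f(X_test)] = K(X_test, X) (K(X, X) + noise^2 I)^{-1} y."""
    K = gaussian_kernel(X, X, eta) + noise**2 * np.eye(len(X))
    return gaussian_kernel(X_test, X, eta) @ np.linalg.solve(K, y)
```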
3.2 Undirected Graphical Model for Multi-View Learning
In multi-view learning, suppose we have m different views of a same set of n data samples. Let x_i^(j) ∈ R^(d_j) be the features for the i-th sample obtained using the j-th view, where d_j is the dimensionality of the input space for view j. Then the vector x_i ≜ (x_i^(1), ..., x_i^(m)) is the complete representation of the i-th data sample, and x^(j) ≜ (x_1^(j), ..., x_n^(j)) represents all sample observations for the j-th view. As in the single-view learning, let y = (y_1, ..., y_n) where y_i is the single output assigned to the i-th data point.
One can clearly concatenate the multiple views into a single view and apply a single-view GP model, but the basic idea of multi-view learning is to introduce one function per view which only uses the features from that view, and then jointly optimize these functions such that they come to a consensus. Looking at this problem from a GP perspective, let f_j denote the latent function for the j-th view (i.e., using features only from view j), and let f_j ∼ GP(0, κ_j) be its GP prior in view j. Since one data sample i has only one single label y_i even though it has multiple features from the multiple views (i.e., latent function value f_j(x_i^(j)) for view j), the label y_i should depend on all of these latent function values for data sample i.
¹ The definition of ψ in this paper has been overloaded to simplify notation, but its meaning should be clear from the function arguments.
The challenge here is to make this dependency explicit in a graphical model. We tackle this problem by introducing a new latent function, the consensus function f_c, to ensure conditional independence between the output y and the m latent functions {f_j} for the m views (see Fig. 1(b) for the undirected graphical model). At the functional level, the output y depends only on f_c, and latent functions {f_j} depend on each other only via the consensus function f_c. That is, we have the joint probability:

    p(y, f_c, f_1, ..., f_m) = (1/Z) ψ(y, f_c) ∏_{j=1}^m ψ(f_j, f_c),

with some potential functions ψ. In the ground network with n data samples, let f_c = {f_c(x_i)}_{i=1}^n and f_j = {f_j(x_i^(j))}_{i=1}^n. The graphical model leads to the following factorization:

    p(y, f_c, f_1, ..., f_m) = (1/Z) ∏_i ψ(y_i, f_c(x_i)) ∏_{j=1}^m ψ(f_j) ψ(f_j, f_c).    (2)
Here the within-view potential ψ(f_j) specifies the dependency structure within each view j, and the consensus potential ψ(f_j, f_c) describes how the latent function in each view is related with the consensus function f_c. With a GP prior for each of the views, we can define the following potentials:

    ψ(f_j) = exp(−½ f_jᵀ K_j⁻¹ f_j),    ψ(f_j, f_c) = exp(−‖f_j − f_c‖² / (2σ_j²)),    (3)

where K_j is the covariance matrix of view j, i.e., K_j(x_k^(j), x_ℓ^(j)) = κ_j(x_k^(j), x_ℓ^(j)), and σ_j > 0 is a scalar which quantifies how far away the latent function f_j is from f_c. The output potential ψ(y_i, f_c(x_i)) is defined the same as that in (1) for regression or classification.
Some more insight may be gained by taking a careful look at these definitions: 1) The within-view potentials only rely on the intrinsic structure of each view, i.e., through the covariance K_j in a GP setting; 2) Each consensus potential actually defines a Gaussian over the difference of f_j and f_c, i.e., f_j − f_c ∼ N(0, σ_j² I), and it can also be interpreted as assuming a conditional Gaussian for f_j with the consensus f_c being the mean. Alternatively if we focus on f_c, the joint consensus potentials effectively define a conditional Gaussian prior for f_c, f_c | f_1, ..., f_m ∼ N(μ_c, σ_c² I), where

    μ_c = σ_c² Σ_j (f_j / σ_j²),    σ_c² = ( Σ_j 1/σ_j² )⁻¹.    (4)

This can be easily verified as a product of Gaussians. This indicates that the prior mean of the consensus function f_c is a weighted combination of the latent functions from all the views, and the weight is given by the inverse variance of each consensus potential. The higher the variance, the smaller the contribution to the consensus function.
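Eq. (4) is simply a precision-weighted average, as the following sketch makes explicit (our own illustration with hypothetical numbers):

```python
import numpy as np

def consensus_prior(f_views, sigma2_views):
    """Conditional Gaussian prior of f_c from eq. (4): the mean is the
    inverse-variance weighted average of the per-view latent functions."""
    f = np.asarray(f_views, dtype=float)         # shape (m, n)
    s2 = np.asarray(sigma2_views, dtype=float)   # shape (m,)
    var_c = 1.0 / np.sum(1.0 / s2)
    mu_c = var_c * np.sum(f / s2[:, None], axis=0)
    return mu_c, var_c

mu, var = consensus_prior([[0.2, 1.0], [0.6, 0.8]], [0.5, 2.0])
# the noisier second view (sigma^2 = 2.0) contributes less to the mean
```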
More insights of this undirected graphical model can be seen from the marginals, which we discuss
in detail in the following subsections. One advantage of this representation is that it allows us
to see that many existing multi-view learning models are actually a special case of the proposed
framework. In addition, this Bayesian interpretation also helps us understand both the benefits and
the limitations of co-training.
3.3 Marginal 1: Co-Regularized Multi-View Learning
By taking the integral of (2) over f_c (and ignoring the output potential for the moment), we obtain the joint marginal distribution of the m latent functions:

    p(f_1, ..., f_m) = (1/Z) exp( −½ Σ_{j=1}^m f_jᵀ K_j⁻¹ f_j − ½ Σ_{j<k} ‖f_j − f_k‖² / (σ_j² + σ_k²) ).    (5)

It is clearly seen that the negation of the logarithm of this marginal exactly recovers the regularization terms in co-regularized multi-view learning: The first part regularizes the functional space of each view, and the second part constrains that all the functions need to agree on their outputs (inversely weighted by the sum of the corresponding variances).
From the GP perspective, (5) actually defines a joint multi-view prior for the m latent functions, (f_1, ..., f_m) ∼ N(0, Ω⁻¹), where Ω is an mn × mn matrix with block-wise definition

    Ω(j, j) = K_j⁻¹ + Σ_{k≠j} 1/(σ_j² + σ_k²) I,    Ω(j, j′) = −1/(σ_j² + σ_{j′}²) I,    j = 1, ..., m, j′ ≠ j.    (6)

Jointly with the target variable y, the marginal is (for instance for regression):

    p(y, f_1, ..., f_m) = (1/Z) exp( −½ Σ_{j=1}^m f_jᵀ K_j⁻¹ f_j − ½ Σ_j ‖f_j − y‖² / (σ_j² + σ²) − ½ Σ_{j<k} ‖f_j − f_k‖² / (σ_j² + σ_k²) ).    (7)

This recovers the co-regularization with least square loss in its log-marginal form.
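A sketch of the block-wise construction in eq. (6) (our own illustration):

```python
import numpy as np

def multiview_precision(Ks, sigma2):
    """Assemble the mn x mn precision matrix Omega of eq. (6) from the
    per-view Gram matrices Ks (each n x n) and consensus variances sigma2."""
    m, n = len(Ks), Ks[0].shape[0]
    I = np.eye(n)
    Omega = np.zeros((m * n, m * n))
    for j in range(m):
        diag = np.linalg.inv(Ks[j])
        diag += sum(1.0 / (sigma2[j] + sigma2[k]) for k in range(m) if k != j) * I
        Omega[j*n:(j+1)*n, j*n:(j+1)*n] = diag
        for k in range(m):
            if k != j:
                Omega[j*n:(j+1)*n, k*n:(k+1)*n] = -I / (sigma2[j] + sigma2[k])
    return Omega
```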
3.4 Marginal 2: The Co-Training Kernel
The joint multi-view kernel defined in (6) is interesting, but it has a large dimension and is difficult
to work with. A more interesting kernel can be obtained if we instead integrate out all the m latent
functions in (2). This leads to a Gaussian prior p(f_c) = N(0, K_c) for the consensus function f_c, where

    K_c = ( Σ_j (K_j + σ_j² I)⁻¹ )⁻¹.    (8)
In the following we call Kc the co-training kernel for multi-view learning. This marginalization is
very important, because it reveals the previously unclear insight of how the kernels from different
views are combined together in a multi-view learning framework. This allows us to transform a
multi-view learning problem into a single-view problem, and simply use the co-training kernel Kc
to solve GP classification or regression. Since this marginalization is equivalent to (5), we will end
up with solutions that are largely similar to any other co-regularization algorithm, but a key
difference is the Bayesian treatment, in contrast to previous ML-optimization methods. Additional
benefits of the co-training kernel include the following:
1. The co-training kernel avoids repeated alternating optimizations over the different views f j ,
and directly works with a single consensus view f c . This reduces both time complexity and space
complexity (only maintains Kc in memory) of multi-view learning.
2. While other alternating optimization algorithms might converge to local minima (because they
optimize, not integrate), the single consensus view guarantees the globally optimal solution for multi-view learning.
3. Even if all the individual kernels are stationary, Kc is in general non-stationary. This is because
the inverse-covariances are added and then inverted again. In a transductive setting where the data
are partially labeled, the co-training kernel between labeled data is also dependent on the unlabeled
data. Hence the proposed co-training kernel can be used for semi-supervised GP learning [10].
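To make eq. (8) concrete, here is a minimal sketch (our own, not taken from the paper) that builds the co-training kernel from per-view Gram matrices over all labeled and unlabeled points:

```python
import numpy as np

def cotraining_kernel(Ks, sigma2):
    """K_c = ( sum_j (K_j + sigma_j^2 I)^{-1} )^{-1}, eq. (8).
    Ks: list of n x n per-view Gram matrices; sigma2: per-view consensus
    variances (larger sigma_j^2 = less trusted view)."""
    n = Ks[0].shape[0]
    precision = sum(np.linalg.inv(K + s2 * np.eye(n)) for K, s2 in zip(Ks, sigma2))
    return np.linalg.inv(precision)

# K_c can then be plugged into any standard GP/SVM solver on f_c; because it
# mixes all samples, it is non-stationary and inherently semi-supervised.
```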
3.5 Benefits of Bayesian Co-Training
The proposed undirected graphical model provides a better understanding of multi-view learning
algorithms. The co-training kernel in (8) indicates that the Bayesian co-training is equivalent to
single-view learning with a special (non-stationary) kernel. This is also the preferable way of working with multi-view learning since it avoids alternating optimizations. Here are some other benefits
which are not mentioned before:
Trust-worthiness of each view: The graphical model allows each view j to have its own level of
uncertainty (or trust-worthiness) σ_j². In particular, a larger value of σ_j² implies less confidence in
the observation of evidence provided by the j-th view. Thus when some views of the data are better
at predicting the output than the others, they are weighted more while forming consensus opinions.
[Figure omitted: six scatter plots of the 2-D toy data sets with decision boundaries; see caption.]
Figure 2: Toy examples for co-training. Big red/blue markers denote +1/−1 labeled points; remaining points are unlabeled. TOP left: co-training result on two-Gaussian data with mean (2, −2) and (−2, 2); center and right: canonical and Bayesian co-training on two-Gaussian data with mean (2, 0) and (−2, 0); BOTTOM left: XOR data with four Gaussians; center and right: Bayesian co-training and pure GP supervised learning result (with RBF kernel). Co-training is much worse than GP supervised learning in this case. All Gaussians have unit variance. RBF kernel uses width 1 for supervised learning and 1/√2 for each feature in two-view learning.
These uncertainties can be easily optimized in the GP framework by maximizing the marginal of
output y (omitted in this paper due to space limit).
Unsupervised and semi-supervised multi-view learning: The proposed graphical model also motivates new methods for unsupervised multi-view learning such as spectral clustering. While the
similarity matrix of each view j is encoded in Kj , the co-training kernel Kc encodes the similarity
of two data samples with multiple views, and thus can be used directly in spectral clustering. The
extension to semi-supervised learning is also straightforward since Kc by definition depends on
unlabeled data as well.
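As a usage note (our own sketch, assuming K_c is treated as a nonnegative affinity matrix, which may require shifting or truncating entries), the same K_c can feed a standard spectral clustering pipeline:

```python
import numpy as np

def spectral_embed(Kc, n_clusters):
    """Normalized-Laplacian embedding with K_c as the multi-view affinity;
    the returned rows are then clustered with k-means."""
    d = Kc.sum(axis=1)
    L = np.eye(len(Kc)) - Kc / np.sqrt(np.outer(d, d))  # normalized Laplacian
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, :n_clusters]  # smallest-eigenvalue eigenvectors
```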
Alternative interaction potential functions: Previous discussions about multi-view learning rely
on potential definitions in (3) (which we call the consensus-based potentials), but other definitions
are also possible and will lead to different co-training models. Actually, the definition in (3) has fundamental limitations and leads only to consensus-based learning, as seen from the next subsection.
3.6 Limitations of Consensus-based Potentials
As mentioned before, the consensus-based potentials in (3) can be interpreted as defining a Gaussian
prior (4) for f_c, where the mean is a weighted average of the m individual views. This averaging
indicates that the value of f_c is never higher (or lower) than that of any single view. While the
consensus-based potentials are intuitive and useful for many applications, they are limited for some
real world problems where the evidence from different views should be additive (or enhanced) rather
than averaging. For instance, when a radiologist is making a diagnostic decision about a lung cancer
patient, he might look at both the CT image and the MRI image. If either of the two images gives
a strong evidence of cancer, he can make decision based on a single view; if both images give
an evidence of 0.6 (in a [0,1] scale), the final evidence of cancer should be higher (say, 0.8) than
either of them. It's clear that the multi-view learning in this scenario is not consensus-based. While
all the previously proposed co-training and co-regularization algorithms have thus far been based
on enforcing consensus between the views, in principle our graphical model allows other forms of
relationships between the views. In particular, potentials other than those in (3) should be of great interest for future research.

Table 1: Results for Citeseer with different numbers of training data (pos/neg). Bold face indicates best performance. Bayesian co-training is significantly better than the others (p-value 0.01 in Wilcoxon rank sum test) except in AUC with "Train +2/-10".

Model                  # Train +2/-10                        # Train +4/-20
                       AUC               F1                  AUC               F1
Text                   0.5725 ± 0.0180   0.1359 ± 0.0565     0.5770 ± 0.0209   0.1443 ± 0.0705
Inbound Link           0.5451 ± 0.0025   0.3510 ± 0.0011     0.5479 ± 0.0035   0.3521 ± 0.0017
Outbound Link          0.5550 ± 0.0119   0.3552 ± 0.0053     0.5662 ± 0.0124   0.3600 ± 0.0059
Text+Link              0.5730 ± 0.0177   0.1386 ± 0.0561     0.5782 ± 0.0218   0.1474 ± 0.0721
Co-Trained GPLR        0.6459 ± 0.1034   0.4001 ± 0.2186     0.6519 ± 0.1091   0.4042 ± 0.2321
Bayesian Co-Training   0.6536 ± 0.0419   0.4210 ± 0.0401     0.6880 ± 0.0300   0.4530 ± 0.0293
4 Experimental Study
Toy Examples: We show some 2D toy classification problems to visualize the co-training result (in
Fig. 2). Our first example is a two-Gaussian case where either feature x(1) or x(2) can fully solve the
problem (top left). This is an ideal case for co-training since: 1) each single view is sufficient to train
a classifier, and 2) both views are conditionally independent given the class labels. The second toy
data is a bit harder since the two Gaussians are aligned to the x(1)-axis. In this case the feature x(2)
is totally irrelevant to the classification problem. The canonical co-training fails here (top center)
since when we add labels using the x(2) feature, noisy labels will be introduced and propagated to
future training. The proposed model can handle this situation since we can adapt the weight of
each view and penalize the feature x(2) (top right). Our third toy data follows an XOR shape where
four Gaussians form a binary classification problem that is not linearly separable (bottom left). In
this case both assumptions mentioned above are violated, and co-training failed completely (bottom
center). A supervised learning model can however easily recover the non-linear underlying structure
(bottom right). This indicates that the co-training kernel Kc is not suitable for this problem.
Web Data: We use two sets of linked documents for our experiment. The Citeseer data set contains
3,312 entries that belong to six classes. There are three natural views: the text view consists of
title and abstract of a paper; the two link views are inbound and outbound references. We pick up
the largest class which contains 701 documents and test the one-vs-rest classification performance.
The WebKB data set is a collection of 4,502 academic web pages manually grouped into six classes
(student, faculty, staff, department, course, project). There are two views containing the text on the
page and the anchor text of all inbound links, respectively. We consider the binary classification
problem ?student? against ?faculty?, for which there are 1,641 and 1,119 documents, respectively.
We compare the single-view learning methods (Text, Inbound Link, etc.), the concatenated-view method (Text+Link), and the co-training methods Co-Trained GPLR (Co-Trained Gaussian Process Logistic Regression) and Bayesian Co-Training. Linear kernels are used for all the competing methods. For the canonical co-training method we repeat 50 times and in each iteration add
the most predictable 1 positive sample and r negative samples into the training set where r depends
on the number of negative/positive ratio of each data set. Performance is evaluated using AUC score
and F1 measure. We vary the number of training documents (with ratio proportional to the true
positive/negative ratio), and all the co-training algorithms use all the unlabeled data in the training
process. The experiments are repeated 20 times and the prediction means and standard deviations
are shown in Table 1 and 2.
It can be seen that for Citeseer the co-training methods are better than the supervised methods. In this
case Bayesian co-training is better than canonical co-training and achieves the best performance.
For WebKB, however, canonical co-trained GPLR is not as good as supervised algorithms, and thus
Bayesian co-training is also worse than supervised methods though a little better than co-trained
GPLR. This is maybe because the Text and Link features are not independent given the class labels
(especially when two classes "faculty" and "staff" might share features). Canonical co-training has
higher deviations than other methods due to the possibility of adding noisy labels. We have also
tried other number of iterations but 50 seems to give an overall best performance.
Table 2: Results for WebKB with different numbers of training data (pos/neg). Bold face indicates best performance. No results are significantly better than all the others (p-value 0.01 in Wilcoxon rank sum test).

Model                  # Train +2/-2                         # Train +4/-4
                       AUC               F1                  AUC               F1
Text                   0.5767 ± 0.0430   0.4449 ± 0.1614     0.6150 ± 0.0594   0.5338 ± 0.1267
Inbound Link           0.5211 ± 0.0017   0.5761 ± 0.0013     0.5210 ± 0.0019   0.5758 ± 0.0015
Text+Link              0.5766 ± 0.0429   0.4443 ± 0.1610     0.6150 ± 0.0594   0.5336 ± 0.1267
Co-Trained GPLR        0.5624 ± 0.1058   0.5437 ± 0.1225     0.5959 ± 0.0927   0.5737 ± 0.1203
Bayesian Co-Training   0.5794 ± 0.0491   0.5562 ± 0.1598     0.6140 ± 0.0675   0.5742 ± 0.1298
Note that single-view learning with Text almost achieves the same performance as the
concatenated-view method. This is because there are many more text features than
link features (e.g., for WebKB there are 24,480 text features and only 901 link features). So these
multiple views are very unbalanced and should be taken into account in co-training with different
weights. Bayesian co-training provides a natural way of doing this.
5 Conclusions
This paper has two principal contributions. We have proposed a graphical model for combining multi-view data, and shown that previously derived co-regularization based training algorithms maximize the likelihood of this model. In the process, we showed that these algorithms have been making an intrinsic assumption of the form p(f_c, f_1, f_2, ..., f_m) ∝ ψ(f_c, f_1) ψ(f_c, f_2) ⋯ ψ(f_c, f_m), even though it was not explicitly realized earlier. We also studied circumstances when this assumption proves unreasonable. Thus, our first contribution was to clarify the implicit assumptions and limitations in multi-view consensus learning in general, and co-regularization in particular.
Motivated by the insights from the graphical model, our second contribution was the development
of alternative algorithms for co-regularization; in particular, the development of a non-stationary
co-training kernel and of methods for using side information in classification. Unlike
previously published co-regularization algorithms, our approach: (a) naturally handles more than 2
views; (b) automatically learns which views of the data should be trusted more while predicting class
labels; (c) leverages previously developed methods for efficiently training GPs/SVMs;
(d) clearly states our assumptions and what is being optimized overall; (e) does not suffer from
local maxima problems; and (f) is less computationally demanding in terms of both speed and memory
requirements.
2,493 | 3,261 | Subspace-Based Face Recognition in Analog VLSI
Gonzalo Carvajal, Waldo Valenzuela and Miguel Figueroa
Department of Electrical Engineering, Universidad de Concepción
Casilla 160-C, Correo 3, Concepción, Chile
{gcarvaja, waldovalenzuela, miguel.figueroa}@udec.cl
Abstract
We describe an analog-VLSI neural network for face recognition based on
subspace methods. The system uses a dimensionality-reduction network
whose coefficients can be either programmed or learned on-chip to perform PCA, or programmed to perform LDA. A second network with user-programmed coefficients performs classification with Manhattan distances.
The system uses on-chip compensation techniques to reduce the effects of
device mismatch. Using the ORL database with 12x12-pixel images, our
circuit achieves up to 85% classification performance (98% of an equivalent
software implementation).
1 Introduction
Subspace-based techniques for face recognition, such as Eigenfaces [1] and Fisherfaces [2],
take advantage of the large redundancy present in most images to compute a lower-dimensional representation of their input data and stored patterns, and perform classification in the reduced subspace. Doing so substantially lowers the storage and computational
requirements of the face-recognition task.
However, most techniques for dimensionality reduction require a high computational
throughput to transform images from the large input data space to the feature subspace.
Therefore, software [3] and even dedicated digital hardware implementations [4, 5] are too large
and power-hungry to be used in highly portable systems. Analog VLSI circuits can compute using orders of magnitude less power and die area than their digital counterparts,
but their performance is limited by signal offsets, parameter mismatch, charge leakage and
nonlinear behavior, particularly in large-scale systems. Traditional circuit-design techniques
can reduce these effects, but they increase power and area, rendering analog solutions less
attractive.
In this paper, we present a neural network for face recognition which implements Principal
Components Analysis (PCA) and Linear Discriminant Analysis (LDA) for dimensionality
reduction, and Manhattan distances and a loser-take-all (LTA) circuit for classification.
We can download the network weights in a chip-in-the-loop configuration, or use on-chip
learning to compute PCA coefficients. We use local adaptation to achieve good classification
performance in the presence of device mismatch. The circuit die area is 2.2 mm² in a 0.35 µm
CMOS process, with an estimated power dissipation of 18 mW. Using PCA reduction and
a hard classifier, our network achieves up to 83% accuracy on the Olivetti Research Labs
(ORL) face database [6] using 12x12-pixel images, which corresponds to 99% of the accuracy
of a software implementation of the algorithm. Using LDA projections and a software Radial
Basis Function (RBF) network on the hardware-computed distances yields 85% accuracy
(98% of the software performance).
2 Eigenspace based face recognition methods
The problem of face recognition consists of assigning an identity to an unknown face by
comparing it to a database of labeled faces. However, the dimensionality of the input
images is usually so high that performing the classification on the original data becomes
prohibitively expensive.
Fortunately, human faces exhibit relatively regular statistics; therefore, their intrinsic dimensionality is much lower than that of their images. Subspace methods transform the
input images to reduce their dimensionality, and perform the classification task on this
lower-dimensional feature space. In particular, the Eigenfaces [1] method performs dimensionality reduction using PCA, and classification by choosing the stored face with the lowest
distance to the input data.
Principal Components Analysis uses a linear transformation from the input space to the
feature space, which preserves most of the information (in the mean-square error sense)
present in the original vector. Consider a column vector x of dimension n, formed by
the concatenated columns of the input image. Let the matrix X_{n×N} = {x_1, x_2, . . . , x_N}
represent a set of N images, such as the image database available for a face recognition
task. PCA computes a new matrix Y_{m×N}, with m < n:
Y = W*^T X      (1)
The columns of Y are the lower-dimensional projections of the original images in the feature
space. The columns of the orthogonal transformation matrix W* are the eigenvectors
associated to the m largest eigenvalues of the covariance matrix of the original image space.
Upon presentation of a new face image, the Eigenfaces method first transforms this image
into the feature space using the transformation matrix W*, and then computes the distance
between the reduced image and each image class in the reference database. The image is
classified with the identity of the closest reference pattern.
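As a concrete reference, the Eigenfaces projection and matching steps described above can be sketched in a few lines of numpy. This is an illustration with our own variable names and array shapes, not the chip implementation described later:

import numpy as np

def pca_basis(X, m):
    # X: n x N matrix with one vectorized training image per column.
    mean_face = X.mean(axis=1, keepdims=True)
    Xc = X - mean_face                          # center the data on the mean face
    C = Xc @ Xc.T / Xc.shape[1]                 # n x n covariance matrix
    evals, evecs = np.linalg.eigh(C)            # eigenvalues in ascending order
    W = evecs[:, -m:]                           # m leading eigenvectors (columns of W*)
    return W, mean_face.ravel()

def project(W, mean_face, x):
    # Eq. (1) applied to a single image: y = W*^T x
    return W.T @ (x - mean_face)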
Fisherfaces [2] performs dimensionality reduction using Linear Discriminant Analysis (LDA).
LDA takes advantage of labeled data to maximize the distance between classes in the projected subspace. Considering X_i, i = 1, . . . , c, as subsets of X containing the N_i images of
the same subject, LDA defines two matrices:
S_W = Σ_{i=1}^{c} Σ_{x_k ∈ X_i} (x_k − m_i)(x_k − m_i)^T,   with   m_i = (1/N_i) Σ_{k=1}^{N_i} x_k      (2)

S_B = Σ_{i=1}^{c} N_i (m_i − m)(m_i − m)^T      (3)
where S_W represents the scatter (variance) within classes, and S_B is the scatter between
different classes. To perform the dimensionality reduction of Eqn. (1), LDA constructs W*
such that its columns are the m largest eigenvectors of S_W^{-1} S_B. This requires S_W to be
non-singular, which is often not the case; therefore, LDA frequently uses a PCA preprocessing
stage [2].
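A software sketch of this LDA construction, written to mirror Eqs. (2) and (3), could read as follows. The naming is our own, and the pseudo-inverse stands in for the PCA preprocessing that guards against a singular S_W:

import numpy as np

def lda_basis(X, labels, m):
    # X: n x N data matrix; labels: length-N array of class ids.
    n, N = X.shape
    mu = X.mean(axis=1)
    Sw = np.zeros((n, n))                       # within-class scatter, Eq. (2)
    Sb = np.zeros((n, n))                       # between-class scatter, Eq. (3)
    for c in np.unique(labels):
        Xc = X[:, labels == c]
        mc = Xc.mean(axis=1)
        D = Xc - mc[:, None]
        Sw += D @ D.T
        Sb += Xc.shape[1] * np.outer(mc - mu, mc - mu)
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(evals.real)[::-1]        # m largest eigenvectors of Sw^{-1} Sb
    return evecs[:, order[:m]].real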
Fisherfaces can perform classification using a hard classifier on the computed distances
between the test data and stored patterns in the LDA subspace, as in Eigenfaces, or it
can use a Radial Basis Function (RBF) network. RBF uses a hidden layer of neurons with
Gaussian activation functions to detect clusters in the projected subspace.
Traditionally, subspace methods use Euclidean distances. However, our experiments show
that, as long as the dimensionality reduction preserves enough distance between classes,
less computationally expensive distance metrics such as the Manhattan distance are equally
effective for classification. The Manhattan distance between two vectors x = [x_1 . . . x_n]
and y = [y_1 . . . y_n] is given by:

d = Σ_{i=1}^{n} |x_i − y_i|      (4)
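In software, the corresponding nearest-pattern classification step is a one-liner; a minimal numpy illustration (our own naming; the chip computes the same quantity in analog hardware):

import numpy as np

def classify(y, F, labels):
    # y: m-dim projection of the test image; F: m x k stored projections; labels: length-k ids.
    d = np.abs(F - y[:, None]).sum(axis=0)      # Manhattan distances, Eq. (4)
    return labels[int(np.argmin(d))]            # nearest stored pattern wins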
[Figure 1 appears here: (a) architecture, (b) projection network, (c) distance computation; see the caption below.]
Figure 1: Face-recognition hardware. (a) Architecture. A dimensionality-reduction network
projects an n-dimensional image onto m dimensions, and a loser-take-all (LTA) circuit labels
the image by choosing the nearest stored face in the reduced space. (b) The dimensionality
reduction network is an array of linear combiners with weights that have been pre-computed
or learned on chip. (c) The distance circuit computes the Manhattan distance between the
m projections of the test image and the stored face database. In our current implementation,
n = 144, m = 39, and k = 40.
3 Hardware Implementation
Fig. 1(a) shows the architecture of our face-recognition network. It follows the signal flow
described in Section 2, where the n-dimensional test image x is first projected onto the m-dimensional feature space (test data y) using an array of m n-input analog linear combiners,
shown in Fig. 1(b). The constant input c is a bias used to compensate for the offset introduced by the analog multipliers. The network also stores the m projections of the database
face set (the training set) in an array of analog memories. A distance computation block,
shown in Fig. 1(c), computes the Manhattan distance between each labeled element in the
stored training set and the reduced test data y. A loser-take-all (LTA) circuit, currently
implemented in software, selects the smallest distance and labels the test image with the
selected class.
The linear combiners are based on the synapse shown in Fig. 2(a). An analog Gilbert multiplier computes the product of each pixel of the input image, represented as a differential
voltage, and the local synaptic weight. An accurate transformation requires a multiplier
response that is linear in the pixel value, therefore we designed the multipliers to maximize
the linearity of that input. Device mismatch introduces offsets and gain variance across different multipliers in the network; we describe the calibration techniques used to compensate
for these effects in Section 4. The multipliers provide a differential current output, therefore
we can add them across a single neuron by connecting them to common wires.
Each synaptic weight is stored in an analog nonvolatile memory cell [7] based on floatinggate transistors, shown also in Fig. 2(a). The cell features linear weight-updates based on
digital pulses applied to the terminals inc and dec. Using local calibration, also based on
floating gates, we independently tune each synapse to achieve symmetric updates in the
presence of device mismatch, and to make the update rates uniform across the entire chip.
As a result, the resolution of the memory cell exceeds 12 bits in a 0.35 µm CMOS process.

[Figure 2 appears here: (a) hardware synapse, (b) distance circuit; see the caption below.]

Figure 2: (a) The synapse comprises a Gilbert multiplier and a nonvolatile analog
memory cell with local calibration. The output currents are summed across each neuron.
(b) Each component of the Manhattan distance is computed as the subtraction of the
corresponding principal components and an optional inversion based on the sign of the
result. The output currents are summed across all components.
Fig. 2(b) depicts the circuit used to compute the Manhattan distance between the test data
and the stored patterns. Each projection of the training set is stored as a current in an analog
memory cell, simpler and smaller than the cell used in the dimensionality reduction network,
and written using a self-limiting write process. The difference between each projection of
the pattern and the test input is computed by inverting the polarity of one of the signals
and adding the currents. To compute the absolute value, a current comparator based on a
simple transconductance amplifier determines the sign of the result and uses a 2 × 2 crossbar
switch to invert the polarity of the outputs if needed.
As stated in Section 5, our current implementation considers 12×12-pixel images (n = 144
in Fig. 1). We compute 39 projections using PCA and LDA, and perform the classification
using 40 Manhattan-distance units on the 39-dimensional projections. The next section
analyzes the effects of device mismatch on the dimensionality-reduction network.
4 Analog implementation of dimensionality reduction networks
The arithmetic distortions introduced by the nonlinear transfer function of the analog multipliers, coupled with the effects of device mismatch (offsets and gains), affect the accuracy
of the operations performed by the reduction network and become the limiting factor in
the classification performance. In order to achieve good performance, we must calibrate the
network to compensate for the effect of these limitations.
In this section, we analyze and design solutions for two different cases. First, we consider the
case when a computer performs PCA or LDA to determine W? off-line, and downloads the
weights onto the chip. Second, we analyze the performance of adaptive on-chip computation
of PCA using a Hebbian-learning algorithm. In both cases, we design mechanisms that use
local on-chip adaptation to compensate for the offsets and gain variances introduced by
device mismatch, thus improving classification performance. In the following analysis we
assume that the inputs have zero mean and have been normalized. Also, for simplicity, we
assume that the inputs and weights are operating within the linear range of the multipliers.
We remove these assumptions when presenting experimental results. Thus, our analysis uses
a simplified model of the analog multipliers given by:
o = (a_x x + δ_x)(a_w w + δ_w)      (5)
where o is the multiplier output, x and w are the inputs, δ_x and δ_w represent the input
offsets, and a_x and a_w are the multiplier gains associated with each input. These parameters
vary across different multipliers due to device mismatch and are unknown at design time,
and difficult to determine even after circuit fabrication.
4.1 Dimensionality reduction with precomputed weights
Let us consider an analog linear combiner such as the one depicted in Fig. 1(b), which
computes the first projection y of x, using the first column w* of the software-precomputed
optimal transformation W* of Eqn. (1). Using the simplified multiplier linear model of
Eqn. (5), the linear combiner computes the first projection as:
y = x^T (A_x A_w w* + A_x δ_w) + δ_x^T (A_w w* + δ_w)      (6)
where A_x = diag([a_x1 . . . a_xn]), A_w = diag([a_w1 . . . a_wn]), δ_x = [δ_x1 . . . δ_xn]^T, and
δ_w = [δ_w1 . . . δ_wn]^T represent the gains and offsets of each multiplier. Eqn. (6) shows that
device mismatch has two effects on the output: the first term modifies the effective weight
value of the network, and the second term represents an offset added to the output (w* is
a constant).
Replacing w* with an adaptive version w_k, the structure becomes a classic adaptive linear
combiner which, using the optimal weights to generate a reference output signal, can be
trained using the well known Least-Mean Squares (LMS) algorithm. Adding a bias synapse
b with constant input c and training the network with LMS, the weights converge to [7]:
w̄* = (A_x A_w)^{-1} (w* − A_x δ_w)      (7)

b̄* = −(δ_x^T (A_w w̄* + δ_w) + c δ_b)(c a_b)^{-1}      (8)

where a_b and δ_b are the gain and offset of the analog multiplier associated with the bias input
c. These weight values fully compensate for the effects of gain mismatch and offsets.
In our hardware implementation, we use m adaptive linear combiners to compute every
projection in the feature space, and calibrate these circuits using on-chip LMS local adaptation to compute and store the optimal weight values of Eqns. (7) and (8), achieving a good
approximation of the optimal output Y. Fig. 3(a) shows our analog-VLSI implementation
of LMS. We train the weight values in the memory cells by providing inputs and a reference
output to each linear combiner, and use an on-chip pulse-based compact implementation
of the LMS learning rule. In order to improve the convergence of the algorithm, we draw
the inputs from a zero-mean random Gaussian distribution. Thus, the performance of the
dimensionality reduction network is ultimately limited by the resolution of the memory
cells, the reference noise, the learning rate of the LMS training stage, and the linearity of the
multipliers. This last effect can be controlled by restricting the dynamic range of the input
to the linear range of the multipliers.
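A behavioral simulation of this calibration step fits in a few lines of numpy. The mismatch parameters below are random stand-ins of our own choosing, and the pulse-based on-chip update is idealized as a floating-point LMS step:

import numpy as np

rng = np.random.default_rng(0)
n = 144                                         # inputs per linear combiner
w_star = rng.standard_normal(n)                 # precomputed PCA/LDA column (reference)
ax = 1 + 0.1 * rng.standard_normal(n)           # per-multiplier input gains
aw = 1 + 0.1 * rng.standard_normal(n)           # per-multiplier weight gains
dx = 0.05 * rng.standard_normal(n)              # input offsets
dw = 0.05 * rng.standard_normal(n)              # weight offsets
ab, db, c = 1.0, 0.05, 1.0                      # bias-synapse gain, offset, constant input

w, b, eta = np.zeros(n), 0.0, 1e-3              # on-chip weights and bias
for _ in range(20000):
    x = rng.standard_normal(n)                  # zero-mean Gaussian training input
    y_ref = x @ w_star                          # reference output (software weights)
    y = np.sum((ax * x + dx) * (aw * w + dw)) + c * (ab * b + db)   # Eq. (5) per synapse
    e = y_ref - y
    w += eta * e * x                            # LMS updates: w, b approach Eqs. (7)-(8)
    b += eta * e * c

At convergence the fixed point of the mean update is exactly the compensated weight vector of Eq. (7), with the bias absorbing the residual offset of Eq. (8).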
To measure the accuracy of our implementation, we computed (in software) the first 10
principal components of one half the Olivetti Research Labs (ORL) face database, reduced
to 12x12 pixels, and used our on-chip implementation of LMS to train the hardware network
to learn the coefficients. We then measured the output of the circuit on the other half of
the database. Fig. 3(b) plots the RMS value of the error between the circuit output and the
software results, normalized to the RMS value of each principal component. The figure also
shows the error when we wrote the coefficients onto the circuit in open-loop, without using
LMS. In this case, offset and gain mismatch completely obscure the information present
in the signal. LMS training compensates for these effects, and reduces the error energy to
between 0.25% and 1% of the energy of the signal. A different experiment (not shown)
computing LDA coefficients yields equivalent results.
4.2 On-chip PCA computation
In some cases, such as when the face-recognition network is integrated with a camera on a
single chip, it may be necessary to train the face database on-chip. It is not practical for the
chip to include the hardware resources to compute the optimal weights from the eigenvalue
analysis of the training set's covariance matrix, therefore we compute them on chip using
the standard Generalized Hebbian Algorithm (GHA). The computation of the first principal
component and the learning rule to update the weights at time k are:
y_k = x_k^T w_k      (9)

Δw_k = η y_k (x_k − x′_k)      (10)

x′_k = y_k w_k      (11)
[Figure 3 appears here: (a) LMS computation, (b) output error of the PCA network (normalized RMS error, log scale, versus principal component); see the caption below.]
Figure 3: Training the PCA network with LMS. (a) Block diagram of our LMS implementation. We present random inputs to each linear combiner, and provide a reference output. A
pulse-based implementation of the LMS learning rule updates the memory cells. (b) RMS
value of the error for the first 10 principal components, normalized to the RMS value of
each PC.
where η is the learning rate of the algorithm and x′_k is the reconstruction of the input
x_k from the first principal component. The distortion introduced to the output by gain
mismatch and offsets in Eqn. (9) is identical to Eqn. (6). Similarly to LMS, it is easy
to show that a bias input c connected to a synapse b with an anti-Hebbian learning rule
Δb_k = −η_b c ȳ_k removes the constant offset added to the output. Therefore, we can eliminate
the second term of Eqn. (6) and express the output as:
ȳ_k = x_k^T (A_x A_w w_k + A_x δ_w) = x_k^T w̄_k      (12)

Using analog multipliers to compute x′_k, we obtain:

x̄′_k = ȳ_k (A_y A′_w w_k + A_y δ′_w) + δ_y (A′_w w_k + δ′_w)      (13)

where A_y, A′_w, and δ′_w are the gains and offsets associated with the multipliers used
to compute y_k w_k. Replacing Eqns. (12) and (13) in Eqn. (10), we determine the effective
learning rule modified by device mismatch:

Δw_k = η ȳ_k (x − ȳ_k (A_y A′_w w_k + A_y δ′_w)) = η ȳ_k (x − ȳ_k w̄′_k)      (14)

If we use the same analog multipliers to compute ȳ_k and x̄′_k, then A_x = A_y, A_w = A′_w,
and δ_w = δ′_w, and the learning rule becomes:

Δw_k = η ȳ_k (x − ȳ_k w̄_k)      (15)

where ȳ_k and w̄_k are the modified output and weight defined in Eqn. (12). Eqn. (15) is
equivalent to the original learning rule in Eqn. (10), but with a new weight vector modified
by device mismatch.
A convergence analysis for Eqn. (15) is complicated, but by analogy to LMS we can show that
the weights indeed converge to the same values given in Eqns. (7) and (8), which compensate
for the effects of gain mismatch and offset. Simulation results verify this assumption. Note
that this will only be the case if we use the same hardware multipliers to compute y_k and
x′_k. The analysis extends naturally to the higher-order principal components.
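For reference, the basic update of Eqs. (9)-(11), without the analog non-idealities, can be simulated in a few lines of numpy (an illustrative sketch with our own parameter choices):

import numpy as np

def gha_first_pc(X, eta=0.01, epochs=50, seed=0):
    # X: n x N zero-mean data matrix; returns the first principal component.
    rng = np.random.default_rng(seed)
    w = 0.01 * rng.standard_normal(X.shape[0])
    for _ in range(epochs):
        for k in rng.permutation(X.shape[1]):
            x = X[:, k]
            y = x @ w                           # Eq. (9): project onto current weights
            x_rec = y * w                       # Eq. (11): rank-1 reconstruction
            w += eta * y * (x - x_rec)          # Eq. (10): GHA/Oja update
    return w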
Fig. 4(a) shows our implementation of the GHA learning rule. The multiplexer shares the
analog multipliers between the computation of y_k and x′_k, and is controlled by a digital signal
that alternates its value during the computation and adaptation phases of the algorithm.
Unlike LMS, GHA trains the algorithm using the images from the training set. Fig. 4(b)
shows the normalized RMS value of the output error for the first 10 principal components.
Comparing it to Fig. 3(b), the error is significantly higher than LMS, moving between 4%
and 35% of the energy of the output. This higher error is due in part to the nonlinear
multiply in the computation of x′_k, and because there is a strong dependency between
the learning rates used to update the bias synapse and the other weights in the network.
However, as Section 5 shows, this error does not translate into a large degradation in the
face classification performance.
[Figure 4 appears here: (a) GHA computation, (b) output error of the PCA network (normalized RMS error, log scale, versus principal component); see the caption below.]
Figure 4: Training the PCA network with GHA. (a) We reuse the multiplier to compute
x′_k and use a pulse-based implementation of the GHA rule. (b) RMS value of the error for
the first 10 principal components, normalized to the RMS value of each PC.
5 Classification Results
We designed and fabricated arithmetic circuits for the building blocks described in the
previous sections using a 0.35 µm CMOS process, including analog memory cells, multipliers,
and weight-update rules for LMS and GHA. We characterized these circuits in the lab and
built a software emulator that allows us to test the static performance of different network
configurations with less than 0.5% error. We simulated the LTA circuit in software. Using
the emulator, we tested the performance of the face-recognition network on the Olivetti
Research Labs (ORL) database, consisting of 10 photos of each of 40 total subjects. We
used 5 random photos of each subject for the training set and 5 for testing. Limitations
in our circuit emulator forced us to reduce the images to 12 × 12 pixels. The estimated
power consumption of the circuit with these 144 inputs and 39 projections is 18 mW (540 nJ
per classification with a 30 µs settling time), and the layout area is 2.2 mm². These numbers
represent a 2–5x reduction in area and more than a 100x reduction in power compared to
standard cell-based digital implementations [4, 5].
Fig. 5(a) shows the classification performance of the network using PCA for dimensionality
reduction, versus the number of principal components in the subspace. First, we tested
the network using PCA for dimensionality reduction. The figure shows the performance
of a software implementation of PCA with Euclidean distances, hardware PCA trained
with LMS and software-computed weights, and hardware PCA trained with on-chip GHA.
Both hardware implementations use Manhattan distances and a software LTA. The plots
show the mean of the classification accuracy computed for each of the 40 individuals in the
database. The error bars show one standard deviation above and below the mean. The
software implementation peaks at 84% classification accuracy, while the hardware LMS and
GHA implementations peak at 83% and 79%, respectively. Note that GHA performs only
slightly worse than LMS, mainly because we compute and store the principal components
of the training set in the face database using the same PCA network used to reduce the
dimensionality of the test images, which helps to preserve the distance between classes in
the feature space. The standard deviations are similar in all cases. Using an uncalibrated
network brings the performance below 5%, mainly due to the offsets in the multipliers which
change the PCA projection and take the signals outside of their nominal operating range.
Fig. 5(b) shows the classification results using LDA in the dimensionality-reduction
network. The results are slightly better than with PCA, and the error bars also show a lower
variance. The performance of the software implementation of LDA with a hard classifier
based on Euclidean distances is 83%. The LMS-trained hardware network with Manhattan
distances and a software LTA yields 82%. Replacing the LTA with a software RBF classifier,
the chip achieves 85% classification performance, while the software implementation (not
shown) peaks at 87%. Using 40x40-pixel images and 39 projections, the software LDA
network with RBF achieves more than 98% classification accuracy. Therefore, our current
results are limited by the resolution of the input images.
[Figure 5 appears here: (a) classification performance for PCA (curves: PCA+dist (SW), PCA with LMS+dist (HW), GHA+dist (HW)) and (b) classification performance for LDA (curves: LDA+dist+LTA (SW), LDA+dist+LTA (HW), LDA+dist (HW)+RBF (SW)), both versus number of projections; see the caption below.]
Figure 5: Classification performance for a 12 × 12-pixel version of the ORL database versus number of projections, using PCA and LDA for dimensionality reduction. Computing
coefficients off-chip and writing them on the chip using LMS yields between 83% and 85%
classification performance for PCA and LDA, respectively. This represents 98%-99% of the
performance of a software implementation.
6 Conclusions
We presented an analog-VLSI network for face-recognition using subspace methods. We
analyzed the effects of device mismatch on the performance of the dimensionality-reduction
network and tested two techniques based on local adaptation which compensate for gain
mismatch and offsets. We showed that using LMS to train the network on precomputed
coefficients to perform PCA or LDA performs better than using GHA to learn PCA coefficients on chip. Ultimately, both techniques perform similarly in the face-classification task
with the ORL database, achieving a classification performance of 83%-85% (98%-99% of a
software implementation of the algorithms). Simulation results show that the performance
is currently limited by the resolution of the input images. We are currently working on the
integration of LTA and RBF classifiers on chip, and on support for higher-dimensional inputs.
Acknowledgments
This work was funded by the Chilean government through FONDECYT grant No. 1070485.
The authors would like to thank Dr. Seth Bridges for his valuable contribution to this work.
References
[1] M. Turk and A. Pentland. Face Recognition Using Eigenfaces. Proc. of IEEE Conf. on Computer Vision and Pattern Recognition, pages 586–591, 1991.
[2] Peter Belhumeur, Joao Hespanha, and David J. Kriegman. Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):711–720, 1997.
[3] A. U. Batur, B. E. Flinchbaugh, and M. H. Hayes III. A DSP-Based approach for the implementation of face recognition algorithms. In IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003. Proceedings. (ICASSP '03), volume 2, pages 253–256, 2003.
[4] N. Shams, I. Hosseini, M. Sadri, and E. Azarnasab. Low Cost FPGA-Based Highly Accurate Face Recognition System Using Combined Wavelets With Subspace Methods. In IEEE International Conference on Image Processing, 2006, pages 2077–2080, 2006.
[5] C. S. S. Prasanna, N. Sudha, and V. Kamakoti. A Principal Component Neural Network-Based Face Recognition System and Its ASIC Implementation. In VLSI Design, pages 795–798, 2005.
[6] Ferdinando Samaria and Andy Harter. Parameterisation of a Stochastic Model for Human Face Identification. In IEEE Workshop on Applications of Computer Vision, Sarasota (Florida), December 1994.
[7] Miguel Figueroa, Esteban Matamala, Gonzalo Carvajal, and Seth Bridges. Adaptive Signal Processing in Mixed-Signal VLSI with Anti-Hebbian Learning. In IEEE Computer Society Annual Symposium on VLSI, pages 133–138, Karlsruhe, Germany, 2006. IEEE.
2,494 | 3,262 | Blind channel identification for speech dereverberation using l1-norm sparse learning
Yuanqing Lin†, Jingdong Chen‡, Youngmoo Kim§, Daniel D. Lee†
†GRASP Laboratory, Department of Electrical and Systems Engineering, University of Pennsylvania
‡Bell Laboratories, Alcatel-Lucent
§Department of Electrical and Computer Engineering, Drexel University
Abstract
Speech dereverberation remains an open problem after more than three decades
of research. The most challenging step in speech dereverberation is blind channel identification (BCI). Although many BCI approaches have been developed,
their performance is still far from satisfactory for practical applications. The main
difficulty in BCI lies in finding an appropriate acoustic model, which not only
can effectively resolve solution degeneracies due to the lack of knowledge of the
source, but also robustly models real acoustic environments. This paper proposes
a sparse acoustic room impulse response (RIR) model for BCI, that is, an acoustic RIR can be modeled by a sparse FIR filter. Under this model, we show how
to formulate the BCI of a single-input multiple-output (SIMO) system into a l1 norm regularized least squares (LS) problem, which is convex and can be solved
efficiently with guaranteed global convergence. The sparseness of solutions is
controlled by l1 -norm regularization parameters. We propose a sparse learning
scheme that infers the optimal l1 -norm regularization parameters directly from
microphone observations under a Bayesian framework. Our results show that the
proposed approach is effective and robust, and it yields source estimates in real
acoustic environments with high fidelity to anechoic chamber measurements.
1 Introduction
Speech dereverberation, which may be viewed as a denoising technique, is crucial for many speech
related applications, such as hands-free teleconferencing and automatic speech recognition. It is a
challenging signal processing task and remains an open problem after more than three decades of
research. Although many approaches [1] have been developed for speech dereverberation, blind
channel identification (BCI) is believed to be the key to thoroughly solving the dereverberation
problem. Most BCI approaches rely on source statistics (higher order statistics [2] or statistics
of LPC coefficients [3]), or spatial difference among multiple channels [4] for resolving solution
degeneracies due to the lack of knowledge of the source. The performance of these approaches
depends on how well they model real acoustic systems (mainly sources and channels). The BCI
approaches using source statistics need a long sequence of data to build up the statistics, and their
performance often degrades significantly in real acoustic environments where acoustic systems are
time-varying and only approximately time-invariant during a short time window. Besides the data
efficiency issue, there are some other difficulties in the BCI approaches using source statistics, for
example, non-stationarity of a speech source, whitening side effect, and non-minimum phase of
a filter [2]. In contrast, the BCI approaches exploiting channel spatial difference are blind to the
source, and thus they avoid those difficulties arising in assuming source statistics. Unfortunately,
these approaches are often too ill-conditioned to tolerate even a very small amount of ambient noise.
In general, BCI for speech dereverberation is an active research area, and the main challenge is how
to build an effective acoustic model that not only can resolve solution degeneracies due to the lack
of knowledge of the source, but also robustly models real acoustic environments.
1
To address the challenge, this paper proposes a sparse acoustic room impulse response (RIR) model
for BCI, that is, an acoustic RIR can be modeled by a sparse FIR filter. The sparse RIR model is
theoretically sound [5], and it has been shown to be useful for estimating RIRs in real acoustic environments when the source is given a priori [6]. In this paper, the sparse RIR model is incorporated
with channel spatial difference, resulting in a blind sparse channel identification (BSCI) approach for
a single-input multiple-output (SIMO) acoustic system. The BSCI approach aims to resolve some
of the difficulties in conventional BCI approaches. It is blind to the source and therefore avoids the
difficulties arising in assuming source statistics. Meanwhile, the BSCI approach is expected to be
robust to ambient noise. It has been shown that, when the source is given a priori [7], the prior
knowledge about sparse RIRs plays an important role in robustly estimating RIRs in noisy acoustic
environments. Furthermore, the statistics describing the sparseness of RIRs are governed by acoustic room characteristics, and thus they are close to be stationary with respect to a specific room. This
is advantageous in terms of both learning the statistics and applying them in channel identification.
Based on the cross relation formulation [4] of BCI, this paper develops a BSCI algorithm that incorporates the sparse RIR model. Our choice for enforcing sparsity is l1 -norm regularization [8], which
has been the driving force for many emerging fields in signal processing, such as sparse coding and
compressive sensing. In the context of BCI, two important issues need to be addressed when using
l1 -norm regularization. First, the existing cross relation formulation for BCI is nonconvex, and directly enforcing l1 -norm regularization will result in an intractable optimization. Second, l1 -norm
regularization parameters are critical for deriving correct solutions, and their improper setting may
lead to totally irrelevant solutions. To address these two issues, this paper shows how to formulate
the BCI of a SIMO system into a convex optimization, indeed an unconstrained least squares (LS)
problem, which provides a flexible platform for incorporating l1 -norm regularization; it also shows
how to infer the optimal l1 -norm regularization parameters directly from microphone observations
under a Bayesian framework.
We evaluate the proposed BSCI approach using both simulations and experiments in real acoustic
environments. Simulation results illustrate the effectiveness of the proposed sparse RIR model in
resolving solution degeneracies, and they show that the BSCI approach is able to robustly and accurately identify filters from noisy microphone observations. When applied to speech dereverberation
in real acoustic environments, the BSCI approach yields source estimates with high fidelity to anechoic chamber measurements. All of these demonstrate that the BSCI approach has the potential for
solving the difficult speech dereverberation problem.
2 Blind sparse channel identification (BSCI)
2.1 Previous work
Our BSCI approach is based on the cross relation formulation for blind SIMO channel identification [4]. In a one-speaker two-microphone system, the microphone signals at time k can be written
as:
x_i(k) = s(k) * h_i + n_i(k),   i = 1, 2,      (1)

where * denotes linear convolution, s(k) is a source signal, h_i represents the channel impulse response between the source and the ith microphone, and n_i(k) is ambient noise. The cross relation
formulation is based on a clever observation: x_2(k) * h_1 = x_1(k) * h_2 = s(k) * h_1 * h_2 if the microphone signals are noiseless [4]. Then, without requiring any knowledge from the source signal,
the channel filters can be identified by minimizing the squared cross relation error. In matrix-vector
form, the optimization can be written as
ĥ_1, ĥ_2 = argmin_{‖h_1‖² + ‖h_2‖² = 1} (1/2) ‖X_2 h_1 − X_1 h_2‖²      (2)
where X_i is the (N + L − 1) × L convolution Toeplitz matrix whose first row and first column are
[x_i(k − N + 1), 0, . . . , 0] and [x_i(k − N + 1), x_i(k − N + 2), . . . , x_i(k), 0, . . . , 0]^T, respectively, N
is the microphone signal length, L is the filter length, h_i (i = 1, 2) are L × 1 vectors representing the
filters, ‖·‖ denotes the l2-norm, and the constraint is to avoid the trivial zero solution. It is easy to see
that the above optimization is a minimum eigenvalue problem, and it can be solved by eigenvalue
decomposition. As shown in [4], the eigenvalue decomposition approach finds the true solution
within a constant time delay and a constant scalar factor when 1) the system is noiseless; 2) the two
filters are co-prime (namely, no common zeros); and 3) the system is sufficiently excited (i.e., the
source needs to have enough frequency bands).
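As a baseline for the discussion below, the eigenvalue-decomposition approach of Eq. (2) can be prototyped in a few lines of numpy/scipy (a sketch under ideal floating-point arithmetic; matrix shapes follow the definitions above):

import numpy as np
from scipy.linalg import toeplitz, eigh

def conv_matrix(x, L):
    # (N + L - 1) x L Toeplitz matrix such that conv_matrix(x, L) @ h == np.convolve(x, h)
    col = np.concatenate([x, np.zeros(L - 1)])
    row = np.concatenate([[x[0]], np.zeros(L - 1)])
    return toeplitz(col, row)

def bci_eig(x1, x2, L):
    X1, X2 = conv_matrix(x1, L), conv_matrix(x2, L)
    A = np.hstack([X2, -X1])                    # cross relation residual: A @ [h1; h2]
    evals, evecs = eigh(A.T @ A)                # eigenvalues in ascending order
    h = evecs[:, 0]                             # unit-norm minimizer of Eq. (2)
    return h[:L], h[L:]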
Unfortunately, the eigenvalue decomposition approach has not been demonstrated to be useful for
speech dereverberation in real acoustic environments. This is because the conditions for finding
true solutions are difficult to sustain. First, microphone signals in real acoustic environments are
always immersed in excessive ambient noise (such as air-conditioning noise), and thus the noiseless
assumption is never true. Second, it requires precise information about filter order for the filters to
be co-prime, however, the filter order itself is hard to compute accurately since the filters modeling
RIRs are often thousands of taps long. As a result, eigenvalue decomposition approach is often
ill-conditioned and very sensitive to even a very small amount of ambient noise.
Our proposed sparse RIR model aims to alleviate those difficulties. Under the sparse RIR model,
sparsity regularization automatically determines filter order since surplus filter coefficients are
forced to be zero. Furthermore, previous work [7] has demonstrated that, when the source is given a
priori, sparsity regularization plays an important role in robustly estimating RIRs in noisy acoustic
environments. In order to exploit the sparse RIR model, we first formulate the BCI using cross relation into a convex optimization, which will provide a flexible platform for enforcing l1 -norm sparsity
regularization.
2.2 Convex formulation
The optimization in Eq. 2 is nonconvex because its domain, ‖h_1‖² + ‖h_2‖² = 1, is nonconvex. We
propose to replace it with a convex singleton linear constraint, and the optimization becomes
ĥ_1, ĥ_2 = argmin_{h_1(l)=1} (1/2) ‖X_2 h_1 − X_1 h_2‖²      (3)
where h_1(l) is the lth element of filter h_1. It is easy to see that, when the microphone signals are
noiseless, the optimizations in Eqs. 2 and 3 yield equivalent solutions within a constant time delay
and a constant scalar factor. Because the optimization is a minimization, h_1(l) tends to align with
the largest coefficient in filter h_1, which normally is the coefficient corresponding to the direct path.
Consequently, the singleton linear constraint removes two degrees of freedom in filter estimates: a
constant time delay (by fixing l) and a constant scalar factor [by fixing h_1(l) = 1]. The choice of l
(0 ≤ l ≤ L − 1) is arbitrary as long as the direct path in filter h_2 is no more than l samples earlier
than the one in filter h_1.
The new formulation in Eq. 3 has many advantages. It is convex and indeed an unconstrained LS
problem since the singleton linear constraint can be easily substituted into the objective function.
Furthermore, the new LS formulation is more robust to ambient noise than the eigenvalue decomposition approach in Eq. 2. This can be better viewed in the frequency domain. Because the squared
cross relation error (the objective function in Eqs. 2 and 3) is weighted in the frequency domain by
the power spectrum density of a common source, the total filter energy constraint in Eq. 2 may be
filled with less significant frequency bands which contribute little to the source and are weighted
less in the objective function. As a result, the eigenvalue decomposition approach is very sensitive
to noise. In contrast, the singleton linear constraint in Eq. 3 has much less coupling in filter energy
allocation, and the new LS approach is more robust to ambient noise.
Then, the BSCI approach is to incorporate the LS formulation with l1 -norm sparsity regularization,
and the optimization becomes
ĥ_1, ĥ_2 = argmin_{h_1(l)=1} (1/2) ‖X_2 h_1 − X_1 h_2‖² + λ̄ Σ_{j=0}^{L−1} [|h_1(j)| + |h_2(j)|]      (4)
where λ̄ is a nonnegative scalar regularization parameter that balances the preference between the
squared cross relation error and the sparseness of solutions described by their l1-norm. The setting
of λ̄ is critical for deriving appropriate solutions, and we will show how to compute its optimal
setting in a Bayesian framework in Section 2.3. Given λ̄, the optimization in Eq. 4 is convex
and can be solved by various methods with guaranteed global convergence. We implemented the
Mehrotra predictor-corrector primal-dual interior point method [9], which is known to yield better
search directions than Newton's method. Our implementation usually solves the optimization in
Eq. 4 with extreme accuracy (relative duality gap less than 10^{-14}) in less than 20 iterations.
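An interior-point method is one option; an equally valid (if slower) convex solver for Eq. 4 is proximal-gradient descent (ISTA), which is easy to sketch and makes the role of λ̄ explicit. The following illustration is our own simplification, not the Mehrotra implementation described here; it reuses conv_matrix from the sketch above and keeps the singleton constraint by pinning the lth coefficient:

import numpy as np

def bsci_ista(X1, X2, L, lam_bar, l=0, iters=2000):
    # Eq. (4): min_h 0.5 * ||X2 h1 - X1 h2||^2 + lam_bar * ||h||_1  subject to h1[l] = 1.
    A = np.hstack([X2, -X1])                    # stacked unknown h = [h1; h2]
    h = np.zeros(2 * L)
    h[l] = 1.0                                  # singleton linear constraint
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
    for _ in range(iters):
        z = h - step * (A.T @ (A @ h))          # gradient step on the quadratic term
        h = np.sign(z) * np.maximum(np.abs(z) - step * lam_bar, 0.0)   # soft threshold
        h[l] = 1.0                              # project back onto h1(l) = 1
    return h[:L], h[L:]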
2.3 Bayesian l1 -norm sparse learning for blind channel identification
The l1-norm regularization parameter λ̄ in Eq. 4 is critical for deriving appropriately sparse solutions. How to determine its optimal setting is still an open research topic. A recent development is to
solve the optimization in Eq. 4 with respect to all possible values of λ̄ [10], and cross-validation is
then employed to find an appropriate solution. However, it is not easy to obtain extra data for cross-validation in BCI since real acoustic environments are often time-varying. In this study, we develop
a Bayesian framework for inferring the optimal regularization parameters for the BSCI formulation
in Eq. 4. A similar Bayesian framework can be found in [7], where the source was assumed to be
known a priori.
The optimization in Eq. 4 is a maximum-a-posteriori estimation under the following probabilistic
assumptions
P(X_2 h_1 − X_1 h_2 | σ², h_1, h_2) = (2πσ²)^{−(N+L−1)/2} exp{−(1/(2σ²)) ‖X_2 h_1 − X_1 h_2‖²},      (5)

P(h_1, h_2 | λ) = (λ/2)^{2L} exp{−λ Σ_{j=0}^{L−1} [|h_1(j)| + |h_2(j)|]}      (6)
where the cross relation error is I.I.D. zero-mean Gaussian with variance σ², and the filter coefficients are governed by a Laplacian sparse prior with scalar parameter λ. Then, the regularization
parameter λ̄ in Eq. 4 can be written as

λ̄ = σ² λ.      (7)
When the ambient noise [n_1(k) and n_2(k) in Eq. 1] is I.I.D. zero-mean Gaussian with variance
σ₀², the parameter σ² can be approximately written as

σ² = σ₀² (‖h_1‖² + ‖h_2‖²),      (8)
because x_2(k) * h_1 − x_1(k) * h_2 = n_2(k) * h_1 − n_1(k) * h_2. The above form of σ² is only an
approximation because the cross relation error is temporally correlated through the convolution.
Nevertheless, since the cross relation error is the result of the convolutive mixing, its distribution
will be close to the Gaussian with its variance described by Eq. 8, according to the central limit
theorem. We choose to estimate the ambient noise level (σ₀²) directly from microphone observations
via restricted maximum likelihood [11]:
σ₀² = min_{s,h_1,h_2} (1/(N − L − 1)) Σ_{i=1}^{2} Σ_{k=0}^{N−1} ‖x_i(k) − s(k) * h_i‖²      (9)

where the denominator N − L − 1 (rather than 2N) accounts for the loss of degrees of freedom
during the optimization. The above minimization is solved by coordinate descent, alternating
between the source and the filters. It is initialized with the LS solution of Eq. 3 and is often able to
yield a good σ₀² estimate in a few iterations. Note that each iteration can be computed efficiently in
the frequency domain.
\lambda = \frac{2L}{\sum_{j=0}^{L-1} \big[ |h_1(j)| + |h_2(j)| \big]},    (10)
as a result of finding the optimal Laplacian distribution given its sufficient statistics.
With Eqs. 8 and 10, finding the optimal regularization parameters reduces to computing the statistics of the filters, \|h_1\|^2 + \|h_2\|^2 and \sum_{j=0}^{L-1} [ |h_1(j)| + |h_2(j)| ]. These statistics are closely related to acoustic room characteristics and may be computed from them if they are known a priori. For example, the reverberation time of a room defines how fast echoes decay by 60 dB, and it can be used to compute the filter statistics. More generally, we choose to compute the statistics directly from microphone observations in the Bayesian framework by maximizing the marginal likelihood, P(X_2 h_1 - X_1 h_2 \,|\, \sigma^2, \lambda) = \int_{h_1(l)=1} P(X_2 h_1 - X_1 h_2, h_1, h_2 \,|\, \sigma^2, \lambda) \, dh_1 \, dh_2. The optimization is through Expectation-Maximization (EM) updates [7]:
\sigma^2 \leftarrow \sigma_0^2 \int_{h(l)=1} \big( \|h_1\|^2 + \|h_2\|^2 \big) \, Q(h_1, h_2) \, dh_1 \, dh_2    (11)

\lambda \leftarrow \frac{2L}{\int_{h(l)=1} \big( \sum_{j=0}^{L-1} |h_1(j)| + |h_2(j)| \big) \, Q(h_1, h_2) \, dh_1 \, dh_2}    (12)

where h_1 and h_2 are treated as hidden variables, σ² and λ are parameters, and Q(h_1, h_2) \propto \exp\{ -\frac{1}{2\sigma^2} \|X_2 h_1 - X_1 h_2\|^2 - \lambda \sum_{j=0}^{L-1} [ |h_1(j)| + |h_2(j)| ] \} is the probability distribution of h_1 and h_2 given the current estimates of σ² and λ. The integrals in Eqs. 11 and 12 can be computed using the variational scheme described in [7]. The EM updates often converge to a good estimate of σ² and λ in a few iterations. Moreover, since the filter statistics are relatively stationary for a specified room, the Bayesian inference may be carried out off-line and only once if the room conditions stay the same.
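A rough sketch of the resulting fixed-point loop is given below. The integrals in Eqs. 11 and 12 require the variational scheme of [7]; as a deliberately simplified stand-in, the sketch plugs in point estimates at the current MAP filters via Eqs. 8 and 10, so it approximates rather than reproduces the EM updates. It reuses the hypothetical bsci_filters solver sketched earlier.

    import numpy as np

    def infer_regularization(X1, X2, L, l, sigma0_sq, n_iter=5):
        """Approximate the EM updates of Eqs. 11-12 with plug-in point estimates."""
        lam = 1.0                     # initial Laplacian scale (illustrative)
        sigma_sq = sigma0_sq          # initial cross-relation error variance
        for _ in range(n_iter):
            # MAP filters at the current lambda' = sigma^2 * lambda (Eq. 7)
            h1, h2 = bsci_filters(X1, X2, L, l, sigma_sq * lam)
            # Eq. 8 stands in for the integral of Eq. 11
            sigma_sq = sigma0_sq * (np.sum(h1 ** 2) + np.sum(h2 ** 2))
            # Eq. 10 stands in for the integral of Eq. 12
            lam = 2 * L / (np.abs(h1).sum() + np.abs(h2).sum())
        return sigma_sq, lam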
After the filters are identified by BCI approaches, the source can be computed by various methods [12]. We choose to estimate the source by the following optimization

\hat{s} = \operatorname*{argmin}_{s} \sum_{i=1}^{2} \sum_{k=0}^{N-1} \| x_i(k) - s(k) * h_i \|^2,    (13)
which will yield maximum-likelihood (ML) estimation if the filter estimates are accurate.
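Read literally, Eq. 13 is a linear least-squares problem in s once the filters are fixed. The dense sketch below stacks the two convolution systems and solves them with NumPy; it is only practical for short signals (efficient implementations would work in the frequency domain, as noted above for Eq. 9), and the helper name is ours.

    import numpy as np
    from scipy.linalg import convolution_matrix

    def estimate_source(x1, x2, h1, h2):
        """Eq. 13 as a stacked least-squares problem (dense sketch)."""
        N = len(x1) - len(h1) + 1                      # source length ('full' conv.)
        H1 = convolution_matrix(h1, N, mode='full')    # x1 ~ H1 @ s
        H2 = convolution_matrix(h2, N, mode='full')    # x2 ~ H2 @ s
        A = np.vstack([H1, H2])
        b = np.concatenate([x1, x2])
        s_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
        return s_hat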
3 Simulations and Experiments
3.1 Simulations
3.1.1 Simulations with artificial RIRs
We first employ a simulated example to illustrate the effectiveness of the proposed sparse RIR model
for BCI. In the simulation, we used a speech sequence of 1024 samples (with 16 kHz sampling rate)
as the source (s) and simulated two 16-sample FIR filters (h1 and h2 ). The filter h1 had nonzero
elements only at indices 0, 2, and 12 with amplitudes of 1, -0.7, and 0.5, respectively; the filter h2 had
nonzero elements only at indices 2, 6, 8, and 10 with amplitudes of 1, -0.6, 0.6, and 0.4, respectively.
Notice that both h1 and h2 are sparse. Then the simulated microphone observations (x1 and x2 )
were computed by Eq. 1 with the ambient noise being real noise recorded in a classroom. The noise
was scaled so that the signal-to-noise ratio (SNR) of the microphone signals was approximately 20
dB. Because a big portion of the noise (mainly air-conditioning noise) was at low frequency, the
microphone observations were high-passed with a cut-off frequency of 100 Hz before they were fed
to BCI algorithms. In the BSCI algorithm, the l1-norm regularization parameters, σ² and λ, were estimated in the Bayesian framework using the update rules given in Eqs. 11 and 12.
Figure 1 shows the filters identified by different BCI approaches. Compared to the conventional
eigenvalue decomposition method (Eq. 2), the new convex LS approach (Eq. 3) is more robust to
ambient noise and yielded better filter estimates even though the estimates still seem to be convolved
by a common filter. The proposed BSCI approach (Eq. 4) yielded filter estimates that are almost
identical to the true ones. It is evident that the proposed sparse RIR model played a crucial role in
robustly and accurately identifying filters in blind manners. The robustness and accuracy gained by
the BSCI approach will become essential when the filters are thousands of taps long in real acoustic
environments.
3.1.2 Simulations with measured RIRs
Here we employ simulations using RIRs measured in real rooms to demonstrate the effectiveness of the proposed BSCI approach for speech dereverberation. Its performance is compared
to the beamforming, the eigenvalue decomposition (Eq. 2), and the LS (Eq. 3) approaches.
In the simulation, the source sequence (s) was a sentence of speech (approximately 1.5 seconds), and the filters (h1 and h2 ) were two measured RIRs from York MARDY database
(http://www.commsp.ee.ic.ac.uk/sap/mardy.htm) but down-sampled to 16 kHz (from originally 48
kHz). The original filters in the database were not sparse, but they had many tiny coefficients which
were in the range of measurement uncertainty. To make the simulated filters sparse, we simply
zeroed out those coefficients whose amplitudes were less than 2% of the maximum. Finally, we
truncated the filters to have length of 2048 since there were very few nonzero coefficients after that.
With the simulated source and filters, we then computed microphone observations using Eq. 1 with
ambient noise being real noise recorded in a classroom. For testing the robustness of different BCI
algorithms, the ambient noise was scaled to different levels so that the SNRs varied from 60 dB to 10
dB. Similar to the previous simulations, the simulated observations were high-passed with a cutoff
Figure 1: Identified filters by three different BCI approaches in a simulated example: the eigenvalue decomposition approach (denoted as eig-decomp) in Eq. 2, the LS approach in Eq. 3, and the blind sparse channel
identification (BSCI) approach in Eq. 4. The solid-dot lines represent the estimated filters, and the dot-square
lines indicate the true filters within a constant time delay and a constant scalar factor.
Figure 2: The simulation results using measured real RIRs. Left panel: filter estimates; right panel: source estimates (normalized correlation in % versus noise level in dB). The normalized correlation (defined in Eq. 14) of the estimates was computed with respect to their true values. The filters were identified by three different approaches: the eigenvalue decomposition approach (denoted as eigen-decomp) in Eq. 2, the LS approach in Eq. 3, and the blind sparse channel identification (BSCI) approach in Eq. 4. After the filters were identified, the source was estimated by Eq. 13. The source estimated by beamforming is also presented as a baseline reference.
frequency of 100 Hz before they were fed to different BCI algorithms. In the BSCI approach, the
l1-norm regularization parameters were iteratively computed using the updates in Eqs. 11 and 12.
After filters were identified, the source was estimated using Eq. 13.
Because both filter and source estimates by BCI algorithms are within a constant time delay and a constant scalar factor, we use normalized correlation for evaluating the estimates. Let ŝ and s_0 denote an estimated source and the true source, respectively; then the normalized correlation C(ŝ, s_0) is defined as

C(\hat{s}, s_0) = \max_m \frac{\sum_k \hat{s}(k - m) \, s_0(k)}{\|\hat{s}\| \, \|s_0\|}    (14)

where m and k are sample indices, and ‖·‖ denotes the l2-norm. It is easy to see that the normalized correlation is between 0% and 100%: it is equal to 0% when the two signals are uncorrelated, and it is equal to 100% only when the two signals are identical within a constant time delay and a constant scalar factor. The definition in Eq. 14 is also applicable to the evaluation of filter estimates.
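Eq. 14 translates directly into a few lines of NumPy. The sketch below is an illustrative reading: np.correlate evaluates the lag sum over all shifts m, and the absolute value is added to tolerate a sign flip in the unknown scalar factor (drop it for the literal Eq. 14).

    import numpy as np

    def normalized_correlation(s_hat, s0):
        """Eq. 14: max over shifts m of sum_k s_hat(k - m) s0(k), normalized."""
        xcorr = np.correlate(s_hat, s0, mode='full')       # all relative shifts m
        denom = np.linalg.norm(s_hat) * np.linalg.norm(s0)
        return np.max(np.abs(xcorr)) / denom               # in [0, 1]; report as %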
The simulation results are shown in Fig. 2. Similar to what we observed in the previous example,
the convex LS approach (Eq. 3) shows significant improvement in both filter and source estimation
compared to the eigenvalue decomposition approach (Eq. 2). In fact, the eigenvalue decomposition
Figure 3: The source estimates of 10 experiments in real acoustic environments. The normalized correlation was computed with respect to the anechoic chamber measurement. The filters were identified by three different BCI approaches: the eigenvalue decomposition approach (denoted as eig-decomp) in Eq. 2, the LS approach in Eq. 3, and the blind sparse channel identification (BSCI) approach in Eq. 4. The beamforming results serve as the baseline performance for comparison.
Figure 4: Results of Experiment 6 in Fig. 3. Left: the filters estimated by the proposed blind sparse channel
identification (BSCI) approach. They are sparse as indicated by the enlarged segments. Right: a segment of
source estimate (shown in C) using the BSCI approach. It is compared with its anechoic measurement (shown
in A) and its microphone recording (shown in B).
approach did not yield relevant results because it was too ill-conditioned due to the long filters.
The remarkable performance came from the BSCI approach, which incorporates the convex LS
formulation with the sparse RIR model. In particular, the BSCI approach yielded higher than 90%
normalized correlation in source estimates when SNR was better than 20 dB, and it yielded higher
than 99% normalized correlation in the low noise limit. The performance of the canonical delay-and-sum beamforming is also presented as the baseline for all BCI algorithms.
3.2 Experiments
We also evaluated the proposed BSCI approach using signals recorded in real acoustic environments. We carried out 10 experiments in total in a reverberant room. In each experiment, a sentence
of speech (approximately 1.5 seconds, and the same for all experiments) was played through a loudspeaker (NSW2-326-8A, Aura Sound) and recorded by a matched omnidirectional microphone pair
(M30MP, Earthworks). The speaker-microphone positions (and thus RIRs) were different in different experiments. Because the recordings had a large amount of low-frequency noise, they were
high-passed with a cutoff frequency of 100 Hz before they were fed to BCI algorithms. In the
BSCI approach, the l1 -norm regularization parameters, ? 2 and ?, were iteratively computed using
the updates in Eq. 11 and 12. After the filters were identified, the sources were computed using
Eq. 13. We also had recordings in the anechoic chamber at Bell Labs using the same instruments
and settings, and the anechoic measurement served as the approximate ground truth for evaluating
the performance of different BCI approaches.
7
Figure 3 shows the source estimates in the 10 experiments in terms of their normalized correlation
to the anechoic measurement. The performance of the proposed BSCI is compared with the beamforming, the eigenvalue decomposition (Eq. 2), and the convex LS (Eq. 3) approaches. The results of
the 10 experiments unanimously support our previous findings in simulations. First, the convex LS
approach yielded significantly better source estimates than the eigenvalue decomposition method.
Second, the proposed BSCI approach, which incorporates the convex LS formulation with the sparse
RIR model, yielded the most dramatic results, achieving 85% or higher normalized correlation in
source estimates in most experiments while the LS approach only obtained approximately 70% of
normalized correlation.
Figure 4 shows one instance of filter and source estimates. The estimated filters have about 2000 zeros out of a total of 3072 coefficients, and thus they are sparse. This observation experimentally validates our hypothesis of the sparse RIR model, namely, that an acoustic RIR can be modeled by a sparse FIR filter. The source estimate shown in Fig. 4 vividly illustrates the convolution and dereverberation process; only a small segment is plotted, to reveal greater detail. As we see, the anechoic measurement was clean and had clear harmonic structure; the signal recorded in the reverberant room was smeared by echoes during the convolution process; and the dereverberation using our BSCI approach deblurred the signal and recovered the underlying harmonic structure.
4 Discussion
We propose a blind sparse channel identification (BSCI) approach for speech dereverberation. It
consists of three important components. The first is the sparse RIR model, which effectively resolves
solution degeneracies and robustly models real acoustic environments. The second is the convex
formulation, which guarantees global convergence of the proposed BSCI algorithm. And the third
is the Bayesian l1 -norm sparse learning scheme that infers the optimal regularization parameters
for deriving optimally sparse solutions. The results demonstrate that the proposed BSCI approach
holds the potential to solve the speech dereverberation problem in real acoustic environments, which
has been recognized as a very difficult problem in signal processing. The acoustic data used in this
paper are available at http://www.seas.upenn.edu/~linyuanq/Research.html.
Our future work includes side-by-side comparison between our BSCI approach and existing source
statistics based BCI approaches. Our goal is to build a uniform framework that combines various
prior knowledge about acoustic systems for best solving the speech dereverberation problem.
References
[1] T. Nakatani, M. Miyoshi, and K. Kinoshita, "One microphone blind dereverberation based on quasi-periodicity of speech signals," in NIPS 16, 2004.
[2] A. Hyvarinen, J. Karhunen, and E. Oja, Independent Component Analysis, New York, NY: John Wiley and Sons, 2001.
[3] H. Attias, J. C. Platt, A. Acero, and L. Deng, "Speech denoising and dereverberation using probabilistic models," in NIPS 13, 2000.
[4] L. Tong, G. Xu, and T. Kailath, "Blind identification and equalization based on second-order statistics: A time domain approach," IEEE Trans. Information Theory, vol. 40, no. 2, pp. 340-349, 1994.
[5] J. B. Allen and D. A. Berkley, "Image method for efficiently simulating small-room acoustics," J. Acoustical Society of America, vol. 65, pp. 943-950, 1979.
[6] D. L. Duttweiler, "Proportionate normalized least-mean-squares adaptation in echo cancelers," IEEE Trans. Speech Audio Processing, vol. 8, pp. 508-518, 2000.
[7] Y. Lin and D. D. Lee, "Bayesian L1-norm sparse learning," in Proc. ICASSP, 2006.
[8] S. S. Chen, D. L. Donoho, and M. A. Saunders, "Atomic decomposition by basis pursuit," SIAM J. Scientific Computing, vol. 20, no. 1, pp. 33-61, 1998.
[9] S. J. Wright, Primal-Dual Interior Point Methods, Philadelphia, PA: SIAM, 1997.
[10] D. M. Malioutov, M. Cetin, and A. S. Willsky, "Homotopy continuation for sparse signal representation," in Proc. ICASSP, 2005.
[11] D. A. Harville, "Maximum likelihood approaches to variance component estimation and to related problems," J. American Statistical Association, vol. 72, pp. 320-338, 1977.
[12] M. Miyoshi and Y. Kaneda, "Inverse filtering of room acoustics," IEEE Trans. Acoustics, Speech, and Signal Processing, vol. 36, no. 2, pp. 145-152, 1988.
2,495 | 3,263 | Optimal ROC Curve for a Combination of Classifiers
Marco Barreno
Alvaro A. Cárdenas
J. D. Tygar
Computer Science Division
University of California at Berkeley
Berkeley, California 94720
{barreno,cardenas,tygar}@cs.berkeley.edu
Abstract
We present a new analysis for the combination of binary classifiers. Our analysis
makes use of the Neyman-Pearson lemma as a theoretical basis to analyze combinations of classifiers. We give a method for finding the optimal decision rule for a
combination of classifiers and prove that it has the optimal ROC curve. We show
how our method generalizes and improves previous work on combining classifiers
and generating ROC curves.
1 Introduction
We present an optimal way to combine binary classifiers in the Neyman-Pearson sense: for a given
upper bound on false alarms (false positives), we find the set of combination rules maximizing the
detection rate (true positives). This forms the optimal ROC curve of a combination of classifiers.
This paper makes the following original contributions: (1) We present a new method for finding
the meta-classifier with the optimal ROC curve. (2) We show how our framework can be used to
interpret, generalize, and improve previous work by Provost and Fawcett [1] and Flach and Wu [2].
(3) We present experimental results that show our method is practical and performs well, even when
we must estimate the distributions with insufficient data.
In addition, we prove the following results: (1) We show that the optimal ROC curve is composed in general of 2^n + 1 different decision rules and of the interpolation between these rules (over the space of 2^{2^n} possible Boolean rules). (2) We prove that our method is optimal in this space. (3) We prove that the Boolean AND and OR rules are always part of the optimal set for the special case of independent classifiers (though in general we make no independence assumptions). (4) We prove a sufficient condition for Provost and Fawcett's method to be optimal.
2 Background
Consider classification problems where examples from a space of inputs X are associated with binary labels {0, 1} and there is a fixed but unknown probability distribution P(x, c) over examples (x, c) ∈ X × {0, 1}. H0 and H1 denote the events that c = 0 and c = 1, respectively.

A binary classifier is a function f : X → {0, 1} that predicts labels on new inputs. When we use the term "classifier" in this paper we mean binary classifier. We address the problem of combining results from n base classifiers f1, f2, ..., fn. Let Yi = fi(X) be a random variable indicating the output of classifier fi, and let Y = (Y1, Y2, ..., Yn) ∈ {0, 1}^n. We can characterize the performance of classifier fi by its detection rate (also true positives, or power) PDi = Pr[Yi = 1 | H1] and its false alarm rate (also false positives, or test size) PFi = Pr[Yi = 1 | H0]. In this paper we are concerned with proper classifiers, that is, classifiers where PDi > PFi. We sometimes omit the subscript i.
The Receiver Operating Characteristic (ROC) curve plots PF on the x-axis and PD on the y-axis
(ROC space). The point (0, 0) represents always classifying as 0, the point (1, 1) represents always
classifying as 1, and the point (0, 1) represents perfect classification. If one classifier?s curve has no
points below another, it weakly dominates the latter. If no points are below and at least one point
is strictly above, it dominates it. The line y = x describes a classifier that is no better than chance,
and every proper classifier dominates this line. When an ROC curve consists of a single point, we
connect it with straight lines to (0, 0) and (1, 1) in order to compare it with others (see Lemma 1).
In this paper, we focus on base classifiers that occupy a single point in ROC space. Many classifiers
have tunable parameters and can produce a continuous ROC curve; our analysis can apply to these
cases by choosing representative points and treating each one as a separate classifier.
2.1 The ROC convex hull
Provost and Fawcett [1] give a seminal result on the use of ROC curves for combining classifiers.
They suggest taking the convex hull of all points of the ROC curves of the classifiers. This ROC
convex hull (ROCCH) combination rule interpolates between base classifiers f1 , f2 , . . . , fn , selecting (1) a single best classifier or (2) a randomization between the decisions of two classifiers for
every false alarm rate [1]. This approach, however, is not optimal: as pointed out in later work by
Fawcett, the Boolean AND and OR rules over classifiers can perform better than the ROCCH [3].
AND and OR are only 2 of the 2^{2^n} possible Boolean rules over the outputs of n base classifiers (n classifiers → 2^n possible outcomes → 2^{2^n} rules over outcomes). We address finding optimal rules.
2.2 The Neyman-Pearson lemma
In this section we introduce Neyman-Pearson theory from the framework of statistical hypothesis testing [4, 5], which forms the basis of our analysis.

We test a null hypothesis H0 against an alternative H1. Let the random variable Y have probability distributions P(Y|H0) under H0 and P(Y|H1) under H1, and define the likelihood ratio ℓ(Y) = P(Y|H1)/P(Y|H0). The Neyman-Pearson lemma states that the likelihood ratio test

D(Y) = \begin{cases} 1 & \text{if } \ell(Y) > \tau \\ \gamma & \text{if } \ell(Y) = \tau \\ 0 & \text{if } \ell(Y) < \tau \end{cases}    (1)

for some τ ∈ (0, ∞) and γ ∈ [0, 1], is a most powerful test for its size: no other test has higher PD = Pr[D(Y) = 1 | H1] for the same bound on PF = Pr[D(Y) = 1 | H0]. (When ℓ(Y) = τ, D = 1 with probability γ and 0 otherwise.) Given a test size α, we maximize PD subject to PF ≤ α by choosing τ and γ as follows. First we find the smallest value τ* such that Pr[ℓ(Y) > τ* | H0] ≤ α. To maximize PD, which is monotonically nondecreasing with PF, we choose the highest value γ* that satisfies Pr[D(Y) = 1 | H0] = Pr[ℓ(Y) > τ* | H0] + γ* Pr[ℓ(Y) = τ* | H0] ≤ α, finding γ* = (α - Pr[ℓ(Y) > τ* | H0]) / Pr[ℓ(Y) = τ* | H0].
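This threshold selection is a few lines of code once the outcome probabilities under H0 are tabulated. The following sketch is ours (not from the paper) and assumes the likelihood ratios and probabilities are given as arrays over the outcomes:

    import numpy as np

    def np_test_params(lr, p_h0, alpha):
        """Choose (tau*, gamma*) for a likelihood ratio test of size alpha.
        lr[i] = l(y_i) and p_h0[i] = P(y_i | H0) over all outcomes y_i."""
        vals = np.unique(lr)                                   # candidate thresholds
        exceed = np.array([p_h0[lr > v].sum() for v in vals])  # Pr[l(Y) > v | H0]
        i = np.argmax(exceed <= alpha)        # smallest tau* whose tail is <= alpha
        tau = vals[i]
        p_eq = p_h0[lr == tau].sum()          # Pr[l(Y) = tau* | H0]
        gamma = (alpha - exceed[i]) / p_eq if p_eq > 0 else 0.0
        return tau, gamma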
3 The optimal ROC curve for a combination of classifiers
We characterize the optimal ROC curve for a decision based on a combination of arbitrary classifiers: for any given bound α on PF, we maximize PD. We frame this problem as a Neyman-Pearson hypothesis test parameterized by the choice of α. We assume nothing about the classifiers except that each produces an output in {0, 1}. In particular, we do not assume the classifiers are independent or related in any way.

Before introducing our method we analyze the one-classifier case (n = 1).
Lemma 1 Let f1 be a classifier with performance probabilities PD1 and PF1. Its optimal ROC curve is a piecewise linear function parameterized by a free parameter α bounding PF: for α < PF1, PD(α) = (PD1/PF1)α, and for α > PF1, PD(α) = [(1 - PD1)/(1 - PF1)](α - PF1) + PD1.

Proof. When α < PF1, we can obtain a likelihood ratio test by setting τ* = ℓ(1) and γ* = α/PF1, and for α > PF1, we set τ* = ℓ(0) and γ* = (α - PF1)/(1 - PF1). □
The intuitive interpretation of this result is that to decrease or increase the false alarm rate of the classifier, we randomize between using its predictions and always choosing 1 or 0. In ROC space, this forms lines interpolating between (PF1, PD1) and (1, 1) or (0, 0), respectively.
To generalize this result for the combination of n classifiers, we require the distributions P(Y|H0) and P(Y|H1). With this information we then compute and sort the likelihood ratios ℓ(y) for all outcomes y ∈ {0, 1}^n. Let L be the list of likelihood ratios ranked from low to high.

Lemma 2 Given any 0 ≤ α ≤ 1, the ordering L determines parameters τ* and γ* for a likelihood ratio test of size α.

Lemma 2 sets up a classification rule for each interval between likelihoods in L and interpolates between them to create a test with size exactly α. Our meta-classifier does this for any given bound on its false positive rate, then makes predictions according to Equation 1. To find the ROC curve for our meta-classifier, we plot PD against PF for all 0 ≤ α ≤ 1. In particular, for each y ∈ {0, 1}^n we can compute Pr[ℓ(Y) > ℓ(y) | H0], which gives us one value for τ* and a point in ROC space (PF and PD follow directly from L and P). Each τ* will turn out to be the slope of a line segment between adjacent vertices, and varying γ* interpolates between the vertices. We call the ROC curve obtained in this way the LR-ROC.
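A minimal sketch of this construction (our code, not the authors'): sorting the outcomes by likelihood ratio and accumulating tail probabilities yields the LR-ROC vertices directly.

    import numpy as np

    def lr_roc(p_h1, p_h0):
        """Vertices of the LR-ROC from P(y|H1) and P(y|H0) over all 2^n outcomes.
        Segments between adjacent vertices are realized by randomizing (gamma)."""
        lr = p_h1 / np.maximum(p_h0, 1e-12)    # guard against P(y|H0) = 0
        order = np.argsort(-lr)                # admit highest-l(y) outcomes first
        pf = np.concatenate([[0.0], np.cumsum(p_h0[order])])
        pd = np.concatenate([[0.0], np.cumsum(p_h1[order])])
        return pf, pd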
Theorem 1 The LR-ROC weakly dominates the ROC curve of any possible combination of Boolean functions g : {0, 1}^n → {0, 1} over the outputs of n classifiers.

Proof. Let α′ be the probability of false alarm PF for g. Let τ* and γ* be chosen for a test of size α′. Then our meta-classifier's decision rule is a likelihood ratio test. By the Neyman-Pearson lemma, no other test has higher power for any given size. Since ROC space plots power on the y-axis and size on the x-axis, this means that the PD for g at PF = α′ cannot be higher than that of the LR-ROC. Since this is true at any α′, the LR-ROC weakly dominates the ROC curve for g. □
3.1 Practical considerations
To compute all likelihood ratios for the classifier outcomes we need to know the probability distributions P(Y|H0) and P(Y|H1). In practice these distributions need to be estimated. The simplest
method is to run the base classifiers on a training set and count occurrences of each outcome. It is
likely that some outcomes will not occur in the training, or will occur only a small number of times.
Our initial approach to deal with small or zero counts when estimating was to use add-one smoothing. In our experiments, however, simple special-case treatment of zero counts always produced
better results than smoothing, both on the training set and on the test set. See Section 5 for details.
Furthermore, the optimal ROC curve may have a different likelihood ratio for each possible outcome from the n classifiers, and therefore a different point in ROC space, so optimal ROC curves in general have up to 2^n points. This implies an exponential (in the number of classifiers) lower bound on the running time of any algorithm to compute the optimal ROC curve for a combination of classifiers. For a handful of classifiers, such a bound is not problematic, but it is impractical to compute the optimal ROC curve for dozens or hundreds of classifiers. (However, by computing and sorting the likelihood ratios we avoid a 2^{2^n}-time search over all possible classification functions.)
4 Analysis
4.1 The independent case
In this section we take an in-depth look at the case of two binary classifiers f1 and f2 that are conditionally independent given the input's class, so that P(Y1, Y2 | Hc) = P(Y1 | Hc) P(Y2 | Hc) for c ∈ {0, 1} (this section is the only part of the paper in which we make any independence assumptions). Since Y1 and Y2 are conditionally independent, we do not need the full joint distribution; we need only the probabilities PD1, PF1, PD2, and PF2 to find the combined PD and PF. For example, ℓ(01) = ((1 - PD1) PD2)/((1 - PF1) PF2).

The assumption that f1 and f2 are conditionally independent and proper defines a partial ordering on the likelihood ratio: ℓ(00) < ℓ(10) < ℓ(11) and ℓ(00) < ℓ(01) < ℓ(11). Without loss of
Table 1: Two probability distributions.

(a)  Class 1 (H1):                  Class 0 (H0):
            Y1=0   Y1=1                    Y1=0   Y1=1
     Y2=0   0.2    0.375            Y2=0   0.5    0.1
     Y2=1   0.1    0.325            Y2=1   0.3    0.1

(b)  Class 1 (H1):                  Class 0 (H0):
            Y1=0   Y1=1                    Y1=0   Y1=1
     Y2=0   0.2    0.1              Y2=0   0.1    0.3
     Y2=1   0.2    0.5              Y2=1   0.5    0.1
generality, we assume ℓ(00) < ℓ(01) < ℓ(10) < ℓ(11). This ordering breaks the likelihood ratio's range (0, ∞) into five regions; choosing τ in each region defines a different decision rule.
The trivial cases 0 ≤ τ < ℓ(00) and ℓ(11) < τ < ∞ correspond to always classifying as 1 and 0, respectively; PD and PF are therefore both equal to 1 and both equal to 0, respectively. For the case ℓ(00) ≤ τ < ℓ(01), Pr[ℓ(Y) > τ] = Pr[Y = 01 ∨ Y = 10 ∨ Y = 11] = Pr[Y1 = 1 ∨ Y2 = 1]. Thresholds in this range define an OR rule for the classifiers, with PD = PD1 + PD2 - PD1 PD2 and PF = PF1 + PF2 - PF1 PF2. For the case ℓ(01) ≤ τ < ℓ(10), we have Pr[ℓ(Y) > τ] = Pr[Y = 10 ∨ Y = 11] = Pr[Y1 = 1]. Therefore the performance probabilities are simply PD = PD1 and PF = PF1. Finally, the case ℓ(10) ≤ τ < ℓ(11) implies that Pr[ℓ(Y) > τ] = Pr[Y = 11], and therefore thresholds in this range define an AND rule, with PD = PD1 PD2 and PF = PF1 PF2. Figure 1a illustrates this analysis with an example.
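These five regions can be enumerated directly. The sketch below (our code) returns the LR-ROC vertices, in order of increasing PF, for two conditionally independent proper classifiers under the ordering assumed above:

    def independent_pair_points(pd1, pf1, pd2, pf2):
        """LR-ROC vertices for two conditionally independent proper classifiers,
        under the ordering l(00) < l(01) < l(10) < l(11) assumed in the text
        (so the middle non-trivial rule is Y1 alone)."""
        or_point = (pf1 + pf2 - pf1 * pf2, pd1 + pd2 - pd1 * pd2)  # Y1 or Y2
        and_point = (pf1 * pf2, pd1 * pd2)                         # Y1 and Y2
        return [(0.0, 0.0), and_point, (pf1, pd1), or_point, (1.0, 1.0)]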
The assumption of conditional independence is a sufficient condition for ensuring that the AND and
OR rules improve on the ROCCH for n classifiers, as the following result shows.
Theorem 2 If the distributions of the outputs of n proper binary classifiers Y1, Y2, ..., Yn are conditionally independent given the instance class, then the points in ROC space for the rules AND (Y1 ∧ Y2 ∧ ⋯ ∧ Yn) and OR (Y1 ∨ Y2 ∨ ⋯ ∨ Yn) are strictly above the convex hull of the ROC curves of the base classifiers f1, ..., fn. Furthermore, these Boolean rules belong to the LR-ROC.

Proof. The likelihood ratio of the case when AND outputs 1 is given by ℓ(11⋯1) = (PD1 PD2 ⋯ PDn)/(PF1 PF2 ⋯ PFn). The likelihood ratio of the case when OR does not output 1 is given by ℓ(00⋯0) = [(1 - PD1)(1 - PD2) ⋯ (1 - PDn)]/[(1 - PF1)(1 - PF2) ⋯ (1 - PFn)]. Now recall that for proper classifiers fi, PDi > PFi and thus (1 - PDi)/(1 - PFi) < 1 < PDi/PFi. It is now clear that ℓ(00⋯0) is the smallest likelihood ratio and ℓ(11⋯1) is the largest, since the other ratios are obtained only by swapping some factors PDi/PFi and (1 - PDi)/(1 - PFi), and therefore the OR and AND rules will always be part of the optimal set of decisions for conditionally independent classifiers. These rules are strictly above the ROCCH: because ℓ(11⋯1) > PD1/PF1, and PD1/PF1 is the slope of the line from (0, 0) to the first point in the ROCCH (f1), the AND point must be above the ROCCH. A similar argument holds for OR since ℓ(00⋯0) < (1 - PDn)/(1 - PFn). □
4.2 Two examples
We return now to the general case with no independence assumptions. We present two example
distributions for the two-classifier case that demonstrate interesting results.
The first distribution appears in Table 1a. The likelihood ratio values are ℓ(00) = 0.4, ℓ(10) = 3.75, ℓ(01) = 1/3, and ℓ(11) = 3.25, giving us ℓ(01) < ℓ(00) < ℓ(11) < ℓ(10). The three non-trivial rules correspond to the Boolean functions Y1 ∨ ¬Y2, Y1, and Y1 ∧ ¬Y2. Note that Y2 appears only negatively despite being a proper classifier, and both the AND and OR rules are sub-optimal.
The distribution for the second example appears in Table 1b. The likelihood ratios of the outcomes are ℓ(00) = 2.0, ℓ(10) = 1/3, ℓ(01) = 0.4, and ℓ(11) = 5, so ℓ(10) < ℓ(01) < ℓ(00) < ℓ(11), and the three points defining the optimal ROC curve are ¬Y1 ∨ Y2, ¬(Y1 ⊕ Y2), and Y1 ∧ Y2 (see Figure 1b). In this case, an XOR rule emerges from the likelihood ratio analysis.
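Feeding the joint distribution of Table 1b through the likelihood ratio computation reproduces this ordering numerically (an illustrative check, not from the paper):

    import numpy as np

    # Joint distribution of Table 1b, outcomes ordered y = (Y1 Y2): 00, 01, 10, 11
    p_h1 = np.array([0.2, 0.2, 0.1, 0.5])
    p_h0 = np.array([0.1, 0.5, 0.3, 0.1])
    lr = dict(zip(['00', '01', '10', '11'], p_h1 / p_h0))
    # lr == {'00': 2.0, '01': 0.4, '10': 0.333..., '11': 5.0}
    # l(10) < l(01) < l(00) < l(11): a threshold between l(01) and l(00)
    # admits exactly {00, 11}, i.e., the rule not(Y1 xor Y2)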
These examples show that for true optimal results it is not sufficient to use weighted voting rules w1 Y1 + w2 Y2 + ⋯ + wn Yn ≥ τ, where wi ∈ (0, ∞) (like some ensemble methods). Weighted voting always has AND and OR rules in its ROC curve, so it cannot always express optimal rules.
Figure 1: (a) ROC for two conditionally independent classifiers. (b) ROC curve for the distributions
in Table 1b. (c) Original ROC curve and optimal ROC curve for example in Section 4.4.
4.3 Optimality of the ROCCH
We have seen that in some cases, rules exist with points strictly above the ROCCH. As the following
result shows, however, there are conditions under which the ROCCH is optimal.
Theorem 3 Consider n classifiers f1, ..., fn. The convex hull of points (PFi, PDi) with (0, 0) and (1, 1) (the ROCCH) is an optimal ROC curve for the combination if (Yi = 1) ⇒ (Yj = 1) for i < j and the following ordering holds: ℓ(00⋯0) < ℓ(00⋯01) < ℓ(00⋯011) < ⋯ < ℓ(1⋯1).

Proof. The condition (Yi = 1) ⇒ (Yj = 1) for i < j implies that we only need to consider n + 2 points in the ROC space (the two extra points are (0, 0) and (1, 1)) rather than 2^n. It also implies the following conditions on the joint distribution: Pr[Y1 = 0 ∧ ⋯ ∧ Yi = 0 ∧ Yi+1 = 1 ∧ ⋯ ∧ Yn = 1 | H0] = PF,i+1 - PFi, and Pr[Y1 = 1 ∧ ⋯ ∧ Yn = 1 | H0] = PF1. With these conditions and the ordering condition on the likelihood ratios, we have Pr[ℓ(Y) > ℓ(1⋯1) | H0] = 0 and Pr[ℓ(Y) > ℓ(0⋯0 1⋯1) | H0] = PFi, where the outcome 0⋯0 1⋯1 has i zeros. Therefore, finding the optimal threshold of the likelihood ratio test for PF,i-1 ≤ α < PFi, we get τ* = ℓ(0⋯0 1⋯1) with i - 1 zeros, and for PFi ≤ α < PF,i+1, τ* = ℓ(0⋯0 1⋯1) with i zeros. This change in τ* implies that the point PFi is part of the LR-ROC. Setting α = PFi (thus τ* = ℓ(0⋯0 1⋯1) with i zeros and γ* = 0) implies Pr[ℓ(Y) > τ* | H1] = PDi. □
The condition (Yi = 1) ⇒ (Yj = 1) for i < j is the same inclusion condition Flach and Wu use for repairing an ROC curve [2]. It intuitively represents the performance in ROC space of a single classifier with different operating points. The next section explores this relationship further.
4.4 Repairing an ROC curve
Flach and Wu give a voting technique to repair concavities in an ROC curve that generates operating
points above the ROCCH [2]. Their intuition is that points underneath the convex hull can be
mirrored to appear above the convex hull in much the same way as an improper classifier can be
negated to obtain a proper classifier. Although their algorithm produces better ROC curves, their
solution will often yield curves with new concavities (see for example Flach and Wu's Figure 4 [2]).
Their algorithm has a similar purpose to ours, but theirs is a local greedy optimization technique,
while our method performs a global search in order to find the best ROC curve.
Figure 1c shows an example comparing their method to ours. Consider the following probability distribution on a random variable Y ∈ {0, 1}^2: P((00, 10, 01, 11) | H1) = (0.1, 0.3, 0.0, 0.6), P((00, 10, 01, 11) | H0) = (0.5, 0.001, 0.4, 0.099). Flach and Wu's method assumes the original ROC curve to be repaired has three models, or operating points: f1 predicts 1 when Y ∈ {11}, f2 predicts 1 when Y ∈ {11, 01}, and f3 predicts 1 when Y ∈ {11, 01, 10}. If we apply Flach and Wu's repair algorithm, the point f2 is corrected to the point f2′; however, the operating points of f1 and f3 remain the same.
Figure 2: Empirical ROC curves (Pd versus Pfa) for experimental results on four UCI datasets: (a) adult, (b) hypothyroid, (c) sick-euthyroid, (d) sick. Each panel shows the meta-classifier and base classifiers on training and test sets (Meta/Base, train/test) together with PART.
Our method improves on this result by ordering the likelihood ratios ℓ(01) < ℓ(00) < ℓ(11) < ℓ(10) and using that ordering to make three different rules: f1′ predicts 1 when Y ∈ {10}, f2′ predicts 1 when Y ∈ {10, 11}, and f3′ predicts 1 when Y ∈ {10, 11, 00}.
5 Experiments
We ran experiments to test the performance of our combining method on the adult, hypothyroid,
sick-euthyroid, and sick datasets from the UCI machine learning repository [6]. We chose five base
classifiers from the YALE machine learning platform [7]: PART (a decision list algorithm), SMO
(Sequential Minimal Optimization), SimpleLogistic, VotedPerceptron, and Y-NaiveBayes. We used
default settings for all classifiers. The adult dataset has around 30,000 training points and 15,000
test points and the sick dataset has around 2000 training points and 700 test points. The others each
have around 2000 points that we split randomly into 1000 training and 1000 test.
For each experiment, we estimate the joint distribution by training the base classifiers on a training
set and counting the outcomes. We compute likelihood ratios for all outcomes and order them. When
outcomes have no examples, we treat a positive estimate divided by zero as near-infinite and zero divided by a positive estimate as near-zero, and define 0/0 = 1.
We derive a sequence of decision rules from the likelihood ratios computed on the training set. We
can compute an optimal ROC curve for the combination by counting the number of true positives
and false positives each rule achieves. In the test set we use the rules learned on the training set.
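A sketch of this estimation step, including the zero-count handling described above, is given below; the code is ours and the near-infinite/near-zero constants are illustrative.

    import numpy as np

    def estimate_lr_table(outcomes, labels, n):
        """Estimate l(y) for all 2^n outcomes from training counts.
        outcomes: length-m int array encoding the n classifier outputs per example;
        labels: length-m int array in {0, 1}."""
        k = 2 ** n
        c1 = np.bincount(outcomes[labels == 1], minlength=k).astype(float)
        c0 = np.bincount(outcomes[labels == 0], minlength=k).astype(float)
        p1 = c1 / max(c1.sum(), 1.0)
        p0 = c0 / max(c0.sum(), 1.0)
        lr = np.empty(k)
        pos1, pos0 = p1 > 0, p0 > 0
        lr[~pos1 & ~pos0] = 1.0            # define 0/0 = 1
        lr[pos1 & ~pos0] = 1e12            # near-infinite
        lr[~pos1 & pos0] = 1e-12           # near-zero
        lr[pos1 & pos0] = p1[pos1 & pos0] / p0[pos1 & pos0]
        return lr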
5.1 Results
The ROC graphs for our four experiments appear in Figure 2. The ROC curves in these experiments
all rise very quickly and then flatten out, so we show only the range of PF for which the values
are interesting. We can draw some general conclusions from these graphs. First, PART clearly
outperforms the other base classifiers in three out of four experiments, though it seems to overfit
on the hypothyroid dataset. The LR-ROC dominates the ROC curves of the base classifiers on both
training and test sets. The ROC curves for the base classifiers are all strictly below the LR-ROC
in results on the test sets. The results on training sets seem to imply that the LR-ROC is primarily
classifying like PART with a small boost from the other classifiers; however, the test set results (in
particular, Figure 2b) demonstrate that the LR-ROC generalizes better than the base classifiers.
The robustness of our method to estimation errors is uncertain. In our experiments we found that
smoothing did not improve generalization, but undoubtedly our method would benefit from better
estimation of the outcome distribution and increased robustness.
We ran separate experiments to test how many classifiers our method could support in practice.
Estimation of the joint distribution and computation of the ROC curve finished within one minute
for 20 classifiers (not including time to train the individual classifiers). Unfortunately, the inherent
exponential structure of the optimal ROC curve means we cannot expect to do significantly better
(at the same rate, 30 classifiers would take over 12 hours and 40 classifiers almost a year and a half).
6 Related work
Our work is loosely related to ensemble methods such as bagging [8] and boosting [9] because
it finds meta-classification rules over a set of base classifiers. However, bagging and boosting each
take one base classifier and train many times, resampling or reweighting the training data to generate
classifier diversity [10] or increase the classification margin [11]. The decision rules applied to
the generated classifiers are (weighted) majority voting. In contrast, our method takes any binary
classifiers and finds optimal combination rules from the more general space of all binary functions.
Ranking algorithms, such as RankBoost [12], approach the problem of ranking points by score or
preference. Although we present our methods in a different light, our decision rule can be interpreted
as a ranking algorithm. RankBoost, however, is a boosting algorithm and therefore fundamentally
different from our approach. Ranking can be used for classification by choosing a cutoff or threshold,
and in fact ranking algorithms tend to optimize the common Area Under the ROC Curve (AUC)
metric. Although our method may have the side effect of maximizing the AUC, its formulation is
different in that instead of optimizing a single global metric, it is a constrained optimization problem,
maximizing PD for each PF .
Another more similar method for combining classifiers is stacking [13]. Stacking trains a meta-learner to combine the predictions of several base classifiers; in fact, our method might be considered a stacking method with a particular meta-classifier. It can be difficult to show the improvement
of stacking in general over selecting the best base-level classifier [14]; however, stacking has a useful interpretation as generalized cross-validation that makes it practical. Our analysis shows that our
combination method is the optimal meta-learner in the Neyman-Pearson sense, but incorporating the
model validation aspect of stacking would make an interesting extension to our work.
7 Conclusion
In this paper we introduce a new way to analyze a combination of classifiers and their ROC curves.
We give a method for combining classifiers and prove that it is optimal in the Neyman-Pearson
sense. This work raises several interesting questions.
Although the algorithm presented in this paper avoids checking the whole doubly exponential number of rules, the exponential factor in running time limits the number of classifiers that can be
7
combined in practice. Can a good approximation algorithm approach optimality while having lower
time complexity? Though in general we make no assumptions about independence, Theorem 2
shows that certain simple rules are optimal when we do know that the classifiers are independent.
Theorem 3 proves that the ROCCH can be optimal when only n output combinations are possible.
Perhaps other restrictions on the distribution of outcomes can lead to useful special cases.
Acknowledgments
This work was supported in part by TRUST (Team for Research in Ubiquitous Secure Technology),
which receives support from the National Science Foundation (NSF award number CCF-0424422)
and the following organizations: AFOSR (#FA9550-06-1-0244), Cisco, British Telecom, ESCHER,
HP, IBM, iCAST, Intel, Microsoft, ORNL, Pirelli, Qualcomm, Sun, Symantec, Telecom Italia, and
United Technologies; and in part by the UC Berkeley-Taiwan International Collaboration in Advanced Security Technologies (iCAST) program. The opinions expressed in this paper are solely
those of the authors and do not necessarily reflect the opinions of any funding agency or the U.S. or
Taiwanese governments.
References
[1] Foster Provost and Tom Fawcett. Robust classification for imprecise environments. Machine Learning Journal, 42(3):203-231, March 2001.
[2] Peter A. Flach and Shaomin Wu. Repairing concavities in ROC curves. In Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI'05), pages 702-707, August 2005.
[3] Tom Fawcett. ROC graphs: Notes and practical considerations for data mining researchers. Technical Report HPL-2003-4, HP Laboratories, Palo Alto, CA, January 2003. Updated March 2004.
[4] J. Neyman and E. S. Pearson. On the problem of the most efficient tests of statistical hypotheses. Philosophical Transactions of the Royal Society of London, Series A, Containing Papers of a Mathematical or Physical Character, 231:289-337, 1933.
[5] Vincent H. Poor. An Introduction to Signal Detection and Estimation. Springer-Verlag, second edition, 1988.
[6] D. J. Newman, S. Hettich, C. L. Blake, and C. J. Merz. UCI repository of machine learning databases, 1998. http://www.ics.uci.edu/~mlearn/MLRepository.html.
[7] I. Mierswa, M. Wurst, R. Klinkenberg, M. Scholz, and T. Euler. YALE: Rapid prototyping for complex data mining tasks. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2006.
[8] L. Breiman. Bagging predictors. Machine Learning, 24(2):123-140, 1996.
[9] Y. Freund and R. E. Schapire. Experiments with a new boosting algorithm. In Thirteenth International Conference on Machine Learning, pages 148-156, Bari, Italy, 1996. Morgan Kaufmann.
[10] Thomas G. Dietterich. Ensemble methods in machine learning. Lecture Notes in Computer Science, 1857:1-15, 2000.
[11] Robert E. Schapire, Yoav Freund, Peter Bartlett, and Wee Sun Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. The Annals of Statistics, 26(5):1651-1686, October 1998.
[12] Yoav Freund, Raj Iyer, Robert E. Schapire, and Yoram Singer. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research (JMLR), 4:933-969, 2003.
[13] D. H. Wolpert. Stacked generalization. Neural Networks, 5:241-259, 1992.
[14] Sašo Džeroski and Bernard Ženko. Is combining classifiers with stacking better than selecting the best one? Machine Learning, 54:255-273, 2004.
2,496 | 3,264 | The discriminant center-surround hypothesis for
bottom-up saliency
Dashan Gao
Vijay Mahadevan
Nuno Vasconcelos
Department of Electrical and Computer Engineering
University of California, San Diego
{dgao, vmahadev, nuno}@ucsd.edu
Abstract
The classical hypothesis, that bottom-up saliency is a center-surround process, is
combined with a more recent hypothesis that all saliency decisions are optimal in
a decision-theoretic sense. The combined hypothesis is denoted as discriminant
center-surround saliency, and the corresponding optimal saliency architecture is
derived. This architecture equates the saliency of each image location to the discriminant power of a set of features with respect to the classification problem that
opposes stimuli at center and surround, at that location. It is shown that the resulting saliency detector makes accurate quantitative predictions for various aspects
of the psychophysics of human saliency, including non-linear properties beyond
the reach of previous saliency models. Furthermore, it is shown that discriminant
center-surround saliency can be easily generalized to various stimulus modalities
(such as color, orientation and motion), and provides optimal solutions for many
other saliency problems of interest for computer vision. Optimal solutions, under
this hypothesis, are derived for a number of the former (including static natural
images, dense motion fields, and even dynamic textures), and applied to a number of the latter (the prediction of human eye fixations, motion-based saliency in
the presence of ego-motion, and motion-based saliency in the presence of highly
dynamic backgrounds). In result, discriminant saliency is shown to predict eye
fixations better than previous models, and produces background subtraction algorithms that outperform the state-of-the-art in computer vision.
1 Introduction
The psychophysics of visual saliency and attention have been extensively studied during the last
decades. As a result of these studies, it is now well known that saliency mechanisms exist for a
number of classes of visual stimuli, including color, orientation, depth, and motion, among others.
More recently, there has been an increasing effort to introduce computational models for saliency.
One approach that has become quite popular, both in the biological and computer vision communities, is to equate saliency with center-surround differencing. It was initially proposed in [12], and
has since been applied to saliency detection in both static imagery and motion analysis, as well
as to computer vision problems such as robotics, or video compression. While difference-based
modeling is successful at replicating many observations from psychophysics, it has three significant limitations. First, it does not explain those observations in terms of fundamental computational
principles for neural organization. For example, it implies that visual perception relies on a linear
measure of similarity (difference between feature responses in center and surround). This is at odds
with well known properties of higher level human judgments of similarity, which tend not to be
symmetric or even compliant with Euclidean geometry [20]. Second, the psychophysics of saliency
offers strong evidence for the existence of both non-linearities and asymmetries which are not easily reconciled with this model. Third, although the center-surround hypothesis intrinsically poses
saliency as a classification problem (of distinguishing center from surround), there is little basis on
which to justify difference-based measures as optimal in a classification sense. From an evolutionary
perspective, this raises questions about the biological plausibility of the difference-based paradigm.
An alternative hypothesis is that all saliency decisions are optimal in a decision-theoretic sense.
This hypothesis has been denoted as discriminant saliency in [6], where it was somewhat narrowly
proposed as the justification for a top-down saliency algorithm. While this algorithm is of interest
only for object recognition, the hypothesis of decision theoretic optimality is much more general,
and applicable to any form of center-surround saliency. This has motivated us to test its ability to
explain the psychophysics of human saliency, which is better documented for the bottom-up neural
pathway. We start from the combined hypothesis that 1) bottom-up saliency is based on center-surround processing, and 2) this processing is optimal in a decision theoretic sense. In particular,
it is hypothesized that, in the absence of high-level goals, the most salient locations of the visual
field are those that enable the discrimination between center and surround with smallest expected
probability of error. This is referred to as the discriminant center-surround hypothesis and, by
definition, produces saliency measures that are optimal in a classification sense. It is also clearly
tied to a larger principle for neural organization: that all perceptual mechanisms are optimal in a
decision-theoretic sense.
In this work, we present the results of an experimental evaluation of the plausibility of the discriminant center-surround hypothesis. Our study evaluates the ability of saliency algorithms that are
optimal under this hypothesis, to both
- reproduce subject behavior in classical psychophysics experiments, and
- solve saliency problems of practical significance, with respect to a number of classes of visual stimuli.
We derive decision-theoretic optimal center-surround algorithms for a number of saliency problems,
ranging from static spatial saliency, to motion-based saliency in the presence of egomotion or even
complex dynamic backgrounds. Regarding the ability to replicate psychophysics, the results of this
study show that discriminant saliency not only replicates all anecdotal observations that can be explained by linear models, such as that of [12], but can also make (surprisingly accurate) quantitative
predictions for non-linear aspects of human saliency, which are beyond the reach of the existing
approaches. With respect to practical saliency algorithms, they show that discriminant saliency not
only is more accurate than difference-based methods in predicting human eye fixations, but actually produces background subtraction algorithms that outperform the state-of-the-art in computer
vision. In particular, it is shown that, by simply modifying the probabilistic models employed in
the (decision-theoretic optimal) saliency measure - from well known models of natural image statistics, to the statistics of simple optical-flow motion features, to more sophisticated dynamic texture
models - it is possible to produce saliency detectors for either static or dynamic stimuli, which are
insensitive to background image variability due to texture, egomotion, or scene dynamics.
2 Discriminant center-surround saliency
A common hypothesis for bottom-up saliency is that the saliency of each location is determined by
how distinct the stimulus at the location is from the stimuli in its surround (e.g., [11]). This hypothesis is inspired by the ubiquity of "center-surround" mechanisms in the early stages of biological
vision [10]. It can be combined with the hypothesis of decision-theoretic optimality, by defining a
classification problem that equates
- the class of interest, at location l, with the observed responses of a pre-defined set of features X within a neighborhood $W_l^1$ of l (the center),
- the null hypothesis with the responses within a surrounding window $W_l^0$ (the surround).
The saliency of location l is then equated with the power of the feature set X to discriminate
between center and surround. Mathematically, the feature responses within the two windows are
interpreted as observations drawn from a random process $X(l) = (X_1(l), \ldots, X_d(l))$, of dimension d, conditioned on the state of a hidden random variable Y(l). The observed feature vector at any location j is denoted by $x(j) = (x_1(j), \ldots, x_d(j))$, and feature vectors x(j) such that $j \in W_l^c$, $c \in \{0, 1\}$, are drawn from class c (i.e., Y(l) = c), according to conditional densities $P_{X(l)|Y(l)}(x|c)$.
The saliency of location l, S(l), is quantified by the mutual information between features, X, and
class label, Y,
$$S(l) = I_l(X;Y) = \sum_c \int p_{X(l),Y(l)}(x,c)\,\log \frac{p_{X(l),Y(l)}(x,c)}{p_{X(l)}(x)\,p_{Y(l)}(c)}\,dx. \qquad (1)$$
The l subscript emphasizes the fact that the mutual information is defined locally, within $W_l$. The
function S(l) is referred to as the saliency map.
3 Discriminant saliency detection in static imagery
Since human saliency has been most thoroughly studied in the domain of static stimuli, we first
derive the optimal solution for discriminant saliency in this domain. We then study the ability of
the discriminant center-surround saliency hypothesis to explain the fundamental properties of the
psychophysics of pre-attentive vision.
3.1 Feature decomposition
The building blocks of the static discriminant saliency detector are shown in Figure 1. The first
stage, feature decomposition, follows the proposal of [11], which closely mimics the earliest stages
of biological visual processing. The image to process is first subject to a feature decomposition into
an intensity map and four broadly-tuned color channels, $I = (r+g+b)/3$, $R = \lfloor \bar{r} - (\bar{g}+\bar{b})/2 \rfloor_+$, $G = \lfloor \bar{g} - (\bar{r}+\bar{b})/2 \rfloor_+$, $B = \lfloor \bar{b} - (\bar{r}+\bar{g})/2 \rfloor_+$, and $Y = \lfloor (\bar{r}+\bar{g})/2 - |\bar{r}-\bar{g}|/2 \rfloor_+$, where $\bar{r} = r/I$, $\bar{g} = g/I$, $\bar{b} = b/I$, and $\lfloor x \rfloor_+ = \max(x, 0)$. The four color channels are, in turn, combined
into two color opponent channels, R ? G for red/green and B ? Y for blue/yellow opponency.
These and the intensity map are convolved with three Mexican hat wavelet filters, centered at spatial
frequencies 0.02, 0.04 and 0.08 cycle/pixel, to generate nine feature channels. The feature space X
consists of these channels, plus a Gabor decomposition of the intensity map, implemented with a
dictionary of zero-mean Gabor filters at 3 spatial scales (centered at frequencies of 0.08, 0.16, and
0.32 cycle/pixel) and 4 directions (evenly spread from 0 to $\pi$).
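As an illustration, the sketch below (our own Python/NumPy condensation, not code from the paper; the epsilon guard against division by zero on black pixels is an added assumption) computes the intensity map and the two color-opponent channels from the formulas above:

```python
import numpy as np

def color_intensity_channels(img):
    """Minimal sketch of the feature decomposition described above.
    `img`: RGB image as floats in [0, 1], shape (H, W, 3)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    I = (r + g + b) / 3.0
    eps = 1e-8  # our addition: avoid 0/0 on black pixels
    rn, gn, bn = r / (I + eps), g / (I + eps), b / (I + eps)
    # Broadly-tuned color channels, floored at zero.
    R = np.maximum(rn - (gn + bn) / 2.0, 0.0)
    G = np.maximum(gn - (rn + bn) / 2.0, 0.0)
    B = np.maximum(bn - (rn + gn) / 2.0, 0.0)
    Y = np.maximum((rn + gn) / 2.0 - np.abs(rn - gn) / 2.0, 0.0)
    # Intensity map plus the two color-opponent channels.
    return I, R - G, B - Y
```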
3.2 Leveraging natural image statistics
In general, the computation of (1) is impractical, since it requires density estimates on a potentially
high-dimensional feature space. This complexity can, however, be drastically reduced by exploiting
a well known statistical property of band-pass natural image features, e.g. Gabor or wavelet coefficients: that features of this type exhibit strongly consistent patterns of dependence (bow-tie shaped
conditional distributions) across a very wide range of classes of natural imagery [2, 9, 21]. The
consistency of these feature dependencies suggests that they are, in general, not greatly informative
about the image class [21, 2] and, in the particular case of saliency, about whether the observed
feature vectors originate in the center or surround. Hence, (1) can usually be well approximated by
the sum of marginal mutual informations [21],¹ i.e.,
$$S(l) = \sum_{i=1}^{d} I_l(X_i; Y). \qquad (2)$$
Since (2) only requires estimates of marginal densities, it has significantly less complexity than (1).
This complexity can, indeed, be further reduced by resorting to the well known fact that the marginal
densities are accurately modeled by a generalized Gaussian distribution (GGD) [13]. In this case, all
computations have a simple closed form [4] and can be mapped into a neural network that replicates
the standard architecture of V1: a cascade of linear filtering, divisive normalization, quadratic nonlinearity and spatial pooling [7].
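To make the approximation concrete, here is a minimal sketch (our own; histogram density estimates stand in for the GGD fits the paper actually uses, for which the computation has a closed form [4], and the bin count is an arbitrary choice) of the saliency measure of (2) at a single location:

```python
import numpy as np

def marginal_mi_saliency(center, surround, n_bins=32):
    """Sum over feature channels of the marginal mutual information
    between feature response and the center/surround label, as in Eq. (2).
    `center`, `surround`: arrays of shape (n_samples, d) with the
    responses of d feature channels inside the two windows."""
    n1, n0 = len(center), len(surround)
    p1, p0 = n1 / (n1 + n0), n0 / (n1 + n0)  # class priors P(Y=1), P(Y=0)
    total = 0.0
    for i in range(center.shape[1]):
        lo = min(center[:, i].min(), surround[:, i].min())
        hi = max(center[:, i].max(), surround[:, i].max())
        bins = np.linspace(lo, hi, n_bins + 1)
        h1, _ = np.histogram(center[:, i], bins=bins)
        h0, _ = np.histogram(surround[:, i], bins=bins)
        q1 = h1 / max(h1.sum(), 1)   # P(X_i | center)
        q0 = h0 / max(h0.sum(), 1)   # P(X_i | surround)
        qx = p1 * q1 + p0 * q0       # marginal P(X_i)
        # I(X_i; Y) = sum_c P(c) * KL(P(x|c) || P(x))
        for pc, qc in ((p1, q1), (p0, q0)):
            nz = qc > 0
            total += pc * np.sum(qc[nz] * np.log(qc[nz] / qx[nz]))
    return total
```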
¹ Note that this approximation does not assume that the features are independently distributed, but simply that their dependencies are not informative about the class.
[Figure 1: Bottom-up discriminant saliency detector. Block diagram: the input image undergoes feature decomposition into intensity, color (R/G, B/Y), and orientation channels, producing feature maps, then feature saliency maps, which combine into the final saliency map.]

[Figure 2: The nonlinearity of human saliency responses to orientation contrast [14] (a) is replicated by discriminant saliency (b), but not by the model of [11] (c). Each panel plots saliency against orientation contrast (deg), from 0 to 90 degrees.]

3.3 Consistency with psychophysics
To evaluate the consistency of discriminant saliency with psychophysics, we start by applying the
discriminant saliency detector to a series of displays used in classical studies of visual attention [18,
19, 14].² In [7], we have shown that discriminant saliency reproduces the anecdotal properties of
saliency - percept of pop-out for single feature search, disregard of feature conjunctions, and search
asymmetries for feature presence vs. absence - that have previously been shown possible to replicate
with linear saliency models [11]. Here, we focus on quantitative predictions of human performance,
and compare the output of discriminant saliency with both human data and that of the difference-based center-surround saliency model [11].³
The first experiment tests the ability of the saliency models to predict a well known nonlinearity
of human saliency. Nothdurft [14] has characterized the saliency of pop-out targets due to orientation contrast, by comparing the conspicuousness of orientation defined targets and luminance
defined ones, and using luminance as a reference for relative target salience. He showed that the
saliency of a target increases with orientation contrast, but in a non-linear manner: 1) there exists a
threshold below which the effect of pop-out vanishes, and 2) above this threshold saliency increases
with contrast, saturating after some point. The results of this experiment are illustrated in Figure 2,
which presents plots of saliency strength vs orientation contrast for human subjects [14] (in (a)),
for discriminant saliency (in (b)), and for the difference-based model of [11]. Note that discriminant saliency closely predicts the strong threshold and saturation effects characteristic of subject
performance, but the difference-based model shows no such compliance.
The second experiment tests the ability of the models to make accurate quantitative predictions of
search asymmetries. It replicates the experiment designed by Treisman [19] to show that the asymmetries of human saliency comply with Weber's law. Figure 3 (a) shows one example of the displays
used in the experiment, where the central target (vertical bar) differs from distractors (a set of identical vertical bars) only in length. Figure 3 (b) shows a scatter plot of the values of discriminant
saliency obtained across the set of displays. Each point corresponds to the saliency at the target
location in one display, and the dashed line shows that, like human perception, discriminant saliency
follows Weber's law: target saliency is approximately linear in the ratio between the difference of target/distractor length (Δx) and distractor length (x). For comparison, Figure 3 (c) presents the corresponding scatter plot for the model of [11], which clearly does not replicate human performance.
4 Applications of discriminant saliency
We have, so far, presented quantitative evidence in support of the hypothesis that pre-attentive vision implements decision-theoretical center-surround saliency.
² For the computation of the discriminant saliency maps, we followed the common practice of psychophysics and physiology [18, 10]: the size of the center window is set to a value comparable to that of the display items, and the size of the surround window to 6 times that of the center. Informal experimentation has shown that the saliency results are not substantively affected by variations around the parameter values adopted.
³ Results obtained with the MATLAB implementation available in [22].
[Figure 3: An example display (a) and performance of saliency detectors (discriminant saliency (b) and [11] (c)) on the Weber's law experiment. Panels (b) and (c) are scatter plots of saliency against Δx/x, from 0 to 0.8.]
[Figure 4: Average ROC area, as a function of inter-subject ROC area (0.65 to 0.98), for the saliency algorithms.]

Saliency model        ROC area
Discriminant          0.7694
Itti et al. [11]      0.7287
Bruce et al. [1]      0.7547

Table 1: ROC areas for different saliency models with respect to all human fixations.
This evidence is strengthened by the already mentioned one-to-one mapping between the discriminant saliency detector proposed above
and the standard model for the neurophysiology of V1 [7]. Another interesting property of discriminant saliency is that its optimality is independent of the stimulus dimension under consideration, or
of specific feature sets. In fact, (1) can be applied to any type of stimuli, and any type of features, as
long as it is possible to estimate the required probability distributions from the center and surround
neighborhoods. This encouraged us to derive discriminant saliency detectors for various computer
vision applications, ranging from the prediction of human eye fixations, to the detection of salient
moving objects, to background subtraction in the context of highly dynamic scenes. The outputs
of these discriminant saliency detectors are next compared with either human performance, or the
state-of-the-art in computer vision for each application.
4.1 Prediction of eye fixations on natural images
We start by using the static discriminant saliency detector of the previous section to predict human
eye fixations. For this, the saliency maps were compared to the eye fixations of human subjects in
an image viewing task. The experimental protocol was that of [1], using fixation data collected from
20 subjects and 120 natural images. Under this protocol, all saliency maps are first quantized into
a binary mask that classifies each image location as either a fixation or non-fixation [17]. Using
the measured human fixations as ground truth, a receiver operator characteristic (ROC) curve is
then generated by varying the quantization threshold. Perfect prediction corresponds to an ROC
area (area under the ROC curve) of 1, while chance performance occurs at an area of 0.5. The
predictions of discriminant saliency are compared to those of the methods of [11] and [1].
Table 1 presents average ROC areas for all detectors, across the entire image set. It is clear that
discriminant saliency achieves the best performance among the three detectors. For a more detailed
analysis, we also plot (in Figure 4) the ROC areas of the three detectors as a function of the 'inter-subject' ROC area (a measure of the consistency of eye movements among human subjects [8]), for
the first two fixations - which are more likely to be driven by bottom-up mechanisms than the later
ones [17]. Again, discriminant saliency exhibits the strongest correlation with human performance;
this happens at all levels of inter-subject consistency, and the difference is largest when the latter
is strong. In this region, the performance of discriminant saliency (.85) is close to 90% of that of
humans (.95), while the other two detectors only achieve close to 85% (.81).
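A minimal sketch of this evaluation (ours, not code from the study; it assumes a scalar saliency map and a binary fixation mask, and uses scikit-learn's AUC routine):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def fixation_roc_area(saliency_map, fixation_mask):
    """Thresholding the saliency map at every level and scoring the
    resulting binary masks against the human fixations traces out an ROC
    curve; its area equals the AUC of the raw saliency values against the
    binary fixation mask, which scikit-learn computes directly."""
    labels = fixation_mask.ravel().astype(int)   # 1 = fixated location
    scores = saliency_map.ravel().astype(float)  # saliency value per pixel
    return roc_auc_score(labels, scores)
```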
4.2 Discriminant saliency on motion fields
Similarly to the static case, center-surround discriminant saliency can produce motion-based
saliency maps if combined with motion features. We have implemented a simple motion-based detector by computing a dense motion vector map (optical flow) between pairs of consecutive images,
and using the magnitude of the motion vector at each location as motion feature. The probability
distributions of this feature, within center and surround, were estimated with histograms, and the
motion saliency maps computed with (2).
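For concreteness, a sketch of this motion feature follows (our own; the paper does not name a flow algorithm, so OpenCV's Farneback method and the generic parameters below are assumptions made for illustration only):

```python
import cv2
import numpy as np

def motion_magnitude(prev_gray, next_gray):
    """Dense optical flow between consecutive grayscale frames, reduced
    to its per-pixel magnitude, as the motion feature described above."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Per-pixel magnitude of the (dx, dy) flow vectors.
    return np.linalg.norm(flow, axis=2)
```

The magnitude map can then be passed, per location, through the same center-surround histogram and marginal mutual information computation sketched in Section 3.2.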
Figure 5: Optical flow-based saliency in the presence of egomotion.
Despite the simplicity of our motion representation, the discriminant saliency detector exhibits interesting performance. Figure 5 shows several frames (top row) from a video sequence, and their
discriminant motion saliency maps (bottom row). The sequence depicts a leopard running in a grassland, which is tracked by a moving camera. This results in significant variability of the background,
due to egomotion, making the detection of foreground motion (leopard), a non-trivial task. As shown
in the saliency maps, discriminant saliency successfully disregards the egomotion component of the
optical flow, detecting the leopard as most salient.
4.3 Discriminant saliency with dynamic background
While the results of Figure 5 are probably within the reach of previously proposed saliency models,
they illustrate the flexibility of discriminant saliency. In this section we move to a domain where
traditional saliency algorithms almost invariably fail. This consists of videos of scenes with complex and dynamic backgrounds (e.g. water waves, or tree leaves). In order to capture the motion
patterns characteristic of these backgrounds it is necessary to rely on reasonably sophisticated probabilistic models, such as the dynamic texture model [5]. Such models are very difficult to fit into conventional, e.g. difference-based, saliency frameworks, but are naturally compatible with the discriminant saliency hypothesis. We next combine discriminant center-surround saliency with the dynamic
texture model, to produce a background-subtraction algorithm for scenes with complex background
dynamics. While background subtraction is a classic problem in computer vision, there has been
relatively little progress for these type of scenes (e.g. see [15] for a review).
A dynamic texture (DT) [5, 3] is an autoregressive, generative model for video. It models the spatial
component of the video and the underlying temporal dynamics as two stochastic processes. A video
is represented as a time-evolving state process $x_t \in \mathbb{R}^n$, and the appearance of a frame $y_t \in \mathbb{R}^m$ is a linear function of the current state vector with some observation noise. The system equations are
$$x_t = A x_{t-1} + v_t, \qquad y_t = C x_t + w_t, \qquad (3)$$
where $A \in \mathbb{R}^{n \times n}$ is the state transition matrix and $C \in \mathbb{R}^{m \times n}$ is the observation matrix. The state and observation noise are given by $v_t \sim_{\mathrm{iid}} \mathcal{N}(0, Q)$ and $w_t \sim_{\mathrm{iid}} \mathcal{N}(0, R)$, respectively. Finally, the initial condition is distributed as $x_1 \sim \mathcal{N}(\mu, S)$. Given a sequence of images, the parameters of the
dynamic texture can be learned for the center and surround regions at each image location, enabling
a probabilistic description of the video, with which the mutual information of (2) can be evaluated.
We applied the dynamic texture-based discriminant saliency (DTDS) detector to three video sequences containing objects moving in water. The first (Water-Bottle from [23]) depicts a bottle
floating in water which is hit by rain drops, as shown in Figure 7(a). The second and third, Boat and
Surfer, are composed of boats/surfers moving in water, and shown in Figure 8(a) and 9(a). These
sequences are more challenging, since the micro-texture of the water surface is superimposed on a
lower frequency sweeping wave (Surfer) and interspersed with high frequency components due to
turbulent wakes (created by the boat, surfer, and crest of the sweeping wave). Figures 7(b), 8(b)
and 9(b), show the saliency maps produced by discriminant saliency for the three sequences. The
DTDS detector performs surprisingly well, in all cases, at detecting the foreground objects while ignoring the movements of the background. In fact, the DTDS detector is close to an ideal background-subtraction algorithm for these scenes.
[Figure 6 panels (a)-(c): ROC curves plotting detection rate (DR) against false positive rate (FPR), each comparing discriminant saliency with the GMM background model.]
Figure 6: Performance of background subtraction algorithms on: (a) Water-Bottle, (b) Boat, and (c) Surfer.
Figure 7: Results on Bottle: (a) original; (b) discriminant saliency with DT; and (c) GMM model of [16, 24].
For comparison, we present the output of a state-of-the-art background subtraction algorithm, a
Gaussian mixture model (GMM) [16, 24]. As can be seen in Figures 7(c), 8(c) and 9(c), the resulting
foreground detection is very noisy, and cannot adapt to the highly dynamic nature of the water
surface. Note, in particular, that the waves produced by boat and surfer, as well as the sweeping
wave crest, create serious difficulties for this algorithm. Unlike the saliency maps of DTDS, the
resulting foreground maps would be difficult to analyze by subsequent vision (e.g. object tracking)
modules. To produce a quantitative comparison of the saliency maps, these were thresholded at a
large range of values. The results were compared with ground-truth foreground masks, and an ROC
curve produced for each algorithm. The results are shown in Figure 6, where it is clear that while
DTDS tends to do well on these videos, the GMM based background model does fairly poorly.
References
[1] N. D. Bruce and J. K. Tsotsos. Saliency based on information maximization. In Proc. NIPS, 2005.
[2] R. Buccigrossi and E. Simoncelli. Image compression via joint statistical characterization in the wavelet domain. IEEE Transactions on Image Processing, 8:1688-1701, 1999.
[3] A. B. Chan and N. Vasconcelos. Modeling, clustering, and segmenting video with mixtures of dynamic textures. IEEE Trans. PAMI, in press.
[4] M. N. Do and M. Vetterli. Wavelet-based texture retrieval using generalized Gaussian density and Kullback-Leibler distance. IEEE Trans. Image Processing, 11(2):146-158, 2002.
[5] G. Doretto, A. Chiuso, Y. N. Wu, and S. Soatto. Dynamic textures. Int. J. Comput. Vis., 51, 2003.
[6] D. Gao and N. Vasconcelos. Discriminant saliency for visual recognition from cluttered scenes. In Proc. NIPS, pages 481-488, 2004.
[7] D. Gao and N. Vasconcelos. Decision-theoretic saliency: computational principle, biological plausibility, and implications for neurophysiology and psychophysics. Submitted to Neural Computation, 2007.
[8] J. Harel, C. Koch, and P. Perona. Graph-based visual saliency. In Proc. NIPS, 2006.
[9] J. Huang and D. Mumford. Statistics of natural images and models. In Proc. IEEE Conf. CVPR, 1999.
[10] D. H. Hubel and T. N. Wiesel. Receptive fields and functional architecture in two nonstriate visual areas (18 and 19) of the cat. J. Neurophysiol., 28:229-289, 1965.
Figure 8: Results on Boats: (a) original; (b) discriminant saliency with DT; and (c) GMM model of [16, 24].
Figure 9: Results on Surfer: (a) original; (b) discriminant saliency with DT; and (c) GMM model of [16, 24].
[11] L. Itti and C. Koch. A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40:1489-1506, 2000.
[12] L. Itti, C. Koch, and E. Niebur. A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. PAMI, 20(11), 1998.
[13] S. G. Mallat. A theory for multiresolution signal decomposition: the wavelet representation. IEEE Trans. PAMI, 11(7):674-693, 1989.
[14] H. C. Nothdurft. The conspicuousness of orientation and motion contrast. Spat. Vis., 7, 1993.
[15] Y. Sheikh and M. Shah. Bayesian modeling of dynamic scenes for object detection. IEEE Trans. on PAMI, 27(11):1778-92, 2005.
[16] C. Stauffer and W. Grimson. Adaptive background mixture models for real-time tracking. In CVPR, pages 246-52, 1999.
[17] B. W. Tatler, R. J. Baddeley, and I. D. Gilchrist. Visual correlates of fixation selection: effects of scale and time. Vision Research, 45:643-659, 2005.
[18] A. Treisman and G. Gelade. A feature-integration theory of attention. Cognit. Psych., 12, 1980.
[19] A. Treisman and S. Gormican. Feature analysis in early vision: evidence from search asymmetries. Psychological Review, 95:14-58, 1988.
[20] A. Tversky. Features of similarity. Psychol. Rev., 84, 1977.
[21] N. Vasconcelos. Scalable discriminant feature selection for image retrieval. In CVPR, 2004.
[22] D. Walther and C. Koch. Modeling attention to salient proto-objects. Neural Networks, 19, 2006.
[23] J. Zhong and S. Sclaroff. Segmenting foreground objects from a dynamic textured background via a robust Kalman filter. In ICCV, 2003.
[24] Z. Zivkovic. Improved adaptive Gaussian mixture model for background subtraction. In ICPR, 2004.
2,497 | 3,265 | Multiple-Instance Pruning For Learning Efficient
Cascade Detectors
Cha Zhang and Paul Viola
Microsoft Research
One Microsoft Way, Redmond, WA 98052
{chazhang,viola}@microsoft.com
Abstract
Cascade detectors have been shown to operate extremely rapidly, with high accuracy, and have important applications such as face detection. Driven by this
success, cascade learning has been an area of active research in recent years. Nevertheless, there are still challenging technical problems during the training process
of cascade detectors. In particular, determining the optimal target detection rate
for each stage of the cascade remains an unsolved issue. In this paper, we propose
the multiple instance pruning (MIP) algorithm for soft cascades. This algorithm
computes a set of thresholds which aggressively terminate computation with no reduction in detection rate or increase in false positive rate on the training dataset.
The algorithm is based on two key insights: i) examples that are destined to be
rejected by the complete classifier can be safely pruned early; ii) face detection is
a multiple instance learning problem. The MIP process is fully automatic and requires no assumptions of probability distributions, statistical independence, or ad
hoc intermediate rejection targets. Experimental results on the MIT+CMU dataset
demonstrate significant performance advantages.
1 Introduction
The state of the art in real-time face detection has progressed rapidly in recent years. One very
successful approach was initiated by Viola and Jones [11]. While some components of their work
are quite simple, such as the so-called 'integral image', or the use of AdaBoost, a great deal of
complexity lies in the training of the cascaded detector. There are many required parameters: the
number and shapes of rectangle filters, the number of stages, the number of weak classifiers in each
stage, and the target detection rate for each cascade stage. These parameters conspire to determine
not only the ROC curve for the resulting system but also its computational complexity. Since the
Viola-Jones training process requires CPU days to train and evaluate, it is difficult, if not impossible,
to pick these parameters optimally.
The conceptual and computational complexity of the training process has led to many papers proposing improvements and refinements [1, 2, 4, 5, 9, 14, 15]. Among them, three are closely related to
this paper: Xiao, Zhu and Zhang[15], Sochman and Matas[9], and Bourdev and Brandt[1]. In each
paper, the original cascade structure of distinct and separate stages is relaxed so that earlier computation of weak classifier scores can be combined with later weak classifiers. Bourdev and Brandt
coined the term 'soft-cascade', where the entire detector is trained as a single strong classifier without stages (with 100's or 1000's of weak classifiers, sometimes called 'features'). The score
assigned to a detection window by the soft cascade is simply a weighted sum of the weak classifiers: $s_k(T) = \sum_{j \leq T} \alpha_j h_j(x_k)$, where T is the total number of weak classifiers; $h_j(x_k)$ is the j-th feature computed on example $x_k$; $\alpha_j$ is the vote on weak classifier j. Computation of the sum is terminated early whenever the partial sum falls below a rejection threshold: $s_k(t) < \theta(t)$. Note the soft cascade
is similar to, but simpler than both the boosting chain approach of Xiao, Zhu, and Zhang and the
WaldBoost approach of Sochman and Matas.
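For clarity, here is a minimal sketch of soft-cascade evaluation with early termination (our own illustration; names are hypothetical):

```python
def soft_cascade_score(weak_votes, rejection_thresholds):
    """`weak_votes`: the weighted votes alpha_j * h_j(x) of one window, in
    evaluation order. `rejection_thresholds`: theta(t) for t = 1..T.
    Returns (final partial score, accepted?, number of features used)."""
    s = 0.0
    for t, vote in enumerate(weak_votes):
        s += vote
        if s < rejection_thresholds[t]:
            return s, False, t + 1          # rejected early
    return s, True, len(weak_votes)         # survived all thresholds
```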
The rejection thresholds $\theta(t)$, $t \in \{1, \ldots, T-1\}$, are critical to the performance and speed of the
complete classifier. However, it is difficult to set them optimally in practice. One possibility is to set
the rejection thresholds so that no positive example is lost; this leads to very conservative thresholds
and a very slow detector. Since the complete classifier will not achieve 100% detection (Note, given
practical considerations, the final threshold of the complete classifier is set to reject some positive
examples because they are difficult to detect. Reducing the final threshold further would admit too
many false positives.), it seems justified to reject positive examples early in return for fast detection
speed. The main question is which positive examples can be rejected and when.
A key criticism of all previous cascade learning approaches is that none has a scheme to determine
which examples are best to reject. Viola-Jones attempted to reject zero positive examples until
this became impossible and then reluctantly gave up on one positive example at a time. Bourdev
and Brandt proposed a method for setting rejection thresholds based on an ad hoc detection rate
target called a 'rejection distribution vector', which is a parameterized exponential curve. Like the
original Viola-Jones proposal, the soft-cascade gradually gives up on a number of positive examples
in an effort to aggressively reduce the number of negatives passing through the cascade. Perhaps
a particular family of curves is more palatable, but it is still arbitrary and non-optimal. Sochman-Matas used a ratio test to determine the rejection thresholds. While this has statistical validity,
distributions must be estimated, which introduces empirical risk. This is a particular problem for the
first few rejection thresholds, and can lead to low detection rates on test data.
This paper proposes a new mechanism for setting the rejection thresholds of any soft-cascade which
is conceptually simple, has no tunable parameters beyond the final detection rate target, yet yields
a cascade which is both highly accurate and very fast. Training data is used to set all reject thresholds after the final classifier is learned. There are no assumptions about probability distributions,
statistical independence, or ad hoc intermediate targets for detection rate (or false positive rate).
The approach is based on two key insights that constitute the major contributions of this paper: 1)
positive examples that are rejected by the complete classifier can be safely rejected earlier during
pruning; 2) each ground-truth face requires no more than one matched detection window to maintain
the classifier?s detection rate. We propose a novel algorithm, multiple instance pruning (MIP), to set
the reject thresholds automatically, which results in a very efficient cascade detector with superior
performance.
The rest of the paper is organized as follows. Section 2 describes an algorithm which makes use
of the final classification results to perform pruning. Multiple instance pruning is presented in Section 3. Experimental results and conclusions are given in Section 4 and 5, respectively.
2 Pruning Using the Final Classification
We propose a scheme which is simultaneously simpler and more effective than earlier techniques.
Our key insight is quite simple: the reject thresholds are set so that they give up on precisely those
positive examples which are rejected by the complete classifier. Note that the score of each example,
$s_k(t)$ can be considered a trajectory through time. The full classifier rejects a positive example if its final score $s_k(T)$ falls below the final threshold $\theta(T)$. In the simplest version of our threshold setting
algorithm, all trajectories from positive windows which fall below the final threshold are removed.
Each rejection threshold is then simply:
$$\theta(t) = \min_{k \,\mid\, s_k(T) > \theta(T),\, y_k = 1} s_k(t)$$
where $\{x_k, y_k\}$ is the training set in which $y_k = 1$ indicates positive windows and $y_k = -1$ indicates
negative windows. These thresholds produce a reasonably fast classifier which is guaranteed to
produce no more errors than the complete classifier (on the training dataset). We call this pruning
algorithm direct backward pruning (DBP).
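A minimal sketch of DBP (ours; it assumes the partial-sum trajectories of all training windows have been precomputed):

```python
import numpy as np

def dbp_thresholds(trajectories, labels, final_threshold):
    """`trajectories`: array of shape (K, T); row k holds the partial sums
    s_k(1), ..., s_k(T) of training example k. `labels`: array of +1/-1.
    For every t, the rejection threshold is the minimum partial score over
    the positive examples that the complete classifier retains, so none of
    them is ever pruned early."""
    retained = (labels == 1) & (trajectories[:, -1] > final_threshold)
    return trajectories[retained].min(axis=0)
```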
One might question whether the minimum of all retained trajectories is robust to mislabeled or
noisy examples in the training set. Note that the final threshold of the complete classifier will often
reject mislabeled or noisy examples (though they will be considered false negatives).
[Figure 1 plot: cumulative score (from -20 to 10) versus feature index (0 to 700). Legend: positive windows, negative windows, positive windows below threshold, positive windows above threshold, and positive windows retained after pruning; the final threshold is marked on the score axis.]
Figure 1: Traces of cumulative scores of different windows in an image of a face. See text.
These rejected examples play no role in setting the rejection thresholds. We have found this procedure very robust
to the types of noise present in real training sets.
In past approaches, thresholds are set to reject the largest number of negative examples and only a
small percentage of positive examples. These approaches justify these thresholds in different ways,
but they all struggle to determine the correct percentage accurately and effectively. In the new
approach, the final threshold of the complete soft-cascade is set to achieve the require detection rate.
Rejection thresholds are then set to reject the largest number of negative examples and retain all
positive examples which are retained by the complete classifier. The important difference is that the
particular positive examples which are rejected are those which are destined to be rejected by the
final classifier. This yields a fast classifier which labels all positive examples in exactly the same
way as the complete classifier. In fact, it yields the fastest possible soft-cascade with such property
(provided the weak classifiers are not re-ordered). Note, some negative examples that eventually
pass the complete classifier threshold may be pruned by earlier rejection thresholds. This has the
satisfactory side benefit of reducing false positive rate as well. In contrast, although the detection
rate on the training set can also be guaranteed in Bourdev-Brandt?s algorithm, there is no guarantee
that false positive rate will not increase.
Bourdev-Brandt propose reordering the weak classifiers based on the separation between the mean
score of the positive examples and the mean score of the negative examples. Our approach is equally
applicable to a reordered soft-cascade.
Figure 1 shows 293 trajectories from a single image whose final score is above -15. While the rejection thresholds are learned using a large set of training examples, this one image demonstrates
the basic concepts. The red trajectories are negative windows. The single physical face is consistent
with a set of positive detection windows that are within an acceptable range of positions and scales.
Typically there are tens of acceptable windows for each face. The blue and magenta trajectories correspond to acceptable windows which fall above the final detection threshold. The cyan trajectories
are potentially positive windows which fall below the final threshold. Since the cyan trajectories are
rejected by the final classifier, rejection thresholds need only retain the blue and magenta trajectories.
In a sense the complete classifier, along with a threshold which sets the operating point, provides
labels on examples which are more valuable than the ground-truth labels. There will always be a
set of ?positive? examples which are extremely difficult to detect, or worse which are mistakenly
labeled positive. In practice the final threshold of the complete classifier will be set so that these
particular examples are rejected. In our new approach these particular examples can be rejected
early in the computation of the cascade. Compared with existing approaches, that set the reject
thresholds in a heuristic manner, our approach is data-driven and hence more principled.
3 Multiple Instance Pruning
The notion of an ?acceptable detection window? plays a critical role in an improved process for
setting rejection thresholds. It is difficult to define the correct position and scale of a face in an image.
For a purely upright and frontal face, one might propose the smallest rectangle which includes the
chin, forehead, and the inner edges of the ears. But, as we include a range of non-upright and
non-frontal faces these rectangles can vary quite a bit. Should the correct window be a function
of apparent head size? Or is eye position and interocular distance more reliable? Even given clear
instructions, one finds that two subjects will differ significantly in their 'ground-truth' labels.
Recall that the detection process scans the image generating a large, but finite, collection of overlapping windows at various scales and locations. Even in the absence of ambiguity, some slop is
required to ensure that at least one of the generated windows is considered a successful detection for
each face. Experiments typically declare that any window which is within 50% in size and within a
distance of 50% (of size) be considered a true positive. Using typical scanning parameters this can
lead to tens of windows which are all equally valid positive detections. If any of these windows is
classified positive then this face is consider detected.
Even though all face detection algorithms must address the 'multiple window' issue, few papers
have discussed it. Two papers which have fundamentally integrated this observation into the training process are Nowlan and Platt [6] and more recently by Viola, Platt, and Zhang [12]. These
papers proposed a multiple instance learning (MIL) framework where the positive examples are
collected into ?bags?. The learning algorithm is then given the freedom to select at least one, and
perhaps more examples, in each bag as the true positive examples. In this paper, we do not directly
address soft-cascade learning, though we will incorporate the 'multiple window' observation into
the determination of the rejection thresholds.
One need only retain one 'acceptable' window for each face which is detected by the final classifier.
A more aggressive threshold is defined as:
$$\theta(t) = \min_{i \in P} \left[ \max_{k \,\mid\, k \in F_i \cap R_i,\, y_k = 1} s_k(t) \right]$$
where i is the index of ground-truth faces; $F_i$ is the set of acceptable windows associated with ground-truth face i and $R_i$ is the set of windows which are 'retained' (see below). P is the set of
ground-truth faces that have at least one acceptable window above the final threshold:
$$P = \left\{ i \;\middle|\; \max_{k \,\mid\, k \in F_i} s_k(T) > \theta(T) \right\}$$
In this new procedure the acceptable windows come in bags, only one of which must be classified
positive in order to ensure that each face is successfully detected. This new criteria for success is
more flexible and therefore more aggressive. We call this pruning method multiple instance pruning
(MIP).
Returning to Figure 1 we can see that the blue, cyan, and magenta trajectories actually form a 'bag'.
Both in this algorithm, and in the simpler previous algorithm, the cyan trajectories are rejected before
the computation of the thresholds. The benefit of this new algorithm is that the blue trajectories can
be rejected as well.
The definition of 'retained' examples in the computation above is a bit more complex than before.
Initially the trajectories from the positive bags which fall above the final threshold are retained. The
set of retained examples is further reduced as the earlier thresholds are set. This is in contrast to the
simpler DBP algorithm where the thresholds are set to preserve all retained positive examples. In
the new algorithm the partial score of an example can fall below the current threshold (because it
is in a bag with a better example). Each such example is removed from the retained set $R_i$ and not
used to set subsequent thresholds.
The pseudo code of the MIP algorithm is shown in Figure 2. It guarantees the same face detection
rate on the training dataset as the complete classifier. Note that the algorithm is greedy, setting earlier
thresholds first so that all positive bags are retained and the fewest number of negative examples pass.
Theoretically it is possible that delaying the rejection of a particular example may result in a better
threshold at a later stage. Searching for the optimal MIP pruned detector, however, may be quite
expensive. The MIP algorithm is however guaranteed to generate a soft-cascade that is at least as
fast as DBP, since the criteria for setting the thresholds is less restrictive.
Input
- A cascade detector.
- Threshold θ(T) at the final stage of the detector.
- A large training set (the whole training set used to learn the cascade detector can be reused here).
Initialize
- Run the detector on all rectangles that match any ground-truth face. Collect all windows that are above the final threshold θ(T). Record all intermediate scores as s(i, j, t), where i = 1, ..., N is the face index; j = 1, ..., M_i is the index of windows that match face i; and t = 1, ..., T is the index of the feature node.
- Initialize flags f(i, j) as true.
MIP
For t = 1, ..., T:
1. For i = 1, ..., N: find s*(i, t) = max_{j | f(i,j) = true} s(i, j, t).
2. Set θ(t) = min_i s*(i, t) - ε as the rejection threshold of node t, with ε = 10^{-6}.
3. For i = 1, ..., N, j = 1, ..., M_i: set f(i, j) to false if s(i, j, t) < θ(t).
Output
Rejection thresholds θ(t), t = 1, ..., T.
Figure 2: The MIP algorithm.
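A direct transcription of Figure 2 into runnable form (our own sketch; it assumes the per-window intermediate scores have already been collected as described in the Initialize step):

```python
import numpy as np

def mip_thresholds(scores, eps=1e-6):
    """`scores` is a list over ground-truth faces; entry i is an array of
    shape (M_i, T) holding the intermediate scores s(i, j, t) of the M_i
    acceptable windows of face i that scored above the final threshold.
    Flags f(i, j) track retained windows; each face only needs one
    retained window to survive every rejection threshold."""
    T = scores[0].shape[1]
    flags = [np.ones(s.shape[0], dtype=bool) for s in scores]
    thresholds = np.empty(T)
    for t in range(T):
        # Step 1: s*(i, t), the best retained window of face i at node t.
        best = np.array([s[f, t].max() for s, f in zip(scores, flags)])
        # Step 2: the threshold keeps every face's best window alive.
        thresholds[t] = best.min() - eps
        # Step 3: drop windows that fall below the new threshold.
        for s, f in zip(scores, flags):
            f &= s[:, t] >= thresholds[t]
    return thresholds
```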
Figure 3: (a) Performance comparison with existing works on MIT+CMU frontal face dataset. (b)
ROC curves of the detector after MIP pruning using the original training set. No performance
degradation is found on the MIT+CMU testing dataset.
4 Experimental Results
More than 20,000 images were collected from the web, containing roughly 10,000 faces. Over
2 billion negative examples are generated from the same image set. A soft cascade classifier is
learned through a new framework based on weight trimming and bootstrapping (see Appendix).
The training process was conducted on a dual core AMD Opteron 2.2 GHz processor with 16 GB
of RAM. It takes less than 2 days to train a classifier with 700 weak classifiers based on the Haar
features [11]. The testing set is the standard MIT+CMU frontal face database [10, 7], which consists
of 125 grayscale images containing 483 labeled frontal faces. A detected rectangle is considered to
be a true detection if it has less than 50% variation in shift and scale from the ground-truth.
It is difficult to compare the performance of various detectors, since every detector is trained on
a different dataset. Nevertheless, we show the ROC curves of a number of existing detectors and
ours in Figure 3(a). Note there are two curves plotted for soft cascade. The first curve has very
good performance, at the cost of slow speed (average 37.1 features per window). The classification
accuracy dropped significantly in the second curve, which is faster (average 25 features per window).
(a)
Final Threshold        -3.0    -2.5    -2.0    -1.5    -1.0    -0.5    0.0
Detection Rate         95.2%   94.6%   93.2%   92.5%   91.7%   90.3%   88.8%
# of False Positives   95      51      32      20      8       7       5
DBP                    36.13   35.78   35.76   34.93   29.22   28.91   26.72
MIP                    16.11   16.06   16.80   18.60   16.96   15.53   14.59

(b)
Approach          Total # of features   Slowness
Viola-Jones       6061                  10
Boosting chain    700                   18.1
FloatBoost        2546                  18.9
WaldBoost         600                   13.9
Wu et al.         756                   N/A
Soft cascade      4943                  37.1 (25)
Figure 4: (a) Pruning performance of DBP and MIP. The bottom two rows indicate the average
number of features visited per window on the MIT+CMU dataset. (b) Results of existing work.
Figure 4(a) compares DBP and MIP with different final thresholds of the strong classifier. The
original data set for learning the soft cascade is reused for pruning the detector. Since MIP is a more
aggressive pruning method, the average number of features evaluated is much lower than DBP.
Note both DBP and MIP guarantee that no positive example from the training set is lost. There
is no similar guarantee for test data, though. Figure 3(b) shows that there is no practical loss in
classification accuracy on the MIT+CMU test dataset for various applications of the MIP algorithm
(note that the MIT+CMU data is not used by the training process in any way).
Speed comparisons with other algorithms are subtle (Figure 4(b)). The first observation is that higher
detection rates almost always require the evaluation of additional features. This is certainly true
in our experiments, but it is also true in past papers (e.g., the two curves of Bourdev-Brandt soft
cascade in Figure 3(a)). The fastest algorithms often cannot achieve very high detection rates. One
explanation is that in order to achieve higher detection rates one must retain windows which are
'ambiguous' and may contain faces. The proposed MIP-based detector yields a much lower false
positive rate than the 25-feature Bourdev-Brandt soft cascade and nearly 35% improvement on detection speed. While the WaldBoost algorithm is quite fast, detection rates are measurably lower.
Detectors such as Viola-Jones, boosting chain, FloatBoost, and Wu et al. all require manual tuning.
We can only guess how much trial and error went into getting a fast detector that yields good results.
The expected computation time of the DBP soft-cascade varies monotonically in detection rate.
This is guaranteed by the algorithm. In experiments with MIP we found a surprising quirk in the
expected computation times. One would expect that if the required detection rate is higher, it would
be more difficult to prune. In MIP, when the detection rate increases, there are two conflicting
factors involved. First, the number of detected faces increases, which increases the difficulty of
pruning. Second, for each face the number of retained and acceptable windows increases. Since
we are computing the maximum of this larger set, MIP can in some cases be more aggressive. The
second factor explains the increase of speed when the final threshold changes from -1.5 to -2.0.
The direct performance comparison between MIP and Bourdev-Brandt (B-B) was performed using
the same soft-cascade and the same data. In order to better measure performance differences we
created a larger test set containing 3,859 images with 3,652 faces collected from the web. Both
algorithms prune the strong classifier for a target detection rate of 97.2% on the training set, which
corresponds to having a final threshold of -2.5 in Figure 4(a). We use the same exponential function
family as [1] for B-B, and adjust the control parameter alpha in the range between -16 and 4. The
results are shown in Figure 5. It can be seen that the MIP pruned detector has the best detection
performance. When a positive alpha is used (e.g., alpha = 4), the B-B pruned detector is still worse than
the MIP pruned detector, and its speed is 5 times slower (56.83 vs. 11.25). On the other hand, when
alpha is negative, the speed of B-B pruned detectors improves and can be faster than MIP (e.g., when
alpha = -16). Note, adjusting alpha leads to changes both in detection time and false positive rate.
In practice, both MIP and B-B can be useful. MIP is fully automated and guarantees detection rate
with no increase in false positive rate on the training set. The MIP pruned strong classifier is usually
fast enough for most real-world applications. On the other hand, if speed is the dominant factor,
one can specify a target detection rate and target execution time and use B-B to find a solution.
[Figure 5 plots detection rate (0.895 to 0.909) against the number of false positives (1000 to 1600) for: MIP, T=-2.5, #f=11.25; B-B, alpha=-16, #f=8.46; B-B, alpha=-8, #f=10.22; B-B, alpha=-4, #f=13.17; B-B, alpha=0, #f=22.75; B-B, alpha=4, #f=56.83.]
Figure 5: The detector performance comparison after applying MIP and Bourdev-Brandt's
method [1]. Note, this test was done using a much larger, and more difficult, test set than MIT+CMU.
In the legend, the symbol #f represents the average number of weak classifiers visited per window.
Note such a solution is not guaranteed, and the false positive rate may be unacceptably high. (The
performance degradation of B-B heavily depends on the given soft-cascade. While with our detector
the performance of B-B is acceptable even when alpha = -16, the performance of the detector in [1]
drops significantly from 37 features to 25 features, as shown in Fig. 3(a).)
5 Conclusions
We have presented a simple yet effective way to set the rejection thresholds of a given soft-cascade,
called multiple instance pruning (MIP). The algorithm begins with a conventional strong classifier
and an associated final threshold. MIP then adds a set of rejection thresholds to construct a cascade
detector. The rejection thresholds are determined so that every face which was detected by the original strong classifier is guaranteed to be detected by the soft cascade. The algorithm also guarantees
that the false positive rate on the training set will not increase. There is only one parameter used
throughout the cascade training process, the target detection rate for the final system. Moreover,
there are no required assumptions about probability distributions, statistical independence, or ad hoc
intermediate targets for detection rate or false positive rate.
Appendix: Learn Soft Cascade with Weight Trimming and Bootstrapping
We present an algorithm for learning a strong classifier from a very large set of training examples. In
order to deal with the many millions of examples, the learning algorithm uses both weight trimming
and bootstrapping. Weight trimming was proposed by Friedman, Hastie and Tibshirani [3]. At each
round of boosting it ignores training examples with the smallest weights, up to a percentage of the
total weight which can be between 1% and 10%. Since the weights are typically very skewed toward
a small number of hard examples, this can eliminate a very large number of examples. It was shown
that weight trimming can dramatically reduce computation for boosted methods without sacrificing
accuracy. In weight trimming no example is discarded permanently, therefore it is ideal for learning
a soft cascade.
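As a rough illustration, a minimal numpy sketch of one trimming round under these assumptions (the fraction and array representation are illustrative; trimmed examples are merely skipped this round, not discarded):

import numpy as np

def weight_trim_mask(weights, frac=0.10):
    """One round of weight trimming (a sketch): keep the heaviest examples,
    dropping the lightest ones until about `frac` of the total weight is
    removed. Nothing is discarded permanently; the mask is recomputed
    each time trimming is applied."""
    order = np.argsort(weights)                     # lightest first
    cum = np.cumsum(weights[order])
    keep = np.ones(len(weights), dtype=bool)
    keep[order[cum < frac * weights.sum()]] = False
    return keep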
The algorithm is described in Figure 6. In step 4, a set A is predefined to reduce the number of weight
updates on the whole training set. One can in theory update the scores of the whole training set after
each feature is learned if computationally affordable, though the gain in detector performance may
not be visible. Note, a set of thresholds is also returned by this process (making the result a soft-cascade). These preliminary rejection thresholds are extremely conservative, retaining all positive
examples in the training set. They result in a very slow detector: the average number of features
visited per window is on the order of hundreds. These thresholds will be replaced with the ones
derived by the MIP algorithm. We set the preliminary thresholds only to moderately speed up the
computation of ROC curves before MIP.
Input
• Training examples (x_1, y_1), ..., (x_K, y_K), where y_k ∈ {-1, 1} for negative and positive examples. K is on the order of billions.
• T is the total number of weak classifiers, which can be set through cross-validation.
Initialize
• Take all positive examples and randomly sample negative examples to form a subset of Q examples. Q = 4 × 10^6 in the current implementation.
• Initialize weights w_{1,i} to guarantee weight balance between positive and negative examples on the sampled dataset.
• Define A as the set {2, 4, 8, 16, 32, 64, 128, 192, 256, ...}.
Adaboost Learning
For t = 1, ..., T:
1. For each rectangle filter in the pool, construct a weak classifier that minimizes the Z score [8] under the current set of weights w_{t,i}, i ∈ Q.
2. Select the best classifier h_t with the minimum Z score, and find the associated confidence α_t.
3. Update the weights of all Q sampled examples.
4. If t ∈ A,
   • Update the weights of the whole training set using the previously selected classifiers h_1, ..., h_t.
   • Perform weight trimming [3] to trim 10% of the negative weights.
   • Take all positive examples and randomly sample negative examples from the trimmed training set to form a new subset of Q examples.
5. Set the preliminary rejection threshold θ(t) of Σ_{j=1}^t α_j h_j as the minimum score of all positive examples at stage t.
Output
Weak classifiers h_t, t = 1, ..., T, the associated confidences α_t, and preliminary rejection thresholds θ(t).

Figure 6: Adaboost learning with weight trimming and bootstrapping.
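To make the role of the returned thresholds concrete, here is a hedged sketch of how a soft cascade with per-stage rejection thresholds θ(t) is evaluated on one window (the function interface is our assumption):

def soft_cascade_score(x, weak, alpha, theta):
    """Evaluate a soft cascade on one window (a sketch; interface is an
    assumption). weak[t](x) in {-1, +1}; alpha[t] is its confidence;
    theta[t] is the stage-t rejection threshold. Returns the final score,
    or None if the partial sum ever falls below the threshold."""
    s = 0.0
    for t in range(len(weak)):
        s += alpha[t] * weak[t](x)        # running confidence-weighted sum
        if s < theta[t]:
            return None                   # window rejected at stage t
    return s                              # survived all stages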
References
[1] L. Bourdev and J. Brandt. Robust object detection via soft cascade. In Proc. of CVPR, 2005.
[2] S. C. Brubaker, M. D. Mullin, and J. M. Rehg. Towards optimal training of cascaded detectors. In Proc.
of ECCV, 2006.
[3] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of boosting.
Technical report, Dept. of Statistics, Stanford University, 1998.
[4] S. Li, L. Zhu, Z. Zhang, A. Blake, H. Zhang, and H. Shum. Statistical learning of multi-view face
detection. In Proc. of ECCV, 2002.
[5] H. Luo. Optimization design of cascaded classifiers. In Proc. of CVPR, 2005.
[6] S. J. Nowlan and J. C. Platt. A convolutional neural network hand tracker. In Proc. of NIPS, volume 7,
1995.
[7] H. Rowley, S. Baluja, and T. Kanade. Neural network-based face detection. IEEE Trans. on PAMI,
20:23-38, 1998.
[8] R. E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. Machine
Learning, 37:297-336, 1999.
[9] J. Sochman and J. Matas. Waldboost - learning for time constrained sequential detection. In Proc. of
CVPR, 2005.
[10] K. Sung and T. Poggio. Example-based learning for view-based face detection. IEEE Trans. on PAMI,
20:39-51, 1998.
[11] P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. In Proc. of
CVPR, 2001.
[12] P. Viola, J. C. Platt, and C. Zhang. Multiple instance boosting for object detection. In Proc. of NIPS,
volume 18, 2006.
[13] B. Wu, H. Ai, C. Huang, and S. Lao. Fast rotation invariant multi-view face detection based on real
adaboost. In Proc. of IEEE Automatic Face and Gesture Recognition, 2004.
[14] J. Wu, J. M. Rehg, and M. D. Mullin. Learning a rare event detection cascade by direct feature selection.
In Proc. of NIPS, volume 16, 2004.
[15] R. Xiao, L. Zhu, and H. Zhang. Boosting chain learning for object detection. In Proc. of ICCV, 2003.
| 3265 |@word trial:1 version:1 seems:1 reused:2 cha:1 instruction:1 pick:1 reduction:1 score:15 shum:1 ours:1 past:2 existing:4 current:3 com:1 nowlan:2 surprising:1 luo:1 yet:2 must:4 subsequent:1 visible:1 additive:1 shape:1 drop:1 update:4 v:1 greedy:1 selected:1 guess:1 unacceptably:1 destined:2 xk:5 core:1 record:1 provides:1 sochman:3 boosting:8 location:1 node:2 brandt:10 simpler:4 zhang:8 along:1 direct:3 become:1 consists:1 manner:1 theoretically:1 expected:2 rapid:1 roughly:1 multi:2 floatboost:2 automatically:1 cpu:1 window:37 provided:1 begin:1 matched:1 moreover:1 minimizes:1 proposing:1 bootstrapping:3 sung:1 guarantee:7 safely:2 pseudo:1 every:2 exactly:1 returning:1 classifier:48 demonstrates:1 platt:4 control:1 positive:55 declare:1 before:3 dropped:1 struggle:1 initiated:1 pami:2 might:2 collect:1 challenging:1 fastest:2 range:3 practical:2 testing:2 practice:3 lost:2 procedure:2 area:1 empirical:1 reject:12 cascade:43 significantly:3 confidence:3 cannot:1 selection:1 risk:1 impossible:2 applying:1 conventional:1 insight:3 rehg:2 searching:1 notion:1 variation:1 target:11 play:2 heavily:1 pt:1 us:1 pa:1 expensive:1 recognition:1 labeled:2 database:1 bottom:1 role:2 went:1 removed:2 valuable:1 yk:7 principled:1 complexity:3 moderately:1 rowley:1 trained:2 reordered:1 purely:1 mislabeled:2 various:3 fewest:1 train:2 distinct:1 fast:9 effective:2 detected:7 quite:5 whose:1 heuristic:1 apparent:1 larger:3 cvpr:4 stanford:1 statistic:1 noisy:2 final:29 hoc:4 advantage:1 propose:5 rapidly:2 achieve:4 getting:1 billion:2 produce:2 generating:1 object:4 bourdev:10 quirk:1 strong:7 come:1 indicate:1 differ:1 closely:1 correct:3 filter:2 opteron:1 explains:1 require:2 preliminary:4 tracker:1 considered:5 ground:8 blake:1 great:1 major:1 vary:1 early:4 smallest:2 proc:11 applicable:1 bag:7 label:4 visited:3 largest:2 successfully:1 weighted:1 mit:8 always:2 hj:2 boosted:2 mil:1 derived:1 improvement:2 indicates:2 contrast:2 criticism:1 detect:2 sense:1 entire:1 typically:3 integrated:1 initially:1 eliminate:1 issue:2 among:1 classification:4 flexible:1 dual:1 retaining:1 proposes:1 art:1 constrained:1 initialize:4 construct:2 having:1 represents:1 jones:7 progressed:1 nearly:1 report:1 fundamentally:1 few:2 randomly:2 simultaneously:1 preserve:1 replaced:1 microsoft:3 maintain:1 friedman:2 freedom:1 detection:53 trimming:8 possibility:1 highly:1 evaluation:1 certainly:1 adjust:1 introduces:1 chain:4 predefined:1 accurate:1 integral:1 edge:1 partial:2 poggio:1 re:1 plotted:1 mip:33 sacrificing:1 mullin:2 instance:10 soft:25 earlier:6 cost:1 subset:2 rare:1 hundred:1 successful:2 conducted:1 too:1 optimally:2 scanning:1 varies:1 combined:1 retain:4 pool:1 ambiguity:1 ear:1 containing:3 huang:1 worse:2 admit:1 return:1 li:1 aggressive:4 includes:1 ad:4 depends:1 later:2 performed:1 h1:1 view:4 red:1 contribution:1 accuracy:4 convolutional:1 yield:5 correspond:1 conceptually:1 weak:14 accurately:1 none:1 interocular:1 trajectory:13 processor:1 classified:2 detector:31 whenever:1 manual:1 definition:1 involved:1 associated:4 mi:2 unsolved:1 gain:1 sampled:2 dataset:10 tunable:1 adjusting:1 recall:1 improves:1 organized:1 subtle:1 actually:1 higher:3 day:2 adaboost:4 specify:1 improved:2 evaluated:1 though:5 done:1 rejected:13 stage:9 until:1 hand:3 mistakenly:1 web:2 overlapping:1 logistic:1 perhaps:2 validity:1 concept:1 true:7 contain:1 hence:1 assigned:1 aggressively:2 satisfactory:1 deal:2 round:1 during:2 skewed:1 ambiguous:1 criterion:2 chin:1 complete:15 demonstrate:1 image:10 
consideration:1 novel:1 recently:2 fi:3 superior:1 rotation:1 physical:1 volume:3 million:1 discussed:1 forehead:1 significant:1 ai:1 automatic:2 tuning:1 operating:1 add:1 dominant:1 recent:1 driven:2 slowness:1 success:2 seen:1 minimum:3 additional:1 relaxed:1 waldboost:4 prune:2 determine:4 monotonically:1 ii:1 multiple:12 full:1 technical:2 match:2 determination:1 faster:2 cross:1 gesture:1 equally:2 prediction:1 basic:1 regression:1 cmu:8 affordable:1 sometimes:1 justified:1 proposal:1 operate:1 rest:1 subject:1 legend:1 call:2 slop:1 ideal:1 intermediate:4 enough:1 automated:1 independence:3 gave:1 hastie:2 reduce:3 inner:1 shift:1 whether:1 gb:1 trimmed:1 effort:1 returned:1 passing:1 constitute:1 dramatically:1 useful:1 clear:1 ten:2 reluctantly:1 simplest:1 reduced:1 generate:1 schapire:1 percentage:3 estimated:1 per:5 tibshirani:2 blue:4 key:4 nevertheless:2 threshold:68 conspire:1 ht:3 rectangle:6 backward:1 ram:1 year:2 sum:3 run:1 parameterized:1 family:2 almost:1 throughout:1 wu:4 separation:1 acceptable:10 appendix:2 bit:2 cyan:4 guaranteed:6 precisely:1 ri:3 speed:10 extremely:3 min:2 pruned:8 describes:1 making:1 gradually:1 invariant:1 iccv:1 computationally:1 remains:1 previously:1 eventually:1 mechanism:1 singer:1 dbp:9 slower:1 permanently:1 original:5 include:1 ensure:2 coined:1 restrictive:1 matas:3 question:2 distance:2 separate:1 amd:1 evaluate:1 collected:3 toward:1 code:1 retained:11 index:5 mini:1 ratio:1 balance:1 difficult:8 potentially:1 trace:1 negative:17 implementation:1 design:1 perform:2 observation:3 discarded:1 finite:1 viola:11 head:1 delaying:1 y1:1 brubaker:1 arbitrary:1 required:4 learned:4 conflicting:1 nip:3 trans:2 address:2 beyond:1 redmond:1 below:7 usually:1 reliable:1 max:3 explanation:1 critical:2 event:1 difficulty:1 haar:1 cascaded:3 zhu:4 scheme:2 rated:1 lao:1 eye:1 created:1 text:1 determining:1 fully:2 reordering:1 loss:1 expect:1 validation:1 consistent:1 xiao:3 row:1 eccv:2 side:1 fall:7 face:36 benefit:2 ghz:1 curve:10 valid:1 cumulative:2 world:2 computes:1 ignores:1 collection:1 refinement:1 pruning:19 alpha:5 trim:1 active:1 conceptual:1 grayscale:1 sk:8 kanade:1 terminate:1 reasonably:1 robust:3 learn:2 complex:1 main:1 terminated:1 whole:4 noise:1 paul:1 x1:1 measurably:1 fig:1 roc:4 slow:3 position:3 exponential:2 lie:1 magenta:3 symbol:1 false:15 sequential:1 effectively:1 execution:1 rejection:26 led:1 simply:2 ordered:1 corresponds:1 truth:8 towards:1 absence:1 change:2 hard:1 upright:2 typical:1 reducing:2 determined:1 justify:1 baluja:1 flag:1 degradation:2 conservative:2 called:4 total:4 pas:2 experimental:3 attempted:1 vote:1 palatable:1 select:2 scan:1 frontal:5 incorporate:1 dept:1 |
2,498 | 3,266 | Bayesian Agglomerative Clustering with Coalescents
Yee Whye Teh
Gatsby Unit
University College London
Hal Daumé III
School of Computing
University of Utah
Daniel Roy
CSAIL
MIT
[email protected]
[email protected]
[email protected]
Abstract
We introduce a new Bayesian model for hierarchical clustering based on a prior
over trees called Kingman's coalescent. We develop novel greedy and sequential
Monte Carlo inferences which operate in a bottom-up agglomerative fashion. We
show experimentally the superiority of our algorithms over the state-of-the-art,
and demonstrate our approach in document clustering and phylolinguistics.
1 Introduction
Hierarchically structured data abound across a wide variety of domains. It is thus not surprising that
hierarchical clustering is a traditional mainstay of machine learning [1]. The dominant approach to
hierarchical clustering is agglomerative: start with one cluster per datum, and greedily merge pairs
until a single cluster remains. Such algorithms are efficient and easy to implement. Their primary
limitations, a lack of predictive semantics and a coherent mechanism to deal with missing data,
can be addressed by probabilistic models that handle partially observed data, quantify goodness-of-fit, predict on new data, and integrate within more complex models, all in a principled fashion.
Currently there are two main approaches to probabilistic models for hierarchical clustering. The
first takes a direct Bayesian approach by defining a prior over trees followed by a distribution over
data points conditioned on a tree [2, 3, 4, 5]. MCMC sampling is then used to obtain trees from
their posterior distribution given observations. This approach has the advantages and disadvantages
of most Bayesian models: averaging over sampled trees can improve predictive capabilities, give
confidence estimates for conclusions drawn from the hierarchy, and share statistical strength across
the model; but it is also computationally demanding and complex to implement. As a result such
models have not found widespread use. [2] has the additional advantage that the distribution induced
on the data points is exchangeable, so the model can be coherently extended to new data. The
second approach uses a flat mixture model as the underlying probabilistic model and structures the
posterior hierarchically [6, 7]. This approach uses an agglomerative procedure to find the tree giving
the best posterior approximation, mirroring traditional agglomerative clustering techniques closely
and giving efficient and easy to implement algorithms. However because the underlying model has
no hierarchical structure, there is no sharing of information across the tree.
We propose a novel class of Bayesian hierarchical clustering models and associated inference algorithms combining the advantages of both probabilistic approaches above. 1) We define a prior and
compute the posterior over trees, thus reaping the benefits of a fully Bayesian approach; 2) the distribution over data is hierarchically structured allowing for sharing of statistical strength; 3) we have
efficient and easy to implement inference algorithms that construct trees agglomeratively; and 4) the
induced distribution over data points is exchangeable. Our model is based on an exchangeable distribution over trees called Kingman's coalescent [8, 9]. Kingman's coalescent is a standard model from
population genetics for the genealogy of a set of individuals. It is obtained by tracing the genealogy
backwards in time, noting when lineages coalesce together. We review Kingman's coalescent in
Section 2. Our own contribution is in using it as a prior over trees in a hierarchical clustering model
(Section 3) and in developing novel inference procedures for this model (Section 4).
[Figure 1(a) depicts the variables of the n-coalescent for n = 4: observations x_1, ..., x_4 at the leaves, latent variables y_{{1,2}}, y_{{3,4}}, y_{{1,2,3,4}} and z at internal nodes, coalescent times t_0 = 0 > t_1 > t_2 > t_3 with durations δ_1, δ_2, δ_3, and partitions π(t) = {{1},{2},{3},{4}}, {{1},{2},{3,4}}, {{1,2},{3,4}}, {{1,2,3,4}}. Panels (b) and (c) are sample plots.]
Figure 1: (a) Variables describing the n-coalescent. (b) Sample path from a Brownian diffusion
coalescent process in 1D, circles are coalescent points. (c) Sample observed points from same in
2D, notice the hierarchically clustered nature of the points.
2 Kingman's coalescent
Kingman's coalescent is a standard model in population genetics describing the common genealogy
(ancestral tree) of a set of individuals [8, 9]. In its full form it is a distribution over the genealogy of
a countably infinite set of individuals. Like other nonparametric models (e.g. Gaussian and Dirichlet processes), Kingman's coalescent is most easily described and understood in terms of its finite
dimensional marginal distributions over the genealogies of n individuals, called n-coalescents. We
obtain Kingman's coalescent as n → ∞.
Consider the genealogy of n individuals alive at the present time t = 0. We can trace their ancestry
backwards in time to the distant past t = −∞. Assume each individual has one parent (in genetics,
haploid organisms), and therefore genealogies of [n] = {1, ..., n} form a directed forest. In general,
at time t ≤ 0, there are m (1 ≤ m ≤ n) ancestors alive. Identify these ancestors with their corresponding sets π_1, ..., π_m of descendants (we will make this identification throughout the paper). Note that
π(t) = {π_1, ..., π_m} forms a partition of [n], and interpret t ↦ π(t) as a function from (−∞, 0] to the
set of partitions of [n]. This function is piecewise constant, left-continuous, monotonic (s ≤ t implies
that π(t) is a refinement of π(s)), and π(0) = {{1}, ..., {n}} (see Figure 1a). Further, π completely
and succinctly characterizes the genealogy; we shall henceforth refer to π as the genealogy of [n].
Kingman's n-coalescent is simply a distribution over genealogies of [n], or equivalently, over the
space of partition-valued functions like π. More specifically, the n-coalescent is a continuous-time,
partition-valued, Markov process, which starts at {{1}, ..., {n}} at present time t = 0, and evolves
backwards in time, merging (coalescing) lineages until only one is left. To describe the Markov
process in its entirety, it is sufficient to describe the jump process (i.e. the embedded, discrete-time,
Markov chain over partitions) and the distribution over coalescent times. Both are straightforward
and their simplicity is part of the appeal of Kingman's coalescent. Let ρ_{li}, ρ_{ri} be the ith pair of
lineages to coalesce, t_{n-1} < · · · < t_1 < t_0 = 0 be the coalescent times and δ_i = t_{i-1} − t_i > 0 be the
duration between adjacent events (see Figure 1a). Under the n-coalescent, every pair of lineages
merges independently with exponential rate 1. Thus the first pair amongst m lineages merges with
rate \binom{m}{2} = \frac{m(m-1)}{2}. Therefore δ_i ∼ Exp(\binom{n-i+1}{2}) independently, the pair ρ_{li}, ρ_{ri} is chosen
uniformly from among the \binom{n-i+1}{2} pairs right after time t_i, and with probability one a random draw from the n-coalescent is a
binary tree with a single root at t = −∞ and the n individuals at time t = 0. The genealogy is:
\pi(t) = \begin{cases} \{\{1\},\ldots,\{n\}\} & \text{if } t = 0;\\ \pi(t_{i-1}) - \rho_{li} - \rho_{ri} + (\rho_{li} \cup \rho_{ri}) & \text{if } t = t_i;\\ \pi(t_i) & \text{if } t_{i+1} < t < t_i. \end{cases}   (1)
Combining the probabilities of the durations and choices of lineages, the probability of π is simply:

p(\pi) = \prod_{i=1}^{n-1} \binom{n-i+1}{2} \exp\Bigl(-\binom{n-i+1}{2}\,\delta_i\Bigr) \Big/ \binom{n-i+1}{2} = \prod_{i=1}^{n-1} \exp\Bigl(-\binom{n-i+1}{2}\,\delta_i\Bigr)   (2)
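As an illustration of this generative process, a small sketch that samples a genealogy from the n-coalescent (the event-list representation is our choice):

import numpy as np

def sample_n_coalescent(n, rng=np.random.default_rng()):
    """Draw a genealogy from Kingman's n-coalescent (a sketch of the
    generative process above). Returns (time, left, right) coalescent
    events with times t_i < 0 running back from the present t_0 = 0."""
    lineages = [frozenset([i]) for i in range(n)]
    t, events = 0.0, []
    while len(lineages) > 1:
        m = len(lineages)
        t -= rng.exponential(2.0 / (m * (m - 1)))      # delta ~ Exp(C(m,2))
        i, j = sorted(rng.choice(m, size=2, replace=False))
        left, right = lineages[i], lineages[j]
        lineages[i] = left | right                     # merge the chosen pair
        lineages.pop(j)
        events.append((t, left, right))
    return events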
The n-coalescent has some interesting statistical properties [8, 9]. The marginal distribution over
tree topologies is uniform and independent of the coalescent times. Secondly, it is infinitely exchangeable: given a genealogy drawn from an n-coalescent, the genealogy of any m contemporary
individuals alive at time t ≤ 0 embedded within the genealogy is a draw from the m-coalescent.
Thus, taking n → ∞, there is a distribution over genealogies of a countably infinite population
for which the marginal distribution of the genealogy of any n individuals gives the n-coalescent.
Kingman called this the coalescent.
3 Hierarchical clustering with coalescents
We take a Bayesian approach to hierarchical clustering, placing a coalescent prior on the latent
tree and modeling the observed data with a tree structured Markov process evolving forward in
time. We will alter our terminology from genealogy to tree, from n individuals at present time to n
observed data points, and from individuals on the genealogy to latent variables on the tree-structured
distribution. Let x = {x_1, ..., x_n} be n observed data points at the leaves of a tree π drawn from
the n-coalescent. π has n − 1 coalescent points, the ith occurring when ρ_{li} and ρ_{ri} merge at time t_i
to form ρ_i = ρ_{li} ∪ ρ_{ri}. Let t_{li} and t_{ri} be the times at which ρ_{li} and ρ_{ri} are themselves formed.
We use a continuous-time Markov process to define the distribution over the n data points x given
the tree π. The Markov process starts in the distant past, evolves forward in time, splits at each
coalescent point, and evolves independently down both branches until we reach time 0, when n data
points are observations of the process at the n leaves of the tree. The joint distribution described by
this process respects the conditional independences implied by the structure of the directed tree π.
Let y_{ρ_i} be a latent variable that takes on the value of the Markov process at ρ_i just before it splits. Let
y_{{i}} = x_i at leaf i. See Figure 1a.
To complete the description of the likelihood model, let q(z) be the initial distribution of the Markov
process at time t = −∞, and k_{st}(x, y) be the transition probability from state x at time s to state y
at time t. This Markov process need be neither stationary nor ergodic. Marginalizing over paths of
the Markov process, the joint probability over the latent variables and the observations is:

p(x, y, z \mid \pi) = q(z)\, k_{-\infty\, t_{n-1}}(z, y_{\rho_{n-1}}) \prod_{i=1}^{n-1} k_{t_i t_{li}}(y_{\rho_i}, y_{\rho_{li}})\, k_{t_i t_{ri}}(y_{\rho_i}, y_{\rho_{ri}})   (3)
Notice that the marginal distributions for each observation p(x_i | π) are identical and given by the
Markov process at time 0. However the observations are not independent as they share the same
sample path down the Markov process until it splits. In fact the amount of dependence between two
observations is a function of the time at which the observations coalesce. A more recent coalescent
time implies larger dependence. The overall distribution induced on the observations p(x) inherits
the infinite exchangeability of the n-coalescent. We consider in Section 4.3 a Brownian diffusion
(Figures 1(b,c)) and a simple independent sites mutation process on multinomial vectors.
4 Agglomerative sequential Monte Carlo and greedy inference
We develop two classes of efficient and easily implementable inference algorithms for our hierarchical clustering model based on sequential Monte Carlo (SMC) and greedy schemes respectively.
In both classes, the latent variables are integrated out, and the trees are constructed in a bottom-up
fashion. The full tree π can be expressed as a series of n − 1 coalescent events, ordered backwards
in time. The ith coalescent event involves the merging of the two subtrees with leaves ρ_{li} and ρ_{ri}
and occurs at a time δ_i before the previous coalescent event. Let θ_i = {δ_j, ρ_{lj}, ρ_{rj} for j ≤ i} denote
the first i coalescent events. θ_{n-1} is equivalent to π and we shall use them interchangeably.
We assume that the form of the Markov process is such that the latent variables {y_{ρ_i}}_{i=1}^{n-1} and z can
be efficiently integrated out using an upward pass of belief propagation on the tree. Let M_{ρ_i}(y) be
the message passed from y_{ρ_i} to its parent; M_{{i}}(y) = δ_{x_i}(y) is a point mass at x_i for leaf i. M_{ρ_i}(y)
is proportional to the likelihood of the observations at the leaves below coalescent event i, given that
y_{ρ_i} = y. Belief propagation computes the messages recursively up the tree; for i = 1, ..., n − 1:

M_{\rho_i}(y) = Z_{\rho_i}^{-1}(x, \theta_i) \prod_{b=l,r} \int k_{t_i t_{bi}}(y, y_b)\, M_{\rho_{bi}}(y_b)\, dy_b   (4)
where Z_{ρ_i}(x, θ_i) is a normalization constant. The choice of Z does not affect the computed
probability of x, but does impact the accuracy and efficiency of our inference algorithms. We found
that Z_{\rho_i}(x, \theta_i) = \iint q(z)\, k_{-\infty t_i}(z, y)\, M_{\rho_i}(y)\, dy\, dz worked well. At the root, we have:

Z_{-\infty}(x, \theta_{n-1}) = \iint q(z)\, k_{-\infty t_{n-1}}(z, y)\, M_{\rho_{n-1}}(y)\, dy\, dz   (5)
The marginal probability p(x|π) is now given by the product of normalization constants:

p(x \mid \pi) = Z_{-\infty}(x, \theta_{n-1}) \prod_{i=1}^{n-1} Z_{\rho_i}(x, \theta_i)   (6)

Multiplying in the prior (2) over π, we get the joint probability for the tree π and observations x:

p(x, \pi) = Z_{-\infty}(x, \theta_{n-1}) \prod_{i=1}^{n-1} \exp\Bigl(-\tbinom{n-i+1}{2}\delta_i\Bigr) Z_{\rho_i}(x, \theta_i)   (7)
Our inference algorithms are based upon (7). The sequential Monte Carlo (SMC) algorithms approximate the posterior over the tree θ_{n-1} using a weighted sum of samples, while the greedy algorithms
construct θ_{n-1} by maximizing local terms in (7). Both proceed by iterating over i = 1, ..., n − 1,
choosing a duration δ_i and a pair of subtrees ρ_{li}, ρ_{ri} to coalesce at each iteration. This choice
is based upon the ith term in (7), interpreted as the product of a local prior \exp(-\tbinom{n-i+1}{2}\delta_i) and a
local likelihood Z_{ρ_i}(x, θ_i) for choosing δ_i, ρ_{li} and ρ_{ri} given θ_{i-1}.
4.1 Sequential Monte Carlo algorithms
SMC algorithms approximate the posterior by iteratively constructing a weighted sum of point
masses. At iteration i − 1, particle s consists of θ_{i-1}^s = {δ_j^s, ρ_{lj}^s, ρ_{rj}^s for j < i}, and has weight
w_{i-1}^s. At iteration i, s is extended by sampling δ_i^s, ρ_{li}^s and ρ_{ri}^s from a proposal distribution
f_i(δ_i^s, ρ_{li}^s, ρ_{ri}^s | θ_{i-1}^s), and the weight is updated by:

w_i^s = w_{i-1}^s \exp\Bigl(-\tbinom{n-i+1}{2}\delta_i^s\Bigr)\, Z_{\rho_i}(x, \theta_i^s) \Big/ f_i(\delta_i^s, \rho_{li}^s, \rho_{ri}^s \mid \theta_{i-1}^s)   (8)

After n − 1 iterations, we obtain a set of trees θ_{n-1}^s and weights w_{n-1}^s. The joint distribution
is approximated by p(\pi, x) \approx \sum_s w_{n-1}^s \delta_{\theta_{n-1}^s}(\pi), while the posterior is approximated with the
weights normalized. An important aspect of SMC is resampling, which places more particles in
high probability regions and prunes particles stuck in low probability regions. We resample as in
Algorithm 5.1 of [10] when the effective sample size ratio as estimated in [11] falls below one half.
Algorithm 5.1 of [10] when the effective sample size ratio as estimated in [11] falls below one half.
SMC-PriorPrior. The simplest proposal distribution is to sample ?is , ?sli and ?sri from the local
s
s
s
n?i+1
prior. ?i is drawn from an exponential with rate
and ?li , ?ri are drawn uniformly from
2
all available pairs. The weight updates (8) reduce to multiplying by Z?i (x, ?is ). This approach is
computationally very efficient, but performs badly with many objects due to the uniform draws over
pairs. SMC-PriorPost. The second approach addresses the suboptimal choice of pairs to coalesce.
We first draw ?is from its local prior, then draw ?sli , ?sri from the local posterior:
P
s
s
s
s
s 0 0
fi (?sli , ?sri |?is , ?i?1
) ? Z?i (x, ?i?1
, ?is , ?sli , ?sri ); wis = wi?1
?0 ,?0 Z?i (x, ?i?1 , ?i , ?l , ?r ) (9)
l
r
This approach is more computationally demanding since we need to evaluate the local likelihood of
every pair. It also performs significantly better than SMC-PriorPrior. We have found that it works
reasonably well for small data sets but fails in larger ones for which the local posterior for ?i is highly
peaked. SMC-PostPost. The third approach is to draw all of ?is , ?sli and ?sri from their posterior:
s
s
s
?i Z?i (x, ?i?1
, ?is , ?sli , ?sri )
fi (?is , ?sli , ?sri |?i?1
) ? exp ? n?i+1
2
R
P
s
s
wis = wi?1
exp ? n?i+1
? 0 Z?i (x, ?i?1
, ? 0 , ?0l , ?0r ) d? 0
(10)
?0 ,?0r
2
l
This approach requires the fewest particles, but is the most computationally expensive due to the
integral for each pair. Fortunately, for the case of Brownian diffusion process described below, these
integrals are tractable and related to generalized inverse Gaussian distributions.
4.2 Greedy algorithms
SMC algorithms are attractive because they can produce an arbitrarily accurate approximation to the
full posterior as the number of samples grows. However in many applications a single good tree is
often sufficient. We describe a few greedy algorithms to construct a good tree.
Greedy-MaxProb: the obvious greedy algorithm is to pick δ_i, ρ_{li} and ρ_{ri} maximizing the ith term
in (7). We do so by computing the optimal δ_i for each pair of ρ_{li}, ρ_{ri}, and then picking the pair
maximizing the ith term at its optimal δ_i. Greedy-MinDuration: pick the pair to coalesce whose
optimal duration is minimum. Both algorithms require recomputing the optimal duration for each
pair at each iteration, since the prior rate \tbinom{n-i+1}{2} on the duration varies with the iteration i. The total
computational cost is thus O(n^3). We can avoid this by using the alternative view of the n-coalescent
as a Markov process where each pair of lineages coalesces at rate 1. Greedy-Rate1: for each pair
ρ_{li} and ρ_{ri} we determine the optimal δ_i, replacing the \tbinom{n-i+1}{2} prior rate with 1. We coalesce the
pair with most recent time (as in Greedy-MinDuration). This reduces the complexity to O(n^2). We
found that all three performed similarly, and use Greedy-Rate1 in our experiments as it is faster.
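A sketch of Greedy-Rate1 under this description; best_merge is an assumed callback returning the unit-rate optimal duration and the merged subtree for a pair, so each pair is scored exactly once:

import itertools

def greedy_rate1(leaves, best_merge):
    """Greedy-Rate1 (a sketch). best_merge(a, b) is an assumed callback
    returning (delta, merged). With the rate fixed at 1 a pair's optimal
    delta never changes, so pairs are scored once and only pairs involving
    the newly created subtree are re-scored: O(n^2) evaluations overall."""
    forest = list(leaves)
    score = {(id(a), id(b)): best_merge(a, b)
             for a, b in itertools.combinations(forest, 2)}
    while len(forest) > 1:
        a, b = min(itertools.combinations(forest, 2),
                   key=lambda p: score[(id(p[0]), id(p[1]))][0])
        delta, merged = score[(id(a), id(b))]
        forest = [t for t in forest if t is not a and t is not b]
        score.update({(id(t), id(merged)): best_merge(t, merged)
                      for t in forest})
        forest.append(merged)                # merged is always listed last
    return forest[0]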
4.3 Examples
Brownian diffusion. Consider the case of continuous data evolving via Brownian diffusion. The
transition kernel k_{st}(y, ·) is a Gaussian centred at y with covariance (t − s)Λ, where Λ is a symmetric
positive definite covariance matrix. Because the joint distribution (3) over x, y and z is Gaussian,
we can express each message M_{ρ_i}(y) as a Gaussian with mean \hat{y}_{ρ_i} and covariance Λ v_{ρ_i}. The local
likelihood is:

Z_{\rho_i}(x, \theta_i) = |2\pi\hat{\Lambda}_i|^{-1/2} \exp\Bigl(-\tfrac{1}{2}\|\hat{y}_{\rho_{li}} - \hat{y}_{\rho_{ri}}\|^2_{\hat{\Lambda}_i}\Bigr); \quad \hat{\Lambda}_i = \Lambda(v_{\rho_{li}} + v_{\rho_{ri}} + t_{li} + t_{ri} - 2t_i)   (11)

where \|x\|^2_\Lambda = x^\top \Lambda^{-1} x is the Mahalanobis norm. The optimal duration δ_i can also be solved for:
\delta_i = \tfrac{1}{4}\tbinom{n-i+1}{2}^{-1}\Bigl(\sqrt{4\tbinom{n-i+1}{2}\,\|\hat{y}_{\rho_{li}} - \hat{y}_{\rho_{ri}}\|^2_\Lambda + D^2} - D\Bigr) - \tfrac{1}{2}(v_{\rho_{li}} + v_{\rho_{ri}} + t_{li} + t_{ri} - 2t_{i-1})   (12)

where D is the dimensionality. The message at the newly coalesced point has parameters:

v_{\rho_i} = \bigl((v_{\rho_{li}} + t_{li} - t_i)^{-1} + (v_{\rho_{ri}} + t_{ri} - t_i)^{-1}\bigr)^{-1}; \quad \hat{y}_{\rho_i} = \Bigl(\tfrac{\hat{y}_{\rho_{li}}}{v_{\rho_{li}} + t_{li} - t_i} + \tfrac{\hat{y}_{\rho_{ri}}}{v_{\rho_{ri}} + t_{ri} - t_i}\Bigr) v_{\rho_i}   (13)
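A minimal numpy sketch of Eqs. (11) and (13) for the Brownian case (the interface is our assumption; times are negative, with t_i below both children's formation times):

import numpy as np

def brownian_merge(y_l, v_l, t_l, y_r, v_r, t_r, t_i, Lam):
    """Gaussian belief-propagation update for the Brownian coalescent, a
    sketch of Eqs. (11) and (13). Each message is N(y_hat, v * Lam).
    Returns the local evidence Z and the merged message parameters."""
    Lam_hat = Lam * (v_l + v_r + (t_l - t_i) + (t_r - t_i))          # Eq. (11)
    diff = y_l - y_r
    maha = float(diff @ np.linalg.solve(Lam_hat, diff))
    Z = np.exp(-0.5 * maha) / np.sqrt(np.linalg.det(2 * np.pi * Lam_hat))
    v_i = 1.0 / (1.0 / (v_l + t_l - t_i) + 1.0 / (v_r + t_r - t_i))  # Eq. (13)
    y_i = v_i * (y_l / (v_l + t_l - t_i) + y_r / (v_r + t_r - t_i))
    return Z, y_i, v_i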
Multinomial vectors. Consider a Markov process acting on multinomial vectors with each entry
taking one of K values and evolving independently. Entry d evolves at rate λ_d and has equilibrium
distribution vector q_d. The transition rate matrix is Q_d = \lambda_d(\mathbf{1}_K q_d^\top - I_K), where \mathbf{1}_K is a vector of
K ones and I_K is the identity matrix of size K, while the transition probability matrix for entry d in
a time interval of length t is e^{Q_d t} = e^{-\lambda_d t} I_K + (1 - e^{-\lambda_d t})\mathbf{1}_K q_d^\top. Representing the message for
entry d from ρ_i to its parent as a vector M^d_{\rho_i} = [M^{d1}_{\rho_i}, ..., M^{dK}_{\rho_i}]^\top, normalized so that q_d \cdot M^d_{\rho_i} = 1,
the local likelihood terms and messages are computed as

Z^d_{\rho_i}(x, \theta_i) = 1 - e^{\lambda_d(2t_i - t_{li} - t_{ri})}\Bigl(1 - \sum_{k=1}^K q_{dk} M^{dk}_{\rho_{li}} M^{dk}_{\rho_{ri}}\Bigr)   (14)

M^d_{\rho_i} = \bigl(1 - e^{\lambda_d(t_i - t_{li})}(1 - M^d_{\rho_{li}})\bigr)\bigl(1 - e^{\lambda_d(t_i - t_{ri})}(1 - M^d_{\rho_{ri}})\bigr) \big/ Z^d_{\rho_i}(x, \theta_i)   (15)

Unfortunately the optimal δ_i cannot be solved for analytically and we use Newton steps to compute it.
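A sketch of Eqs. (14) and (15) for a single entry d (vector conventions are our assumption):

import numpy as np

def multinomial_merge(M_l, t_l, M_r, t_r, t_i, lam, q):
    """Per-entry message update for the independent-sites mutation process,
    a sketch of Eqs. (14) and (15). M_l, M_r and q are length-K vectors,
    normalised so that q . M = 1; lam is this entry's mutation rate."""
    Z = 1.0 - np.exp(-lam * ((t_l - t_i) + (t_r - t_i))) \
            * (1.0 - q @ (M_l * M_r))                            # Eq. (14)
    a = 1.0 - np.exp(-lam * (t_l - t_i)) * (1.0 - M_l)
    b = 1.0 - np.exp(-lam * (t_r - t_i)) * (1.0 - M_r)
    return Z, (a * b) / Z                                        # Eq. (15)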
4.4 Hyperparameter estimation
We perform hyperparameter estimation by iterating between estimating a tree, and estimating the
hyperparameters. In the Brownian case, we place an inverse Wishart prior on Λ and the MAP
posterior \hat{\Lambda} is available in a standard closed form. In the multinomial case, the updates are not
available analytically and are solved iteratively. Further information on hyperparameter estimation,
as well as predictive densities and more experiments, is available in a longer technical report.
5 Experiments
Synthetic Data Sets. In Figure 2 we compare the various SMC algorithms and Greedy-Rate1 on a
range of synthetic data sets drawn from the Brownian diffusion coalescent process itself (Λ = I_D)
to investigate the effects of various parameters on the efficacy of the algorithms^1. Generally SMC-PostPost performed best, followed by SMC-PriorPost, SMC-PriorPrior and Greedy-Rate1. With
increasing D the amount of data given to the algorithms increases and all algorithms do better,
especially Greedy-Rate1. This is because the posterior becomes concentrated and the Greedy-Rate1
approximation corresponds well with the posterior. As n increases, the amount of data increases
as well and all algorithms perform better. However, the posterior space also increases and SMC-PriorPrior, which simply samples from the prior over genealogies, does not improve as much. We
see this effect as well when S is small. As S increases all SMC algorithms improve. Finally, the
algorithms were surprisingly robust when there is mismatch between the generated data sets' Λ and
the Λ used by the model. We expected all models to perform worse, with SMC-PostPost best able to
maintain its performance (though this is possibly due to our experimental setup).
MNIST and SPAMBASE. We compare the performance of Greedy-Rate1 to two other hierarchical
clustering algorithms: average-linkage and Bayesian hierarchical clustering (BHC) [6]. In MNIST,
^1 Each panel was generated from independent runs. Data set variance affected all algorithms, varying overall
performance across panels. However, trends in each panel are still valid, as they are based on the same data.
[Figure 2 plots average log predictive probability (roughly -1.6 to -0.6) for SMC-PostPost, SMC-PriorPost, SMC-PriorPrior and Greedy-Rate1 in four panels: (a) D: dimensions, (b) n: observations, (c) λ: mutation rate, (d) S: particles.]
Figure 2: Predictive performance of algorithms as we vary (a) the number of dimensions D, (b)
observations n, (c) the mutation rate λ (Λ = λI_D), and (d) the number of samples S. In each panel
other parameters are fixed to their middle values (we used S = 50 in other panels), and we report
log predictive probabilities on one unobserved entry, averaged over 100 runs.
           MNIST                                    SPAMBASE
           Avg-link    BHC        Coalescent       Avg-link    BHC        Coalescent
Purity     .363±.004   .392±.006  .412±.006        .616±.007   .711±.010  .689±.008
Subtree    .581±.005   .579±.005  .610±.005        .607±.011   .549±.015  .661±.012
LOO-acc    .755±.005   .763±.005  .773±.005        .846±.010   .832±.010  .861±.008
Table 1: Comparative results. Numbers are averages and standard errors over 50 and 20 repeats.
we use 20 exemplars from each of 10 digits from the MNIST data set, reduced via PCA to 20
dimensions, repeating the experiment 50 times. In SPAMBASE, we use 100 examples of 57 binary
attributes from each of 2 classes, repeating 20 times. We present purity scores [6], subtree scores
(#{interior nodes with all leaves of the same class}/(n − #classes)) and leave-one-out accuracies (all
scores between 0 and 1, higher better). The results are in Table 1; except for purity on SPAMBASE,
ours gives the best performance. Experiments not presented here show that all greedy algorithms
perform about the same and that performance improves with hyperparameter updates.
Phylolinguistics. We apply Greedy-Rate1 to a phylolinguistic problem: language evolution. Unlike previous research [12] which studies only phonological data, we use a full typological database
of 139 binary features over 2150 languages: the World Atlas of Language Structures (WALS) [13].
The data is sparse: about 84% of the entries are unknown. We use the same version of the database
as extracted by [14]. Based on the Indo-European subset of this data for which at most 30 features
are unknown (48 languages total), we recover the coalescent tree shown in Figure 3(a). Each language is shown with its genus, allowing us to observe that it teases apart Germanic and Romance
languages, but makes a few errors with respect to Iranian and Greek.
Next we compare predictive abilities to other algorithms. We take a subset of WALS and tested on
5% of withheld entries, restoring these with various techniques: Greedy-Rate1; nearest neighbors
(use value from nearest observed neighbor); average-linkage (nearest neighbor in the tree); and
probabilistic PCA (latent dimensions in 5, 10, 20, 40, chosen optimistically). We use five subsets of
the WALS database, obtained by sorting both the languages and features of the database according
to sparsity and using a varying percentage (10% to 50%) of the densest portion. The results are in
Figure 3(b). Our approach performed reasonably well.

Table 2: Comparative performance of various algorithms on phylolinguistics data.

Indo-European Data
           Avg-link   BHC     Coalescent
Purity      0.510     0.491     0.813
Subtree     0.414     0.414     0.690
LOO-acc     0.538     0.590     0.769

Whole World Data
           Avg-link   BHC     Coalescent
Purity      0.162     0.160     0.269
Subtree     0.227     0.099     0.177
LOO-acc     0.080     0.248     0.369
Finally, we compare the trees generated by Greedy-Rate1 with trees generated by either average-linkage or BHC, using the same evaluation criteria as for MNIST and SPAMBASE, with language
genus as classes. The results are in Table 2, where we can see that the coalescent significantly
outperforms the other methods.
[Celtic] Irish
[Celtic] Gaelic (Scots)
[Celtic] Welsh
[Celtic] Cornish
[Celtic] Breton
[Iranian] Tajik
[Iranian] Persian
[Iranian] Kurdish (Central)
[Romance] French
[Germanic] German
[Germanic] Dutch
[Germanic] English
[Germanic] Icelandic
[Germanic] Swedish
[Germanic] Norwegian
[Germanic] Danish
[Romance] Spanish
[Greek] Greek (Modern)
[Slavic] Bulgarian
[Romance] Romanian
[Romance] Portuguese
[Romance] Italian
[Romance] Catalan
[Albanian] Albanian
[Slavic] Polish
[Slavic] Slovene
[Slavic] Serbian?Croatian
[Slavic] Ukrainian
[Slavic] Russian
[Baltic] Lithuanian
[Baltic] Latvian
[Slavic] Czech
[Iranian] Pashto
[Indic] Panjabi
[Indic] Hindi
[Indic] Kashmiri
[Indic] Sinhala
[Indic] Nepali
[Iranian] Ossetic
[Indic] Maithili
[Indic] Marathi
[Indic] Bengali
[Armenian] Armenian (Western)
[Armenian] Armenian (Eastern)
(a) Coalescent for a subset of Indo-European languages from WALS.
[Plot (b): restoration accuracy (72 to 82) versus the fraction of the data set used (0.1 to 0.5), with curves for Coalescent, Neighbor, Agglomerative, and PPCA.]
(b) Data restoration on WALS. Y-axis is accuracy;
X-axis is percentage of data set used in experiments.
At 10%, there are N = 215 languages, H = 14
features and p = 94% observed data; at 20%, N =
430, H = 28 and p = 80%; at 30%: N = 645,
H = 42 and p = 66%; at 40%: N = 860, H =
56 and p = 53%; at 50%: N = 1075, H = 70
and p = 43%. Results are averaged over five folds
with a different 5% hidden each time. (We also tried
a "mode" prediction, but its performance is in the
60% range in all cases, and is not depicted.)
Figure 3: Results of the phylolinguistics experiments.
LLR (t) Top Words
Top Authors (# papers)
32.7 (-2.71) bifurcation attractors hopfield network saddle Mjolsness (9) Saad (9) Ruppin (8) Coolen (7)
0.106 (-3.77) voltage model cells neurons neuron
Koch (30) Sejnowski (22) Bower (11) Dayan (10)
83.8 (-2.02) chip circuit voltage vlsi transistor
Koch (12) Alspector (6) Lazzaro (6) Murray (6)
140.0 (-2.43) spike ocular cells firing stimulus
Sejnowski (22) Koch (18) Bower (11) Dayan (10)
2.48 (-3.66) data model learning algorithm training
Jordan (17) Hinton (16) Williams (14) Tresp (13)
31.3 (-2.76) infomax image ica images kurtosis
Hinton (12) Sejnowski (10) Amari (7) Zemel (7)
31.6 (-2.83) data training regression learning model
Jordan (16) Tresp (13) Smola (11) Moody (10)
39.5 (-2.46) critic policy reinforcement agent controller Singh (15) Barto (10) Sutton (8) Sanger (7)
23.0 (-3.03) network training units hidden input
Mozer (14) Lippmann (11) Giles (10) Bengio (9)
Table 3: Nine clusters discovered in NIPS abstracts data.
NIPS. We applied Greedy-Rate1 to all NIPS abstracts through NIPS12 (1740, total). The data was
preprocessed so that only words occurring in at least 100 abstracts were retained. The word counts
were then converted to binary. We performed one iteration of hyperparameter re-estimation. In
the supplemental material, we depict the top levels of the coalescent tree. Here, we use the tree to
generate a flat clustering. To do so, we use the log likelihood ratio at each branch in the coalescent
to determine if a split should occur. If the log likelihood ratio is greater than zero, we break the
branch; otherwise, we recurse down. On the NIPS abstracts, this leads to nine clusters, depicted
in Table 3. Note that clusters two and three are quite similar; had we used a slightly higher log
likelihood ratio, they would have been merged (the LLR for cluster 2 was only 0.105). Note that
the clustering is able to tease apart Bayesian learning (cluster 5) and non-Bayesian learning (cluster
7), both of which have Mike Jordan as their top author!
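A sketch of this tree-cutting rule; the node interface (.llr, .left, .right, with leaves having left = right = None) is our assumption:

def llr_flat_clusters(node, threshold=0.0):
    """Cut a coalescent tree into a flat clustering (a sketch). Branches
    whose log likelihood ratio exceeds the threshold are split; otherwise
    the whole subtree is kept as a single cluster."""
    if node.left is None:                     # leaf
        return [[node]]
    if node.llr > threshold:                  # evidence favours a split
        return (llr_flat_clusters(node.left, threshold)
                + llr_flat_clusters(node.right, threshold))
    return [list(leaves_of(node))]            # whole subtree = one cluster

def leaves_of(node):
    if node.left is None:
        yield node
    else:
        yield from leaves_of(node.left)
        yield from leaves_of(node.right)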
6 Discussion
We described a new model for Bayesian agglomerative clustering. We used Kingman's coalescent
as our prior over trees, and derived efficient and easily implementable greedy and SMC inference
algorithms for the model. We showed empirically that our model gives better performance than other
agglomerative clustering algorithms, and gives good results on applications to document modeling
and phylolinguistics.
Our model is most similar in spirit to the Dirichlet diffusion tree of [2]. Both use infinitely exchangeable priors over trees. While [2] uses a fragmentation process for trees, our prior uses the
reverse?a coalescent process instead. This allows us to develop simpler inference algorithms than
those in [2] (we have not compared our model against the Dirichlet diffusion tree due to the complexity of implementing it). It will be interesting to consider the possibility of developing similar
agglomerative style algorithms for [2]. [3] also describes a hierarchical clustering model involving
a prior over trees, but his prior is not infinitely exchangeable. [5] uses tree-consistent partitions to
model relational data; it would be interesting to apply our approach to their setting. Another related
work is the Bayesian hierarchical clustering of [6], which uses an agglomerative procedure returning
a tree structured approximate posterior for a Dirichlet process mixture model. As opposed to our
work [6] uses a flat mixture model and does not have a notion of distributions over trees.
There are a number of unresolved issues with our work. Firstly, our algorithms take O(n^3) computation time, except for Greedy-Rate1 which takes O(n^2) time. Among the greedy algorithms we see
that there are no discernible differences in quality of approximation, thus we recommend Greedy-Rate1. It would be interesting to develop SMC algorithms with O(n^2) runtime, and compare these
against Greedy-Rate1 on real world problems. Secondly, there are unanswered statistical questions.
For example, since our prior is infinitely exchangeable, by de Finetti's theorem there is an underlying random distribution for which our observations are i.i.d. draws. What is this underlying random
distribution, and how do samples from this distribution look? We know the answer for at least a
simple case: if the Markov process is a mutation process with mutation rate θ/2 and new states are
drawn i.i.d. from a base distribution H, then the induced distribution is a Dirichlet process DP(θ, H)
[8]. Another issue is that of consistency: does the posterior over random distributions converge to
the true distribution as the number of observations grows? Finally, it would be interesting to generalize our approach to varying mutation rates, and to non-binary trees by using generalizations of
Kingman's coalescent called Λ-coalescents [15].
References
[1] R. O. Duda and P. E. Hart. Pattern Classification And Scene Analysis. Wiley and Sons, New York, 1973.
[2] R. M. Neal. Defining priors for distributions using Dirichlet diffusion trees. Technical Report 0104,
Department of Statistics, University of Toronto, 2001.
[3] C. K. I. Williams. A MCMC approach to hierarchical mixture modelling. In Advances in Neural Information Processing Systems, volume 12, 2000.
[4] C. Kemp, T. L. Griffiths, S. Stromsten, and J. B. Tenenbaum. Semi-supervised learning with trees. In
Advances in Neural Information Processing Systems, volume 16, 2004.
[5] D. M. Roy, C. Kemp, V. Mansinghka, and J. B. Tenenbaum. Learning annotated hierarchies from relational data. In Advances in Neural Information Processing Systems, volume 19, 2007.
[6] K. A. Heller and Z. Ghahramani. Bayesian hierarchical clustering. In Proceedings of the International
Conference on Machine Learning, volume 22, 2005.
[7] N. Friedman. Pcluster: Probabilistic agglomerative clustering of gene expression profiles. Technical
Report Technical Report 2003-80, Hebrew University, 2003.
[8] J. F. C. Kingman. On the genealogy of large populations. Journal of Applied Probability, 19:27-43, 1982.
Essays in Statistical Science.
[9] J. F. C. Kingman. The coalescent. Stochastic Processes and their Applications, 13:235-248, 1982.
[10] P. Fearnhead. Sequential Monte Carlo Method in Filter Theory. PhD thesis, Merton College, University
of Oxford, 1998.
[11] R. M. Neal. Annealed importance sampling. Technical Report 9805, Department of Statistics, University
of Toronto, 1998.
[12] A. McMahon and R. McMahon. Language Classification by Numbers. Oxford University Press, 2005.
[13] M. Haspelmath, M. Dryer, D. Gil, and B. Comrie, editors. The World Atlas of Language Structures.
Oxford University Press, 2005.
[14] H. Daumé III and L. Campbell. A Bayesian model for discovering typological implications. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, 2007.
[15] J. Pitman. Coalescents with multiple collisions. Annals of Probability, 27:1870-1902, 1999.
| 3266 |@word middle:1 version:1 sri:10 norm:1 duda:1 essay:1 tried:1 covariance:2 pick:2 arti:1 recursively:1 reaping:1 initial:1 series:1 efficacy:1 score:3 daniel:1 document:2 ours:1 past:2 spambase:5 outperforms:1 nepali:1 surprising:1 si:1 romance:7 portuguese:1 distant:2 partition:6 discernible:1 atlas:2 update:3 depict:1 resampling:1 stationary:1 greedy:27 leaf:7 half:1 breton:1 discovering:1 ith:6 node:1 toronto:2 firstly:1 simpler:1 five:2 constructed:1 direct:1 ik:3 tbi:1 descendant:1 consists:1 introduce:1 ica:1 expected:1 alspector:1 themselves:1 nor:1 priorpost:3 increasing:1 abound:1 becomes:1 estimating:2 underlying:4 panel:5 mass:2 circuit:1 what:1 interpreted:1 supplemental:1 unobserved:1 every:2 ti:20 tajik:1 runtime:1 returning:1 uk:1 exchangeable:7 unit:2 superiority:1 t1:2 before:2 understood:1 local:10 positive:1 sutton:1 mainstay:1 id:2 oxford:3 path:3 firing:1 merge:3 genus:2 kurdish:1 smc:19 bi:1 range:2 averaged:2 directed:2 restoring:1 implement:4 definite:1 x3:1 cornish:1 digit:1 procedure:3 evolving:3 significantly:2 confidence:1 word:3 ossetic:1 griffith:1 get:1 cannot:1 interior:1 yee:1 equivalent:1 map:1 missing:1 dz:2 maximizing:3 straightforward:1 williams:2 annealed:1 duration:7 independently:4 ergodic:1 simplicity:1 lineage:7 d1:1 pashto:1 his:1 population:4 handle:1 notion:1 unanswered:1 updated:1 annals:1 hierarchy:2 qh:1 densest:1 us:7 haploid:1 pa:1 trend:1 roy:2 approximated:2 expensive:1 mahanalobis:1 database:4 bottom:2 observed:7 mike:1 solved:3 region:2 mjolsness:1 contemporary:1 principled:1 mozer:1 mu:1 complexity:2 singh:1 predictive:6 upon:2 efficiency:1 completely:1 easily:3 joint:5 hopfield:1 chip:1 various:2 fewest:1 postpost:3 describe:3 london:1 monte:6 effective:1 sejnowski:3 zemel:1 choosing:2 whose:1 quite:1 larger:2 valued:2 amari:1 otherwise:1 coalescents:5 ability:1 statistic:2 panjabi:1 itself:1 advantage:3 rr:2 transistor:1 kurtosis:1 ucl:1 propose:1 product:2 unresolved:1 combining:2 description:1 srj:1 parent:3 cluster:8 produce:1 comparative:2 indic:8 leave:1 armenian:4 object:1 develop:4 ac:1 exemplar:1 nearest:3 school:1 mansinghka:1 entirety:1 involves:1 implies:2 nips12:1 quantify:1 qd:4 iou:2 greek:3 closely:1 merged:1 attribute:1 annotated:1 filter:1 stochastic:1 coalescent:50 material:1 implementing:1 require:1 clustered:1 generalization:1 secondly:2 genealogy:20 tati:1 koch:3 exp:9 equilibrium:1 predict:1 vary:1 resample:1 serbian:1 estimation:4 coalesced:1 currently:1 coolen:1 weighted:2 icelandic:1 mit:2 gaussian:5 fearnhead:1 avoid:1 kashmiri:1 exchangeability:1 varying:3 og:1 voltage:2 barto:1 derived:1 inherits:1 modelling:1 likelihood:9 polish:1 greedily:1 inference:10 baltic:2 dayan:2 integrated:2 lj:1 hidden:2 italian:1 ancestor:2 vlsi:1 hal3:1 semantics:1 upward:1 issue:2 overall:2 among:2 bulgarian:1 classification:2 art:1 bifurcation:1 marginal:5 construct:3 phonological:1 sampling:3 irish:1 x4:1 placing:1 identical:1 look:1 alter:1 peaked:1 t2:1 report:6 piecewise:1 rate1:14 few:2 stimulus:1 recommend:1 modern:1 individual:11 welsh:1 attractor:1 maintain:1 friedman:1 message:6 highly:1 investigate:1 possibility:1 evaluation:1 mixture:4 llr:2 recurse:1 chain:1 subtrees:2 accurate:1 iranian:6 implication:1 integral:2 tree:50 circle:1 re:2 recomputing:1 modeling:2 giles:1 disadvantage:1 goodness:1 restoration:1 cost:1 entry:7 subset:4 uniform:2 loo:3 celtic:5 answer:1 varies:1 synthetic:2 density:1 international:1 csail:1 ancestral:1 probabilistic:6 picking:1 infomax:1 together:1 moody:1 thesis:1 central:1 
opposed:1 possibly:1 henceforth:1 wishart:1 worse:1 coalesces:1 kingman:15 style:1 li:24 converted:1 de:1 centred:1 tli:7 root:2 view:1 performed:4 closed:1 break:1 characterizes:1 portion:1 start:3 recover:1 capability:1 mutation:5 contribution:1 formed:1 accuracy:3 variance:2 efficiently:1 t3:1 identify:1 generalize:1 bayesian:14 identification:1 carlo:6 multiplying:2 typological:2 eqd:1 acc:3 reach:1 sharing:2 danish:1 against:2 ocular:1 obvious:1 associated:1 di:5 rithms:1 sampled:1 newly:1 ppca:1 merton:1 dimensionality:1 improves:1 campbell:1 higher:2 supervised:1 yb:4 swedish:1 though:1 catalan:1 just:1 smola:1 until:4 dyb:1 replacing:1 lack:1 propagation:2 widespread:1 french:1 western:1 mode:1 quality:1 hal:1 russian:1 grows:1 utah:1 name:1 effect:2 normalized:2 true:1 evolution:1 analytically:2 marathi:1 symmetric:1 iteratively:2 neal:2 deal:1 attractive:1 adjacent:1 interchangeably:1 spanish:1 criterion:1 generalized:1 whye:1 occuring:2 demonstrate:1 complete:1 tn:3 performs:2 image:2 ruppin:1 novel:3 fi:4 common:1 multinomial:4 empirically:1 slovene:1 volume:4 association:1 organism:1 ukrainian:1 interpret:1 refer:1 consistency:1 similarly:1 particle:4 language:12 had:1 longer:1 base:1 dominant:1 j:1 posterior:17 own:1 brownian:7 recent:2 showed:1 apart:2 reverse:1 binary:5 arbitrarily:1 meeting:1 minimum:1 additional:1 fortunately:1 greater:1 prune:1 purity:3 determine:2 converge:1 semi:1 branch:3 full:4 persian:1 rj:1 reduces:1 multiple:1 technical:5 faster:1 hart:1 impact:1 prediction:1 involving:1 regression:1 rage:1 controller:1 dutch:1 iteration:7 normalization:2 kernel:1 cell:2 proposal:2 addressed:1 interval:1 grow:1 saad:1 operate:1 unlike:1 tri:7 induced:4 dri:1 maithili:1 spirit:1 jordan:3 noting:1 backwards:4 iii:2 easy:3 split:3 wn:2 variety:1 independence:1 affect:1 bengio:1 topology:1 suboptimal:1 reduce:1 t0:2 expression:1 pca:2 passed:1 linkage:2 york:1 nine:2 lazzaro:1 mirroring:1 generally:1 iterating:2 se:1 collision:1 amount:3 nonparametric:1 repeating:2 tenenbaum:2 concentrated:1 simplest:1 reduced:1 generate:1 stromsten:1 percentage:2 notice:2 gil:1 estimated:1 per:1 discrete:1 hyperparameter:5 shall:2 affected:1 express:1 finetti:1 terminology:1 drawn:7 preprocessed:1 neither:1 diffusion:9 sum:2 run:2 inverse:2 place:2 throughout:1 draw:7 ob:1 dy:2 followed:2 datum:1 fold:1 annual:1 badly:1 strength:2 occur:1 alive:3 worked:1 x2:1 flat:3 ri:24 n3:1 scene:1 ywteh:1 aspect:1 slj:1 structured:5 developing:2 according:1 department:2 across:4 describes:1 son:1 wi:7 evolves:4 gaelic:1 dryer:1 computationally:4 remains:1 describing:2 german:1 mechanism:1 count:1 know:1 tractable:1 available:4 apply:2 observe:1 hierarchical:16 alternative:1 lithuanian:1 coalescing:1 clustering:22 dirichlet:6 top:4 linguistics:1 newton:1 sanger:1 daum:2 germanic:8 giving:2 ghahramani:1 especially:1 murray:1 implied:1 question:1 coherently:1 occurs:1 spike:1 primary:1 dependence:2 traditional:2 amongst:1 dp:1 link:4 bengali:1 me:1 albanian:2 agglomerative:12 kemp:2 kst:2 length:1 retained:1 ratio:4 hebrew:1 romanian:1 equivalently:1 setup:1 unfortunately:1 trace:1 policy:1 unknown:2 perform:4 teh:1 allowing:2 av:1 observation:13 neuron:2 markov:16 finite:1 implementable:2 withheld:1 defining:2 extended:2 norwegian:1 hinton:2 relational:2 discovered:1 pair:18 dli:1 coalesce:7 coherent:1 merges:1 czech:1 nip:4 address:1 able:2 proceeds:1 below:3 pattern:1 mismatch:1 sparsity:1 belief:2 event:6 demanding:2 sinhala:1 latvian:1 hindi:1 representing:1 scheme:1 improve:3 axis:2 tresp:2 
prior:20 review:1 heller:1 marginalizing:1 embedded:2 fully:1 interesting:5 limitation:1 proportional:1 bhc:6 integrate:1 kti:3 agent:1 sufficient:2 consistent:1 editor:1 share:2 critic:1 genetics:3 succinctly:1 surprisingly:1 repeat:1 tease:2 english:1 eastern:1 wide:1 fall:1 taking:2 neighbor:4 sparse:1 tracing:1 benefit:1 pitman:1 dimension:3 xn:1 transition:4 valid:1 world:4 qn:5 computes:1 forward:2 stuck:1 refinement:1 jump:1 offit:1 sli:10 avg:4 author:2 reinforcement:1 mcmahon:2 approximate:3 lippmann:1 countably:2 gene:1 wals:5 xi:4 ancestry:1 continuous:4 latent:7 table:6 nature:1 reasonably:2 robust:1 forest:1 complex:2 european:3 constructing:1 domain:1 pk:1 hierarchically:4 main:1 hyperparameters:1 profile:1 n2:3 x1:2 site:1 fashion:3 gatsby:2 wiley:1 fails:1 droy:1 exponential:2 indo:2 bower:2 third:1 scot:1 down:3 theorem:1 appeal:1 dk:3 mnist:5 sequential:6 merging:2 importance:1 fragmentation:1 phd:1 subtree:4 conditioned:1 sorting:1 depicted:2 simply:3 saddle:1 infinitely:4 expressed:1 ordered:1 kxk:1 partially:1 monotonic:1 corresponds:1 extracted:1 conditional:1 identity:1 slighly:1 experimentally:1 infinite:3 specifically:1 uniformly:1 except:2 averaging:1 acting:1 slavic:7 called:5 total:3 pas:1 experimental:1 college:2 evaluate:1 mcmc:2 tested:1 |
2,499 | 3,267 | Unsupervised Feature Selection for Accurate
Recommendation of High-Dimensional Image Data
Sabri Boutemedjet
DI, Université de Sherbrooke
2500 boulevard de l'Université
Sherbrooke, QC J1K 2R1, Canada
[email protected]
Djemel Ziou
DI, Université de Sherbrooke
2500 boulevard de l'Université
Sherbrooke, QC J1K 2R1, Canada
[email protected]
Nizar Bouguila
CIISE, Concordia University
1515 Ste-Catherine Street West
Montreal, QC H3G 1T7, Canada
[email protected]
Abstract
Content-based image suggestion (CBIS) targets the recommendation of products
based on user preferences on the visual content of images. In this paper, we motivate both feature selection and model order identification as two key issues for
a successful CBIS. We propose a generative model in which the visual features
and users are clustered into separate classes. We identify the number of both user
and image classes with the simultaneous selection of relevant visual features using the message length approach. The goal is to ensure an accurate prediction
of ratings for multidimensional non-Gaussian and continuous image descriptors.
Experiments on a collected data have demonstrated the merits of our approach.
1 Introduction
Products in today's e-market are described using both visual and textual information. From consumer psychology, the visual information has been recognized as an important factor that influences
the consumer's decision making and has an important power of persuasion [4]. Furthermore, it is
well recognized that the consumer choice is also influenced by the external environment or context
such as the time and location [4]. For example, a consumer could express an information need
during a travel that is different from the situation when she or he is working or even at home.
"Content-Based Image Suggestion" (CBIS) [4] motivates the modeling of user preferences with
respect to visual information under the influence of the context. Therefore, CBIS aims at the suggestion of products whose relevance is inferred from the history of users in different contexts on
images of the previously consumed products. The domains considered by CBIS are a set of users
U = {1, 2, \ldots, N_u}, a set of visual documents V = {\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_{N_v}}, and a set of possible contexts E = {1, 2, \ldots, N_e}. Each \vec{v}_k is an arbitrary descriptor (visual, textual, or categorical) used
to represent images or products. In this work, we consider an image as a D-dimensional vector
\vec{v} = (v_1, v_2, \ldots, v_D). The visual features may be local such as interest points or global such as
color, texture, or shape. The relevance is expressed explicitly on an ordered voting (or rating) scale
defined as R = {r_1, r_2, \ldots, r_{N_r}}. For example, the five-star scale (i.e. N_r = 5) used by Amazon allows consumers to give different degrees of appreciation. The history of each user u \in U is defined
as D^u = {\langle u, e^{(j)}, \vec{v}^{(j)}, r^{(j)} \rangle \mid e^{(j)} \in E, \vec{v}^{(j)} \in V, r^{(j)} \in R, j = 1, \ldots, |D^u|}.
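To make the notation concrete, a user history can be stored as a flat list of rating events; the sketch below is an illustrative assumption (the class and field names are not from the paper).

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Observation:
    """One tuple <u, e, v, r> from a user history D^u."""
    user: int             # index of the user u (0-based in code)
    context: int          # index of the context e
    features: np.ndarray  # D-dimensional visual descriptor of the image
    rating: int           # position on the ordered scale {r_1, ..., r_{N_r}}

# The training set D is the union of all user histories D^u.
D = [Observation(user=0, context=1, features=np.full(4, 0.1), rating=3)]
```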
Figure 1: The VCC-FMM identifies like-mindedness from similar appreciations on similar images
represented in 3-dimensional space. Notice the inter-relation between the number of image clusters
and the considered feature subset.
In literature, the modeling of user preferences has been addressed mainly within collaborative filtering (CF) and content-based filtering (CBF) communities. On the one hand, CBF approaches [12]
build a separate model of "liked" and "disliked" discrete data (word features) from each D^u taken
individually. On the other hand, CF approaches predict the relevance of a given product for a given
user based on the preferences provided by a set of "like-minded" (similar tastes) users. The data set
used by CF is the user-product matrix (\bigcup_{u=1}^{N_u} D^u), which is discrete since each product is represented
by a categorical index. The Aspect model [7] and the flexible mixture model (FMM) [15] are examples of some model-based CF approaches. Recently, the authors in [4] have proposed a statistical
model for CBIS which uses both visual and contextual information in modeling user preferences
with respect to multidimensional non Gaussian and continuous data. Users with similar preferences
are considered in [4] as those who appreciated similar images with similar degrees. Therefore, instead of considering products as categorical variables (CF), visual documents are represented by
a richer visual information in the form of a vector of visual features (texture, shape, and interest
points). The similarity between images and between user preferences is modeled in [4] through a
single graphical model which clusters users and images separately into homogeneous groups in a
similar way to the flexible mixture model (FMM) [15]. In addition, since image data are generally
non-Gaussian [1], class-conditional distributions of visual features are assumed Dirichlet densities.
By this way, the like-mindedness in user preferences is captured at the level of visual features.
Statistical models for CBIS are useful tools in modeling for many reasons. First, once the model is
learned from training data (the union of user histories), it can be used to "suggest" unknown (possibly
unrated) images efficiently, i.e. little effort is required at the prediction phase. Second, the model can
be updated from new data (images or ratings) in an online fashion in order to handle the changes in
either image clusters and/or user preferences. Third, model selection approaches can be employed
to identify "without supervision" both numbers of user preferences and image clusters (i.e. model
order) from the statistical properties of the data. It should be stressed that the unsupervised selection
of the model order was not addressed in CF/CBF literature. Indeed, the model order in many well-
founded statistical models such as the Aspect model [7] or FMM [15] was set "empirically" as a
compromise between the model's complexity and the accuracy of prediction, but not from the data.
From an "image collection modeling" point of view, the work in [4] has focused on modeling user
preferences with respect to non-Gaussian image data. However, since CBIS employs generally highdimensional image descriptors, then the problem of modeling accurately image collections needs to
be addressed in order to overcome the curse of dimensionality and provide accurate suggestions.
Indeed, the presence of many irrelevant features degrades substantially the performance of the modeling and prediction [6] in addition to the increase of the computational complexity. To achieve a
better modeling, we consider feature selection and extraction as another "key issue" for CBIS. In
the literature [6], the process of feature selection in mixture models has not received as much attention
as in supervised learning. The main reason is the absence of class labels that may guide the selection
process [6]. In this paper, we address the issue of feature selection in CBIS through a new generative
model which we call Visual Content Context-aware Flexible Mixture Model (VCC-FMM). Due to
the problem of the inter-relation between feature subsets and the model order i.e. different feature
subsets correspond to different natural groupings of images, we propose to learn the VCC-FMM
from unlabeled data using the Minimum Message Length (MML) approach [16]. The next Section
details the VCC-FMM model with an integrated feature selection. After that, we discuss the identification of the model order using the MML approach in Section 3. Experimental results are presented
in Section 4. Finally, we conclude this paper by a summary of the work.
2 The Visual Content Context Flexible Mixture Model
The data set D used to learn a CBIS system is the union of all user histories, i.e. D = \bigcup_{u \in U} D^u. From
this data set we model both like-mindedness shared by user groups as well as the visual and semantic
similarity between images [4]. For that end, we introduce two latent variables z and c to label each
observation < u, e, v, r > with information about user classes and image classes, respectively.
In order to make predictions on unseen images, we need to model the joint event p(\vec{v}, r, u, e) = \sum_{z,c} p(\vec{v}, r, u, e, z, c). Then, the rating r for a given user u, context e and a visual document \vec{v} can be
predicted on the basis of probabilities p(r|u, e, v) that can be derived by conditioning the generative
model p(u, e, v, r). We notice that the full factorization of p(v , r, u, e, z, c) using the chain rule
leads to quantities with a huge number of parameters which are difficult to interpret in terms of the
data [4]. To overcome this problem, we make use of some conditional independence assumptions
that constitute our statistical approximation of the joint event p(v , r, u, e). These assumptions are
illustrated by the graphical representation of the model in figure 2. Let K and M be the number of
user classes and images classes respectively, an initial model for CBIS can be derived as [4]:
$$p(\vec{v}, r, u, e) = \sum_{z=1}^{K} \sum_{c=1}^{M} p(z)\, p(c)\, p(u|z)\, p(e|z)\, p(\vec{v}|c)\, p(r|z, c) \quad (1)$$
The quantities p(z) and p(c) denote the a priori weights of user and image classes. p(u|z) and p(e|z)
denote the likelihood of a user and a context, respectively, to belong to user class z. p(r|z, c) is the
probability to sample a rating for a given user class and image class. All these quantities are modeled
from discrete data. On the other hand, image descriptors are high-dimensional, continuous and
generally non Gaussian data [1]. Thus, the distribution of class-conditional densities p(v |c) should
be modeled carefully in order to capture efficiently the added-value of the visual information. In this
work, we assume that p(v |c) is a Generalized Dirichlet distribution (GDD) which is more appropriate
than other distributions such as the Gaussian or Dirichlet distributions in modeling image collections
[1]. This distribution has a more general covariance structure and provides multiple shapes. The distribution of the c-th component, \vec{\theta}_c^*, is given by equation (2); the * superscript is used to denote the unknown true GDD distribution.
$$p(\vec{v} \mid \vec{\theta}_c^*) = \prod_{l=1}^{D} \frac{\Gamma(\alpha_{cl}^* + \beta_{cl}^*)}{\Gamma(\alpha_{cl}^*)\, \Gamma(\beta_{cl}^*)}\, v_l^{\alpha_{cl}^* - 1} \Big(1 - \sum_{k=1}^{l} v_k\Big)^{\gamma_{cl}^*} \quad (2)$$
where \sum_{l=1}^{D} v_l < 1 and 0 < v_l < 1 for l = 1, \ldots, D; \gamma_{cl}^* = \beta_{cl}^* - \alpha_{c,l+1}^* - \beta_{c,l+1}^* for l = 1, \ldots, D-1, and \gamma_{cD}^* = \beta_{cD}^* - 1. In equation (2) we have set \vec{\theta}_c^* = (\alpha_{c1}^*, \beta_{c1}^*, \ldots, \alpha_{cD}^*, \beta_{cD}^*).
Figure 2: Graphical representation of VCC-FMM.
From the mathematical properties of the GDD, we can transform, using a geometric transformation, the data point \vec{v} into another data point \vec{x} = (x_1, \ldots, x_D) with independent features, without loss of information [1]. In addition, each x_l of \vec{x} generated by the c-th component follows a Beta distribution p_b(\cdot \mid \theta_{cl}^*) with parameters \theta_{cl}^* = (\alpha_{cl}^*, \beta_{cl}^*), which leads to p(\vec{x} \mid \vec{\theta}_c^*) = \prod_{l=1}^{D} p_b(x_l \mid \theta_{cl}^*). The independence between the x_l makes the estimation of a GDD very efficient, i.e. D estimations of univariate Beta
distributions without loss of accuracy. However, even with independent features, the unsupervised
identification of image clusters based on high-dimensional descriptors remains a hard problem due
to the omnipresence of noisy, redundant and uninformative features [6] that degrade the accuracy of
the modeling and prediction. We consider feature selection and extraction as a "key" methodology
in order to remove that kind of features in our modeling. Since the x_l are independent, we can
extract "relevant" features in the representation space X. However, we need some definition of
a feature's relevance. From figure 1, four well-separated image clusters can be identified from only
two relevant features 1 and 2 which are multimodal and influenced by class labels. On the other
hand, feature 3 is unimodal (i.e. irrelevant) and can be approximated by a single Beta distribution
p_b(\cdot \mid \xi_l) common to all components. This definition of a feature's relevance has been motivated in
unsupervised learning [2][9]. Let \vec{\phi} = (\phi_1, \ldots, \phi_D) be a set of missing binary variables denoting
the relevance of all features; \phi_l is set to 1 when the l-th feature is relevant and 0 otherwise. The
"true" Beta distribution \theta_{cl}^* can be approximated as [2][9]:

$$p(x_l \mid \theta_{cl}^*, \phi_l) \simeq p_b(x_l \mid \theta_{cl})^{\phi_l}\, p_b(x_l \mid \xi_l)^{1 - \phi_l} \quad (3)$$

By considering each \phi_l as a Bernoulli variable with parameters p(\phi_l = 1) = \varepsilon_{l1} and p(\phi_l = 0) = \varepsilon_{l2} (\varepsilon_{l1} + \varepsilon_{l2} = 1), the distribution p(x_l \mid \theta_{cl}^*) can be obtained after marginalizing over \phi_l [9] as p(x_l \mid \theta_{cl}^*) \simeq \varepsilon_{l1}\, p_b(x_l \mid \theta_{cl}) + \varepsilon_{l2}\, p_b(x_l \mid \xi_l). The VCC-FMM model is given by equation (4). We notice that both models [3][4] are special cases of VCC-FMM.

$$p(\vec{x}, r, u, e) = \sum_{z=1}^{K} \sum_{c=1}^{M} p(z)\, p(u|z)\, p(e|z)\, p(c)\, p(r|z, c) \prod_{l=1}^{D} \big[\varepsilon_{l1}\, p_b(x_l \mid \theta_{cl}) + \varepsilon_{l2}\, p_b(x_l \mid \xi_l)\big] \quad (4)$$
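As an illustrative sketch (not the authors' implementation), equation (4) can be evaluated directly with Beta densities from scipy. The parameter dictionary and its key names below are assumptions made for the example; p(r | u, e, \vec{x}) is then obtained by normalizing exp(log_joint) over the N_r possible ratings.

```python
import numpy as np
from scipy.stats import beta
from scipy.special import logsumexp

def log_joint(x, u, e, r, p):
    """log p(x, r, u, e) of eq. (4) for one observation.

    Hypothetical parameter layout:
      p["pz"] (K,), p["pc"] (M,), p["pu_z"] (K, Nu), p["pe_z"] (K, Ne),
      p["pr_zc"] (K, M, Nr), p["eps1"] (D,),
      p["a_rel"], p["b_rel"] (M, D), p["a_irr"], p["b_irr"] (D,).
    """
    # Per-feature relevance mixture: eps_l1 p_b(x_l|theta_cl) + eps_l2 p_b(x_l|xi_l)
    rel = p["eps1"] * beta.pdf(x, p["a_rel"], p["b_rel"])          # (M, D)
    irr = (1.0 - p["eps1"]) * beta.pdf(x, p["a_irr"], p["b_irr"])  # (D,)
    log_px_c = np.log(rel + irr).sum(axis=1)                       # (M,)
    # Mixture over user classes z and image classes c.
    log_w = (np.log(p["pz"])[:, None] + np.log(p["pc"])[None, :]
             + np.log(p["pu_z"][:, u])[:, None]
             + np.log(p["pe_z"][:, e])[:, None]
             + np.log(p["pr_zc"][:, :, r]))                        # (K, M)
    return logsumexp(log_w + log_px_c[None, :])
```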
3 A Unified Objective for Model and Feature Selection using MML
We denote by \vec{\theta}_{\pi}^{A} the parameter vector of the multinomial distribution of any discrete variable A conditioned on its parent \pi in VCC-FMM (see figure 2). We have A \mid \pi = \bar{\pi} \sim \mathrm{Multi}(1; \vec{\theta}_{\bar{\pi}}^{A}), where \theta_{\bar{\pi} a}^{A} = p(A = a \mid \pi = \bar{\pi}) and \sum_{a} \theta_{\bar{\pi} a}^{A} = 1. Also, we employ the superscripts \theta and \xi to denote the parameters of the Beta distributions of relevant and irrelevant components, respectively, i.e. \theta_{cl} = (\alpha_{cl}^{\theta}, \beta_{cl}^{\theta}) and \xi_l = (\alpha_l^{\xi}, \beta_l^{\xi}). The set \Theta of all VCC-FMM parameters is defined by \vec{\theta}_z^{U}, \vec{\theta}_z^{E}, \vec{\theta}_{zc}^{R}, \vec{\varepsilon}_l, \vec{\theta}^{Z}, \vec{\theta}^{C} and \theta_{cl}, \xi_l. The log-likelihood of a data set of N independent and identically distributed observations D = \{\langle u^{(i)}, e^{(i)}, \vec{x}^{(i)}, r^{(i)} \rangle \mid i = 1, \ldots, N,\; u^{(i)} \in U,\; e^{(i)} \in E,\; \vec{x}^{(i)} \in X,\; r^{(i)} \in R\} is given by:
$$\log p(D \mid \Theta) = \sum_{i=1}^{N} \log \sum_{z=1}^{K} \sum_{c=1}^{M} p(z)\, p(c)\, p(u^{(i)}|z)\, p(e^{(i)}|z)\, p(r^{(i)}|z, c) \prod_{l=1}^{D} \big[\varepsilon_{l1}\, p_b(x_l^{(i)} \mid \theta_{cl}) + \varepsilon_{l2}\, p_b(x_l^{(i)} \mid \xi_l)\big] \quad (5)$$
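Given the hypothetical log_joint sketch above, the log-likelihood of equation (5) is simply a sum over the observations:

```python
def log_likelihood(data, p):
    """log p(D | Theta) of eq. (5); data is a list of Observation records."""
    return sum(log_joint(ob.features, ob.user, ob.context, ob.rating, p)
               for ob in data)
```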
The maximum likelihood (ML) approach, which optimizes equation (5) w.r.t. \Theta, is not appropriate
for learning VCC-FMM since both K and M are unknown. In addition, the likelihood increases
monotonically with the number of components and favors lower dimensions [5]. To overcome these
problems, we define a message length objective [16] for both the estimation of \Theta and the identification
of K and M using MML [9][2]. This objective incorporates, in addition to the log-likelihood, a
penalty term which encodes the data to penalize complex models:
$$\mathrm{MML}(K, M) = -\log p(\Theta) + \frac{1}{2}\log|I(\Theta)| + \frac{s}{2}\Big(1 + \log\frac{1}{12}\Big) - \log p(D \mid \Theta) \quad (6)$$
In equation (6), |I(\Theta)|, p(\Theta), and s denote the Fisher information, the prior distribution, and the total number of parameters, respectively. The Fisher information of a parameter is the expectation of the second derivatives of the minus log-likelihood with respect to that parameter. It is common sense to assume independence among the different groups of parameters, which factorizes both |I(\Theta)| and p(\Theta) over the Fisher information and prior distribution of the different groups of parameters, respectively. We approximate the Fisher information of the VCC-FMM from the complete likelihood, which assumes knowledge of the values of the hidden variables for each observation \langle u^{(i)}, e^{(i)}, \vec{x}^{(i)}, r^{(i)} \rangle \in D. The Fisher information of \theta_{cl} and \xi_l can be computed by following a methodology similar to [1]. Also, we use the result found in [8] in computing the Fisher information of \vec{\theta}_{\bar{\pi}}^{A} of a discrete variable A with N_A different values in a data set of N observations. It is given by |I(\vec{\theta}_{\bar{\pi}}^{A})| = (N\, p(\pi = \bar{\pi}))^{N_A - 1} / \prod_{a=1}^{N_A} \theta_{\bar{\pi} a}^{A} [8], where p(\pi = \bar{\pi}) is the marginal probability of the parent \pi. The graphical representation of VCC-FMM does not involve variable ancestors (parents of parents). Therefore, the marginal probabilities p(\pi = \bar{\pi}) are simply the parameters of the multinomial distribution of the parent variable. For example, |I(\vec{\theta}_{zc}^{R})| is computed as |I(\vec{\theta}_{zc}^{R})| = (N\, \theta_c^C\, \theta_z^Z)^{N_r - 1} / \prod_{r=1}^{N_r} \theta_{zcr}^{R}. In case of complete ignorance, it is common to employ the Jeffreys prior for the different groups of parameters. Replacing p(\Theta) and I(\Theta) in (6), and after discarding the first-order terms, the MML objective is given by:
$$\mathrm{MML}(K, M) = \frac{N_p}{2}\log N + M\sum_{l=1}^{D}\log\varepsilon_{l1} + \sum_{l=1}^{D}\log\varepsilon_{l2} + \frac{1}{2}N_p^Z\sum_{z=1}^{K}\log\theta_z^Z + \frac{1}{2}(N_r - 1)\sum_{c=1}^{M}\log\theta_c^C - \log p(D \mid \Theta) \quad (7)$$
with N_p = 2D(M + 1) + K(N_u + N_e - 2) + MK(N_r - 1) and N_p^Z = N_r + N_u + N_e - 3. For
fixed values of K, M and D, the minimization of the MML objective with respect to \Theta is equivalent to
a maximum a posteriori (MAP) estimate with the following improper Dirichlet priors [9]:
$$p(\vec{\theta}^C) \propto \prod_{c=1}^{M} (\theta_c^C)^{-\frac{N_r - 1}{2}}, \qquad p(\vec{\theta}^Z) \propto \prod_{z=1}^{K} (\theta_z^Z)^{-\frac{N_p^Z}{2}}, \qquad p(\vec{\varepsilon}_1, \ldots, \vec{\varepsilon}_D) \propto \prod_{l=1}^{D} \varepsilon_{l1}^{-M}\, \varepsilon_{l2}^{-1} \quad (8)$$
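Once the log-likelihood is available, the objective of equation (7) is cheap to evaluate for fixed K, M and D; the sketch below reuses the hypothetical parameter layout of the earlier snippets.

```python
import numpy as np

def mml_objective(data, p, K, M, D, Nu, Ne, Nr):
    """Message length of eq. (7); lower values indicate a better model."""
    N = len(data)
    Np = 2 * D * (M + 1) + K * (Nu + Ne - 2) + M * K * (Nr - 1)
    NpZ = Nr + Nu + Ne - 3
    eps1 = p["eps1"]
    penalty = (0.5 * Np * np.log(N)
               + M * np.log(eps1).sum()        # M * sum_l log eps_l1
               + np.log(1.0 - eps1).sum()      # sum_l log eps_l2
               + 0.5 * NpZ * np.log(p["pz"]).sum()
               + 0.5 * (Nr - 1) * np.log(p["pc"]).sum())
    return penalty - log_likelihood(data, p)
```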
3.1 Estimation of parameters
We optimize the MML of the data set using the Expectation-Maximization (EM) algorithm in order to estimate the parameters. In the E-step, the joint posterior probabilities of the latent variables given the observations are computed as Q_{zci} = p(z, c \mid u^{(i)}, e^{(i)}, \vec{x}^{(i)}, r^{(i)}, \hat{\Theta}):

$$Q_{zci} = \frac{\hat\theta_z^Z\, \hat\theta_c^C\, \hat\theta_{zu^{(i)}}^U\, \hat\theta_{ze^{(i)}}^E\, \hat\theta_{zcr^{(i)}}^R \prod_{l=1}^{D}\big(\hat\varepsilon_{l1}\, p_b(x_l^{(i)} \mid \hat\theta_{cl}) + \hat\varepsilon_{l2}\, p_b(x_l^{(i)} \mid \hat\xi_l)\big)}{\sum_{z,c} \hat\theta_z^Z\, \hat\theta_c^C\, \hat\theta_{zu^{(i)}}^U\, \hat\theta_{ze^{(i)}}^E\, \hat\theta_{zcr^{(i)}}^R \prod_{l=1}^{D}\big(\hat\varepsilon_{l1}\, p_b(x_l^{(i)} \mid \hat\theta_{cl}) + \hat\varepsilon_{l2}\, p_b(x_l^{(i)} \mid \hat\xi_l)\big)} \quad (9)$$
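A vectorized sketch of this E-step, reusing the imports and hypothetical parameter layout from the snippet after equation (4); normalization is done after subtracting the maximum for numerical stability:

```python
def e_step(ob, p):
    """Posterior Q_zci = p(z, c | u, e, x, r) of eq. (9), as a (K, M) array."""
    rel = p["eps1"] * beta.pdf(ob.features, p["a_rel"], p["b_rel"])
    irr = (1.0 - p["eps1"]) * beta.pdf(ob.features, p["a_irr"], p["b_irr"])
    log_q = (np.log(p["pz"])[:, None] + np.log(p["pc"])[None, :]
             + np.log(p["pu_z"][:, ob.user])[:, None]
             + np.log(p["pe_z"][:, ob.context])[:, None]
             + np.log(p["pr_zc"][:, :, ob.rating])
             + np.log(rel + irr).sum(axis=1)[None, :])
    q = np.exp(log_q - log_q.max())
    return q / q.sum()
```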
In the M-step, the parameters are updated using the following equations:

$$\hat\theta_z^Z = \frac{\max\big(\sum_i \sum_c Q_{zci} - \frac{N_p^Z}{2},\, 0\big)}{\sum_z \max\big(\sum_i \sum_c Q_{zci} - \frac{N_p^Z}{2},\, 0\big)}, \qquad \hat\theta_c^C = \frac{\max\big(\sum_i \sum_z Q_{zci} - \frac{N_r - 1}{2},\, 0\big)}{\sum_c \max\big(\sum_i \sum_z Q_{zci} - \frac{N_r - 1}{2},\, 0\big)} \quad (10)$$

$$\hat\theta_{zu}^U = \frac{\sum_{i: u^{(i)} = u} \sum_c Q_{zci}}{\sum_i \sum_c Q_{zci}}, \qquad \hat\theta_{ze}^E = \frac{\sum_{i: e^{(i)} = e} \sum_c Q_{zci}}{\sum_i \sum_c Q_{zci}}, \qquad \hat\theta_{zcr}^R = \frac{\sum_{i: r^{(i)} = r} Q_{zci}}{\sum_i Q_{zci}} \quad (11)$$

$$\frac{1}{\hat\varepsilon_{l1}} = 1 + \frac{\max\Big(\sum_{z,c,i} \frac{Q_{zci}\, \varepsilon_{l2}\, p_b(x_l^{(i)} \mid \xi_l)}{\varepsilon_{l1}\, p_b(x_l^{(i)} \mid \theta_{cl}) + \varepsilon_{l2}\, p_b(x_l^{(i)} \mid \xi_l)} - 1,\, 0\Big)}{\max\Big(\sum_{z,c,i} \frac{Q_{zci}\, \varepsilon_{l1}\, p_b(x_l^{(i)} \mid \theta_{cl})}{\varepsilon_{l1}\, p_b(x_l^{(i)} \mid \theta_{cl}) + \varepsilon_{l2}\, p_b(x_l^{(i)} \mid \xi_l)} - M,\, 0\Big)} \quad (12)$$
The parameters of the Beta distributions \theta_{cl} and \xi_l are updated using the Fisher scoring method, based on the first- and second-order derivatives of the MML objective [1].
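The max(., 0) operations in equation (10) are what prune superfluous components during EM: a class whose responsibility mass falls below the prior-induced threshold receives zero weight and disappears. A minimal sketch of this update, with the responsibilities of all N observations stacked into one array:

```python
def m_step_weights(Q, NpZ, Nr):
    """Pruning updates of eq. (10); Q has shape (N, K, M)."""
    mass_z = np.maximum(Q.sum(axis=(0, 2)) - NpZ / 2.0, 0.0)       # (K,)
    mass_c = np.maximum(Q.sum(axis=(0, 1)) - (Nr - 1) / 2.0, 0.0)  # (M,)
    # Zero-mass user/image classes are effectively removed from the model.
    return mass_z / mass_z.sum(), mass_c / mass_c.sum()
```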
4 Experiments
The benefits of using feature selection and the contextual information are evaluated by considering two variants, V-FMM and V-GD-FMM, in addition to the original VCC-FMM given by equation (4). V-FMM does not handle the contextual information and assumes \theta_{ze}^E constant for all e \in E. On the other hand, feature selection is not considered for V-GD-FMM, which is obtained by setting \varepsilon_{l1} = 1 and pruning the uninformative components \xi_l for l = 1, \ldots, D.
4.1 Data Set
We have collected ratings from 27 subjects who participated in the experiment (i.e. N_u = 27) during a period of three months. The participating subjects are graduate students in the faculty of science. Subjects received periodically (twice a day) a list of three images on which they assign relevance degrees expressed on a five-star rating scale (i.e. N_r = 5). We define the context as a combination of two attributes: location L = {in-campus, out-campus}, inferred from the Internet Protocol (IP) address of the subject, and time T = {weekday, weekend}, i.e. N_e = 4. A data set D of 13446 ratings is collected (N = 13446). We have used a collection of 4775 (i.e. N_v = 4775) images collected from Washington University [10] and collections of free photographs which we categorized
manually into 41 categories. For visual content characterization, we have employed both local and
global descriptors. For local descriptors, we use the 128-dimensional Scale Invariant Feature Transform (SIFT) [11] to represent image patches. We apply vector quantization to the SIFT descriptors and build a histogram for each image ("bag of visual words"). The size of the visual vocabulary
is 500. For global descriptors, we used the color correlogram for image texture representation, and
the edge histogram descriptor. Therefore, a visual feature vector is represented in a 540-dimensional
space (D = 540). We measure the accuracy of the prediction by the Mean Absolute Error (MAE)
which is the average of the absolute deviation between the actual and predicted ratings.
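For reference, the metric is straightforward to compute:

```python
import numpy as np

def mae(actual, predicted):
    """Mean Absolute Error between actual and predicted ratings."""
    return np.mean(np.abs(np.asarray(actual) - np.asarray(predicted)))
```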
4.2 First Experiment: Evaluating the influence of model order on the prediction accuracy
This experiment investigates the relationship between the assumed model order, defined by K and M, and the prediction accuracy of VCC-FMM. It should be noticed that the ground truth number of user classes K^* is not known for our data set D. We run this experiment on ground truth (artificial) data with known K and M. D^{GT} is sampled from the preferences P_1 and P_2 of the two most dissimilar subjects according to Pearson correlation coefficients [14]. We sample ratings for 100 simulated users from the preferences P_1 and P_2, only on images of four image classes. For each user, we generate 80 ratings (about 20 ratings per context). Therefore, the ground truth model order is K^* = 2 and M^* = 4. The choice of M^* is purely motivated by convenience of presentation, since similar performance was reported for higher values of M^*. We learn the VCC-FMM model using one half of D^{GT} for different choices of training and validation data. The model order defined by M = 15 and K = 15 is used to initialize the EM algorithm.
Figure 3(a) shows that both K and M have been identified correctly on D^{GT}, since the lowest MML
was reported for the model order defined by M = 4 and K = 2. The selection of the best model
order is important since it influences the accuracy of the prediction (MAE) as illustrated by Figure
3(b). It should be noticed that the over-estimation of M (M > M^*) leads to more errors than the over-estimation of K (K > K^*).
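The model-order search carried out in this experiment can be summarized by the loop below; fit_em is a hypothetical routine assumed to run EM to convergence from the given initialization and to return the fitted parameters together with the message length of equation (7).

```python
def select_model_order(data, candidates, fit_em):
    """Return the (K, M) pair with the lowest message length."""
    best = None
    for K, M in candidates:
        params, mml = fit_em(data, K, M)  # EM to convergence, e.g. from (15, 15)
        if best is None or mml < best[0]:
            best = (mml, K, M, params)
    return best  # (mml, K, M, params)
```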
4.3 Second Experiment: Comparison with state-of-the-art
The aim of this experiment is to measure the contribution of the visual information and the user's
context in making accurate predictions comparatively with some existing CF approaches. We make
comparisons with the Aspect model [7], Pearson Correlation (PCC)[14], Flexible Mixture Model
(FMM) [15], and User Rating Profile (URP) [13]. For accurate estimators, we learn the URP model
using Gibbs sampling. For the previous algorithms, we retained the model order that ensured the
lowest MAE.
Figure 3: MML and MAE curves for different model orders on D^{GT}. (a) MML; (b) MAE.
Table 1: Averaged MAE over 10 runs of the different algorithms on D

               PCC (baseline)  Aspect   FMM      URP      V-FMM    V-GD-FMM  VCC-FMM
  Avg MAE      1.327           1.201    1.145    1.116    0.890    0.754     0.646
  Deviation    0.040           0.051    0.036    0.042    0.034    0.027     0.014
  Improvement  0.00%           9.49%    13.71%   15.90%   32.94%   43.18%    55.84%
The first five columns of table 1 show the added value provided by the visual information comparatively with pure CF techniques. For example, the improvement in rating prediction reported by
V-FMM is 3.52% and 1.97% comparatively with FMM and URP, respectively. The algorithms (with
context information) shown in the last two columns have also improved the accuracy of the prediction comparatively with the others (at least 15.28%). This explains the importance of the contextual
information on user preferences. Feature selection is also important since VCC-FMM has reported
a better accuracy (14.45%) than V-GD-FMM. Furthermore, it is reported in figure 4(a) that VCC-FMM is less sensitive to data sparsity (number of ratings per user) than pure CF techniques. Finally, the average MAE provided by VCC-FMM remains under 25% for up to 30% of unrated images, as shown in Figure 4(b). We explain the
stability of the accuracy of VCC-FMM for data sparsity and new images by the visual information
since only cluster representatives need to be rated.
Figure 4: MAE curves with error bars on the data set D. (a) Data sparsity; (b) new images.
5 Conclusions
This paper has motivated theoretically and empirically the importance of both feature selection and
model order identification from unlabeled data as important issues in content-based image suggestion. Experiments on collected data also showed the importance of the visual information and the user's context in making accurate suggestions.
Acknowledgements
The completion of this research was made possible thanks to the Natural Sciences and Engineering Research Council of Canada (NSERC), Bell Canada's support through its Bell University Laboratories
R&D program and a start-up grant from Concordia University.
References
[1] N. Bouguila and D. Ziou. High-Dimensional Unsupervised Selection and Estimation of a Finite Generalized Dirichlet Mixture Model Based on Minimum Message Length. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(10):1716–1731, 2007.
[2] S. Boutemedjet, N. Bouguila, and D. Ziou. Unsupervised Feature and Model Selection for Generalized Dirichlet Mixture Models. In Proc. of the International Conference on Image Analysis and Recognition (ICIAR), pages 330–341. LNCS 4633, 2007.
[3] S. Boutemedjet and D. Ziou. Content-based Collaborative Filtering Model for Scalable Visual Document Recommendation. In Proc. of the IJCAI-2007 Workshop on Multimodal Information Retrieval, pages 11–18, 2007.
[4] S. Boutemedjet and D. Ziou. A Graphical Model for Context-Aware Visual Content Recommendation. IEEE Transactions on Multimedia, 10(1):52–62, 2008.
[5] J. G. Dy and C. E. Brodley. Feature Selection for Unsupervised Learning. Journal of Machine Learning Research, 5:845–889, 2004.
[6] I. Guyon and A. Elisseeff. An Introduction to Variable and Feature Selection. Journal of Machine Learning Research, 3:1157–1182, 2003.
[7] T. Hofmann. Latent Semantic Models for Collaborative Filtering. ACM Transactions on Information Systems, 22(1):89–115, 2004.
[8] P. Kontkanen, P. Myllymäki, T. Silander, H. Tirri, and P. Grünwald. On Predictive Distributions and Bayesian Networks. Statistics and Computing, 10(1):39–54, 2000.
[9] M. H. C. Law, M. A. T. Figueiredo, and A. K. Jain. Simultaneous Feature Selection and Clustering Using Mixture Models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(9), 2004.
[10] J. Li and J. Z. Wang. Automatic Linguistic Indexing of Pictures by a Statistical Modeling Approach. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(9):49–68, 2003.
[11] D. G. Lowe. Distinctive Image Features From Scale-Invariant Keypoints. International Journal of Computer Vision, 60(2):91–110, 2004.
[12] M. Pazzani, J. Muramatsu, and D. Billsus. Syskill and Webert: Identifying Interesting Web Sites. In Proc. of the 13th National Conference on Artificial Intelligence (AAAI), 1996.
[13] B. Marlin. Modeling User Rating Profiles for Collaborative Filtering. In Proc. of Advances in Neural Information Processing Systems 16 (NIPS), 2003.
[14] P. Resnick, N. Iacovou, M. Suchak, P. Bergstrom, and J. Riedl. GroupLens: An Open Architecture for Collaborative Filtering of Netnews. In Proc. of the ACM Conference on Computer Supported Cooperative Work, 1994.
[15] L. Si and R. Jin. Flexible Mixture Model for Collaborative Filtering. In Proc. of the 20th International Conference on Machine Learning (ICML), pages 704–711, 2003.
[16] C. Wallace. Statistical and Inductive Inference by Minimum Message Length. Information Science and Statistics. Springer, 2005.