The Blinded Bandit:
Learning with Adaptive Feedback
Ofer Dekel
Microsoft Research
Elad Hazan
Technion
Tomer Koren
Technion
[email protected]
[email protected]
[email protected]
Abstract
We study an online learning setting where the player is temporarily deprived of feedback each time it switches to a different action. Such a model of adaptive feedback naturally occurs in scenarios where the environment reacts to the player's actions and requires some time to recover and stabilize after the algorithm switches actions. This motivates a variant of the multi-armed bandit problem, which we call the blinded multi-armed bandit, in which no feedback is given to the algorithm whenever it switches arms. We develop efficient online learning algorithms for this problem and prove that they guarantee the same asymptotic regret as the optimal algorithms for the standard multi-armed bandit problem. This result stands in stark contrast to another recent result, which states that adding a switching cost to the standard multi-armed bandit makes it substantially harder to learn, and provides a direct comparison of how feedback and loss contribute to the difficulty of an online learning problem. We also extend our results to the general prediction framework of bandit linear optimization, again attaining near-optimal regret bounds.
1 Introduction
The adversarial multi-armed bandit problem [4] is a T-round prediction game played by a randomized player in an adversarial environment. On each round of the game, the player chooses an arm (also called an action) from some finite set, and incurs the loss associated with that arm. The player can choose the arm randomly, by choosing a distribution over the arms and then drawing an arm from that distribution. He observes the loss associated with the chosen arm, but he does not observe the loss associated with any of the other arms. The player's cumulative loss is the sum of all the loss values that he incurs during the game. To minimize his cumulative loss, the player must trade off exploration (trying different arms to observe their loss values) and exploitation (choosing a good arm based on historical observations).

The loss values are assigned by the adversarial environment before the game begins. Each of the loss values is constrained to be in [0, 1] but otherwise they can be arbitrary. Since the loss values are set beforehand, we say that the adversarial environment is oblivious to the player's actions.

The performance of a player strategy is measured in the standard way, using the game-theoretic notion of regret (formally defined below). Auer et al. [4] present a player strategy called EXP3, prove that it guarantees a worst-case regret of $O(\sqrt{T})$ on any oblivious assignment of loss values, and prove that this guarantee is the best possible. A sublinear upper bound on regret implies that the player's strategy improves over time and is therefore a learning strategy, but if this upper bound has a rate of $O(\sqrt{T})$ then the problem is called an easy¹ online learning problem.
¹The classification of online problems into easy vs. hard is borrowed from Antos et al. [2].
In this paper, we study a variant of the standard multi-armed bandit problem where the player is temporarily blinded each time he switches arms. In other words, if the player's current choice is different than his choice on the previous round then we say that he has switched arms; he incurs the loss as before, but he does not observe this loss, or any other feedback. On the other hand, if the player chooses the same arm that he chose on the previous round, he incurs and observes his loss as usual². We call this setting the blinded multi-armed bandit.
For example, say that the player's task is to choose an advertising campaign (out of k candidates) to reduce the frequency of car accidents. Even if a new advertising campaign has an immediate effect, the new accident rate can only be measured over time (since we must wait for a few accidents to occur), and the environment's reaction to the change cannot be observed immediately.
The blinded bandit setting can also be used to model problems where a switch introduces a temporary bias into the feedback, which makes this feedback useless. A good example is the well-known primacy and novelty effect [14, 15] that occurs in human-computer interaction. Say that we operate an online restaurant directory and the task is to choose the best user interface (UI) for our site (from a set of k candidates). The quality of a UI is measured by the time it takes the user to complete a successful interaction with our system. Whenever we switch to a new UI, we encounter a primacy effect: users are initially confused by the unfamiliar interface and interaction times artificially increase. In some situations, we may encounter the opposite, a novelty effect: a fresh new UI could intrigue users, increase their desire to engage with the system, and temporarily decrease interaction times. In both cases, feedback is immediately available, but each switch makes the feedback temporarily unreliable.
There are also cases where switching introduces a variance in the feedback, rather than a bias.
Almost any setting where the feedback is measured by a physical sensor, such as a photometer or a
digital thermometer, fits in this category. Most physical sensors apply a low-pass filter to the signal
they measure and a low-pass filter in the frequency domain is equivalent to integrating the signal
over a sliding window in the time domain. While the sensor may output an immediate reading, it
needs time to stabilize and return to an adequate precision.
The blinded bandit setting bears a close similarity to another setting called the adversarial multi-armed bandit with switching costs. In that setting, the player incurs an additional loss each time he switches arms. This penalty discourages the player from switching frequently. At first glance, it would seem that the practical problems described above could be formulated and solved as multi-armed bandit problems with switching costs, and one might question the need for our new blinded bandit setting. However, Dekel et al. [12] recently proved that the adversarial multi-armed bandit with switching costs is a hard online learning problem, which is a problem where the best possible regret guarantee is $\widetilde{\Theta}(T^{2/3})$. In other words, for any learning algorithm, there exists an oblivious setting of the loss values that forces a regret of $\widetilde{\Omega}(T^{2/3})$.
In this paper, we present a new algorithm for the blinded bandit setting and prove that it guarantees a regret of $O(\sqrt{T})$ on any oblivious sequence of loss values. In other words, we prove that the blinded bandit is surprisingly as easy as the standard multi-armed bandit setting, despite its close similarity to the hard multi-armed bandit with switching costs problem. Our result has a theoretical significance and a practical significance. Theoretically, it provides a direct comparison of how feedback and loss contribute to the difficulty of an online learning problem. Practically, it identifies a rich and important class of online learning problems that would seem to be a natural fit for the multi-armed bandit setting with switching costs, but are in fact much easier to learn. Moreover, to the best of our knowledge, our work is the first to consider online learning in a setting where the loss values are oblivious to the player's past actions but the feedback is adaptive.
We also extend our results and study a blinded version of the more general bandit linear optimization setting. The bandit linear optimization framework is useful for efficiently modeling problems of learning under uncertainty with extremely large, yet structured decision sets. For example, consider the problem of online routing in networks [5], where our task is to route a stream of packets between two nodes in a computer network. While there may be exponentially many paths between the two nodes, the total time it takes to send a packet is simply the sum of the delays on each edge in the path. If the route is switched in the middle of a long streaming transmission, the network protocol needs a while to find the new optimal transmission rate, and the delay of the first few packets after the switch can be arbitrary. This view on the packet routing problem demonstrates the need for a blinded version of bandit linear optimization.

²More generally, we could define a setting where the player is blinded for m rounds following each switch, but for simplicity we focus on m = 1.
The paper is organized as follows. In Section 2 we formalize the setting and lay out the necessary
definitions. Section 3 is dedicated to presenting our main result, which is an optimal algorithm for
the blinded bandit problem. In Section 4 we extend this result to the more general setting of bandit
linear optimization. We conclude in Section 5.
2 Problem Setting
To describe our contribution to this problem and its significance compared to previous work, we first define our problem setting more formally and give some background on the problem.

As mentioned above, the player plays a T-round prediction game against an adversarial environment. Before the game begins, the environment picks a sequence of loss functions $\ell_1, \dots, \ell_T : K \to [0, 1]$ that assigns loss values to arms from the set $K = \{1, \dots, k\}$. On each round t, the player chooses an arm $x_t \in K$, possibly at random, which results in a loss $\ell_t(x_t)$. In the standard multi-armed bandit setting, the feedback provided to the player at the end of round t is the number $\ell_t(x_t)$, whereas the other values of the function $\ell_t$ are never observed.
The player's expected cumulative loss at the end of the game equals $\mathbb{E}\big[\sum_{t=1}^{T} \ell_t(x_t)\big]$. Since the loss values are assigned adversarially, the player's cumulative loss is only meaningful when compared to an adequate baseline; we compare the player's cumulative loss to the cumulative loss of a fixed policy, which chooses the same arm on every round. Define the player's regret as
$$R(T) \;=\; \mathbb{E}\left[\sum_{t=1}^{T} \ell_t(x_t)\right] - \min_{x \in K} \sum_{t=1}^{T} \ell_t(x)\,. \tag{1}$$
Regret can be positive or negative. If R(T) = o(T) (namely, the regret is either negative or grows at most sublinearly with T), we say that the player is learning. Otherwise, if $R(T) = \Theta(T)$ (namely, the regret grows linearly with T), it indicates that the player's per-round loss does not decrease with time and therefore we say that the player is not learning.
In the blinded version of the problem, the feedback on round t, i.e. the number $\ell_t(x_t)$, is revealed to the player only if he chooses $x_t$ to be the same as $x_{t-1}$. On the other hand, if $x_t \neq x_{t-1}$, then the player does not observe any feedback. The blinded bandit game is summarized in Fig. 1.
Parameters: action set K, time horizon T
• Environment determines a sequence of loss functions $\ell_1, \dots, \ell_T : K \to [0, 1]$
• On each round t = 1, 2, . . . , T:
  1. Player picks an action $x_t \in K$ and suffers the loss $\ell_t(x_t) \in [0, 1]$
  2. If $x_t = x_{t-1}$, the number $\ell_t(x_t)$ is revealed as feedback to the player
  3. Otherwise, if $x_t \neq x_{t-1}$, the player gets no feedback from the environment

Figure 1: The blinded bandit game.
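To make the protocol of Fig. 1 concrete, the following minimal Python sketch simulates one run of the game against a fixed (oblivious) loss sequence. The `player` callable and the loss matrix are illustrative stand-ins of our own, not part of the formal setting.

import numpy as np

def play_blinded_bandit(player, losses):
    # losses: array of shape (T, k), fixed in advance by an oblivious adversary
    # player: callable mapping (feedback history, k) to the next arm index
    T, k = losses.shape
    history, prev_arm, total = [], None, 0.0
    for t in range(T):
        arm = player(history, k)
        total += losses[t, arm]
        if arm == prev_arm:
            history.append((t, arm, losses[t, arm]))  # no switch: loss revealed
        else:
            history.append((t, arm, None))            # switch: player is blinded
        prev_arm = arm
    return total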
Bandit Linear Optimization. In Section 4, we consider the more general setting of online linear optimization with bandit feedback [10, 11, 1]. In this problem, on round t of the game, the player chooses an action, possibly at random, which is a point $x_t$ in a fixed action set $K \subset \mathbb{R}^n$. The loss he suffers on that round is then computed by a linear function $\ell_t(x_t) = \ell_t \cdot x_t$, where $\ell_t \in \mathbb{R}^n$ is a loss vector chosen by the oblivious adversarial environment before the game begins. To ensure that the incurred losses are bounded, we assume that the loss vectors $\ell_1, \dots, \ell_T$ are admissible, that is, they satisfy $|\ell_t \cdot x| \le 1$ for all t and $x \in K$ (in other words, the loss vectors reside in the polar set of K). As in the multi-armed bandit problem, the player only observes the loss he incurred, and the full loss vector $\ell_t$ is never revealed to him. The player's performance is measured by his regret, as defined above in Eq. (1).
3 Algorithm
We recall the classic EXP3 algorithm for the standard multi-armed bandit problem, and specifically focus on the version presented in Bubeck and Cesa-Bianchi [6]. The player maintains a probability distribution over the arms, which we denote by $p_t \in \Delta(K)$ (where $\Delta(K)$ denotes the set of probability measures over K, which is simply the k-dimensional simplex when $K = \{1, 2, \dots, k\}$). Initially, $p_1$ is set to the uniform distribution $(\frac{1}{k}, \dots, \frac{1}{k})$. On round t, the player draws $x_t$ according to $p_t$, incurs and observes the loss $\ell_t(x_t)$, and applies the update rule
$$\forall\, x \in K, \quad p_{t+1}(x) \;\propto\; p_t(x) \cdot \exp\left(-\eta\, \frac{\ell_t(x_t)}{p_t(x_t)} \cdot \mathbb{1}_{x = x_t}\right).$$
EXP3 provides the following regret guarantee, which depends on the user-defined learning rate parameter η:

Theorem 1 (due to Auer et al. [4], taken from Bubeck and Cesa-Bianchi [6]). Let $\ell_1, \dots, \ell_T$ be an arbitrary loss sequence, where each $\ell_t : K \to [0, 1]$. Let $x_1, \dots, x_T$ be the random sequence of arms chosen by EXP3 (with learning rate $\eta > 0$) as it observes this sequence. Then,
$$R(T) \;\le\; \frac{\eta k T}{2} + \frac{\log k}{\eta}\,.$$
EXP3 cannot be used in the blinded bandit setting because the EXP3 update rule cannot be invoked on rounds where a switch occurs. Also, since switching actions $\Omega(T)$ times is, in general, required for obtaining the optimal $O(\sqrt{T})$ regret (see [12]), the player must avoid switching actions too frequently and often stick with the action that was chosen on the previous round. Due to the adversarial nature of the problem, randomization must be used in controlling the scheme of action switches.
We propose a variation on EXP3, which is presented in Algorithm 1. Our algorithm begins by drawing a sequence of independent Bernoulli random variables $b_0, b_1, \dots, b_{T+1}$ (i.e., such that $\mathbb{P}(b_t = 0) = \mathbb{P}(b_t = 1) = \frac{1}{2}$). This sequence determines the schedule of switches and updates for the entire game. The algorithm draws a new arm (and possibly switches) only on rounds where $b_{t-1} = 0$ and $b_t = 1$, and invokes the EXP3 update rule only on rounds where $b_t = 0$ and $b_{t+1} = 1$. Note that these two events can never co-occur. Specifically, the algorithm always invokes the update rule one round before the potential switch occurs. This ensures that the algorithm relies on the value of $\ell_t(x_t)$ only on non-switching rounds.
Algorithm 1: BLINDED EXP3

set $p_1 \leftarrow (\frac{1}{k}, \dots, \frac{1}{k})$, draw $x_0 \sim p_1$
draw $b_0, \dots, b_{T+1}$ i.i.d. unbiased Bernoullis
for t = 1, 2, . . . , T:
    if $b_{t-1} = 0$ and $b_t = 1$:    // possible switch
        draw $x_t \sim p_t$
    else:    // no switch
        set $x_t \leftarrow x_{t-1}$
    play arm $x_t$ and incur loss $\ell_t(x_t)$
    if $b_t = 0$ and $b_{t+1} = 1$:
        observe $\ell_t(x_t)$ and for all $x \in K$, update
            $w_{t+1}(x) \leftarrow p_t(x) \cdot \exp\big(-\eta\, \frac{\ell_t(x_t)}{p_t(x_t)} \cdot \mathbb{1}_{x = x_t}\big)$
        set $p_{t+1} \leftarrow w_{t+1} / \|w_{t+1}\|_1$
    else:
        set $p_{t+1} \leftarrow p_t$
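For reference, here is a short Python rendition of Algorithm 1; the array-based interface and variable names are ours, and the loss matrix is assumed to be fixed in advance by an oblivious adversary.

import numpy as np

def blinded_exp3(losses, eta, rng=None):
    # losses: oblivious loss matrix of shape (T, k); eta: learning rate
    rng = rng or np.random.default_rng(0)
    T, k = losses.shape
    p = np.full(k, 1.0 / k)                 # p_1 is uniform
    b = rng.integers(0, 2, size=T + 2)      # b_0, ..., b_{T+1}
    x = rng.choice(k, p=p)                  # x_0 ~ p_1
    total = 0.0
    for t in range(1, T + 1):
        if b[t - 1] == 0 and b[t] == 1:     # possible switch
            x = rng.choice(k, p=p)
        loss = losses[t - 1, x]             # play x_t and incur its loss
        total += loss
        if b[t] == 0 and b[t + 1] == 1:     # update round
            w = p.copy()
            w[x] *= np.exp(-eta * loss / p[x])
            p = w / w.sum()                 # p_{t+1} = w_{t+1} / ||w_{t+1}||_1
    return total

Since an update round requires $b_t = 0$ while a switch round requires $b_t = 1$, the loss value consumed by the update is always one the player actually observed.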
We set out to prove the following regret bound.

Theorem 2. Let $\ell_1, \dots, \ell_T$ be an arbitrary loss sequence, where each $\ell_t : K \to [0, 1]$. Let $x_1, \dots, x_T$ be the random sequence of arms chosen by Algorithm 1 as it plays the blinded bandit game on this sequence (with learning rate fixed to $\eta = \sqrt{\frac{2 \log k}{kT}}$). Then,
$$R(T) \;\le\; 6\sqrt{T k \log k}\,.$$
We prove Theorem 2 with the below sequence of lemmas. In the following, we let $\ell_1, \dots, \ell_T$ be an arbitrary loss sequence and let $x_1, \dots, x_T$ be the sequence of arms chosen by Algorithm 1 (with parameter $\eta > 0$). First, we define the set
$$S \;=\; \big\{ t \in [T] \,:\, b_t = 0 \text{ and } b_{t+1} = 1 \big\}\,.$$
In words, S is a random subset of [T] that indicates the rounds on which Algorithm 1 uses its feedback and applies the EXP3 update.
Lemma 1. For any $x \in K$, it holds that
$$\mathbb{E}\left[\sum_{t \in S} \ell_t(x_t) - \sum_{t \in S} \ell_t(x)\right] \;\le\; \frac{\eta k T}{8} + \frac{\log k}{\eta}\,.$$
Proof. For any concrete instantiation of $b_0, \dots, b_{T+1}$, the set S is fixed and the sequence $(\ell_t)_{t \in S}$ is an oblivious sequence of loss functions. Note that the steps performed by Algorithm 1 on the rounds indicated in S are precisely the steps that the standard EXP3 algorithm would perform if it were presented with the loss sequence $(\ell_t)_{t \in S}$. Therefore, Theorem 1 guarantees that
$$\mathbb{E}\left[\sum_{t \in S} \ell_t(x_t) - \sum_{t \in S} \ell_t(x) \,\Big|\, S \right] \;\le\; \frac{\eta k |S|}{2} + \frac{\log k}{\eta}\,.$$
Taking expectations on both sides of the above and noting that $\mathbb{E}[|S|] \le T/4$ proves the lemma.
Lemma 1 proves a regret bound that is restricted to the rounds indicated by S. The following lemma relates that regret to the total regret, on all T rounds.

Lemma 2. For any $x \in K$, we have
$$\mathbb{E}\left[\sum_{t=1}^{T} \ell_t(x_t) - \sum_{t=1}^{T} \ell_t(x)\right] \;\le\; 4\, \mathbb{E}\left[\sum_{t \in S} \ell_t(x_t) - \sum_{t \in S} \ell_t(x)\right] + \mathbb{E}\left[\sum_{t=1}^{T} \|p_t - p_{t-1}\|_1\right].$$
Proof. Using the definition of S, we have
$$\mathbb{E}\left[\sum_{t \in S} \ell_t(x)\right] = \sum_{t=1}^{T} \ell_t(x)\, \mathbb{E}[(1 - b_t) b_{t+1}] = \frac{1}{4} \sum_{t=1}^{T} \ell_t(x)\,. \tag{2}$$
Similarly, we have
$$\mathbb{E}\left[\sum_{t \in S} \ell_t(x_t)\right] = \sum_{t=1}^{T} \mathbb{E}\big[\ell_t(x_t)(1 - b_t) b_{t+1}\big]\,. \tag{3}$$
We focus on the t'th summand in the right-hand side above. Since $b_{t+1}$ is independent of $\ell_t(x_t)(1 - b_t)$, it holds that
$$\mathbb{E}\big[\ell_t(x_t)(1 - b_t) b_{t+1}\big] = \mathbb{E}[b_{t+1}]\, \mathbb{E}\big[\ell_t(x_t)(1 - b_t)\big] = \frac{1}{2}\, \mathbb{E}\big[\ell_t(x_t)(1 - b_t)\big]\,.$$
Using the law of total expectation, we get
$$\frac{1}{2}\, \mathbb{E}\big[\ell_t(x_t)(1 - b_t)\big] = \frac{1}{4}\, \mathbb{E}\big[\ell_t(x_t)(1 - b_t) \,\big|\, b_t = 0\big] + \frac{1}{4}\, \mathbb{E}\big[\ell_t(x_t)(1 - b_t) \,\big|\, b_t = 1\big] = \frac{1}{4}\, \mathbb{E}\big[\ell_t(x_t) \,\big|\, b_t = 0\big]\,.$$
If $b_t = 0$ then Algorithm 1 sets $x_t \leftarrow x_{t-1}$, so we have that $x_t = x_{t-1}$. Therefore, the above equals $\frac{1}{4}\mathbb{E}[\ell_t(x_{t-1}) \mid b_t = 0]$. Since $x_{t-1}$ is independent of $b_t$, this simply equals $\frac{1}{4}\mathbb{E}[\ell_t(x_{t-1})]$. Hölder's inequality can be used to upper bound
$$\mathbb{E}\big[\ell_t(x_t) - \ell_t(x_{t-1})\big] = \mathbb{E}\Big[\sum_{x \in K} \big(p_t(x) - p_{t-1}(x)\big)\, \ell_t(x)\Big] \;\le\; \mathbb{E}\big[\|p_t - p_{t-1}\|_1\big] \cdot \max_{x \in K} \ell_t(x)\,,$$
where we have used the fact that $x_t$ and $x_{t-1}$ are distributed according to $p_t$ and $p_{t-1}$ respectively (regardless of whether an update took place or not). Since it is assumed that $\ell_t(x) \in [0, 1]$ for all t and $x \in K$, we obtain
$$\frac{1}{4}\, \mathbb{E}\big[\ell_t(x_{t-1})\big] \;\ge\; \frac{1}{4}\, \mathbb{E}\big[\ell_t(x_t)\big] - \frac{1}{4}\, \mathbb{E}\big[\|p_t - p_{t-1}\|_1\big]\,.$$
Overall, we have shown that
$$\mathbb{E}\big[\ell_t(x_t)(1 - b_t) b_{t+1}\big] \;\ge\; \frac{1}{4}\, \mathbb{E}\big[\ell_t(x_t)\big] - \frac{1}{4}\, \mathbb{E}\big[\|p_t - p_{t-1}\|_1\big]\,.$$
Plugging this inequality back into Eq. (3) gives
$$\mathbb{E}\left[\sum_{t \in S} \ell_t(x_t)\right] \;\ge\; \frac{1}{4}\, \mathbb{E}\left[\sum_{t=1}^{T} \ell_t(x_t) - \sum_{t=1}^{T} \|p_t - p_{t-1}\|_1\right].$$
Summing the inequality above with the one in Eq. (2) concludes the proof.
Next, we prove that the probability distributions over arms do not change much on consecutive rounds of EXP3.

Lemma 3. The distributions $p_1, p_2, \dots, p_T$ generated by the BLINDED EXP3 algorithm satisfy $\mathbb{E}[\|p_{t+1} - p_t\|_1] \le 2\eta$ for all t.
Proof. Fix a round t; we shall prove the stronger claim that $\|p_{t+1} - p_t\|_1 \le 2\eta$ with probability 1. If no update occurred on round t and $p_{t+1} = p_t$, this holds trivially. Otherwise, we can use the triangle inequality to bound
$$\|p_{t+1} - p_t\|_1 \;\le\; \|p_{t+1} - w_{t+1}\|_1 + \|w_{t+1} - p_t\|_1\,,$$
with the vector $w_{t+1}$ as specified in Algorithm 1. Letting $W_{t+1} = \|w_{t+1}\|_1$ we have $p_{t+1} = w_{t+1}/W_{t+1}$, so we can rewrite the first term on the right-hand side above as
$$\|p_{t+1} - W_{t+1} \cdot p_{t+1}\|_1 = |1 - W_{t+1}| \cdot \|p_{t+1}\|_1 = 1 - W_{t+1} = \|p_t - w_{t+1}\|_1\,,$$
where the last equality follows by observing that $p_t \ge w_{t+1}$ entrywise, $\|p_t\|_1 = 1$ and $\|w_{t+1}\|_1 = W_{t+1}$. By the definition of $w_{t+1}$, the second term on the right-hand side above equals $p_t(x_t) \cdot \big(1 - e^{-\eta \ell_t(x_t)/p_t(x_t)}\big)$. Overall, we have
$$\|p_{t+1} - p_t\|_1 \;\le\; 2\, p_t(x_t) \cdot \left(1 - e^{-\eta \ell_t(x_t)/p_t(x_t)}\right).$$
Using the inequality $1 - \exp(-\alpha) \le \alpha$, we get $\|p_{t+1} - p_t\|_1 \le 2\eta\, \ell_t(x_t)$. The claim now follows from the assumption that $\ell_t(x_t) \in [0, 1]$.
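The claim of Lemma 3 is easy to check numerically. The following sketch, with arbitrary illustrative values of our own choosing, verifies that a single update of Algorithm 1 moves the distribution by at most $2\eta\,\ell_t(x_t)$ in $\ell_1$ distance.

import numpy as np

rng = np.random.default_rng(1)
k, eta = 10, 0.05
p = rng.dirichlet(np.ones(k))          # an arbitrary distribution p_t
x = rng.choice(k, p=p)                 # the played arm x_t
loss = rng.uniform()                   # an observed loss in [0, 1]

w = p.copy()
w[x] *= np.exp(-eta * loss / p[x])     # the BLINDED EXP3 update
p_next = w / w.sum()

tv = np.abs(p_next - p).sum()
assert tv <= 2 * eta * loss + 1e-12    # the bound established in the proof above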
We can now proceed to prove our regret bound.

Proof of Theorem 2. Combining the bounds of Lemmas 1–3 proves that for any fixed arm $x \in K$, it holds that
$$\mathbb{E}\left[\sum_{t=1}^{T} \ell_t(x_t) - \sum_{t=1}^{T} \ell_t(x)\right] \;\le\; \frac{\eta k T}{2} + \frac{4 \log k}{\eta} + 2\eta T \;\le\; 2\eta k T + \frac{4 \log k}{\eta}\,.$$
Specifically, the above holds for the best arm in hindsight. Setting $\eta = \sqrt{\frac{2 \log k}{kT}}$ proves the theorem.
4 Blinded Bandit Linear Optimization
In this section we extend our results to the setting of linear optimization with bandit feedback, formally defined in Section 2. We focus on the GEOMETRIC HEDGE algorithm [11], which was the first algorithm for the problem to attain the optimal $O(\sqrt{T})$ regret, and adapt it to the blinded setup. Our BLINDED GEOMETRIC HEDGE algorithm is detailed in Algorithm 2. The algorithm uses a mechanism similar to that of Algorithm 1 for deciding when to avoid switching actions. Following the presentation of [11], we assume that $K \subseteq [-1, 1]^n$ is finite and that the standard basis vectors $e_1, \dots, e_n$ are contained in K. Then, the set $E = \{e_1, \dots, e_n\}$ is a barycentric spanner of K [5] that serves the algorithm as an exploration basis. We denote the uniform distribution over E by $u_E$.
Algorithm 2: BLINDED GEOMETRIC HEDGE

Parameter: learning rate $\eta > 0$
let $q_1$ be the uniform distribution over K, and draw $x_0 \sim q_1$
draw $b_0, \dots, b_{T+1}$ i.i.d. unbiased Bernoullis
set $\gamma \leftarrow n^2 \eta$
for t = 1, 2, . . . , T:
    set $p_t \leftarrow (1 - \gamma)\, q_t + \gamma\, u_E$
    compute covariance $C_t \leftarrow \mathbb{E}_{x \sim p_t}[x x^\top]$
    if $b_{t-1} = 0$ and $b_t = 1$:    // possible switch
        draw $x_t \sim p_t$
    else:    // no switch
        set $x_t \leftarrow x_{t-1}$
    play arm $x_t$ and incur loss $\ell_t(x_t) = \ell_t \cdot x_t$
    if $b_t = 0$ and $b_{t+1} = 1$:
        observe $\ell_t(x_t)$ and let $\tilde{\ell}_t \leftarrow \ell_t(x_t) \cdot C_t^{-1} x_t$
        update $q_{t+1}(x) \propto q_t(x) \cdot \exp(-\eta\, \tilde{\ell}_t \cdot x)$
    else:
        set $q_{t+1} \leftarrow q_t$
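A Python sketch of Algorithm 2 follows, for a finite action set given as the rows of a matrix. The exploration basis is taken to be the standard basis, as in the presentation of [11]; the matrix interface, the variable names, and the use of a linear solver in place of an explicit inverse are our own choices.

import numpy as np

def blinded_geometric_hedge(K, loss_vectors, eta, rng=None):
    # K: (m, n) action matrix, assumed to contain e_1, ..., e_n among its rows
    # loss_vectors: (T, n) admissible loss vectors, fixed in advance
    # assumes eta is small enough that gamma = n^2 * eta < 1
    rng = rng or np.random.default_rng(0)
    m, n = K.shape
    T = loss_vectors.shape[0]
    gamma = n * n * eta
    q = np.full(m, 1.0 / m)
    # uniform distribution u_E over the exploration basis E = {e_1, ..., e_n}
    is_basis = np.array([(np.count_nonzero(r) == 1) and (r.sum() == 1) for r in K])
    u_E = is_basis / is_basis.sum()
    b = rng.integers(0, 2, size=T + 2)
    idx, total = rng.choice(m, p=q), 0.0
    for t in range(1, T + 1):
        p = (1 - gamma) * q + gamma * u_E
        C = (K * p[:, None]).T @ K                # covariance E_{x~p_t}[x x^T]
        if b[t - 1] == 0 and b[t] == 1:           # possible switch
            idx = rng.choice(m, p=p)
        loss = loss_vectors[t - 1] @ K[idx]
        total += loss
        if b[t] == 0 and b[t + 1] == 1:           # feedback round
            ell_hat = loss * np.linalg.solve(C, K[idx])   # estimator C_t^{-1} x_t
            q = q * np.exp(-eta * (K @ ell_hat))
            q /= q.sum()
    return total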
The main result of this section is an $O(\sqrt{T})$ upper bound on the expected regret of Algorithm 2.

Theorem 3. Let $\ell_1, \dots, \ell_T$ be an arbitrary sequence of linear loss functions, admissible with respect to the action set $K \subseteq \mathbb{R}^n$. Let $x_1, \dots, x_T$ be the random sequence of arms chosen by Algorithm 2 as it plays the blinded bandit game on this sequence, with learning rate fixed to $\eta = \sqrt{\frac{\log(nT)}{10 n T}}$. Then,
$$R(T) \;\le\; 4 n^{3/2} \sqrt{T \log(nT)}\,.$$
With minor modifications, our technique can also be applied to variants of the GEOMETRIC HEDGE algorithm (that differ by their exploration basis) for obtaining regret bounds with improved dependence on the dimension n. This includes the COMBAND algorithm [8], EXP2 with John's exploration [7], and the more recent version employing volumetric spanners [13].
We now turn to prove Theorem 3. Our first step is proving an analogue of Lemma 1, using the regret bound of the GEOMETRIC HEDGE algorithm proved by Dani et al. [11].

Lemma 4. For any $x \in K$, it holds that $\mathbb{E}\big[\sum_{t \in S} \ell_t(x_t) - \sum_{t \in S} \ell_t(x)\big] \le \frac{\eta n^2 T}{2} + \frac{n \log(nT)}{2\eta}$.
We proceed to prove that the distributions generated by Algorithm 2 do not change too quickly.

Lemma 5. The distributions $p_1, p_2, \dots, p_T$ produced by the BLINDED GEOMETRIC HEDGE algorithm (from which the actions $x_1, x_2, \dots, x_T$ are drawn) satisfy $\mathbb{E}[\|p_{t+1} - p_t\|_1] \le 4\eta\sqrt{n}$ for all t.

The proofs of both lemmas are omitted due to space constraints. We now prove Theorem 3.
Proof of Theorem 3. Notice that the bound of Lemma 2 is independent of the construction of the distributions $p_1, p_2, \dots, p_T$ and the structure of K, and thus applies to Algorithm 2 as well. Combining this bound with the results of Lemmas 4 and 5, it follows that for any fixed action $x \in K$,
$$\mathbb{E}\left[\sum_{t=1}^{T} \ell_t(x_t) - \sum_{t=1}^{T} \ell_t(x)\right] \;\le\; \frac{\eta n^2 T}{2} + \frac{n \log(nT)}{2\eta} + 4\eta\sqrt{n}\, T \;\le\; 5\eta n^2 T + \frac{n \log(nT)}{2\eta}\,.$$
Setting $\eta = \sqrt{\frac{\log(nT)}{10 n T}}$ proves the theorem.
5 Discussion and Open Problems
In this paper, we studied a new online learning scenario where the player receives feedback from the adversarial environment only when his action is the same as the one from the previous round, a setting that we named the blinded bandit. We devised an optimal algorithm for the blinded multi-armed bandit problem based on the EXP3 strategy, and used similar ideas to adapt the GEOMETRIC HEDGE algorithm to the blinded bandit linear optimization setting. In fact, a similar analysis can be applied to any online algorithm that does not change its underlying prediction distributions too quickly (in total variation distance).

In the practical examples given in the introduction, where each switch introduces a bias or a variance, we argued that the multi-armed bandit problem with switching costs is an inadequate solution, since it is unreasonable to solve an easy problem by reducing it to one that is substantially harder. Alternatively, one might consider simply ignoring the noise in the feedback after each switch and using a standard adversarial multi-armed bandit algorithm like EXP3 despite the bias or the variance. However, if we do that, the player's observed losses would no longer be oblivious (as the observed loss on round t would depend on $x_{t-1}$), and the regret guarantees of EXP3 would no longer hold³. Moreover, any multi-armed bandit algorithm with $O(\sqrt{T})$ regret can be forced to make $\Omega(T)$ switches [12], so the loss observed by the player could actually be non-oblivious in a constant fraction of the rounds, which would deteriorate the performance of EXP3.
Our setting might seem similar to the related problem of label-efficient prediction (with bandit feedback); see [9]. In the label-efficient prediction setting, the feedback for the action performed on some round is received only if the player explicitly asks for it. The player may freely choose when to observe feedback, subject to a global constraint on the total number of feedback queries. In contrast, in our setting there is a strong correlation between the actions the player takes and the presence of the feedback signal. As a consequence, the player is not free to decide when he observes feedback as in the label-efficient setting. Another setting that may seem closely related to ours is the multi-armed bandit problem with delayed feedback [16, 17]. In this setting, the feedback for the action performed on round t is received at the end of round t + 1. However, note that in all of the examples we have discussed, the feedback is always immediate, but is either nonexistent or unreliable right after a switch. The important aspect of our setup, which does not apply to the label-efficient and delayed feedback settings, is that the feedback adapts to the player's past actions.
Our work leaves a few interesting questions for future research. A closely related adaptive-feedback problem is one where feedback is revealed only on rounds where the player does switch actions. Can the player attain $O(\sqrt{T})$ regret in this setting as well, or is the need to constantly switch actions detrimental to the player? More generally, we can consider other multi-armed bandit problems with adaptive feedback, where the feedback depends on the player's actions on previous rounds. It would be quite interesting to understand what kind of adaptive-feedback patterns give rise to easy problems, for which a regret of $O(\sqrt{T})$ is attainable. Specifically, is there a problem with oblivious losses and adaptive feedback whose minimax regret is $\widetilde{\Theta}(T^{2/3})$, as is the case with adaptive losses?
Acknowledgments

The research leading to these results has received funding from the Microsoft-Technion EC center, and the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 336078 ERC-SUBLRN.
³Auer et al. [4] also present an algorithm called EXP3.P and seemingly prove $O(\sqrt{T})$ regret guarantees against non-oblivious adversaries. These bounds are irrelevant in our setting; see Arora et al. [3].
References
[1] J. Abernethy, E. Hazan, and A. Rakhlin. Competing in the dark: An efficient algorithm for bandit linear optimization. In COLT, pages 263–274, 2008.
[2] A. Antos, G. Bartók, D. Pál, and C. Szepesvári. Toward a classification of finite partial-monitoring games. Theoretical Computer Science, 2012.
[3] R. Arora, O. Dekel, and A. Tewari. Online bandit learning against an adaptive adversary: from regret to policy regret. In Proceedings of the Twenty-Ninth International Conference on Machine Learning, 2012.
[4] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77, 2002.
[5] B. Awerbuch and R. D. Kleinberg. Adaptive routing with end-to-end feedback: Distributed learning and geometric approaches. In Proceedings of the Thirty-Sixth Annual ACM Symposium on Theory of Computing, pages 45–53. ACM, 2004.
[6] S. Bubeck and N. Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1–122, 2012.
[7] S. Bubeck, N. Cesa-Bianchi, and S. M. Kakade. Towards minimax policies for online linear optimization with bandit feedback. In Proceedings of the 25th Annual Conference on Learning Theory (COLT), volume 23, pages 41.1–41.14, 2012.
[8] N. Cesa-Bianchi and G. Lugosi. Combinatorial bandits. Journal of Computer and System Sciences, 78(5):1404–1422, 2012.
[9] N. Cesa-Bianchi, G. Lugosi, and G. Stoltz. Minimizing regret with label efficient prediction. IEEE Transactions on Information Theory, 51(6):2152–2162, 2005.
[10] V. Dani and T. P. Hayes. Robbing the bandit: Less regret in online geometric optimization against an adaptive adversary. In Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms, 2006.
[11] V. Dani, S. M. Kakade, and T. P. Hayes. The price of bandit information for online optimization. In Advances in Neural Information Processing Systems, pages 345–352, 2007.
[12] O. Dekel, J. Ding, T. Koren, and Y. Peres. Bandits with switching costs: T^{2/3} regret. arXiv preprint arXiv:1310.2997, 2013.
[13] E. Hazan, Z. Karnin, and R. Meka. Volumetric spanners and their applications to machine learning. arXiv preprint arXiv:1312.6214, 2013.
[14] R. Kohavi, R. Longbotham, D. Sommerfield, and R. M. Henne. Controlled experiments on the web: survey and practical guide. Data Mining and Knowledge Discovery, 18(1):140–181, 2009.
[15] R. Kohavi, A. Deng, B. Frasca, R. Longbotham, T. Walker, and Y. Xu. Trustworthy online controlled experiments: Five puzzling outcomes explained. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 786–794. ACM, 2012.
[16] C. Mesterharm. Online learning with delayed label feedback. In Proceedings of the Sixteenth International Conference on Algorithmic Learning Theory, 2005.
[17] G. Neu, A. György, C. Szepesvári, and A. Antos. Online Markov decision processes under bandit feedback. In Advances in Neural Information Processing Systems 23, pages 1804–1812, 2010.
Near-optimal sample compression
for nearest neighbors
Lee-Ad Gottlieb
Department of Computer Science and Mathematics, Ariel University
Ariel, Israel. [email protected]
Aryeh Kontorovich
Computer Science Department, Ben Gurion University
Beer Sheva, Israel. [email protected]
Pinhas Nisnevitch
Department of Computer Science and Mathematics, Ariel University
Ariel, Israel. [email protected]
Abstract
We present the first sample compression algorithm for nearest neighbors with non-trivial performance guarantees. We complement these guarantees by demonstrating almost matching hardness lower bounds, which show that our bound is nearly optimal. Our result yields new insight into margin-based nearest neighbor classification in metric spaces and allows us to significantly sharpen and simplify existing bounds. Some encouraging empirical results are also presented.
1 Introduction
The nearest neighbor classifier for non-parametric classification is perhaps the most intuitive learning algorithm. It is apparently the earliest, having been introduced by Fix and Hodges in 1951 (technical report reprinted in [1]). In this model, the learner observes a sample S of labeled points $(X, Y) = (X_i, Y_i)_{i \in [n]}$, where $X_i$ is a point in some metric space $\mathcal{X}$ and $Y_i \in \{1, -1\}$ is its label. Being a metric space, $\mathcal{X}$ is equipped with a distance function $d : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$. Given a new unlabeled point $x \in \mathcal{X}$ to be classified, x is assigned the same label as its nearest neighbor in S, which is $\operatorname{argmin}_{X_i \in X} d(x, X_i)$. Under mild regularity assumptions, the nearest neighbor classifier's expected error is asymptotically bounded by twice the Bayesian error, when the sample size tends to infinity [2].¹ These results have inspired a vast body of research on proximity-based classification (see [4, 5] for extensive background and [6] for a recent refinement of classic results). More recently, strong margin-dependent generalization bounds were obtained in [7], where the margin is the minimum distance between opposite labeled points in S.
In addition to provable generalization bounds, nearest neighbor (NN) classification enjoys several other advantages. These include simple evaluation on new data, immediate extension to multiclass labels, and minimal structural assumptions: it does not assume a Hilbertian or even a Banach space. However, the naive NN approach also has disadvantages. In particular, it requires storing the entire sample, which may be memory-intensive. Further, information-theoretic considerations show that exact NN evaluation requires $\Omega(|S|)$ time in high-dimensional metric spaces [8] (and possibly Euclidean space as well [9]), a phenomenon known as the algorithmic curse of dimensionality. Lastly, the NN classifier has infinite VC-dimension [5], implying that it tends to overfit the data.

¹A Bayes-consistent modification of the 1-NN classifier was recently proposed in [3].
This last problem can be mitigated by taking the majority vote among k > 1 nearest neighbors [10, 11, 5], or by deleting some sample points so as to attain a larger margin [12].

Shortcomings in the NN classifier led Hart [13] to pose the problem of sample compression. Indeed, significant compression of the sample has the potential to simultaneously address the issues of memory usage, NN search time, and overfitting. Hart considered the Minimum Consistent Subset problem, elsewhere called the Nearest Neighbor Condensing problem, which seeks to identify a minimal subset $S' \subseteq S$ that is consistent with S, in the sense that the nearest neighbor in $S'$ of every $x \in S$ possesses the same label as x. This problem is known to be NP-hard [14, 15], and Hart provided a heuristic with runtime $O(n^3)$. The runtime was recently improved by [16] to $O(n^2)$, but neither paper gave performance guarantees.
The Nearest Neighbor Condensing problem has been the subject of extensive research since its introduction [17, 18, 19]. Yet surprisingly, there are no known approximation algorithms for it: all previous results on this problem are heuristics that lack any non-trivial approximation guarantees. Conversely, no strong hardness-of-approximation results for this problem are known, which indicates a gap in the current state of knowledge.
Main results. Our contribution aims at closing the existing gap in solutions to the Nearest Neighbor Condensing problem. We present a simple near-optimal approximation algorithm for this problem, where our only structural assumption is that the points lie in some metric space. Define the scaled margin $\gamma < 1$ of a sample S as the ratio of the minimum distance between opposite labeled points in S to the diameter of S. Our algorithm produces a consistent set $S' \subseteq S$ of size $\lceil 1/\gamma \rceil^{\mathrm{ddim}(S)+1}$ (Theorem 1), where $\mathrm{ddim}(S)$ is the doubling dimension of the space S. This result can significantly speed up evaluation on test points, and also yields sharper and simpler generalization bounds than were previously known (Theorem 3).

To establish optimality, we complement the approximation result with an almost matching hardness-of-approximation lower bound. Using a reduction from the Label Cover problem, we show that the Nearest Neighbor Condensing problem is NP-hard to approximate within factor $2^{(\mathrm{ddim}(S) \log(1/\gamma))^{1-o(1)}}$ (Theorem 2). Note that the above upper bound is an absolute size guarantee, which is stronger than an approximation guarantee.
Additionally, we present a simple heuristic to be applied in conjunction with the algorithm of Theorem 1, which achieves further sample compression. The empirical performances of both our algorithm and heuristic seem encouraging (see Section 4).
Related work. A well-studied problem related to the Nearest Neighbor Condensing problem is that of extracting a small set of simple conjunctions consistent with much of the sample, introduced by [20] and shown by [21] to be equivalent to minimum Set Cover (see [22, 23] for further extensions). This problem is monotone in the sense that adding a conjunction to the solution set can only increase the sample accuracy of the solution. In contrast, in our problem the addition of a point of S to $S'$ can cause $S'$ to be inconsistent, and this distinction is critical to the hardness of our problem.

Removal of points from the sample can also yield lower dimensionality, which itself implies faster nearest neighbor evaluation and better generalization bounds. For metric spaces, [24] and [25] gave algorithms for dimensionality reduction via point removal (irrespective of margin size).

The use of doubling dimension as a tool to characterize metric learning has appeared several times in the literature, initially by [26] in the context of nearest neighbor classification, and then in [27] and [28]. A series of papers by Gottlieb, Kontorovich and Krauthgamer investigates doubling spaces for classification [12], regression [29], and dimension reduction [25].
k-nearest neighbor. A natural question is whether the Nearest Neighbor Condensing problem of [13] has a direct analogue when the 1-nearest neighbor rule is replaced by the (k > 1)-nearest neighbor rule, that is, when the label of a point is determined by the majority vote among its k nearest neighbors. A simple argument shows that the analogy breaks down. Indeed, a minimal requirement for the condensing problem to be meaningful is that the full (uncondensed) set S is feasible, i.e. consistent with itself. Yet even for k = 3 there exist self-inconsistent sets. Take for example the set S consisting of two positive points at (0, 1) and (0, −1) and two negative points at (1, 0) and (−1, 0). Then the 3-nearest neighbor rule misclassifies every point in S, hence S itself is inconsistent.
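The four-point example above is easy to verify programmatically; the following check (a sketch, using the exact coordinates of the example) confirms that the 3-nearest-neighbor rule, applied to the set itself, misclassifies all four points.

import numpy as np

pts = np.array([(0, 1), (0, -1), (1, 0), (-1, 0)], dtype=float)
labels = np.array([+1, +1, -1, -1])

for i in range(len(pts)):
    dists = np.linalg.norm(pts - pts[i], axis=1)
    knn = np.argsort(dists)[:3]         # the point itself plus its two nearest
    vote = np.sign(labels[knn].sum())
    assert vote != labels[i]            # every point is misclassified
# hence S is inconsistent with itself under the 3-NN rule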
Paper outline. This paper is organized as follows. In Section 2, we present our algorithm and prove
its performance bound, as well as the reduction implying its near optimality (Theorem 2). We then
highlight the implications of this algorithm for learning in Section 3. In Section 4 we describe a
heuristic which refines our algorithm, and present empirical results.
1.1 Preliminaries
Metric spaces. A metric d on a set $\mathcal{X}$ is a positive symmetric function satisfying the triangle inequality $d(x, y) \le d(x, z) + d(z, y)$; together the two comprise the metric space $(\mathcal{X}, d)$. The diameter of a set $A \subseteq \mathcal{X}$ is defined by $\mathrm{diam}(A) = \sup_{x,y \in A} d(x, y)$. Throughout this paper we will assume that $\mathrm{diam}(S) = 1$; this can always be achieved by scaling.
Doubling dimension. For a metric $(\mathcal{X}, d)$, let $\lambda$ be the smallest value such that every ball in $\mathcal{X}$ of radius r (for any r) can be covered by $\lambda$ balls of radius $\frac{r}{2}$. The doubling dimension of $\mathcal{X}$ is $\mathrm{ddim}(\mathcal{X}) = \log_2 \lambda$. A metric is doubling when its doubling dimension is bounded. Note that while a low Euclidean dimension implies a low doubling dimension (Euclidean metrics of dimension d have doubling dimension O(d) [30]), low doubling dimension is strictly more general than low Euclidean dimension. The following packing property can be demonstrated via a repetitive application of the doubling property: For a set S with doubling dimension $\mathrm{ddim}(\mathcal{X})$ and $\mathrm{diam}(S) \le \beta$, if the minimum interpoint distance in S is at least $\alpha < \beta$ then
$$|S| \;\le\; \lceil \beta/\alpha \rceil^{\mathrm{ddim}(\mathcal{X})+1} \tag{1}$$
(see, for example, [8]). The above bound is tight up to constant factors, meaning there exist sets of size $(\beta/\alpha)^{\Omega(\mathrm{ddim}(\mathcal{X}))}$.
Nearest Neighbor Condensing. Formally, we define the Nearest Neighbor Condensing (NNC) problem as follows: We are given a set $S = S_- \cup S_+$ of points, and a distance metric $d : S \times S \to \mathbb{R}$. We must compute a minimal cardinality subset $S' \subseteq S$ with the property that for any $p \in S$, the nearest neighbor of p in $S'$ comes from the same subset $\{S_+, S_-\}$ as does p. If p has multiple exact nearest neighbors in $S'$, then they must all be of the same subset.
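Since NNC is NP-hard, exact solutions are feasible only on tiny instances, but a brute-force solver makes the definition concrete. The sketch below enumerates candidate subsets in order of size and uses Euclidean distances as the metric; all names are ours.

import itertools
import numpy as np

def consistent(subset, pts, labels):
    # check that every exact nearest neighbor in `subset` of each point
    # carries that point's label, as the definition requires
    for i in range(len(pts)):
        d = np.linalg.norm(pts[list(subset)] - pts[i], axis=1)
        nearest = [subset[j] for j in np.flatnonzero(d == d.min())]
        if any(labels[j] != labels[i] for j in nearest):
            return False
    return True

def min_consistent_subset(pts, labels):
    n = len(pts)
    for size in range(1, n + 1):
        for subset in itertools.combinations(range(n), size):
            if consistent(subset, pts, labels):
                return subset            # first hit is a minimum-cardinality set
    return tuple(range(n))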
Label Cover. The Label Cover problem was first introduced by [31] in a seminal paper on the hardness of computation. Several formulations of this problem have appeared in the literature, and we give the description forwarded by [32]: The input is a bipartite graph G = (U, V, E), with two sets of labels: A for U and B for V. For each edge $(u, v) \in E$ (where $u \in U$, $v \in V$), we are given a relation $\Pi_{u,v} \subseteq A \times B$ consisting of admissible label pairs for that edge. A labeling (f, g) is a pair of functions $f : U \to 2^A$ and $g : V \to 2^B \setminus \{\emptyset\}$ assigning a set of labels to each vertex. A labeling covers an edge (u, v) if for every label $b \in g(v)$ there is some label $a \in f(u)$ such that $(a, b) \in \Pi_{u,v}$. The goal is to find a labeling that covers all edges, and which minimizes the sum of the number of labels assigned to each $u \in U$, that is $\sum_{u \in U} |f(u)|$. It was shown in [32] that it is NP-hard to approximate Label Cover to within a factor $2^{(\log n)^{1-o(1)}}$, where n is the total size of the input.
Learning. We work in the agnostic learning model [33, 5]. The learner receives n labeled examples $(X_i, Y_i) \in \mathcal{X} \times \{-1, 1\}$ drawn iid according to some unknown probability distribution $\mathcal{P}$. Associated to any hypothesis $h : \mathcal{X} \to \{-1, 1\}$ is its empirical error $\widehat{\mathrm{err}}(h) = n^{-1} \sum_{i \in [n]} \mathbb{1}_{\{h(X_i) \neq Y_i\}}$ and generalization error $\mathrm{err}(h) = \mathcal{P}(h(X) \neq Y)$.

2 Near-optimal approximation algorithm

In this section, we describe a simple approximation algorithm for the Nearest Neighbor Condensing problem. In Section 2.1 we provide almost tight hardness-of-approximation bounds. We have the following theorem:
2 Near-optimal approximation algorithm
In this section, we describe a simple approximation algorithm for the Nearest Neighbor Condensing
problem. In Section 2.1 we provide almost tight hardness-of-approximation bounds. We have the
following theorem:
Theorem 1. Given a point set S and its scaled margin $\gamma < 1$, there exists an algorithm that in time $\min\{n^2,\, 2^{O(\mathrm{ddim}(S))} n \log(1/\gamma)\}$ computes a consistent set $S' \subseteq S$ of size at most $\lceil 1/\gamma \rceil^{\mathrm{ddim}(S)+1}$.
Recall that a γ-net of a point set S is a subset $S_\gamma \subseteq S$ with two properties:

(i) Packing. The minimum interpoint distance in $S_\gamma$ is at least γ.
(ii) Covering. Every point $p \in S$ has a nearest neighbor in $S_\gamma$ strictly within distance γ.

We make the following observation: Since the margin of the point set is γ, a γ-net of S is consistent with S. That is, every point $p \in S$ has a neighbor in $S_\gamma$ strictly within distance γ, and since the margin of S is γ, this neighbor must be of the same label set as p. By the packing property of doubling spaces (Equation 1), the size of $S_\gamma$ is at most $\lceil 1/\gamma \rceil^{\mathrm{ddim}(S)+1}$. The solution returned by our algorithm is $S_\gamma$, and it satisfies the guarantees claimed in Theorem 1.

It remains only to compute the net $S_\gamma$. A brute-force greedy algorithm can accomplish this in time $O(n^2)$: For every point $p \in S$, we add p to $S_\gamma$ if the distance from p to all points currently in $S_\gamma$ is γ or greater, $d(p, S_\gamma) \ge \gamma$. See Algorithm 1.
Algorithm 1 Brute-force net construction
Require: S
1: $S_\gamma \leftarrow$ an arbitrary point of S
2: for all $p \in S$ do
3:    if $d(p, S_\gamma) \ge \gamma$ then
4:        $S_\gamma \leftarrow S_\gamma \cup \{p\}$
5:    end if
6: end for
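In Python, the greedy construction is a few lines; the sketch below mirrors Algorithm 1, assuming the points are the rows of an array and that Euclidean distance stands in for the general metric. The helper computing the scaled margin follows the definition given in the introduction.

import numpy as np

def scaled_margin(pts, labels):
    # minimum distance between opposite-labeled points, divided by the diameter
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    opposite = labels[:, None] != labels[None, :]
    return d[opposite].min() / d.max()

def gamma_net(pts, gamma):
    # greedy O(n^2) construction: add p whenever d(p, net) >= gamma
    net = [0]                            # an arbitrary first point
    for i in range(1, len(pts)):
        if np.linalg.norm(pts[net] - pts[i], axis=1).min() >= gamma:
            net.append(i)
    return net

# With gamma = scaled_margin(pts, labels) (and diam(pts) scaled to 1), the
# indices returned by gamma_net form a consistent subset, per Theorem 1.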
The construction time can be improved by building a net hierarchy, similar to the one employed by [8], in total time $2^{O(\mathrm{ddim}(S))} n \log(1/\gamma)$. (See also [34, 35, 36].) A hierarchy consists of all nets $S_{2^i}$ for $i = 0, -1, \dots, \lfloor \log \gamma \rfloor$, where $S_{2^i} \subseteq S_{2^{i-1}}$ for all $i > \lfloor \log \gamma \rfloor$. Two points $p, q \in S_{2^i}$ are neighbors if $d(p, q) < 4 \cdot 2^i$. Further, each point $q \in S$ is a child of a single nearby parent point $p \in S_{2^i}$ satisfying $d(p, q) < 2^i$. By the definition of a net, a parent point must exist. If two points $p, q \in S_{2^i}$ are neighbors ($d(p, q) < 4 \cdot 2^i$) then their respective parents $p', q' \in S_{2^{i+1}}$ are necessarily neighbors as well: $d(p', q') \le d(p', p) + d(p, q) + d(q, q') < 2^{i+1} + 4 \cdot 2^i + 2^{i+1} = 4 \cdot 2^{i+1}$.

The net $S_{2^0} = S_1$ consists of a single arbitrary point. Having constructed $S_{2^i}$, it is an easy matter to construct $S_{2^{i-1}}$: Since we require $S_{2^{i-1}} \supseteq S_{2^i}$, we will initialize $S_{2^{i-1}} = S_{2^i}$. For each $q \in S$, we need only to determine whether $d(q, S_{2^{i-1}}) \ge 2^{i-1}$, and if so add q to $S_{2^{i-1}}$. Crucially, we need not compare q to all points of $S_{2^{i-1}}$: If there exists a point $p \in S_{2^i}$ with $d(q, p) < 2^i$, then the respective parents $p', q' \in S_{2^i}$ of p, q must be neighbors. Let the set T include only the children of $q'$ and of $q'$'s neighbors. To determine the inclusion of every $q \in S$ in $S_{2^{i-1}}$, it suffices to compute whether $d(q, T) \ge 2^{i-1}$, and so n such queries are sufficient to construct $S_{2^{i-1}}$. The points of T have minimum distance $2^{i-1}$ and are all contained in a ball of radius $4 \cdot 2^i + 2^{i-1}$ centered at T, so by the packing property (Equation 1) $|T| = 2^{O(\mathrm{ddim}(S))}$. It follows that the above query $d(q, T)$ can be answered in time $2^{O(\mathrm{ddim}(S))}$. For each point in S we execute $O(\log(1/\gamma))$ queries, for a total runtime of $2^{O(\mathrm{ddim}(S))} n \log(1/\gamma)$. The above procedure is illustrated in the Appendix.
2.1 Hardness of approximation of NNC
In this section, we prove almost matching hardness results for the NNC problem.
Theorem 2. Given a set S of labeled points with scaled margin γ, it is NP-hard to approximate the solution to the Nearest Neighbor Condensing problem on S to within a factor $2^{(\mathrm{ddim}(S) \log(1/\gamma))^{1-o(1)}}$.
To simplify the proof, we introduce an easier version of NNC called Weighted Nearest Neighbor Condensing (WNNC). In this problem, the input is augmented with a function assigning a weight to each point of S, and the goal is to find a subset $S' \subseteq S$ of minimum total weight. We will reduce Label Cover to WNNC and then reduce WNNC to NNC (with some mild assumptions on the admissible range of weights), all while preserving hardness of approximation. The theorem will follow from the hardness of Label Cover [32].

First reduction. Given a Label Cover instance of size $m = |U| + |V| + |A| + |B| + |E| + \sum_{e \in E} |\Pi_e|$, fix a large value c to be specified later, and an infinitesimally small constant ζ. We create an instance of WNNC as follows (see Figure 1).

1. We first create a point $p_+ \in S_+$ of weight 1.
[Figure 1 shows the reduction: on the left, a Label Cover instance with vertex sets U = {u1, u2} and V = {v1, v2}, edges e1, e2, e3, and admissible label pairs such as l1: (a1, b1) ∈ Π_{e1}; on the right, the corresponding NNC point sets $S_{U,A} \subseteq S_+$, $S_L \subseteq S_+$, $S_{V,B} \subseteq S_-$, $S_E \subseteq S_-$, together with the points $p_+$, $p_-$ and the interpoint distances 2, 2 + ζ, 2 + 2ζ, 3, 3 + ζ described below.]

Figure 1: Reduction from Label Cover to Nearest Neighbor Condensing.
We introduce the set $S_E \subseteq S_-$ representing edges in E: For each edge $e \in E$, create a point $p_e$ of weight ∞. The distance from $p_e$ to $p_+$ is $3 + \zeta$.

2. We introduce the set $S_{V,B} \subseteq S_-$ representing pairs in $V \times B$: For each vertex $v \in V$ and label $b \in B$, create a point $p_{v,b}$ of weight 1. If edge e is incident to v and there exists a label $(a, b) \in \Pi_e$ for any $a \in A$, then the distance from $p_{v,b}$ to $p_e$ is 3.
Further add a point $p_- \in S_-$ of weight 1, at distance 2 from all points in $S_{V,B}$.

3. We introduce the set $S_L \subseteq S_+$ representing labels in $\Pi_e$. For each edge $e = (u, v)$ and label $b \in B$ for which $(a, b) \in \Pi_e$ (for any $a \in A$), we create a point $p_{e,b} \in S_L$ of weight ∞. $p_{e,b}$ represents the set of labels $(a, b) \in \Pi_e$ over all $a \in A$. $p_{e,b}$ is at distance $2 + \zeta$ from $p_{v,b}$.
Further add a point $p'_+ \in S_+$ of weight 1, at distance $2 + 2\zeta$ from all points in $S_L$.

4. We introduce the set $S_{U,A} \subseteq S_+$ representing pairs in $U \times A$: For each vertex $u \in U$ and label $a \in A$, create a point $p_{u,a}$ of weight c. For any edge $e = (u, v)$ and label $b \in B$, if $(a, b) \in \Pi_e$ then the distance from $p_{e,b} \in S_L$ to $p_{u,a}$ is 2.
The points of each set $S_E$, $S_{V,B}$, $S_L$ and $S_{U,A}$ are packed into respective balls of diameter 1. Fixing any target doubling dimension $D = \Omega(1)$ and recalling that the cardinality of each of these sets is less than $m^2$, we conclude that the minimum interpoint distance in each ball is $m^{-O(1/D)}$. All interpoint distances not yet specified are set to their maximum possible value. The diameter of the resulting set is constant, so its scaled margin is $\gamma = m^{-O(1/D)}$. We claim that a solution of WNNC on the constructed instance implies some solution of the Label Cover instance:
1. $p_+$ must appear in any solution: The nearest neighbors of $p_+$ are the negative points of $S_E$, so if $p_+$ is not included, the nearest neighbor of the set $S_E$ is necessarily the nearest neighbor of $p_+$, which is not consistent.

2. Points in $S_E$ have infinite weight, so no points of $S_E$ appear in the solution. All points of $S_E$ are at distance exactly $3 + \zeta$ from $p_+$, hence each point of $S_E$ must be covered by some point of $S_{V,B}$ to which it is connected; other points in $S_{V,B}$ are farther than $3 + \zeta$. (Note that $S_{V,B}$ itself can be covered by including the single point $p_-$.)
Choosing covering points in $S_{V,B}$ corresponds to assigning labels in B to vertices of V in the Label Cover instance.

3. Points in $S_L$ have infinite weight, so no points of $S_L$ appear in the solution. Hence, either $p'_+$ or some points of $S_{U,A}$ must be used to cover points of $S_L$. Specifically, a point in $S_L \subseteq S_+$ incident on an included point of $S_{V,B} \subseteq S_-$ is at distance exactly $2 + \zeta$ from this point, and so it must be covered by some point of $S_{U,A}$ to which it is connected, at distance 2; other points in $S_{U,A}$ are farther than $2 + \zeta$. Points of $S_L$ not incident on an included point of $S_{V,B}$ can be covered by $p'_+$, which at distance $2 + 2\zeta$ is still closer than any point in $S_{V,B}$. (Note that $S_{U,A}$ itself can be covered by including a single arbitrary point of $S_{U,A}$, which at distance 1 is closer than all other point sets.)
Choosing the covering point in $S_{U,A}$ corresponds to assigning labels in A to vertices of U in the Label Cover instance, thereby inducing a valid labeling for some edge and solving the Label Cover problem.
Now, a trivial solution to this instance of WNNC is to take all points of $S_{U,A}$, $S_{V,B}$ and the single point $p_+$: then $S_E$ and $p_-$ are covered by $S_{V,B}$, and $S_L$ and $p'_+$ by $S_{U,A}$. The total weight of the resulting set is $c|S_{U,A}| + |S_{V,B}| + 1$, and this provides an upper bound on the optimal solution. By setting $c = m^4 \ge m^3 > m(|S_{V,B}| + 1)$, we ensure that the solution cost of WNNC is asymptotically equal to the number of points of $S_{U,A}$ included in its solution. This in turn is exactly the sum of labels of A assigned to each vertex of U in a solution to the Label Cover problem. Label Cover is hard to approximate within a factor $2^{(\log m)^{1-o(1)}}$, implying that WNNC is hard to approximate within a factor of $2^{(\log m)^{1-o(1)}} = 2^{(D \log(1/\gamma))^{1-o(1)}}$.
Before proceeding to the next reduction, we note that to rule out the inclusion of points of $S_E$, $S_L$ in the solution set, infinite weight is not necessary: It suffices to give each heavy point weight $c^2$, which is itself greater than the weight of the optimal solution by a factor of at least $m^2$. Hence, we may assume all weights are restricted to the range $[1, m^{O(1)}]$, and the hardness result for WNNC still holds.
Second reduction. We now reduce WNNC to NNC, assuming that the weights of the n points are in the range $[1, m^{O(1)}]$. Let γ be the scaled margin of the WNNC instance. To mimic the weight assignment of WNNC using the unweighted points of NNC, we introduce the following gadget graph G(w, D): Given a parameter w and doubling dimension D, create a point set T of size w whose interpoint distances are the same as those realized by a set of contiguous points on the D-dimensional $\ell_1$-grid of side-length $\lceil w^{1/D} \rceil$. Now replace each point $p \in T$ by twin positive and negative points at mutual distance $\frac{\gamma}{2}$, so that the distance from each twin replacing p to each twin replacing any $q \in T$ is the same as the distance from p to q. G(w, D) consists of T, as well as a single positive point at distance $\lceil w^{1/D} \rceil$ from all positive points of T and $\lceil w^{1/D} \rceil + \frac{\gamma}{2}$ from all negative points of T, and a single negative point at distance $\lceil w^{1/D} \rceil$ from all negative points of T and $\lceil w^{1/D} \rceil + \frac{\gamma}{2}$ from all positive points of T.

Clearly, the optimal solution to NNC on the gadget instance is to choose the two points not in T. Further, if any single point in T is included in the solution, then all of T must be included in the solution: First, the twin of the included point must also be included in the solution. Then, any point at distance 1 from both twins must be included as well, along with its own twin. But then all points within distance 1 of the new twins must be included, etc., until all points of T are found in the solution.
To effectively assign weight to a positive point of NNC, we add a gadget to the point set, and place
all negative points of the gadget at distance ⌈w^{1/D}⌉ from this point. If the point is not included in
the NNC solution, then the cost of the gadget is only 2 (see footnote 2). But if this point is included in the NNC
solution, then it is the nearest neighbor of the negative gadget points, and so all the gadget points
must be included in the solution, incurring a cost of w. A similar argument allows us to assign
weight to negative points of NNC. The scaled margin of the NNC instance is of size Ω(γ/w^{1/D}) =
Ω(γ·m^{-O(1/D)}), which completes the proof of Theorem 2.
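To make the gadget concrete, the following sketch (an illustration we added, not code from the paper; the helper names and the direct use of ℓ₁ distances on integer grid points are our assumptions) builds the distance matrix of G(w, D) exactly as defined above:

```python
import itertools
import numpy as np

def gadget_distances(w, D, gamma):
    """Distance matrix of the gadget G(w, D), following the definition above.
    Returns (labels, M): labels[i] in {+1, -1}, M[i, j] = distance."""
    side = int(np.ceil(w ** (1.0 / D)))
    grid = list(itertools.product(range(side), repeat=D))[:w]  # w contiguous grid points
    n = 2 * w + 2                       # twin pairs plus the two apex points
    pos_apex, neg_apex = 2 * w, 2 * w + 1
    labels = [1, -1] * w + [1, -1]      # index 2k: positive twin, 2k+1: negative twin
    M = np.zeros((n, n))
    for a in range(w):
        M[2*a, 2*a + 1] = M[2*a + 1, 2*a] = gamma / 2          # within a twin pair
        for b in range(a + 1, w):
            d = sum(abs(x - y) for x, y in zip(grid[a], grid[b]))  # l1 grid distance
            for s, t in itertools.product((0, 1), repeat=2):
                M[2*a + s, 2*b + t] = M[2*b + t, 2*a + s] = d
    for a in range(w):                  # the two apex points
        M[pos_apex, 2*a] = M[2*a, pos_apex] = side
        M[pos_apex, 2*a + 1] = M[2*a + 1, pos_apex] = side + gamma / 2
        M[neg_apex, 2*a + 1] = M[2*a + 1, neg_apex] = side
        M[neg_apex, 2*a] = M[2*a, neg_apex] = side + gamma / 2
    return labels, M
```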
3 Learning

In this section, we apply Theorem 1 to obtain improved generalization bounds for binary classification in doubling spaces. Working in the standard agnostic PAC setting, we take the labeled sample
S to be drawn iid from some unknown distribution over X × {−1, 1}, with respect to which all of
our probabilities will be defined. In a slight abuse of notation, we will blur the distinction between
S ⊆ X as a collection of points in a metric space and S ∈ (X × {−1, 1})ⁿ as a sequence of point-label pairs. As mentioned in the preliminaries, there is no loss of generality in taking diam(S) = 1.
Partitioning the sample S = S₊ ∪ S₋ into its positively and negatively labeled subsets, the margin
induced by the sample is given by γ(S) = d(S₊, S₋), where d(A, B) := min_{x∈A, x′∈B} d(x, x′) for
A, B ⊆ X. Any labeled sample S induces the nearest-neighbor classifier ν_S : X → {−1, 1} via

    ν_S(x) = +1 if d(x, S₊) < d(x, S₋),  and  ν_S(x) = −1 otherwise.
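As a quick illustration (our own sketch, not from the paper; the numpy-array representation and the ℓ₁ metric are assumptions), both the sample margin and the induced classifier can be computed directly from pairwise distances:

```python
import numpy as np

def sample_margin(X, y):
    """gamma(S) = d(S+, S-): minimum distance between opposite-labeled points."""
    pos, neg = X[y == 1], X[y == -1]
    return min(np.abs(p - q).sum() for p in pos for q in neg)  # l1 metric

def nn_classify(X, y, x):
    """The nearest-neighbor classifier nu_S induced by the labeled sample (X, y)."""
    d_pos = min(np.abs(p - x).sum() for p in X[y == 1])
    d_neg = min(np.abs(q - x).sum() for q in X[y == -1])
    return 1 if d_pos < d_neg else -1
```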
² By scaling up all weights by a factor of n², we can ensure that the cost of all added gadgets (2n) is
asymptotically negligible.
We say that S̃ ⊆ S is ε-consistent with S if (1/n) Σ_{x∈S} 1{ν_{S̃}(x) ≠ ν_S(x)} ≤ ε. For ε = 0, an ε-consistent
S̃ is simply said to be consistent (which matches our previous notion of consistent subsets). A
sample S is said to be (ε, γ)-separable (with witness S̃) if there is an ε-consistent S̃ ⊆ S with
γ(S̃) ≥ γ.
We begin by invoking a standard Occam-type argument to show that the existence of small ε-consistent sets implies good generalization. The generalizing power of sample compression was
independently discovered by [37, 38], and later elaborated upon by [39].
Theorem 3. For any distribution P, any n ∈ ℕ and any 0 < δ < 1, with probability at least 1 − δ
over the random sample S ∈ (X × {−1, 1})ⁿ, the following holds:

(i) If S̃ ⊆ S is consistent with S, then

    err(ν_{S̃}) ≤ ( |S̃| log n + log n + log(1/δ) ) / ( n − |S̃| ).

(ii) If S̃ ⊆ S is ε-consistent with S, then

    err(ν_{S̃}) ≤ εn / (n − |S̃|) + sqrt( ( |S̃| log n + 2 log n + log(1/δ) ) / ( 2(n − |S̃|) ) ).

Proof. Finding a consistent (resp., ε-consistent) S̃ ⊆ S constitutes a sample compression scheme of
size |S̃|, as stipulated in [39]. Hence, the bounds in (i) and (ii) follow immediately from Theorems
1 and 2 ibid.
Corollary 1. With probability at least 1 − δ, the following holds: if S is (ε, γ)-separable with
witness S̃, then

    err(ν_{S̃}) ≤ εn / (n − ℓ) + sqrt( ( ℓ log n + 2 log n + log(1/δ) ) / ( 2(n − ℓ) ) ),

where ℓ = ⌈1/γ⌉^{ddim(S)+1}.

Proof. Follows immediately from Theorems 1 and 3(ii).
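For intuition about the scale of this bound, the following snippet (our own illustration; the particular parameter values and the natural-log base are arbitrary assumptions) evaluates the right-hand side of Corollary 1:

```python
import math

def corollary1_bound(n, ddim, gamma, eps, delta):
    """Right-hand side of the inequality in Corollary 1."""
    ell = math.ceil(1.0 / gamma) ** (ddim + 1)
    assert ell < n, "bound is vacuous unless the compressed size ell is below n"
    slack = ell * math.log(n) + 2 * math.log(n) + math.log(1 / delta)
    return eps * n / (n - ell) + math.sqrt(slack / (2 * (n - ell)))

# e.g. n = 100000 points, ddim(S) = 2, margin 0.2, eps = 0.01, delta = 0.05
print(corollary1_bound(100000, 2, 0.2, 0.01, 0.05))
```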
Remark. It is instructive to compare the bound above to [12, Corollary 5]. Stated in the language
of this paper, the latter upper-bounds the NN generalization error in terms of the sample margin γ
and ddim(X) by

    ε + sqrt( (2/n) ( d_γ ln(34en/d_γ) log₂(578n) + ln(4/δ) ) ),        (2)

where d_γ = ⌈16/γ⌉^{ddim(X)+1} and ε is the fraction of the points in S that violate the margin condition (i.e., opposite-labeled point pairs less than γ apart in d). Hence, Corollary 1 is a considerable improvement over (2) in at least three aspects. First, the data-dependent ddim(S) may be significantly
smaller than the dimension of the ambient space, ddim(X).³ Secondly, the factor of 16^{ddim(X)+1}
is shaved off. Finally, (2) relied on some fairly intricate fat-shattering arguments [40, 41], while
Corollary 1 is an almost immediate consequence of much simpler Occam-type results.
One limitation of Theorem 1 is that it requires the sample to be (0, γ)-separable. The form of the
bound in Corollary 1 suggests a natural Structural Risk Minimization (SRM) procedure: minimize
the right-hand side over (ε, γ). A solution to this problem was (essentially) given in [12, Theorem 7]:

Theorem 4. Let R(ε, γ) denote the right-hand side of the inequality in Corollary 1 and put
(ε*, γ*) = argmin_{ε,γ} R(ε, γ). Then (i) one may compute (ε*, γ*) in O(n^{4.376}) randomized time;
(ii) one may compute (ε̃, γ̃) satisfying R(ε̃, γ̃) ≤ 4R(ε*, γ*) in O(ddim(S)·n² log n) deterministic
time. Both solutions yield a witness S̃ ⊆ S of (ε, γ)-separability as a by-product.
Having thus computed the optimal (or near-optimal) (ε̃, γ̃) with the corresponding witness S̃, we may
now run the algorithm furnished by Theorem 1 on the sub-sample S̃ and invoke the generalization
bound in Corollary 1. The latter holds uniformly over all ε̃, γ̃.

³ In general, ddim(S) ≤ c·ddim(X) for some universal constant c, as shown in [24].
4 Experiments

In this section we discuss experimental results. First, we will describe a simple heuristic built upon
our algorithm. The theoretical guarantees in Theorem 1 feature a dependence on the scaled margin
γ, and our heuristic aims to give an improved solution in the problematic case where γ is small.

Consider the following procedure for obtaining a smaller consistent set. We first extract a net S̃
satisfying the guarantees of Theorem 1. We then remove points from S̃ using the following rule:
for all i ∈ {0, . . . , ⌈log γ⌉}, and for each p ∈ S̃, if the distance from p to all opposite-labeled points
in S̃ is at least 2·2^i, then remove from S̃ all points strictly within distance 2^i − γ of p (see
Algorithm 2). We can show that the resulting set is consistent:
Lemma 5. The above heuristic produces a consistent solution.

Proof. Consider a point p ∈ S̃, and assume without loss of generality that p is positive. If
d(p, S̃₋) ≥ 2·2^i, then the positive net-points strictly within distance 2^i of p are closer to p than to
any negative point in S̃, and are "covered" by p. The removed positive net-points at distance 2^i − γ
themselves cover other positive points of S within distance γ, but p covers these points of S as well.
Further, p cannot be removed at a later stage in the algorithm, since p's distance from all remaining
points is at least 2^i − γ.
Algorithm 2 Consistent pruning heuristic
1: S̃ is produced by Algorithm 1 or its fast version (Appendix)
2: for all i ∈ {0, . . . , ⌈log γ⌉} do
3:   for all p ∈ S̃ do
4:     if p ∈ S̃ and the distance from p to every opposite-labeled point of S̃ is at least 2·2^i then
5:       for all q ∈ S̃, q ≠ p, with d(p, q) < 2^i − γ do
6:         S̃ ← S̃ \ {q}
7:       end for
8:     end if
9:   end for
10: end for
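A direct transcription of Algorithm 2 (our own sketch, not the authors' code; the set-of-tuples representation, the ℓ₁ metric, and iterating i downward from 0 to ⌈log₂ γ⌉, since γ < 1, are assumptions):

```python
import math

def l1(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

def prune(net, labels, gamma):
    """Algorithm 2: thin a consistent net while preserving consistency.
    net: iterable of points (tuples); labels: dict point -> +1/-1."""
    S = set(net)
    for i in range(0, math.ceil(math.log2(gamma)) - 1, -1):  # i = 0, -1, ...
        r = 2.0 ** i
        for p in list(S):
            if p not in S:
                continue  # p was already removed at this scale
            opposite = [q for q in S if labels[q] != labels[p]]
            if all(l1(p, q) >= 2 * r for q in opposite):
                S -= {q for q in S if q != p and l1(p, q) < r - gamma}
    return S
```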
As a proof of concept, we tested our sample compression algorithms on several data sets from the
UCI Machine Learning Repository. These included the Skin Segmentation, Statlog Shuttle, and
Covertype sets.⁴ The final dataset features 7 different label types, which we treated as 21 separate
binary classification problems; we report results for labels 1 vs. 4, 4 vs. 6, and 4 vs. 7, and these
typify the remaining pairs. We stress that the focus of our experiments is to demonstrate that (i) a
significant amount of consistent sample compression is often possible and (ii) the compression does
not adversely affect the generalization error.

For each data set and experiment, we sampled equal-sized learning and test sets, with equal representation of each label type. The ℓ₁ metric was used for all data sets. We report (i) the initial sample
set size, (ii) the percentage of points retained after the net extraction procedure of Algorithm 1, (iii)
the percentage retained after the pruning heuristic of Algorithm 2, and (iv) the change in prediction accuracy on test data, when comparing the heuristic to the uncompressed sample. The results,
averaged over 500 trials, are summarized in Figure 2.
data set           | original sample | % after net | % after heuristic | Δ% accuracy
Skin Segmentation  | 10000           | 35.10       | 4.78              | -0.0010
Statlog Shuttle    | 2000            | 65.75       | 29.65             | +0.0080
Covertype 1 vs. 4  | 2000            | 35.85       | 17.70             | +0.0200
Covertype 4 vs. 6  | 2000            | 96.50       | 69.00             | -0.0300
Covertype 4 vs. 7  | 2000            | 4.40        | 3.40              | 0.0000

Figure 2: Summary of the performance of NN sample compression algorithms.
⁴ http://tinyurl.com/skin-data; http://tinyurl.com/shuttle-data; http://tinyurl.com/cover-data
References
[1] E. Fix and J. L. Hodges. Discriminatory analysis. Nonparametric discrimination: Consistency properties.
International Statistical Review / Revue Internationale de Statistique, 57(3):238-247, 1989.
[2] T. Cover and P. Hart. Nearest neighbor pattern classification. IEEE Trans. Info. Theo., 13:21-27, 1967.
[3] A. Kontorovich and R. Weiss. A Bayes consistent 1-NN classifier (arXiv:1407.0208), 2014.
[4] G. Toussaint. Open problems in geometric methods for instance-based learning. In Discrete and Computational Geometry, volume 2866 of Lecture Notes in Comput. Sci., pages 273-283. 2003.
[5] S. Shalev-Shwartz and S. Ben-David. Understanding Machine Learning. 2014.
[6] K. Chaudhuri and S. Dasgupta. Rates of convergence for nearest neighbor classification. In NIPS, 2014.
[7] U. von Luxburg and O. Bousquet. Distance-based classification with Lipschitz functions. JMLR, 2004.
[8] R. Krauthgamer and J. R. Lee. Navigating nets: Simple algorithms for proximity search. In SODA, 2004.
[9] K. L. Clarkson. An algorithm for approximate closest-point queries. In SCG, 1994.
[10] L. Devroye, L. Györfi, A. Krzyżak, and G. Lugosi. On the strong universal consistency of nearest neighbor
regression function estimates. Ann. Statist., 22(3):1371-1385, 1994.
[11] R. R. Snapp and S. S. Venkatesh. Asymptotic expansions of the k nearest neighbor risk. Ann. Statist.,
26(3):850-878, 1998.
[12] L. Gottlieb, A. Kontorovich, and R. Krauthgamer. Efficient classification for metric data. In COLT, 2010.
[13] P. E. Hart. The condensed nearest neighbor rule. IEEE Trans. Info. Theo., 14(3):515-516, 1968.
[14] G. Wilfong. Nearest neighbor problems. In SCG, 1991.
[15] A. V. Zukhba. NP-completeness of the problem of prototype selection in the nearest neighbor method.
Pattern Recognit. Image Anal., 20(4):484-494, 2010.
[16] F. Angiulli. Fast condensed nearest neighbor rule. In ICML, 2005.
[17] W. Gates. The reduced nearest neighbor rule. IEEE Trans. Info. Theo., 18:431-433, 1972.
[18] G. L. Ritter, H. B. Woodruff, S. R. Lowry, and T. L. Isenhour. An algorithm for a selective nearest neighbor
decision rule. IEEE Trans. Info. Theo., 21:665-669, 1975.
[19] D. R. Wilson and T. R. Martinez. Reduction techniques for instance-based learning algorithms. Mach.
Learn., 38:257-286, 2000.
[20] L. G. Valiant. A theory of the learnable. Commun. ACM, 27(11):1134-1142, 1984.
[21] D. Haussler. Quantifying inductive bias: AI learning algorithms and Valiant's learning framework. Artificial Intelligence, 36(2):177-221, 1988.
[22] F. Laviolette, M. Marchand, M. Shah, and S. Shanian. Learning the set covering machine by bound minimization and margin-sparsity trade-off. Mach. Learn., 78(1-2):175-201, 2010.
[23] M. Marchand and J. Shawe-Taylor. The set covering machine. JMLR, 3:723-746, 2002.
[24] L. Gottlieb and R. Krauthgamer. Proximity algorithms for nearly doubling spaces. SIAM J. on Discr.
Math., 27(4):1759-1769, 2013.
[25] L. Gottlieb, A. Kontorovich, and R. Krauthgamer. Adaptive metric dimensionality reduction. In ALT, 2013.
[26] A. Beygelzimer, S. Kakade, and J. Langford. Cover trees for nearest neighbor. In ICML, 2006.
[27] Y. Li and P. M. Long. Learnability and the doubling dimension. In NIPS, 2006.
[28] N. H. Bshouty, Y. Li, and P. M. Long. Using the doubling dimension to analyze the generalization of learning
algorithms. J. Comp. Sys. Sci., 75(6):323-335, 2009.
[29] L. Gottlieb, A. Kontorovich, and R. Krauthgamer. Efficient regression in metric spaces via approximate
Lipschitz extension. In SIMBAD, 2013.
[30] A. Gupta, R. Krauthgamer, and J. R. Lee. Bounded geometries, fractals, and low-distortion embeddings. In
FOCS, 2003.
[31] S. Arora, L. Babai, J. Stern, and Z. Sweedyk. The hardness of approximate optima in lattices, codes, and
systems of linear equations. In FOCS, 1993.
[32] I. Dinur and S. Safra. On the hardness of approximating label-cover. Info. Proc. Lett., 2004.
[33] M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of Machine Learning. 2012.
[34] A. Beygelzimer, S. Kakade, and J. Langford. Cover trees for nearest neighbor. In ICML, 2006.
[35] S. Har-Peled and M. Mendel. Fast construction of nets in low-dimensional metrics and their applications.
SIAM J. on Comput., 35(5):1148-1184, 2006.
[36] R. Cole and L. Gottlieb. Searching dynamic point sets in spaces with bounded doubling dimension. In STOC,
2006.
[37] N. Littlestone and M. K. Warmuth. Relating data compression and learnability. Unpublished, 1986.
[38] L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition, 1996.
[39] T. Graepel, R. Herbrich, and J. Shawe-Taylor. PAC-Bayesian compression bounds on the prediction error of
learning algorithms for classification. Mach. Learn., 59(1-2):55-76, 2005.
[40] N. Alon, S. Ben-David, N. Cesa-Bianchi, and D. Haussler. Scale-sensitive dimensions, uniform convergence,
and learnability. J. ACM, 44(4):615-631, 1997.
[41] P. Bartlett and J. Shawe-Taylor. Generalization performance of support vector machines and other pattern
classifiers, pages 43-54. 1999.
5,002 | 5,529 | Clamping Variables and Approximate Inference
Adrian Weller
Columbia University, New York, NY 10027
[email protected]
Tony Jebara
Columbia University, New York, NY 10027
[email protected]
Abstract
It was recently proved using graph covers (Ruozzi, 2012) that the Bethe partition
function is upper bounded by the true partition function for a binary pairwise
model that is attractive. Here we provide a new, arguably simpler proof from
first principles. We make use of the idea of clamping a variable to a particular
value. For an attractive model, we show that summing over the Bethe partition
functions for each sub-model obtained after clamping any variable can only raise
(and hence improve) the approximation. In fact, we derive a stronger result that
may have other useful implications. Repeatedly clamping until we obtain a model
with no cycles, where the Bethe approximation is exact, yields the result. We also
provide a related lower bound on a broad class of approximate partition functions
of general pairwise multi-label models that depends only on the topology. We
demonstrate that clamping a few wisely chosen variables can be of practical value
by dramatically reducing approximation error.
1 Introduction
Marginal inference and estimating the partition function for undirected graphical models, also
called Markov random fields (MRFs), are fundamental problems in machine learning. Exact
solutions may be obtained via variable elimination or the junction tree method, but unless the
treewidth is bounded, this can take exponential time (Pearl, 1988; Lauritzen and Spiegelhalter, 1988;
Wainwright and Jordan, 2008). Hence, many approximate methods have been developed.
Of particular note is the Bethe approximation, which is widely used via the loopy belief propagation
algorithm (LBP). Though this is typically fast and results are often accurate, in general it may converge only to a local optimum of the Bethe free energy, or may not converge at all (McEliece et al.,
1998; Murphy et al., 1999). Another drawback is that, until recently, there were no guarantees
on whether the returned approximation to the partition function was higher or lower than the true
value. Both aspects are in contrast to methods such as the tree-reweighted approximation (TRW,
Wainwright et al., 2005), which features a convex free energy and is guaranteed to return an upper
bound on the true partition function. Nevertheless, empirically, LBP or convergent implementations
of the Bethe approximation often outperform other methods (Meshi et al., 2009; Weller et al., 2014).
Using the method of graph covers (Vontobel, 2013), Ruozzi (2012) recently proved that the optimum
Bethe partition function provides a lower bound for the true value, i.e. Z_B ≤ Z, for discrete binary
MRFs with submodular log potential cost functions of any arity. Here we provide an alternative
proof for attractive binary pairwise models. Our proof does not rely on any methods of loop series
(Sudderth et al., 2007) or graph covers, but rather builds on fundamental properties of the derivatives
of the Bethe free energy. Our approach applies only to binary models (whereas Ruozzi, 2012 applies
to any arity), but we obtain stronger results for this class, from which Z_B ≤ Z easily follows. We
use the idea of clamping a variable and considering the approximate sub-partition functions over the
remaining variables, as the clamped variable takes each of its possible values.

Notation and preliminaries are presented in §2. In §3, we derive a lower bound, not just for the
standard Bethe partition function, but for a range of approximate partition functions over multi-label
variables that may be defined from a variational perspective as an optimization problem, based only
on the topology of the model. In §4, we consider the Bethe approximation for attractive binary pairwise models. We show that clamping any variable and summing the Bethe sub-partition functions
over the remaining variables can only increase (hence improve) the approximation. Together with a
similar argument to that used in §3, this proves that Z_B ≤ Z for this class of model. To derive the
result, we analyze how the optimum of the Bethe free energy varies as the singleton marginal of one
particular variable is fixed to different values in [0, 1]. Remarkably, we show that the negative of this
optimum, less the singleton entropy of the variable, is a convex function of the singleton marginal.
This may have further interesting implications. We present experiments in §5, demonstrating that
clamping even a single variable selected using a simple heuristic can be very beneficial.
1.1 Related work
Branching or conditioning on a variable (or set of variables) and approximating over the remaining
variables has a fruitful history in algorithms such as branch-and-cut (Padberg and Rinaldi, 1991;
Mitchell, 2002), work on resolution versus search (Rish and Dechter, 2000) and various approaches
of (Darwiche, 2009, Chapter 8). Cutset conditioning was discussed by Pearl (1988) and refined
by Peot and Shachter (1991) as a method to render the remaining topology acyclic in preparation
for belief propagation. Eaton and Ghahramani (2009) developed this further, introducing the conditioned belief propagation algorithm together with back-belief-propagation as a way to help identify
which variables to clamp. Liu et al. (2012) discussed feedback message passing for inference in
Gaussian (not discrete) models, deriving strong results for the particular class of attractive models. Choi and Darwiche (2008) examined methods to approximate the partition function by deleting
edges.
2 Preliminaries

We consider a pairwise model with n variables X₁, . . . , Xₙ and graph topology (V, E): V contains
nodes {1, . . . , n} where i corresponds to X_i, and E ⊆ V × V contains an edge for each pairwise
relationship. We sometimes consider multi-label models where each variable X_i takes values in
{0, . . . , L_i − 1}, and sometimes restrict attention to binary models where X_i ∈ B = {0, 1} ∀i.
Let x = (x₁, . . . , xₙ) be a configuration of all the variables, and N(i) be the neighbors of i. For
all analysis of binary models, to be consistent with Welling and Teh (2001) and Weller and Jebara
(2013), we assume a reparameterization such that p(x) = e^{−E(x)}/Z, where the energy of a configuration is E = −Σ_{i∈V} θ_i x_i − Σ_{(i,j)∈E} W_ij x_i x_j, with singleton potentials θ_i and edge weights W_ij.
2.1 Clamping a variable and related definitions

We shall find it useful to examine sub-partition functions obtained by clamping one particular variable X_i, that is we consider the model on the n − 1 variables X₁, . . . , X_{i−1}, X_{i+1}, . . . , Xₙ obtained
by setting X_i equal to one of its possible values.

Let Z|_{X_i=a} be the sub-partition function on the model obtained by setting X_i = a, a ∈ {0, . . . , L_i − 1}. Observe that true partition functions and marginals are self-consistent in the following sense:

    Z = Σ_{j=0}^{L_i−1} Z|_{X_i=j}  ∀i ∈ V,      p(X_i = a) = Z|_{X_i=a} / Σ_{j=0}^{L_i−1} Z|_{X_i=j}.      (1)
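As a brute-force sanity check (our own illustrative sketch, not code from the paper; the tiny random model and seed are assumptions), the self-consistency in (1) can be verified directly on a small binary MRF:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 4
theta = rng.uniform(-1, 1, size=n)
W = np.triu(rng.uniform(0, 2, size=(n, n)), k=1)  # attractive edge weights

def weight(x):
    x = np.array(x)
    return np.exp(theta @ x + x @ W @ x)  # e^{-E(x)} for the energy above

def Z(clamp={}):
    total = 0.0
    for x in itertools.product([0, 1], repeat=n):
        if all(x[i] == v for i, v in clamp.items()):
            total += weight(x)
    return total

# Equation (1): Z = Z|_{X_0=0} + Z|_{X_0=1}, and p(X_0 = 1) = Z|_{X_0=1} / Z
assert np.isclose(Z(), Z({0: 0}) + Z({0: 1}))
print("p(X_0 = 1) =", Z({0: 1}) / Z())
```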
This is not true in general for approximate forms of inference,¹ but if the model has no cycles, then
in many cases of interest, (1) does hold, motivating the following definition.

Definition 1. We say an approximation to the log-partition function Z_A is ExactOnTrees if it may be
specified by the variational formula −log Z_A = min_{q∈Q} F_A(q) where: (1) Q is some compact space
that includes the marginal polytope; (2) F_A is a function of the (pseudo-)distribution q (typically a
free energy approximation); and (3) for any model, whenever a subset of variables V′ ⊆ V is
clamped to particular values P = {p_i ∈ {0, . . . , L_i − 1}, ∀X_i ∈ V′}, i.e. ∀X_i ∈ V′, we constrain

¹ For example, consider a single cycle with positive edge weights. This has Z_B < Z (Weller et al., 2014),
yet after clamping any variable, each resulting sub-model is a tree hence the Bethe approximation is exact.
X_i = p_i, which we write as V′ → P, and the remaining induced graph on V \ V′ is acyclic, then the
approximation is exact, i.e. Z_A|_{V′→P} = Z|_{V′→P}. Similarly, define an approximation to be in the
broader class of NotSmallerOnTrees if it satisfies all of the above properties except that condition
(3) is relaxed to Z_A|_{V′→P} ≥ Z|_{V′→P}. Note that the Bethe approximation is ExactOnTrees, and
approximations such as TRW are NotSmallerOnTrees, in both cases whether using the marginal
polytope or any relaxation thereof, such as the cycle or local polytope (Weller et al., 2014).

We shall derive bounds on Z_A with the following idea: obtain upper or lower bounds on the approximation achieved by clamping and summing over the approximate sub-partition functions; repeat
until an acyclic graph is reached, where the approximation is either exact or bounded. We introduce
the following related concept from graph theory.

Definition 2. A feedback vertex set (FVS) of a graph is a set of vertices whose removal leaves a
graph without cycles. Determining if there exists a feedback vertex set of a given size is a classical NP-hard problem (Karp, 1972). There is a significant literature on determining the minimum
cardinality of an FVS of a graph G, which we write as τ(G). Further, if vertices are assigned non-negative weights, then a natural problem is to find an FVS with minimum weight, which we write as
τ_w(G). An FVS with a factor 2 approximation to τ_w(G) may be found in time O(|V| + |E| log |E|)
(Bafna et al., 1999). For pairwise multi-label MRFs, we may create a weighted graph from the
topology by assigning each node i a weight of log L_i, and then compute the corresponding τ_w(G).
3 Lower Bound on Approximate Partition Functions

We obtain a lower bound on any approximation that is NotSmallerOnTrees by observing that Z_A ≥
Z_A|_{X_n=j} ∀j from the definition (the sub-partition functions optimize over a subset).

Theorem 3. If a pairwise MRF has topology with an FVS of size n and corresponding values
L₁, . . . , Lₙ, then for any approximation that is NotSmallerOnTrees, Z_A ≥ Z / ∏_{i=1}^n L_i.

Proof. We proceed by induction on n. The base case n = 0 holds by the assumption that Z_A
is NotSmallerOnTrees. Now assume the result holds for n − 1 and consider an MRF which requires n vertices to be deleted to become acyclic. Clamp variable Xₙ at each of its Lₙ values
to create the approximation Z_A^{(n)} := Σ_{j=0}^{Lₙ−1} Z_A|_{Xₙ=j}. By the definition of NotSmallerOnTrees,
Z_A ≥ Z_A|_{Xₙ=j} ∀j; and by the inductive hypothesis, Z_A|_{Xₙ=j} ≥ Z|_{Xₙ=j} / ∏_{i=1}^{n−1} L_i. Hence,

    Lₙ Z_A ≥ Z_A^{(n)} = Σ_{j=0}^{Lₙ−1} Z_A|_{Xₙ=j} ≥ (1 / ∏_{i=1}^{n−1} L_i) Σ_{j=0}^{Lₙ−1} Z|_{Xₙ=j} = Z / ∏_{i=1}^{n−1} L_i.
By considering an FVS with minimum ∏_{i=1}^n L_i, Theorem 3 is equivalent to the following result.

Theorem 4. For any approximation that is NotSmallerOnTrees, Z_A ≥ Z e^{−τ_w}.

This bound applies to general multi-label models with any pairwise and singleton potentials (no
need for attractive). The bound is trivial for a tree, but already for a binary model with one cycle we
obtain that Z_B ≥ Z/2 for any potentials, even over the marginal polytope. The bound is tight, at
least for uniform L_i = L ∀i.² The bound depends only on the vertices that must be deleted to yield
a graph with no cycles, not on the number of cycles (which clearly upper bounds τ(G)). For binary
models, exact inference takes time O((|V| − τ(G)) · 2^{τ(G)}).
4 Attractive Binary Pairwise Models

In this Section, we restrict attention to the standard Bethe approximation. We shall use results
derived in (Welling and Teh, 2001) and (Weller and Jebara, 2013), and adopt similar notation. The
Bethe partition function, Z_B, is defined as in Definition 1, where Q is set as the local polytope
relaxation and F_A is the Bethe free energy, given by F(q) = E_q(E) − S_B(q), where E is the energy
and S_B is the Bethe pairwise entropy approximation (see Wainwright and Jordan, 2008 for details).
We consider attractive binary pairwise models and apply similar clamping ideas to those used in §3.
In §4.1 we show that clamping can never decrease the approximate Bethe partition function, then
use this result in §4.2 to prove that Z_B ≤ Z for this class of model. In deriving the clamping result
of §4.1, in Theorem 7 we show an interesting, stronger result on how the optimum Bethe free energy
changes as the singleton marginal q_i is varied over [0, 1].

² For example, in the binary case: consider a sub-MRF on a cycle with no singleton potentials and uniform,
very high edge weights. This can be shown to have Z_B ≈ Z/2 (Weller et al., 2014). Now connect many of these
together in a chain using very weak edges (this construction is due to N. Ruozzi).
4.1 Clamping a variable can only increase the Bethe partition function

Let Z_B be the Bethe partition function for the original model. Clamp variable X_i and form the new
approximation Z_B^{(i)} = Σ_{j=0}^{1} Z_B|_{X_i=j}. In this Section, we shall prove the following Theorem.

Theorem 5. For an attractive binary pairwise model and any variable X_i, Z_B^{(i)} ≥ Z_B.
We first introduce notation and derive preliminary results, which build to Theorem 7, our strongest
result, from which Theorem 5 easily follows. Let q = (q₁, . . . , qₙ) be a location in n-dimensional
pseudomarginal space, i.e. q_i is the singleton pseudomarginal q(X_i = 1) in the local polytope. Let
F(q) be the Bethe free energy computed at q using Bethe optimum pairwise pseudomarginals given
by the formula for q(X_i = 1, X_j = 1) = ξ_ij(q_i, q_j, W_ij) in (Welling and Teh, 2001), i.e. for an
attractive model, for edge (i, j), ξ_ij is the lower root of

    α_ij ξ_ij² − [1 + α_ij(q_i + q_j)] ξ_ij + (1 + α_ij) q_i q_j = 0,        (2)

where α_ij = e^{W_ij} − 1, and W_ij > 0 is the strength (associativity) of the log-potential edge weight.

Let G(q) = −F(q). Note that log Z_B = max_{q∈[0,1]ⁿ} G(q). For any x ∈ [0, 1], consider the
optimum constrained by holding q_i = x fixed, i.e. let log Z_B^i(x) = max_{q∈[0,1]ⁿ : q_i=x} G(q). Let
r*(x) = (r₁*(x), . . . , r_{i−1}*(x), r_{i+1}*(x), . . . , rₙ*(x)), with corresponding pairwise terms {ξ*_ij}, be an
arg max for where this optimum occurs. Observe that log Z_B^i(0) = log Z_B|_{X_i=0}, log Z_B^i(1) =
log Z_B|_{X_i=1} and log Z_B = log Z_B^i(q_i*) = max_{q∈[0,1]ⁿ} G(q), where q_i* is a location of X_i at which
the global optimum is achieved.
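The lower root of (2) is easy to compute; the following helper (a sketch we added for illustration, not the authors' code) returns the Welling-Teh optimum pairwise pseudomarginal:

```python
import math

def xi(qi, qj, W):
    """Lower root of eq. (2): the Bethe-optimal q(X_i=1, X_j=1) for an
    attractive edge of weight W > 0."""
    a = math.exp(W) - 1.0
    if a == 0.0:
        return qi * qj          # independent case, W = 0
    b = 1.0 + a * (qi + qj)
    disc = b * b - 4.0 * a * (1.0 + a) * qi * qj
    return (b - math.sqrt(disc)) / (2.0 * a)

# Example: the pairwise marginal exceeds qi*qj for an attractive edge.
print(xi(0.5, 0.5, 2.0), 0.5 * 0.5)   # approx. 0.366 vs 0.25
```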
To prove Theorem 5, we need a sufficiently good upper bound on log Z_B^i(q_i*) compared to
log Z_B^i(0) and log Z_B^i(1). First we demonstrate what such a bound could be, then prove that
this holds. Let S_i(x) = −x log x − (1 − x) log(1 − x) be the standard singleton entropy.

Lemma 6 (Demonstrating what would be a sufficiently good upper bound on log Z_B). If ∃x ∈ [0, 1]
such that log Z_B ≤ x log Z_B^i(1) + (1 − x) log Z_B^i(0) + S_i(x), then:
(i) Z_B^i(0) + Z_B^i(1) − Z_B ≥ e^m f_c(x) where f_c(x) = 1 + e^c − e^{xc + S_i(x)},
m = min(log Z_B^i(0), log Z_B^i(1)) and c = |log Z_B^i(1) − log Z_B^i(0)|; and
(ii) ∀x ∈ [0, 1], f_c(x) ≥ 0 with equality iff x = σ(c) = 1/(1 + exp(−c)), the sigmoid function.

Proof. (i) This follows easily from the assumption. (ii) This is easily checked by differentiating. It
is also given in (Koller and Friedman, 2009, Proposition 11.8).
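A quick numerical check of part (ii) (our own sketch; the grid resolution and tolerances are arbitrary) confirms that f_c is non-negative and attains its minimum exactly at the sigmoid:

```python
import numpy as np

def f(c, x):
    S = -x * np.log(x) - (1 - x) * np.log(1 - x)   # singleton entropy S_i(x)
    return 1 + np.exp(c) - np.exp(x * c + S)

for c in [0.0, 0.5, 2.0, 5.0]:
    x = np.linspace(1e-6, 1 - 1e-6, 100001)
    vals = f(c, x)
    sigmoid = 1.0 / (1.0 + np.exp(-c))
    assert vals.min() >= -1e-9                     # f_c(x) >= 0 up to rounding
    assert abs(x[vals.argmin()] - sigmoid) < 1e-3  # minimum at x = sigma(c)
```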
See Figure 6 in the Supplement for example plots of the function f_c(x). Lemma 6 motivates us to
consider if perhaps log Z_B^i(x) might be upper bounded by x log Z_B^i(1) + (1 − x) log Z_B^i(0) + S_i(x),
i.e. the linear interpolation between log Z_B^i(0) and log Z_B^i(1), plus the singleton entropy term
S_i(x). It is easily seen that this would be true if r*(q_i) were constant. In fact, we shall show that
r*(q_i) varies in a particular way which yields the following, stronger result, which, together with
Lemma 6, will prove Theorem 5.

Theorem 7. Let A_i(q_i) = log Z_B^i(q_i) − S_i(q_i). For an attractive binary pairwise model, A_i(q_i) is
a convex function.
[Figure 1: 3d plots of v_ij = Q_ij^{-1} for (a) W = 1, (b) W = 3, (c) W = 10, as functions of (q_i, q_j), using ξ_ij(q_i, q_j, W) from (Welling and Teh, 2001).]

Proof. We outline the main points of the proof. Observe that A_i(x) = max_{q∈[0,1]ⁿ : q_i=x} G(q) −
S_i(x), where G(q) = −F(q). Note that there may be multiple arg max locations r*(x). As shown
in (Weller and Jebara, 2013), F is at least thrice differentiable in (0, 1)ⁿ and all stationary points lie
in the interior (0, 1)ⁿ. Given our conditions, the 'envelope theorem' of (Milgrom, 1999, Theorem 1)
applies, showing that A_i is continuous in [0, 1] with right derivative³

    A_i′⁺(x) = max_{r*(q_i=x)} ∂/∂x [ G(q_i = x, r*(x)) − S_i(x) ]
             = max_{r*(q_i=x)} ∂/∂x [ G(q_i = x, r*(x)) ] − dS_i(x)/dx.        (3)
We shall show that this is non-decreasing, which is sufficient to show the convexity result of Theorem
7. To evaluate the right hand side of (3), we use the derivative shown by Welling and Teh (2001):

    ∂F/∂q_i = −θ_i + log Q_i,

    where log Q_i = log [ (1 − q_i)^{d_i−1} ∏_{j∈N(i)} (q_i − ξ_ij) ] / [ q_i^{d_i−1} ∏_{j∈N(i)} (1 + ξ_ij − q_i − q_j) ]   (as in Weller and Jebara, 2013)

                = log [ q_i / (1 − q_i) ] + log ∏_{j∈N(i)} Q_ij,  here defining  Q_ij = [ (q_i − ξ_ij)(1 − q_i) ] / [ (1 + ξ_ij − q_i − q_j) q_i ].
A key observation is that the log [q_i / (1 − q_i)] term is exactly −dS_i(q_i)/dq_i, and thus cancels the −dS_i(x)/dx term
at the end of (3). Hence,

    A_i′⁺(q_i) = max_{r*(q_i)} [ −Σ_{j∈N(i)} log Q_ij(q_i, r_j*, ξ*_ij) ].⁴
It remains to show that this expression is non-decreasing with q_i. We shall show something stronger:
that at every arg max r*(q_i), and for all j ∈ N(i), −log Q_ij is non-decreasing, equivalently v_ij = Q_ij^{−1} is non-decreasing. The result then follows since the max of non-decreasing functions is non-decreasing.
See Figure 1 for example plots of the v_ij function, and observe that v_ij appears to decrease with
q_i (which is unhelpful here) while it increases with q_j. Now, in an attractive model, the Bethe free
energy is submodular, i.e. ∂²F/∂q_i∂q_j ≤ 0 (Weller and Jebara, 2013), hence as q_i increases, r_j*(q_i) can
only increase (Topkis, 1978). For our purpose, we must show that dr_j*/dq_i is sufficiently large such that
dv_ij/dq_i ≥ 0. This forms the remainder of the proof.
At any particular arg max r*(q_i), writing v = v_ij[q_i, r_j*(q_i), ξ*_ij(q_i, r_j*(q_i))], we have

    dv/dq_i = ∂v/∂q_i + (∂v/∂ξ_ij)(dξ*_ij/dq_i) + (∂v/∂q_j)(dr_j*/dq_i)
            = ∂v/∂q_i + (∂v/∂ξ_ij)(∂ξ_ij/∂q_i) + [ (∂v/∂ξ_ij)(∂ξ_ij/∂q_j) + ∂v/∂q_j ] (dr_j*/dq_i).        (4)

From (Weller and Jebara, 2013),

    ∂ξ_ij/∂q_i = [ α_ij(q_j − ξ_ij) + q_j ] / [ 1 + α_ij(q_i − ξ_ij + q_j − ξ_ij) ],   where α_ij = e^{W_ij} − 1,

and similarly, ∂ξ_ij/∂q_j = [ α_ij(q_i − ξ_ij) + q_i ] / [ 1 + α_ij(q_i − ξ_ij + q_j − ξ_ij) ].
The other partial derivatives are easily derived:

    ∂v/∂q_i = [ q_i(q_j − 1)(1 − q_i) + (1 + ξ_ij − q_i − q_j)(q_i − ξ_ij) ] / [ (1 − q_i)²(q_i − ξ_ij)² ],
    ∂v/∂ξ_ij = q_i(1 − q_j) / [ (1 − q_i)(q_i − ξ_ij)² ],   and   ∂v/∂q_j = −q_i / [ (1 − q_i)(q_i − ξ_ij) ].
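These closed forms are easy to check numerically. The sketch below (our own verification aid, not from the paper; the test point is arbitrary) compares each partial derivative of v_ij = Q_ij^{-1} against a central finite-difference estimate:

```python
def v(qi, qj, xi_):
    """v_ij = Q_ij^{-1} = q_i (1 + xi - q_i - q_j) / ((1 - q_i)(q_i - xi))."""
    return qi * (1 + xi_ - qi - qj) / ((1 - qi) * (qi - xi_))

qi, qj, xi_, h = 0.4, 0.6, 0.3, 1e-6   # a valid pseudomarginal configuration

dv_dqi = (qi*(qj - 1)*(1 - qi) + (1 + xi_ - qi - qj)*(qi - xi_)) \
         / ((1 - qi)**2 * (qi - xi_)**2)
dv_dxi = qi*(1 - qj) / ((1 - qi) * (qi - xi_)**2)
dv_dqj = -qi / ((1 - qi) * (qi - xi_))

assert abs(dv_dqi - (v(qi + h, qj, xi_) - v(qi - h, qj, xi_)) / (2*h)) < 1e-4
assert abs(dv_dxi - (v(qi, qj, xi_ + h) - v(qi, qj, xi_ - h)) / (2*h)) < 1e-4
assert abs(dv_dqj - (v(qi, qj + h, xi_) - v(qi, qj - h, xi_)) / (2*h)) < 1e-4
```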
The only remaining term needed for (4) is dr_j*/dq_i. The following results are proved in the Appendix,
subject to a technical requirement that at an arg max, the reduced Hessian H_{\i}, i.e. the matrix of
second partial derivatives of F after removing the ith row and column, must be non-singular in
order to have an invertible locally linear function. Call this required property P. By nature, each
H_{\i} is positive semi-definite. If needed, a small perturbation argument allows us to assume that no
eigenvalue is 0; then in the limit as the perturbation tends to 0, Theorem 7 holds since the limit of
convex functions is convex. Let [n] = {1, . . . , n} and G be the topology of the MRF.

³ This result is similar to Danskin's theorem (Bertsekas, 1995). Intuitively, for multiple arg max locations,
each may increase at a different rate, so here we must take the max of the derivatives over all the arg max.
⁴ We remark that Q_ij is the ratio [ p(X_i=1, X_j=0) / p(X_i=0, X_j=0) ] / [ p(X_i=1) / p(X_i=0) ] = p(X_j=0 | X_i=1) / p(X_j=0 | X_i=0).
Theorem 8. For any k ∈ [n] \ i, let C_k be the connected component of G \ i that contains X_k. If
C_k + i is a tree, then

    dr_k*/dq_i = ∏_{(s→t)∈P(i→k)} [ ξ*_st − r_s* r_t* ] / [ r_s*(1 − r_s*) ],

where P(i→k) is the unique path from i to k in C_k + i, and for notational convenience, define r_i* = q_i. Proof in Appendix (subject to P).

In fact, this result applies for any combination of attractive and repulsive edges. The result is remarkable, yet also intuitive. In the numerator, ξ_st − q_s q_t = Cov_q(X_s, X_t), increasing with W_st and
equal to 0 at W_st = 0 (Weller and Jebara, 2013), and in the denominator, q_s(1 − q_s) = Var_q(X_s),
hence the ratio is exactly what is called in finance the beta of X_t with respect to X_s.⁵

In particular, Theorem 8 shows that for any j ∈ N(i) whose component is a tree, dr_j*/dq_i = [ ξ*_ij − q_i r_j* ] / [ q_i(1 − q_i) ].
The next result shows that in an attractive model, additional edges can only reinforce this sensitivity.

Theorem 9. In an attractive model with edge (i, j), dr_j*(q_i)/dq_i ≥ [ ξ*_ij − q_i r_j* ] / [ q_i(1 − q_i) ]. Proof in Appendix (subject
to P).

Now collecting all terms, substituting into (4), and using (2), after some algebra yields that dv/dq_i ≥ 0,
as required to prove Theorem 7. This now also proves Theorem 5.
4.2 The Bethe partition function lower bounds the true partition function

Theorem 5, together with an argument similar to the proof of Theorem 3, easily yields a new proof
that Z_B ≤ Z for an attractive binary pairwise model.

Theorem 10 (first proved by Ruozzi, 2012). For an attractive binary pairwise model, Z_B ≤ Z.

Proof. We shall use induction on n to show that the following statement holds for all n:
if an MRF may be rendered acyclic by deleting n vertices v₁, . . . , vₙ, then Z_B ≤ Z.
The base case n = 0 holds since the Bethe approximation is ExactOnTrees. Now assume the result
holds for n − 1 and consider an MRF which requires n vertices to be deleted to become acyclic. Clamp
variable Xₙ and consider Z_B^{(n)} = Σ_{j=0}^{1} Z_B|_{Xₙ=j}. By Theorem 5, Z_B ≤ Z_B^{(n)}; and by the inductive
hypothesis, Z_B|_{Xₙ=j} ≤ Z|_{Xₙ=j} ∀j. Hence, Z_B ≤ Σ_{j=0}^{1} Z_B|_{Xₙ=j} ≤ Σ_{j=0}^{1} Z|_{Xₙ=j} = Z.
5 Experiments

For an approximation which is ExactOnTrees, it is natural to try clamping a few variables to remove
cycles from the topology. Here we run experiments on binary pairwise models to explore the potential benefit of clamping even just one variable, though the procedure can be repeated. For exact
inference, we used the junction tree algorithm. For approximate inference, we used Frank-Wolfe
(FW) (Frank and Wolfe, 1956): at each iteration, a tangent hyperplane to the approximate free energy is computed at the current point, then a move is made to the best computed point along the
line to the vertex of the local polytope with the optimum score on the hyperplane. This proceeds
monotonically, even on a non-convex surface, hence will converge (since it is bounded), though
it may be only to a local optimum and runtime is not guaranteed. This method typically produces
good solutions in reasonable time compared to other approaches (Belanger et al., 2013; Weller et al.,
2014) and allows direct comparison to earlier results (Meshi et al., 2009; Weller et al., 2014). To
further facilitate comparison, in this Section we use the same unbiased reparameterization used by
Weller et al. (2014), with E = −Σ_{i∈V} θ_i x_i − Σ_{(i,j)∈E} (W_ij/2)[x_i x_j + (1 − x_i)(1 − x_j)].
⁵ Sudderth et al. (2007) defined a different, symmetric β_st = [ ξ_st − q_s q_t ] / [ q_s(1 − q_s) q_t(1 − q_t) ] for analyzing loop series. In
our context, we suggest that the ratio defined above may be a better Bethe beta.
Test models were constructed as follows: for n variables, singleton potentials were drawn θ_i ∼
U[−T_max, T_max]; edge weights were drawn W_ij ∼ U[0, W_max] for attractive models, or W_ij ∼
U[−W_max, W_max] for general models. For models with random edges, we constructed Erdős-Rényi
random graphs (rejecting disconnected samples), where each edge has independent probability p of
being present. To observe the effect of increasing n while maintaining approximately the same
average degree, we examined n = 10, p = 0.5 and n = 50, p = 0.1. We also examined models on
a complete graph topology with 10 variables for comparison with TRW in (Weller et al., 2014). 100
models were generated for each set of parameters with varying T_max and W_max values.
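A sketch of this model-generation procedure (our own code for illustration; the rejection loop, the connectivity test, and seed handling are assumptions):

```python
import numpy as np

def connected(W):
    """Depth-first search over the nonzero edges of W."""
    n = len(W)
    seen, stack = {0}, [0]
    while stack:
        i = stack.pop()
        for j in range(n):
            if W[i, j] != 0 and j not in seen:
                seen.add(j)
                stack.append(j)
    return len(seen) == n

def random_model(n, p, t_max, w_max, attractive, rng):
    """Draw singleton potentials and an Erdos-Renyi edge-weight matrix."""
    theta = rng.uniform(-t_max, t_max, size=n)
    while True:                          # reject disconnected samples
        W = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                if rng.random() < p:
                    lo = 0.0 if attractive else -w_max
                    W[i, j] = W[j, i] = rng.uniform(lo, w_max)
        if connected(W):
            return theta, W

theta, W = random_model(10, 0.5, 0.1, 4.0, attractive=True,
                        rng=np.random.default_rng(1))
```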
Results are displayed in Figures 2 to 4, showing average absolute error of log Z_B vs log Z and average ℓ₁ error of singleton marginals. The legend indicates the different methods used: Original is FW
on the initial model; then various methods were used to select the variable to clamp, before running
FW on the 2 resulting sub-models and combining those results. avg Clamp for log Z means the average
over all possible clampings, whereas all Clamp for marginals computes each singleton marginal as
the estimated p̂_i = Z_B|_{X_i=1} / (Z_B|_{X_i=0} + Z_B|_{X_i=1}). best Clamp uses the variable which with
hindsight gave the best improvement in the log Z estimate, thereby showing the best possible result for
log Z. Similarly, worst Clamp picks the variable which showed worst performance. Where one
variable is clamped, the respective marginals are computed thus: for the clamped variable X_i, use
p̂_i as before; for all others, take the weighted average over the estimated Bethe pseudomarginals on
each sub-model using weights 1 − p̂_i and p̂_i for sub-models with X_i = 0 and X_i = 1 respectively.

maxW and Mpower are heuristics to try to pick a good variable in advance. Ideally, we would like
to break heavy cycles, but searching for these is NP-hard. maxW is a simple O(|E|) method which
picks a variable X_i attaining max_{i∈V} Σ_{j∈N(i)} |W_ij|, and can be seen to perform well (Liu et al., 2012
proposed the same maxW approach for inference in Gaussian models); a sketch is given below. One way in which maxW
can make a poor selection is to choose a variable at the centre of a large star configuration but far
from any cycle. Mpower attempts to avoid this by considering the convergent series of powers of a
modified W matrix, but on the examples shown, this did not perform significantly better. See §8.1
in the Appendix for more details on Mpower and further experimental results.
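The maxW selection rule and the marginal-combination step described above might look as follows (our own sketch; `bethe_logZ` and `bethe_marginals` stand in for any Bethe optimizer such as Frank-Wolfe and are assumptions, not the authors' code):

```python
import numpy as np

def maxw_variable(W):
    """Pick the variable with the largest total absolute edge weight."""
    return int(np.argmax(np.abs(W).sum(axis=1)))

def clamp_and_combine(theta, W, i, bethe_logZ, bethe_marginals):
    """Clamp X_i = 0, 1; combine the two sub-model Bethe estimates."""
    logZ0, logZ1 = (bethe_logZ(theta, W, {i: v}) for v in (0, 1))
    m = max(logZ0, logZ1)
    logZ = m + np.log(np.exp(logZ0 - m) + np.exp(logZ1 - m))  # log-sum-exp
    p1 = np.exp(logZ1 - logZ)                    # estimated p-hat(X_i = 1)
    q0 = bethe_marginals(theta, W, {i: 0})
    q1 = bethe_marginals(theta, W, {i: 1})
    q = (1 - p1) * q0 + p1 * q1                  # weighted average of sub-models
    q[i] = p1
    return logZ, q
```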
FW provides no runtime guarantee when optimizing over a non-convex surface such as the Bethe
free energy, but across all parameters, the average combined runtimes on the two clamped sub-models were the same order of magnitude as that for the original model; see Figure 5.
6 Discussion

The results of §4 immediately also apply to any binary pairwise model where a subset of variables may be flipped to yield an attractive model, i.e. where the topology has no frustrated cycle
(Weller et al., 2014), and also to any model that may be reduced to an attractive binary pairwise
model (Schlesinger and Flach, 2006; Zivny et al., 2009). For this class, together with the lower
bound of §3, we have sandwiched the range of Z_B (equivalently, given Z_B, we have sandwiched the
range of the true partition function Z) and bounded its error; further, clamping any variable, solving
for optimum log Z_B on sub-models and summing is guaranteed to be more accurate than solving on
the original model. In some cases, it may also be faster; indeed, some algorithms such as LBP may
fail on the original model but perform well on clamped sub-models.

Methods presented may prove useful for analyzing general (non-attractive) models, or for other
applications. As one example, it is known that the Bethe free energy is convex for an MRF whose
topology has at most one cycle (Pakzad and Anantharam, 2002). In analyzing the Hessian of the
Bethe free energy, we are able to leverage this to show the following result, which may be useful for
optimization (proof in Appendix; this result was conjectured by N. Ruozzi).

Lemma 11. In a binary pairwise MRF (attractive or repulsive edges, any topology), for any subset
of variables S ⊆ V whose induced topology contains at most one cycle, the Bethe free energy (using
optimum pairwise marginals) over S, holding variables V \ S at fixed singleton marginals, is convex.

In §5, clamping appears to be very helpful, especially for attractive models with low singleton potentials where results are excellent (overcoming TRW's advantage in this context), but also for general
models, particularly with the simple maxW selection heuristic. We can observe some decline in
benefit as n grows, but this is not surprising when clamping just a single variable. Note, however,
that non-attractive models exist such that clamping and summing over any variable can lead to a
worse Bethe approximation of log Z; see Figure 5c for a simple example on four variables.
[Figure 2: Average errors vs true values, complete graph on n = 10. Panels: (a) attractive log Z, T_max = 0.1; (b) attractive marginals, T_max = 0.1; (c) general log Z, T_max = 2; (d) general marginals, T_max = 2; the x-axis is the maximum coupling strength W_max ∈ {2, 4, 8, 12, 16}. Methods compared: Original, avg/all Clamp, maxW Clamp, best Clamp, worst Clamp, Mpower, and TRW (in pink). Consistent legend throughout.]

[Figure 3: Average errors vs true values, random graph on n = 10, p = 0.5. Same panels as Figure 2, without TRW. Consistent legend throughout.]

[Figure 4: Average errors vs true values, random graph on n = 50, p = 0.1. Same panels and legend as Figure 3.]

[Figure 5: Left: average ratio of combined sub-model runtimes to original runtime (using maxW; other choices are similar), for attractive and general random graphs with n ∈ {10, 50} and T_max ∈ {0.1, 2}. Right (panel c): an example model on four variables x₁, . . . , x₄ where clamping any variable worsens the Bethe approximation to log Z; blue (dashed red) edges are attractive (repulsive) with edge weight +2 (−2), and there are no singleton potentials.]
It will be interesting to explore the extent to which our results may be generalized beyond binary
pairwise models. Further, it is tempting to speculate that similar results may be found for other
approximations. For example, some methods that upper bound the partition function, such as TRW,
might always yield a lower (hence better) approximation when a variable is clamped.
Acknowledgments. We thank Nicholas Ruozzi for careful reading, and Nicholas, David Sontag,
Aryeh Kontorovich and Tomaž Slivnik for helpful discussion and comments. This work was supported in part by NSF grants IIS-1117631 and CCF-1302269.
References
V. Bafna, P. Berman, and T. Fujito. A 2-approximation algorithm for the undirected feedback vertex set problem. SIAM Journal on Discrete Mathematics, 12(3):289-297, 1999.
D. Belanger, D. Sheldon, and A. McCallum. Marginal inference in MRFs using Frank-Wolfe. In NIPS Workshop on Greedy Optimization, Frank-Wolfe and Friends, December 2013.
D. Bertsekas. Nonlinear Programming. Athena Scientific, 1995.
A. Choi and A. Darwiche. Approximating the partition function by deleting and then correcting for model
edges. In Uncertainty in Artificial Intelligence (UAI), 2008.
A. Darwiche. Modeling and Reasoning with Bayesian Networks. Cambridge University Press, 2009.
F. Eaton and Z. Ghahramani. Choosing a variable to clamp: Approximate inference using conditioned belief
propagation. In Artificial Intelligence and Statistics, 2009.
K. Fan. Topological proofs for certain theorems on matrices with non-negative elements. Monatshefte für
Mathematik, 62:219-237, 1958.
M. Frank and P. Wolfe. An algorithm for quadratic programming. Naval Research Logistics Quarterly, 3(1-2):
95-110, 1956. ISSN 1931-9193. doi: 10.1002/nav.3800030109.
R. Karp. Complexity of Computer Computations, chapter Reducibility Among Combinatorial Problems, pages
85-103. New York: Plenum, 1972.
D. Koller and N. Friedman. Probabilistic Graphical Models - Principles and Techniques. MIT Press, 2009.
S. Lauritzen and D. Spiegelhalter. Local computations with probabilities on graphical structures and their
application to expert systems. Journal of the Royal Statistical Society, Series B, 50:157-224, 1988.
Y. Liu, V. Chandrasekaran, A. Anandkumar, and A. Willsky. Feedback message passing for inference in
Gaussian graphical models. IEEE Transactions on Signal Processing, 60(8):4135-4150, 2012.
R. McEliece, D. MacKay, and J. Cheng. Turbo decoding as an instance of Pearl's "Belief Propagation" algorithm. IEEE Journal on Selected Areas in Communications, 16(2):140-152, 1998.
O. Meshi, A. Jaimovich, A. Globerson, and N. Friedman. Convexifying the Bethe free energy. In UAI, 2009.
P. Milgrom. The envelope theorems. Department of Economics, Stanford University, Mimeo, 1999. URL
http://www-siepr.stanford.edu/workp/swp99016.pdf.
J. Mitchell. Branch-and-cut algorithms for combinatorial optimization problems. Handbook of Applied Optimization, pages 65-77, 2002.
K. Murphy, Y. Weiss, and M. Jordan. Loopy belief propagation for approximate inference: An empirical study.
In Uncertainty in Artificial Intelligence (UAI), 1999.
M. Padberg and G. Rinaldi. A branch-and-cut algorithm for the resolution of large-scale symmetric traveling
salesman problems. SIAM Review, 33(1):60-100, 1991.
P. Pakzad and V. Anantharam. Belief propagation and statistical physics. In Princeton University, 2002.
J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann,
1988.
M. Peot and R. Shachter. Fusion and propagation with multiple observations in belief networks. Artificial
Intelligence, 48(3):299-318, 1991.
I. Rish and R. Dechter. Resolution versus search: Two strategies for SAT. Journal of Automated Reasoning, 24
(1-2):225-275, 2000.
N. Ruozzi. The Bethe partition function of log-supermodular graphical models. In Neural Information Processing Systems, 2012.
D. Schlesinger and B. Flach. Transforming an arbitrary minsum problem into a binary one. Technical report,
Dresden University of Technology, 2006.
E. Sudderth, M. Wainwright, and A. Willsky. Loop series and Bethe variational bounds in attractive graphical
models. In NIPS, 2007.
D. Topkis. Minimizing a submodular function on a lattice. Operations Research, 26(2):305-321, 1978.
P. Vontobel. Counting in graph covers: A combinatorial characterization of the Bethe entropy function. IEEE
Transactions on Information Theory, 59(9):6018-6048, September 2013. ISSN 0018-9448.
M. Wainwright and M. Jordan. Graphical models, exponential families and variational inference. Foundations
and Trends in Machine Learning, 1(1-2):1-305, 2008.
M. Wainwright, T. Jaakkola, and A. Willsky. A new class of upper bounds on the log partition function. IEEE
Transactions on Information Theory, 51(7):2313-2335, 2005.
A. Weller and T. Jebara. Bethe bounds and approximating the global optimum. In AISTATS, 2013.
A. Weller and T. Jebara. Approximating the Bethe partition function. In UAI, 2014.
A. Weller, K. Tang, D. Sontag, and T. Jebara. Understanding the Bethe approximation: When and how can it
go wrong? In Uncertainty in Artificial Intelligence (UAI), 2014.
M. Welling and Y. Teh. Belief optimization for binary networks: A stable alternative to loopy belief propagation. In Uncertainty in Artificial Intelligence (UAI), 2001.
S. Zivny, D. Cohen, and P. Jeavons. The expressive power of binary submodular functions. Discrete Applied
Mathematics, 157(15):3347-3358, 2009.
Propagation Filters in PDS Networks for Sequencing and Ambiguity Resolution
Ronald A. Sumida
Michael G. Dyer
Artificial Intelligence Laboratory
Computer Science Department
University of California
Los Angeles, CA, 90024
[email protected]
Abstract
We present a Parallel Distributed Semantic (PDS) Network architecture
that addresses the problems of sequencing and ambiguity resolution in
natural language understanding. A PDS Network stores phrases and their
meanings using multiple PDP networks, structured in the form of a semantic net. A mechanism called Propagation Filters is employed: (1) to
control communication between networks, (2) to properly sequence the
components of a phrase, and (3) to resolve ambiguities. Simulation results
indicate that PDS Networks and Propagation Filters can successfully represent high-level knowledge, can be trained relatively quickly, and provide
for parallel inferencing at the knowledge level.
1 INTRODUCTION
Backpropagation has shown considerable potential for addressing problems in natural language processing (NLP). However, the traditional PDP [Rumelhart and
McClelland, 1986] approach of using one (or a small number) of backprop networks
for NLP has been plagued by a number of problems: (1) it has been largely unsuccessful at representing high-level knowledge, (2) the networks are slow to train, and
(3) they are sequential at the knowledge level. A solution to these problems is to
represent high-level knowledge structures over a large number of smaller PDP networks. Reducing the size of each network allows for much faster training, and since
the different networks can operate in parallel, more than one knowledge structure
can be stored or accessed at a time.
In using multiple networks, however, a number of important issues must be addressed: how the individual networks communicate with one another, how patterns
are routed from one network to another, and how sequencing is accomplished as
patterns are propagated. In previous papers [Sumida and Dyer, 1989] [Sumida,
1991], we have demonstrated how to represent high-level semantic knowledge and
generate dynamic inferences using Parallel Distributed Semantic (PDS) Networks,
which structure multiple PDP networks in the form of a semantic network. This
paper discusses how Propagation Filters address communication and sequencing
issues in using multiple PDP networks for NLP.
2 PROPAGATION FILTERS
Propagation Filters are inspired by the idea of skeleton filters, proposed by [Sejnowski, 1981, Hinton, 1981]. They are composed of: (1) sets of filter ensembles
that gate the connection from a source to a destination and (2) a selector ensemble
that decides which filter group to enable. Each filter group is sensitive to a particular pattern over the selector. When the particular pattern occurs, the source
pattern is propagated to its destination. Figure 1 is an example of a propagation filter where the "01" pattern over units 2 and 3 of the selector opens up filter group1,
thus permitting the pattern to be copied from source1 to destination1. The units
of filter group2 do not respond to the "01" pattern and remain well below threshold,
so the activation pattern over the source2 ensemble is not propagated.
Figure 1: A Propagation Filter architecture. The small circles indicate PDP units
within an ensemble (oval), the black arrows represent full connectivity between two
ensembles, and the dotted lines connecting units 2 and 3 of the selector to each
filter group oval indicate total connectivity from selector units to filter units. The
jagged lines are suggestive of temporary patterns of activation over an ensemble.
The units in a filter group receive input from units in the selector. The weights
on these input connections are set so that when a specific pattern occurs over the
selector, every unit in the filter group is driven above threshold. The filter units
also receive input from the source units and provide output to the destination units.
The weights on both these i/o connections can be set so that the filter merely copies
the pattern from the source to the destination when its units exceed threshold (as
in Figure 1). Alternatively, these weights can be set (e.g. using backpropagation)
so that the filter transforms the source pattern to a desired destination pattern.
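The gating mechanism just described can be made concrete in a few lines. The following sketch is a minimal NumPy rendering, not the original implementation; the function name, the gate weights and the threshold are illustrative assumptions:

```python
import numpy as np

def propagation_filter(source, selector, gate_weights, threshold=0.5):
    # Filter units sum weighted input from the selector ensemble; the group
    # opens only when every filter unit is driven above threshold.
    drive = gate_weights @ selector
    gate_open = bool(np.all(drive > threshold))
    # An open filter copies the source pattern to its destination;
    # a closed filter propagates nothing.
    return source.copy() if gate_open else np.zeros_like(source)
```

With gate weights tuned to the "01" pattern over selector units 2 and 3, filter group1 would fire and copy source1, while group2's units stay below threshold and block source2.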
3 PDS NETWORKS
PDS Networks store syntactic and semantic information over multiple PDP networks, with each network representing a class of concepts and with related networks connected in the general manner of a semantic net. For example, Figure 2
shows a network for encoding a basic sentence consisting of a subject, verb and
direct object. The network is connected to other PDP networks, such as HUMAN,
VERB and ANIMAL, that store information about the content of the subject role
(s-content), the filler for the verb role, and the content of the direct-object role
(do-content). Each network functions as a type of encoder net, where: (1) the input
and output layers have the same number of units and are presented with exactly the
same pattern, (2) the weights of the network are modified so that the input pattern
will recreate itself as output, and (3) the resulting hidden unit pattern represents
a reduced description of the input. In the networks that we use, a single set of
units is used for both the input and output layers. The net can thus be viewed as
an encoder with the output layer folded back onto the input layer and with two
sets of connections: one from the single input/output layer to the hidden layer,
and one from the hidden layer back to the i/o layer. In Figure 2 for example, the
subject-content, verb, and direct-object-content role-groups collectively represent
the input/output layer, and the BASIC-S ensemble represents the hidden layer.
Figure 2: The network that stores information about a basic sentence. The black
arrows represent links from the input layer to the hidden layer and the grey arrows
indicate links from the hidden layer to the output layer. The thick lines represent
links between networks that propagate a pattern without changing it.
A network stores information by learning to encode the items in its training set.
For each item, the patterns that represent its features are presented to the input
role groups, and the weights are modified so that the patterns recreate themselves
as output. For example, in Figure 2, the MAN-"hit"-DOG pattern is presented to
the BASIC-S network by propagating the MAN pattern from the HUMAN network
to the s-content role, the "hit" pattern from the VERB network to the verb-content
role, and the DOG pattern from the ANIMAL network to the do-content role.
The BASIC-S network is then trained on this pattern by modifying the weights
between the input/output role groups and the BASIC-S hidden units so that the
MAN-"hit"-DOG pattern recreates itself as output. The network automatically
generalizes by having the hidden units become sensitive to common features of the
training patterns. When the network is tested on a new concept (i.e., one that is
not in the training set), the pattern over the hidden units reflects its similarity to
the items seen during training.
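A minimal sketch of such a folded encoder net, assuming plain NumPy and squared-error backpropagation (the learning rate, epochs and initialization here are illustrative, not the settings used in the simulations):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_encoder(patterns, hidden_size, lr=0.05, epochs=1000):
    # patterns: one row per training item over the shared input/output layer
    d = patterns.shape[1]
    W_in = rng.normal(0.0, 0.1, (d, hidden_size))    # i/o layer -> hidden
    W_out = rng.normal(0.0, 0.1, (hidden_size, d))   # hidden -> i/o layer
    for _ in range(epochs):
        h = np.tanh(patterns @ W_in)                 # reduced description
        recon = h @ W_out                            # output folded onto the input layer
        err = recon - patterns
        W_out -= lr * h.T @ err / len(patterns)
        W_in -= lr * patterns.T @ ((err @ W_out.T) * (1 - h ** 2)) / len(patterns)
    return W_in, W_out
```

After training, `np.tanh(x @ W_in)` for an unseen pattern x yields a hidden pattern whose distance to the stored hidden patterns reflects the similarity noted above.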
3.1 SEQUENCING PHRASES
To illustrate how Propagation Filters sequence the components of a phrase, consider
the following sentence, whose constituents occur in the standard subject-verb-object
order: S1. The man hit the dog. We would like to recognize that the BASIC-S
network of Figure 2 is applicable to the input by binding the roles of the network to
the correct components. In order to generate the proper role bindings, the system
must: (1) recognize the components of the sentence in the correct order (e.g. "the
man" should be recognized as the subject, "hit" as the verb, and "the dog" as
the direct object), and (2) associate each phrase of the input with its meaning
(e.g. reading the phrase "the man" should cause the pattern for the concept MAN
to appear over the HUMAN units). Figure 3 illustrates how Propagation Filters
properly sequence the components of the sentence.
First, the phrase "the man" is read by placing the pattern for "the" over the determiner network (Step 1) and the pattern for "man" over the noun network (Step 2).
The "the" pattern is then propagated to the np-determiner input role units of
the NP network (Step 3) and the "man" pattern to the np-noun role input units
(Step 4). The pattern that results over the hidden NP units is then used to represent the entire phrase "the man" (Step 5). The filters connecting the NP units with
the subject and direct object roles are not enabled, so the pattern is not yet bound
to any role. Next, the word "hit" is read and a pattern for it is generated over
the VERB units (Step 6). The BASIC-S network is now applicable to the input
(for simplicity of exposition, we ignore passive constructions here). Since there are
no restrictions (i.e., no filter) on the connection between the VERB units and the
verb role of BASIC-S, the "hit" pattern is bound to the verb role (Step 7). The
verb role units act as the selector of the Propagation Filter that connects the NP
units to the subject units. The filter is constructed so that whenever any of the
verb role units receive non-zero input (i.e., whenever the role is bound) it opens up
the filter group connecting NP with the subject role (Step 8). Thus, the pattern
for "the man" is copied from NP to the subject (Step 9) and deleted from the NP
units. Similarly, the subject units act as the selector of a filter that connects NP
with the direct object. Since the subject was just bound, the connection from the
NP to direct object is enabled (Step 10). At this point, the system has generated
the expectation that a NP will occur next. The phrase "the dog" is now read and
Figure 3: The figure shows how Propagation Filters sequence the components of
the sentence "The man hit the dog". The numbers indicate the order of events.
The dotted arrows indicate Propagation Filter connections from a selector to an
open filter group (indicated by a black circle) and the dark arrows represent the
connections from a source to a destination.
its pattern is generated over the NP units (Steps 11-15). Finally, the pattern for
"the dog" is copied across the open connection from NP to direct-object (Step 16).
3.2 ASSOCIATING PHRASES WITH MEANINGS
The next task is to associate lexical patterns with their corresponding semantic patterns and bind semantic patterns to the appropriate roles in the BASIC-S network.
Figure 4 indicates how Propagation Filters: (1) transform the phrase "the man"
into its meaning (i.e., MAN), and (2) bind MAN to the s-content role of BASIC-S.
Reading the word "man", by placing the "man" pattern into the noun units (Step 2),
opens the filter connecting N to HUMAN (Step 5), while leaving the filters connecting N to other networks (e.g. ANIMAL) closed. The opened filter transforms
the lexical pattern "man" over N into the semantic pattern MAN over HUMAN
(Step 7). Binding "the man" to subject (Step 8) by the procedure shown in the
Figure 3 opens the filter connecting HUMAN to the s-content role of BASIC-S
(Step 9). The s-content role is then bound to MAN (Step 10).
The do-content role is bound by a procedure similar to that shown in Figure 4.
When "dog" is read, the filter connecting N with ANIMAL is opened while filters
to other networks (e.g. HUMAN) remain closed. The "dog" pattern is then transformed into the semantic pattern DOG over the ANIMAL units. When "the dog"
Figure 4: The figure illustrates how the concept MAN is bound to the s-content role
of BASIC-S, given the phrase "the man" as input. Black (white) circles indicate
open (closed) filters.
is bound to direct-object as in Figure 3, the filter from ANIMAL to do-content is
opened, and DOG is propagated from ANIMAL to the do-content role of BASIC-S.
3.3 AMBIGUITY RESOLUTION AND INFERENCING
There are two forms that inference and ambiguity resolution can take: (1) routing
patterns (e.g. propagation of role bindings) to the appropriate subnets and (2)
pattern reconstruction from items seen during training.
(1) Pattern Routing: Propagation Filters help resolve ambiguities by having the
selector only open connections to the network containing the correct interpretation.
As an example, consider the following sentence: S2. The singer hit the note. Both
S2 and S1 (Sec. 3.1) have the same syntactic structure and are therefore represented
over the BASIC-S ensemble of Figure 2. However, the meaning of the word "hit"
in S1 refers to physically striking an object while in S2 it refers to singing a musical
note. The pattern over the BASIC-S units that represents S1 differs significantly
from the pattern that represents S2, due to the differences in the s-content and
do-content roles. A Propagation Filter with the BASIC-S units as its selector uses
the differences in the two patterns to determine whether to open connections to the
HIT network or to the PERFORM-MUSIC network (Figure 5).
Figure 5: The pattern over BASIC-S acts as a selector that determines whether
to open the connections to HIT or to PERFORM-MUSIC. Since the input here is
MAN-"hit"-DOG, the filters to HIT are opened while the filters to PERFORMMUSIC remain closed. The black and grey arrows indicating connections between
the input/output and hidden layers have been replaced by a single thin line.
During training, the BASIC-S network was presented with sentences of the general
form <MUSIC-PERFORMER "hit" MUSICAL-NOTE> and <ANIMATE "hit"
OBJECT>. The BASIC-S hidden units generalize from the training sentences by
developing a distinct pattern for each of the two types of "hit" sentences. The Propagation Filter is then constructed so that the hidden unit pattern for <MUSICPERFORMER "hit" MUSICAL-NOTE> opens up connections to PERFORMMUSIC, while the pattern for <ANIMATE "hit" OBJECT> opens up connections
to HIT. Thus, when S1 is presented, the BASIC-S hidden units develop the pattern
classifying it as <ANIMATE "hit" OBJECT>, which enables connections to HIT.
For example, Figure 5 shows how the MAN pattern is routed from the s-content
role of BASIC-S to the actor role of HIT and the DOG pattern is routed from the
do-content role of BASIC-S to the object role of HIT. If S2 is presented instead, the
hidden units will classify it as <MUSIC-PERFORMER "hit" MUSICAL-NOTE>
and open the connections to PERFORM-MUSIC.
The technique of using propagation filters to control pattern routing can also be
applied to generate inferences. Consider the sentence, "Douglas hit Tyson". Since
both are boxers, it is plausible they are involved in a competitive activity. In S1,
however, punishing the dog is a more plausible motivation for HIT. The proper
inference is generated in each case by training the HIT network (Figure 5) on a
number of instances of boxers hitting one another and of people hitting dogs. The
network learns two distinct sets of hidden unit patterns: <BOXER-HIT-BOXER>
and <HUMAN-HIT-DOG>. A Propagation Filter, (like that shown in Figure 5)
with the HIT units as its selector, uses the differences in the two classes of patterns
to route to either the network that stores competitive activities or to the network
that stores punishment acts.
(2) Pattern Reconstruction: The system also resolves ambiguities by reconstructing
patterns that were seen during training. For example, the word "note" in sentence
S2 is ambiguous and could refer to a message, as in "The singer left the note".
Thus, when the word "note" is read in S2, the do-content role of BASIC-S can
be bound to MESSAGE or to MUSICAL-NOTE. To resolve the ambiguity, the
BASIC-S network uses the information that SINGER is bound to the s-content role
and "hit" to the verb role to: (1) reconstruct the <MUSIC-PERFORMER "hit"
MUSICAL-NOTE> pattern that it learned during training and (2) predict that the
do-content will be MUSICAL-NOTE. Since the prediction is consistent with one of
the possible meanings for the do-content role, the ambiguity is resolved. Similarly, if
the input had been "The singer left the note", BASIC-S would use the binding of a
human to the s-content role and the binding of "left" to the verb role to reconstruct
the pattern <HUMAN "left" MESSAGE> and thus resolve the ambiguity.
4 CURRENT STATUS AND CONCLUSIONS
PDS Networks and Propagation Filters are implemented in OCAIN, a natural language understanding system that: (1) takes each word of the input sequentially, (2)
binds the roles of the corresponding syntactic and semantic structures in the proper
order, and (3) resolves ambiguities. In our simulations with OCAIN, we successfully represented high-level knowledge by structuring individual PDP networks in
the form of a semantic net. Because the system's knowledge is spread over multiple
subnetworks, each one is relatively small and can therefore be trained quickly. Since
the subnetworks can operate in parallel, OCAIN is able to store and retrieve more
than one knowledge structure simultaneously, thus achieving knowledge-level parallelism. Because PDP ensembles (versus single localist units) are used, the generalization, noise and fault-tolerance properties of the PDP approach are retained. At
the same time, Propagation Filters provide control over the way patterns are routed
(and transformed) between subnetworks. The PDS architecture, with its Propagation Filters, thus provides significant advantages over traditional PDP models for
natural language understanding.
References
[Hinton, 1981] G. E. Hinton. Implementing Semantic Networks in Parallel Hardware. In Parallel Models of Associative Memory, Lawrence Erlbaum, Hillsdale,
NJ, 1981.
[Rumelhart and McClelland, 1986] D. E. Rumelhart and J. L. McClelland. Parallel
Distributed Processing, Volume 1. MIT Press, Cambridge, Massachusetts, 1986.
[Sejnowski, 1981] T. J. Sejnowski. Skeleton Filters in the Brain. In Parallel Models
of Associative Memory, Lawrence Erlbaum, Hillsdale, NJ, 1981.
[Sumida and Dyer, 1989] R. A. Sumida and M. G. Dyer. Storing and Generalizing
Multiple Instances while Maintaining Knowledge-Level Parallelism. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence,
Detroit, MI, 1989.
[Sumida, 1991] R. A. Sumida. Dynamic Inferencing in Parallel Distributed Semantic Networks. In Proceedings of the Thirteenth Annual Conference of the Cognitive
Science Society, Chicago, IL, 1991.
| 553 |@word open:13 grey:2 simulation:2 propagate:1 current:1 activation:2 yet:1 must:2 ronald:1 chicago:1 enables:1 intelligence:2 item:4 provides:1 accessed:1 constructed:2 direct:9 become:1 eleventh:1 manner:1 themselves:1 brain:1 inspired:1 automatically:1 resolve:6 nj:2 every:1 act:4 exactly:1 hit:36 control:3 unit:44 appear:1 bind:3 encoding:1 black:5 differs:1 backpropagation:2 procedure:2 significantly:1 word:6 refers:2 onto:1 restriction:1 demonstrated:1 lexical:2 resolution:4 simplicity:1 fill:1 enabled:2 retrieve:1 construction:1 us:3 associate:2 rumelhart:3 role:41 singing:1 connected:2 pd:12 skeleton:2 dynamic:2 trained:3 animate:3 resolved:1 joint:1 represented:2 train:1 distinct:2 sejnowski:3 artificial:2 whose:1 plausible:2 reconstruct:2 encoder:2 syntactic:3 transform:1 itself:2 associative:2 sequence:4 advantage:1 net:6 reconstruction:2 description:1 constituent:1 los:1 object:15 help:1 illustrate:1 inferencing:3 develop:1 subnets:1 propagating:1 ij:1 implemented:1 c:1 indicate:7 thick:1 correct:3 filter:56 modifying:1 opened:4 human:11 routing:3 enable:1 implementing:1 hillsdale:2 backprop:1 generalization:1 plagued:1 lawrence:2 predict:1 group1:1 determiner:2 applicable:2 sensitive:2 successfully:2 detroit:1 reflects:1 mit:1 modified:2 encode:1 structuring:1 properly:2 sequencing:5 indicates:1 inference:4 entire:1 hidden:16 transformed:2 i1:1 issue:2 animal:7 noun:3 having:2 represents:4 placing:2 thin:1 tyson:1 np:14 composed:1 simultaneously:1 recognize:2 individual:2 replaced:1 consisting:1 connects:2 message:3 circle:3 desired:1 instance:2 classify:1 phrase:12 localist:1 addressing:1 erlbaum:2 stored:1 punishment:1 international:1 destination:7 michael:1 connecting:7 quickly:2 connectivity:2 ambiguity:11 nm:2 containing:1 cognitive:1 potential:1 sec:1 jagged:1 mv:1 closed:4 competitive:2 parallel:10 il:1 musical:7 largely:1 ensemble:9 generalize:1 whenever:2 involved:1 mi:1 propagated:5 massachusetts:1 knowledge:11 back:2 recreates:1 just:1 propagation:26 indicated:1 concept:4 read:5 laboratory:1 semantic:15 white:1 during:5 ambiguous:1 passive:1 meaning:6 common:1 volume:1 interpretation:1 refer:1 significant:1 cambridge:1 similarly:2 language:4 had:1 actor:1 similarity:1 driven:1 store:8 route:1 fault:1 accomplished:1 tvm:1 seen:3 performer:3 employed:1 recognized:1 determine:1 multiple:7 full:1 faster:1 permitting:1 prediction:1 basic:27 expectation:1 physically:1 represent:10 receive:3 thirteenth:1 addressed:1 source:6 leaving:1 operate:2 subject:12 exceed:1 architecture:3 associating:1 idea:1 det:1 angeles:1 recreate:2 whether:2 routed:4 cause:1 ievel:1 transforms:2 dark:1 hardware:1 mcclelland:3 reduced:1 generate:3 sl:3 dotted:2 group:10 threshold:2 achieving:1 deleted:1 changing:1 douglas:1 merely:1 communicate:1 respond:1 striking:1 layer:15 bound:10 copied:3 annual:1 activity:2 occur:2 ucla:1 relatively:2 department:1 structured:1 developing:1 smaller:1 remain:3 across:1 reconstructing:1 s1:2 discus:1 mechanism:1 singer:4 dyer:8 subnetworks:3 generalizes:1 appropriate:2 gate:1 nlp:3 maintaining:1 music:7 society:1 occurs:2 traditional:2 link:3 retained:1 proper:3 perform:3 hinton:3 communication:2 pdp:12 verb:18 dog:21 sentence:12 connection:17 california:1 learned:1 temporary:1 address:2 able:1 below:1 parallelism:2 pattern:69 reading:2 unsuccessful:1 memory:2 event:1 natural:4 representing:2 understanding:3 versus:1 consistent:1 classifying:1 storing:1 copy:1 distributed:4 tolerance:1 boxer:4 selector:14 ignore:1 status:1 suggestive:1 decides:1 
sequentially:1 alternatively:1 ca:1 punishing:1 spread:1 arrow:6 s2:7 motivation:1 noise:1 slow:1 learns:1 specific:1 sequential:1 illustrates:2 generalizing:1 hitting:2 collectively:1 binding:6 determines:1 ma:3 viewed:1 exposition:1 man:30 considerable:1 content:25 folded:1 reducing:1 called:1 oval:2 total:1 indicating:1 people:1 filler:1 tested:1 |
Advances in Learning Bayesian Networks of Bounded Treewidth
Denis D. Mauá
University of São Paulo
São Paulo, Brazil
[email protected]
Siqi Nie
Rensselaer Polytechnic Institute
Troy, NY, USA
[email protected]
Qiang Ji
Rensselaer Polytechnic Institute
Troy, NY, USA
[email protected]
Cassio P. de Campos
Queen?s University Belfast
Belfast, UK
[email protected]
Abstract
This work presents novel algorithms for learning Bayesian networks of bounded
treewidth. Both exact and approximate methods are developed. The exact method
combines mixed integer linear programming formulations for structure learning
and treewidth computation. The approximate method consists in sampling k-trees
(maximal graphs of treewidth k), and subsequently selecting, exactly or approximately, the best structure whose moral graph is a subgraph of that k-tree. The
approaches are empirically compared to each other and to state-of-the-art methods on a collection of public data sets with up to 100 variables.
1 Introduction
Bayesian networks are graphical models widely used to represent joint probability distributions on
complex multivariate domains. A Bayesian network comprises two parts: a directed acyclic graph
(the structure) describing the relationships among the variables in the model, and a collection of
conditional probability tables from which the joint distribution can be reconstructed. As the number
of variables in the model increases, specifying the underlying structure becomes a daunting task,
and practitioners often resort to learning Bayesian networks directly from data. Here, learning a
Bayesian network refers to inferring its structure from data, a task known to be NP-hard [9].
Learned Bayesian networks are commonly used for drawing inferences such as querying the posterior probability of some variable given some evidence or finding the mode of the posterior joint
distribution. Those inferences are NP-hard to compute even approximately [23], and all known
exact and provably good algorithms have worst-case time complexity exponential in the treewidth,
which is a measure of the tree-likeness of the structure. In fact, under widely believed assumptions
from complexity theory, exponential time complexity in the treewidth is inevitable for any algorithm
that performs exact inference [7, 20]. Thus, learning networks of small treewidth is essential if one
wishes to ensure reliable and efficient inference. This is particularly important in the presence of
missing data, when learning becomes intertwined with inference [16]. There is a second reason to
limit the treewidth. Previous empirical results [15, 22] suggest that bounding the treewidth improves
model performance on unseen data, hence improving the model generalization ability.
In this paper we present two novel ideas for score-based Bayesian network learning with a hard
constraint on treewidth. The first one is a mixed-integer linear programming (MILP) formulation
of the problem (Section 3) that builds on existing MILP formulations for unconstrained learning
of Bayesian networks [10, 11] and for computing the treewidth of a graph [17]. Unlike the MILP
formulation of Parviainen et al. [21], the MILP problem we generate is of polynomial size in the
number of variables, and dispenses with the use of cutting-plane techniques. This makes for a clean
and succinct formulation that can be solved with a single call of any MILP optimizer. We provide
some empirical evidence (in Section 5) that suggests that our approach is not only simpler but often
faster. It also outperforms the dynamic programming approach of Korhonen and Parviainen [19].
Since linear programming relaxations are used for solving the MILP problem, any MILP formulation can be used to provide approximate solutions and error estimates in an anytime fashion (i.e., the
method can be stopped at any time during the computation with a feasible solution whose quality
monotonically improves with time). However, the MILP formulations (both ours and that of Parviainen et al. [21]) cannot cope with very large domains, even if we settle for approximate solutions.
In order to deal with large domains, we devise (in Section 4) an approximate method based on a
uniform sampling of k-trees (maximal chordal graphs of treewidth k), which is achieved by using
a fast computable bijection between k-trees and Dandelion codes [6]. For each sampled k-tree, we
either run an exact algorithm similar to the one in [19] (when computationally appealing) to learn
the score-maximizing network whose moral graph is a subgraph of that k-tree, or we resort to a
more efficient method that takes partial variable orderings uniformly at random from a (relatively
small) space of orderings that are compatible with the k-tree. We show empirically (in Section 5)
that our sampling-based methods are very effective in learning close to optimal structures and scale
up to large domains. We conclude in Section 6 and point out possible future work. We begin with
some background knowledge and literature review on learning Bayesian networks (Section 2).
2 Bayesian Network Structure Learning
Let N be {1, . . . , n} and consider a finite set X = {X_i : i ∈ N} of categorical random variables
X_i taking values in finite sets 𝒳_i. A Bayesian network is a triple (X, G, θ), where G = (N, A)
is a directed acyclic graph (DAG) whose nodes are in one-to-one correspondence with variables in
X, and θ = {θ_i(x_i, x_{G_i})} is a set of numerical parameters specifying (conditional) probabilities
θ_i(x_i, x_{G_i}) = Pr(x_i | x_{G_i}), for every node i in G, value x_i of X_i and assignment x_{G_i} to the parents
G_i of X_i in G. The structure G of the network represents a set of stochastic independence assessments among variables in X called graphical Markov conditions: every variable X_i is conditionally
independent of its nondescendant nonparents given its parents. As a consequence, a Bayesian network uniquely defines a joint probability distribution over X as the product of its parameters.
As it is common in the literature, we formulate the problem of Bayesian network learning as an
optimization over DAG structures guided by a score function. We only require that (i) the score
function can be written as a sum of local score functions s_i(G_i), i ∈ N, each depending only on
the corresponding parent set G_i and on the data, and (ii) the local score functions can be efficiently
computed and stored [13, 14]. These properties are satisfied by commonly used score functions
such as the Bayesian Dirichlet equivalent uniform score [18]. We assume the reader is familiar with
graph-theoretic concepts such as polytrees, chordal graphs, chordalizations, moral graphs, moralizations, topological orders, (perfect) elimination orders, fill-in edges and clique-trees. References
[1] and [20] are good starting points to the topic.
Most score functions penalize model complexity in order to avoid overfitting. The way scores penalize model complexity generally leads to learning structures of bounded in-degree, but even bounded
in-degree graphs can have high treewidth (for instance, directed square grids have treewidth equal
to the square root of the number of nodes, yet have maximum in-degree equal to two), which brings
difficulty to subsequent probabilistic inferences with the model [5].
The goal of this work is to develop methods that search for
    G* = argmax_{G ∈ G_{N,k}} Σ_{i∈N} s_i(G_i),                                   (1)

where G_{N,k} is the set of all DAGs with node set N and treewidth at most k. Dasgupta proved
NP-hardness of learning polytrees of bounded treewidth when the score is data log likelihood [12].
Korhonen and Parviainen [19] adapted Srebro's complexity result for Markov networks [25] to show
that learning Bayesian networks of treewidth two or greater is NP-hard.
In comparison to the unconstrained problem, few algorithms have been designed for the bounded
treewidth case. Korhonen and Parviainen [19] developed an exact algorithm based on dynamic
programming that learns optimal n-node structures of treewidth at most w in time 3^n n^{w+O(1)},
which is above the 2^n n^{O(1)} time required by the best worst-case algorithms for learning optimal
Bayesian networks with no constraint on treewidth [24]. We shall refer to their method in the rest
of this paper as K&P (after the authors? initials). Elidan and Gould [15] combined several heuristics
to treewidth computation and network structure learning in order to design approximate methods.
Others have addressed the similar (but not equivalent) problem of learning undirected models of
bounded treewidth [2, 8, 25]. Very recently, there seems to be an increase of interest in the topic.
Berg et al. [4] showed that the problem of learning bounded treewidth Bayesian networks can be
reduced to a weighted maximum satisfiability problem, and subsequently solved by weighted MAXSAT solvers. They report experimental results showing that their approach outperforms K&P. In the
same year, Parviainen et al. [21] showed that the problem can be reduced to a MILP. Their reduced
MILP problem however has exponentially many constraints in the number of variables. Following
the work of Cussens [10], the authors avoid creating such large programs by a cutting plane generation mechanism, which iteratively includes a new constraint while the optimum is not found. The
generation of each new constraint (cutting plane) requires solving another MILP problem. We shall
refer to their method from now on as TWILP (after the name of the software package the authors
provide).
3 A Mixed Integer Linear Programming Approach
The first contribution of this work is the MILP formulation that we design to solve the problem of
structure learning with bounded treewidth. MILP formulations have been shown to be very effective for
learning Bayesian networks with no constraint on treewidth [3, 10], surpassing other attempts in
a range of data sets. The formulation is based on combining the MILP formulation for structure
learning in [11] with the MILP formulation presented in [17] for computing the treewidth of an
undirected graph. There are however notable differences: for instance, we do not enforce a linear
elimination ordering of nodes; instead we allow for partial orders which capture the equivalence between different orders in terms of minimizing treewidth, and we represent such partial order by real
numbers instead of integers. We avoid the use of sophisticate techniques for solving MILP problems
such as constraint generation [3, 10], which allows for an easy implementation and parallelization
(MILP optimizers such as CPLEX can take advantage of that).
For each node i in N, let P_i be the collection of allowed parent sets for that node (these sets can
be specified manually by the user or simply defined as the subsets of N \ {i} with cardinality less
than a given bound). We denote an element of P_i as P_it, with t = 1, . . . , r_i and r_i = |P_i| (hence
P_it ⊆ N). We will refer to a DAG as valid if its node set is N and the parent set of each node i in it
is an element of P_i. The following MILP problem can be used to find valid DAGs whose treewidth
is at most w:
    Maximize   Σ_{i,t} p_it · s_i(P_it)                                            (2)

subject to

    Σ_{j∈N} y_ij ≤ w,                        ∀i ∈ N,                               (3a)
    (n + 1) · y_ij ≤ n + z_j − z_i,          ∀i, j ∈ N,                            (3b)
    y_ij + y_ik − y_jk − y_kj ≤ 1,           ∀i, j, k ∈ N,                         (3c)
    Σ_t p_it = 1,                            ∀i ∈ N,                               (4a)
    (n + 1) p_it ≤ n + v_j − v_i,            ∀i ∈ N, ∀t ∈ {1, . . . , r_i}, ∀j ∈ P_it,       (4b)
    p_it ≤ y_ij + y_ji,                      ∀i ∈ N, ∀t ∈ {1, . . . , r_i}, ∀j ∈ P_it,       (4c)
    p_it ≤ y_jk + y_kj,                      ∀i ∈ N, ∀t ∈ {1, . . . , r_i}, ∀j, k ∈ P_it,    (4d)
    z_i ∈ [0, n], v_i ∈ [0, n], y_ij ∈ {0, 1}, p_it ∈ {0, 1},   ∀i, j ∈ N, ∀t ∈ {1, . . . , r_i}.  (5)
The variables p_it define which parent sets are chosen, while the variables v_i guarantee that those
choices respect a linear ordering of the variables, and hence that the corresponding directed graph
is acyclic. The variables y_ij specify a chordal moralization of this DAG with arcs respecting an
elimination ordering of width at most w, which is given by the variables z_i.
The following result shows that any solution to the MILP above can be decoded into a chordal graph
of bounded treewidth and a suitable perfect elimination ordering.
3
Lemma 1. Let z_i, y_ij, i, j ∈ N, be variables satisfying Constraints (3) and (5). Then the undirected
graph M = (N, E), where E = {ij ∈ N × N : y_ij = 1 or y_ji = 1}, is chordal and has treewidth
at most w. Any elimination ordering that extends the weak ordering induced by z_i is perfect for M.
The graph M is used in the formulation as a template for the moral graph of a valid DAG:
Lemma 2. Let v_i, p_it, i ∈ N, t = 1, . . . , r_i, be variables satisfying Constraints (4) and (5). Then
the directed graph G = (N, A), where G_i = {j : p_it = 1 and j ∈ P_it}, is acyclic and valid.
Moreover the moral graph of G is a subgraph of the graph M defined in the previous lemma.
The previous lemmas suffice to show that the solutions of the MILP problem can be decoded into
valid DAGs of bounded treewidth:
Theorem 1. Any solution to the MILP can be decoded into a valid DAG of treewidth less than or
equal to w. In particular, the decoding of an optimal solution solves (1).
The MILP formulation can be directly fed into any off-the-shelf MILP optimizer. Most MILP optimizers (e.g. CPLEX) can be prematurely stopped while providing an incumbent solution and an
error estimate. Moreover, given enough resources (time and memory), these solvers always find
optimal solutions. Hence, the MILP formulation provides an anytime algorithm that can be used to
provide both exact and approximate solutions.
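To make the formulation concrete, here is a sketch of how problem (2)-(5) could be assembled with the PuLP modeling library and handed to any MILP backend. This is not the authors' implementation (they call CPLEX directly); all names are illustrative assumptions:

```python
import pulp

def build_milp(n, scores, parent_sets, w):
    # scores[i][t] = s_i(P_it); parent_sets[i][t] = the t-th allowed parent set of node i
    N = range(n)
    T = {i: range(len(parent_sets[i])) for i in N}
    prob = pulp.LpProblem("bn_bounded_treewidth", pulp.LpMaximize)
    z = pulp.LpVariable.dicts("z", N, lowBound=0, upBound=n)
    v = pulp.LpVariable.dicts("v", N, lowBound=0, upBound=n)
    y = pulp.LpVariable.dicts("y", [(i, j) for i in N for j in N], cat="Binary")
    p = pulp.LpVariable.dicts("p", [(i, t) for i in N for t in T[i]], cat="Binary")
    prob += pulp.lpSum(scores[i][t] * p[i, t] for i in N for t in T[i])     # (2)
    for i in N:
        prob += pulp.lpSum(y[i, j] for j in N) <= w                         # (3a)
        prob += pulp.lpSum(p[i, t] for t in T[i]) == 1                      # (4a)
        for j in N:
            prob += (n + 1) * y[i, j] <= n + z[j] - z[i]                    # (3b)
            for k in N:
                prob += y[i, j] + y[i, k] - y[j, k] - y[k, j] <= 1          # (3c)
        for t in T[i]:
            for j in parent_sets[i][t]:
                prob += (n + 1) * p[i, t] <= n + v[j] - v[i]                # (4b)
                prob += p[i, t] <= y[i, j] + y[j, i]                        # (4c)
                for k in parent_sets[i][t]:
                    if j < k:
                        prob += p[i, t] <= y[j, k] + y[k, j]                # (4d)
    return prob
```

Calling `prob.solve()` then invokes the default open-source backend; commercial solvers such as CPLEX can be plugged in and stopped early for the anytime behavior described above.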
The bottleneck in terms of efficiency of the MILP construction lies in the specification of Constraints (3c) and (4d), as there are Θ(n³) such constraints. Thus, as n increases even the linear
relaxations of the MILP problem become hard to solve. We demonstrate empirically in Section 5
that the quality of solutions found by the MILP approach in a reasonable amount of time degrades
quickly as the number of variables exceeds a few dozens. In the next section, we present an approximate algorithm to overcome such limitations and handle large domains.
4 A Sampling Based Approach
A successful method for learning Bayesian networks of unconstrained treewidth on large domains
is order-based local search, which consists in sampling topological orderings for the variables and
selecting optimal compatible DAGs [26]. Given a topological ordering, the optimal DAG can be
found in linear time (assuming scores are given as input), hence rendering order-based search really effective in exploring the solution space. A naive extension of that approach to the bounded
treewidth case would be to (i) sample a topological order, (ii) find the optimal compatible DAG, (iii)
verify the treewidth and discard if it exceeds the desired bound. There are two serious issues with
that approach. First, verifying the treewidth is an NP-hard problem, and even if there are linear-time
algorithms (which are exponential in the treewidth), they perform poorly in practice; second, the vast
majority of structures would be discarded, since the most used score functions penalize the number
of free parameters, which correlates poorly with treewidth [5].
In this section, we propose a more sophisticated extension of order-based search to learn bounded
treewidth structures. Our method relies on sampling k-trees, which are defined inductively as follows [6]. A complete graph with k + 1 nodes (i.e., a (k + 1)-clique) is a k-tree. Let T_k = (V, E)
be a k-tree, K be a k-clique in it, and v be a node not in V. Then the graph obtained by connecting
v to every node in K is also a k-tree. A k-tree is a maximal graph of treewidth k in the sense that
no edge can be added without increasing the treewidth. Every graph of treewidth at most k is a
subgraph of some k-tree. Hence, Bayesian networks of treewidth bounded by k are exactly those
whose moral graph is a subgraph of some k-tree [19]. We are interested in k-trees over the nodes N
of the Bayesian network and where k = w is the bound we impose on the treewidth.
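For intuition, the inductive definition translates directly into a generator of k-trees (sketch below, with illustrative names). Note that picking the attachment clique uniformly at each step does not sample k-trees uniformly, which is precisely why the Dandelion-code machinery described next is needed:

```python
import itertools
import random

def random_ktree(n, k, seed=None):
    rng = random.Random(seed)
    nodes = list(range(k + 1))
    edges = set(itertools.combinations(nodes, 2))            # seed (k+1)-clique
    k_cliques = [frozenset(c) for c in itertools.combinations(nodes, k)]
    for v in range(k + 1, n):
        K = rng.choice(k_cliques)                            # an existing k-clique
        edges.update((u, v) for u in sorted(K))              # connect v to all of K
        k_cliques.extend((K - {u}) | {v} for u in K)         # new k-cliques through v
    return edges
```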
Caminiti et al. [6] proposed a linear time method (in both n and k) for coding and decoding k-trees into what is called (generalized) Dandelion codes. They also established a bijection between
Dandelion codes and k-trees. Hence, sampling Dandelion codes is essentially equivalent to sampling
k-trees. The former however is computationally much easier and faster to perform, especially if we
want to draw samples uniformly at random (uniform sampling provides good coverage of the space
and produces low variance estimates across data sets). Formally, a Dandelion code is a pair (Q, S),
where Q ⊆ N with |Q| = k and S is a list of n − k − 2 pairs of integers drawn from N ∪ {ε}, where
ε is an arbitrary number not in N. Dandelion codes can be sampled uniformly by a trivial linear-time
algorithm that uniformly chooses k elements from N to build Q, then uniformly samples n − k − 2
pairs of integers in N ∪ {ε}. Algorithm 1 contains a high-level description of our approach.
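Sampling a code uniformly is indeed trivial; the sketch below (with ε encoded as None, names illustrative, and assuming n ≥ k + 2) does it in linear time. The decoding of (Q, S) into a k-tree follows Caminiti et al. [6] and is omitted here:

```python
import random

def sample_dandelion_code(n, k, seed=None):
    rng = random.Random(seed)
    nodes = list(range(1, n + 1))
    Q = rng.sample(nodes, k)                 # k distinct elements of N
    EPS = None                               # stands for epsilon, a symbol not in N
    pool = nodes + [EPS]
    S = [(rng.choice(pool), rng.choice(pool)) for _ in range(n - k - 2)]
    return Q, S
```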
Algorithm 1 Learning a structure of bounded treewidth by sampling Dandelion codes.
% Takes a score function s_i, i ∈ N, and an integer k, and outputs a DAG G* of treewidth ≤ k.
1 Initialize G* as an empty DAG.
2 Repeat a certain number of iterations:
2.a Uniformly sample a Dandelion code (Q, S) and decode it into T_k.
2.b Search for a DAG G that maximizes the score function and is compatible with T_k.
2.c If Σ_{i∈N} s_i(G_i) > Σ_{i∈N} s_i(G*_i), update G* ← G.
We assume from now on that a k-tree T_k is available, and consider the problem of searching for a
compatible DAG that maximizes the score (Step 2.b). Korhonen and Parviainen [19] presented an
algorithm (which we call K&P) that given an undirected graph M finds a DAG G maximizing the
score function such that the moralization of G is a subgraph of M . The algorithm runs in time and
space O(n) assuming the scores are part of the input (hence pre-computed and accessed at constant
time). We can use their algorithm to find the optimal structure whose moral graph is a subgraph of
T_k. We call this approach S+K&P, as a reminder of (k-tree) sampling followed by K&P.
Theorem 2. The size of the sampling space of S+K&P is less than e^{n log(nk)}. Each of its iterations
runs in linear time in n (but exponential in k).
According to the result above, the sampling space of S+K&P is not much bigger than that of standard order-based local search (which is approximately e^{n log n}), especially if k ≪ n. The practical
drawback of this approach is the Θ(k·3^k·(k + 1)!·n) time taken by K&P to process each sampled
k-tree, which forbids its use for moderately high treewidth bounds (say, k ≥ 10). Our experiments
in the next section further corroborate our claim: S+K&P often performs poorly even on small k,
mostly due to the small number of k-trees sampled within the given time limit. A better approach is
to sacrifice the optimality of the search for compatible DAGs in exchange for an efficiency gain. We
next present a method based on sampling topological orderings that achieves such a goal.
Let C_i be the collection of maximal cliques of T_k that contain a certain node i (these can be obtained
efficiently, as T_k is chordal), and consider a topological ordering < of N. Let C_{<i} = {j ∈ C : j <
i}. We can find an optimal DAG G compatible with < and T_k by making G_i = argmax{s_i(P) :
P ⊆ C_{<i}, C ∈ C_i} for each i ∈ N. The graph G is acyclic since each parent set G_i respects the
topological ordering by construction. Its treewidth is at most k because both i and G_i belong to a
clique C of T_k, which implies that the moralization of G is a subgraph of T_k.
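A direct (if naive) rendering of this selection rule, enumerating all subsets of the earlier members of each clique; `rank`, `cliques_of` and `score` are illustrative placeholders for the topological order, the clique membership lists and the precomputed local scores:

```python
from itertools import chain, combinations

def best_parents_given_order(nodes, cliques_of, rank, score):
    G = {}
    for i in nodes:
        candidates = {frozenset()}           # the empty parent set is always allowed
        for C in cliques_of[i]:
            earlier = [j for j in C if rank[j] < rank[i]]
            candidates.update(
                frozenset(P) for P in chain.from_iterable(
                    combinations(earlier, r) for r in range(1, len(earlier) + 1)))
        G[i] = max(candidates, key=lambda P: score(i, P))
    return G
```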
Sampling topological orderings is both inefficient and wasteful, as different topological orderings
impose the same constraints on the choices of G_i. To see this, consider the k-tree with edges 1–2,
1–3, 2–3, 2–4 and 3–4. Since there is no edge connecting nodes 1 and 4, their relative ordering is
irrelevant when choosing either G_1 or G_4. A better approach is to linearly order the nodes in each
maximal clique.
A k-tree T_k can be represented by a clique-tree structure, which comprises its maximal cliques
C_1, . . . , C_{n−k} and a tree T over the maximal cliques. Every two adjacent cliques in T differ by
exactly one node. Assume T is rooted at a clique R, so we can unambiguously refer to the (single)
parent of a (maximal) clique and to its children. A clique-tree structure as such can directly be
obtained from the process of decoding a Dandelion code [6]. The procedure in Algorithm 2 shows
how to efficiently obtain a collection of compatible orderings of the nodes of each clique of a k-tree.
Algorithm 2 Sampling a partial order within a k-tree.
% Takes a k-tree represented as a clique-tree structure T rooted at R, and outputs a collection of
orderings σ_C for every maximal clique C of T.
1 Sample an order σ_R of the nodes in R, paint R black and the other maximal cliques white.
2 Repeat until all maximal cliques are painted black:
2.a Take a white clique C whose parent clique P in T is black, and let i be the single node in C \ P.
2.b Sample a relative order for i with respect to σ_P (i.e., insert i into some arbitrary position of the
projection of σ_P onto C), and generate σ_C accordingly; when done paint C black.
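A sketch of Algorithm 2, assuming the clique tree is given as a root, a children map and the node set of each clique (all names illustrative):

```python
import random

def sample_clique_orders(root, children, clique_nodes, seed=None):
    rng = random.Random(seed)
    orders = {root: rng.sample(list(clique_nodes[root]), len(clique_nodes[root]))}
    stack = [root]
    while stack:                              # parents are processed before children
        P = stack.pop()
        for C in children.get(P, []):
            (i,) = set(clique_nodes[C]) - set(clique_nodes[P])   # the single new node
            proj = [u for u in orders[P] if u in clique_nodes[C]]
            pos = rng.randint(0, len(proj))   # arbitrary insertion point for i
            orders[C] = proj[:pos] + [i] + proj[pos:]
            stack.append(C)
    return orders
```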
Table 1: Number of variables in the data sets.
nursery  breast  housing  adult  zoo  letter  mushroom  wdbc  audio  hill  community
   9       10       14      15    17    17       22      31     62    100     100
The cliques in Algorithm 2 are processed in topological ordering in the clique-tree structure, which
ensures that the order σ_P of the parent P of a clique C is already defined when processing C (note
that the order in which we process cliques does not restrict the possible orderings among nodes). At
the end, we have a node ordering for each clique. Given such a collection of local orderings, we can
efficiently learn the optimal parent set of every node i by
    G_i = argmax { s_i(P) : P ⊆ C, P ≺ σ_C, C ∈ C_i },                             (6)

where P ≺ σ_C denotes that the parent sets are constrained to be nodes smaller than i in σ_C. In
fact, the choices made in (6) can be implemented together with step 2.b of Algorithm 2, providing
a slight increase of efficiency. We call the method obtained by Algorithm 1 with partial orderings
established by Algorithm 2 and parent set selection by (6) as S2, in allusion to the double sampling
scheme of k-trees and local node orderings.
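The resulting parent choice for a single node, given the per-clique orders from Algorithm 2, might look as follows (a sketch with illustrative names; the real implementation folds this into step 2.b as noted above):

```python
from itertools import chain, combinations

def s2_parents(i, cliques_of, orders, score):
    best = frozenset()
    for C in cliques_of[i]:
        sigma = orders[C]
        earlier = sigma[:sigma.index(i)]      # members of C preceding i in sigma_C
        for P in chain.from_iterable(combinations(earlier, r)
                                     for r in range(len(earlier) + 1)):
            if score(i, frozenset(P)) > score(i, best):
                best = frozenset(P)
    return best
```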
Theorem 3. S2 samples DAGs on a sample space of size k! · (k + 1)^{n−k}, and runs in linear time
in n and k.
The generation of partial orderings can also serve to implement the DAG search in S+K&P, by
replacing the sampling with complete enumeration of them. Then Step 2.b would be performed for
each compatible ordering σ_P of the parent in a recursive way. Dynamic programming can be used
to make the procedure more efficient. We have actually used this approach in our implementation
of S+K&P. Finally, the sampling can be enhanced by some systematic search in the neighborhood
of the sampled candidates. We have implemented and put in place a simple hill-climbing procedure
for that, even though the quality of solutions does not considerably improve by doing so.
5 Experiments
We empirically analyzed the accuracy of the algorithms proposed here against each other and
against the available implementations of TWILP (https://bitbucket.org/twilp/twilp/) and K&P
(http://www.cs.helsinki.fi/u/jazkorho/aistats-2013/) on a collection of data sets from the UCI repository. The S+K&P and S2 algorithms were implemented (purely) in Matlab. The data sets were
selected so as to span a wide range of dimensionality, and were preprocessed to have variables discretized over the median value when needed. Some columns of the original data sets audio and
community were discarded: 7 variables of audio had a constant value, 5 variables of community
have almost one different value per sample (such as personal data), and 22 variables had missing
data (Table 1 shows the number of (binary) variables after pre-processing). In all experiments, we
maximize the Bayesian Dirichlet equivalent uniform score with equivalent sample size equal to one.
5.1 Exact Solutions
We refer to our MILP formulation simply as MILP hereafter. We compared MILP, TWILP, and K&P on the task of finding an optimal structure. Table 2 reports the running time on a selection of data sets of reasonably low dimensionality and small values of the treewidth bound. The experiments were run on a computer with 32 cores, a memory limit of 64GB, a time limit of 3h, and a maximum number of parents of three (the latter restriction facilitates the experiments and does not constrain the treewidth). In cases where MILP or TWILP did not finish, we also report the error estimates from CPLEX (an error of e% means that the achieved solution is certainly not more than e% worse than the optimal). While we emphasize that one should be careful when directly comparing execution time between methods, as the implementations use different languages (we run CPLEX 12.4, the original K&P uses Cython-compiled Python code, and TWILP uses a Python interface to CPLEX to generate the cutting planes), we note that MILP goes much further in terms of which data sets and treewidth values it can handle. MILP found the optimal structure in all instances, but was not always able to certify optimality in the allotted time. TWILP found the optimum for all treewidth bounds only on the nursery and breast data sets. The results also suggest that MILP becomes faster as the bound increases, while TWILP running times remain almost unaltered. This might be explained by the fact that the MILP formulation is complete, so increasing the bound makes good solutions easier to encounter, whereas TWILP needs to keep generating constraints until an optimal solution can be certified.
Table 2: Time to learn an optimal Bayesian network subject to treewidth bound w. Dashes denote failure to solve due to excessive memory demand. Bracketed percentages are CPLEX error estimates at the 3h time limit.

method  w   nursery (n=9)  breast (n=10)  housing (n=14)  adult (n=15)  mushroom (n=22)
MILP    2   1s             31s            3h [2.4%]       3h [0.39%]    3h [50%]
MILP    3   <1s            19s            25m             3h [0.04%]    3h [19.3%]
MILP    4   <1s            8s             80s             40m           3h [14.9%]
MILP    5   <1s            8s             56s             37s           3h [11.2%]
TWILP   2   5m             3h [0.5%]      3h [7%]         3h [0.6%]     3h [32%]
TWILP   3   5s             3h [3%]        3h [9%]         3h [0.7%]     3h [31%]
TWILP   4   <1s            3h [0.3%]      3h [9%]         3h [0.9%]     3h [27%]
TWILP   5   <1s            3h [0.5%]      3h [7%]         3h [0.9%]     3h [23%]
K&P     2   7s             26s            128m            137m          —
K&P     3   72s            5m             —               —             —
K&P     4   12m            103m           —               —             —
K&P     5   131m           —              —               —             —

5.2 Approximate Solutions
We used treewidth bounds of 4 and 10 and a maximum parent set size of 3, except for hill and community, where it was set to 2 to help the integer programming approaches (which suffer the most from large parent sets). To be fair to all methods, we pre-computed the scores and treated them as input to the problem. Both MILP and TWILP used CPLEX 12.4 with a memory limit of 64GB to solve the optimizations. We allowed CPLEX to run for up to three hours, collecting the incumbent solution after 10 minutes. S+K&P and S2 were given 10 minutes. The evaluation at 10 minutes is to be seen as an early-stage comparison for applications that need a reasonably fast response. To account for the intrinsic variability of the sampling methods with respect to the sampling seed, S+K&P and S2 were run ten times on each data set with different seeds; we report the minimum, median, and maximum values obtained over the runs.
Figure 1 shows the normalized scores (in percentage) of each method on each data set. The normalized score of a method that returns a solution with score s on a certain data set is

$$\text{norm-score}(s) = \frac{s - s_{\text{empty}}}{s_{\max} - s_{\text{empty}}}\,,$$

where $s_{\text{empty}}$ is the score of an empty DAG (used as baseline) and $s_{\max}$ is the maximum score over all methods on that data set. Hence, a normalized score of 0 indicates the method found solutions only as good as the empty graph (a trivial solution), whereas a normalized score of 1 indicates the method performed best on that data set.
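For concreteness, the normalization is a plain linear rescaling; a one-line Scala transcription follows (the numeric example is illustrative):

// Normalized score from Figure 1: 0 matches the empty-graph baseline,
// 1 matches the best method on that data set.
def normScore(s: Double, sEmpty: Double, sMax: Double): Double =
  (s - sEmpty) / (sMax - sEmpty)

// e.g. normScore(-105.0, -120.0, -100.0) == 0.75, i.e. 75%.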
The exponential dependence on treewidth prevents S+K&P from running with a treewidth bound greater than 6. We see from the plot on the left that S2 is largely superior to S+K&P, even though S2 finds suboptimal networks for each given k-tree. This suggests that finding good k-trees is more important than selecting good networks for a given k-tree. We also see that both integer programming formulations scale poorly with the number of variables, being unable to obtain satisfactory solutions for data sets with more than 50 variables. For the hill data set and treewidth ≤ 4, MILP was not able to find a feasible solution within 10 minutes, and could only find the trivial solution (the empty DAG) after 3 hours; TWILP did not find any solution even after 3 hours. On the community data set with treewidth ≤ 4, neither MILP nor TWILP found a solution within 3 hours. For treewidth ≤ 10, the integer programming approaches performed even worse: TWILP could not provide a solution for the audio, hill, and community data sets, and MILP could only find the empty graph.

Since both S+K&P and S2 were implemented in Matlab, the comparison with either MILP or TWILP within the same time period (10 minutes) might be unfair (one could also try to improve the MILP formulation, although it would eventually suffer from the problems discussed in Section 3). Nevertheless, the results show that S2 is very competitive even at an implementation disadvantage.
Figure 1: Normalized scores (%) on each data set for treewidth ≤ 4 (left panel) and treewidth ≤ 10 (right panel), comparing S+K&P, S2, MILP-10m, MILP-3h, TWILP-10m, and TWILP-3h. Missing results indicate failure to provide a solution.
6 Conclusions
We presented exact and approximate procedures to learn Bayesian networks of bounded treewidth. The exact procedure is based on a MILP formulation, and is shown to outperform other methods for exact learning, including the different MILP formulation proposed in [21]. Our MILP approach is also competitive when used to produce approximate solutions. However, due to the cubic number of constraints, the MILP formulation cannot cope with very large domains, and there is probably little we can do to considerably improve this situation. Constraint generation techniques [3] are yet to be explored, even though we do not expect them to produce dramatic performance gains: the competing objectives of maximizing score and bounding treewidth usually lead to the generation of a large number of constraints.

To tackle large problems, we developed an approximate algorithm that samples k-trees and then searches for compatible structures. We derived two variants by trading off the computational effort spent in sampling k-trees and in searching for compatible structures. The sampling-based methods are empirically shown to provide fairly accurate solutions and to scale to large domains.
Acknowledgments

We thank the authors of [19, 21] for making their software publicly available and the anonymous reviewers for their useful suggestions. Most of this work was performed while C. P. de Campos was with the Dalle Molle Institute for Artificial Intelligence. This work has been partially supported by Swiss NSF grant 200021 146606/1, by São Paulo Research Foundation (FAPESP) grant 2013/23197-4, and by grant N00014-12-1-0868 from the US Office of Naval Research.
References

[1] S. Arnborg, D. Corneil, and A. Proskurowski. Complexity of finding embeddings in a k-tree. SIAM J. on Matrix Analysis and Applications, 8(2):277–284, 1987.
[2] F. R. Bach and M. I. Jordan. Thin junction trees. In Advances in Neural Inf. Proc. Systems 14, pages 569–576, 2001.
[3] M. Bartlett and J. Cussens. Advances in Bayesian Network Learning using Integer Programming. In Proc. 29th Conf. on Uncertainty in AI, pages 182–191, 2013.
[4] J. Berg, M. Järvisalo, and B. Malone. Learning optimal bounded treewidth Bayesian networks via maximum satisfiability. In Proc. 17th Int. Conf. on AI and Stat., pages 86–95, 2014. JMLR W&CP 33.
[5] A. Beygelzimer and I. Rish. Inference complexity as a model-selection criterion for learning Bayesian networks. In Proc. 8th Int. Conf. Princ. Knowledge Representation and Reasoning, pages 558–567, 1998.
[6] S. Caminiti, E. G. Fusco, and R. Petreschi. Bijective linear time coding and decoding for k-trees. Theory of Comp. Systems, 46(2):284–300, 2010.
[7] V. Chandrasekaran, N. Srebro, and P. Harsha. Complexity of inference in graphical models. In Proc. 24th Conf. on Uncertainty in AI, pages 70–78, 2008.
[8] A. Chechetka and C. Guestrin. Efficient principled learning of thin junction trees. In Advances in Neural Inf. Proc. Systems, pages 273–280, 2007.
[9] D. M. Chickering. Learning Bayesian networks is NP-complete. In Learning from Data: AI and Stat. V, pages 121–130. Springer-Verlag, 1996.
[10] J. Cussens. Bayesian network learning with cutting planes. In Proc. 27th Conf. on Uncertainty in AI, pages 153–160, 2011.
[11] J. Cussens, M. Bartlett, E. M. Jones, and N. A. Sheehan. Maximum Likelihood Pedigree Reconstruction using Integer Linear Programming. Genetic Epidemiology, 37(1):69–83, 2013.
[12] S. Dasgupta. Learning polytrees. In Proc. 15th Conf. on Uncertainty in AI, pages 134–141, 1999.
[13] C. P. de Campos and Q. Ji. Efficient structure learning of Bayesian networks using constraints. J. of Mach. Learning Res., 12:663–689, 2011.
[14] C. P. de Campos, Z. Zeng, and Q. Ji. Structure learning of Bayesian networks using constraints. In Proc. 26th Int. Conf. on Mach. Learning, pages 113–120, 2009.
[15] G. Elidan and S. Gould. Learning Bounded Treewidth Bayesian Networks. J. of Mach. Learning Res., 9:2699–2731, 2008.
[16] N. Friedman. The Bayesian structural EM algorithm. In Proc. 14th Conf. on Uncertainty in AI, pages 129–138, 1998.
[17] A. Grigoriev, H. Ensinck, and N. Usotskaya. Integer linear programming formulations for treewidth. Technical report, Maastricht Res. School of Economics of Tech. and Organization, 2011.
[18] D. Heckerman, D. Geiger, and D. M. Chickering. Learning Bayesian networks: The combination of knowledge and statistical data. Mach. Learning, 20(3):197–243, 1995.
[19] J. H. Korhonen and P. Parviainen. Exact learning of bounded tree-width Bayesian networks. In Proc. 16th Int. Conf. on AI and Stat., pages 370–378, 2013. JMLR W&CP 31.
[20] J. H. P. Kwisthout, H. L. Bodlaender, and L. C. van der Gaag. The Necessity of Bounded Treewidth for Efficient Inference in Bayesian Networks. In Proc. 19th European Conf. on AI, pages 237–242, 2010.
[21] P. Parviainen, H. S. Farahani, and J. Lagergren. Learning bounded tree-width Bayesian networks using integer linear programming. In Proc. 17th Int. Conf. on AI and Stat., pages 751–759, 2014. JMLR W&CP 33.
[22] E. Perrier, S. Imoto, and S. Miyano. Finding optimal Bayesian network given a super-structure. J. of Mach. Learning Res., 9(2):2251–2286, 2008.
[23] D. Roth. On the hardness of approximate reasoning. Artif. Intell., 82(1–2):273–302, 1996.
[24] T. Silander and P. Myllymaki. A simple approach for finding the globally optimal Bayesian network structure. In Proc. 22nd Conf. on Uncertainty in AI, pages 445–452, 2006.
[25] N. Srebro. Maximum likelihood bounded tree-width Markov networks. Artif. Intell., 143(1):123–138, 2003.
[26] M. Teyssier and D. Koller. Ordering-based search: A simple and effective algorithm for learning Bayesian networks. In Proc. 21st Conf. on Uncertainty in AI, pages 584–590, 2005.
Augur: Data-Parallel Probabilistic Modeling
Jean-Baptiste Tristan¹, Daniel Huang², Joseph Tassarotti³, Adam Pocock¹, Stephen J. Green¹, Guy L. Steele, Jr.¹
¹ Oracle Labs {jean.baptiste.tristan, adam.pocock, stephen.x.green, guy.steele}@oracle.com
² Harvard University [email protected]
³ Carnegie Mellon University [email protected]
Abstract
Implementing inference procedures for each new probabilistic model is time-consuming and error-prone. Probabilistic programming addresses this problem
by allowing a user to specify the model and then automatically generating the
inference procedure. To make this practical it is important to generate high performance inference code. In turn, on modern architectures, high performance requires parallel execution. In this paper we present Augur, a probabilistic modeling
language and compiler for Bayesian networks designed to make effective use of
data-parallel architectures such as GPUs. We show that the compiler can generate
data-parallel inference code scalable to thousands of GPU cores by making use of
the conditional independence relationships in the Bayesian network.
1 Introduction
Machine learning, and especially probabilistic modeling, can be difficult to apply. A user needs to
not only design the model, but also implement an efficient inference procedure. There are many different inference algorithms, many of which are conceptually complicated and difficult to implement
at scale. This complexity makes it difficult to design and test new models, or to compare inference
algorithms. Therefore any effort to simplify the use of probabilistic models is useful.
Probabilistic programming [1], as introduced by BUGS [2], is a way to simplify the application of
machine learning based on Bayesian inference. It allows a separation of concerns: the user specifies
what needs to be learned by describing a probabilistic model, while the runtime automatically generates the how, i.e., the inference procedure. Specifically the programmer writes code describing a
probability distribution, and the runtime automatically generates an inference algorithm which samples from the distribution. Inference itself is a computationally intensive and challenging problem.
As a result, developing inference algorithms is an active area of research. These include deterministic approximations (such as variational methods) and Monte Carlo approximations (such as MCMC
algorithms). The problem is that most of these algorithms are conceptually complicated, and it is
not clear, especially to non-experts, which one would work best for a given model.
In this paper we present Augur, a probabilistic modeling system, embedded in Scala, whose design
is guided by two observations. The first is that if we wish to benefit from advances in hardware we
must focus on producing highly parallel inference algorithms. We show that many MCMC inference
algorithms are highly data-parallel [3, 4] within a single Markov Chain, if we take advantage of
the conditional independence relationships of the input model (e.g., the assumption of i.i.d. data
makes the likelihood independent across data points). Moreover, we can automatically generate
good data-parallel inference with a compiler. This inference runs efficiently on common highly
parallel architectures such as Graphics Processing Units (GPUs). We note that parallelism brings
interesting trade-offs to MCMC performance as some inference techniques generate less parallelism
and thus scale poorly.
The second observation is that a high performance system begins by selecting an appropriate inference algorithm, and this choice is often the hardest problem. For example, if our system only
implements Metropolis-Hastings inference, there are models for which our system will be of no
use, even given large amounts of computational power. We must design the system so that we can
include the latest research on inference while reusing pre-existing analyses and optimizations. Consequently, we use an intermediate representation (IR) for probability distributions that serves as a
target for modeling languages and as a basis for inference algorithms, allowing us to easily extend
the system. We will show this IR is key to scaling the system to very large networks.
We present two main results: first, some inference algorithms are highly data-parallel and a compiler
can automatically generate effective GPU implementations; second, it is important to use a symbolic
representation of a distribution rather than explicitly constructing a graphical model in memory,
allowing the system to scale to much larger models (such as LDA).
2 The Augur Language
We present two example model specifications in Augur, latent Dirichlet allocation (LDA) [5], and
a multivariate linear regression model. The supplementary material shows how to generate samples
from the models, and how to use them for prediction. It also contains six more example probabilistic
models in Augur: polynomial regression, logistic regression, a categorical mixture model, a Gaussian Mixture Model (GMM), a Naive Bayes Classifier, and a Hidden Markov Model (HMM). Our
language is similar in form to BUGS [2] and Stan [6], except our statements are implicitly parallel.
2.1 Specifying a Model
The LDA model specification is shown in Figure 1a. The probability distribution is a Scala object
(object LDA) composed of two declarations. First, we declare the support of the probability
distribution as a class named sig. The support of the LDA model is composed of four arrays, one
each for the distribution of topics per document (theta), the distribution of words per topic (phi),
the topics assignments (z), and the words in the corpus (w). The support is used to store the inferred
model parameters. These last two arrays are flat representations of ragged arrays, and thus we do
not require the documents to be of equal length. The second declaration specifies the probabilistic
model for LDA in our embedded domain specific language (DSL) for Bayesian networks. The
DSL is marked by the bayes keyword and delimited by the enclosing brackets. The model first
declares the parameters of the model: K for the number of topics, V for the vocabulary size, M for
the number of documents, and N for the array of document sizes. In the model itself, we define the
hyperparameters (values alpha and beta) for the Dirichlet distributions and sample K Dirichlets
of dimension V for the distribution of words per topic (phi) and M Dirichlets of dimension K for
the distribution of topics per document (theta). Then, for each word in each document, we draw
a topic z from theta, and finally a word from phi conditioned on the topic we drew for z.
The regression model in Figure 1b is defined in the same way using similar language features. In
this example the support comprises the (x, y) data points, the weights w, the bias b, and the noise
tau. The model uses an additional sum function to sum across the feature vector.
2.2 Using a Model
Once a model is specified, it can be used as any other Scala object by writing standard Scala code.
For instance, one may want to use the LDA model with a training corpus to learn a distribution
of words per topic and then use it to learn the per-document topic distribution of a test corpus. In
the supplementary material we provide a code sample which shows how to use an Augur model
for such a task. Each Augur model forms a distribution, and the runtime system generates a Dist
interface which provides two methods: map, which implements maximum a posteriori estimation,
and sample, which returns a sequence of samples. Both of these calls require a similar set of
arguments: a list of additional variables to be observed (e.g., to fix the phi values at test time in
LDA), the model hyperparameters, the initial state of the model support, the model support that
stores the inferred parameters, the number of MCMC samples and the chosen inference method.
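To make the call pattern concrete, the following self-contained Scala trait mocks the generated Dist interface using exactly the arguments listed above; the parameter names and types are illustrative assumptions, not Augur's verified API (the supplementary material contains the authoritative example).

// A mock of the generated Dist interface; names and types are assumptions.
trait Dist[Sig, Hyper] {
  // Maximum a posteriori estimation.
  def map(observed: Seq[String],   // additional variables to observe (e.g., phi at test time)
          hyper: Hyper,            // model hyperparameters
          init: Sig,               // initial state of the model support
          out: Sig,                // support that stores the inferred parameters
          samples: Int,            // number of MCMC samples
          method: String): Sig     // chosen inference method

  // Returns a sequence of samples from the posterior.
  def sample(observed: Seq[String], hyper: Hyper, init: Sig, out: Sig,
             samples: Int, method: String): Seq[Sig]
}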
object LDA {
  class sig(var phi: Array[Double],
            var theta: Array[Double],
            var z: Array[Int],
            var w: Array[Int])

  val model = bayes {
    (K: Int, V: Int, M: Int, N: Array[Int]) => {
      val alpha = vector(K, 0.1)
      val beta = vector(V, 0.1)
      val phi = Dirichlet(V, beta).sample(K)
      val theta = Dirichlet(K, alpha).sample(M)
      val w =
        for (i <- 1 to M) yield {
          for (j <- 1 to N(i)) yield {
            val z: Int = Categorical(K, theta(i)).sample()
            Categorical(V, phi(z)).sample()
          }
        }
      observe(w)
    }
  }
}
(a) An LDA model in Augur. The model specifies the distribution p(φ, θ, z | w).
object LinearRegression {
  class sig(var w: Array[Double],
            var b: Double,
            var tau: Double,
            var x: Array[Double],
            var y: Array[Double])

  val model = bayes {
    (K: Int, N: Int, l: Double, u: Double) => {
      val w = Gaussian(0, 10).sample(K)
      val b = Gaussian(0, 10).sample()
      val tau = InverseGamma(3.0, 1.0).sample()
      val x = for (i <- 1 to N)
        yield Uniform(l, u).sample(K)
      val y = for (i <- 1 to N) yield {
        val phi = for (j <- 1 to K) yield
          w(j) * x(i)(j)
        Gaussian(phi.sum + b, tau).sample()
      }
      observe(x, y)
    }
  }
}
(b) A multivariate regression in Augur. The model specifies the distribution p(w, b, τ | x, y).
Figure 1: Example Augur programs.
3 System Architecture
We now describe how a model specification is transformed into CUDA code running on a GPU.
Augur has two distinct compilation phases. The first phase transforms the block of code following
the bayes keyword into our IR for probability distributions, and occurs when scalac is invoked.
The second phase happens at runtime, when a method is invoked on the model. At that point, the IR
is transformed, analyzed, and optimized, and then CUDA code is emitted, compiled, and executed.
Due to these two phases, our system is composed of two distinct components that communicate
through the IR: the front end, where the DSL is converted into the IR, and the back end, where
the IR is compiled down to the chosen inference algorithm (currently Metropolis-Hastings, Gibbs
sampling, or Metropolis-Within-Gibbs). We use the Scala macro system to define the modeling
language in the front end. The macro system allows us to define a set of functions (called "macros")
that are executed by the Scala compiler on the code enclosed by the macro invocation. We currently
focus on Bayesian networks, but other DSLs (e.g., Markov random fields) could be added without
modifications to the back end. The implementation of the macros to define the Bayesian network
language is conceptually uninteresting so we omit further details.
Separating the compilation into two distinct phases provides many advantages. As our language is
implemented using Scala?s macro system, it provides automatic syntax highlighting, method name
completion, and code refactoring in any IDE which supports Scala. This improves the usability of the
DSL as we require no special tools support. We also use Scala?s parser, semantic analyzer (e.g., to
check that variables have been defined), and type checker. Additionally we benefit from scalac?s
optimizations such as constant folding and dead code elimination. Then, because we compile the IR
to CUDA code at run time, we know the values of all the hyperparameters and the size of the dataset.
This enables better optimization strategies, and also gives us important insights into how to extract
parallelism (Section 4.2). For example, when compiling LDA, we know that the number of topics is
much smaller than the number of documents and thus parallelizing over documents produces more
parallelism than parallelizing over topics. This is analogous to JIT compilation in modern runtime
systems where the compiler can make different decisions at runtime based upon the program state.
4 Generation of Data-Parallel Inference
We now explain how Augur generates data-parallel samplers by exploiting the conditional independence structure of the model. We will use the two examples from Section 2 to explain how the
compiler analyzes the model and generates the inference code.
When we invoke an inference procedure on a model (e.g., by calling model.map), Augur compiles
the IR into CUDA inference code for that model. Our aim with the IR is to make the parallelism
explicit in the model and to support further analysis of the probability distributions contained within. For example, a ∏ indicates that each sub-term in the expression can be evaluated in parallel. Informally, our IR expressions are generated from this Backus-Naur Form (BNF) grammar:

$$P \;::=\; p(X) \;\mid\; p(X \mid X) \;\mid\; P\,P \;\mid\; \tfrac{1}{Z}\,P \;\mid\; \prod\nolimits_i^N P \;\mid\; \sum\nolimits_i P \;\mid\; \int P\,dx \;\mid\; \{P\}_c$$
The use of a symbolic representation for the model is key to Augur's ability to scale to large networks. Indeed, as we show in the experimental study (Section 5), popular probabilistic modeling
systems such as JAGS [7] or Stan [8] reify the graphical model, resulting in unreasonable memory
consumption for models such as LDA. However, a consequence of our symbolic representation is
that it is more difficult to discover conjugacy relationships, a point we return to later.
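One way to read the grammar above is as an algebraic data type; the Scala sketch below is an illustrative reconstruction, not Augur's actual internal classes:

// An illustrative encoding of the IR grammar as a Scala ADT.
sealed trait P
case class Atom(x: String) extends P                       // p(X)
case class Cond(x: String, given: String) extends P        // p(X | X)
case class Mul(l: P, r: P) extends P                       // P P
case class Norm(p: P) extends P                            // (1/Z) P
case class Prod(index: String, n: String, p: P) extends P  // prod_i^N P (data-parallel)
case class Sum(index: String, p: P) extends P              // sum_i P
case class Integral(x: String, p: P) extends P             // integral of P dx
case class Guard(p: P, c: String) extends P                // {P}_c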
4.1 Generating data-parallel MH samplers
To use Metropolis-Hastings (MH) inference, the compiler emits code for a function f that is proportional to the distribution to be sampled. This code is then linked with our library implementation of MH. The function f is the product of the prior and the model likelihood and is extracted automatically from the model specification. In our regression example this function is $f(x, y, \tau, b, w) = p(b)\,p(\tau)\,p(w)\,p(x)\,p(y \mid x, b, \tau, w)$, which we rewrite to

$$f(x, y, \tau, b, w) = p(b)\,p(\tau)\left(\prod_k^K p(w_k)\right)\left(\prod_n^N p(x_n)\,p(y_n \mid x_n \cdot w + b, \tau)\right)$$
In this form, the compiler knows that the distribution factorizes into a large number of terms that
can be evaluated in parallel and then efficiently multiplied together. Each (x, y) contributes to the
likelihood independently (i.e., the data is i.i.d.), and each pair can be evaluated in parallel and the
compiler can optimize accordingly. In practice, we work in log-space, so we perform summations.
The compiler then generates the CUDA code to evaluate f from the IR. This code generation step is
conceptually simple and we will not explain it further.
It is interesting to note that the code scales well despite the simplicity of this parallelization: there
is a large amount of parallelism because it is roughly proportional to the number of data points;
uncovering the parallelism in the code does not increase the amount of computation performed; and
the ratio of computation to global memory accesses is high enough to hide the memory latency.
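As an illustration of this factorized evaluation, here is a minimal CPU sketch in Scala (Augur itself emits CUDA). It assumes the scala-parallel-collections module, treats the second Gaussian parameter as a variance, and omits the p(x) and p(tau) terms; these are all simplifying assumptions.

import scala.collection.parallel.CollectionConverters._

object LogF {
  def logGaussian(v: Double, mean: Double, variance: Double): Double =
    -0.5 * math.log(2 * math.Pi * variance) - (v - mean) * (v - mean) / (2 * variance)

  // log f for the regression model: the per-datum terms are independent,
  // so they are computed in parallel and reduced.
  def logF(x: Array[Array[Double]], y: Array[Double],
           w: Array[Double], b: Double, tau: Double): Double = {
    val logPrior = logGaussian(b, 0, 10) + w.map(logGaussian(_, 0, 10)).sum
    val logLik = x.indices.par.map { n =>      // one independent term per datum
      val mean = w.zip(x(n)).map { case (wi, xi) => wi * xi }.sum + b
      logGaussian(y(n), mean, tau)
    }.sum
    logPrior + logLik
  }
}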
4.2 Generating data-parallel Gibbs samplers
Alternatively we can generate a Gibbs sampler for conjugate models. We would prefer to generate
a Gibbs sampler for LDA, as an MH sampler will have a very low acceptance ratio. To generate
a Gibbs sampler, the compiler needs to figure out how to sample from each univariate conditional
distribution. As an example, to draw θ_m as part of the (τ+1)-th sample, the compiler needs to generate code that samples from the following distribution:

$$p(\theta_m^{\tau+1} \mid w^{\tau+1}, z^{\tau+1}, \theta_1^{\tau+1}, \ldots, \theta_{m-1}^{\tau+1}, \theta_{m+1}^{\tau}, \ldots, \theta_M^{\tau}).$$
As we previously explained, our compiler uses a symbolic representation of the model: the advantage is that we can scale to large networks, but the disadvantage is that it is more challenging to uncover conjugacy and independence relationships between variables. To accomplish this, the compiler uses an algebraic rewrite system that aims to rewrite the above expression in terms of expressions it knows (i.e., the joint distribution of the model). We show a few selected rules below to give a flavor of the rewrite system. The full set of 14 rewrite rules is in the supplementary material.
$$\text{(a)}\;\; \frac{P}{P} \to 1 \qquad\qquad \text{(b)}\;\; \int P(x)\,Q\,dx \to Q \int P(x)\,dx$$

$$\text{(c)}\;\; \prod_i^N P(x_i) \to \prod_i^N \{P(x_i)\}_{q(i)=T}\, \prod_i^N \{P(x_i)\}_{q(i)=F} \qquad\quad \text{(d)}\;\; P(x \mid y) \to \frac{P(x,y)}{\int P(x,y)\,dx}$$
Rule (a) states that like terms can be canceled. Rule (b) says that terms that do not depend on the
variable of integration can be pulled out of the integral. Rule (c) says that we can partition a product
over N terms into two products, one where a predicate q is true on the indexing variable and one
where it is false. Rule (d) is a combination of the product and sum rule. Currently, the rewrite system
is comprised of rules we found useful in practice, and it is easy to extend the system with more rules.
Going back to our example, the compiler rewrites the desired expression into the one below:

$$\frac{1}{Z}\; p(\theta_m^{\tau+1}) \prod_j^{N(m)} p(z_{mj} \mid \theta_m^{\tau+1})$$
In this form, it is clear that each of θ_1, . . . , θ_M is independent of the others after conditioning on the other random variables. As a result, they may all be sampled in parallel.

At each step, the compiler can test for a conjugacy relation. In the above form, the compiler recognizes that the z_{mj} are drawn from a categorical distribution and θ_m is drawn from a Dirichlet, and can exploit the fact that these are conjugate distributions. The posterior distribution for θ_m is Dirichlet(α + c_m), where c_m is a vector whose kth entry is the number of z of topic k in document m. Importantly, the compiler now knows that sampling each z requires a counting phase.
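A serial Scala sketch of the resulting theta update makes the two phases explicit; sampleDirichlet is assumed to exist (it can be built from Gamma variates as described in Section 4.3):

object ThetaUpdate {
  // Gibbs update for theta_m: count topic assignments in document m,
  // then draw from Dirichlet(alpha + c_m).
  def updateTheta(zDoc: Array[Int], K: Int, alpha: Array[Double],
                  sampleDirichlet: Array[Double] => Array[Double]): Array[Double] = {
    val counts = new Array[Double](K)
    zDoc.foreach(k => counts(k) += 1.0)  // the counting phase
    sampleDirichlet(alpha.zip(counts).map { case (a, c) => a + c })
  }
}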
The case of the φ variables is more interesting. In this case, we want to sample from

$$p(\phi_k^{\tau+1} \mid w^{\tau+1}, z^{\tau+1}, \theta^{\tau+1}, \phi_1^{\tau+1}, \ldots, \phi_{k-1}^{\tau+1}, \phi_{k+1}^{\tau}, \ldots, \phi_K^{\tau}).$$
After applying the rewrite system to this expression, the compiler discovers that it is equal to

$$\frac{1}{Z}\; p(\phi_k) \prod_i^M \prod_j^{N(i)} \{p(w_{ij} \mid \phi_{z_{ij}})\}_{k = z_{ij}}$$
The key observation that the compiler uses to reach this conclusion is the fact that the z are distributed according to a categorical distribution and are used to index into the φ array. Therefore, they partition the set of words w into K disjoint sets w_1 ⊎ ... ⊎ w_K, one for each topic. More concretely, the probability of words drawn from topic k can be rewritten in partitioned form using rule (c) as $\prod_i^M \prod_j^{N(i)} \{p(w_{ij} \mid \phi_{z_{ij}})\}_{k = z_{ij}}$, as once a word's topic is fixed, the word depends upon only one of the φ_k distributions. In this form, the compiler recognizes that it should draw from Dirichlet(β + c_k), where c_k is the count of words assigned to topic k. In general, the compiler detects this pattern when it discovers that samples drawn from categorical distributions are being used to index into arrays.
Finally, the compiler turns to analyzing the z_{ij}. It detects that they can be sampled in parallel, but it does not find a conjugacy relationship. However, it discovers that the z_{ij} are drawn from discrete distributions, so each univariate conditional can be calculated exactly and sampled from. In cases where the distributions are continuous, it tries to use another approximate sampling method to sample from that variable.
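For the LDA case, the exact conditional of a single z_ij is proportional to theta_m(k) * phi_k(w), so the sampler can compute it, normalize, and draw by inverse CDF; the sketch below is an illustrative serial version of that per-variable draw (on the GPU, each z_ij gets its own thread):

import scala.util.Random

object ZUpdate {
  // Exact draw of one z_ij: unnormalized conditional over the K topics.
  def sampleZ(thetaM: Array[Double], phi: Array[Array[Double]],
              word: Int, rnd: Random): Int = {
    val unnorm = Array.tabulate(thetaM.length)(k => thetaM(k) * phi(k)(word))
    var u = rnd.nextDouble() * unnorm.sum   // inverse-CDF draw
    var k = 0
    while (k < unnorm.length - 1 && u > unnorm(k)) { u -= unnorm(k); k += 1 }
    k
  }
}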
One concern with such a rewrite system is that it may fail to find a conjugacy relation if the model has
a complicated structure. So far we have found our rewrite system to be robust and it can find all the
usual conjugacy relations for models such as LDA, GMMs or HMMs, but it suffers from the same
shortcomings as implementations of BUGS when deeper mathematics are required to discover a
conjugacy relation (as would be the case for instance for a non-linear regression). In the cases where
a conjugacy relation cannot be found, the compiler will (like BUGS) resort to using Metropolis-Hastings and therefore exploit the inherent parallelism of the model likelihood.
Finally, note that the rewrite rules are applied deterministically and the process will always terminate
with the same result. Overall, the cost of analysis is negligible compared to the sampling time for
large data sets. Although the rewrite system is simple, it enables us to use a concise symbolic
representation for the model and thereby scale to large networks.
4.3 Data-parallel Operations on Distributions
To produce efficient code, the compiler needs to uncover parallelism, but we also need a library of data-parallel operations for distributions. For instance, in LDA, there are two steps where we sample from many Dirichlet distributions in parallel. When drawing the per-document topic distributions, each thread can draw a θ_i by generating K Gamma variates and normalizing them [9]. Since the number of documents is usually very large, this produces enough parallelism to make full use of the GPU's cores. However, this may not produce sufficient parallelism when drawing the φ_k, because the number of topics is usually small compared to the number of cores. Consequently, we use a different procedure which exposes more parallelism (the algorithm is given in the supplementary material). To generate K Dirichlet variates over V categories with concentration parameters α_11, . . . , α_KV, we first generate a matrix A where A_ij ∼ Gamma(α_ij) and then normalize each row of this matrix. To sample the φ_k, we could launch a thread per row. However, as the number of columns is much larger than the number of rows, we launch a thread to generate the Gamma variates for each column, and then separately compute a normalizing constant for each row by multiplying the matrix with a vector of ones using CUBLAS. This is an instance where the two-stage compilation procedure (Section 3) is useful, because the compiler is able to use information about the relative sizes of K and V to decide that the complex scheme will be more efficient than the simple scheme.
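A CPU sketch of the column-parallel scheme follows; sampleGamma is assumed to exist, and the row sums play the role of the CUBLAS matrix-vector product with a ones vector:

import scala.collection.parallel.CollectionConverters._

object DirichletRows {
  // Generate K Dirichlet variates (rows) over V categories: fill the K x V
  // Gamma matrix one column per task (since V >> K), then normalize each row.
  def dirichletRows(alpha: Array[Array[Double]],          // K x V concentrations
                    sampleGamma: Double => Double): Array[Array[Double]] = {
    val K = alpha.length; val V = alpha(0).length
    val a = Array.ofDim[Double](K, V)
    (0 until V).par.foreach { j =>                        // one task per column
      var i = 0
      while (i < K) { a(i)(j) = sampleGamma(alpha(i)(j)); i += 1 }
    }
    val rowSums = a.map(_.sum)                            // A * ones
    Array.tabulate(K, V)((i, j) => a(i)(j) / rowSums(i))
  }
}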
This sort of optimization is not unique to the Dirichlet distribution. For example, when sampling
a large number of multivariate normals by applying a linear transformation to a vector of normal
samples, the strategy for extracting parallelism may change based on the number of samples to
generate, the dimension of the multinormal, and the number of GPU cores. We found that issues
like these were crucial to generating high-performance data-parallel samplers.
4.4 Parallelism & Inference Tradeoffs
It is difficult to give a cost model for Augur programs. Traditional approaches are not necessarily
appropriate for probabilistic inference because there are tradeoffs between faster sampling times and
convergence which are not easy to characterize. In particular, different inference methods may affect
the amount of parallelism that can be exploited in a model. For example, in the case of multivariate
regression, we can use the Metropolis-Hastings sampler presented above, which lets us sample from
all the weights in parallel. However, we may be better off generating a Metropolis-Within-Gibbs
sampler where the weights are sampled one at a time. This reduces the amount of exploitable
parallelism, but it may converge faster, and there may still be enough parallelism in each calculation
of the Hastings ratio by evaluating the likelihood in parallel.
Many of the optimizations in the literature that improve the mixing time of a Gibbs sampler, such as
blocking or collapsing, reduce the available parallelism by introducing dependencies between previously independent variables. In a system like Augur it is not always beneficial to eliminate variables
(e.g., by collapsing) if it introduces more dependencies for the remaining variables. Currently Augur cannot generate a blocked or collapsed sampler, but there is interesting work on automatically
blocking or collapsing variables [10] that we wish to investigate in the future. Our experimental
results on LDA demonstrate this tradeoff between mixing and runtime. There we show that while
a collapsed Gibbs sampler converges more quickly in terms of the number of samples compared to
an uncollapsed sampler, the uncollapsed sampler converges more quickly in terms of runtime. This
is due to the uncollapsed sampler having much more available parallelism. We hope that as more
options and inference strategies are added to Augur, users will be able to experiment further with the
tradeoffs of different inference methods in a way that would be too time-consuming to do manually.
5 Experimental Study
We provide experimental results for the two examples presented throughout the paper and in the
supplementary material for a Gaussian Mixture Model (GMM). More detailed information on the
experiments can be found in the supplementary material.
To test multivariate regression and the GMM, we compare Augur's performance to those of two
popular languages for statistical modeling, JAGS [7] and Stan [8]. JAGS is an implementation of
BUGS, and performs inference using Gibbs sampling, adaptive MH, and slice sampling. Stan uses
a No-U-Turn sampler, a variant of Hamiltonian Monte Carlo. For the regression, we configured
Augur to use MH¹, while for the GMM Augur generated a Gibbs sampler. In our LDA experiments
we also compare Augur to a handwritten CUDA implementation of a Gibbs sampler, and also to
¹ Augur could not generate a Gibbs sampler for regression, as the conjugacy relation for the weights is not a simple application of conjugacy rules [11]. JAGS avoids this issue by adding specific rules for linear regression.
Figure 2: Experimental results on multivariate linear regression and LDA. (a) Multivariate linear regression results on the UCI WineQuality-red dataset: RMSE vs. training time (in seconds) for Augur, JAGS, and Stan. (b) Predictive probability vs. time for up to 2048 samples with three LDA implementations (Augur, hand-written CUDA, and Factorie's collapsed Gibbs): log10 predictive probability vs. runtime in seconds (log scale).
the collapsed Gibbs sampler [12] from the Factorie library [13]. The former is a comparison against an optimized GPU implementation, while the latter is a baseline for a CPU Scala implementation.
5.1 Experimental Setup
For the linear regression experiment, we used data sets from the UCI regression repository [14]. The
Gaussian Mixture Model experiments used two synthetic data sets, one generated from 3 clusters,
the other from 4 clusters. For the LDA benchmark, we used a corpus extracted from the simple
English variant of Wikipedia, with standard stopwords removed. This corpus has 48556 documents,
a vocabulary size of 37276 words, and approximately 3.3 million tokens. From that we sampled
1000 documents to use as a test set, removing words which appear only in the test set. To evaluate
the model we measure the log predictive probability [15] on the test set.
All experiments ran on a single workstation with an Intel Core i7 4820k CPU, 32 GB RAM, and an
NVIDIA GeForce Titan Black. The Titan Black uses the Kepler architecture. All probability values
are calculated in double precision. The CPU performance results using Factorie are calculated using
a single thread, as the multi-threaded samplers are neither stable nor performant in the tested release.
The GPU results use all 960 double-precision ALU cores available in the Titan Black. The Titan
Black has 2880 single-precision ALU cores, but single precision resulted in poor quality inference
results, though the speed was greatly improved.
5.2 Results
In general, our results show that once the problem is large enough we can amortize Augur's startup
cost of model compilation to CUDA, nvcc compilation to a GPU binary, and copying the data to
and from the GPU. This cost is approximately 9 seconds averaged across all our experiments. After
this point Augur scales to larger numbers of samples in shorter runtimes than comparable systems.
In this mode we are using Augur to find a likely set of parameters rather than generating a set of
samples with a large effective sample size for posterior estimation. We have not investigated the
effective sample size vs runtime tradeoff, though the MH approach we use for regression is likely to
have a lower effective sample size than the HMC used in Stan.
Our linear regression experiments show that Augur's inference is similar to JAGS in runtime and
performance, and better than Stan. Augur takes longer to converge as it uses MH, though once we
have amortized the compilation time it draws samples very quickly. The regression datasets tend to
be quite small in terms of both number of random variables and number of datapoints, so it is harder
to amortize the costs of GPU execution. However, the results are very different for models where the
number of inferred parameters grows with the data set. In the GMM example in the supplementary,
we show that Augur scales to larger problems than JAGS or Stan. For 100,000 data points, Augur draws a thousand samples in 3 minutes, while JAGS takes more than 21 minutes and Stan requires
more than 6 hours. Each system found the correct means and variances for the clusters; our aim was
to measure the scaling of runtime with problem size.
Results from the LDA experiment are presented in Figure 2b and use predictive probability to monitor convergence over time. We compute the predictive probability and record the time (in seconds) after drawing 2^i samples, for i ranging from 0 to 11 inclusive. It takes Augur 8.1 seconds to draw its first sample for LDA. Augur's performance is very close to that of the hand-written CUDA implementation, and much faster than the Factorie collapsed Gibbs sampler. Indeed, it takes the collapsed LDA implementation 6.7 hours longer than Augur to draw 2048 samples. We note that the collapsed Gibbs sampler appears to have converged after 2^7 samples, in approximately 27 minutes. The uncollapsed implementations converge after 2^9 samples, in approximately 4 minutes. We also implemented LDA in JAGS and Stan but they ran into scalability issues. The Stan version of LDA (taken from the Stan user's manual [6]) uses 55 GB of RAM but failed to draw a sample in a week of computation time. We could not test JAGS as it required more than 128 GB of RAM. In comparison, Augur uses less than 1 GB of RAM for this experiment.
6 Related Work
Augur is similar to probabilistic modeling languages such as BUGS [16], Factorie [13], Dimple [17],
Infer.net [18], and Stan [8]. This family of languages explicitly represents a probability distribution,
restricting the expressiveness of the modeling language to improve performance. For example,
Factorie, Dimple, and Infer.net provide languages for factor graphs enabling these systems to take
advantage of specific efficient inference algorithms (e.g., Belief Propagation). Stan, while Turing
complete, focuses on probabilistic models with continuous variables using a No-U-Turn sampler
(recent versions also support discrete variables). In contrast, Augur focuses on Bayesian Networks,
allowing a compact symbolic representation, and enabling the generation of data-parallel code.
Another family of probabilistic programming languages is characterized by their ability to express
all computable generative models by reasoning over execution traces which implicitly represent
probability distributions. These are typically a Turing complete language with probabilistic primitives and include Venture [19], Church [20], and Figaro [21]. Augur and the modeling languages
described above are less expressive than these languages, and so describe a restricted set of probabilistic programs. However performing inference over program traces generated by a model, instead
of the model support itself, makes it more difficult to generate an efficient inference algorithm.
7 Discussion
We show that it is possible to automatically generate parallel MCMC inference algorithms, and it
is also possible to extract sufficient parallelism to saturate a modern GPU with thousands of cores.
The choice of a Single-Instruction Multiple-Data (SIMD) architecture such as a GPU is central to the
success of Augur, as it allows many parallel threads with low overhead. Creating thousands of CPU
threads is less effective as each thread has too little work to amortize the overhead. GPU threads
are comparatively cheap, and this allows for many small parallel tasks (like likelihood calculations
for a single datapoint). Our compiler achieves this parallelization with no extra information beyond
that which is normally encoded in a graphical model description and uses a symbolic representation
that allows scaling to large models (particularly for latent variable models like LDA). It also makes
it easy to run different inference algorithms and evaluate the tradeoffs between convergence and
sampling time. The generated inference code is competitive in terms of model performance with
other probabilistic modeling systems, and can sample from large problems much more quickly.
The current version of Augur runs on a single GPU, which introduces another tier into the memory hierarchy, as data and samples need to be streamed between the GPU's memory and main memory. We do not currently support this in Augur for problems larger than GPU memory, though it is possible to analyze the generated inference code and automatically generate the data movement code [22]. This movement code can execute concurrently with the sampling process. One area we have not investigated is expanding Augur to clusters of GPUs, though this will introduce the synchronization problems others have encountered when scaling up MCMC [23].
References
[1] N. D. Goodman. The principles and practice of probabilistic programming. In Proc. of the 40th ACM Symp. on Principles of Programming Languages, POPL '13, pages 399–402, 2013.
[2] A. Thomas, D. J. Spiegelhalter, and W. R. Gilks. BUGS: A program to perform Bayesian inference using Gibbs sampling. Bayesian Statistics, 4:837–842, 1992.
[3] W. D. Hillis and G. L. Steele, Jr. Data parallel algorithms. Comm. of the ACM, 29(12):1170–1183, 1986.
[4] G. E. Blelloch. Programming parallel algorithms. Comm. of the ACM, 39:85–97, 1996.
[5] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
[6] Stan Dev. Team. Stan Modeling Language Users Guide and Ref. Manual, Version 2.2, 2014.
[7] M. Plummer. JAGS: A program for analysis of Bayesian graphical models using Gibbs sampling. In 3rd International Workshop on Distributed Statistical Computing (DSC 2003), pages 20–22, 2003.
[8] M. D. Hoffman and A. Gelman. The No-U-Turn Sampler: Adaptively setting path lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research, 15:1593–1623, 2014.
[9] G. Marsaglia and W. W. Tsang. A simple method for generating gamma variables. ACM Trans. Math. Softw., 26(3):363–372, 2000.
[10] D. Venugopal and V. Gogate. Dynamic blocking and collapsing for Gibbs sampling. In 29th Conf. on Uncertainty in Artificial Intelligence, UAI '13, 2013.
[11] R. Neal. CSC 2541: Bayesian methods for machine learning, 2013. Lecture 3.
[12] T. L. Griffiths and M. Steyvers. Finding scientific topics. In Proc. of the National Academy of Sciences, volume 101, 2004.
[13] A. McCallum, K. Schultz, and S. Singh. Factorie: Probabilistic programming via imperatively defined factor graphs. In Neural Information Processing Systems 22, pages 1249–1257, 2009.
[14] K. Bache and M. Lichman. UCI machine learning repository, 2013.
[15] M. Hoffman, D. Blei, C. Wang, and J. Paisley. Stochastic variational inference. Journal of Machine Learning Research, 14:1303–1347, 2013.
[16] D. Lunn, D. Spiegelhalter, A. Thomas, and N. Best. The BUGS project: Evolution, critique and future directions. Statistics in Medicine, 2009.
[17] S. Hershey, J. Bernstein, B. Bradley, A. Schweitzer, N. Stein, T. Weber, and B. Vigoda. Accelerating inference: Towards a full language, compiler and hardware stack. CoRR, abs/1212.2991, 2012.
[18] T. Minka, J. M. Winn, J. P. Guiver, and D. A. Knowles. Infer.NET 2.5, 2012. Microsoft Research Cambridge.
[19] V. K. Mansinghka, D. Selsam, and Y. N. Perov. Venture: a higher-order probabilistic programming platform with programmable inference. CoRR, abs/1404.0099, 2014.
[20] N. D. Goodman, V. K. Mansinghka, D. Roy, K. Bonawitz, and J. B. Tenenbaum. Church: A language for generative models. In 24th Conf. on Uncertainty in Artificial Intelligence, UAI 2008, pages 220–229, 2008.
[21] A. Pfeffer. Figaro: An object-oriented probabilistic programming language. Technical report, Charles River Analytics, 2009.
[22] J. Ragan-Kelley, C. Barnes, A. Adams, S. Paris, F. Durand, and S. Amarasinghe. Halide: a language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines. ACM SIGPLAN Notices, 48(6):519–530, 2013.
[23] A. Smola and S. Narayanamurthy. An architecture for parallel topic models. Proceedings of the VLDB Endowment, 3(1–2):703–710, 2010.
5,006 | 5,532 | Making Pairwise Binary Graphical Models Attractive
Nicholas Ruozzi
Institute for Data Sciences and Engineering
Columbia University
New York, NY 10027
[email protected]
Tony Jebara
Department of Computer Science
Columbia University
New York, NY 10027
[email protected]
Abstract
Computing the partition function (i.e., the normalizing constant) of a given pairwise binary graphical model is NP-hard in general. As a result, the partition
function is typically estimated by approximate inference algorithms such as belief propagation (BP) and tree-reweighted belief propagation (TRBP). The former
provides reasonable estimates in practice but has convergence issues. The latter
has better convergence properties but typically provides poorer estimates. In this
work, we propose a novel scheme that has better convergence properties than BP
and provably provides better partition function estimates in many instances than
TRBP. In particular, given an arbitrary pairwise binary graphical model, we construct a specific "attractive" 2-cover. We explore the properties of this special
cover and show that it can be used to construct an algorithm with the desired
properties.
1 Introduction
Graphical models provide a mechanism for expressing the relationships among a collection of variables. Many applications in computer vision, coding theory, and machine learning can be reduced
to performing statistical inference, either computing the partition function or the most likely configuration, of specific graphical models. In general models, both of these problems are NP-hard. As
a result, much effort has been invested in designing algorithms that can approximate, or in some
special cases exactly solve, these inference problems.
The belief propagation algorithm (BP) is an efficient message-passing algorithm that is often used
to approximate the partition function of a given graphical model. However, BP does not always
converge, and so-called convergent message-passing algorithms such as tree reweighted belief propagation (TRBP) have been proposed as alternatives to BP. Such convergent message passing algorithms can be viewed as dual coordinate-descent schemes on a particular convex upper bound on the
partition function [1]. While TRBP-style message-passing algorithms guarantee convergence under
suitable message-passing schedules, finding the optimal message-passing schedule can be cumbersome or impractical depending on the application, and TRBP often performs worse than BP in terms
of estimating the partition function.
The primary goal of this work is to study alternatives to BP and TRBP that have better convergence
properties than BP and approximate the partition function better than TRBP. To that end, the so-called "attractive" graphical models (i.e., those models that do not contain frustrated cycles) stand
out as a special case. Attractive graphical models have desirable computational properties: Weller
and Jebara [2, 3] describe a polynomial time approximation scheme to minimize the Bethe free
energy of attractive models (note that BP only guarantees convergence to a local optimum). In
addition, BP has much better convergence properties on attractive models than on general pairwise
binary models [4, 5].
In this work, we show how to approximate the inference problem over a general pairwise binary
graphical model as an inference problem over an attractive graphical model. Similar in spirit to
the work of Bayati et al. [6] and Ruozzi and Tatikonda [7], we will use graph covers in order to
better understand the behavior of the Bethe approximation with respect to the partition function. In
particular, we will show that if a graphical model is strictly positive and contains even one frustrated
cycle, then there exists a choice of external field and a 2-cover without frustrated cycles whose
partition function provides a strict upper bound on the partition function of the original model.
We then show that the computation of the Bethe partition function can be approximated, or in some cases found exactly, by computing the Bethe partition function over this special cover. The required computations are easier on this "attractive" graph cover, as computing the MAP assignment can be done in polynomial time and there exists a polynomial time approximation scheme for computing the Bethe partition function.
We illustrate the theory through a series of experiments on small models, grid graphs, and vertex-induced subgraphs of the Epinions social network¹. All of these models have frustrated cycles, which
make the computation of their partition functions, marginals, and most-likely configurations exceedingly difficult. In these experiments, the proposed scheme converges significantly more frequently
than BP and provides a better estimate of the partition function than TRBP.
2 Prerequisites
We begin by reviewing pairwise binary graphical models, graph covers, the Bethe and TRBP approximations, and recent work on lower bounds.
2.1 Pairwise Binary Graphical Models
Let $f : \{0,1\}^n \to \mathbb{R}_{\geq 0}$ be a non-negative function. A function $f$ factors with respect to a graph $G = (V, E)$ if there exist potential functions $\phi_i : \{0,1\} \to \mathbb{R}_{\geq 0}$ for each $i \in V$ and $\psi_{ij} : \{0,1\}^2 \to \mathbb{R}_{\geq 0}$ for each $(i,j) \in E$ such that
$$f(x_1, \ldots, x_n) = \prod_{i \in V} \phi_i(x_i) \prod_{(i,j) \in E} \psi_{ij}(x_i, x_j).$$
The graph $G$ together with the collection of potential functions $\phi$ and $\psi$ defines a graphical model that we will denote as $(G; \phi, \psi)$. For clarity, we will often denote the corresponding function as $f^{(G;\phi,\psi)}(x)$. For a given graphical model $(G; \phi, \psi)$, we are interested in computing the partition function
$$Z(G; \phi, \psi) \triangleq \sum_{x \in \{0,1\}^{|V|}} \prod_{i \in V} \phi_i(x_i) \prod_{(i,j) \in E} \psi_{ij}(x_i, x_j).$$
We will also be interested in computing the maximum value of $f$, sometimes referred to as the MAP problem. The problem of computing the MAP solution can be converted into the problem of computing the partition function by adding a temperature parameter, $T$, and taking the limit as $T \to 0$:
$$\max_x f^{(G;\phi,\psi)}(x) = \lim_{T \to 0} Z(G; \phi^{1/T}, \psi^{1/T})^T.$$
Here, $\phi^{1/T}$ is the collection of potentials generated by taking each potential $\phi_i(x_i)$ and raising it to the $1/T$ power for all $i \in V$, $x_i \in \{0,1\}$.
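To make these quantities concrete, here is a minimal brute-force sketch (ours, not the paper's; all names are illustrative) that evaluates $Z(G; \phi^{1/T}, \psi^{1/T})$ by enumeration, which is feasible only for small $|V|$:

```python
# Brute-force partition function of a pairwise binary model; exponential in n,
# so this is purely illustrative.  node_pot[i][x] plays the role of phi_i(x)
# and edge_pot[(i, j)][x][y] the role of psi_ij(x, y).
from itertools import product

def partition_function(n, node_pot, edge_pot, T=1.0):
    """Z(G; phi^(1/T), psi^(1/T)); as T -> 0, Z**T approaches max_x f(x)."""
    Z = 0.0
    for x in product((0, 1), repeat=n):
        w = 1.0
        for i in range(n):
            w *= node_pot[i][x[i]] ** (1.0 / T)
        for (i, j), psi in edge_pot.items():
            w *= psi[x[i]][x[j]] ** (1.0 / T)
        Z += w
    return Z
```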
2.2 Graph Covers
Graph covers have played an important role in our understanding of statistical inference in graphical models [8, 9]. Roughly speaking, if a graph $H$ covers a graph $G$, then $H$ looks locally the same as $G$.
Definition 2.1. A graph $H$ covers a graph $G = (V, E)$ if there exists a graph homomorphism $h : H \to G$ such that for all vertices $i \in G$ and all $j \in h^{-1}(i)$, $h$ maps the neighborhood $\partial j$ of $j$ in $H$ bijectively to the neighborhood $\partial i$ of $i$ in $G$.
¹In the Epinions network, users are connected by agreement and disagreement edges and therefore frustrated cycles abound. By treating the network as a pairwise binary graphical model, we may compute the trustworthiness of a user by performing marginal inference over a variable representing if the user is trusted or not.
Figure 1: An example of a graph cover: (a) a graph, G; (b) one possible cover of G. The nodes in the cover are labeled for the node that they copy in the base graph.
If $h(j) = i$, then we say that $j \in H$ is a copy of $i \in G$. Further, $H$ is said to be an $M$-cover of $G$ if every vertex of $G$ has exactly $M$ copies in $H$. For an example of a graph cover, see Figure 1. For a connected graph $G = (V, E)$, each $M$-cover consists of $M$ copies of each of the variable nodes of $G$, with an edge joining each distinct copy of $i \in V$ to a distinct copy of $j \in V$ if and only if $(i,j) \in E$.
To any $M$-cover $H = (V^H, E^H)$ of $G$ given by the homomorphism $h$, we can associate a collection of potentials: the potential at node $i \in V^H$ is equal to $\phi_{h(i)}$, the potential at node $h(i) \in G$, and for each $(i,j) \in E^H$, we associate the potential $\psi_{h(i,j)}$. In this way, we can construct a function $f^{(H;\phi^H,\psi^H)} : \{0,1\}^{M|V|} \to \mathbb{R}_{\geq 0}$ such that $f^{(H;\phi^H,\psi^H)}$ factorizes over $H$. We will say that the graphical model $(H; \phi^H, \psi^H)$ is an $M$-cover of the graphical model $(G; \phi, \psi)$ whenever $H$ is an $M$-cover of $G$ and $\phi^H$ and $\psi^H$ are derived from $\phi$ and $\psi$ as above.
2.3 The Bethe Partition Function
The Bethe free energy is a standard approximation to the so-called Gibbs free energy that is motivated by ideas from statistical physics. TRBP and more general reweighted belief propagation algorithms take advantage of a similar approximation.
For $\tau$ in the local marginal polytope,
$$\mathcal{T} \triangleq \Big\{ \tau \geq 0 \;\Big|\; \forall (i,j) \in E,\ \sum_{x_j} \tau_{ij}(x_i, x_j) = \tau_i(x_i) \ \text{ and }\ \forall i \in V,\ \sum_{x_i} \tau_i(x_i) = 1 \Big\},$$
the reweighted free energy approximation at temperature $T = 1$ is given by
$$F_{B,\rho}(G, \tau; \phi, \psi) = U(\tau; \phi, \psi) - H(\tau, \rho),$$
where $U$ is the energy,
$$U(\tau; \phi, \psi) = -\sum_{i \in V} \sum_{x_i} \tau_i(x_i) \log \phi_i(x_i) - \sum_{(i,j) \in E} \sum_{x_i, x_j} \tau_{ij}(x_i, x_j) \log \psi_{ij}(x_i, x_j),$$
and $H$ is an entropy approximation,
$$H(\tau, \rho) = -\sum_{i \in V} \sum_{x_i} \tau_i(x_i) \log \tau_i(x_i) - \sum_{(i,j) \in E} \sum_{x_i, x_j} \rho_{ij}\, \tau_{ij}(x_i, x_j) \log \frac{\tau_{ij}(x_i, x_j)}{\tau_i(x_i)\, \tau_j(x_j)}.$$
Here, $\rho_{ij}$ controls the reweighting over the edge $(i,j)$ in the graphical model. If $\rho_{ij} = 1$ for all $(i,j) \in E$, then we call this the Bethe approximation and will typically drop the $\rho$, writing $Z_{B,\vec{1}} = Z_B$. The reweighted partition function is then expressed in terms of the minimum value achieved by this approximation over $\mathcal{T}$ as follows:
$$Z_{B,\rho}(G; \phi, \psi) = e^{-\min_{\tau \in \mathcal{T}} F_{B,\rho}(G, \tau; \phi, \psi)}.$$
Similar to the exact partition function computation, the reweighted partition function at temperature $T$ is given by $Z_{B,\rho}(G; \phi^{1/T}, \psi^{1/T})^T$. The zero temperature limit corresponds to minimizing the energy function over the local marginal polytope.
In practice, local optima of these free energy approximations can be found by a reweighted version of belief propagation. The fixed points of this reweighted algorithm correspond to stationary points of $F_{B,\rho}(G, \tau; \phi, \psi)$ over $\mathcal{T}$ [10]. The TRBP algorithm chooses the vector $\rho$ such that $\rho_{ij}$ corresponds to the edge appearance probability of $(i,j)$ over a convex combination of spanning trees. For these choices of $\rho$, the reweighted free energy approximation is convex in $\tau$, $Z_{B,\rho}(G; \phi, \psi)$ is always larger than the true partition function, and there exists an ordering of the message updates so that reweighted belief propagation is guaranteed to converge.
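For illustration, the reweighted free energy can be evaluated directly for candidate pseudomarginals; the sketch below assumes strictly positive potentials and pseudomarginals that already satisfy the constraints of $\mathcal{T}$ (the data layout and names are ours, not the paper's):

```python
# Evaluates F_{B,rho}(G, tau; phi, psi) = U(tau) - H(tau, rho) for a pairwise
# binary model.  tau_i[i][x] and tau_ij[(i,j)][x][y] are pseudomarginals in T;
# rho[(i,j)] is the reweighting on edge (i, j) (all ones gives the Bethe case).
import math

def reweighted_free_energy(node_pot, edge_pot, tau_i, tau_ij, rho):
    U, H = 0.0, 0.0
    for i, phi in enumerate(node_pot):
        for x in (0, 1):
            t = tau_i[i][x]
            if t > 0:
                U -= t * math.log(phi[x])
                H -= t * math.log(t)
    for (i, j), psi in edge_pot.items():
        for x in (0, 1):
            for y in (0, 1):
                t = tau_ij[(i, j)][x][y]
                if t > 0:
                    U -= t * math.log(psi[x][y])
                    H -= rho[(i, j)] * t * math.log(t / (tau_i[i][x] * tau_i[j][y]))
    return U - H
```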
2.4 Log-Supermodularity and Lower Bounds
A recent theorem of Vontobel [8] provides a combinatorial characterization of the Bethe partition function in terms of graph covers.
Theorem 2.2 ([8]).
$$Z_B(G; \phi, \psi) = \limsup_{M \to \infty} \sqrt[M]{\frac{\sum_{H \in \mathcal{C}^M(G)} Z(H; \phi^H, \psi^H)}{|\mathcal{C}^M(G)|}},$$
where $\mathcal{C}^M(G)$ is the set of all $M$-covers of $G$.
This characterization suggests that bounds on the partition functions of individual graph covers can be used to bound the Bethe partition function. This approach has recently been used to prove that the Bethe partition function provides a lower bound on the true partition function in certain nice families of graphical models [8, 11, 12]. One such nice family is the family of so-called log-supermodular (aka attractive) graphical models.
Definition 2.3. A function $f : \{0,1\}^n \to \mathbb{R}_{\geq 0}$ is log-supermodular if for all $x, y \in \{0,1\}^n$,
$$f(x)f(y) \leq f(x \wedge y)f(x \vee y),$$
where $(x \wedge y)_i = \min\{x_i, y_i\}$ and $(x \vee y)_i = \max\{x_i, y_i\}$. Similarly, $f$ is log-submodular if the inequality is reversed for all $x, y \in \{0,1\}^n$.
Theorem 2.4 (Ruozzi [11]). If $(G; \phi, \psi)$ is a log-supermodular graphical model, then for any $M$-cover, $(H; \phi^H, \psi^H)$, of $(G; \phi, \psi)$, $Z(H; \phi^H, \psi^H) \leq Z(G; \phi, \psi)^M$.
Plugging this result into Theorem 2.2, we can conclude that the Bethe partition function always lower bounds the true partition function in log-supermodular models.
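Definition 2.3 can be verified by direct enumeration on tiny models; the following naive check (ours, exponential in $n$) is sometimes handy for testing:

```python
# Checks f(x)f(y) <= f(x ^ y)f(x v y) for all pairs; `f` is a dict mapping each
# tuple in {0,1}^n to a nonnegative value.
from itertools import product

def is_log_supermodular(f, n):
    for x in product((0, 1), repeat=n):
        for y in product((0, 1), repeat=n):
            meet = tuple(min(a, b) for a, b in zip(x, y))
            join = tuple(max(a, b) for a, b in zip(x, y))
            if f[x] * f[y] > f[meet] * f[join]:
                return False
    return True
```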
3 Switching Log-Supermodular Functions
Let $(G; \phi, \psi)$ be a pairwise binary graphical model. Each $\psi_{ij}$ in this model is either log-supermodular, log-submodular, or both. In the case that each $\psi_{ij}$ is log-supermodular, Theorem 2.4 says that the partition function of the disconnected 2-cover of $G$ provides an upper bound on the partition function of any other 2-cover of $G$.
When the $\psi_{ij}$ are not all log-supermodular, this is not necessarily the case. As an example, if $G$ is a 3-cycle, then, up to isomorphism, $G$ has two distinct covers: the 6-cycle and the graph consisting of two disconnected 3-cycles. Consider the pairwise binary graphical model for the independent set problem on $G = (V, E)$ given by the edge potentials $\psi_{ij}(x_i, x_j) = 1 - x_i x_j$ for all $(i,j) \in E$. We can easily check that the 6-cycle has 18 distinct independent sets while the disconnected cover has only 16 independent sets. That is, the disconnected 2-cover does not provide an upper bound on the number of independent sets in all 2-covers.
Sometimes graphical models that are not log-supermodular can be converted into log-supermodular models by performing a simple change of variables (e.g., for a fixed $I \subseteq V$, a change of variables that sends $x_i \mapsto 1 - x_i$ for each $i \in I$ and $x_i \mapsto x_i$ for each $i \in V \setminus I$). As a change of variables does not change the partition function, we can directly apply Theorem 2.4 to the new model. We will call such functions switching log-supermodular. These functions are the log-supermodular analog of the "switching supermodular" and "permuted submodular" functions considered by Crama and Hammer [13] and Schlesinger [14], respectively.
The existence of a 2-cover whose partition function is larger than the disconnected one is not unique to the problem of counting independent sets. Such a cover exists whenever the base graphical model is not switching log-supermodular. In this section, we will describe one possible construction of a specific 2-cover that is distinct from the disconnected 2-cover whenever the given graphical model is not switching log-supermodular and will always provide an upper bound on the true partition function.
3.1 Signed Graphs
In order to understand when a graphical model can be converted into a log-supermodular model
by switching some of the variables, we introduce the notion of a signed graph.
Figure 2: An example of the construction of the 2-cover $G^2$ for the same graph with different edge potentials. Here, dashed lines represent edges with log-submodular potentials. The graph in (b) is the 2-cover construction of the graph in (a) and the graph in (d) is the 2-cover construction applied to the graph in (c). Note that the graph in (b) is connected while the graph in (d) is not.
A signed graph is
a graph in which each edge has an associated sign. For our graphical models, we will use a "+" to represent a log-supermodular edge and a "−" to represent a log-submodular edge. The sign of a cycle in the graph is positive if it has an even number of "−" edges and negative otherwise. A signed graph is said to be balanced if there are no negative cycles. Equivalently, a signed graph is balanced if we can divide its vertices into two sets $A$ and $B$ such that all edges in the graph with one endpoint in set $A$ and the other endpoint in the set $B$ are negative and the remaining edges are positive [15]. Switching, or flipping, a variable as above has the effect of flipping the sign of all edges adjacent to the corresponding variable node in the graphical model: flipping a single variable converts an incident log-supermodular edge into a log-submodular edge and vice versa. A graphical model is switching log-supermodular if and only if its signed graph is balanced.
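Balance, and hence switching log-supermodularity of the model, can be tested in linear time by attempting the two-set division above with a BFS 2-coloring; the sketch below is a standard routine under our own conventions (adjacency is a dict mapping each vertex to (neighbor, sign) pairs, listed in both directions):

```python
# Returns True iff the signed graph has no negative cycle: '+' edges must stay
# within a class, '-' edges must cross classes.
from collections import deque

def is_balanced(num_nodes, adj):
    color = [None] * num_nodes
    for src in range(num_nodes):
        if color[src] is not None:
            continue
        color[src] = 0
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v, sign in adj.get(u, []):
                want = color[u] if sign == '+' else 1 - color[u]
                if color[v] is None:
                    color[v] = want
                    queue.append(v)
                elif color[v] != want:
                    return False  # a frustrated (negative) cycle was found
    return True
```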
Signed graphs have been studied before in the context of graphical models. Watanabe [16] characterized signed graphs for which belief propagation is guaranteed to have a unique fixed point. These
results depend only on the graph structure and the signs on the edges and not on the strength of the
potentials.
3.2 Switching Log-Supermodular 2-covers
We can always construct a 2-cover of a pairwise binary graphical model that is switching log-supermodular.
Definition 3.1. Given a pairwise binary graphical model $(G; \phi, \psi)$, construct a 2-cover $(G^2; \phi^{G^2}, \psi^{G^2})$, where $G^2 = (V^{G^2}, E^{G^2})$, as follows.
• For each $i \in G$, create two copies of $i$, denoted $i_1$ and $i_2$, in $V^{G^2}$.
• For each edge $(i,j) \in E$, if $\psi_{ij}$ is log-supermodular, then add the edges $(i_1, j_1)$ and $(i_2, j_2)$ to $E^{G^2}$. Otherwise, add the edges $(i_1, j_2)$ and $(i_2, j_1)$ to $E^{G^2}$.
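Definition 3.1 translates directly into code; a minimal sketch (names ours) that builds the edge list of $G^2$:

```python
# Node i of G becomes (i, 0) and (i, 1) in G^2; log-supermodular edges connect
# like copies, log-submodular edges cross between the copies.
def two_cover(edges, edge_is_log_supermodular):
    cover_edges = []
    for (i, j) in edges:
        if edge_is_log_supermodular[(i, j)]:
            cover_edges += [((i, 0), (j, 0)), ((i, 1), (j, 1))]
        else:
            cover_edges += [((i, 0), (j, 1)), ((i, 1), (j, 0))]
    return cover_edges
```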
$G^2$ is switching log-supermodular. This follows from the characterization of Harary [15], as $G^2$ can be divided into two sets $V_1$ and $V_2$ with only negative edges between the two partitions and positive edges elsewhere. See Figure 2 for an example of this construction.
If all of the potentials in $(G; \phi, \psi)$ are log-supermodular, then $G^2$ is equal to the disconnected 2-cover of $G$. If all of the potentials in $(G; \phi, \psi)$ are log-submodular, then $G^2$ is a bipartite graph.
Lemma 3.2. For a connected graph $G$, $(G^2; \phi^{G^2}, \psi^{G^2})$ is disconnected if and only if $f^{(G;\phi,\psi)}$ is switching log-supermodular. Equivalently, $G^2$ is disconnected if and only if the signed version of $G$ is balanced.
Returning to the example of counting independent sets on a 3-cycle at the beginning of this section, we can check that $G^2$ for this graphical model corresponds to the 6-cycle. The observation that the 6-cycle has more independent sets than two disconnected copies of the 3-cycle is a special case of a general theorem.
Theorem 3.3. For any pairwise binary graphical model $(G; \phi, \psi)$, $Z(G^2; \phi^{G^2}, \psi^{G^2}) \geq Z(G; \phi, \psi)^2$.
The proof of Theorem 3.3 can be found in Appendix A of the supplementary material. Unlike Theorem 2.4, which provides lower bounds on the partition function, Theorem 3.3 provides an upper bound on the partition function.
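Theorem 3.3 can be sanity-checked numerically on the independent-set example from Section 3, reusing the brute-force `partition_function` sketch from earlier: with $\psi_{ij}(x_i, x_j) = 1 - x_i x_j$ on a 3-cycle, every edge is log-submodular, so $G^2$ is exactly the 6-cycle that `two_cover` produces, and the printout reproduces the counts 4, 16, and 18 quoted above.

```python
# psi(x_i, x_j) = 1 - x_i * x_j forbids both endpoints taking label 1, so Z
# counts independent sets.  Expect 4.0, 16.0, 18.0, i.e. Z(G^2) >= Z(G)^2.
AND_FREE = [[1.0, 1.0], [1.0, 0.0]]

edges3 = [(0, 1), (1, 2), (0, 2)]
Z3 = partition_function(3, [[1.0, 1.0]] * 3, {e: AND_FREE for e in edges3})

edges6 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5)]
Z6 = partition_function(6, [[1.0, 1.0]] * 6, {e: AND_FREE for e in edges6})

print(Z3, Z3 ** 2, Z6)  # 4.0 16.0 18.0
```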
4 Properties of the Cover $G^2$
In this section, we study the implications that Theorem 2.4 and Theorem 3.3 have for characterizations of switching log-supermodular functions and the computation of the Bethe partition function.
4.1 Field Independence
We begin with the simple observation that Theorem 3.3, like Theorem 2.4, does not depend on the
choice of external field. In fact, in the case that all of the edge potentials are strictly larger than zero,
this independence of external field completely characterizes switching log-supermodular graphical
models.
Theorem 4.1. For a pairwise binary graphical model $(G; \phi, \psi)$ with strictly positive edge potentials $\psi$, the following are equivalent.
1. $f^{(G;\phi,\psi)}(x)$ is switching log-supermodular.
2. For all $M \geq 1$, any external field $\hat{\phi}$, and any $M$-cover $(H; \hat{\phi}^H, \psi^H)$ of $(G; \hat{\phi}, \psi)$, $Z(H; \hat{\phi}^H, \psi^H) \leq Z(G; \hat{\phi}, \psi)^M$.
3. For all choices of the external field $\hat{\phi}$ and any 2-cover $(H; \hat{\phi}^H, \psi^H)$ of $(G; \hat{\phi}, \psi)$, $Z(H; \hat{\phi}^H, \psi^H) \leq Z(G; \hat{\phi}, \psi)^2$.
In other words, if all of the edge potentials are strictly positive, and the graphical model has even one negative cycle, then there exists an external field $\hat{\phi}$ and a 2-cover $(H; \hat{\phi}^H, \psi^H)$ of $(G; \hat{\phi}, \psi)$ such that $Z(G; \hat{\phi}, \psi)^2 < Z(H; \hat{\phi}^H, \psi^H)$. In particular, the proof of the theorem shows that there exists an external field $\hat{\phi}$ such that $Z(G; \hat{\phi}, \psi)^2 < Z(G^2; \hat{\phi}^{G^2}, \psi^{G^2})$. See Appendix B in the supplementary material for a proof of Theorem 4.1.
4.2 Bethe Partition Function of Graph Covers
Although the true partition function of an arbitrary graph cover could overestimate or underestimate the true partition function of the base graphical model, the Bethe partition function on every cover always provides an upper bound on the Bethe partition function of the base graph. In addition, the reweighted free energy is always convex for an appropriate choice of parameters $\rho_{TRBP}$, which means that $Z_{B,\rho_{TRBP}}(G; \phi, \psi)^2 = Z_{B,\rho_{TRBP}}(G^2; \phi^{G^2}, \psi^{G^2})$. Consequently,
$$Z_{B,\rho_{TRBP}}(G; \phi, \psi)^2 \geq Z(G^2; \phi^{G^2}, \psi^{G^2}) \geq Z_B(G^2; \phi^{G^2}, \psi^{G^2}) \geq Z_B(G; \phi, \psi)^2. \qquad (1)$$
Because the graph cover $G^2$ is switching log-supermodular, the convergence properties of BP are better [5], and we can always apply the PTAS of Weller and Jebara [3] to $(G^2; \phi^{G^2}, \psi^{G^2})$ in order to obtain an upper bound on the Bethe partition function of the original model. That is, by forming the special graph cover $G^2$, we accomplished our stated goal of deriving an algorithm that produces better estimates of the partition function than TRBP but has better convergence properties than BP. We examine the convergence properties experimentally in Section 5.
Before we evaluate the empirical properties of this strategy, observe that (1) holds for the MAP inference problem as well. In the zero temperature limit, computing the Bethe partition function is equivalent to minimizing the energy over the local marginal polytope. Many provably convergent message-passing algorithms have been designed for this specific task [17, 18, 19, 1].
By Theorem 3.3, the MAP solution on $(G^2; \phi^{G^2}, \psi^{G^2})$ is at least as good as the MAP solution on the original graph. The problem of finding the MAP solution for a log-supermodular pairwise binary graphical model is exactly solvable in strongly polynomial time using max-flow
[20, 21]. We can show that the optimal solution to the Bethe approximation in the zero temperature limit is attained as an integral assignment on this specific 2-cover. The argument goes as follows. The graphical model $(G^2; \phi^{G^2}, \psi^{G^2})$ is switching log-supermodular. By Theorem 2.4, in the zero temperature limit, no MAP solution on any cover of $(G^2; \phi^{G^2}, \psi^{G^2})$ can attain a higher value of the objective function. This means that
$$\lim_{T \to 0} Z_B(G^2; (\phi^{G^2})^{1/T}, (\psi^{G^2})^{1/T})^T = \max_{x_{G^2}} f^{(G^2; \phi^{G^2}, \psi^{G^2})}(x_{G^2}).$$
By (1), the Bethe approximation on $(G^2; \phi^{G^2}, \psi^{G^2})$ is at least as good as the Bethe approximation on the original problem. In fact, they are equivalent in the zero temperature limit: the only part of the Bethe approximation that is not necessarily convex in $\tau$ is the entropy approximation, which becomes negligible as $T \to 0$.
As a consequence, we can compute the optimum of the Bethe free energy in the zero temperature limit in polynomial time without relying on convergent message-passing algorithms. This is particularly interesting as the local marginal polytope for pairwise binary graphical models has an integer persistence property. Given any fractional optimum $\tau$ of the energy, $U$, over the local marginal polytope, if $\tau_i(0) > \tau_i(1)$, then there exists an integer optimum $\tau'$ in the marginal polytope such that $\tau'_i(0) > \tau'_i(1)$ [22]. A similar result holds when the strict inequality is reversed. Therefore, we can compute both the Bethe optimum and partial solutions to the exact MAP inference problem simply by solving a max-flow problem over $(G^2; \phi^{G^2}, \psi^{G^2})$.
In this restricted setting, the 2-cover $G^2$ is essentially the same as the graph construction produced
as part of the quadratic pseudo-boolean optimization (QPBO) algorithm in the computer vision
community [23]. In this sense, we can view the technique presented in this work as a generalization
of QPBO to approximate the partition function of pairwise binary graphical models.
5 Experimental Results
In this section, we present several experimental results for the above procedure. For the experiments,
we used a standard implementation of reweighted, asynchronous message passing starting from a
random initialization and a damping factor of .9. We test the performance of these algorithms on
Ising models with a randomly selected external field and various interaction strengths on the edges.
We do not use the convergent version of TRBP as the message update order is graph dependent
and not as easily parallelizable as the reweighted message-passing algorithm [1]. In addition, alternative message-passing schemes that guarantee convergence tend to converge slower than damped
reweighted message passing [24]. In some cases where the TRBP parameter choices do not converge, additional damping does help but does not allow convergence within the specified number of
iterations.
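For reference, a generic damped, asynchronous sum-product loop in the spirit of this setup might look as follows; this is a textbook sketch under assumptions of our own (strictly positive potentials, random initialization, damping 0.9, convergence when no message moves by more than $10^{-8}$), not the authors' implementation:

```python
import random

def damped_bp(n, node_pot, edge_pot, damping=0.9, tol=1e-8, max_iters=20000):
    nbrs = {i: [] for i in range(n)}
    for (i, j) in edge_pot:
        nbrs[i].append(j)
        nbrs[j].append(i)

    def psi(i, j, xi, xj):  # look up the edge potential in either orientation
        return edge_pot[(i, j)][xi][xj] if (i, j) in edge_pot else edge_pot[(j, i)][xj][xi]

    msg = {(i, j): [random.random() + 0.5, random.random() + 0.5]
           for i in nbrs for j in nbrs[i]}
    for it in range(max_iters):
        delta = 0.0
        for (i, j) in msg:  # in-place updates give an asynchronous schedule
            new = []
            for xj in (0, 1):
                s = 0.0
                for xi in (0, 1):
                    prod = node_pot[i][xi] * psi(i, j, xi, xj)
                    for k in nbrs[i]:
                        if k != j:
                            prod *= msg[(k, i)][xi]
                    s += prod
                new.append(s)
            z = new[0] + new[1]
            damped = [damping * old + (1 - damping) * fresh / z
                      for old, fresh in zip(msg[(i, j)], new)]
            delta = max(delta, max(abs(a - b) for a, b in zip(damped, msg[(i, j)])))
            msg[(i, j)] = damped
        if delta < tol:
            return it + 1, msg  # number of full sweeps until convergence
    return None, msg  # did not converge
```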
The first experiment was conducted on a complete graph on four nodes. The convergence properties of BP have been studied both theoretically and empirically by Mooij and Kappen [5]. As expected, TRBP provides a looser bound on the partition function than BP on the 2-cover, and both typically perform worse in terms of estimation than BP on the original graph (when BP converges there). The experimental results are described in Figure 3. In all cases, the algorithms were run until the messages in consecutive time steps differed by less than $10^{-8}$ or until more than 20,000 iterations were performed (a single iteration consists of updating all of the messages in the model). In general, BP on the 2-cover construction converges more quickly than both BP and TRBP on the original graph. BP failed to converge as the interaction strength decreased past $-0.9$. The number of iterations required for convergence of BP on the 2-cover has a spike at the first interaction strength such that $Z_B(G) \neq \sqrt{Z_B(G^2)}$. Empirically, this occurs because of the appearance of new BP fixed points on the 2-cover that are close to the BP fixed point on the original graph. As the interaction strength increases past this point, the new fixed points further separate from the old fixed points and the algorithm converges significantly faster.
Our second set of experiments evaluates the practical performance of these three message-passing
schemes for Ising models on frustrated grid graphs (which arise in computer vision problems), subnetworks of the Epinions social network (the specific subnetworks tested can be found in Appendix
D of the supplementary material), and simple four layer graphical models with five nodes per layer
Figure 3: Plots of the log partition function (left, log Z versus $-J$) and the number of iterations to converge (right) for the different algorithms on a complete graph on four nodes with no external field, as the strength of the negative edges goes from 0 to $-2$. For TRBP, $\rho_{ij} = .5$ for all $(i,j) \in E$. The dashed black line is the ground truth.
Graph           a    BP     TRBP   BP 2-cover   BP Iter.   TRBP Iter.   BP 2-cover Iter.
Grid            1    100%   100%   95%          44.62      110.41       222.99
Grid            2    15%    30%    100%         210        815.3        44.14
Grid            4    1%     0%     100%         219        -            29.59
EPIN1           1    47%    0%     100%         63.53      -            21.12
EPIN1           2    37%    0%     100%         90.1       -            16.19
EPIN1           4    38%    0%     100%         93.63      -            15.9
EPIN2           1    41%    0%     100%         51.8       -            15.12
EPIN2           2    50%    0%     99%          42.46      -            14.84
EPIN2           4    53%    0%     100%         86.66      -            14.93
Deep Networks   1    61%    0%     100%         89.2       -            16.67
Deep Networks   2    61%    0%     100%         30.66      -            16.82
Deep Networks   4    60%    0%     100%         24.88      -            18.17
Figure 4: Percent of samples on which each algorithm converged within 1000 iterations, and the average number of iterations to convergence, over 100 samples of edge weights in $[-a, a]$ for the designated graphs. For TRBP, performance was poor independent of the spanning trees selected.
similar to those used to model "deep" belief networks (layer $i$ and layer $i+1$ form a complete bipartite graph and there are no intralayer edges). In the Epinions network, the pairwise interactions correspond to trust relationships. If our goal was to find the most trusted users in the network, then we could, for example, compute the marginal probability that each user is trusted and then rank the users by these probabilities. For each of these models, the edge weights are drawn uniformly at random from the interval $[-a, a]$. The performance of BP, TRBP, and BP on the 2-cover continues to behave as it did for the simple four-node model: as $a$ increases, BP fails to converge while BP on the 2-cover converges much faster and more frequently than the other methods. Here, convergence was required to an accuracy of $10^{-8}$ within 1,000 iterations. The results for the different graphs appear
in Figure 4. Notably, both BP and TRBP perform poorly on the real networks from the Epinions
data set.
Acknowledgments
This work was supported in part by NSF grants IIS-1117631, CCF-1302269 and IIS-1451500.
References
[1] T. Meltzer, A. Globerson, and Y. Weiss. Convergent message passing algorithms: a unifying view. In Proc. 25th Uncertainty in Artificial Intelligence (UAI), Montreal, Canada, 2009.
[2] A. Weller and T. Jebara. Bethe bounds and approximating the global optimum. In Sixteenth International Conference on Artificial Intelligence and Statistics (AISTATS), 2013.
[3] A. Weller and T. Jebara. Approximating the Bethe partition function. In Uncertainty in Artificial Intelligence (UAI), 2014.
[4] N. Taga and S. Mase. On the convergence of loopy belief propagation algorithm for different update rules. IEICE Trans. Fundam. Electron. Commun. Comput. Sci., E89-A(2):575–582, Feb. 2006.
[5] J. M. Mooij and H. J. Kappen. Sufficient conditions for convergence of the sum-product algorithm. Information Theory, IEEE Transactions on, 53(12):4422–4437, Dec. 2007.
[6] M. Bayati, C. Borgs, J. Chayes, and R. Zecchina. Belief propagation for weighted b-matchings on arbitrary graphs and its relation to linear programs with integer solutions. SIAM Journal on Discrete Mathematics, 25(2):989–1011, 2011.
[7] N. Ruozzi and S. Tatikonda. Message-passing algorithms for quadratic minimization. Journal of Machine Learning Research, 14:2287–2314, 2013.
[8] P. O. Vontobel. Counting in graph covers: A combinatorial characterization of the Bethe entropy function. Information Theory, IEEE Transactions on, Jan. 2013.
[9] P. O. Vontobel and R. Koetter. Graph-cover decoding and finite-length analysis of message-passing iterative decoding of LDPC codes. CoRR, abs/cs/0512078, 2005.
[10] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Constructing free-energy approximations and generalized belief propagation algorithms. Information Theory, IEEE Transactions on, 51(7):2282–2312, July 2005.
[11] N. Ruozzi. The Bethe partition function of log-supermodular graphical models. In Neural Information Processing Systems (NIPS), Lake Tahoe, NV, Dec. 2012.
[12] N. Ruozzi. Beyond log-supermodularity: Lower bounds and the Bethe partition function. In Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence (UAI-13), pages 546–555, Corvallis, Oregon, 2013. AUAI Press.
[13] Y. Crama and P. L. Hammer. Boolean Functions: Theory, Algorithms, and Applications, volume 142. Cambridge University Press, 2011.
[14] D. Schlesinger. Exact solution of permuted submodular MinSum problems. In Energy Minimization Methods in Computer Vision and Pattern Recognition (EMMCVPR), pages 28–38. Springer, 2007.
[15] F. Harary. On the notion of balance of a signed graph. The Michigan Mathematical Journal, 2(2):143–146, 1953.
[16] Y. Watanabe. Uniqueness of belief propagation on signed graphs. In Advances in Neural Information Processing Systems, pages 1521–1529, 2011.
[17] T. Werner. A linear programming approach to max-sum problem: A review. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 29(7):1165–1179, 2007.
[18] A. Globerson and T. S. Jaakkola. Fixing max-product: Convergent message passing algorithms for MAP LP-relaxations. In Proc. 21st Neural Information Processing Systems (NIPS), Vancouver, B.C., Canada, 2007.
[19] M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. MAP estimation via agreement on (hyper)trees: Message-passing and linear programming. Information Theory, IEEE Transactions on, 51(11):3697–3717, Nov. 2005. ISSN 0018-9448. doi: 10.1109/TIT.2005.856938.
[20] D. M. Greig, B. T. Porteous, and A. H. Seheult. Exact maximum a posteriori estimation for binary images. Journal of the Royal Statistical Society, Series B (Methodological), pages 271–279, 1989.
[21] V. Kolmogorov and R. Zabih. What energy functions can be minimized via graph cuts? In Computer Vision – ECCV 2002, pages 65–81. Springer, 2002.
[22] V. Kolmogorov and M. Wainwright. On the optimality of tree-reweighted max-product message-passing. In Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence (UAI-05), pages 316–323, Arlington, Virginia, 2005. AUAI Press.
[23] V. Kolmogorov and C. Rother. Minimizing nonsubmodular functions with graph cuts – a review. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 29(7):1274–1279, July 2007.
[24] A. Globerson and T. S. Jaakkola. Convergent propagation algorithms via oriented trees. In Proc. 23rd Uncertainty in Artificial Intelligence (UAI), 2007.
[25] A. W. Marshall and I. Olkin. Inequalities: Theory of Majorization and its Applications. Academic Press, New York, 1979.
[26] L. Lovász. Submodular functions and convexity. In A. Bachem, B. Korte, and M. Grötschel, editors, Mathematical Programming: The State of the Art, pages 235–257. Springer Berlin Heidelberg, 1983.
[27] M. Richardson, R. Agrawal, and P. Domingos. Trust management for the semantic web. In Dieter Fensel, Katia Sycara, and John Mylopoulos, editors, The Semantic Web – ISWC 2003, volume 2870 of Lecture Notes in Computer Science, pages 351–368. Springer Berlin Heidelberg, 2003.
5,007 | 5,533 | Mode Estimation for High Dimensional Discrete Tree Graphical Models
Chao Chen
Department of Computer Science
Rutgers, The State University of New Jersey
Piscataway, NJ 08854-8019
[email protected]
Han Liu
Department of Operations Research
and Financial Engineering
Princeton University, Princeton, NJ 08544
[email protected]
Dimitris N. Metaxas
Department of Computer Science
Rutgers, The State University of New Jersey
Piscataway, NJ 08854-8019
[email protected]
Tianqi Zhao
Department of Operations Research
and Financial Engineering
Princeton University, Princeton, NJ 08544
[email protected]
Abstract
This paper studies the following problem: given samples from a high dimensional
discrete distribution, we want to estimate the leading $(\delta, \rho)$-modes of the underlying distribution. A point is defined to be a $(\delta, \rho)$-mode if it is a local optimum of the density within a $\delta$-neighborhood under metric $\rho$. As we increase the "scale" parameter $\delta$, the neighborhood size increases and the total number of modes monotonically decreases. The sequence of the $(\delta, \rho)$-modes reveals intrinsic topographical information of the underlying distribution. Though the mode
finding problem is generally intractable in high dimensions, this paper unveils
that, if the distribution can be approximated well by a tree graphical model, mode
characterization is significantly easier. An efficient algorithm with provable theoretical guarantees is proposed and applied to tasks such as data analysis and
multiple predictions.
1 Introduction
Big Data challenges modern data analysis in terms of large dimensionality, insufficient samples, and inhomogeneity. To handle these challenges, new methods for visualizing and exploring complex datasets are crucially needed. In this paper, we develop a new method for computing diverse modes of an unknown discrete distribution. Our method is applicable in many fields, such as computational biology, computer vision, etc. More specifically, our method aims to find a sequence of $(\delta, \rho)$-modes, which are defined as follows:
Definition 1 ($(\delta, \rho)$-modes). A point is a $(\delta, \rho)$-mode if and only if its probability is higher than that of all points within distance $\delta$ under a distance metric $\rho$.
With a metric $\rho(\cdot)$ given, the $\delta$-neighborhood of a point $x$, $N_\delta(x)$, is defined as the ball centered at $x$ with radius $\delta$. Varying $\delta$ from small to large, we can examine the topology of the underlying distribution at different scales. Therefore $\delta$ is also called the scale parameter. When $\delta = 0$, $N_\delta(x) = \{x\}$, so every point is a mode. When $\delta = \infty$, $N_\delta(x)$ is the whole domain, denoted by $\mathcal{X}$, so the maximum a posteriori is the only mode. As $\delta$ increases from zero to infinity, the $\delta$-neighborhood of $x$ monotonically grows and the set of modes, denoted by $M_\delta$, monotonically decreases. Therefore as $\delta$ increases, the sets $M_\delta$ form a nested sequence, which can be viewed as a multi-scale description of the underlying probability landscape. See Figure 1 for an illustrative example. In this paper, we will use the Hamming distance, $\rho_H$, i.e., the number of variables at which two points disagree. Other distance metrics, e.g., the $L_2$ distance $\rho_{L_2}(x, x') = \|x - x'\|_2$, are also possible but with more computational challenges.
The concept of modes can be justified by many practical problems. We mention the following
two motivating applications: (1) Data analysis: modes of multiple scales provide a comprehensive
geometric description of the topography of the underlying distribution. In the low-dimensional
continuous domain, such tools have been proposed and used for statistical data analysis [20, 17, 3].
One of our goals is to carry these tools to the discrete and high dimensional setting. (2) Multiple
predictions: in applications such as computational biology [9] and computer vision [2, 6], instead of
one, a model generates multiple predictions. These predictions are expected to have not only high
probability but also high diversity. These solutions are valid hypotheses which could be useful in
other modules down the pipeline. In this paper we address the computation of modes, formally,
Problem 1 (M-modes). For all $\delta$'s, compute the $M$ modes with the highest probabilities in $M_\delta$.
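On toy domains, Problem 1 can be solved by brute force, which is useful for grounding the definitions even though it is exponential in $D$; the sketch below (ours, not the paper's) enumerates $M_\delta$ under the Hamming metric and returns the top $M$ modes, assuming distinct probabilities:

```python
from itertools import product

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def top_modes(p, D, delta, M=None):
    """p maps each labeling in {0,1}^D to its probability; returns the
    (delta, Hamming)-modes sorted by decreasing probability."""
    points = list(product((0, 1), repeat=D))
    out = [x for x in points
           if all(p[x] > p[y] for y in points if 0 < hamming(x, y) <= delta)]
    out.sort(key=lambda x: -p[x])
    return out if M is None else out[:M]
```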
This problem is challenging. In the continuous setting, one often starts from random positions,
estimates the gradient of the distribution and walks along it towards the nearby mode [8]. However,
this gradient-ascent approach is limited to low-dimensional distributions over continuous domains.
In discrete domains, gradients are not defined. Moreover, a naive exhaustive search is computationally infeasible as the total number of points is exponential in the dimension. In fact, even deciding whether a given point is a mode is expensive, as the neighborhood has exponential size.
In this paper, we propose a new approach to compute these discrete $(\delta, \rho)$-modes. We show that
the problem becomes computationally tractable when we restrict to distributions with tree factor
structures. We explore the structure of the tree graphs and devise a new algorithm to compute
the top M modes of a tree-structured graphical model. Inspired by the observation that a global
mode is also a mode within smaller subgraphs, we show that all global modes can be discovered
by examining all local modes and their consistent combinations. Our algorithm first computes local
modes, and then computes the high probability combinations of these local modes using a junction
tree approach. We emphasize that the algorithm itself can be used in many graphical model based
methods, such as conditional random field [10], structured SVM [22], etc.
When the distribution is not expressed as a factor graph, we will first estimate the tree-structured
factor graph using the algorithm of Liu et al. [13]. Experimental results demonstrate the accuracy
and efficiency of our algorithm. More theoretical guarantees for our algorithm can be found in [7].
Related work. Modes of distributions have been studied in continuous settings. Silverman [21]
devised a test of the null hypothesis of whether a kernel density estimation has a certain number
of modes or less. Modes can be used in clustering [8, 11]. For each data point, a monotonically increasing path is computed using a gradient-ascent method. All data points whose gradient paths converge to the same mode are assigned the same class label. Modes can also be used to help decide the
number of mixture components in a mixture model, for example as the initialization of the maximum
likelihood estimation [11, 15]. The topographical landscape of distributions has been studied and
used in characterizing topological properties of the data [4, 20, 17]. Most of these approaches
assume a kernel density estimation model. Modes are detected by approximating the gradient using
k-nearest neighbors. This approach is known to be inaccurate for high dimensional data.
We emphasize that the multi-scale view of a function has been used broadly in computer vision. By convolving an image with Gaussian kernels of different widths, we obtain different levels of detail. This theory, called the scale-space theory [25, 12], is used as the fundamental principle
of most state-of-the-art image feature extraction techniques [14, 16]. This multi-scale view has
been used in statistical data analysis by Chaudhuri and Marron [3]. Chen and Edelsbrunner [5]
quantitatively measured the topographical landscape of an image at different scales.
Chen et al. [6] proposed a method to compute modes of a simple chain model. However, restricting to a simple chain will limit our mode prediction accuracy. A simple chain model has much less
flexibility than tree-factored models. Even if the distribution has a chain structure, recovering the
chain from data is computationally intractable: the problem requires finding the chain with maximal
total mutual information, and thus is equivalent to the NP-hard travelling salesman problem.
Figure 1: An illustration of modes of different scales. Each vertical bar corresponds to an element; the height corresponds to its probability. Left: when $\delta = 1$, there are three modes (red). Middle: when $\delta = 4$, only two modes are left. Right: the multi-scale view of the landscape across $\delta = 0, 1, 4, 7$.
2 Background
Graphical models. We briefly introduce graphical models. Please refer to [23, 19] for more details.
The graphical model is a powerful tool to model the joint distribution of a set of interdependent
random variables. The distribution is encoded in a graph $G = (V, E)$ and a potential function $f$. The set of vertices/nodes $V$ corresponds to the set of discrete variables $i \in [1, D]$, where $D = |V|$. A node $i$ can be assigned a label $x_i \in L$. A label configuration of all variables $x = (x_1, \ldots, x_D)$ is called a labeling. We denote by $\mathcal{X} = L^D$ the domain of all labelings. The potential function $f : \mathcal{X} \to \mathbb{R}$ assigns to each labeling a real value that is, up to an additive constant, the negative logarithm of its probability: $p(x) = \exp(-f(x) - A)$, where $A = \log \sum_{x \in \mathcal{X}} \exp(-f(x))$ is the log-partition function. Thus the maximal modes of the distribution and the minimal modes of $f$ have a one-to-one correspondence. Assuming these variables satisfy the Markov properties, the potential function can be written as
$$f(x) = \sum_{(i,j) \in E} f_{i,j}(x_i, x_j), \qquad (2.1)$$
where $f_{i,j} : L \times L \to \mathbb{R}$ is the potential function for edge $(i,j)$.¹ For convenience, we assume any two different labelings have different potential function values.
We define the following notations for convenience. A vertex subset, $V' \subseteq V$, induces a subgraph consisting of $V'$ together with all edges whose both ends are within $V'$. In this paper, all subgraphs are vertex-induced. Therefore, we abuse the notation and denote both the subgraph and the vertex subset by the same symbol.
We call a labeling of a subgraph $B$ a partial labeling. For a given labeling $y$, we may denote by $y_B$ its label configuration on the vertices of $B$. We say the distance between two partial labelings $x_B$ and $y_{B'}$ is equal to the Hamming distance between the two within the intersection of the two subgraphs $\hat{B} = B \cap B'$; formally, $\rho(x_B, y_{B'}) = \rho(x_{\hat{B}}, y_{\hat{B}})$. We denote by $f_B(y_B)$ the potential of the partial labeling, which is only evaluated over edges within $B$. When the context is clear, we drop the subscript $B$ and write $f(y_B)$.
Tree density estimation. In this paper, we focus on tree-structured graphical models. A distribution that is Markov to a tree structure has the following factorization:
$$P(X = x) = p(x) = \prod_{(i,j) \in E} \frac{p(x_i, x_j)}{p(x_i)\, p(x_j)} \prod_{k \in V} p(x_k). \qquad (2.2)$$
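Once the tree and its marginals are in hand, the factorization (2.2) is immediate to evaluate; a minimal sketch (names ours):

```python
# p_uni[k][x] is the univariate marginal p(x_k = x); p_bi[(i, j)][x][y] is the
# bivariate marginal on tree edge (i, j).
def tree_prob(x, tree_edges, p_uni, p_bi):
    prob = 1.0
    for k, pk in enumerate(p_uni):
        prob *= pk[x[k]]
    for (i, j) in tree_edges:
        prob *= p_bi[(i, j)][x[i]][x[j]] / (p_uni[i][x[i]] * p_uni[j][x[j]])
    return prob
```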
It is easy to see that the potential function can be written in the form (2.1). In the case when the input is a set of samples, we will first use the tree density estimation algorithm [13] to estimate the graphical model. The oracle tree distribution is the one on the space of all tree distributions that minimizes the Kullback-Leibler (KL) divergence between itself and the true density, that is, $q^* = \operatorname{argmin}_{q \in \mathcal{P}_T} D(p^* \| q)$, where $\mathcal{P}_T$ is the family of distributions supported on a tree graph, $p^*$ is the true density, and $D(p \| q) = \sum_{x \in \mathcal{X}} p(x)(\log p(x) - \log q(x))$ is the KL divergence. It is proved [1] that $q^*$ has the same marginal univariate and bivariate distributions as $p^*$. Hence to recover $q^*$, we only need to recover the structure of the tree. Denote by $E^*$ the edge set of the oracle tree. Simple calculation shows that $D(p^* \| q^*) = -\sum_{(i,j) \in E^*} I_{ij} + \text{const}$, where
$$I_{ij} = \sum_{x_i=1}^{L} \sum_{x_j=1}^{L} p^*(x_i, x_j)\big(\log p^*(x_i, x_j) - \log p^*(x_i) - \log p^*(x_j)\big) \qquad (2.3)$$
is called the mutual information between nodes $i$ and $j$. Therefore we can apply Kruskal's maximum spanning tree algorithm to obtain $E^*$, with edge weights being the mutual information.
In reality, we do not know the true marginal univariate and bivariate distributions. We thus compute estimators $\hat{I}_{ij}$ from the data set $X^{(1)}, \ldots, X^{(n)}$ by replacing $p^*(x_i, x_j)$ and $p^*(x_i)$ in (2.3) with their estimates $\hat{p}(x_i, x_j) = \frac{1}{n} \sum_{s=1}^{n} \mathbb{1}\{X_i^{(s)} = x_i, X_j^{(s)} = x_j\}$ and $\hat{p}(x_i) = \frac{1}{n} \sum_{s=1}^{n} \mathbb{1}\{X_i^{(s)} = x_i\}$. The tree estimator is thus obtained by Kruskal's algorithm:
$$\hat{T}_n = \operatorname{argmax}_T \sum_{(i,j) \in E(T)} \hat{I}_{ij}. \qquad (2.4)$$
By definition, the potential function on each edge can be estimated similarly using the estimated marginal univariate and bivariate distributions. By (2.1), we have $\hat{f}(x) = \sum_{(i,j) \in E(\hat{T})} \hat{f}_{i,j}(x_i, x_j)$, where $\hat{T}$ is the estimated tree obtained by Kruskal's algorithm.
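Putting (2.3) and (2.4) together gives, in essence, the Chow-Liu procedure; the following sketch assumes binary data in an $n \times D$ NumPy array and uses networkx's Kruskal-based maximum spanning tree as one reasonable stand-in for the estimator described here:

```python
import numpy as np
import networkx as nx

def estimate_tree(X):
    """Plug-in mutual information between all pairs, then a maximum
    spanning tree; returns the estimated edge set E(T_hat)."""
    n, D = X.shape
    G = nx.Graph()
    for i in range(D):
        for j in range(i + 1, D):
            I = 0.0
            for xi in (0, 1):
                for xj in (0, 1):
                    pij = np.mean((X[:, i] == xi) & (X[:, j] == xj))
                    if pij > 0:
                        pi = np.mean(X[:, i] == xi)
                        pj = np.mean(X[:, j] == xj)
                        I += pij * (np.log(pij) - np.log(pi) - np.log(pj))
            G.add_edge(i, j, weight=I)
    return list(nx.maximum_spanning_tree(G, algorithm="kruskal").edges())
```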
¹ For convenience, we drop unary potentials f_i in this paper. Note that any potential function with unary
potentials can be rewritten as a potential function without them.
Figure 2: Left: The junction tree with radius r = 2. We show the geodesic balls of three supernodes. In each
geodesic ball, the center is red. The boundary vertices are blue. The interior vertices are black and red.
Right-bottom: Candidates of a geodesic ball. Each column corresponds to candidates of one boundary labeling.
Solid and empty vertices represent labels zero and one. Right-top: A geodesic ball with radius r = 3.
3 Method

We present the first algorithm to compute M_δ for a tree-structured graph. To compute modes of all
scales, we go through δ's from small to large. The iteration stops at a δ with only a single mode.
We first present a polynomial algorithm for the verification problem: deciding whether a given
labeling is a mode (Sec. 3.1). However, this algorithm is insufficient for computing the top M modes
because the space of labelings is exponential size. To compute global modes, we decompose the
problem into computing modes of smaller subgraphs, which are called local modes. Because of the
bounded subgraph size, local modes can be solved efficiently. In Sec. 3.2, we study the relationship
between global and local modes. In Sec. 3.3 and Sec. 3.4, we give two different methods to compute
local modes, depending on different situations.
3.1 Verifying whether a labeling is a mode
To verify whether a given labeling y is a mode, we check whether there is another labeling within
N_δ(y) with a smaller potential. We compute the labeling within the neighborhood with the minimal
potential, y* = argmin_{z∈N_δ(y)} f(z). The given labeling y is a mode if and only if y* = y.

We present a message-passing algorithm. We select an arbitrary node as the root, and thus a
corresponding child-parent relationship between any two adjacent nodes. We compute messages
from leaves to the root. Denote by T_j the subtree rooted at node j. The message from vertex i
to j, MSG_{i→j}(ℓ_i, κ), is the minimal potential one can achieve within the subtree T_i given a fixed
label ℓ_i at i and a constraint that the partial labeling of the subtree is no more than κ away from y.
Formally,

    MSG_{i→j}(ℓ_i, κ) = min_{z_{T_i} : z_i = ℓ_i, Δ(z_{T_i}, y) ≤ κ} f(z_{T_i}),

where ℓ_i ∈ L and κ ∈ [0, δ]. This message cannot be computed until the messages from all children
of i have been computed. For ease of exposition, we add a pseudo vertex s as the parent of the root, r.
By definition, min_{ℓ_r} MSG_{r→s}(ℓ_r, δ) is the potential of the desired labeling, y*. Using the standard
backtracking strategy of message passing, we can recover y*. Please refer to [7] for details of the
computation of each individual message. For convenience we call this procedure Is-a-Mode. This
procedure and its variations will be used later.
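For intuition, a brute-force version of this verification is easy to state: enumerate every labeling within distance δ of y and compare potentials. The sketch below (our own illustration, exponential in δ and only meant for sanity checks on small instances) does exactly that; the paper's message-passing procedure achieves the same result in polynomial time.

    from itertools import combinations, product

    def is_mode_bruteforce(y, f, L, delta):
        """Check whether labeling y (a tuple) is a delta-mode of potential f.

        f maps a full labeling to its potential; L is the number of labels.
        Enumerates every z with 1 <= Hamming(z, y) <= delta.
        """
        D, fy = len(y), f(y)
        for k in range(1, delta + 1):
            for idx in combinations(range(D), k):       # vertices to change
                choices = [[l for l in range(L) if l != y[i]] for i in idx]
                for new_labels in product(*choices):    # their new labels
                    z = list(y)
                    for i, l in zip(idx, new_labels):
                        z[i] = l
                    if f(tuple(z)) < fy:
                        return False
        return True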
3.2 Local and global modes
Given a graph G and a collection of its subgraphs B, we show that under certain conditions, there
is a tight connection between the modes of these subgraphs and the modes of G. In particular, any
consistent combination of these local modes is a global mode, and vice versa.
Simply considering the modes of a subgraph B is insufficient. A mode of B with small potential
may cause a big penalty when it is extended to a labeling of the whole graph. Therefore, when
defining a local mode, we select a boundary region of the subgraph and consider all possible label
configurations of this boundary region. Formally, we divide the vertex set of B into two disjoint
subsets, the boundary ∂B and the interior int(B), so that any path connecting an interior vertex
u ∈ int(B) and an outside vertex v ∉ B has to pass through at least one boundary vertex w ∈ ∂B.
See Figure 2(left) for examples of B. Similar to the definition of a global mode, we define a local
mode as the partial labeling with the smallest potential in its δ-neighborhood:

Definition 2 (local modes). A partial labeling x_B is a local mode w.r.t. the δ-neighborhood if and only
if there is no other partial labeling y_B which (C1) has a smaller potential, f(y_B) < f(x_B); (C2) is
within δ distance from x_B, Δ(y_B, x_B) ≤ δ; and (C3) has the same boundary labeling, y_{∂B} = x_{∂B}.

We denote by M_δ^B the space of local modes of the subgraph B. Given a set of subgraphs B
together with an interior-boundary decomposition for each subgraph, we have the following theorem.
Theorem 3.1 (local-global). Suppose any connected subgraph G′ ⊆ G of size δ is contained within
int(B) of some B ∈ B. A labeling x of G is a global mode if and only if for every B ∈ B, the
corresponding partial labeling x_B is a local mode.
Proof. The necessity is obvious since a global mode is a local mode within every subgraph. Note
that necessity is no longer true if the restriction on ∂B (C3 in Definition 2) is relaxed. Next we
show the sufficiency by contradiction. Suppose a labeling x is a local mode within every subgraph,
but is not a global mode. By definition, there is y ∈ N_δ(x) with a smaller potential than x. We may
assume y and x disagree within a single connected subgraph: if they disagree within multiple connected
components, we can always find y′ ∈ N_δ(x) with a smaller potential which disagrees with x within only one
of these components. The subgraph on which x and y disagree must be contained in the
interior of some B ∈ B. Thus x_B is not a local mode, due to the existence of y_B. Contradiction.
We say partial labelings of two different subgraphs are consistent if they agree at all common
vertices. Theorem 3.1 shows that there is a bijection between the set of global modes and the set of
consistent combinations of local modes. This enables us to compute global modes by first computing
the local modes of each subgraph and then searching through all their consistent combinations.
Instantiating for a tree-structured graph. For a tree-structured graph with D nodes, let B be
the set of D geodesic balls, centered at the D nodes. Each ball has radius r = ⌊δ/2⌋ + 1. Formally,
we have B_i = {j | dist(i, j) ≤ r}, ∂B_i = {j | dist(i, j) = r}, and int(B_i) = {j | dist(i, j) < r}.
Here dist(i, j) is the number of edges between the two nodes. See Figure 2(left) for examples. It
is not hard to see that any subtree of size δ is contained within int(B_i) for some i. Therefore, the
prerequisite of Theorem 3.1 is guaranteed.
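A minimal sketch of how such balls can be computed with a breadth-first search over the tree (names and representation are our own choice):

    from collections import deque

    def geodesic_ball(adj, center, delta):
        """Return (ball, boundary, interior) for radius r = delta // 2 + 1.

        adj: adjacency list {node: [neighbors]} of the tree; distances
        are edge counts computed by BFS from the center.
        """
        r = delta // 2 + 1
        dist = {center: 0}
        queue = deque([center])
        while queue:
            u = queue.popleft()
            if dist[u] == r:        # do not expand beyond the boundary
                continue
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        ball = set(dist)
        boundary = {v for v, d in dist.items() if d == r}
        return ball, boundary, ball - boundary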
We construct a junction tree to combine the set of all consistent local modes. It is constructed
as follows: Each supernode of the junction tree corresponds to a geodesic ball. Two supernodes are
neighbors if and only if their centers are neighbors in the original tree. See Figure 2(left). Let the
label set of a supernode be its corresponding local modes, as defined in Definition 2. We construct
a potential function of the junction tree so that a labeling of the junction tree has finite potential if
and only if the corresponding local modes are consistent. Furthermore, whenever the potential of a
junction tree labeling is finite, it is equal to the potential of the corresponding labeling in the original
graph. This construction can be achieved using a standard junction tree construction algorithm, as
long as the local mode set of each ball is given.
The M-modes problem is then reduced to computing the M lowest-potential labelings of the
junction tree. This is the M-best labeling problem and can be solved efficiently using Nilsson's
algorithm [18]. The algorithm of this section is summarized in Procedure Compute-M-Modes.
Procedure 1 Compute-M-Modes
Input: A tree G, a potential function f, and a scale δ
Output: The M modes of lowest potential
1: Construct geodesic balls B = {B_r(c) | c ∈ V}, where r = ⌊δ/2⌋ + 1
2: for all B ∈ B do
3:   M_δ^B ← the set of local modes of B
4: Construct a junction tree (Figure 2). The label set of each supernode is its set of local modes.
5: Compute the M lowest-potential labelings of the junction tree, using Nilsson's algorithm.
3.3 Computing local modes via enumeration
It remains to compute all local modes of each geodesic ball B. We give two different algorithms in
Sec. 3.3 and 3.4. Both methods have two steps. First, compute a set of candidate partial labelings.
Second, choose from these candidates the ones that satisfy Definition 2. In both methods, it is
essential to ensure the candidate set contains all local modes.
Computing a candidate set. The first method enumerates through all possible labelings of
the boundary. For each boundary labeling x_{∂B}, we compute a corresponding subset of candidates.
Each candidate is the partial labeling of minimal potential with boundary labeling x_{∂B} and a
fixed label ℓ at the center c. This subset has L elements, since c has L labels. Formally, the candidate
subset for a fixed boundary labeling x_{∂B} is C_B(x_{∂B}) = {argmin_{y_B} f_B(y_B) | y_{∂B} = x_{∂B}, y_c = ℓ, ℓ ∈ L}.
It can be computed using a standard message-passing algorithm over the tree, using c as the root.

Denote by X_B and X_{∂B} the spaces of all partial labelings of B and ∂B, respectively. The
candidate set we compute is the union of the candidate subsets over all boundary labelings, i.e.,
C_B = ∪_{x_{∂B} ∈ X_{∂B}} C_B(x_{∂B}). See Figure 2(right-bottom) for an example candidate set. We can show that
the computed candidate set C_B contains all local modes of B.
Theorem 3.2. Any local mode y_B belongs to the candidate set C_B.
Before proving the theorem, we formalize an assumption on the geodesic balls.
Assumption 1 (well-centered). We assume that after removing the center from int(B), each connected
component of the remaining graph has a size smaller than δ.
For example, in Figure 2(right-top), a geodesic ball of radius 3 has three connected components
in int(B)\{c}, of size one, two, and three, respectively. Since r = ⌊δ/2⌋ + 1, δ is either four or
five. The ball is well-centered. Since the interior of B is essentially a ball of radius r − 1 = ⌊δ/2⌋,
the assumption is unlikely to be violated, as we observed in practice. In the worst case, when the
assumption is violated, we can still solve the problem by adding additional centers in the middle of
these connected components. Next we prove the theorem.
Proof of Theorem 3.2. We prove by contradiction. Suppose there is a local mode y_B ∉ C_B(x_{∂B})
such that y_{∂B} = x_{∂B}. Let ℓ be the label of y_B at the center c. Let y′_B ∈ C_B(x_{∂B}) be the candidate
with the same label at the center. The two partial labelings then agree at ∂B and at c.
Therefore the two labelings differ on a set of connected subgraphs. Each of these subgraphs has a size
smaller than δ, due to Assumption 1. Since y′_B has a smaller potential than y_B by definition, we can
find a partial labeling y″_B which disagrees with y_B within only one of these components. And y″_B has
a smaller potential than y_B. Therefore y_B cannot be a local mode. Contradiction.
Verifying each candidate. Next, we show how to check whether a candidate is a local mode.
For a given boundary labeling x_{∂B}, we denote by X_B(x_{∂B}) the space of all partial labelings with
fixed boundary labeling x_{∂B}. By definition, a candidate y_B ∈ X_B(x_{∂B}) is a local mode if and
only if there is no other partial labeling in X_B(x_{∂B}) within distance δ from y_B with a smaller potential. The
verification of y_B can be transformed into a global mode verification problem and solved by the
algorithm in Sec. 3.1. We use the subgraph B and its potential to construct a new graph. We need
to ensure that only labelings with the fixed boundary labeling x_{∂B} are considered in this new graph.
This can be done by enforcing each boundary node i ∈ ∂B to have x_i as its only feasible label.
3.4 Computing local modes using local modes of smaller scales
In Sec. 3.3, we computed the candidate set by enumerating all boundary labelings x_{∂B}. In this
subsection, we present an alternative method for when the local modes of scale δ − 1 have been
computed. We construct a new candidate set using the local modes of scale δ − 1. This candidate
set is smaller than the candidate set from the previous subsection and thus leads to a more efficient
algorithm. Since our algorithm computes modes from small scale to large scale, this method can
be used at all scales except δ = 1. The step of verifying whether each candidate is a local mode
is the same as in the previous subsection.

The following notations will prove convenient. Denote by r and r′ the radii of balls for scales δ
and δ − 1, respectively (see Sec. 3.2 for the definition). Denote by B_i and B′_i the balls centered at
node i for scales δ and δ − 1. Let M_δ^{B_i} and M_{δ−1}^{B′_i} be their sets of local modes at scales δ and δ − 1,
respectively. Our idea is to use the M_{δ−1}^{B′_i}'s to compute a candidate set containing M_δ^{B_i}.

Consider two different cases, δ odd and δ even. When δ is odd, r = r′ and B_i = B′_i. By
definition, M_δ^{B_i} ⊆ M_{δ−1}^{B_i} = M_{δ−1}^{B′_i}. We can directly use the local modes of the previous scale as
the candidate set for the current scale. When δ is even, r = r′ + 1. The ball B_i is the union of the
B′_j's for all j adjacent to i, B_i = ∪_{j∈N_i} B′_j, where N_i is the set of neighbors of i. We collect the set
of all consistent combinations of M_{δ−1}^{B′_j} for all j ∈ N_i as the candidate set. This set is a superset of
M_δ^{B_i}, because a local mode at scale δ has to be a local mode at scale δ − 1.
Dropping unused local modes. In practice, we observe that a large number of local modes
do not contribute to any global mode. These unused local modes can be dropped when computing
global modes and when computing local modes of larger scales. To check whether a local mode of B_i can
be dropped, we compare it with all local modes of an adjacent ball B_j, j ∈ N_i. If it is not consistent
with any local mode of B_j, we drop it. We go through all adjacent balls B_j in order to drop as many
local modes as possible.
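The consistency test used for dropping can be spelled out in a few lines; the dict representation of a partial labeling below is our own illustration, not the authors' code.

    def consistent(mode_a, mode_b):
        """Partial labelings as dicts {vertex: label}; consistent iff they
        agree on every vertex the two subgraphs share."""
        return all(mode_a[v] == mode_b[v] for v in mode_a.keys() & mode_b.keys())

    def drop_unused(modes_i, neighbor_mode_sets):
        """Keep a local mode of ball i only if every adjacent ball has at
        least one local mode consistent with it."""
        return [m for m in modes_i
                if all(any(consistent(m, m2) for m2 in modes)
                       for modes in neighbor_mode_sets)]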
Figure 3: Scalability, panels (a)–(d).
3.5 Complexity
There are three steps in our algorithm for each fixed δ: computing candidates, verifying candidates, and computing
the M best labelings of the junction tree. Denote by d the tree degree and by ψ the maximum
number of undropped local modes over any ball B and scale δ. When δ = 1, we use the enumeration
method. Since the ball radius is 1, the ball boundary size is O(d), and there are at most L^d candidates
for each ball. When δ > 1, we use the local modes of scale δ − 1 to construct the candidate
set. Since each ball of scale δ is the union of O(d) balls of scale δ − 1, there are at most ψ^d
candidates per node. The verification takes O(DdLδ²(L + δ)) time per candidate (see [7] for the
complexity analysis of the verification algorithm). Therefore, overall, the computation and verification
of all local modes for all D balls takes O(D²dLδ²(L + δ)(L^d + ψ^d)). The last step runs Nilsson's
algorithm on a junction tree with label size O(ψ), and thus takes O(Dψ² + MDψ + MD log(MD)).
Summing up these complexities gives the final complexity.
Scalability. Even though our algorithm is not polynomial in all relevant parameters, it is efficient
in practice. The complexity is exponential in the tree degree d. However, in practice, we can
enforce an upper bound on the tree degree in the model estimation stage. This way we can treat
d as constant. Another parameter in the complexity is ψ, the maximal number of undropped
local modes of a geodesic ball. When the scale δ is large, ψ could be exponential in the graph size.
However, in practice, we observe that ψ decreases quickly as δ increases. Therefore, our algorithm
can finish in a reasonable time. See Sec. 4 for more discussion.
4 Experiment
To validate our method, we first show the scalability and accuracy of our algorithm on synthetic data.
Furthermore, we demonstrate on biological data how modes can be used as a novel analysis tool.
Quantitative analysis of the modes reveals new insights into the data. This finding is well supported by a
visualization of the modes, which intuitively outlines the topographical map of the distribution. In
all experiments, we choose M to be 500. At bigger scales, there are often fewer than M modes in
total. As mentioned earlier, modes can also be applied to the problem of multiple predictions [7].
Scalability. We randomly generate tree-structured graphical models (tree size D = 200, . . . , 2000,
label size L = 3) and test the speed. For each tree size, we generate 100 random data sets. In Figure
3(a), we show the running time of our algorithm to compute modes of all scales. The running time
is roughly linear in the graph size. In Figure 3(b) we show the average running time for each δ
when the graph size is 200, 1000, and 2000. As we can see, most of the computation time is spent on
computations with δ = 1 and 2. Note that only when δ = 1 is the enumeration method used. When
δ ≥ 2, we reuse the local modes of the previous δ. The algorithm speed depends on the parameter ψ, the
maximum number of undropped local modes over all balls. In Figure 3(c), we show that ψ drops
quickly as the scale increases. We believe this is critical to the overall efficiency of our method. In
Figure 3(d), we show the average number of global modes at different scales.
Accuracy. We randomly generate tree-structured distributions (D = 20, L = 2). We select
the trees with strong modes as ground-truth trees, i.e., those with at least two modes up to δ = 7.
See Figure 4(a) for the average number of modes at different scales over these selected tree models.
Next we sample these trees and then use the samples to estimate a tree model to approximate this
distribution. Finally we compute the modes of the estimated tree and compare them to the modes of the
ground-truth trees.

To evaluate the sensitivity of our method to noise, we randomly flip 0%, 5%, 10%, 15%, and 20%
of the labels of these samples. We compare the number of predicted modes to the number of true modes
for each scale. The error is normalized by the number of true modes. See Figure 4(b). With small
noise, our prediction is accurate except for δ = 1, when the number of true modes is very large. As
the noise level increases, the error increases linearly. We do notice an increase of the error near δ = 7.
This is because at δ = 8, many data sets become unimodal. Predicting two modes then leads to 50% error.
Figure 4: Accuracy, panels (a)–(d). Here n denotes the sample size; the noise level varies across curves.
We also measure the prediction accuracy using the Hausdorff distance between the predicted
modes and the true modes. The Hausdorff distance between two finite point sets X and Y is
defined as max(max_{x∈X} min_{y∈Y} Δ(x, y), max_{y∈Y} min_{x∈X} Δ(x, y)). The result is shown in Figure
4(c). We normalize the error by the tree size D, so the error is between zero and one. The error
again increases linearly w.r.t. the noise level. An increase at δ = 7 is due to the fact that many
data sets change from multiple modes to a single mode. In Figure 4(d), we compare, for the same noise
level, the error when we use different sample sizes. When the sample size is 10K, we have bigger
errors. When the sample size is 80K or 40K, the errors are similar and small.
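A direct transcription of this metric (assuming labelings of equal length and the Hamming distance, as in the text):

    def hamming(x, y):
        """Hamming distance between two labelings of equal length."""
        return sum(a != b for a, b in zip(x, y))

    def hausdorff(X, Y, dist=hamming):
        """Hausdorff distance between two finite sets of labelings."""
        d_xy = max(min(dist(x, y) for y in Y) for x in X)
        d_yx = max(min(dist(x, y) for x in X) for y in Y)
        return max(d_xy, d_yx)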
Biological data analysis. We compute the modes of microarray data of the Arabidopsis thaliana
plant (108 samples, 39 dimensions) [24]. Each gene has three labels, '+', '0', and '−', respectively
denoting over-expression, normal expression, and under-expression of the gene. Based on the data
sample we estimate the tree graph and compute the top modes for different scales δ, using the Hamming
distance. We use multidimensional scaling to map these modes so that their pairwise Hamming
distance is approximated by the L2 distance in ℝ². The result is visualized in Fig. 5 for different
scales. The size of each point is proportional to the log of its probability. Arrows in the figure show
how each mode merges into surviving modes at the larger scale. The graph intuitively shows that there
are two major modes when viewed from a large scale and even shows how the modes evolve as we
change the scale.
Figure 5: Microarray results. From left to right: scales 1 to 4 (panels (a)–(d)).
5 Conclusion

This paper studies the (δ, M)-mode estimation problem for tree graphical models. The significance
of this work lies in several aspects: (1) we develop an efficient algorithm to illustrate the intrinsic
connection between structured statistical modeling and mode characterization; (2) our notion of
(δ, M)-modes provides a new tool for visualizing the topographical information of complex discrete
distributions. This work is a first step towards understanding the statistical and computational aspects
of complex discrete distributions. For future investigations, we plan to relax the tree graphical
model assumption to junction trees.
Acknowledgments
Chao Chen thanks Vladimir Kolmogorov and Christoph H. Lampert for helpful discussions. The research of Chao Chen and Dimitris N. Metaxas is partially supported by the grants NSF IIS 1451292
and NSF CNS 1229628. The research of Han Liu is partially supported by the grants NSF
IIS1408910, NSF IIS1332109, NIH R01MH102339, NIH R01GM083084, and NIH R01HG06841.
References
[1] F. R. Bach and M. I. Jordan. Beyond independent components: trees and clusters. Journal of Machine Learning Research, 4:1205–1233, 2003.
[2] D. Batra, P. Yadollahpour, A. Guzman-Rivera, and G. Shakhnarovich. Diverse M-best solutions in Markov random fields. In Computer Vision – ECCV 2012, pages 1–16, 2012.
[3] P. Chaudhuri and J. S. Marron. SiZer for exploration of structures in curves. Journal of the American Statistical Association, 94(447):807–823, 1999.
[4] F. Chazal, L. J. Guibas, S. Y. Oudot, and P. Skraba. Persistence-based clustering in Riemannian manifolds. In Proceedings of the 27th Annual ACM Symposium on Computational Geometry, pages 97–106. ACM, 2011.
[5] C. Chen and H. Edelsbrunner. Diffusion runs low on persistence fast. In IEEE International Conference on Computer Vision (ICCV), pages 423–430. IEEE, 2011.
[6] C. Chen, V. Kolmogorov, Y. Zhu, D. Metaxas, and C. H. Lampert. Computing the M most probable modes of a graphical model. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2013.
[7] C. Chen, H. Liu, D. N. Metaxas, M. G. Uzunbaş, and T. Zhao. High dimensional mode estimation – a graphical model approach. Technical report, October 2014.
[8] D. Comaniciu and P. Meer. Mean shift: A robust approach toward feature space analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(5):603–619, 2002.
[9] M. Fromer and C. Yanover. Accurate prediction for atomic-level protein design and its application in diversifying the near-optimal sequence space. Proteins: Structure, Function, and Bioinformatics, 75(3):682–705, 2009.
[10] J. D. Lafferty, A. McCallum, and F. C. N. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning (ICML), pages 282–289, 2001.
[11] J. Li, S. Ray, and B. G. Lindsay. A nonparametric statistical approach to clustering via mode identification. Journal of Machine Learning Research, 8(8):1687–1723, 2007.
[12] T. Lindeberg. Scale-space theory in computer vision. Springer, 1993.
[13] H. Liu, M. Xu, H. Gu, A. Gupta, J. Lafferty, and L. Wasserman. Forest density estimation. Journal of Machine Learning Research, 12:907–951, 2011.
[14] D. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60(2):91–110, 2004.
[15] R. Maitra. Initializing partition-optimization algorithms. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 6(1):144–157, 2009.
[16] K. Mikolajczyk, T. Tuytelaars, C. Schmid, A. Zisserman, J. Matas, F. Schaffalitzky, T. Kadir, and L. Van Gool. A comparison of affine region detectors. International Journal of Computer Vision, 65(1-2):43–72, 2005.
[17] M. C. Minnotte and D. W. Scott. The mode tree: A tool for visualization of nonparametric density features. Journal of Computational and Graphical Statistics, 2(1):51–68, 1993.
[18] D. Nilsson. An efficient algorithm for finding the M most probable configurations in probabilistic expert systems. Statistics and Computing, 8(2):159–173, 1998.
[19] S. Nowozin and C. Lampert. Structured learning and prediction in computer vision. Foundations and Trends in Computer Graphics and Vision, 6(3-4):185–365, 2010.
[20] S. Ray and B. G. Lindsay. The topography of multivariate normal mixtures. Annals of Statistics, pages 2042–2065, 2005.
[21] B. W. Silverman. Using kernel density estimates to investigate multimodality. Journal of the Royal Statistical Society, Series B (Methodological), pages 97–99, 1981.
[22] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, pages 1453–1484, 2005.
[23] M. Wainwright and M. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008.
[24] A. Wille, P. Zimmermann, E. Vranová, A. Fürholz, O. Laule, S. Bleuler, L. Hennig, A. Prelic, P. von Rohr, L. Thiele, et al. Sparse graphical Gaussian modeling of the isoprenoid gene network in Arabidopsis thaliana. Genome Biology, 5(11):R92, 2004.
[25] A. Witkin. Scale-space filtering. Readings in Computer Vision: Issues, Problems, Principles, and Paradigms, pages 329–332, 1987.
5,008 | 5,534 | Poisson Process Jumping between an Unknown
Number of Rates: Application to Neural Spike Data
Florian Stimberg
Computer Science, TU Berlin
[email protected]
Andreas Ruttor
Computer Science, TU Berlin
[email protected]
Manfred Opper
Computer Science, TU Berlin
[email protected]
Abstract
We introduce a model where the rate of an inhomogeneous Poisson process is
modified by a Chinese restaurant process. Applying a MCMC sampler to this
model allows us to do posterior Bayesian inference about the number of states in
Poisson-like data. Our sampler is shown to get accurate results for synthetic data
and we apply it to V1 neuron spike data to find discrete firing rate states depending
on the orientation of a stimulus.
1 Introduction
Event time data is often modeled as an inhomogeneous Poisson process, whose rate λ(t) as a function of time t has to be learned from the data. Poisson processes have been used to model a wide
variety of data, ranging from network traffic [25] to photon emission data [12]. Although neuronal
spikes are in general not perfectly modeled by a Poisson process [17], there has been extensive work
based on the simplified Poisson assumption [e.g. 19, 20]. Prior assumptions about the rate process
strongly influence the result of inference. Some models assume that the rate λ(t) changes continuously [1, 7, 22], but for certain applications it is more useful to model it as a piecewise constant
function of time, which switches between a finite number of distinct states. Such an assumption
could be of interest, when one tries to relate the change of the rate to sudden changes of certain
external experimental conditions, e.g. changes of neural spike activity when external stimuli are
switched.
An example for a discrete state rate process is the Markov modulated Poisson process (MMPP)
[10, 18], where changes between the states of the rate follow a continuous time Markov jump process (MJP). For the MMPP one has to specify the number of states beforehand and it is often not
clear how this number should be chosen. Comparing models with different numbers of states by
computing Bayes factors can be cumbersome and time consuming. On the other hand, nonparametric Bayesian methods for models with an unknown number of model parameters based on Dirichlet
or Chinese restaurant processes have been highly popular in recent years [e.g. 24, 26].
However, to our knowledge, such an idea has not yet been applied to the conceptually simpler
Poisson process scenario. In this paper, we present a computationally efficient MCMC approach to
this model, which utilizes its feature that given the jump process the observed Poisson events are
independent. This property makes computing the data likelihood very fast in each iteration of our
sampler and leads to a highly efficient estimation of the rate. This allows us to apply our sampler to
large data sets.
1
Figure 1: Generative model. (The diagram relates α, p_λ, f, the λ-values, the path λ_{(0:T)}, and the observations Y.)
2 Model
We assume that the data comes from an inhomogeneous Poisson process, which has rate λ(t) at time
t. In our model λ(t) is a latent, piecewise constant process. The likelihood of the data given a path
λ_{(0:T)} with s distinct states then becomes [8]

    P(Y | λ_{(0:T)}) ∝ ∏_{i=1}^{s} λ_i^{n_i} e^{−λ_i τ_i},    (1)

where τ_i is the overall time spent in state i, i.e., the total time with λ(t) = λ_i, and n_i is the number
of Poisson events in the data Y while the system is in this state. A trajectory of λ_{(0:T)} is generated by drawing
c jump times from a Poisson process with rate f. This means λ_{(0:T)} is separated into c + 1 segments
during which it remains in one state λ_i. To deal with an unknown number of discrete states and
their unknown probability π of being visited, we assume that the distribution π is drawn from a
Dirichlet process with concentration parameter α and base distribution p_λ. By integrating out π we
get a Chinese restaurant process (CRP) with the same parameters as the Dirichlet process. For a
derivation of this result see [27].

Let us assume we already have i segments and draw the next jump time from an exponential
distribution with rate f. The next segment gets a new λ-value sampled from p_λ with probability α/(α + i);
otherwise one of the previous segments is chosen with equal probability and its λ-value is also used
for the new segment. This leads to the following prior probability of a path λ_{(0:T)}:

    P(λ_{(0:T)} | f, α, p_λ) ∝ f^c e^{−fT} α^s · [∏_{j=1}^{s} p_λ(λ_j) (#j − 1)!] / [∏_{i=0}^{c} (α + i)],    (2)
where s is the number of distinct values of λ. To summarize, we have f as the rate of jumps, p_λ as
a prior distribution over the values of λ, #j as the number of segments assigned to state j, and α as
a hyperparameter which determines how likely a jump will lead to a completely new value of λ. If
there are c jumps in the path λ_{(0:T)}, then a priori the expected number of distinct λ-values is [28]

    E[s | c] = ∑_{i=1}^{c+1} α / (α + i − 1).    (3)
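As a quick numerical check of (3), a two-line helper (our own illustration):

    def expected_states(alpha, c):
        """A priori expected number of distinct rate values given c jumps, eq. (3)."""
        return sum(alpha / (alpha + i - 1) for i in range(1, c + 2))

    # e.g. expected_states(3.0, 20) is about 6.7, expected_states(0.1, 20) about 1.3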
We choose a gamma distribution for p_λ with shape a and scale b,

    p_λ(λ) = Gamma(λ; a, b) ∝ λ^{a−1} e^{−λ/b},    (4)

which is conjugate to the likelihood (1).
which is conjugate to the likelihood (2). The generative model is visualized in figure 1.
3 MCMC Sampler
We use a Metropolis-within-Gibbs sampler with two main steps: First, we change the path of the
Chinese restaurant process conditioned on the current parameters with a Metropolis-Hastings random
walk. In the second step, the times of the jumps and the states are held fixed, and we directly
sample the λ-values and f from their conditional posteriors.
3.1 Random Walk on the many-state Markov jump process
To generate a proposal path λ*_{(0:T)} (for the remainder of this paper, a superscript ∗ will always denote a variable
concerning the proposal path) we manipulate the current path λ_{(0:T)} by one of the following actions:
shifting one of the jumps in time, adding a jump, removing one of the existing jumps, switching the
state of a segment, joining two states, or dividing one state into two. This is similar to the birth-death
approach, which has been used before for other types of MJPs [e.g. 5].
We shift a jump by drawing the new time from a Gaussian distribution centered at the current time
with standard deviation σ_t and truncated at the neighboring jumps. σ_t is a parameter of the sampler,
which we chose by hand and which should be on the same scale as the typical time between Poisson
events. If in doubt, a high value should be chosen, so that the truncated distribution becomes more
uniform.
When adding a jump, the time of the new jump is drawn from a uniform distribution over the whole
time interval. With probability q_n a new value of λ is added; otherwise we reuse an old one. The
parameter q_n was chosen by hand to be 0.1, which worked well for all data sets we tested the sampler
on.

To remove a jump, we choose one of the jumps with equal probability.

Switching the state of a segment is done by choosing one of the segments at random and either
assigning it to an existing value or introducing a value which was not used before, again with
probability q_n.

When adding a new value of λ, both when adding a jump and when switching the state of a segment,
we draw it from the conditional density

    P(λ*_{s+1} | Y, λ_{(0:T)}) ∝ Gamma(λ*_{s+1}; a, b) · Gamma(λ*_{s+1}; n_{s+1} + 1, 1/τ_{s+1})
                             ∝ Gamma(λ*_{s+1}; a + n_{s+1}, b/(τ_{s+1} b + 1)).    (5)

If we instead reuse an already existing λ, we choose which state to use by drawing it from a discrete
distribution with probabilities proportional to (5), but with n and τ being the number of Poisson
events and the time in this segment, respectively.
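A sketch of this proposal step under the stated conjugacy; the weight computation for reusing an existing state is our own spelling-out of "probabilities proportional to (5)":

    import numpy as np

    def draw_new_rate(n, tau, a, b, rng):
        """Sample a brand-new rate from the conjugate gamma posterior (5)."""
        return rng.gamma(a + n, b / (tau * b + 1.0))

    def pick_existing_rate(lambdas, n, tau, a, b, rng):
        """Choose an existing rate with weight given by the density (5)
        evaluated at that rate (normalized over all current states)."""
        lam = np.asarray(lambdas, dtype=float)
        log_w = (a + n - 1.0) * np.log(lam) - lam * (tau + 1.0 / b)
        w = np.exp(log_w - log_w.max())      # subtract max to avoid overflow
        return int(rng.choice(len(lam), p=w / w.sum()))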
Changing the number of states through adding and removing jumps or switching the states of segments is sufficient to guarantee that the sampler converges to the posterior density. However, the
sampler is very unlikely to reduce the number of states through these actions, if all states are used
in multiple segments, so that convergence might take a very long time in this case. Therefore, we
introduce the option to join all segments assigned to a neighboring
p (when ordered by their ?-values)
pair of states into one state. Here the geometrical mean ??j = ?i1 ?i2 of both ?-values is used for
the joined state.
Because we added the join action, we need an inverted action, which divides a state into two new
ones, in order to guarantee reversibility and therefore fulfill detailed balance. The state to divide is
randomly chosen among the states which have at least two segments assigned to them. Then a small
factor ? > 1 is drawn from a shifted exponential distribution and the ?-value of the chosen state is
multiplied with and divided by ?, respectively, to get the ?-values ??j1 = ?i ? and ??j2 = ?i /? of
the two new states. The distribution over ? is bounded, so that the new ?-values are assured to be
between the neighboring ones. After this, the segments of the old state are randomly assigned to the
two new states with probability proportional to the data likelihood (1). If by the last segment only
one of the two states was chosen for all segments, the last segment is set to the other state. This
method assures that every possible assignment (where both states are used) of the two states to the
segments of the old state can occur. Additionally, there is exactly one way for each assignment to
be drawn allowing a simple calculation of the Metropolis-Hastings acceptance probability for both
the join and the divide action. Figure 2 shows how these actions work on the path.
A proposed path λ*_{(0:T)} is accepted with probability

    p_MH = min( 1, [P(Y | λ*_{(0:T)}) Q(λ_{(0:T)} | λ*_{(0:T)}) P(λ*_{(0:T)} | f, α, p_λ)]
                   / [P(Y | λ_{(0:T)}) Q(λ*_{(0:T)} | λ_{(0:T)}) P(λ_{(0:T)} | f, α, p_λ)] ).    (6)
Figure 2: Example showing how the proposal actions (shift, add, remove, switch, join, divide) modify the path
of the Chinese restaurant process. The new path is drawn in dark blue, the old one in light blue.
While the data likelihood ratio is the same for all proposal actions and follows from (1), the proposal
and prior ratios

    ρ = [Q(λ_{(0:T)} | λ*_{(0:T)}) P(λ*_{(0:T)} | f, α, p_λ)] / [Q(λ*_{(0:T)} | λ_{(0:T)}) P(λ_{(0:T)} | f, α, p_λ)]    (7)

depend on the chosen proposal action. The acceptance probability for each action (provided in the
supplementary material) can be calculated based on its description and the probability of a path (2).
Because our proposal process is a simple random walk, the major contribution to the computation
time comes from calculating the data likelihood. Luckily, this can be done very efficiently, because
we only need to know how many Poisson events occur during the segments of λ*_{(0:T)} and λ_{(0:T)},
how often the process changes state, and how much time it spends in each state. In order to avoid
iterating over all the data for each proposal, we compute the index of the next event in the data for
a fine time grid before the sampler starts. This ensures that the computational time is linear in the
number of jumps in λ_{(0:T)}, while the number of Poisson events in the data only introduces one-time
costs for calculating the grid, which are negligible in practice. Additionally, we only need to
compute the likelihood ratio over those segments which are changed in the proposal, because the
unchanged parts cancel each other out.
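One way to realize this constant-time segment counting in practice (a sketch with our own names; a single np.searchsorted over the sorted event times plays the role of the precomputed grid):

    import numpy as np

    def log_likelihood(events, bounds, seg_rates):
        """Log of eq. (1) for a piecewise-constant path.

        events: sorted event times; bounds: segment boundaries [0, t_1, ..., T];
        seg_rates: the rate of each of the len(bounds) - 1 segments.
        """
        idx = np.searchsorted(events, bounds)   # event index at each boundary
        n = np.diff(idx)                        # events per segment
        tau = np.diff(bounds)                   # duration of each segment
        rates = np.asarray(seg_rates, dtype=float)
        return float(np.sum(n * np.log(rates) - rates * tau))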
3.2 Sampling the parameters
As we use a gamma prior Gamma(λ_i; a, b) for each λ_i, it is easy to see from (1) that this leads to
gamma posteriors

    Gamma(λ_i; a + n_i, b/(τ_i b + 1))    (8)

over λ_i. Thus a Gibbs sampling step is used to update each λ_i. As for the rate f of change points,
if we assume a gamma prior f ∼ Gamma(a_f, b_f), the posterior becomes a gamma distribution, too:

    Gamma(f; a_f + c, b_f/(T b_f + 1)).    (9)
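The corresponding Gibbs updates, written out as a short sketch (names ours):

    import numpy as np

    def gibbs_update_rates(counts, times, a, b, rng):
        """Resample every lambda_i from its gamma posterior, eq. (8)."""
        counts, times = np.asarray(counts), np.asarray(times)
        return rng.gamma(a + counts, b / (times * b + 1.0))

    def gibbs_update_jump_rate(c, T, a_f, b_f, rng):
        """Resample the jump rate f from its gamma posterior, eq. (9)."""
        return rng.gamma(a_f + c, b_f / (T * b_f + 1.0))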
4 Experiments
We first validate our sampler on synthetic data sets, then we test our Chinese restaurant approach on
neural spiking data from a cat's primary visual cortex.
4.1 Synthetic Data

We sampled 100 data sets from the prior with f = 0.02 and α = 3.0. Figure 3 compares the
true values for the number of states and number of jumps with the posterior mean after 1.1 million
samples with the first 100,000 dropped as burn-in. On average the sampler took around 25 seconds
to generate the samples on an Intel Xeon CPU with 2.40 GHz.
The amounts of both jumps and states seem to be captured well, but for a large number of distinct
states the mean seems to underestimate the true value. This is not surprising, because the λ parameters are drawn from the same base distribution. For a large number of states the probability that two
states are very similar becomes high, which makes them indistinguishable without observing more
data. For four of the 100 data sets the posterior distribution over λ(t) is compared to the true path in
figure 4. While we used the true value of α for our simulations, the model seems to be robust against
different choices of this parameter; this is shown in the supplementary material.

Figure 3: Posterior mean vs. true number of states (left) and jumps (right) for 100 data sets drawn from the prior. The red line shows the identity function.
Figure 4: Posterior of λ over t for the first 4 toy data sets. The black line is the true path, while the posterior mean is drawn as a dashed green line surrounded by a 95% confidence interval.
Figure 5: Stimulus and data for a part of the recordings from the first neuron. (top) Mean rates computed by using a moving triangle function (widths 0.1, 1, and 10). (middle) Spiking times. (bottom) Orientation of the stimulus in degrees.
Figure 6: (left) Posterior mean number of states vs. number of spikes in the data for all neurons. (right) Posterior mean number of states over the posterior mean number of jumps.
4.2 Bursting of Cat V1 Neurons
Poisson processes are not an ideal model for single neuron spiking times [3]. The two main reasons
for this are the refractory period of neurons and bursting [14]. Despite this, Poisson processes have
been used extensively to analyze spiking data [e.g. 19, 20]. Additionally, both reasons should not be
a problem for us. The refractory period is not as important for inference since spiking during it will
not be observed. Bursting, on the other hand, is exactly what models with jumping Poisson rates are
made to explain: sudden changes in the spiking rate.
The data set used in this paper was obtained from multi-site silicon electrodes in the primary visual
cortex of an anesthetized cat. For further information on the experimental setup see [4]. The data
set contains spike trains from 10 different neurons, which were recorded while bars of varying
orientation moved through the visual field of the cat.

Figure 7: Detail of the results for one of the neurons. The black lines at the bottom represent the spike data, while the colors indicate the state with the highest posterior probability, which is represented by the height of the area. The states are ordered by increasing rate λ.
Figure 8: Probability distribution of the orientation of the stimulus conditioned on the active state. The states are ordered by increasing rate λ, and the results are taken from samples at the MAP number of states.

Since the stimulus is discrete (the orientation ranges from 0° to 340° in steps of 20°), we expect to find discrete states in the response of the
neurons. The recording lasted for 720 seconds and, while the orientation of the stimulus changed
randomly, each orientation was shown 8 times for 5 seconds each over the whole experiment. In
figure 5, a section of the spiking times of one neuron is shown together with the orientation of the
stimulus. When computing a mean spiking rate by sliding a triangle function over the data, it is
crucial to select a good width for the triangle function. A small width makes it possible to find short
phases of very high spiking rate (so called bursts), but also leads to jumps in the rate even for single
spikes. A larger width, on the other hand, smoothes the bursts out. Using our sampler for Bayesian
inference based on our model allows us to find bursts and cluster them by their spiking rate, but at
the same time the spikes between bursts are explained by one of the ground states, which have lower
rates, but longer durations.
We used an exponential prior for f with mean rate 10⁻⁴ and a low value of α = 0.1 to prevent
overfitting. A second simulation run with a ten times higher prior mean for f and α = 0.5 led
to almost the same posterior number of states and only a slightly higher number of jumps, of which
a larger fraction had no impact, because the state was not changed. The base distribution p_λ was
chosen to be exponential with mean 10⁶, which is a fairly uninformative prior, because the duration
of a single spike is on the order of magnitude of 1 ms [11], resulting in an upper bound for the rate at
around 1000/s.
The posterior number of states for all of the 10 neurons is in the same region, as shown in figure
6, even though the number of spikes differs widely (from 725 to 13244). Although there seem
to be more states if more jumps are found, the posterior differs strongly from the prior (a priori
the expected number of states is under 2), indicating that the posterior is dominated by the data
likelihood.

For a small time frame of the spiking data from one of the neurons, figure 7 shows which state had
the highest posterior probability at each time and how high this probability was. It can be seen
that the bursting states, which have high rates, are only active for a short time. Figure 8 shows that
these burst states are clearly orientation dependent (see the supplementary material for results of all
10 neurons). Over the whole experiment all orientations were shown for exactly the same amount
of time. While the highest state is always clearly concentrated on a range of about 60°, the lower
bursting states cover neighboring orientations. Often a smaller reaction can be seen for bars rotated
by 180° from the favored angle. The lowest state might indicate inhibition, because it is mostly
active between the favored state and the one rotated by 180°.
neurons are receptive to. Especially the position of the bar in the visual field should be important
and could explain why only some of the neurons reach the highest burst rate.
It may seem that finding bursts is a simple task, but there has been extensive work in this field
[e.g. 6, 13, 16] and naive approaches, like looking at the mean rate of events over time, fail easily,
if the time resolution is not chosen well (as seen in figure 5). Additionally, our sampler not only
distinguishes between burst and non-burst phases, but also uncovers discrete intensities, which are
associated with features of the stimulus.
4.3 Comparison to a continuous rate model
While our model assumes that the Poisson rates are discrete values, there have been other approaches
applying continuous functions to estimate the rate. [1] use a Gaussian process prior over λ(t) and
present a Markov chain Monte Carlo sampler to sample from the posterior. Since the sampler is very
slow for our neuron data, we restricted the inference task to a small time window of the spike train
from only one of the neurons.
In figure 10 the results from the Sigmoidal Gaussian Cox Process (SGCP) model of [1] are shown for
different values of the length scale hyperparameter and contrasted with the results from our model.
Similar to the naive approach of computing a moving average of the rate (as in figure 5) the GP
seems to either smooth out the bursts or becomes so sensitive that even single spikes change the rate
function significantly depending on the choice of the GP hyperparameters.
Our neural data seems to be especially bad for the performance of this algorithm, because it is based
on the principle of uniformization. Uniformization was introduced by [9] and allows to sample from
an inhomogeneous Poisson process by first sampling from a homogeneous one. If the rate of the
homogeneous process is an upper bound of the rate function of the inhomogeneous Poisson process,
then a sample of the latter can be generated by thinning out the events, where each event is omitted
with a certain probability. The sampler for the SGCP model performs inference using this method,
so that events are sampled at the current estimate of the maximum rate for the whole data set and
thinned out afterwards.
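A generic sketch of sampling by thinning as just described (this is the textbook construction, not the SGCP implementation):

    import numpy as np

    def sample_by_thinning(rate_fn, rate_max, T, rng=np.random.default_rng()):
        """Sample an inhomogeneous Poisson process on [0, T] by thinning.

        rate_fn: vectorized callable with rate_fn(t) <= rate_max everywhere.
        Candidate events of a homogeneous process with rate rate_max are
        kept with probability rate_fn(t) / rate_max.
        """
        n = rng.poisson(rate_max * T)
        candidates = np.sort(rng.uniform(0.0, T, n))
        keep = rng.uniform(size=n) < rate_fn(candidates) / rate_max
        return candidates[keep]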
For our neural data the maximum rate would have to be the spiking rate during the strongest bursts,
but this would lead to a very large number of (later thinned out) event times to be sampled in the
long periods between bursts, which slows down the algorithm severely. This problem only occurs if
uniformization is applied on λ(t), while other approaches, like [21], use it on the rate of an MJP with
a fixed number of states.
When we use a practically flat prior for the sampling of the maximum rate, it will be very low
compared to the bursting rates our algorithm finds (see figure 10). On the other hand, if we use a
very peaked prior around our burst rates, the algorithm becomes extremely slow (taking hours for
just 100 samples) even when used on less than a tenth of the data for one neuron.
5 Conclusion
We have introduced an inhomogeneous Poisson process model with a flexible number of states.
Our inference is based on a MCMC sampler which detects recurring states in the data set and joins
them in the posterior. Thus the number of distinct event rates is estimated directly during MCMC
sampling.
Clearly, sampling the number of states together with the jump times and rates needs considerably
more samples to fully converge compared to a MJP with a fixed number of states. For our application
to neural data in section 4.2 we generated 110 million samples for each neuron, which took between
80 and 325 minutes on an Intel Xeon CPU with 2.4 GHz. For all neurons the posterior had converged
at the latest after a tenth of the time. It has to be remembered that to obtain similar results without
the Chinese restaurant process, we would need to compute the Bayes factors for different numbers
of states. This is a more complicated task than just doing posterior inference for a fixed number
of states and would require more computationally demanding approaches, e.g. a bridge sampler, in
order to get reasonably good estimates. Additionally, it would be hard to decide for what range of
state dimensionality the samplers should be run. In contrast to this, our sampler typically gave a
good estimate of the number of states in the data set already after just a few seconds of sampling.
Figure 9: Posterior mean rates λ_i for the MAP number of states.
Figure 10: Results of the SGCP sampler on a small part of the data of one neuron, for prior mean lengthscales 0.25, 0.50, 0.75, and 1.00. The black dashed line shows the posterior mean from our sampler. The spiking times are drawn as black vertical lines below.
Longer run times are only needed for a higher-accuracy estimate of the posterior distribution over
the number of states.
Although our prior for the transition rates of the MJP is state-independent, which facilitates the
integration over the maximum number of states and gives rise to the Chinese restaurant process, this
does not hold for the posterior. We can indeed compute the full posterior state transition matrix,
with state-dependent jump rates, from the samples.
A huge advantage of our algorithm is that its computation time scales linearly in the number of
jumps in the hidden process, while the influence of the number of events can be neglected in practice.
This has been shown to speed up inference for MMPPs [23], but our more flexible model makes it
possible to find simple underlying structures in huge data sets (e.g. network access data with millions
of events) in reasonable time, without the need to fix the number of states beforehand.
In contrast to other MCMC algorithms [2, 8, 15] for MMPPs, our sampler is very flexible and can be
easily adapted to, e.g., Gamma processes generating the data or semi-Markov jump processes, which
have non-exponentially distributed waiting times for the changes of the rate. For Gamma process
data the computation time to calculate the likelihood would no longer be independent of the number
of events, but it might lead to better results for data that is strongly non-Poissonian.
We showed that our model can be applied to neural spike trains and that our MCMC sampler finds
discrete states in the data, which are linked to the discreteness of the stimulus. In general, our
model should yield the best results when applied to data with many events and a discrete structure
of unknown dimensionality influencing the rate.
Acknowledgments
Neural data were recorded by Tim Blanche in the laboratory of Nicholas Swindale, University of
British Columbia, and downloaded from the NSF-funded CRCNS Data Sharing website.
References
[1] Ryan Prescott Adams, Iain Murray, and David J. C. MacKay. Tractable nonparametric Bayesian inference in Poisson processes with Gaussian process intensities. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, pages 9-16, New York, NY, USA, 2009. ACM.
[2] Elja Arjas and Dario Gasbarra. Nonparametric Bayesian inference from right censored survival data, using the Gibbs sampler. Statistica Sinica, 4:505-524, 1994.
[3] R. Barbieri, M. C. Quirk, L. M. Frank, M. A. Wilson, and E. N. Brown. Construction and analysis of non-Poisson stimulus-response models of neural spiking activity. J. Neurosci. Methods, 105(1):25-37, January 2001.
[4] Timothy J. Blanche, Martin A. Spacek, Jamille F. Hetke, and Nicholas V. Swindale. Polytrodes: high-density silicon electrode arrays for large-scale multiunit recording. Journal of Neurophysiology, 93(5):2987-3000, 2005.
[5] R. J. Boys, D. J. Wilkinson, and T. B. Kirkwood. Bayesian inference for a discretely observed stochastic kinetic model. Statistics and Computing, 18(2):125-135, June 2008.
[6] M. Chiappalone, A. Novellino, I. Vajda, A. Vato, S. Martinoia, and J. van Pelt. Burst detection algorithms for the analysis of spatio-temporal patterns in cortical networks of neurons. Neurocomputing, 65-66:653-662, 2005.
[7] John P. Cunningham, Vikash Gilja, Stephen I. Ryu, and Krishna V. Shenoy. Methods for estimating neural firing rates, and their application to brain-machine interfaces. Neural Networks, 22(9):1235-1246, November 2009.
[8] Paul Fearnhead and Chris Sherlock. An exact Gibbs sampler for the Markov-modulated Poisson process. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(5):767-784, November 2006.
[9] W. K. Grassmann. Transient solutions in Markovian queueing systems. Computers & Operations Research, 4(1):47-53, 1977.
[10] H. Heffes and D. Lucantoni. A Markov modulated characterization of packetized voice and data traffic and related statistical multiplexer performance. IEEE Journal on Selected Areas in Communications, 4(6):856-868, 1986.
[11] Peter R. Huttenlocher. Development of cortical neuronal activity in the neonatal cat. Experimental Neurology, 17(3):247-262, 1967.
[12] Mark Jäger, Alexander Kiel, Dirk-Peter Herten, and Fred A. Hamprecht. Analysis of single-molecule fluorescence spectroscopic data with a Markov-modulated Poisson process. ChemPhysChem, 10(14):2486-2495, 2009.
[13] Y. Kaneoke and J. L. Vitek. Burst and oscillation as disparate neuronal properties. Journal of Neuroscience Methods, 68(2):211-223, 1996.
[14] R. E. Kass, V. Ventura, and E. N. Brown. Statistical issues in the analysis of neuronal data. Journal of Neurophysiology, 94(1):8-25, July 2005.
[15] S. C. Kou, X. Sunney Xie, and Jun S. Liu. Bayesian analysis of single-molecule experimental data. Journal of the Royal Statistical Society: Series C (Applied Statistics), 54(3):469-506, June 2005.
[16] C. R. Legéndy and M. Salcman. Bursts and recurrences of bursts in the spike trains of spontaneously active striate cortex neurons. Journal of Neurophysiology, 53(4):926-939, April 1985.
[17] Gaby Maimon and John A. Assad. Beyond Poisson: increased spike-time regularity across primate parietal cortex. Neuron, 62(3):426-440, 2009.
[18] K. S. Meier-Hellstern. A fitting algorithm for Markov-modulated Poisson processes having two arrival rates. European Journal of Operational Research, 29(3):370-377, 1987.
[19] Martin Nawrot, Ad Aertsen, and Stefan Rotter. Single-trial estimation of neuronal firing rates: from single-neuron spike trains to population activity. Journal of Neuroscience Methods, 94:81-92, 1999.
[20] D. H. Perkel, G. L. Gerstein, and G. P. Moore. Neuronal spike trains and stochastic point processes. I. The single spike train. Biophysical Journal, 7(4):391-418, July 1967.
[21] V. A. Rao. Markov chain Monte Carlo for continuous-time discrete-state systems. PhD thesis, University College London, 2012.
[22] V. A. Rao and Y. W. Teh. Gaussian process modulated renewal processes. In J. Shawe-Taylor, R. S. Zemel, P. Bartlett, F. C. N. Pereira, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 2474-2482. 2011.
[23] V. A. Rao and Y. W. Teh. Fast MCMC sampling for Markov jump processes and extensions. Journal of Machine Learning Research, 14:3207-3232, 2013.
[24] Ardavan Saeedi and Alexandre Bouchard-Côté. Priors over recurrent continuous time processes. In Advances in Neural Information Processing Systems (NIPS), volume 24, 2011.
[25] K. Sriram and W. Whitt. Characterizing superposition arrival processes in packet multiplexers for voice and data. IEEE Journal on Selected Areas in Communications, 4(6):833-846, September 1986.
[26] Florian Stimberg, Andreas Ruttor, and Manfred Opper. Bayesian inference for change points in dynamical systems with reusable states: a Chinese restaurant process approach. Journal of Machine Learning Research, Proceedings Track, 22:1117-1124, 2012.
[27] Yee Whye Teh. Dirichlet processes. In Encyclopedia of Machine Learning. Springer, 2010.
[28] Xinhua Zhang. A very gentle note on the construction of Dirichlet process. Technical report, Canberra, Australia, September 2008.
Randomized Experimental Design for Causal Graph Discovery
Huining Hu
School of Computer Science, McGill University.
[email protected]
Zhentao Li
LIENS, École Normale Supérieure
[email protected]
Adrian Vetta
Department of Mathematics and Statistics and School of Computer Science, McGill University.
[email protected]
Abstract
We examine the number of controlled experiments required to discover a causal
graph. Hauser and Buhlmann [1] showed that the number of experiments required
is logarithmic in the cardinality of the maximum undirected clique in the essential
graph. Their lower bounds, however, assume that the experiment designer cannot
use randomization in selecting the experiments. We show that significant improvements are possible with the aid of randomization: in an adversarial (worst-case)
setting, the designer can then recover the causal graph using at most O(log log n)
experiments in expectation. This bound cannot be improved; we show it is tight
for some causal graphs.
We then show that in a non-adversarial (average-case) setting, even larger improvements are possible: if the causal graph is chosen uniformly at random under
an Erdős-Rényi model then the expected number of experiments to discover the
causal graph is constant. Finally, we present computer simulations to complement
our theoretic results.
Our work exploits a structural characterization of essential graphs by Andersson
et al. [2]. Their characterization is based upon a set of orientation forcing operations. Our results show a distinction between which forcing operations are most
important in worst-case and average-case settings.
1 Introduction
We are given n random variables V = {V1 , V2 , . . . , Vn } and would like to learn the causal relations
between these variables. Assume the dependencies between the variables can be represented as a
directed acyclic graph G = (V, A), known as the causal graph. In seminal work, Spirtes, Glymour,
and Scheines [3] present methods to obtain structural information on G from passive observational
data. In general, however, observational data can be used to discover only a part of the causal graph
G; specifically, observation data will recover the essential graph E(G). To recover the entire causal
graph G we may undertake experiments. Here, an experiment is a controlled intervention on a subset
S of the variables. A controlled intervention allows us to deduce information about which variables
S influences.
The focus of this paper is to understand how many experiments are required to discover G. This line
of research was initiated in a series of works by Eberhardt, Glymour, and Scheines (see [4, 5, 6]).
First, they showed [4] that n − 1 experiments suffice when interventions can only be made upon
singleton variables. For general experiments, they proved [5] that ⌈log n⌉ experiments are sufficient
and, in the worst case necessary, to discover G. Eberhardt [7] then conjectured that ⌈log(ω(G))⌉
experiments are sufficient and, in the worst case, necessary; here ω(G) is the size of a maximum
clique in G.1 Hauser and Buhlmann [1] recently proved (a slight strengthening of) this conjecture.
The essential mathematical concepts underlying this result can be traced back to work of Cai [8] on
"separating systems" [9]; see also Hyttinen et al. [10].
Eberhardt [11] proposed the use of randomization (mixed strategies) in causal graph discovery. He
proved that, if the designer is restricted to single-variable interventions, the worst case expected
number of experiments required is Θ(n). Eberhardt [11] considered multi-variable interventions to
be "far more complicated" to analyze, but hypothesized that O(log n) experiments may be sufficient,
in that setting, in the worst-case.
1.1 Our Results
The purpose of this paper is to show that the lower bounds of [5] and [1] are not insurmountable.
In essence, those lower bounds are based upon the causal graph being constructed by a powerful
adversary. This adversary must pre-commit to the causal graph in advance but, before doing so,
it has access to the entire list of experiments S = {S1 , S2 , . . . } that the experiment designer will
use; here Si ⊆ V for all i. (This adversary also describes the "separating system" model of causal
discovery. In Section 2.4 we will explain how this adversary can also be viewed as adaptive. The
adversary may be given the list of experiments in order over time, but at time i it needs only commit
to the arcs in δ(Si), the set of edges with exactly one end-vertex in Si.)
Our first result is that we show this powerful adversary can be tricked if the experiment designer
uses randomization in selecting the experiments. Specifically, suppose the designer selects the experiments {S1 , S2 , . . . } from a collection of probability distributions P = {P1 , P2 , . . . }, respectively, where distribution Pi+1 may depend upon the results of experiments 1, 2 . . . , i. Then, even
if the adversary has access to the list of probability distributions P before it commits to the causal
graph G, the expected number of experiments required to recover G falls significantly. Specifically,
if the designer uses randomization then, in the worst case, only at most O(log log n) experiments
in expectation are required. This result is given in Section 3, after we have presented the necessary
background on causal graphs and experiments in Section 2. We also prove our lower bound is tight.
This worst case result immediately extends to the case where the adversary is also allowed to use
randomization in selecting the causal graph. Thus, the O(log log n) bound applies to mixed-strategy
equilibria in the game framework [11] where multi-variable interventions are allowed.
Our second result is that even more dramatic improvements are possible if the causal graph is non-adversarial. For a typical causal graph, only a constant number of experiments is required in
expectation! Specifically, if the directed acyclic graph is random, based upon an underlying Erdős-Rényi model, then O(1) experiments in expectation are required to discover G. We prove this result
in Section 4.
Our work exploits a structural characterization of essential graphs by Andersson et al. [2]. Their
characterization is based upon a set of four operations. One operation is based upon acyclicity, the
other three are based upon v-shapes. Our results show that the acyclicity operation is most important
in improving worst-case bounds, but the v-shape operations are more important for average-case
bounds. This conclusion is highlighted by our simulation results in Section 5. These simulations
confirm that, by exploiting the v-shape operations, causal graph discovery is extremely quick in the
non-adversarial setting. In fact, the constant in the O(1) average-case guarantee may be even better
than our theoretical results suggest. Typically, it takes one or two experiments to discover a causal
graph on 15000 vertices!
2 Background
Suppose we want to discover an (unknown) directed acyclic graph G = (V, A) and we are given
its observational data. Without experimentation, we may not be able to recover all of G from its
observation data. But we can deduce a subgraph of it known as the essential graph E(G). In this
section, we describe this process and explain how experiments (deterministic or randomized) can
then be used to recover the rest of the graph. Throughout this paper, we assume the causal graph
1
A directed graph is a clique if its underlying undirected graph is an (undirected) clique.
and data distribution obey the faithfulness assumption and causal sufficiency [3]. The faithfulness
assumption ensures that all independence relationships revealed by the data are results of the causal
structure and are not due to some coincidental combinations of parameters. Causal sufficiency means
there are no latent (that is, hidden) variables. These assumptions are important as they provide a one
to one mapping between data and causal structure.
2.1 Observational Equivalence
First we may discover the skeleton and all the v-structures of G. To explain this, we begin with some
definitions. The skeleton of G is the undirected graph on V with an undirected edge (between the
same endpoints) for each arc of A. A v-shape in a graph (directed or undirected) is an ordered set
(a, b, c) of three distinct vertices with exactly two edges (arcs), both incident to b. The v-structures,
sometimes called immoralities [2], are the set of v-shapes (a, b, c) where ab and cb are arcs. Two
directed graphs that are indistinguishable by observational data are said to belong to the same Markov
equivalence class. Specifically, Verma and Pearl [12] and Frydenberg [13] showed the skeleton and
the set of v-structures determine which equivalence class G belongs to.
Theorem 2.1. (Observational Equivalence) G and H are in the same Markov equivalence class if
and only if they have the same skeletons and the same sets of v-structures.
Because of this equivalence, we will think of an observational Markov equivalence class as given by
the skeleton and the set of (all) v-structures. From the observational data it is straightforward [12] to
obtain the basic graph B(G), a mixed graph2 obtained from the skeleton of G by orienting the edges
in each v-structure. For example, to test for an edge {i, j}, simply check there is no d-separator for
i and j; to test for a v-structure (i, k, j), simply check that there is no d-separator for i and j that
contains k. (These tests are not polynomial time. However, this is not relevant for the question we
address in this paper.)
2.2 The Essential Graph
In fact, from the observational data we may orient more edges than simply those in the basic graph
B(G). Specifically we can obtain the essential graph E(G). The essential graph is a mixed graph that
also includes every edge orientation that is present in every directed acyclic graph that is compatible
with the data. That is, an edge is oriented if and only if it has the same orientation in every graph
in the equivalence class. For example, an edge {a, b} is forced to be oriented as the arc ab for the
following reasons.
(F1) The arc ab (and the arc cb) is forced if it belongs to a v-structure (a, b, c).
(F2) There is a v-shape (b, a, c) but it is not a v-structure. Then arc ab is forced if ca is an arc.
(F3) The arc ab is forced, by acyclicity, if there is already a directed path P from a to b.
(F4) There is a v-shape (c1, a, c2) but it is not a v-structure. Then the arc ab is forced if there are directed paths Q1 and Q2 from c1 to b and from c2 to b, respectively.
The reader can find illustrations of these forcing mechanisms in Figure 2 of the supplemental material. Andersson et al. [2] showed that these are the only ways to force an edge to become oriented.
In fact, they characterize essential graphs and show only local versions of (F3 ) and (F4 ) are needed
to obtain the essential graph; that is, it suffices to assume the path P has two arcs and the paths Q1
and Q2 have only one arc each.
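To make the local rules concrete, the following Python sketch (our own illustration, not code from the paper) applies the local versions of (F2)-(F4) to a mixed graph until no further orientation is forced; arcs are ordered pairs, undirected edges are frozensets, and (F1) is assumed to have been applied already.

    from itertools import combinations

    def close_orientations(nodes, arcs, undirected):
        """Apply the local forcing rules (F2)-(F4) to fixpoint on a mixed graph."""
        arcs, undirected = set(arcs), set(undirected)

        def adjacent(x, y):
            return (x, y) in arcs or (y, x) in arcs or frozenset((x, y)) in undirected

        def forced(x, y):
            # (F2): an arc c -> x with c, y nonadjacent forces x -> y (otherwise
            # (c, x, y) would be an unreported v-structure).
            if any((c, x) in arcs and not adjacent(c, y)
                   for c in nodes if c not in (x, y)):
                return True
            # (F3), local form: a directed path x -> c -> y forces x -> y by acyclicity.
            if any((x, c) in arcs and (c, y) in arcs for c in nodes if c not in (x, y)):
                return True
            # (F4), local form: x - c1, x - c2 undirected, c1 -> y, c2 -> y,
            # and c1, c2 nonadjacent force x -> y.
            return any(frozenset((x, c1)) in undirected and frozenset((x, c2)) in undirected
                       and (c1, y) in arcs and (c2, y) in arcs and not adjacent(c1, c2)
                       for c1, c2 in combinations(nodes, 2)
                       if x not in (c1, c2) and y not in (c1, c2))

        changed = True
        while changed:
            changed = False
            for edge in list(undirected):
                a, b = tuple(edge)
                for x, y in ((a, b), (b, a)):
                    if forced(x, y):
                        undirected.discard(edge)
                        arcs.add((x, y))
                        changed = True
                        break
        return arcs, undirected

    # Toy check: with the arc c -> a and the undirected edge a - b (b, c
    # nonadjacent), rule (F2) forces a -> b.
    arcs, rest = close_orientations("abc", {("c", "a")}, {frozenset("ab")})
    assert ("a", "b") in arcs and not rest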
Let U(G) be the subgraph induced by the undirected edges of the essential graph E(G). For simplicity, we will generally just use the notation B, E and U. From the characterization, it can be
shown that U is a chordal graph.3 We remark that this chordality property is extremely useful in
quantitatively analyzing the performance of the experiments we design. In particular, the size of the
maximum clique and the chromatic number can be computed in linear time.
Corollary 2.2. [2] The subgraph U is chordal.
2
A mixed graph contains oriented edges and unoriented edges. To avoid confusion, we refer to oriented
edges as arcs.
3
A graph H is chordal if every induced cycle in H contains exactly three vertices. That is, every cycle C
on at least four vertices has a chord, an edge not in C that connects two vertices of the cycle.
2.3 Experimental Design
So observation data (the null experiment) will give us the essential graph E. If we perform experiments then we may recover the entire causal graph G and, in a series of works, Eberhardt, Glymour,
and Scheines [5, 4, 6] investigated the number of experiments required to achieve this. An experiment is a controlled intervention that forces a distribution, chosen by the designer, on a set S ⊆ V.
A key fact is that, given the existence of an edge (a, b) in G, an experiment on S can perform a
directional test on (a, b) if (a, b) ∈ δ(S) (that is, if exactly one endpoint of the edge is in S); see [5]
for more details. Recall that we already know the skeleton of G from the observational data. Thus,
we can determine the existence of every edge in G. It then follows that to recover the entire causal
graph it suffices that (∗) each edge undergoes one directional test. The separating systems method
is based on this sufficiency condition (∗). Using this condition, it is known that log n experiments
suffice [5]. In fact, this bound can be improved to log ω(U), where ω(U) is the size of the maximum
clique in the undirected subgraph U of the essential graph E. For completeness we show this result
here; see also [8] and [1].
Theorem 2.3. We can recover G using log ω(U) experiments.
Proof. First use the observational data to obtain the skeleton of G. To find the orientation of each
edge, take a vertex colouring c : V(U) → {0, 1, . . . , χ(U) − 1}, where χ(U) is the chromatic
number of U. We use this colouring to define our experiments. Specifically, for the ith experiment,
select all vertices whose colour is 1 in the ith bit. That is, select Si = {v : bini (c(v)) = 1}, where
bini extracts the ith bit of a number. Now, if vertices u and v are adjacent in U, they receive different
colours and consequently their colours differ at some bit j. Thus, in the jth experiment, one of u, v
is selected in Sj and the other is not. This gives a directional test for the edge {u, v}. Therefore,
from all the experiments we find the orientation of every edge. The result follows from the fact that
chordal graphs are perfect (see, for example, [14]).
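A short sketch of this construction (a hypothetical helper of our own) may help: given any proper colouring of U, it produces the ceil(log χ(U)) experiments, one per bit of the colour.

    from math import ceil, log2

    def coloring_experiments(coloring):
        """Experiments of Theorem 2.3: in experiment i, intervene on every
        vertex whose colour has bit i set. Adjacent vertices get different
        colours, so every edge of U is cut by some experiment."""
        k = max(coloring.values()) + 1        # number of colours, chi(U)
        m = max(1, ceil(log2(k)))             # ceil(log chi(U)) experiments
        return [{v for v, c in coloring.items() if (c >> i) & 1} for i in range(m)]

    # A 4-coloured graph needs ceil(log2(4)) = 2 experiments.
    S = coloring_experiments({"a": 0, "b": 1, "c": 2, "d": 3})
    assert S == [{"b", "d"}, {"c", "d"}]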
But (∗) is just a sufficiency condition for recovering the entire causal graph G; it need not be necessary to perform a directional test on every edge. Indeed, we may already know some edge orientations from the essential graph E via the forcing operations (F1), (F2), (F3) and (F4). Furthermore,
the experiments we carry out will force some more edge orientations. But then we may again apply
the forcing operations (F1 )-(F4 ) incorporating these new arcs to obtain even more orientations.
Let S = {S1, S2, . . . , Sk}, where Si ⊆ V for all 1 ≤ i ≤ k, be a collection of experiments.
Then the experimental graph is a mixed graph that includes every edge orientation that is present
in every directed acyclic graph that is compatible with the data and the experiments S. We denote
the experimental graph by ES+ (G). Thus the question Eberhardt, Glymour, and Scheines pose is:
how many experiments are needed to ensure that ES+ (G) = G? As before, we know how to find the
experimental graph.
Theorem 2.4. The experimental graph ES+ (G) is obtained by repeatedly applying rules (F1)-(F4)
along with the rule:
(F0) There is an experiment Si ∈ S and an edge (a, b), with a ∈ Si and b ∉ Si. Then either the arc
ab or the arc ba is forced depending upon the outcome of the experiment.
We note that the proof uses the fact that arcs forced by (F0 ) are the union of edges across a set of
cuts; without this property, a fourth forcing rule may be needed [15].
Theorem 2.4 suggests that it may be possible to improve upon the log ω(U) upper bound. Unfortunately, Hauser and Buhlmann [1] show using an adversarial argument that in the worst case there is
a matching lower bound, settling a conjecture of Eberhardt [6].
2.4 Randomized Experimental Design
As discussed in the introduction, the lower bounds of [5] and [1] are generated via a powerful
adversary. The adversary must pre-commit to the causal graph in advance but, before doing so, it
has access to the entire list of experiments S = {S1 , S2 , . . . } that the experiment designer will use.
For example, assume that the adversary chooses a clique for G and the experiment designer selects
a collection of experiments S = {S1 , S2 , . . . }. Given the knowledge of S then, for a worst case
performance, the adversary will direct every edge in δ(S1) from S1 to V \ S1. The adversary will
then direct every edge in δ(S2) (that has yet to be assigned an orientation) from S2 to V \ S2, etc. It
is not difficult to show that the designer will need to implement at least log n of the experiments.
We remark that there is an alternative way to view the adversary. It need commit only to the essential
graph in advance but otherwise may adaptively commit the rest of the graph over time. In particular,
at time i, after experiment Si is conducted it must commit only to the arcs in δ(Si) and to any
induced forcings. This second adversary is clearly weaker than the first, but the lower bounds of [5]
and [1] still apply here. Again, though, even this form of adversary appears unnaturally strong in the
context of causal graphs. In particular, given the random variables V the causal relations between
them are pre-determined. They are already naturally present before the experimentation begins, and
thus it seems appropriate to insist that the adversary pre-commit to the graph rather than construct it
adaptively.
Regardless, both of these adversaries can be countered if the designer uses randomization in selecting the experiments. In particular, in randomized experimental design we allow the designer to select
the experiments {S1 , S2 , . . . } from a collection of probability distributions P = {P1 , P2 , . . . }, respectively, where distribution Pi+1 may depend upon the results of experiments 1, 2 . . . , i. As an
example, consider again the case in which the adversary selects a clique. Suppose now that the
designer selects the first experiment S1 uniformly at random from the collection of subsets of cardinality n/2. Even given this knowledge, it is less obvious how the adversary should act against such a
design. Indeed, in this article we show the usefulness of the randomized approach. It will allow the
designer to require only O(log log n) experiments in expectation. This is the case even if the adversary has access to the entire list of probability distributions P before it commits to the causal graph
G. We prove this in Section 3. Thus, by Theorem 2.3, we have that min[O(log log n), log ω(U)]
experiments are sufficient. We also prove that this bound is tight; there are graphs for which
min[O(log log n), log ω(U)] experiments are necessary.
Still our new lower bound only applies to causal graphs selected adversarially. For a typical causal
graph we can do even better. Specifically, we prove, in Section 4, that for a random causal graph
a constant number of experiments is sufficient in expectation. Consequently, for a random causal
graph the number of experiments required is independent of the number of vertices in the graph!
This surprising result is confirmed by our simulations. For various values n of the number of vertices,
we construct numerous random causal graphs and compute the average and maximum number of
experiments needed to discover them. Simulations confirm this number does not increase with n.
Our results can be viewed in the game theoretic framework of Eberhardt [11], where the adversary
selects a probability distribution (mixed strategy) over causal graphs and the experiment designer
chooses a distribution over which experiments to run. In this zero-sum game, the payoff to the
designer is the negative of the number of experiments needed. The worst case setting corresponds
to the situation where the adversary can choose any distribution over causal graphs. Thus, our result
implies a worst case −Θ(log log n) bound on the value of a game with multi-variable interventions
and no latent variables. Therefore, the ability to randomize turns out to be much more helpful to the
designer than the adversary. Our average case O(1) bound corresponds to the situation where the
adversary in the game is restricted to choose the uniform distribution over causal graphs.
3 Randomized Experimental Design
3.1 Improving the Upper Bound by Exploiting Acyclicity
We now show randomization significantly reduces the number of experiments required to find the
causal graph. To improve upon the log ω(U) bound, recall that (∗) is a sufficient but not necessary
condition. In fact, we will not need to apply directional tests to every edge. Given some edge orientations we may obtain other orientations for free by acyclicity or by exploiting the characterization
of [2]. Here we show that the acyclicity forcing operation (F3 ) on its own provides for significant
speed-ups when we allow randomisation.
Theorem 3.1. To orient a clique on t vertices, O(log log t) experiments suffice in expectation.
Proof. Let {x1, x2, . . . , xt} be the true acyclic ordering of the clique G. Now take a random experiment S, where each vertex is independently selected in S with probability 1/2. The experiment
S partitions the ordering into runs (streaks): contiguous segments of {x1, x2, . . . , xt} where either
every vertex of the segment is in S or every vertex of the segment is in S̄ = V \ S. Without loss of
generality the first run is in S and we denote it by R0. We denote the second run, which is in S̄, by
R̄0, the third run by R1, the fourth run by R̄1, etc. A well known fact (see, for example, [16]) is that,
with high probability, the longest run has length Θ(log t).
Take any pair of vertices u and v. We claim that edge {u, v} can be oriented provided the two
vertices are in different runs. To see this first observe that the experiment will orient any edge
between S and S̄. Thus if u ∈ Ri and v ∈ R̄j, or vice versa, then we may orient {u, v}. Assume
u ∈ Ri and v ∈ Rj, where i < j. We know {u, v} must be the arc uv, but how do we conclude this
from our experiment? Well, take any vertex w ∈ R̄i. Because G is a clique there are edges {u, w}
and {v, w}. But these edges have already been oriented as uw and wv by the experiment. Thus, by
acyclicity the arc uv is forced. A similar argument applies for u ∈ R̄i and v ∈ R̄j, where i < j.
It follows that the only edges that cannot be oriented lie between vertices within the same run. Each
run induces an undirected clique after the experiment, but each such clique has cardinality O(log t)
with high probability. We can now independently and simultaneously apply the deterministic method
of Theorem 2.3 to orient the edges in each of these cliques using O(log log t) experiments. Hence
the entire graph is oriented using 1 + O(log log t) experiments.
We note that if any high probability event does not occur, we simply restart with new random variables, at most doubling the number of experiments (and tripling if it happens again, and so on). The
expected number of experiments is then the number we get with no restart multiplied by ∑_i i·p_i,
which is bounded by a constant (usually approaching 1 if p is a decreasing function of t).
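The mechanics of the proof are easy to simulate; the sketch below (our illustration, with the hidden order taken as 1, . . . , t) performs the single random experiment and reports how few edges of a t-clique survive it unoriented.

    import numpy as np

    def one_random_experiment_on_clique(t, rng=None):
        """Label each of the t (ordered) vertices with S / not-S uniformly at
        random; only edges inside a single run remain unoriented."""
        rng = rng or np.random.default_rng()
        s = rng.integers(0, 2, size=t)                       # v in S w.p. 1/2
        run_id = np.concatenate(([0], np.cumsum(s[1:] != s[:-1])))
        _, sizes = np.unique(run_id, return_counts=True)
        unoriented = int((sizes * (sizes - 1) // 2).sum())
        total = t * (t - 1) // 2
        return 1.0 - unoriented / total, int(sizes.max())

    frac_oriented, longest_run = one_random_experiment_on_clique(10_000)
    # longest_run concentrates around Theta(log t), so the leftover undirected
    # cliques are tiny and the method of Theorem 2.3 finishes them quickly.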
Theorem 3.1 applies to cliques. The same guarantee, however, can be obtained for any graph.
Theorem 3.2. To construct G, O(log log n) experiments suffice in expectation.
Proof. Take any graph G with n vertices. Recall, we need only orient the edges of the chordal graph
U. But a chordal graph contains at most n maximal cliques [14] (each of size t ≤ n). Suppose we
perform the randomized experiment where each vertex is independently selected in S with probability 1/2, as in Theorem 3.1. Then any vertex of a maximal clique Q is in S with probability 1/2. Thus,
this experiment breaks Q into runs, all of cardinality at most O(log n) with high probability.4 Since
there are only n maximal cliques, applying the union bound gives that every maximal clique in U is
broken up into runs of cardinality O(log n) with high probability. Therefore, since every clique is a
subgraph of a maximal clique, after a single randomized experiment, the chordal graph U′ formed
by the remaining undirected edges has ω(U′) = O(log n). We can now independently apply Theorem
2.3 on U′ to orient the remaining edges using O(log log n) experiments.
We can also iteratively exploit the essential graph characterization [2] but in the worst case we will
have no v-structures and so the expected bound above will not be improved. Combining Theorem
2.3 and Theorem 3.2 we obtain
Corollary 3.3. To construct G, min[O(log log n), log ω(U)] experiments suffice in expectation.
3.2 A Matching Lower Bound
The bound in Corollary 3.3 cannot be improved. In particular, the bound is tight for unions of
disjoint cliques. (Due to space constraints, this proof is given in the supplemental materials.)
Lemma 3.4. If G is a union of disjoint cliques, Ω(min[log log n, log ω(U)]) experiments are necessary in expectation to construct G.
Observe that Lemma 3.4 explains why attempting to recursively partition the runs (used in Theorem
3.1) into sub-runs will not improve worst-case performance. Specifically, a recursive procedure may
produce a large number of sub-runs and, with high probability, the trick will fail on one of them.
4
Specifically, every run will have cardinality at most k · log n with probability at least 1 − 1/n^(k−1).
4 Random Causal Graphs
In this section, we go beyond worst-case analysis and consider the number of experiments needed
to recover a typical causal graph. To do this, however, we must provide a model for generating a
"typical" causal graph. For this task, we use the Erdős-Rényi (E-R) random graph model. Under this
model, we show that the expected number of experiments required to discover the causal graph is just
a constant. We remark that we chose the E-R model because it is the predominant graph sampling
model. We do not claim that the E-R model is the most appropriate random model for every causal
graph application. However, we believe the main conclusion we draw, that the expected number of
experiments to orient a typical graph is very small, applies much more generally. This is because
the vast improvement we obtain for our average-case analysis (over worst-case analysis) is derived
from the fact that the E-R model produces many v-shapes. Since any other realistic random graph
model will also produce numerous v-shapes (or small clique number), the number of experiments
required should also be small in those models.
Now, recall that the standard Erdős-Rényi random graph model generates an undirected graph. The
model, though, extends naturally to directed, acyclic graphs as well. Specifically, our graphs Cn,p
with parameters n and p are chosen according to the following distribution:
(1) Pick a random permutation π of n vertices.
(2) Pick an edge (i, j) (with 1 ≤ i < j ≤ n) independently with probability p.
(3) If (i, j) is picked, orient it from i to j if π(i) < π(j) and from j to i otherwise.
Note that since each edge was chosen randomly, we obtain the same distribution of causal graphs
if we simply fix π to be the identity permutation. In other words, Cn,p is just a random undirected
graph Gn,p in which we've directed all edges from lower to higher indexed vertices. Clearly, this
graph is then acyclic. The main result in this section is that the expected number of experiments
needed to recover the graph is constant. We prove this in the supplemental materials.
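For concreteness, a minimal NumPy sketch of this sampling procedure (the function name is ours):

    import numpy as np

    def sample_cnp(n, p, rng=None):
        """Draw C_{n,p}: a G_{n,p} graph with every edge directed from its
        lower- to its higher-indexed endpoint (the permutation may be fixed
        to the identity, as noted above)."""
        rng = rng or np.random.default_rng()
        A = np.triu(rng.random((n, n)) < p, k=1)   # A[i, j] = 1 iff arc i -> j
        return A.astype(np.int8)

    A = sample_cnp(1000, 0.1)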
Theorem 4.1. For p ≤ 4/5 we can recover Cn,p using at most log log 13 experiments in expectation.
We remark that the probability 4/5 in Theorem 4.1 can easily be replaced by 1 − ε, for any ε > 0. The
resulting expected number of experiments is a constant depending upon ε. Note, also, that the result
holds even if ε is a function of n tending to zero. Furthermore, we did not attempt to optimize the
constant log log 13 in this bound.
Theorem 4.1 illustrates an important distinction between worst-case and average-case analyses.
Specifically, the bad examples for the worst-case setting are based upon clique-like structures.
Cliques have no v-shapes, so to improve upon existing results we had to exploit the acyclicity operation (F3 ). In contrast, for the average-case, the proof of Theorem 4.1 exploits the v-structure operation (F1 ). The simulations in Section 5 reinforce this point: in practice, the operations (F1 , F2 , F4 )
are extremely important as v-shapes are likely to arise in typical causal graphs.
5 Simulation Results
In this section, we describe the simulations we conducted in MATLAB. The results confirm the
theoretical upper bounds of Theorem 4.1; indeed the results suggest that the expected number of
experiments required may be even smaller than the constant produced in Theorem 4.1. For example,
even in graphs with 15000 vertices, the average cardinality of the maximum clique in the simulations
is only just over two! This suggests that the full power of the forcing rules (F1 )-(F4 ) has not been
completely measured by the theoretical results we presented in Sections 3 and 4.
For the simulations, we first generate a random causal graph G in the E-R model. We then calculate
the essential graph E(G). To do this we apply the forcing rules (F1 )-(F4 ) from the characterization
of [2]. At this point we examine properties of U(G), the undirected subgraph of E(G). We
are particularly interested in the maximum clique size in U because this information is sufficient to
upper bound the number of experiments that any reasonable algorithm will require to discover G.
We remark that, to speed up the simulations we represent a random graph G by a symmetric adjacency matrix M . Here, if Mi,j = 1 then there is an arc ij if i < j and an arc ji if i > j. The matrix
formulation allows the forcing rules (F1 )-(F4 ) to be implemented more quickly than standard approaches. For example, the natural way to apply the forcing rule (F1 ) is to search explicitly for each
v-structure of which there may be O(n3 ). Instead we can find every edge contained in a v-structure
using matrix multiplication, which is fast under MATLAB.5 The validity of such an approach can
be seen by the following theorem, whose proof is left to the supplemental material.
[Figure 1: Experimental results: number of edges and size of the maximum cliques for Cn,p. Four panels (p = 0.8, 0.5, 0.1, 0.01) report, for n = 500, 1000, 5000, 15000, the edge counts AVG-E(G), AVG-E(F1), AVG-E(U) and MAX-E(U) on a logarithmic scale, together with the average and maximum clique sizes of U.]
Theorem 5.1. Given the adjacency matrix M of a causal graph, we can find all edges contained in
a v-structure via matrix multiplication.
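The proof is deferred to the supplemental material; the NumPy sketch below is only our reconstruction of one way such a test can be phrased with a single matrix product, using the symmetric encoding above. Arc a -> b lies in a v-structure exactly when b has a parent other than a that is nonadjacent to a, and the product M @ A counts, for each pair (a, b), the parents of b adjacent to a.

    import numpy as np

    def v_structure_edges(M):
        """Boolean matrix marking every arc (low index -> high index) that
        takes part in some v-structure, given the symmetric encoding M."""
        A = np.triu(M, k=1)                  # directed adjacency, low -> high
        indeg = A.sum(axis=0)                # number of parents of each vertex
        adj_coparents = M @ A                # [a, b]: parents of b adjacent to a
        return (A == 1) & (indeg[None, :] - 1 - adj_coparents > 0)

    M = np.zeros((3, 3), dtype=int)
    M[0, 2] = M[2, 0] = M[1, 2] = M[2, 1] = 1   # arcs 0 -> 2 and 1 -> 2, no edge 0-1
    assert v_structure_edges(M)[0, 2] and v_structure_edges(M)[1, 2]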
To speed up computation for smaller values of p and large n, we instead used sparse matrices to
apply (F1), storing only a list of non-zero entries ordered by row and column and vice versa. Then
matrix multiplication could be performed quickly by looking for common entries in two short lists.
We ran simulations for four choices of probability p, specifically p ∈ {0.8, 0.5, 0.1, 0.01}, and for
four choices of graph size n, specifically n ∈ {500, 1000, 5000, 15000}. For each combination
pair {n, p} we ran 1000 simulations. For each random graph G, once no more forcing rules can be
applied we have obtained the essential graph E(G). We then calculate |E(U)| and ω(U). Our results
are summarized in Figure 1.
Here average/largest refers to the average/largest over all 1000 simulations for that {n, p} combination. Observe that the lines for AVG-E(G) and AVG-E(F1 ) illustrate Theorem 4.1: there is a dramatic
fall in the expected number of undirected edges remaining by just applying the v-structure forcing
operation (F1). The AVG-E(U) and MAX-E(U) lines show that the number of edges falls even more when
we apply all the forcing operations to obtain U.
More remarkably, the maximum clique size in U is tiny: AVG-ω(U) is just around two or three for
all our choices of p ∈ {0.8, 0.5, 0.1, 0.01}. The largest clique size we ever encountered was just
nine. Since the number of experiments required is at most logarithmic in the maximum clique size,
none of our simulations would ever require more than five experiments to recover the causal graph
and nearly always required just one or two. Thus, the expected clique size (and hence number of
experiments) required appears even smaller than the constant 13 produced in Theorem 4.1.
We emphasize that the simulations do not require the use of a specific algorithm, such as the algorithms associated with the proofs of the worst-case bound (Theorem 3.2) and the average-case bound
(Theorem 4.1). In particular, the simulations show that the null experiment applied in conjunction
with the forcing operations (F1 )-(F4 ) is typically sufficient to discover most of the causal graph.
Since the remaining unoriented edges U have small maximum clique size, any reasonable algorithm
will then be able to orient the rest of the graph using a constant number of experiments.
Acknowledgement We would like to thank the anonymous referees for their remarks that helped us
improve this paper.
5
In theory, matrix multiplication can be carried out in time O(n^2.38) [17].
References
[1] A. Hauser and P. Bühlmann. Two optimal strategies for active learning of causal models from interventional data. International Journal of Approximate Reasoning, 55(4):926-939, 2013.
[2] S. Andersson, D. Madigan, and M. Perlman. A characterization of Markov equivalence classes for acyclic digraphs. Annals of Statistics, 25(2):505-541, 1997.
[3] P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction, and Search. MIT Press, 2nd edition, 2000.
[4] F. Eberhardt, C. Glymour, and R. Scheines. n − 1 experiments suffice to determine the causal relations among n variables. In D. Holmes and L. Jain, editors, Innovations in Machine Learning, volume 194, pages 97-112. Springer-Verlag, 2006.
[5] F. Eberhardt, C. Glymour, and R. Scheines. On the number of experiments sufficient and in the worst case necessary to identify all causal relations among n variables. In Proceedings of the 21st Conference on Uncertainty in Artificial Intelligence (UAI), pages 178-184, 2005.
[6] F. Eberhardt. Causation and Intervention. Ph.D. thesis, Carnegie Mellon University, 2007.
[7] F. Eberhardt. Almost optimal sets for causal discovery. In Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence (UAI), pages 161-168, 2008.
[8] M. Cai. On separating systems of graphs. Discrete Mathematics, 49(1):15-20, 1984.
[9] A. Rényi. On random generating elements of a finite Boolean algebra. Acta Sci. Math. Szeged, 22:75-81, 1961.
[10] A. Hyttinen, F. Eberhardt, and P. Hoyer. Experiment selection for causal discovery. Journal of Machine Learning Research, 14:3041-3071, 2013.
[11] F. Eberhardt. Causal discovery as a game. Journal of Machine Learning Research, 6:87-96, 2010.
[12] T. Verma and J. Pearl. Equivalence and synthesis in causal models. In Proceedings of the 6th Conference on Uncertainty in Artificial Intelligence (UAI), pages 255-268, 1990.
[13] M. Frydenberg. The chain graph Markov property. Scandinavian Journal of Statistics, 17:333-353, 1990.
[14] F. Gavril. Algorithms for minimum coloring, maximum clique, minimum covering by cliques, and maximum independent set of a chordal graph. SIAM Journal on Computing, 2(1):180-187, 1972.
[15] C. Meek. Causal inference and causal explanation with background knowledge. In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, pages 403-410. Morgan Kaufmann Publishers Inc., 1995.
[16] T. Cormen, C. Leiserson, R. Rivest, and C. Stein. Introduction to Algorithms. McGraw Hill, 2nd edition, 2001.
[17] V. Williams. Multiplying matrices faster than Coppersmith-Winograd. In Proceedings of the 44th Symposium on Theory of Computing (STOC), pages 887-898, 2012.
5,010 | 5,536 | Transportability from Multiple Environments
with Limited Experiments: Completeness Results
Judea Pearl
Computer Science
UCLA
[email protected]
Elias Bareinboim
Computer Science
UCLA
[email protected]
Abstract
This paper addresses the problem of mz-transportability, that is, transferring
causal knowledge collected in several heterogeneous domains to a target domain
in which only passive observations and limited experimental data can be collected.
The paper first establishes a necessary and sufficient condition for deciding the
feasibility of mz-transportability, i.e., whether causal effects in the target domain
are estimable from the information available. It further proves that a previously
established algorithm for computing transport formula is in fact complete, that is,
failure of the algorithm implies non-existence of a transport formula. Finally, the
paper shows that the do-calculus is complete for the mz-transportability class.
1 Motivation
The issue of generalizing causal knowledge is central in scientific inferences since experiments are
conducted, and conclusions that are obtained in a laboratory setting (i.e., specific population, domain, study) are transported and applied elsewhere, in an environment that differs in many aspects
from that of the laboratory. If the target environment is arbitrary, or drastically different from the
study environment, no causal relations can be learned and scientific progress will come to a standstill. However, the fact that scientific experimentation continues to provide useful information about
our world suggests that certain environments share common characteristics and that, owed to these
commonalities, causal claims would be valid even where experiments have never been performed.
Remarkably, the conditions under which this type of extrapolation can be legitimized have not been
formally articulated until very recently. Although the problem has been extensively discussed in
statistics, economics, and the health sciences, under rubrics such as ?external validity? [1, 2], ?metaanalysis? [3], ?quasi-experiments? [4], ?heterogeneity? [5], these discussions are limited to verbal
narratives in the form of heuristic guidelines for experimental researchers ? no formal treatment of
the problem has been attempted to answer the practical challenge of generalizing causal knowledge
across multiple heterogeneous domains with disparate experimental data as posed in this paper. The
lack of sound mathematical machinery in such settings precludes one of the main goals of machine
learning (and by and large computer science), which is automating the process of discovery.
The class of problems of causal generalizability is called transportability and was first formally
articulated in [6]. We consider the most general instance of transportability known to date that is
the problem of transporting experimental knowledge from heterogeneous settings to a certain specific target. [6] introduced a formal language for encoding differences and commonalities between
domains accompanied with necessary or sufficient conditions under which transportability of empirical findings is feasible between two domains, a source and a target; then, these conditions were
extended for a complete characterization for transportability in one domain with unrestricted experimental data [7, 8]. Subsequently, assumptions were relaxed to consider settings when only limited
experiments are available in the source domain [9, 10], further for when multiple source domains
with unrestricted experimental information are available [11, 12], and then for multiple heterogeneous sources with limited and distinct experiments [13], which was called "mz-transportability".¹
Specifically, the mz-transportability problem concerns the transfer of causal knowledge from a heterogeneous collection of source domains Π = {π1, ..., πn} to a target domain π*. In each domain πi ∈ Π, experiments over a set of variables Zi can be performed, and causal knowledge gathered. In π*, potentially different from πi, only passive observations can be collected (this constraint will be weakened). The problem is to infer a causal relationship R in π* using knowledge obtained in Π.
The problem studied here generalizes the one-dimensional version of transportability with limited scope and the multi-dimensional version with unlimited scope previously studied. Interestingly, while
certain effects might not be individually transportable to the target domain from the experiments in
any of the available sources, combining different pieces from the various sources may enable their
estimation. Conversely, it is also possible that effects are not estimable from multiple experiments in
individual domains, but they are from experiments scattered throughout domains (discussed below).
The goal of this paper is to formally understand the conditions under which causal effects in the target domain are (non-parametrically) estimable from the available data. Sufficient conditions for "mz-transportability" were given in [13], but this treatment falls short of providing guarantees on whether
these conditions are also necessary, should be augmented, or even replaced by more general ones.
This paper establishes the following results:
• A necessary and sufficient condition for deciding when causal effects in the target domain are estimable from both the statistical information available and the causal information transferred from the experiments in the domains.
• A proof that the algorithm proposed in [13] is in fact complete for computing the transport formula; that is, the strategy devised for combining the empirical evidence to synthesize the target relation cannot be improved upon.
• A proof that the do-calculus is complete for the mz-transportability class.
2 Background in Transportability
In this section, we consider other transportability instances and discuss the relationship with the
mz-transportability setting. Consider Fig. 1(a) in which the node S represents factors that produce
differences between source and target populations. We conduct a randomized trial in Los Angeles
(LA) and estimate the causal effect of treatment X on outcome Y for every age group Z = z,
denoted by P(y|do(x), z). We now wish to generalize the results to the population of New York City (NYC), but we find the distribution P(x, y, z) in LA to be different from the one in NYC (call the latter P*(x, y, z)). In particular, the average age in NYC is significantly higher than that in LA. How are we to estimate the causal effect of X on Y in NYC, denoted R = P*(y|do(x))?²,³
The selection diagram (the overlapping of the diagrams in LA and NYC) for this example (Fig. 1(a)) conveys the assumption that the only difference between the two populations lies in factors determining age distributions, shown as S → Z, while the age-specific effects P*(y|do(x), Z = z) are invariant across populations. Difference-generating factors are represented by a special set of variables called selection variables S (or simply S-variables), which are graphically depicted as square nodes (■). From this assumption, the overall causal effect in NYC can be derived as follows:
R = Σ_z P*(y|do(x), z) P*(z) = Σ_z P(y|do(x), z) P*(z)    (1)
The last line constitutes a transport formula for R; it combines experimental results obtained in LA, P(y|do(x), z), with observational aspects of the NYC population, P*(z), to obtain a causal claim
¹ Traditionally, the machine learning literature has been concerned with discrepancies among domains in the context, almost exclusively, of predictive or classification tasks, as opposed to learning causal or counterfactual measures [14, 15]. Interestingly, recent work on anticausal learning leverages knowledge about invariances of the underlying data-generating structure across domains, moving the literature towards more general modalities of learning [16, 17].
² We will use P_x(y|z) interchangeably with P(y|do(x), z).
³ We use the structural interpretation of causal diagrams as described in [18, pp. 205] (see also Appendix 1).
Figure 1: (a) Selection diagram illustrating when transportability of R = P*(y|do(x)) between two domains is trivially solved through simple recalibration. (b) The smallest diagram in which a causal relation is not transportable. (c,d) Selection diagrams illustrating the impossibility of estimating R through individual transportability from πa and πb even when Z = {Z1, Z2}. If experiments over {Z2} are available in πa and over {Z1} in πb, R is transportable. (e,f) Selection diagrams illustrating the opposite phenomenon: transportability through multiple domains is not feasible, but with Z = {Z1, Z2} in one domain it is. The selection variables S are depicted as square nodes (■).
P*(y|do(x)) about NYC. In this trivial example, the transport formula amounts to a simple recalibration (or re-weighting) of the age-specific effects to account for the new age distribution. In general, however, a more involved mixture of experimental and observational findings would be necessary to obtain an unbiased estimate of the target relation R. In certain cases there is no way to synthesize a transport formula; for instance, Fig. 1(b) depicts the smallest example in which transportability is not feasible (even with X randomized). Our goal is to characterize these cases.
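To make the recalibration in formula (1) concrete, the following sketch evaluates it from finite conditional-probability tables; the age groups and all numbers are hypothetical, chosen only for illustration.

```python
# Recalibration via transport formula (1): combine the age-specific effects
# P(y | do(x), z) estimated in the source (LA) with the covariate
# distribution P*(z) observed in the target (NYC).

# P(y=1 | do(x=1), z) from the LA experiment, for each age group z (made up)
p_y_do_x_z = {"young": 0.30, "middle": 0.50, "old": 0.70}

# P*(z): age distribution observed in NYC, older on average than LA (made up)
p_star_z = {"young": 0.20, "middle": 0.30, "old": 0.50}

# R = P*(y=1 | do(x=1)) = sum_z P(y=1 | do(x=1), z) * P*(z)
R = sum(p_y_do_x_z[z] * p_star_z[z] for z in p_star_z)
print(R)  # 0.56 for these toy numbers
```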
In real world applications, it may happen that only a limited amount of experimental information can
be gathered at the source environment. The question arises whether an investigator in possession of
a limited set of experiments would still be able to estimate the desired effects at the target domain.
To illustrate some of the subtle issues that mz-transportability entails, consider Fig. 1(c,d), which concerns the transport of experimental results from two sources ({πa, πb}) to infer the effect of X on Y in π*, R = P*(y|do(x)). In these diagrams, X may represent the treatment (e.g., cholesterol level), Z1 represents a pre-treatment variable (e.g., diet), Z2 represents an intermediate variable (e.g., biomarker), and Y represents the outcome (e.g., heart failure). Assume that experimental studies randomizing {Z2} can be conducted in domain πa and {Z1} in πb. A simple analysis can show that R cannot be transported from either source alone (even when experiments are available over both variables) [9]. Still, combining experiments from both sources allows one to determine the effect in the target through the following transport formula [13]:
P*(y|do(x)) = Σ_{z2} P^(b)(z2 | x, do(Z1)) P^(a)(y | do(z2))    (2)
This transport formula is a mixture of the experimental result over {Z1} from πb, P^(b)(z2 | x, do(Z1)), with the result of the experiment over {Z2} in πa, P^(a)(y | do(z2)), and constitutes a consistent estimand of the target relation in π*. Further consider Fig. 1(e,f), which illustrates the opposite phenomenon. In this case, if experiments over {Z2} are available in domain πa and over {Z1} in πb, R is not transportable. However, if {Z1, Z2} are available in the same domain, say πa, R is transportable and equals P^(a)(y | x, do(Z1, Z2)), independently of the values of Z1 and Z2.
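For concreteness, the sketch below evaluates formula (2) numerically; the two tables are hypothetical stand-ins for P^(b)(z2 | x, do(Z1)) and P^(a)(y | do(z2)) at fixed values of x and of the intervention on Z1.

```python
# Transport formula (2):
#   P*(y | do(x)) = sum_{z2} P_b(z2 | x, do(Z1)) * P_a(y | do(z2)),
# mixing experimental evidence from the two sources. All numbers are made up.

p_b_z2 = {0: 0.4, 1: 0.6}   # P^(b)(z2 | x, do(Z1)) from the experiment in pi_b
p_a_y1 = {0: 0.2, 1: 0.8}   # P^(a)(y=1 | do(z2)) from the experiment in pi_a

p_star_y1_do_x = sum(p_b_z2[z2] * p_a_y1[z2] for z2 in p_b_z2)
print(p_star_y1_do_x)       # 0.4*0.2 + 0.6*0.8 = 0.56
```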
These intriguing results entail two fundamental issues that will be answered throughout this paper.
First, whether the do-calculus is complete relative to such problems, that is, whether it would always
find a transport formula whenever one exists. Second, assuming that there exists a sequence of applications of the do-calculus that achieves the reduction required by mz-transportability, finding such a sequence may be computationally intractable, so an efficient way of obtaining the transport formula is needed.
3 A Graphical Condition for mz-transportability
The basic semantical framework in our analysis rests on structural causal models as defined in [18, pp. 205], also called data-generating models. In the structural causal framework [18, Ch. 7], actions are modifications of functional relationships, and each action do(x) on a causal model M produces a new model M_x = ⟨U, V, F_x, P(U)⟩, where V is the set of observable variables, U is the set of unobservable variables, and F_x is obtained after replacing f_X ∈ F for every X ∈ X with a new function that outputs a constant value x given by do(x).
We follow the conventions given in [18]. We denote variables by capital letters and their realized
values by small letters. Similarly, sets of variables will be denoted by bold capital letters, and sets of realized values by bold small letters. We use the typical graph-theoretic terminology with the
corresponding abbreviations De(Y)G , P a(Y)G , and An(Y)G , which will denote respectively the
set of observable descendants, parents, and ancestors of the node set Y in G. A graph GY will
denote the induced subgraph G containing nodes in Y and all arrows between such nodes. Finally,
GXZ stands for the edge subgraph of G where all arrows incoming into X and all arrows outgoing
from Z are removed.
Key to the analysis of transportability is the notion of identifiability [18, pp. 77], which expresses
the requirement that causal effects are computable from a combination of non-experimental data P
and assumptions embodied in a causal diagram G. Causal models and their induced diagrams are
associated with one particular domain (i.e., setting, population, environment), and this representation
is extended in transportability to capture properties of two domains simultaneously. This is possible
if we assume that the structural equations share the same set of arguments, though the functional
forms of the equations may vary arbitrarily [7]. 4
Definition 1 (Selection Diagrams). Let ⟨M, M*⟩ be a pair of structural causal models relative to domains ⟨π, π*⟩, sharing a diagram G. ⟨M, M*⟩ is said to induce a selection diagram D if D is constructed as follows: every edge in G is also an edge in D; D contains an extra edge Si → Vi whenever there might exist a discrepancy fi ≠ fi* or P(Ui) ≠ P*(Ui) between M and M*.
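As a sketch of how a selection diagram can be represented in code, the snippet below encodes a reading of Fig. 1(a) with networkx; the particular edges and the `selection` attribute are our own modeling choices, not part of the formal definition.

```python
# Minimal encoding of the selection diagram of Fig. 1(a): the causal edges
# of G plus an S-node marking the mechanism of Z as possibly discrepant.
import networkx as nx

D = nx.DiGraph()
D.add_edges_from([("Z", "Y"), ("X", "Y")])   # causal diagram G (our reading)
D.add_edge("S", "Z")                         # S -> Z: f_Z or P(U_Z) may differ
D.nodes["S"]["selection"] = True

# Recover the S-variables as the nodes flagged as selection nodes
s_nodes = [v for v, attrs in D.nodes(data=True) if attrs.get("selection")]
print(s_nodes)  # ['S']
```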
In words, the S-variables locate the mechanisms where structural discrepancies between the two domains are suspected to take place.5 Armed with the concept of identifiability and selection diagrams,
mz-transportability of causal effects can be defined as follows [13]:
Definition 2 (mz-Transportability). Let D = {D^(1), ..., D^(n)} be a collection of selection diagrams relative to source domains Π = {π1, ..., πn} and target domain π*, respectively, and let Zi (and Z*) be the variables in which experiments can be conducted in domain πi (and π*). Let ⟨P^i, I_z^i⟩ be the pair of observational and interventional distributions of πi, where I_z^i = ∪_{Z′⊆Zi} P^i(v|do(z′)), and, in an analogous manner, let ⟨P*, I_z*⟩ be the observational and interventional distributions of π*. The causal effect R = P*_x(y) is said to be mz-transportable from Π to π* in D if P*_x(y) is uniquely computable from ∪_{i=1,...,n} ⟨P^i, I_z^i⟩ ∪ ⟨P*, I_z*⟩ in any model that induces D.
While this definition might appear convoluted, it is nothing more than a formalization of the statement "R needs to be uniquely computable from the information set IS alone." Naturally, when IS has many components (multiple observational and interventional distributions), it becomes lengthy. This requirement of computability from ⟨P*, I_z*⟩ and ⟨P^i, I_z^i⟩ for all sources has a syntactic image in the do-calculus, which is captured by the following sufficient condition:
Theorem 1 ([13]). Let D = {D^(1), ..., D^(n)} be a collection of selection diagrams relative to source domains Π = {π1, ..., πn} and target domain π*, respectively, and let Si represent the collection of S-variables in the selection diagram D^(i). Let {⟨P^i, I_z^i⟩} and ⟨P*, I_z*⟩ be respectively the pairs of observational and interventional distributions in the sources Π and target π*. The effect R = P*(y|do(x)) is mz-transportable from Π to π* in D if the expression P(y|do(x), S1, ..., Sn) is reducible, using the rules of the do-calculus, to an expression in which (1) do-operators that apply to subsets of I_z^i have no Si-variables, or (2) do-operators apply only to subsets of I_z*.
It is not difficult to see that in Fig. 1(c,d) (and also in Fig. 1(e,f)) a sequence of applications of the rules of the do-calculus indeed reaches the reduction required by the theorem and yields a transport formula as shown in Section 2. It is not obvious, however, whether such a sequence exists in Fig. 2(a,b) when experiments over {X} are available in πa and {Z} in πb, and if it does not exist, it is also not clear whether this would imply the inability to transport. It turns out that in this specific example there is no such sequence and the target relation R is not transportable, which means that there exist two models that are equally compatible with the data (i.e., both could generate the same dataset) while each model entails a different answer for the effect R (violating the uniqueness requirement of Def. 2).⁶
⁴ As discussed in the reference, the assumption of no structural changes between domains can be relaxed, but some structural assumptions regarding the discrepancies between domains must still hold (e.g., acyclicity).
⁵ Transportability assumes that enough structural knowledge about both domains is known to substantiate the production of their respective causal diagrams. In the absence of such knowledge, causal discovery algorithms might be used to infer the diagrams from data [19, 18].
⁶ This is usually an indication that the current state of scientific knowledge about the problem (encoded in the form of a selection diagram) does not constrain the observed distributions in such a way that an answer is entailed independently of the details of the functions and of the probability over the exogenous variables.
Figure 2: (a,b) Selection diagrams in which it is not possible to transport R = P*(y|do(x)) with experiments over {X} in πa and {Z} in πb. (c,d) Examples of diagrams in which some paths need to be extended to satisfy the definition of mz*-shedge.
To demonstrate this fact formally, we show the existence of two structural models M1 and M2 such that the following equalities and inequality between distributions hold:
P^(a)_{M1}(X, Z, Y) = P^(a)_{M2}(X, Z, Y),
P^(b)_{M1}(X, Z, Y) = P^(b)_{M2}(X, Z, Y),
P^(a)_{M1}(Z, Y | do(X)) = P^(a)_{M2}(Z, Y | do(X)),
P^(b)_{M1}(X, Y | do(Z)) = P^(b)_{M2}(X, Y | do(Z)),
P*_{M1}(X, Z, Y) = P*_{M2}(X, Z, Y),    (3)
for all values of X, Z, and Y, and
P*_{M1}(Y | do(X)) ≠ P*_{M2}(Y | do(X)),    (4)
for some value of X and Y.
Let us assume that all variables in U ∪ V are binary. Let U1, U2 ∈ U be the common causes of X and Y and of Z and Y, respectively; let U3, U4 ∈ U be the random disturbances exclusive to Z and Y, respectively, and let U5, U6 ∈ U be extra random disturbances exclusive to Y. Let Sa and Sb index the models in the following way: the tuples ⟨Sa = 1, Sb = 0⟩, ⟨Sa = 0, Sb = 1⟩, ⟨Sa = 0, Sb = 0⟩ represent domains πa, πb, and π*, respectively. Define the two models as follows:
M1 = { X = U1;  Z = U2 ⊕ (U3 ∧ Sa);  Y = ((X ⊕ Z ⊕ U1 ⊕ U2 ⊕ (U4 ∧ Sb)) ∧ ¬U5) ∨ (¬U5 ∧ U6) }
M2 = { X = U1;  Z = U2 ⊕ (U3 ∧ Sa);  Y = ((Z ⊕ U2 ⊕ (U4 ∧ Sb)) ∧ ¬U5) ∨ (¬U5 ∧ U6) }
where ⊕ represents the exclusive-or function. Both models agree with respect to P(U), which is defined as P(Ui) = 1/2, i = 1, ..., 6. It is not difficult to evaluate these models and note that the constraints given in Eqs. (3) and (4) are indeed satisfied (including positivity); the result follows.⁷
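Since both models are fully specified over binary variables, the constraints in Eqs. (3) and (4) can be checked by brute-force enumeration of U; the sketch below does exactly that, under the reconstruction of the structural equations given above.

```python
# Enumerate the 2^6 settings of U1..U6 (each with probability 1/2) and compare
# the two models' distributions per domain, observationally and under do(X).
from itertools import product

def y1(x, z, u1, u2, u4, u5, u6, sb):        # Y in M1
    core = x ^ z ^ u1 ^ u2 ^ (u4 and sb)
    return int((core and not u5) or (not u5 and u6))

def y2(x, z, u1, u2, u4, u5, u6, sb):        # Y in M2
    core = z ^ u2 ^ (u4 and sb)
    return int((core and not u5) or (not u5 and u6))

def dist(f_y, sa, sb, do_x=None):
    table = {}
    for u1, u2, u3, u4, u5, u6 in product((0, 1), repeat=6):
        x = u1 if do_x is None else do_x
        z = u2 ^ (u3 and sa)
        y = f_y(x, z, u1, u2, u4, u5, u6, sb)
        table[(x, z, y)] = table.get((x, z, y), 0) + 1 / 64
    return table

# Observational distributions agree in pi_a, pi_b, and pi* ...
assert dist(y1, 1, 0) == dist(y2, 1, 0)   # pi_a: Sa=1, Sb=0
assert dist(y1, 0, 1) == dist(y2, 0, 1)   # pi_b: Sa=0, Sb=1
assert dist(y1, 0, 0) == dist(y2, 0, 0)   # pi*: Sa=Sb=0
# ... but the interventional ones in pi* differ, matching Eq. (4):
print(dist(y1, 0, 0, do_x=1) == dist(y2, 0, 0, do_x=1))  # False
```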
Given that our goal is to demonstrate the converse of Theorem 1, we collect different examples of non-transportability, such as the previous one, and try to determine whether there is a pattern in such cases and how to generalize them towards a complete characterization of mz-transportability.
One syntactic subtask of mz-transportability is to determine whether certain effects are identifiable
in some source domains where interventional data is available. There are two fundamental results
developed for identifiability that will be relevant for mz-transportability as well. First, we should
consider confounded components (or c-components), which were defined in [20] and stand for a
cluster of variables connected through bidirected edges (which are not separable through the observables in the system). One key result is that each causal graph (and each subgraph) induces a unique C-component decomposition ([20, Lemma 11]). This decomposition was indeed instrumental for
a series of conditions for ordinary identification [21] and the inability to recursively decompose a
certain graph was later used to prove completeness.
Definition 3 (C-component). Let G be a causal diagram such that a subset of its bidirected arcs
forms a spanning tree over all vertices in G. Then G is a C-component (confounded component).
Subsequently, [22] proposed an extension of C-components called C-forests, essentially enforcing
that each C-component has to be a spanning forest and closed under ancestral relations [20].
⁷ For a more sophisticated argument on how to evaluate these models, see the proofs in Appendix 3.
Definition 4 (C-forest). Let G be a causal diagram where Y is the maximal root set. Then G is a
Y-rooted C-forest if G is a C-component and all observable nodes have at most one child.
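Computationally, the C-components of a diagram are just the connected components of its bidirected part; a minimal sketch follows (storing bidirected edges as a separate undirected graph is an implementation choice of ours, and the example diagram is hypothetical).

```python
# C-component decomposition: connected components of the graph formed by the
# bidirected (confounding) edges alone, following Tian's decomposition.
import networkx as nx

def c_components(nodes, bidirected_edges):
    """Partition `nodes` into C-components given the bidirected edges."""
    H = nx.Graph()
    H.add_nodes_from(nodes)
    H.add_edges_from(bidirected_edges)
    return [set(c) for c in nx.connected_components(H)]

# Hypothetical example: X <-> Z1 and Z1 <-> Z2 confounded; Y unconfounded
print(c_components(["X", "Z1", "Z2", "Y"], [("X", "Z1"), ("Z1", "Z2")]))
# e.g. [{'X', 'Z1', 'Z2'}, {'Y'}]
```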
For concreteness, consider Fig. 1(c) and note that there exists a C-forest over nodes {Z1 , X, Z2 } and
rooted in {Z2 }. There exists another C-forest over nodes {Z1 , X, Z2 , Y } rooted in {Y }. It is also
the case that {Z2 } and {Y } are themselves trivial C-forests. When we have a pair of C-forests as
{Z1, X, Z2} and {Z2}, or {Z1, X, Z2, Y} and {Y} (i.e., the root set does not intersect the treatment variables), these structures are called hedges, and identifiability was shown to be infeasible whenever a hedge exists [22]. Clearly, despite the existence of hedges in Fig. 1(c,d), the effects of interest were shown to be mz-transportable. This example is an indication that hedges do not capture in an immediate way the structure needed for characterizing mz-transportability; i.e., a graph might be a hedge (or have a hedge as an edge subgraph) but the target quantity might still be mz-transportable.
Based on these observations, we propose the following definition that may lead to the boundaries of
the class of mz-transportable relations:
Definition 5 (mz*-shedge). Let D = (D^(1), ..., D^(n)) be a collection of selection diagrams relative to source domains Π = (π1, ..., πn) and target domain π*, respectively, let Si represent the collection of S-variables in the selection diagram D^(i), and let D^(*) be the causal diagram of π*. Let {⟨P^i, I_z^i⟩} be the collection of pairs of observational and interventional distributions of {πi}, where I_z^i = ∪_{Z′⊆Zi} P^i(v|do(z′)), and, in an analogous manner, let ⟨P*, I_z*⟩ be the observational and interventional distributions of π*, for Zi the set of experimental variables in πi. Consider a pair of R-rooted C-forests F = ⟨F, F′⟩ such that F′ ⊆ F, F′ ∩ X = ∅, F ∩ X ≠ ∅, and R ⊆ An(Y)_{G_X̄} (called a hedge [22]). We say that the induced collection of pairs of R-rooted C-forests over each diagram, ⟨F^(*), F^(1), ..., F^(n)⟩, is an mz-shedge for P*_x(y) relative to experiments (I_z*, I_z^1, ..., I_z^n) if they are all hedges and one of the following conditions holds for each domain πi, i ∈ {*, 1, ..., n}:
1. There exists at least one variable of Si pointing to the induced diagram F′^(i), or
2. (F^(i) \ F′^(i)) ∩ Zi is an empty set, or
3. The collection of pairs of C-forests induced over the diagrams, ⟨F^(*), F^(1), ..., F^(i) \ Z*_i, ..., F^(n)⟩, is also an mz-shedge relative to (I_z*, I_z^1, ..., I^i_{z\z*_i}, ..., I_z^n), where Z*_i = (F^(i) \ F′^(i)) ∩ Zi.
Furthermore, we call an mz*-shedge an mz-shedge in which there exists a directed path from R \ (R ∩ De(X)_F) to R ∩ De(X)_F not passing through X (see also Appendix 3).
The definition of mz*-shedge might appear involved, but it is nothing more than the articulation of the computability requirement of Def. 2 (and implicitly the syntactic goal of Thm. 1) in a more explicit graphical fashion. Specifically, for a certain factor Q*_i needed for the computation of the effect Q* = P*(y|do(x)), in at least one domain, (i) it should be enforced that the S-nodes are separable from the inducing root set of the component to which Q*_i belongs, and further, (ii) the experiments available in this domain are sufficient for solving Q*_i. For instance, assuming we want to compute Q* = P*(y|do(x)) in Fig. 1(c,d), Q* can be decomposed into two factors, Q*_1 = P*_{z1,x}(z2) and Q*_2 = P*_{z1,x,z2}(y). It is the case that for factor Q*_1, (i) holds true in πb and (ii) the experiments available over Z1 are enough to guarantee the computability of this factor (a similar analysis applies to Q*_2); i.e., there is no mz*-shedge and Q* is computable from the available data.
Def. 5 also asks for the explicit existence of a path from the nodes in the root set R \ (R ∩ De(X)_F) to (R ∩ De(X)_F); a simple example can help to illustrate this requirement. Consider Fig. 2(c) and the goal of computing Q = P*(y|do(x)) without extra experimental information. There exists a hedge for Q induced over {X, Z, Y} without the node W (note that {W} is a c-component itself), and the induced graph G_{X,Z,Y} indeed leads to a counter-example for the computability of P*(z, y|do(x)). Using this subgraph alone, however, it would not be possible to construct a counter-example for the marginal effect P*(y|do(x)). Despite the fact that P*(z, y|do(x)) is not computable from P*(x, z, y), the quantity P*(y|do(x)) is identifiable in G_{X,Z,Y}, and so any structural model compatible with this subgraph will generate the same value under the marginalization over Z of P*(z, y|do(x)). Also, it might happen that the root set R must be augmented (Fig. 2(d)), so we prefer to add this requirement explicitly to the definition. (There are more involved scenarios that
PROCEDURE TRmz(y, x, P, I, S, W, D)
INPUT: x, y: value assignments; P: local distribution relative to domain S (S = 0 indexes π*) and active experiments I; W: weighting scheme; D: backbone of selection diagram; Si: selection nodes in πi (S0 = ∅ relative to π*); [the following set and distributions are globally defined: Zi, P*, P^(i)_{Zi}].
OUTPUT: P*_x(y) in terms of P*, P*_{Z*}, P^(i)_{Zi}, or FAIL(D, C0).
1. if x = ∅, return Σ_{V\Y} P.
2. if V \ An(Y)_D ≠ ∅, return TRmz(y, x ∩ An(Y)_D, Σ_{V\An(Y)_D} P, I, S, W, D_{An(Y)}).
3. set W = (V \ X) \ An(Y)_{D_X̄}. if W ≠ ∅, return TRmz(y, x ∪ w, P, I, S, W, D).
4. if C(D \ X) = {C0, C1, ..., Ck}, return Σ_{V\{Y,X}} Π_i TRmz(ci, v \ ci, P, I, S, W, D).
5. if C(D \ X) = {C0},
6.    if C(D) ≠ {D},
7.       if C0 ∈ C(D), return Π_{i|Vi∈C0} ( Σ_{V\V_D^(i)} P / Σ_{V\V_D^(i−1)} P ).
8.       if (∃ C′) C0 ⊂ C′ ∈ C(D),
9.          for {i | Vi ∈ C′}, set κi = κi ∪ (v_D^(i−1) \ C′);
            return TRmz(y, x ∩ C′, Π_{i|Vi∈C′} P(Vi | V_D^(i−1) ∩ C′, κi), I, S, W, C′).
10.   else, if I = ∅, for i = 0, ..., |D|:
            if (Si ⊥⊥ Y | X)_{D^(i)_X̄} ∧ (Zi ∩ X ≠ ∅), Ei = TRmz(y, x \ zi, P, Zi ∩ X, i, W, D \ {Zi ∩ X}).
11.   if |E| > 0, return Σ_{i=1}^{|E|} w_i^(j) Ei.
12.   else, FAIL(D, C0).
Figure 3: Modified version of identification algorithm capable of recognizing mz-transportability.
we prefer to omit for the sake of presentation.) After adding the directed path from Z to Y that
passes through W , we can construct the following counter-example for Q:
M1 = { X = U1;  Z = U2;  W = ((Z ⊕ U3) ∧ B) ∨ (B ∧ (1 − Z));  Y = ((X ⊕ W ⊕ U2) ∧ A) ∨ (A ∧ (1 − X ⊕ W ⊕ U2)) }
M2 = { X = U1;  Z = U1 ⊕ U2;  W = ((Z ⊕ U3) ∧ B) ∨ (B ∧ (1 − Z));  Y = ((W ⊕ U2) ∧ A) ∨ (A ∧ (1 − W ⊕ U2)) }
with P(Ui) = 1/2 for all i, and P(A) = P(B) = 1/2. It is not immediate to show that the two models produce the desired property; refer to Appendix 2 for a formal proof of this statement.
Given that the definition of mz*-shedge is justified and well understood, we can now state the connection between hedges and mz*-shedges more directly (the proof can be found in Appendix 3):
Theorem 2. If there is a hedge for P*_x(y) in G and no experimental data is available (i.e., I_z* = {}), there exists an mz*-shedge for P*_x(y) in G.
Whenever one domain is considered and no experimental data is available, this result states that an mz*-shedge can always be constructed from a hedge, which implies that we can operate with mz*-shedges from now on (the converse holds for Z = {}). Finally, we can concentrate on the most general case of mz*-shedges with experimental data in multiple domains, as stated in the sequel:
Theorem 3. Let D = {D^(1), ..., D^(n)} be a collection of selection diagrams relative to source domains Π = {π1, ..., πn} and target domain π*, respectively, and let {I_z^i}, for i ∈ {*, 1, ..., n}, be defined appropriately. If there is an mz*-shedge for the effect R = P*_x(y) relative to experiments (I_z*, I_z^1, ..., I_z^n) in D, then R is not mz-transportable from Π to π* in D.
This is a powerful result: it states that the existence of an mz*-shedge precludes mz-transportability. (The proof of this statement is somewhat involved; see the supplementary material for more details.)
For concreteness, let us consider the selection diagrams D = (D^(a), D^(b)) relative to domains πa and πb in Fig. 2(a,b). Our goal is to mz-transport Q = P*(y|do(x)) with experiments over {X} in πa and {Z} in πb. It is the case that there exists an mz*-shedge relative to the given experiments. To witness, first note that F′ = {Y, Z} and F = F′ ∪ {X}, and also that there exists a selection variable S pointing to F′ in both domains, so the first condition of Def. 5 is satisfied. This is a trivial graph with 3 variables that can be solved by inspection, but it is somewhat involved to efficiently evaluate the conditions of the definition in more intricate structures, which motivates the search for a procedure for recognizing mz*-shedges that can be coupled with the previous theorem.
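The witness argument itself is mechanical once F′ and the selection edges are written down: one only has to test whether some S-variable points into F′. A small sketch under our hypothetical reading of the two selection diagrams:

```python
# Condition 1 of Def. 5 for Fig. 2(a,b): F' = {Y, Z}, F = F' u {X}; check
# that an S-node points into F' in each selection diagram.
def s_points_into(f_prime, selection_edges):
    return any(head in f_prime for (_tail, head) in selection_edges)

f_prime = {"Y", "Z"}
sel_edges_a = [("S", "Z")]   # hypothetical S-edge of D^(a)
sel_edges_b = [("S", "Y")]   # hypothetical S-edge of D^(b)
print(all(s_points_into(f_prime, e) for e in (sel_edges_a, sel_edges_b)))  # True
```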
4 Complete Algorithm for mz-transportability
There exists an extensive literature concerned with the problem of computability of causal relations
from a combination of assumptions and data [21, 22, 7, 13]. In this section, we build on the works
that treat this problem by graphical means, and we concentrate particularly on the algorithm called TRmz, constructed in [13] (see Fig. 3), which built on some of the results in [21, 22, 7].
The algorithm TRmz takes as input a collection of selection diagrams with the corresponding experimental data from the corresponding domains, and it returns a transport formula whenever it is
able to produce one. The main idea of the algorithm is to leverage the c-component factorization
[20] and recursively decompose the target relation into manageable pieces (line 4), so as to try to
solve each of them separately. Whenever this standard evaluation fails in the target domain π* (line 6), TRmz tries to use the experimental information available from the target and source domains (line 10). (For a concrete view of how TRmz works, see the running example in [13, pp. 7].)
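As a self-contained illustration of the simplest case the recursion bottoms out on, the sketch below computes P_x(y) in a confounding-free chain by the truncated factorization; this is not TRmz itself (which handles confounding, S-nodes, and experiments), only the Markovian special case, with made-up tables.

```python
# Truncated factorization for the Markovian chain X -> Z -> Y:
# P_x(z, y) = P(z | x) * P(y | z), hence P_x(y) = sum_z P(z | x) * P(y | z).
# This is the confounding-free case where identification needs no experiments.

p_z_given_x = {(0, 0): 0.7, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.8}  # (z, x) -> P(z|x)
p_y_given_z = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.4, (1, 1): 0.6}  # (y, z) -> P(y|z)

def p_y_do_x(y, x):
    # the factor for X is truncated away; sum out the mediator Z
    return sum(p_z_given_x[(z, x)] * p_y_given_z[(y, z)] for z in (0, 1))

print(p_y_do_x(1, 1))  # 0.2*0.1 + 0.8*0.6 = 0.50
```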
In a systematic fashion, the algorithm basically implements the declarative condition delineated in Theorem 1. TRmz was shown to be sound [13, Thm. 3], but there was no theoretical guarantee that failure to find a transport formula implies its non-existence and, perhaps, the complete lack of transportability. This guarantee is precisely what we establish in the sequel.
Theorem 4. Assume TRmz fails to transport the effect P*_x(y) (exits with failure executing line 12). Then there exist X0 ⊆ X and Y0 ⊆ Y such that the graph pair (D, C0) returned by the fail condition of TRmz contains as edge subgraphs C-forests F, F′ that span an mz*-shedge for P*_{x0}(y0).
Proof. Let D be the subgraph local to the call in which TRmz failed, and let R be the root set of D. It is possible to remove some directed arrows from D while preserving R as root, which results in an R-rooted C-forest F. Since by construction F′ = F ∩ C0 is closed under descendants and only directed arrows were removed, both F and F′ are C-forests. Also by construction, R ⊆ An(Y)_{G_X̄}, together with the fact that X and Y from the recursive call are clearly subsets of the original input. Before failure, TRmz evaluated false consecutively at lines 6, 10, and 11, and it is not difficult to see that an S-node points to F′ or the respective experiments were not able to break the local hedge (lines 10 and 11). It remains to be shown that this mz-shedge can be stretched to generate an mz*-shedge, but now the same construction given in Thm. 2 can be applied (see also the supplementary material).
Finally, we are ready to state the completeness of the algorithm and the graphical condition.
Theorem 5 (completeness). TRmz is complete.
Corollary 1 (mz*-shedge characterization). P*_x(y) is mz-transportable from Π to π* in D if and only if there is no mz*-shedge for P*_{x0}(y0) in D for any X0 ⊆ X and Y0 ⊆ Y.
Furthermore, we show below that the do-calculus is complete for establishing mz-transportability, which means that failure in the exhaustive application of its rules implies the non-existence of a mapping from the available data to the target relation (i.e., there is no mz-transport formula), independently of the method used to obtain such a mapping.
Corollary 2 (do-calculus characterization). The rules of do-calculus together with standard probability manipulations are complete for establishing mz-transportability of causal effects.
5 Conclusions
In this paper, we provided a complete characterization, in the form of a graphical condition, for deciding mz-transportability. We further showed that the procedure introduced in [13] for computing the transport formula is complete, which means that the set of transportable instances identified by
the do-calculus is complete for this class of problems, which means that finding a proof strategy in
this language suffices to solve the problem. The non-parametric characterization established in this
paper gives rise to a new set of research questions. While our analysis aimed at achieving unbiased
transport under asymptotic conditions, additional considerations need to be taken into account when
dealing with finite samples. Specifically, when sample sizes vary significantly across studies, statistical power considerations need to be invoked along with bias considerations. Furthermore, when
no transport formula exists, approximation techniques must be resorted to, for example, replacing
the requirement of non-parametric analysis with assumptions about linearity or monotonicity of certain relationships in the domains. The nonparametric characterization provided in this paper should
serve as a guideline for such approximation schemes.
References
[1] D. Campbell and J. Stanley. Experimental and Quasi-Experimental Designs for Research. Wadsworth
Publishing, Chicago, 1963.
[2] C. Manski. Identification for Prediction and Decision. Harvard University Press, Cambridge, Massachusetts, 2007.
[3] L. V. Hedges and I. Olkin. Statistical Methods for Meta-Analysis. Academic Press, January 1985.
[4] W.R. Shadish, T.D. Cook, and D.T. Campbell. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Houghton-Mifflin, Boston, second edition, 2002.
[5] S. Morgan and C. Winship. Counterfactuals and Causal Inference: Methods and Principles for Social
Research (Analytical Methods for Social Research). Cambridge University Press, New York, NY, 2007.
[6] J. Pearl and E. Bareinboim. Transportability of causal and statistical relations: A formal approach. In
W. Burgard and D. Roth, editors, Proceedings of the Twenty-Fifth National Conference on Artificial Intelligence, pages 247?254. AAAI Press, Menlo Park, CA, 2011.
[7] E. Bareinboim and J. Pearl. Transportability of causal effects: Completeness results. In J. Hoffmann and
B. Selman, editors, Proceedings of the Twenty-Sixth National Conference on Artificial Intelligence, pages
698?704. AAAI Press, Menlo Park, CA, 2012.
[8] E. Bareinboim and J. Pearl. A general algorithm for deciding transportability of experimental results.
Journal of Causal Inference, 1(1):107?134, 2013.
[9] E. Bareinboim and J. Pearl. Causal transportability with limited experiments. In M. desJardins and
M. Littman, editors, Proceedings of the Twenty-Seventh National Conference on Artificial Intelligence,
pages 95?101, Menlo Park, CA, 2013. AAAI Press.
[10] S. Lee and V. Honavar. Causal transportability of experiments on controllable subsets of variables: z-transportability. In A. Nicholson and P. Smyth, editors, Proceedings of the Twenty-Ninth Conference on
[11] E. Bareinboim and J. Pearl. Meta-transportability of causal effects: A formal approach. In C. Carvalho
and P. Ravikumar, editors, Proceedings of the Sixteenth International Conference on Artificial Intelligence
and Statistics (AISTATS), pages 135?143. JMLR W&CP 31, 2013.
[12] S. Lee and V. Honavar. m-transportability: Transportability of a causal effect from multiple environments.
In M. desJardins and M. Littman, editors, Proceedings of the Twenty-Seventh National Conference on
Artificial Intelligence, pages 583?590, Menlo Park, CA, 2013. AAAI Press.
[13] E. Bareinboim, S. Lee, V. Honavar, and J. Pearl. Transportability from multiple environments with limited
experiments. In C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger, editors,
Advances in Neural Information Processing Systems 26, pages 136?144. Curran Associates, Inc., 2013.
[14] H. Daume III and D. Marcu. Domain adaptation for statistical classifiers. Journal of Artificial Intelligence
Research, 26:101?126, 2006.
[15] A.J. Storkey. When training and test sets are different: characterising learning transfer. In J. Candela,
M. Sugiyama, A. Schwaighofer, and N.D. Lawrence, editors, Dataset Shift in Machine Learning, pages
3?28. MIT Press, Cambridge, MA, 2009.
[16] B. Sch?olkopf, D. Janzing, J. Peters, E. Sgouritsa, K. Zhang, and J. Mooij. On causal and anticausal
learning. In J Langford and J Pineau, editors, Proceedings of the 29th International Conference on
Machine Learning (ICML), pages 1255?1262, New York, NY, USA, 2012. Omnipress.
[17] K. Zhang, B. Sch?olkopf, K. Muandet, and Z. Wang. Domain adaptation under target and conditional
shift. In Proceedings of the 30th International Conference on Machine Learning (ICML). JMLR: W&CP
volume 28, 2013.
[18] J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, New York, 2000.
2nd edition, 2009.
[19] P. Spirtes, C.N. Glymour, and R. Scheines. Causation, Prediction, and Search. MIT Press, Cambridge,
MA, 2nd edition, 2000.
[20] J. Tian. Studies in Causal Reasoning and Learning. PhD thesis, Department of Computer Science,
University of California, Los Angeles, Los Angeles, CA, November 2002.
[21] J. Tian and J. Pearl. A general identification condition for causal effects. In Proceedings of the Eighteenth
National Conference on Artificial Intelligence, pages 567?573. AAAI Press/The MIT Press, Menlo Park,
CA, 2002.
[22] I. Shpitser and J. Pearl. Identification of joint interventional distributions in recursive semi-Markovian
causal models. In Proceedings of the Twenty-First National Conference on Artificial Intelligence, pages
1219?1226. AAAI Press, Menlo Park, CA, 2006.
| 5536 |@word trial:1 illustrating:3 version:2 manageable:1 instrumental:1 nd:2 c0:10 calculus:11 hu:1 nicholson:1 decomposition:2 asks:1 tr:1 recursively:2 reduction:2 contains:2 exclusively:1 series:1 interestingly:2 current:1 z2:28 olkin:1 si:7 intriguing:1 must:3 dx:1 chicago:1 happen:2 remove:1 alone:3 intelligence:9 cook:1 inspection:1 short:1 completeness:5 characterization:7 node:14 gx:2 zhang:2 mathematical:1 along:1 constructed:3 descendant:1 prove:1 combine:1 dan:1 manner:2 x0:2 intricate:1 indeed:4 themselves:1 globally:1 decomposed:1 armed:1 becomes:1 provided:2 estimating:1 underlying:1 linearity:1 what:1 backbone:1 ail:1 developed:1 finding:4 possession:1 guarantee:4 every:3 auai:1 classifier:1 converse:2 omit:1 appear:2 broadened:1 before:1 understood:1 local:3 treat:1 despite:2 encoding:1 establishing:2 path:4 might:8 eb:1 weakened:1 studied:2 suggests:1 conversely:1 collect:1 limited:10 factorization:1 tian:2 directed:4 practical:1 unique:1 transporting:1 recursive:2 implement:1 differs:1 procedure:3 intersect:1 empirical:2 significantly:2 pre:1 induce:1 word:1 cannot:3 selection:22 operator:2 context:1 roth:1 eighteenth:1 graphically:1 economics:1 independently:3 m2:3 rule:4 subgraphs:2 cholesterol:1 u6:3 population:7 notion:1 traditionally:1 fx:3 analogous:2 target:27 construction:3 smyth:1 curran:1 harvard:1 synthesize:2 associate:1 satisfying:1 particularly:1 storkey:1 continues:1 marcu:1 u4:2 houghton:1 observed:1 reducible:1 solved:2 capture:2 wang:1 connected:1 counter:2 mz:55 removed:2 subtask:1 environment:9 ui:4 littman:2 solving:1 predictive:1 serve:1 upon:1 manski:1 exit:1 observables:1 joint:1 various:1 represented:1 articulated:2 distinct:1 artificial:9 outcome:2 exhaustive:1 heuristic:1 posed:1 encoded:1 supplementary:2 say:2 solve:2 precludes:2 statistic:2 syntactic:3 itself:1 sequence:5 indication:2 analytical:1 propose:1 maximal:1 strengthening:1 adaptation:2 relevant:1 combining:3 mifflin:1 date:1 subgraph:5 sixteenth:1 inducing:1 convoluted:1 olkopf:2 los:3 parent:1 cluster:1 requirement:7 empty:1 produce:4 generating:3 executing:1 help:1 illustrate:2 progress:1 eq:1 sa:3 c:2 implies:4 come:1 convention:1 concentrate:2 anticausal:2 subsequently:2 consecutively:1 enable:1 observational:8 material:2 suffices:1 decompose:2 extension:1 hold:5 considered:1 deciding:4 lawrence:1 scope:2 mapping:2 claim:2 pointing:2 u3:5 achieves:1 commonality:2 smallest:2 vary:2 desjardins:2 uniqueness:1 narrative:1 estimation:1 individually:1 establishes:2 city:1 mit:3 clearly:2 always:2 modified:1 ck:1 corollary:2 derived:1 biomarker:1 impossibility:1 sense:1 inference:5 spx:1 i0:1 sb:5 transferring:1 relation:12 ancestor:1 quasi:3 issue:3 overall:1 among:1 classification:1 denoted:3 unobservable:1 special:1 wadsworth:1 marginal:1 equal:1 construct:2 never:1 transportability:48 represents:7 park:6 icml:2 constitutes:1 discrepancy:4 causation:1 simultaneously:1 national:6 individual:2 replaced:1 interest:1 evaluation:1 entailed:1 mixture:2 pzi:2 edge:7 pm2:3 capable:1 necessary:5 respective:2 machinery:1 conduct:1 tree:1 re:1 desired:2 causal:50 theoretical:1 instance:5 markovian:1 bidirected:2 assignment:1 ordinary:1 vertex:1 parametrically:1 subset:5 burgard:1 recognizing:2 conducted:3 seventh:2 characterize:1 answer:3 randomizing:1 generalizability:1 muandet:1 fundamental:2 randomized:2 international:3 automating:1 ancestral:1 sequel:2 systematic:1 lee:3 together:2 concrete:1 thesis:1 aaai:6 central:1 satisfied:2 opposed:1 containing:1 positivity:1 external:1 shpitser:1 
return:8 account:2 de:5 accompanied:1 gy:1 bold:2 inc:1 descendent:1 explicitly:1 vi:5 piece:2 performed:2 try:3 extrapolation:1 later:1 exogenous:1 closed:2 root:7 px0:1 counterfactuals:1 hf:3 candela:1 identifiability:4 view:1 square:2 characteristic:1 efficiently:1 gathered:2 yield:1 pm1:4 generalize:2 identification:5 basically:1 semantical:1 sgouritsa:1 researcher:1 reach:1 janzing:1 whenever:6 sharing:1 lengthy:1 definition:12 sixth:1 failure:6 recalibration:2 pp:4 involved:5 obvious:1 conveys:1 naturally:1 proof:8 associated:1 judea:2 dataset:2 treatment:6 massachusetts:1 break:1 counterfactual:1 knowledge:11 stanley:1 subtle:1 sophisticated:1 campbell:2 higher:1 violating:1 diet:1 follow:1 improved:1 izi:7 evaluated:1 though:1 furthermore:3 until:1 langford:1 transport:21 replacing:2 ei:2 overlapping:1 lack:2 pineau:1 perhaps:1 scientific:4 usa:1 effect:29 validity:1 concept:1 unbiased:2 true:1 equality:1 laboratory:2 spirtes:1 interchangeably:1 uniquely:2 rooted:5 substantiate:1 generalized:1 complete:15 theoretic:1 demonstrate:2 cp:2 passive:2 characterising:1 omnipress:1 reasoning:2 image:1 consideration:3 invoked:1 recently:1 fi:2 common:2 functional:2 volume:1 discussed:3 interpretation:1 m1:3 refer:1 cambridge:5 counterexample:1 stretched:1 nyc:8 trivially:1 pm:5 similarly:1 hp:10 sugiyama:1 language:2 moving:1 entail:3 add:1 recent:1 showed:3 belongs:1 scenario:1 manipulation:1 certain:8 inequality:1 binary:1 arbitrarily:1 meta:2 captured:1 preserving:1 unrestricted:2 relaxed:2 somewhat:2 additional:1 morgan:1 determine:2 ii:2 semi:1 multiple:11 sound:2 infer:3 academic:1 devised:1 equally:1 ravikumar:1 feasibility:1 prediction:2 basic:1 heterogeneous:5 essentially:1 represent:2 c1:1 justified:1 background:1 remarkably:1 want:1 separately:1 diagram:34 else:2 source:20 modality:1 appropriately:1 extra:3 rest:1 operate:1 sch:2 pass:1 induced:7 call:4 structural:11 leverage:2 intermediate:1 iii:1 enough:2 concerned:2 marginalization:1 zi:13 identified:1 opposite:2 regarding:1 idea:1 shadish:1 computable:5 angeles:3 shift:2 whether:10 expression:2 peter:1 returned:1 york:4 cause:1 constitute:1 action:2 passing:1 useful:1 clear:1 aimed:1 amount:2 nonparametric:1 u5:5 extensively:1 induces:2 generate:3 exist:4 iz:14 express:1 group:1 key:2 terminology:1 achieving:1 capital:2 interventional:8 resorted:1 computability:5 graph:9 concreteness:2 estimand:1 enforced:1 letter:4 powerful:1 uncertainty:1 place:1 throughout:2 almost:1 winship:1 decision:1 appendix:5 prefer:2 def:4 followed:1 identifiable:2 constraint:3 precisely:1 unlimited:1 sake:1 ucla:4 aspect:2 answered:1 argument:2 u1:6 span:1 separable:2 px:11 transferred:1 glymour:1 department:1 honavar:3 combination:2 across:4 y0:4 wi:1 delineated:1 modification:1 s1:1 invariant:1 heart:1 taken:1 equation:2 agree:1 previously:2 remains:1 discus:1 turn:1 mechanism:1 fail:2 needed:3 scheines:1 confounded:2 rubric:1 available:19 generalizes:1 experimentation:1 apply:2 weinberger:1 existence:7 original:1 assumes:1 running:1 publishing:1 graphical:5 ghahramani:1 prof:1 build:1 question:2 realized:2 quantity:2 hoffmann:1 transportable:14 strategy:2 exclusive:3 parametric:2 said:2 mx:1 vd:2 collected:3 trivial:3 spanning:2 enforcing:1 declarative:1 assuming:2 index:2 relationship:4 providing:1 izn:2 difficult:3 potentially:1 statement:3 hsa:3 stated:1 disparate:1 bareinboim:7 rise:1 design:2 guideline:2 motivates:1 twenty:6 observation:3 arc:1 finite:1 november:1 january:1 immediate:2 heterogeneity:1 extended:3 witness:1 locate:1 ninth:1 
arbitrary:1 thm:3 introduced:2 pair:9 required:2 extensive:1 z1:21 connection:1 california:1 learned:1 established:2 pearl:10 address:1 able:3 below:2 usually:1 pattern:1 articulation:1 challenge:1 including:1 power:1 disturbance:2 scheme:2 imply:1 ready:1 hm:2 coupled:1 health:1 embodied:1 sn:1 literature:3 discovery:2 mooij:1 determining:1 relative:13 asymptotic:1 carvalho:1 age:6 elia:1 sufficient:6 consistent:1 s0:1 suspected:1 principle:1 editor:9 share:2 production:1 elsewhere:1 compatible:2 last:1 infeasible:1 drastically:1 verbal:1 formal:5 understand:1 bias:1 burges:1 fall:1 characterizing:1 fifth:1 boundary:1 world:2 valid:1 gxz:1 stand:2 selman:1 collection:11 social:2 welling:1 observable:3 implicitly:1 dealing:1 monotonicity:1 active:1 incoming:1 uai:1 tuples:1 search:2 transported:2 transfer:2 ca:7 controllable:1 menlo:6 obtaining:1 forest:14 bottou:1 domain:61 aistats:1 main:2 arrow:5 motivation:1 edition:3 daume:1 nothing:2 child:1 augmented:2 fig:15 causality:1 scattered:1 depicts:1 fashion:2 ny:2 formalization:1 sub:1 fails:2 wish:1 explicit:2 jmlr:2 weighting:2 trmz:15 formula:16 z0:3 theorem:9 specific:5 pz:3 concern:2 evidence:1 exists:14 intractable:1 false:1 adding:1 ci:2 phd:1 illustrates:1 boston:1 generalizing:2 depicted:2 simply:1 failed:1 schwaighofer:1 u2:7 applies:1 ch:1 hedge:14 ma:2 abbreviation:1 conditional:1 goal:7 presentation:1 towards:2 absence:1 feasible:3 change:1 specifically:3 typical:1 lemma:1 called:8 invariance:1 experimental:25 la:5 attempted:1 acyclicity:1 estimable:4 formally:4 owed:1 latter:1 arises:1 inability:2 investigator:1 evaluate:3 outgoing:1 phenomenon:2 |
5,011 | 5,537 | A Statistical Decision-Theoretic Framework for
Social Choice
Hossein Azari Soufiani∗
David C. Parkes†
Lirong Xia‡
Abstract
In this paper, we take a statistical decision-theoretic viewpoint on social choice,
putting a focus on the decision to be made on behalf of a system of agents. In
our framework, we are given a statistical ranking model, a decision space, and a
loss function defined on (parameter, decision) pairs, and formulate social choice
mechanisms as decision rules that minimize expected loss. This suggests a general
framework for the design and analysis of new social choice mechanisms. We
compare Bayesian estimators, which minimize Bayesian expected loss, for the
Mallows model and the Condorcet model respectively, and the Kemeny rule. We
consider various normative properties, in addition to computational complexity
and asymptotic behavior. In particular, we show that the Bayesian estimator for the
Condorcet model satisfies some desired properties such as anonymity, neutrality,
and monotonicity, can be computed in polynomial time, and is asymptotically
different from the other two rules when the data are generated from the Condorcet
model for some ground truth parameter.
1 Introduction
Social choice studies the design and evaluation of voting rules (or rank aggregation rules). There
have been two main perspectives: reach a compromise among subjective preferences of agents, or
make an objectively correct decision. The former has been extensively studied in classical social
choice in the context of political elections, while the latter is relatively less developed, even though
it can be dated back to the Condorcet Jury Theorem in the 18th century [9].
In many multi-agent and social choice scenarios the main consideration is to achieve the second
objective, and make an objectively correct decision. Meanwhile, we also want to respect agents'
preferences and opinions, and require the voting rule to satisfy well-established normative properties in social choice. For example, when a group of friends vote to choose a restaurant for dinner,
perhaps the most important goal is to find an objectively good restaurant, but it is also important
to use a good voting rule in the social choice sense. Even for applications with less societal context, e.g., using voting rules to aggregate rankings in meta-search engines [12], recommender systems [15], crowdsourcing [23], and the semantic web [27], some social choice normative properties are still
desired. For example, monotonicity may be desired, which requires that raising the position of an
alternative in any vote does not hurt the alternative in the outcome of the voting rule. In addition,
we require voting rules to be efficiently computable.
Such scenarios pose the following new challenge: How can we design new voting rules with
good statistical properties as well as social choice normative properties?
To tackle this challenge, we develop a general framework that adopts statistical decision theory [3].
Our approach couples a statistical ranking model with an explicit decision space and loss function.
∗ azari@google.com, Google Research, New York, NY 10011, USA. The work was done when the author was at Harvard University.
† parkes@eecs.harvard.edu, Harvard University, Cambridge, MA 02138, USA.
‡ xial@cs.rpi.edu, Rensselaer Polytechnic Institute, Troy, NY 12180, USA.
                                      Anonymity,      Majority,    Consistency   Complexity               Min. Bayesian
                                      neutrality,     Condorcet                                           risk
                                      monotonicity
Kemeny                                Y               Y            N             NP-hard, P^NP_||-hard    N
Bayesian est. of M1^φ (uni. prior)    Y               N            N             NP-hard, P^NP_||-hard    Y
                                                                                 (Theorem 3)
Bayesian est. of M2^φ (uni. prior)    Y               N            N             P (Theorem 4)            Y

Table 1: Kemeny for winners vs. Bayesian estimators of M1^φ and M2^φ to choose winners.
Given these, we can adopt Bayesian estimators as social choice mechanisms, which make decisions
to minimize the expected loss w.r.t. the posterior distribution on the parameters (called the Bayesian
risk). This provides a principled methodology for the design and analysis of new voting rules.
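A generic Bayesian estimator in this framework is just an argmin of posterior expected loss over the decision space; the sketch below spells this out for finite parameter and decision spaces, with a made-up model as a placeholder.

```python
# Generic Bayesian estimator: given a prior, a likelihood, and a loss function
# over (parameter, decision) pairs, return the decision minimizing the
# posterior expected loss (the Bayesian risk). Toy spaces for illustration.

def bayesian_estimator(data, params, decisions, prior, likelihood, loss):
    # unnormalized posterior over parameters, then normalize
    post = {t: prior[t] * likelihood(data, t) for t in params}
    z = sum(post.values())
    post = {t: p / z for t, p in post.items()}
    # decision minimizing expected loss under the posterior
    return min(decisions,
               key=lambda d: sum(post[t] * loss(t, d) for t in params))

# Hypothetical instance: two parameters, 0-1 loss, likelihood favoring the data
params = decisions = ["theta0", "theta1"]
prior = {"theta0": 0.5, "theta1": 0.5}
likelihood = lambda data, t: 0.8 if t == data else 0.2
loss = lambda t, d: 0.0 if t == d else 1.0
print(bayesian_estimator("theta1", params, decisions, prior, likelihood, loss))
# 'theta1'
```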
To show the viability of the framework, we focus on selecting multiple alternatives (the alternatives that can be thought of as being "tied" for the first place) under a natural extension of the 0-1 loss function for two models: let M1^φ denote the Mallows model with fixed dispersion [22], and let M2^φ denote the Condorcet model proposed by Condorcet in the 18th century [9, 34]. In both models the dispersion parameter, denoted φ, is taken as a fixed parameter. The difference is that in the Mallows model the parameter space is composed of all linear orders over alternatives, while in the Condorcet model the parameter space is composed of all possibly cyclic rankings over alternatives (irreflexive, antisymmetric, and total binary relations). M2^φ is a natural model that captures real-world scenarios where the ground truth may contain cycles, or agents' preferences are cyclic but they have to report a linear order due to the protocol. More importantly, as we will show later, a Bayesian estimator on M2^φ is superior from a computational viewpoint.
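For reference, under the Mallows model a vote V gets probability proportional to φ^Kendall(V, W) given the ground-truth order W; the sketch below evaluates this likelihood by brute-force normalization over all m! orders (fine only for tiny m; the alternatives and φ are arbitrary choices of ours).

```python
# Mallows model M1^phi: P(V | W) proportional to phi**Kendall(V, W), with the
# dispersion phi fixed; normalization is brute force over all linear orders.
from itertools import combinations, permutations

def kendall(v, w):
    # number of pairs of alternatives ranked oppositely by v and w
    pos_v = {c: i for i, c in enumerate(v)}
    pos_w = {c: i for i, c in enumerate(w)}
    return sum((pos_v[a] < pos_v[b]) != (pos_w[a] < pos_w[b])
               for a, b in combinations(v, 2))

def mallows_prob(v, w, phi, alternatives):
    z = sum(phi ** kendall(u, w) for u in permutations(alternatives))
    return phi ** kendall(v, w) / z

print(mallows_prob(("a", "b", "c"), ("a", "b", "c"), 0.5, ["a", "b", "c"]))
```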
Through this approach, we obtain two voting rules as Bayesian estimators and then evaluate them with respect to various normative properties, including anonymity, neutrality, monotonicity, the majority criterion, the Condorcet criterion, and consistency. Both rules satisfy anonymity, neutrality, and monotonicity, but fail the majority criterion, the Condorcet criterion,¹ and consistency. Admittedly, the two rules do not enjoy outstanding normative properties, but they are not bad either. We also investigate the computational complexity of the two rules. Strikingly, despite the similarity of the two models, the Bayesian estimator for M2^φ can be computed in polynomial time, while computing the Bayesian estimator for M1^φ is P^NP_||-hard, which means that it is at least NP-hard. Our results are summarized in Table 1.
We also compare the asymptotic outcomes of the two rules with the Kemeny rule for winners, which is a natural extension of the maximum likelihood estimator of M1^φ proposed by Fishburn [14]. It turns out that when n votes are generated under M1^φ, all three rules select the same winner asymptotically almost surely (a.a.s.) as n → ∞. When the votes are generated according to M2^φ, the rule for M1^φ still selects the same winner as Kemeny a.a.s.; however, for some parameters, the winner selected by the rule for M2^φ is different with non-negligible probability. These findings are confirmed by experiments on synthetic datasets.
Related work. Along the second perspective in social choice (to make an objectively correct decision), in addition to Condorcet's statistical approach to social choice [9, 34], most previous work
in economics, political science, and statistics focused on extending the theorem to heterogeneous,
correlated, or strategic agents for two alternatives, see [25, 1] among many others. Recent work in
computer science views agents? votes as i.i.d. samples from a statistical model, and computes the
MLE to estimate the parameters that maximize the likelihood [10, 11, 33, 32, 2, 29, 7]. A limitation
of these approaches is that they estimate the parameters of the model, but may not directly inform
the right decision to make in the multi-agent context. The main approach has been to return the
modal rank order implied by the estimated parameters, or the alternative with the highest, predicted
marginal probability of being ranked in the top position.
There have also been some proposals to go beyond MLE in social choice. In fact, Young [34] proposed to select a winning alternative that is "most likely to be the best (i.e., top-ranked in the true ranking)" and provided formulas to compute it for three alternatives. This idea has been formalized and extended by Procaccia et al. [29] to choose a given number of alternatives with highest marginal
probability under the Mallows model. More recently, independently of our work, Elkind and Shah [13] investigated a similar question for choosing multiple winners under the Condorcet model. We will see that these are special cases of our proposed framework in Example 2. Pivato [26] conducted a similar study to Conitzer and Sandholm [10], examining voting rules that can be interpreted as expected-utility maximizers.
¹ The new voting rule for M1^φ fails them for all φ < 1/√2.
We are not aware of previous work that frames the problem of social choice from the viewpoint
of statistical decision theory, which is our main conceptual contribution. Technically, the approach
taken in this paper advocates a general paradigm of "design by statistics, evaluation by social choice
and computer science." We are not aware of a previous work following this paradigm to design
and evaluate new rules. Moreover, the normative properties for the two voting rules investigated in
this paper are novel, even though these rules are not really novel. Our result on the computational
complexity of the first rule strengthens the NP-hardness result by Procaccia et al. [29], and the
complexity for the second rule (Theorem 5) was independently discovered by Elkind and Shah [13].
The statistical decision-theoretic framework is quite general, allowing considerations such as estimators that minimize the maximum expected loss, or the maximum expected regret [3]. In a different
context, focused on uncertainty about the availability of alternatives, Lu and Boutilier [20] adopt a
decision-theoretic view of the design of an optimal voting rule. Caragiannis et al. [8] studied the
robustness of social choice mechanisms w.r.t. model uncertainty, and characterized a unique social
choice mechanism that is consistent w.r.t. a large class of ranking models.
A number of recent papers in computational social choice take utilitarian and decision-theoretical
approaches towards social choice [28, 6, 4, 5]. Most of them evaluate the joint decision w.r.t. agents'
subjective preferences, for example the sum of agents' subjective utilities (i.e., the social welfare).
We don't view this as fitting into the classical approach to statistical decision theory as formulated
by Wald [30]. In our framework, the joint decision is evaluated objectively w.r.t. the ground truth in
the statistical model. Several papers in machine learning developed algorithms to compute MLE or
Bayesian estimators for popular ranking models [18, 19, 21], but without considering the normative
properties of the estimators.
2 Preliminaries
In social choice, we have a set of m alternatives C = {c_1, ..., c_m} and a set of n agents. Let
L(C) denote the set of all linear orders over C. For any alternative c, let L_c(C) denote the set
of linear orders over C where c is ranked at the top. Agent j uses a linear order V_j ∈ L(C) to
represent her preferences, called her vote. The collection of agents' votes is called a profile, denoted
by P = {V_1, ..., V_n}. An (irresolute) voting rule r : L(C)^n → (2^C \ ∅) selects a set of winners that
are "tied" for the first place for every profile of n votes.
For any pair of linear orders V, W, let Kendall(V, W) denote the Kendall-tau distance between
V and W, that is, the number of different pairwise comparisons in V and W. The Kemeny rule
(a.k.a. the Kemeny-Young method) [17, 35] selects all linear orders with the minimum Kendall-tau distance from the preference profile P, that is, Kemeny(P) = arg min_W Kendall(P, W), where
Kendall(P, W) = Σ_{V∈P} Kendall(V, W). The most well-known variant of Kemeny that selects
winning alternatives, denoted by Kemeny_C, is due to Fishburn [14], who defined it as a voting rule
that selects all alternatives that are ranked in the top position of some winning linear order under
the Kemeny rule. That is, Kemeny_C(P) = {top(V) : V ∈ Kemeny(P)}, where top(V) is the
top-ranked alternative in V.
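For concreteness, here is a brute-force Python sketch of these definitions (our code and toy profile, not the authors'; it enumerates all m! linear orders, so it is only practical for small m):

```python
from itertools import permutations

def kendall(v, w):
    # Number of pairwise comparisons on which linear orders v and w disagree.
    pos_w = {c: i for i, c in enumerate(w)}
    return sum(pos_w[v[i]] > pos_w[v[j]]
               for i in range(len(v)) for j in range(i + 1, len(v)))

def kemeny(profile):
    # All linear orders minimizing the total Kendall-tau distance to the profile.
    alts = sorted(profile[0])
    orders = list(permutations(alts))
    score = {w: sum(kendall(v, w) for v in profile) for w in orders}
    best = min(score.values())
    return [w for w in orders if score[w] == best]

def kemeny_winners(profile):
    # Fishburn's Kemeny_C: top-ranked alternatives of the winning orders.
    return {w[0] for w in kemeny(profile)}

profile = [('a', 'b', 'c'), ('a', 'b', 'c'), ('b', 'c', 'a')]
print(kemeny_winners(profile))  # {'a'} in this toy example
```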
Voting rules are often evaluated by the following normative properties. An irresolute rule r satisfies:
• anonymity, if r is insensitive to permutations over agents;
• neutrality, if r is insensitive to permutations over alternatives;
• monotonicity, if for any P, c ∈ r(P), and any P′ that is obtained from P by only raising the positions of c in one or multiple votes, then c ∈ r(P′);
• the Condorcet criterion, if for any profile P where a Condorcet winner exists, it must be the unique winner. A Condorcet winner is the alternative that beats every other alternative in pairwise elections;
• the majority criterion, if for any profile P where an alternative c is ranked in the top position in more than half of the votes, then r(P) = {c}. If r satisfies the Condorcet criterion then it also satisfies the majority criterion;
• consistency, if for any pair of profiles P_1, P_2 with r(P_1) ∩ r(P_2) ≠ ∅, r(P_1 ∪ P_2) = r(P_1) ∩ r(P_2).
For any profile P, its weighted majority graph (WMG), denoted by WMG(P), is a weighted directed
graph whose vertices are C, and there is an edge between any pair of alternatives (a, b) with weight
w_P(a, b) = #{V ∈ P : a ≻_V b} − #{V ∈ P : b ≻_V a}.
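The WMG is the only statistic several of the rules below need. A short Python helper (ours; it is repeated inside later sketches to keep each one self-contained):

```python
def wmg(profile):
    # Weighted majority graph: w[(a, b)] = #{V : a before b} - #{V : b before a}.
    alts = sorted(profile[0])
    return {(a, b): sum(+1 if v.index(a) < v.index(b) else -1 for v in profile)
            for a in alts for b in alts if a != b}

profile = [('a', 'b', 'c'), ('a', 'b', 'c'), ('b', 'c', 'a')]
print(wmg(profile)[('a', 'b')])  # 1, since 'a' beats 'b' in 2 of the 3 votes
```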
A parametric model M = (Θ, S, Pr) is composed of three parts: a parameter space Θ, a sample
space S composed of all datasets, and a set of probability distributions over S indexed by elements
of Θ: for each θ ∈ Θ, the distribution indexed by θ is denoted by Pr(·|θ).²
Given a parametric model M, a maximum likelihood estimator (MLE) is a function f_MLE : S → Θ
such that for any data P ∈ S, f_MLE(P) is a parameter that maximizes the likelihood of the data.
That is, f_MLE(P) ∈ arg max_{θ∈Θ} Pr(P|θ).
In this paper we focus on parametric ranking models. Given C, a parametric ranking model M_C =
(Θ, Pr) is composed of a parameter space Θ and a distribution Pr(·|θ) over L(C) for each θ ∈
Θ, such that for any number of voters n, the sample space is S_n = L(C)^n, where each vote is
generated i.i.d. from Pr(·|θ). Hence, for any profile P ∈ S_n and any θ ∈ Θ, we have Pr(P|θ) =
Π_{V∈P} Pr(V|θ). We omit the sample space because it is determined by C and n.
Definition 1 In the Mallows model [22], a parameter is composed of a linear order W ∈ L(C)
and a dispersion parameter φ with 0 < φ < 1. For any profile P and θ = (W, φ),
  Pr(P|θ) = Π_{V∈P} (1/Z) φ^Kendall(V,W),
where Z is the normalization factor with Z = Σ_{V∈L(C)} φ^Kendall(V,W).
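Definition 1 translates directly into code. A brute-force Python sketch (ours; kendall repeats the earlier helper so the block stands alone, and the m!-term normalization limits it to small m):

```python
from itertools import permutations

def kendall(v, w):
    pos = {c: i for i, c in enumerate(w)}
    return sum(pos[v[i]] > pos[v[j]]
               for i in range(len(v)) for j in range(i + 1, len(v)))

def mallows_likelihood(profile, w, phi):
    # Pr(P | W, phi) under Definition 1, normalizing over all of L(C).
    alts = sorted(w)
    z = sum(phi ** kendall(v, w) for v in permutations(alts))
    p = 1.0
    for v in profile:
        p *= phi ** kendall(v, w) / z
    return p

profile = [('a', 'b', 'c'), ('a', 'b', 'c'), ('b', 'c', 'a')]
print(mallows_likelihood(profile, ('a', 'b', 'c'), phi=0.5))
```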
Statistical decision theory [30, 3] studies scenarios where the decision maker must make a decision
d ∈ D based on the data P generated from a parametric model, generally M = (Θ, S, Pr). The
quality of the decision is evaluated by a loss function L : Θ × D → R, which takes the true parameter
and the decision as inputs.
In this paper, we focus on the Bayesian principle of statistical decision theory to design social
choice mechanisms as choice functions that minimize the Bayesian risk under a prior distribution
over Θ. More precisely, the Bayesian risk R_B(P, d) is the expected loss of the decision d when
the parameter is generated according to the posterior distribution given data P. That is, R_B(P, d) =
E_{θ|P}[L(θ, d)]. Given a parametric model M, a loss function L, and a prior distribution over Θ, a
(deterministic) Bayesian estimator f_B is a decision rule that makes a deterministic decision in D
to minimize the Bayesian risk, that is, for any P ∈ S, f_B(P) ∈ arg min_d R_B(P, d). We focus on
deterministic estimators in this work and leave randomized estimators for future research.
Example 1 When Θ is discrete, an MLE of a parametric model M is a Bayesian estimator of the
statistical decision problem (M, D = Θ, L_0-1) under the uniform prior distribution, where L_0-1 is
the 0-1 loss function such that L_0-1(θ, d) = 0 if θ = d, and otherwise L_0-1(θ, d) = 1.
In this sense, all previous MLE approaches in social choice can be viewed as the Bayesian estimators
of a statistical decision-theoretic framework for social choice with D = Θ, a 0-1 loss function, and
the uniform prior.
3 Our Framework
Our framework is quite general and flexible because we can choose any parametric ranking model,
any decision space, any loss function, and any prior, and then use the resulting Bayesian estimators as
social choice mechanisms. Common choices for both Θ and D are L(C), C, and (2^C \ ∅).
Definition 2 A statistical decision-theoretic framework for social choice is a tuple F =
(M_C, D, L), where C is the set of alternatives, M_C = (Θ, Pr) is a parametric ranking model,
D is the decision space, and L : Θ × D → R is a loss function.
Let B(C) denote the set of all irreflexive, antisymmetric, and total binary relations over C. For
any c ∈ C, let B_c(C) denote the relations in B(C) where c ≻ a for all a ∈ C \ {c}. It follows
that L(C) ⊆ B(C), and moreover, the Kendall-tau distance can be defined to count the number of
pairwise disagreements between elements of B(C).
In the rest of the paper, we focus on the following two parametric ranking models, where the dispersion is a fixed parameter.
² This notation should not be taken to mean a conditional distribution over S unless we are taking a Bayesian
point of view.
Definition 3 (Mallows model with fixed dispersion, and the Condorcet model) Let M^1_φ denote
the Mallows model with fixed dispersion, where the parameter space is Θ = L(C) and, given any
W ∈ Θ, Pr(·|W) is Pr(·|(W, φ)) in the Mallows model, where φ is fixed.
In the Condorcet model, M^2_φ, the parameter space is Θ = B(C). For any W ∈ Θ and any profile
P, we have Pr(P|W) = Π_{V∈P} (1/Z) φ^Kendall(V,W), where Z is the normalization factor such that
Z = Σ_{V∈B(C)} φ^Kendall(V,W), and the parameter φ is fixed.³
M^1_φ and M^2_φ degenerate to the Condorcet model for two alternatives [9]. The Kemeny rule that
selects a linear order is an MLE of M^1_φ for any φ.
We now formally define two statistical decision-theoretic frameworks associated with M^1_φ and M^2_φ,
which are the focus of the rest of our paper.
Definition 4 For Θ = L(C) or B(C), any θ ∈ Θ, and any c ∈ C, we define a loss function L_top(θ, c)
such that L_top(θ, c) = 0 if c ≻ b in θ for all b ∈ C; otherwise L_top(θ, c) = 1.
Let F^1_φ = (M^1_φ, 2^C \ ∅, L_top) and F^2_φ = (M^2_φ, 2^C \ ∅, L_top), where for any C′ ⊆ C, L_top(θ, C′) =
Σ_{c∈C′} L_top(θ, c)/|C′|. Let f^1_B (respectively, f^2_B) denote the Bayesian estimator of F^1_φ (respectively,
F^2_φ) under the uniform prior.
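Under the uniform prior, f^1_B amounts to picking the alternative with the largest posterior probability of being top-ranked in the true order. A brute-force Python sketch of this (our code; the per-order normalization constants cancel in the posterior, so the likelihood is taken proportional to φ raised to the total Kendall-tau distance, and the enumeration is exponential in m):

```python
from itertools import permutations

def kendall(v, w):
    pos = {c: i for i, c in enumerate(w)}
    return sum(pos[v[i]] > pos[v[j]]
               for i in range(len(v)) for j in range(i + 1, len(v)))

def f1_B(profile, phi):
    # Bayesian estimator of F^1_phi: minimize posterior Pr(c is not top-ranked).
    alts = sorted(profile[0])
    orders = list(permutations(alts))
    like = {w: phi ** sum(kendall(v, w) for v in profile) for w in orders}
    total = sum(like.values())  # uniform prior: posterior proportional to likelihood
    risk = {c: 1.0 - sum(like[w] for w in orders if w[0] == c) / total
            for c in alts}
    best = min(risk.values())
    return {c for c in alts if abs(risk[c] - best) < 1e-12}

profile = [('a', 'b', 'c'), ('a', 'b', 'c'), ('b', 'c', 'a')]
print(f1_B(profile, phi=0.5))
```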
We note that L_top in the above definition takes a parameter and a decision in 2^C \ ∅ as inputs, which
makes it different from the 0-1 loss function L_0-1 that takes a pair of parameters as inputs, as the
one in Example 1. Hence, f^1_B and f^2_B are not the MLEs of their respective models, as was the
case in Example 1. We focus on voting rules obtained by our framework with L_top. Certainly our
framework is not limited to this loss function.
Example 2 The Bayesian estimators f^1_B and f^2_B coincide with Young [34]'s idea of selecting the
alternative that is "most likely to be the best (i.e., top-ranked in the true ranking)", under F^1_φ and
F^2_φ respectively. This gives a theoretical justification of Young's idea and other follow-ups under
our framework. Specifically, f^1_B is similar to the rule studied by Procaccia et al. [29], and f^2_B was
independently studied by Elkind and Shah [13].
4 Normative Properties of Bayesian Estimators
All omitted proofs can be found in the full version on arXiv.
Theorem 1 For any φ, f^1_B satisfies anonymity, neutrality, and monotonicity. f^1_B does not satisfy
the majority or the Condorcet criterion for any φ < 1/√2,⁴ and it does not satisfy consistency.
Proof sketch: Anonymity and neutrality are obviously satisfied.
Monotonicity. Monotonicity follows from the following lemma.
Lemma 1 For any c ∈ C, let P′ denote a profile obtained from P by raising the position of c in
one vote. For any W ∈ L_c(C), Pr(P′|W) = Pr(P|W)/φ; for any b ∈ C and any V ∈ L_b(C),
Pr(P′|V) ≤ Pr(P|V)/φ.
Majority and the Condorcet criterion. Let C = {c, b, c_3, ..., c_m}. We construct a profile P*
where c is ranked in the top position in more than half of the votes, but c ∉ f^1_B(P*).
For any k, let P* denote a profile composed of k copies of [c ≻ b ≻ c_3 ≻ ··· ≻ c_m], 1 copy of
[c ≻ b ≻ c_m ≻ ··· ≻ c_3], and k − 1 copies of [b ≻ c_m ≻ ··· ≻ c_3 ≻ c]. It is not hard to verify that
the WMG of P* is as in Figure 1(a).
Then, we prove that for any φ < 1/√2, we can find m and k so that
  Σ_{V∈L_c(C)} Pr(P*|V) / Σ_{W∈L_b(C)} Pr(P*|W) = φ² (1 + φ² + ··· + φ^{2(m−2)}) / (1 + φ^{2k} + ··· + φ^{2k(m−2)}) < 1.
It follows that c is the Condorcet winner in P* but it does not
minimize the Bayesian risk under M^1_φ, which means that it is not the winner under f^1_B.
³ In the Condorcet model the sample space is B(C)^n [31]. We study a variant with sample space L(C)^n.
⁴ Characterizing the majority and Condorcet criteria of f^1_B for φ ≥ 1/√2 is an open question.
[Figure 1: WMGs of the profiles used in the proofs. (a) The WMG of P* (for majority and Condorcet, Thm. 1); (b) the WMGs of P_1 (left) and P_2 (right) (for consistency, Thm. 1); (c) the WMG of P′ (for computational complexity, Thm. 3).]
Consistency. We construct an example to show that f^1_B does not satisfy consistency. In our construction m and n are even, and C = {c, b, c_3, c_4}. Let P_1 and P_2 denote profiles whose WMGs are
as shown in Figure 1(b), respectively. We have the following lemma.
Lemma 2 Let P ∈ {P_1, P_2}. Then
  Σ_{V∈L_c(C)} Pr(P|V) / Σ_{W∈L_b(C)} Pr(P|W) = 3(1 + φ^{4k}) / (2(1 + φ^{2k} + φ^{4k})).
For any 0 < φ < 1, 3(1 + φ^{4k}) / (2(1 + φ^{2k} + φ^{4k})) > 1 for all k. It is not hard to verify that f^1_B(P_1) = f^1_B(P_2) = {c}
and f^1_B(P_1 ∪ P_2) = {c, b}, which means that f^1_B is not consistent.
Similarly, we can prove the following theorem for f^2_B.
Theorem 2 For any φ, f^2_B satisfies anonymity, neutrality, and monotonicity. It does not satisfy
the majority criterion, the Condorcet criterion, or consistency.
By Theorems 1 and 2, f^1_B and f^2_B do not satisfy as many desired normative properties as the Kemeny
rule for winners. On the other hand, they minimize the Bayesian risk under F^1_φ and F^2_φ, respectively,
which Kemeny does under neither. In addition, neither f^1_B nor f^2_B satisfies consistency, which means
that they are not positional scoring rules.
5 Computational Complexity
We consider the following two types of decision problems.
Definition 5 In the BETTER BAYESIAN DECISION problem for a statistical decision-theoretic
framework (M_C, D, L) under a prior distribution, we are given d_1, d_2 ∈ D and a profile P. We are
asked whether R_B(P, d_1) ≤ R_B(P, d_2).
We are also interested in checking whether a given alternative is the optimal decision.
Definition 6 In the OPTIMAL BAYESIAN DECISION problem for a statistical decision-theoretic
framework (M_C, D, L) under a prior distribution, we are given d ∈ D and a profile P. We are
asked whether d minimizes the Bayesian risk R_B(P, ·).
P^NP_|| is the class of decision problems that can be computed by a P oracle machine with a polynomial
number of parallel calls to an NP oracle. A decision problem A is P^NP_||-hard if for any P^NP_|| problem
B, there exists a polynomial-time many-one reduction from B to A. It is known that P^NP_||-hard
problems are NP-hard.
Theorem 3 For any φ, BETTER BAYESIAN DECISION and OPTIMAL BAYESIAN DECISION for F^1_φ
under the uniform prior are P^NP_||-hard.
Proof: The hardness of both problems is proved by a unified reduction from the KEMENY WINNER
problem, which is P^NP_||-complete [16]. In a KEMENY WINNER instance, we are given a profile P and
an alternative c, and we are asked if c is ranked at the top of at least one V ∈ L(C) that minimizes
Kendall(P, V).
For any alternative c, the Kemeny score of c under M^1_φ is the smallest distance between the profile
P and any linear order where c is ranked at the top. We next prove that when φ < 1/m!, the Bayesian
risk of c is largely determined by the Kemeny score of c.
Lemma 3 For any φ < 1/m!, any c, b ∈ C, and any profile P, if the Kemeny score of c is strictly
smaller than the Kemeny score of b in P, then R_B(P, c) < R_B(P, b) for M^1_φ.
Let t be any natural number such that φ^t < 1/m!. For any KEMENY WINNER instance (P, c) over
alternatives C′, we add two more alternatives {a, b} and define a profile P′ whose WMG is as
shown in Figure 1(c) using McGarvey's trick [24]. The WMG of P′ contains WMG(P) as a
subgraph, where the weights are 6 times the weights in WMG(P).
Then, we let P* = tP′, which is t copies of P′. It follows that for any V ∈ L(C), Pr(P*|V, φ) =
Pr(P′|V, φ^t). By Lemma 3, if an alternative e has the strictly lowest Kemeny score for profile P′,
then it is the unique alternative that minimizes the Bayesian risk for P′ and dispersion parameter φ^t,
which means that e minimizes the Bayesian risk for P* and dispersion parameter φ.
Let O denote the set of linear orders over C′ that minimize the Kendall-tau distance from P, and let
k denote this minimum distance. Choose an arbitrary V′ ∈ O. Let V = [b ≻ a ≻ V′]. It follows
that Kendall(P′, V) = 4 + 6k. If there exists W′ ∈ O where c is ranked in the top position, then
we let W = [a ≻ c ≻ b ≻ (W′ − {c})]. We have Kendall(P′, W) = 2 + 6k. If c is not a Kemeny
winner in P, then for any W where a is ranked in the top position, Kendall(P′, W) ≥ 6 + 6k.
Therefore, a minimizes the Bayesian risk if and only if c is a Kemeny winner in P, and if a does not
minimize the Bayesian risk, then b does. Hence BETTER BAYESIAN DECISION (checking if a is better than b)
and OPTIMAL BAYESIAN DECISION (checking if a is the optimal alternative) are P^NP_||-hard.
We note that OPTIMAL BAYESIAN DECISION in Theorem 3 is equivalent to checking whether a
given alternative c is in f^1_B(P). We do not know whether these problems are P^NP_||-complete. In
sharp contrast to f^1_B, the next theorem states that f^2_B under the uniform prior is in P.
Theorem 4 For any rational number⁵ φ, BETTER BAYESIAN DECISION and OPTIMAL BAYESIAN
DECISION for F^2_φ under the uniform prior are in P.
The theorem is a corollary of the following stronger theorem, which provides a closed-form formula
for the Bayesian loss for F^2_φ.⁶ We recall that for any profile P and any pair of alternatives c, b,
w_P(c, b) is the weight on c → b in the weighted majority graph of P.
Theorem 5 For F^2_φ under the uniform prior, for any c ∈ C and any profile P,
  R_B(P, c) = 1 − Π_{b≠c} 1 / (1 + φ^{w_P(c,b)}).
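Theorem 5 makes f^2_B computable directly from the WMG. A minimal Python sketch (our code and toy profile; the wmg helper repeats the earlier definition so the block is self-contained):

```python
def wmg(profile):
    alts = sorted(profile[0])
    return {(a, b): sum(+1 if v.index(a) < v.index(b) else -1 for v in profile)
            for a in alts for b in alts if a != b}

def f2_B(profile, phi):
    # Closed form of Theorem 5: R_B(P, c) = 1 - prod_{b != c} 1 / (1 + phi^w(c, b)).
    alts = sorted(profile[0])
    w = wmg(profile)
    risk = {}
    for c in alts:
        prod = 1.0
        for b in alts:
            if b != c:
                prod *= 1.0 / (1.0 + phi ** w[(c, b)])
        risk[c] = 1.0 - prod
    best = min(risk.values())
    return {c for c in alts if abs(risk[c] - best) < 1e-12}

profile = [('a', 'b', 'c'), ('a', 'b', 'c'), ('b', 'c', 'a')]
print(f2_B(profile, phi=0.5))  # polynomial time, unlike the brute-force f1_B
```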
The comparisons of Kemeny, f^1_B, and f^2_B are summarized in Table 1. According to the criteria we
consider, none of the three outperforms the others. Kemeny does well on normative properties, but
does not minimize the Bayesian risk under either F^1_φ or F^2_φ, and is hard to compute. f^1_B minimizes the
Bayesian risk under F^1_φ, but is hard to compute. We would like to highlight f^2_B, which minimizes
the Bayesian risk under F^2_φ and, more importantly, can be computed in polynomial time despite the
similarity between F^1_φ and F^2_φ.
6 Asymptotic Comparisons
In this section, we ask the following question: as the number of voters n → ∞, what is the
probability that Kemeny, f^1_B, and f^2_B choose different winners? We show that when the data is
generated from M^1_φ, all three methods are equal asymptotically almost surely (a.a.s.), that is, they
are equal with probability 1 as n → ∞.
Theorem 6 Let P_n denote a profile of n votes generated i.i.d. from M^1_φ given W ∈ L_c(C). Then,
Pr(Kemeny(P_n) = f^1_B(P_n) = f^2_B(P_n) = {c}) → 1 as n → ∞.
However, when the data are generated from M^2_φ, we have a different story.
Theorem 7 For any W ∈ B(C) and any φ, f^1_B(P_n) = Kemeny(P_n) a.a.s. as n → ∞ when the votes in
P_n are generated i.i.d. from M^2_φ given W.
For any m ≥ 5, there exists W ∈ B(C) such that for any φ, there exists ε > 0 such that, with
probability at least ε, f^1_B(P_n) ≠ f^2_B(P_n) and Kemeny(P_n) ≠ f^2_B(P_n) as n → ∞ when the votes in P_n
are generated i.i.d. from M^2_φ given W.
⁵ We require φ to be rational to avoid representational issues.
⁶ The formula resembles Young's calculation for three alternatives [34], where it was not clear whether the
calculation was done for F^2_φ. Recently it was clarified by Xia [31] that this is indeed the case.
[Figure 2: The ground truth W and asymptotic comparisons between Kemeny and g in Definition 7. (a) W ∈ B(C) for m = 5; (b) the probability that g is different from Kemeny under M^2_φ.]
Proof sketch: The first part of Theorem 7 is proved by the Central Limit Theorem. For the second
part, the proof for m = 5 uses an acyclic W ∈ B(C) illustrated in Figure 2(a).
Theorem 6 suggests that, when n is large and the votes are generated from M^1_φ, it does not matter
much which of f^1_B, f^2_B, and Kemeny we use. A similar observation has been made for other voting
rules by Caragiannis et al. [7]. On the other hand, Theorem 7 states that when the votes are generated
from M^2_φ, interestingly, for some ground truth parameters, f^2_B is different from the other two with
non-negligible probability, and as we will see in the experiments, this probability can be quite large.
6.1 Experiments
We focus on the comparison between the rule f^2_B and Kemeny using synthetic data generated from M^2_φ
given the binary relation W illustrated in Figure 2(a). By Theorem 5, the computation involves
computing φ^{Θ(n)}, which is exponentially small for large n since φ < 1. Hence, we need a special
data structure to handle the computation of f^2_B, because a straightforward implementation easily
loses precision. In our experiments, we use the following approximation for f^2_B.
Definition 7 For any c ∈ C and profile P, let s(c, P) = Σ_{b : w_P(b,c) > 0} w_P(b, c). Let g be the voting
rule such that for any profile P, g(P) = arg min_c s(c, P).
In words, g selects the alternative c with the minimum total weight on the incoming edges in the
WMG. By Theorem 5, the Bayesian risk is largely determined by φ^{s(c,P)}. Therefore, g is a good
approximation of f^2_B for reasonably large n. Formally, this is stated in the following theorem.
Theorem 8 For any W ∈ B(C) and any φ, f^2_B(P_n) = g(P_n) a.a.s. as n → ∞ when the votes in P_n are
generated i.i.d. from M^2_φ given W.
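Definition 7 is likewise a few lines of code. A sketch (ours), which never touches φ and so sidesteps the underflow problem mentioned above:

```python
def g_rule(profile):
    # Definition 7: pick the alternative with minimum total positive incoming
    # WMG weight, a numerically safe proxy for f2_B when n is large.
    alts = sorted(profile[0])
    w = {(a, b): sum(+1 if v.index(a) < v.index(b) else -1 for v in profile)
         for a in alts for b in alts if a != b}
    s = {c: sum(w[(b, c)] for b in alts if b != c and w[(b, c)] > 0)
         for c in alts}
    best = min(s.values())
    return {c for c in alts if s[c] == best}

profile = [('a', 'b', 'c'), ('a', 'b', 'c'), ('b', 'c', 'a')]
print(g_rule(profile))  # {'a'}: no positive incoming edges into 'a'
```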
In our experiments, data are generated by M^2_φ given the W in Figure 2(a), for m = 5, n ∈
{100, 200, ..., 2000}, and φ ∈ {0.1, 0.5, 0.9}. For each setting we generate 3000 profiles and
calculate the fraction of trials in which g and Kemeny differ. The results are shown in Figure 2(b). We observe that for φ = 0.1 and 0.5, the probability that g(P_n) ≠ Kemeny(P_n) is about
30% for most n in our experiments; when φ = 0.9, the probability is about 10%. In light of Theorem 8, these results confirm Theorem 7. We have also conducted similar experiments for M^1_φ, and
found that the g winner is the same as the Kemeny winner in all 10000 randomly generated profiles
with m = 5, n = 100. This provides a check of Theorem 6.
7 Acknowledgments
We thank Shivani Agarwal, Craig Boutilier, Yiling Chen, Vincent Conitzer, Edith Elkind, Ariel
Procaccia, and anonymous reviewers of AAAI-14 and NIPS-14 for helpful suggestions and discussions. Azari Soufiani acknowledges Siebel foundation for the scholarship in his last year of PhD
studies. Parkes was supported in part by NSF grant CCF #1301976 and the SEAS TomKat fund.
Xia acknowledges an RPI startup fund for support.
References
[1] David Austen-Smith and Jeffrey S. Banks. Information Aggregation, Rationality, and the Condorcet Jury
Theorem. The American Political Science Review, 90(1):34?45, 1996.
[2] Hossein Azari Soufiani, David C. Parkes, and Lirong Xia. Random utility theory for social choice. In
Proc. NIPS, pages 126?134, 2012.
[3] James O. Berger. Statistical Decision Theory and Bayesian Analysis. Springer, 2nd edition, 1985.
[4] Craig Boutilier and Tyler Lu. Probabilistic and Utility-theoretic Models in Social Choice: Challenges for
Learning, Elicitation, and Manipulation. In IJCAI-11 Workshop on Social Choice and AI, 2011.
[5] Craig Boutilier, Ioannis Caragiannis, Simi Haber, Tyler Lu, Ariel D. Procaccia, and Or Sheffet. Optimal
social choice functions: A utilitarian view. In Proc. EC, pages 197?214, 2012.
[6] Ioannis Caragiannis and Ariel D. Procaccia. Voting Almost Maximizes Social Welfare Despite Limited
Communication. Artificial Intelligence, 175(9?10):1655?1671, 2011.
[7] Ioannis Caragiannis, Ariel Procaccia, and Nisarg Shah. When do noisy votes reveal the truth? In Proc. EC,
2013.
[8] Ioannis Caragiannis, Ariel D. Procaccia, and Nisarg Shah. Modal Ranking: A Uniquely Robust Voting
Rule. In Proc. AAAI, 2014.
[9] Marquis de Condorcet. Essai sur l'application de l'analyse à la probabilité des décisions rendues à la
pluralité des voix. Paris: L'Imprimerie Royale, 1785.
[10] Vincent Conitzer and Tuomas Sandholm. Common voting rules as maximum likelihood estimators. In
Proc. UAI, pages 145?152, Edinburgh, UK, 2005.
[11] Vincent Conitzer, Matthew Rognlie, and Lirong Xia. Preference functions that score rankings and maximum likelihood estimation. In Proc. IJCAI, pages 109?115, 2009.
[12] Cynthia Dwork, Ravi Kumar, Moni Naor, and D. Sivakumar. Rank aggregation methods for the web. In
Proc. WWW, pages 613?622, 2001.
[13] Edith Elkind and Nisarg Shah. How to Pick the Best Alternative Given Noisy Cyclic Preferences? In
Proc. UAI, 2014.
[14] Peter C. Fishburn. Condorcet social choice functions. SIAM Journal on Applied Mathematics, 33(3):
469?489, 1977.
[15] Sumit Ghosh, Manisha Mundhe, Karina Hernandez, and Sandip Sen. Voting for movies: the anatomy of
a recommender system. In Proc. AAMAS, pages 434?435, 1999.
[16] Edith Hemaspaandra, Holger Spakowski, and Jörg Vogel. The complexity of Kemeny elections. Theoretical Computer Science, 349(3):382–391, December 2005.
[17] John Kemeny. Mathematics without numbers. Daedalus, 88:575?591, 1959.
[18] Jen-Wei Kuo, Pu-Jen Cheng, and Hsin-Min Wang. Learning to Rank from Bayesian Decision Inference.
In Proc. CIKM, pages 827?836, 2009.
[19] Bo Long, Olivier Chapelle, Ya Zhang, Yi Chang, Zhaohui Zheng, and Belle Tseng. Active Learning for
Ranking Through Expected Loss Optimization. In Proc. SIGIR, pages 267?274, 2010.
[20] Tyler Lu and Craig Boutilier. The Unavailable Candidate Model: A Decision-theoretic View of Social
Choice. In Proc. EC, pages 263?274, 2010.
[21] Tyler Lu and Craig Boutilier. Learning mallows models with pairwise preferences. In Proc. ICML, pages
145?152, 2011.
[22] Colin L. Mallows. Non-null ranking model. Biometrika, 44(1/2):114?130, 1957.
[23] Andrew Mao, Ariel D. Procaccia, and Yiling Chen. Better human computation through principled voting.
In Proc. AAAI, 2013.
[24] David C. McGarvey. A theorem on the construction of voting paradoxes. Econometrica, 21(4):608?610,
1953.
[25] Shmuel Nitzan and Jacob Paroush. The significance of independent decisions in uncertain dichotomous
choice situations. Theory and Decision, 17(1):47?60, 1984.
[26] Marcus Pivato. Voting rules as statistical estimators. Social Choice and Welfare, 40(2):581?630, 2013.
[27] Daniele Porello and Ulle Endriss. Ontology Merging as Social Choice: Judgment Aggregation under the
Open World Assumption. Journal of Logic and Computation, 2013.
[28] Ariel D. Procaccia and Jeffrey S. Rosenschein. The Distortion of Cardinal Preferences in Voting. In
Proc. CIA, volume 4149 of LNAI, pages 317?331. 2006.
[29] Ariel D. Procaccia, Sashank J. Reddi, and Nisarg Shah. A maximum likelihood approach for selecting
sets of alternatives. In Proc. UAI, 2012.
[30] Abraham Wald. Statistical Decision Function. New York: Wiley, 1950.
[31] Lirong Xia. Deciphering Young's interpretation of Condorcet's model. ArXiv, 2014.
[32] Lirong Xia and Vincent Conitzer. A maximum likelihood approach towards aggregating partial orders. In
Proc. IJCAI, pages 446?451, Barcelona, Catalonia, Spain, 2011.
[33] Lirong Xia, Vincent Conitzer, and Jérôme Lang. Aggregating preferences in multi-issue domains by using
maximum likelihood estimators. In Proc. AAMAS, pages 399?406, 2010.
[34] H. Peyton Young. Condorcet's theory of voting. American Political Science Review, 82:1231–1244, 1988.
[35] H. Peyton Young and Arthur Levenglick. A consistent extension of Condorcet's election principle. SIAM
Journal of Applied Mathematics, 35(2):285–300, 1978.
5,012 | 5,538 | Causal Strategic Inference in Networked
Microfinance Economies
Luis E. Ortiz
Department of Computer Science
Stony Brook University
Stony Brook, NY 11794
[email protected]
Mohammad T. Irfan
Department of Computer Science
Bowdoin College
Brunswick, ME 04011
[email protected]
Abstract
Performing interventions is a major challenge in economic policy-making. We
propose causal strategic inference as a framework for conducting interventions
and apply it to large, networked microfinance economies. The basic solution
platform consists of modeling a microfinance market as a networked economy,
learning the parameters of the model from the real-world microfinance data, and
designing algorithms for various causal questions. For a special case of our model,
we show that an equilibrium point always exists and that the equilibrium interest
rates are unique. For the general case, we give a constructive proof of the existence of an equilibrium point. Our empirical study is based on the microfinance
data from Bangladesh and Bolivia, which we use to first learn our models. We
show that causal strategic inference can assist policy-makers by evaluating the
outcomes of various types of interventions, such as removing a loss-making bank
from the market, imposing an interest rate cap, and subsidizing banks.
1 Introduction
Although the history of microfinance systems takes us back to as early as the 18th century, the
foundation of the modern microfinance movement was laid in the 1970s by Muhammad Yunus,
a then-young Economics professor in Bangladesh. It was a time when the newborn nation was
struggling to recover from a devastating war and an ensuing famine. A blessing in disguise may it be
called, it led Yunus to design a small-scale experimentation on micro-lending as a tool for poverty
alleviation. The feedback from that experimentation gave Yunus and his students the insight that
micro-lending mechanism, with its social and humanitarian goals, could successfully intervene in
the informal credit market that was predominated by opportunistic moneylenders. Although far from
experiencing a smooth ride, the microfinance movement has nevertheless been a great success story
ever since, especially considering the fact that it began with just a small, out-of-pocket investment
on 42 clients and boasts a staggering 100 million poor clients worldwide at present [27]. Yunus and
his organization Grameen Bank have recently been honored with the Nobel peace prize "for their
efforts to create economic and social development from below."
A puzzling element in the success of microfinance programs is that while commercial banks dealing
with well-off customers struggle to recover loans, microfinance institutions (MFI) operate without
taking any collateral and yet experience very low default rates! The central mechanism that MFIs use
to mitigate risks is known as the group lending with joint-liability contract. Roughly speaking, loans
are given to groups of clients, and if a person fails to repay her loan, then either her partners repay
it on her behalf or the whole group gets excluded from the program. Besides risk-mitigation, this
mechanism also helps lower MFI?s cost of monitoring clients? projects. Group lending with a jointliability contract also improves repayment rates and mitigates moral hazard [13]. Group lending
and many other interesting aspects of microfinance systems, such as efficiency and distribution of
intervening informal credit markets, failure of pro-poor commercial banks, gender issues, subsidies,
etc., have been beautifully delineated by de Aghion and Morduch in their book [9].
Here, we assume that assortative matching and joint-liability contracts would mitigate the risks of
adverse selection [13] and moral hazard. We further assume that due to these mechanisms, there
would be no default on loans. This assumption of complete repayment of loans may seem to be
very much idealistic. However, practical evidence suggests very high repayment rates. For example,
Grameen Bank?s loan recovery rate is 99.46% [21].
Next, we present causal strategic inference, followed by our model of microfinance markets and our
algorithms for computing equilibria and learning model parameters. We present an empirical study
at the end. We leave much of the details to the Appendix, located in the supplementary material.
2 Causality in Strategic Settings
Going back two decades, one of the most celebrated success stories in the study of causality, which
studies cause and effect questions using mathematical models of real-world phenomena, was the
development of causal probabilistic inference. It was led by Judea Pearl, who was later awarded the
ACM Turing Award in 2011 for his seminal contributions. In his highly acclaimed book on causality, Pearl organizes causal queries in probabilistic settings into three levels of difficulty:
prediction, interventions, and counterfactuals (in order of increasing difficulty) [22, p. 38]. For
example, an intervention query is about the effects of changing an existing system by what Judea
Pearl calls "surgery." We focus on this type of query here.
Causal Strategic Inference. We study causal inference in game-theoretic settings for intervention-type queries. Since game theory reliably encodes strategic interactions among a set of players,
we call this type of inference causal strategic inference. Note that interventions in game-theoretic
settings are not new (see Appendix B for a survey). Therefore, we use causal strategic inference
simply as a convenient name here. Our main contribution is a framework for performing causal
strategic inference in networked microfinance economy.
As mentioned above, interventions are carried out by surgeries. So, what could be a surgery in a
game-theoretic setting? Analogous to the probabilistic settings [22, p. 23], the types of surgeries we
consider here change the "structure" of the game. This can potentially mean changing the payoff
function of a player, removing a player from the game, adding a new player to the game, changing
the set of actions of a player, as well as any combination of these. We discuss other possibilities in
Appendix A. See also [14].
The proposed framework of causal strategic inference is composed of the following components:
mathematically modeling a complex system, learning the parameters of the model from real-world
data, and designing algorithms to predict the effects of interventions.
Review of Literature. There is a growing literature in econometrics on modeling strategic scenarios
and estimating the parameters of the model. Examples are Bjorn and Vuong's model of labor force
participation [5], Bresnahan and Reiss' entry models [6, 7], Berry's model of airline markets [4],
Seim's model of product differentiation [24], and Augereau et al.'s model of technology adoption [2].
A survey of the recent results is given by Bajari et al. [3]. All of the above models are based on
McFadden's random utility model [18], which often leads to an analytical solution. In contrast, our
model is based on classical models of two-sided economies, for which there is no known analytical
solution. Therefore, our solution approach is algorithmic, not analytic.
More importantly, although all of the above studies model a strategic scenario and estimate the
parameters of the respective model, none of them perform any intervention, which is one of our main
goals. We present more details on each of these as well as several additional studies in Appendix B.
Our model is closely related to the classical Fisher model [12]. An important distinction between
our model and Fisher's, including its graphical extension [16], is that our model allows buyers (i.e.,
villages) to invest the goods (i.e., loans) in productive projects, thereby generating revenue that can
be used to pay for the goods (i.e., repay the loans). In other words, the crucial modeling parameter of
"endowment" is no longer a constant in our case. For the same reason, the classical Arrow-Debreu
model [1], or the recently developed graphical extension to the Arrow-Debreu model [15], does not
capture our setting. Moreover, in our model, the buyers have a very different objective function.
3 Our Model of Microfinance Markets
We model a microfinance market as a two-sided market consisting of MFIs and villages. Each MFI
has branches in a subset of the villages, and each branch of an MFI deals with the borrowers in that
village only. Similarly, each village can only interact with the MFIs present there.
We use the following notation. There are n MFIs and m villages. V_i is the set of villages where MFI
i operates, and B_j is the set of MFIs that operate in village j. T_i is the finite total amount of loan
available to MFI i to be disbursed. g_j(l) := d_j + e_j l is the revenue generation function of village
j (parameterized by the loan amount l), where the initial endowment d_j > 0 (i.e., each village has
other sources of income [9, Ch. 1.3]) and the rate of revenue generation e_j ≥ 1 are constants. r_i
is the flat interest rate of MFI i, and x_{j,i} is the amount of loan borrowed by village j from MFI i.
Finally, the villages have a diversification parameter λ ≥ 0 that quantifies how much they want their
loan portfolios to be diversified.¹ The problem statement is given below.
Following are the inputs to the problem. First, for each MFI i, 1 ≤ i ≤ n, we are given the total
amount of money T_i that the MFI has and the set V_i of villages where the MFI has branches. Second,
for each village j, 1 ≤ j ≤ m, we are given the parameters d_j > 0 and e_j ≥ 1 of the village's
revenue generation function² and the set B_j of the MFIs that operate in that village.
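To fix ideas, here is a minimal Python container for a problem instance (all names and the toy numbers are ours, not from the paper):

```python
from dataclasses import dataclass

@dataclass
class MicrofinanceMarket:
    T: list            # T[i]: total loan supply of MFI i
    d: list            # d[j]: initial endowment of village j
    e: list            # e[j]: rate of revenue generation of village j (>= 1)
    edges: set         # {(j, i)}: MFI i has a branch in village j
    lam: float = 0.1   # diversification parameter lambda >= 0

    def V(self, i):    # V_i: villages served by MFI i
        return sorted(j for (j, k) in self.edges if k == i)

    def B(self, j):    # B_j: MFIs present in village j
        return sorted(i for (k, i) in self.edges if k == j)

# 2 MFIs, 3 villages; MFI 0 serves all three, MFI 1 serves village 2 only.
market = MicrofinanceMarket(T=[10.0, 5.0], d=[1.0, 2.0, 1.5],
                            e=[1.2, 1.1, 1.3],
                            edges={(0, 0), (1, 0), (2, 0), (2, 1)})
print(market.V(0), market.B(2))  # [0, 1, 2] [0, 1]
```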
MFI-side optimization problem. Each MFI i wants to set its interest rate r_i such that all of its loan
is disbursed. This is known as market clearance in economics. Here, the objective function is a
constant due to the MFIs' goal of market clearance.

  max_{r_i}  1
  subject to  r_i (T_i − Σ_{j∈V_i} x_{j,i}) = 0
              Σ_{j∈V_i} x_{j,i} ≤ T_i                        (P_M)
              r_i ≥ 0
Village-side optimization problem. Each village j wants to maximize its diversified loan portfolio,
subject to repaying it. We call the second term of the objective function of (P_V) the diversification
term, where λ is chosen using the data.³ We call the first constraint of (P_V) the budget constraint.

  max_{x_j = (x_{j,i})_{i∈B_j}}  Σ_{i∈B_j} x_{j,i} + λ Σ_{i∈B_j} x_{j,i} log(1/x_{j,i})
  subject to  Σ_{i∈B_j} x_{j,i} (1 + r_i − e_j) ≤ d_j        (P_V)
              x_j ≥ 0
For this two-sided market, we use an equilibrium point as the solution concept. It is defined by an
interest rate r*_i for each MFI i and a vector x*_j = (x*_{j,i})_{i∈B_j} of loan allocations for each village j such
that the following two conditions hold. First, given the allocations x*, each MFI i is optimizing the
program (P_M). Second, given the interest rates r*, each village j is optimizing the program (P_V).
Justification of Modeling Aspects. Our model is inspired by the book of de Aghion and Morduch [9] and several other studies [20, 26, 23]. We list some of our modeling aspects below.
¹ For simplicity, we assume that all the villages have the same diversification parameter.
² When we apply our model to real-world settings, we will see that, in contrast to the other inputs, d_j and e_j
are not explicitly mentioned in the data and therefore need to be learned from the data. The machine learning
scheme for that will be presented in Section 4.2.
³ Note that although this term bears a similarity to the well-known entropic term, they are different, because the x_{j,i} here can be larger than 1.
Objective of MFIs. It may seem unusual that although MFIs are banks, we do not model them as
profit-maximizing agents. The perception that MFIs make profits while serving the poor has been
described as a "myth" [9, Ch. 1]. In fact, the book devotes a whole chapter to busting this myth [9, Ch.
9]. Therefore, empirical evidence supports modeling MFIs as not-for-profit organizations.
Objective of Villages. Typical customers of MFIs are low-income people engaged in small projects
and most of them are women working at home (e.g., Grameen Bank has a 95% female customer
base) [9]. Clearly, there is a distinction between customers borrowing from an MFI and those borrowing from commercial banks. Therefore, we model the village side as non-corporate agents.
Diversification of Loan Portfolios. Empirical studies suggest that the village side does not maximize
its loan by borrowing only from the lowest interest rate MFI [26, 23]. There are other factors, such
as large loan sizes, shorter waiting periods, and flexible repayment schemes [26]. We added the
diversification term in the village objective function to reflect this. Furthermore, this formulation is
in line with the quantal response approach [19], and human subjects are known to respond to it [17].
Complete repayment of loans. A hallmark of microfinance systems worldwide is very high repayment rates. For example, the loan recovery rate of Grameen Bank is 99.46% and PKSF 99.51% [21].
Due to such empirical evidence, we assume that the village-side completely repays its loan.
3.1 Special Case: No Diversification of Loan Portfolios
It will be useful to first study the case of non-diversified loan portfolios, i.e., λ = 0. In this case, the
villages simply wish to maximize the amount of loan that they can borrow. Several properties of an
equilibrium point can be derived for this special case. We give the complete proofs in Appendix C.
Property 3.1. At any equilibrium point (x*, r*), every MFI i's supply must match the demand for
its loan, i.e., Σ_{j∈V_i} x*_{j,i} = T_i. Furthermore, every village j borrows only from those MFIs i ∈ B_j
that offer the lowest interest rate. That is, Σ_{i∈B_j : r*_i = r*_{m_j}} x*_{j,i} (1 + r*_i − e_j) = d_j for any MFI
m_j ∈ arg min_{i∈B_j} r*_i, and x*_{j,k} = 0 for any MFI k such that r*_k > r*_{m_j}.
Proof Sketch. By contradiction: if either condition failed, the constraints of the village-side
or the MFI-side optimization would be violated at the equilibrium point.
We next present a lower bound on interest rates at an equilibrium point.
Property 3.2. At any equilibrium point (x*, r*), for every MFI i, r*_i > max_{j∈V_i} e_j − 1.
Proof Sketch. Otherwise, the village-side demand would be unbounded, which would violate the
MFI-side constraint Σ_{j∈V_i} x*_{j,i} ≤ T_i.
Following are two related results that preclude certain trivial allocations, such as all the allocations
being zero, at an equilibrium point.
Property 3.3. At any equilibrium point (x*, r*), for any village j, there exists an MFI i ∈ B_j such
that x*_{j,i} > 0.
Proof Sketch. In this case, j satisfies its constraints but does not maximize its objective function.
Property 3.4. At any equilibrium point (x*, r*), for any MFI i, there exists a village j ∈ V_i such
that x*_{j,i} > 0.
Proof Sketch. The first constraint of (P_M) for MFI i is violated.
3.2 Eisenberg-Gale Formulation
We now present an Eisenberg-Gale convex program formulation of a restricted case of our model
where the diversification parameter λ = 0 and all the villages j, 1 ≤ j ≤ m, have the same revenue
generation function g_j(l) := d + e l, where d > 0 and e ≥ 1 are constants. We first prove that
this case is equivalent to the following Eisenberg-Gale convex program [11, 25], which gives us the
existence of an equilibrium point and the uniqueness of the equilibrium interest rates as a corollary.
Below is the Eisenberg-Gale convex program [11, p. 166].
  min_z  − Σ_{j=1}^{m} log( Σ_{i∈B_j} z_{j,i} )
  subject to  Σ_{j∈V_i} z_{j,i} − T_i ≤ 0,  1 ≤ i ≤ n        (P_E)
              z_{j,i} ≥ 0,  1 ≤ i ≤ n, j ∈ V_i
We have the following theorem and corollary.
Theorem 3.5. The special case of microfinance markets with identical villages and no loan portfolio
diversification has an equivalent Eisenberg-Gale formulation.
Proof Sketch. The complete proof is very long and given in Appendix C. We first make a connection
between an equilibrium point (x*, r*) of a microfinance market and the variables of program (P_E).
In particular, we define x*_{j,i} ≡ z*_{j,i} and express r*_i in terms of certain dual variables of (P_E). Using
the properties given in Section 3.1, we show that the equilibrium conditions of (P_M) and (P_V) in
this special case are equivalent to the Karush-Kuhn-Tucker (KKT) conditions of (P_E).
Corollary 3.6. For the above special case, there exists an equilibrium point with unique interest
rates [11] and a combinatorial polynomial-time algorithm to compute it [25].
An implication of Theorem 3.5 is that in a more restricted case of our model (with the additional
constraint of T_i being the same for all MFIs i), our model is indeed a graphical linear Fisher model where
all the "utility coefficients" are set to 1 (see convex program 5.1 in [25] to verify this).
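For intuition, the restricted case can be solved numerically with an off-the-shelf convex solver. A minimal sketch using the cvxpy library (our toy instance and naming; per the proof of Theorem 3.5, the optimal z gives the equilibrium allocations, and the equilibrium interest rates can be recovered from the duals of the supply constraints via the mapping in Appendix C, which we do not reproduce here):

```python
import cvxpy as cp

T = [10.0, 5.0]                              # MFI supplies
edges = [(0, 0), (1, 0), (2, 0), (2, 1)]     # (village j, MFI i)
villages = sorted({j for j, _ in edges})
mfis = sorted({i for _, i in edges})

z = {ed: cp.Variable(nonneg=True) for ed in edges}
# Program (P_E): minimize -sum_j log(sum_{i in B_j} z_{j,i}) subject to supply.
objective = cp.Minimize(-sum(cp.log(sum(z[ed] for ed in edges if ed[0] == j))
                             for j in villages))
supply = [sum(z[ed] for ed in edges if ed[1] == i) <= T[i] for i in mfis]
prob = cp.Problem(objective, supply)
prob.solve()

print({ed: round(float(z[ed].value), 3) for ed in edges})  # allocations x*
print([float(c.dual_value) for c in supply])               # duals behind r*
```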
3.3 Equilibrium Properties of General Case
In the general case, the objective function of (P_V) can be written as Σ_{i∈B_j} x_{j,i} −
λ Σ_{i∈B_j} x_{j,i} log x_{j,i}. While the first term wants to maximize the total amount of loan, the second (diversification) term wants, in colloquial terms, "not to put all the eggs in one basket." If λ is
sufficiently small, then the first term dominates the second, which is a desirable assumption.
Assumption 3.1.
  0 ≤ λ ≤ 1 / (2 + log T_max),
where T_max ≡ max_i T_i and, w.l.o.g., T_i > 1 for all i.
The following equilibrium properties will be used in the next section.
Property 3.7. The first constraint of (P_V) must be tight at any equilibrium point.
Proof Sketch. Otherwise, the village can increase its objective function slightly.
We define e^i_max ≡ max_{j∈V_i} e_j and d^i_max ≡ max_{j∈V_i} d_j and obtain the following bounds.
Property 3.8. At any equilibrium point, for each MFI i,
  e^i_max − 1 < r*_i ≤ |V_i| d^i_max / T_i + e^i_max − 1.
Proof Sketch. The proof of e^i_max − 1 < r*_i is similar to the proof of Property 3.2. The upper bound
is derived from the maximum loan a village j can seek from MFI i at an equilibrium point.
4 Computational Scheme
For clarity of presentation, we first design an algorithm for equilibrium computation and then
discuss learning the parameters of our model.
4.1 Computing an Equilibrium Point
We give a constructive proof of the existence of an equilibrium point in the microfinance market
defined by (P_M) and (P_V). The inputs are λ > 0, e_j and d_j for each village j, and T_i for each MFI
i. We first give a brief outline of our scheme in Algorithm 1.
Algorithm 1 Outline of Equilibrium Computation
 1: For each MFI i, initialize r_i to e^i_max − 1.
 2: For each village j, compute its best response x_j.
 3: repeat
 4:    for all MFIs i do
 5:        while T_i ≠ Σ_{j∈V_i} x_{j,i} do
 6:            Change r_i as described after Lemma 4.3.
 7:            For each village j ∈ V_i, update its best response x_j reflecting the change in r_i.
 8:        end while
 9:    end for
10: until no change to r_i occurs for any i
Before going on to the details of how to change ri in Line 6 of Algorithm 1, we characterize the best
response of the villages used in Line 7.
Lemma 4.1. (Village's Best Response) Given the interest rates of all the MFIs, the following is the
unique best response of any village j to any MFI i ∈ B_j:
  x*_{j,i} = exp( (1 − λ − μ*_j (1 + r_i − e_j)) / λ ),        (1)
where μ*_j ≥ 0 is the unique solution to
  Σ_{i∈B_j} exp( (1 − λ − μ*_j (1 + r_i − e_j)) / λ ) (1 + r_i − e_j) = d_j.        (2)
Proof Sketch. Derive the Lagrangian of (P_V) and argue about optimality.
Therefore, as soon as r_i of some MFI i changes in Line 6 of Algorithm 1, both x*_{j,i} and the Lagrange
multiplier μ*_j change in Line 7, for any village j ∈ V_i. Next, we show the direction of these changes.
Lemma 4.2. Whenever r_i increases (decreases) in Line 6, x_{j,i} must decrease (increase) for every
village j ∈ V_i in Line 7 of Algorithm 1.
Proof Sketch. Rewrite the expression for x*_{j,i} given in Lemma 4.1 in terms of μ*_j. Do the same for
x*_{j,k} for some k ∈ B_j. Use the two expressions for μ*_j to argue about the increase of r_i.
The next lemma is a cornerstone of our theoretical results. Here, we use the term turn of an MFI to
refer to the iterative execution of Line 6, wherein an MFI sets its interest rate to clear its market.
Lemma 4.3. (Strategic Complementarity) Suppose that an MFI i has increased its interest rate at
the end of its turn. Thereafter, it cannot be the best response of any other MFI k to lower its interest
rate when its turn comes in the algorithm.
Proof Sketch. The proof follows from Lemma 4.2 and Assumption 3.1. The main task is to show
that when r_i increases, μ*_j cannot increase for any j ∈ V_i.
In essence, Lemma 4.2 is a result of strategic substitutability [10] between the MFI and the village
sides, while Lemma 4.3 is a result of strategic complementarity [8] among the MFIs. Our algorithm
exploits these two properties as we fill in the details of Lines 6 and 7 next.
Line 6: MFI's Best Response. By Lemma 4.2, the total demand for MFI i's loan monotonically
decreases as r_i increases. We use a binary search between the lower and upper bounds on r_i
given in Property 3.8 to find the "right" value of r_i. More details are given in Appendix D.
Line 7: Village's Best Response. We use Lemma 4.1 to compute each village j's best response x*_{j,i}
to the MFIs i ∈ B_j. However, Equation (1) requires the computation of μ*_j, the solution to Equation (2).
We exploit the convexity of Equation (2) to design a simple search algorithm to find μ*_j.
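Combining Lemma 4.1 (Line 7), the binary search over the bounds of Property 3.8 (Line 6), and the outer sweep of Algorithm 1 gives the following self-contained numerical sketch in Python (our naming and toy instance throughout; tolerances and iteration caps stand in for the exact equalities):

```python
import math

# Toy instance: 2 MFIs, 3 villages; (j, i) in edges iff MFI i serves village j.
T = [10.0, 5.0]
d = [1.0, 2.0, 1.5]
e = [1.2, 1.1, 1.3]
edges = {(0, 0), (1, 0), (2, 0), (2, 1)}
lam = 0.05  # diversification parameter, small as in Assumption 3.1

def V(i): return [j for (j, k) in edges if k == i]
def B(j): return [i for (jj, i) in edges if jj == j]

def x_of_mu(j, mu, r):
    # Equation (1): x_{j,i} = exp((1 - lam - mu (1 + r_i - e_j)) / lam).
    return {i: math.exp((1 - lam - mu * (1 + r[i] - e[j])) / lam) for i in B(j)}

def best_response(j, r):
    # Lemma 4.1: bisect on mu_j >= 0 until the budget constraint (Eq. (2)) is tight.
    budget = lambda mu: sum(x * (1 + r[i] - e[j])
                            for i, x in x_of_mu(j, mu, r).items())
    lo, hi = 0.0, 1.0
    while budget(hi) > d[j]:
        hi *= 2.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if budget(mid) > d[j]:
            lo = mid
        else:
            hi = mid
    return x_of_mu(j, hi, r)

def demand(i, r):
    return sum(best_response(j, r)[i] for j in V(i))

def equilibrium(sweeps=50, tol=1e-8):
    n = len(T)
    r = [max(e[j] for j in V(i)) - 1.0 + 1e-6 for i in range(n)]   # Line 1
    for _ in range(sweeps):                                         # repeat
        r_old = list(r)
        for i in range(n):                                          # for all MFIs
            lo = max(e[j] for j in V(i)) - 1.0                      # Property 3.2
            hi = lo + len(V(i)) * max(d[j] for j in V(i)) / T[i]    # Property 3.8
            for _ in range(100):                                    # Line 6
                mid = (lo + hi) / 2.0
                trial = list(r)
                trial[i] = mid
                if demand(i, trial) > T[i]:
                    lo = mid   # demand too high: raise the rate
                else:
                    hi = mid
            r[i] = hi          # Line 7 happens inside demand()
        if max(abs(a - b) for a, b in zip(r, r_old)) < tol:         # until stable
            break
    return r, {j: best_response(j, r) for j in range(len(d))}

r_star, x_star = equilibrium()
print([round(ri, 4) for ri in r_star])  # equilibrium interest rates
```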
Theorem 4.4. There always exists an equilibrium point in a microfinance market specified by programs (P_M) and (P_V).
Proof Sketch. Use Lemmas 4.3 and 4.1 and the well-known monotone convergence theorem.
4.2 Learning the Parameters of the Model
The inputs are the spatial structure of the market, the observed loan allocations x̂_{j,i} for every village j
and MFI i ∈ B_j, and the observed interest rate r̂_i and total supply T_i of every MFI i. The objective of
the learning scheme is to instantiate the parameters e_j and d_j for all j. We learn these parameters using
the program below, so that an equilibrium point closely approximates the observed data.
  min_{e,d,r}  Σ_i Σ_{j∈V_i} (x*_{j,i} − x̂_{j,i})² + C Σ_i (r_i − r̂_i)²
  such that, for all j,
    x*_j ∈ arg max_{x_j} { Σ_{i∈B_j} x_{j,i} + λ Σ_{i∈B_j} x_{j,i} log(1/x_{j,i}) :
                           Σ_{i∈B_j} x_{j,i} (1 + r_i − e_j) ≤ d_j,  x_j ≥ 0 }
    e_j ≥ 1,  d_j ≥ 0
    Σ_{j∈V_i} x*_{j,i} = T_i  for all i        (3)
    r_i ≥ e_j − 1  for all i and all j ∈ V_i
The above is a nested (bi-level) optimization program. The term C is a constant. In the interior
optimization program, the x* are best responses of the villages w.r.t. the parameters and the interest
rates r. In practice, we exploit Lemma 4.1 to compute x* more efficiently, since it suffices to search
for the Lagrange multipliers μ_j in a much smaller search space and then apply Equation (1). We use the
interior-point algorithm of Matlab's large-scale optimization package to solve the above program.
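A heavily simplified prototype of this bi-level program in Python with scipy (our code: the toy observations x_obs and r_obs are made up, the market-clearing constraint (3) is handled by a quadratic penalty rather than exactly, a clamp stands in for the constraint r_i ≥ e_j − 1, and the paper instead uses Matlab's interior-point solver):

```python
import math
import numpy as np
from scipy.optimize import minimize

# Toy observed data, same bipartite structure as the equilibrium sketch above.
edges = [(0, 0), (1, 0), (2, 0), (2, 1)]
T = np.array([10.0, 5.0])
x_obs = {(0, 0): 4.0, (1, 0): 3.5, (2, 0): 2.5, (2, 1): 5.0}
r_obs = np.array([0.25, 0.30])
lam, C, n, m = 0.05, 1.0, 2, 3

def best_x(j, r, e, d):
    # Inner problem via Lemma 4.1: bisection on the multiplier mu_j.
    B = [i for (jj, i) in edges if jj == j]
    # Guard: the program enforces r_i >= e_j - 1; clamping keeps the sketch robust.
    coef = {i: max(1.0 + r[i] - e[j], 1e-6) for i in B}
    x = lambda mu: {i: math.exp((1 - lam - mu * coef[i]) / lam) for i in B}
    budget = lambda mu: sum(x(mu)[i] * coef[i] for i in B)
    lo, hi = 0.0, 1.0
    while budget(hi) > d[j]:
        hi *= 2.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if budget(mid) > d[j] else (lo, mid)
    return x(hi)

def loss(theta):
    e, dd, r = theta[:m], theta[m:2 * m], theta[2 * m:]
    xs = {j: best_x(j, r, e, dd) for j in range(m)}
    fit = sum((xs[j][i] - x_obs[(j, i)]) ** 2 for (j, i) in edges)
    fit += C * float(np.sum((r - r_obs) ** 2))
    # Quadratic penalty for the market-clearing constraint (3).
    clear = sum((sum(xs[j][i] for (j, k) in edges if k == i) - T[i]) ** 2
                for i in range(n))
    return fit + 100.0 * clear

theta0 = np.concatenate([np.full(m, 1.2), np.full(m, 2.0), r_obs])
bounds = [(1.0, 2.0)] * m + [(1e-3, None)] * m + [(1e-3, None)] * n
res = minimize(loss, theta0, method="L-BFGS-B", bounds=bounds)
print(np.round(res.x, 3))  # learned (e_j, d_j) and fitted interest rates
```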
In the next section, we show that the above learning procedure does not overfit the real-world data.
We also highlight the issue of equilibrium selection for parameter estimation.
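As a rough illustration only: once the inner problem is delegated to an equilibrium routine, the outer program can be handed to a generic solver. The sketch below is our own simplification (the paper uses Matlab's interior-point solver); compute_equilibrium is a hypothetical stand-in for Algorithm 1, and a derivative-free method is chosen because the equilibrium map need not be smooth in (e, d).

```python
import numpy as np
from scipy.optimize import minimize

def fit_parameters(x_hat, r_hat, e0, d0, C, compute_equilibrium):
    """Sketch of program (3). compute_equilibrium is assumed to run
    Algorithm 1 for given (e, d), returning equilibrium allocations x*
    (aligned with x_hat) and rates r*. The supply constraints and the
    bound r_i >= e_j - 1 are assumed to be enforced inside that routine;
    only e_j >= 1 and d_j >= 0 appear here."""
    nv = len(e0)

    def objective(theta):
        e, d = theta[:nv], theta[nv:]
        x_star, r_star = compute_equilibrium(e, d)
        return (np.sum((x_star - x_hat) ** 2)
                + C * np.sum((r_star - r_hat) ** 2))

    bounds = [(1.0, None)] * nv + [(0.0, None)] * nv  # e_j >= 1, d_j >= 0
    res = minimize(objective, np.concatenate([e0, d0]),
                   method="Nelder-Mead", bounds=bounds)
    return res.x[:nv], res.x[nv:]
```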
5 Empirical Study
We now present our empirical study based on the microfinance data from Bolivia and Bangladesh.
The details of this study can be found in Appendix E (included in the supplementary material).
Case Study: Bolivia
Data. We obtained microfinance data of Bolivia from several sources, such as ASOFIN, the apex
body of MFIs in Bolivia, and the Central Bank of Bolivia.⁴ We were only able to collect somewhat
coarse, region-level data (June 2011). It consists of eight MFIs operating in 10 regions.
Computational Results. We first choose a value of γ such that the objective function value of the
learning optimization is low as well as "stable" and the interest rates are also relatively dissimilar.
Using this value of γ, the learned e_j's and d_j's capture the variation among the villages w.r.t. the
revenue generation function. The learned loan allocations closely approximate the observed allocations. The learned model matches each MFI's total loan allocations due to the learning scheme.
Issues of Bias and Variance. Our dataset consists of a single sample. As a result, the traditional
approaches of performing cross validation using hold-out sets or plotting learning curves by varying
the number of samples do not work in our setting. Instead, we systematically introduce noise to
the observed data sample. In the case of overfitting, increasing the level of noise would lead the
equilibrium outcome to be significantly different from the observed data. To that end, we used two
noise models: Gaussian and Dirichlet. In both cases, the training and test errors are very low and the
learning curves do not suggest overfitting.
⁴ http://www.asofinbolivia.com; http://www.bcb.gob.bo/
Equilibrium Selection. In the case of multiple equilibria, our learning scheme biases its search
for an equilibrium point that most closely explains the data. However, does the equilibrium point
change drastically when noise is added to data? For this, we extended the above procedure using
a bootstrapping scheme to measure the distance between different equilibrium points when noise is
added. For both Gaussian and Dirichlet noise models, we found that the equilibrium point does not
change much even with a high degree of noise. Details, including plots, are given in Appendix E.
Case Study: Bangladesh
Based on the microfinance data (consisting of seven MFIs and 464 villages/regions), dated December 2005, from Palli Karma Sahayak Foundation (PKSF), which is the apex body of NGO MFIs in
Bangladesh, we have obtained very similar results to the Bolivia case (see Appendix E).
6 Policy Experiments
For a specific intervention policy, e.g., removal of government-owned MFIs, we first learn the parameters of the model and then compute an equilibrium point, both in the original setting (before
removal of any MFI). Using the parameters learned, we compute a new equilibrium point after the
removal of the government-owned MFIs. Finally, we study changes in these two equilibria (before
and after removal) in order to predict the effect of such an intervention.
Role of subsidies. MFIs are very much dependent on subsidies [9, 20]. We ask a related question:
how does giving subsidies to an MFI affect the market? For instance, one of the Bolivian MFIs
named Eco Futuro exhibits very high interest rates both in observed data and at an equilibrium point.
Eco Futuro is connected to all the villages, but has very little total loan to be disbursed compared to
the leading MFI Bancosol. Using our model, if we inject further subsidies into Eco Futuro to make
its total loan amount equal to Bancosol?s, not only do these two MFIs have the same (but lower than
before) equilibrium interest rates, it also drives down the interest rates of the other MFIs.
Changes in interest rates. Our model computes lower equilibrium interest rate (around 12%) for
ASA than its observed interest rate (15%). It is interesting to note that in late 2005, ASA lowered its
interest rate from 15% to 12.5%, which is close to what our model predicts at an equilibrium point.⁵
Interest rate ceiling. PKSF recently capped the interest rates of its partner organizations to 12.5%
[23], and more recently, the country's Microfinance Regulatory Authority has also imposed a ceiling
on interest rates at around 13.5% flat.⁶ Such evidence on interest rate ceilings is consistent with the
outcome of our model, since in our model, 13.4975% is the highest equilibrium interest rate.
Government-owned MFIs. Many of the government-owned MFIs are loss-making [26]. Our model
shows that removing government-owned MFIs from the market would result in an increase of equilibrium interest rates by approximately 0.5% for every other MFI. It suggests that less competition
leads to higher interest rates, which is consistent with empirical findings [23].
Adding new branches. Suppose that MFI Fassil in Bolivia expands its business to all villages. It
may at first seem that due to the increase in competition, equilibrium interest rates would go down.
However, since Fassil?s total amount of loan does not change, the new connections and the ensuing
increase in demand actually increase equilibrium interest rates of all MFIs.
Other types of intervention. Through our model, we can ask more interesting questions such as
would an interest rate ceiling be still respected after the removal of certain MFIs from the market?
Surprisingly, according to our discussion above, the answer is yes if we were to remove governmentowned MFIs. Similarly, we can ask what would happen if a major MFI gets entirely shut down? We
can also evaluate effects of subsidies from the donor?s perspective (e.g., which MFIs should a donor
select and how should the donor distribute its grants among these MFIs in order to achieve some
goal). Causal questions like these form the long-term goal of this research.
Acknowledgement
We thank the reviewers. Luis E. Ortiz was supported in part by NSF CAREER Award IIS-1054541.
⁵ http://www.adb.org/documents/policies/microfinance/microfinance0303.asp?p=microfnc
⁶ http://www.microfinancegateway.org/p/site/m/template.rc/1.1.10946/
References
[1] K. J. Arrow and G. Debreu. Existence of an equilibrium for a competitive economy. Econometrica, 22(3):265–290, 1954.
[2] A. Augereau, S. Greenstein, and M. Rysman. Coordination versus differentiation in a standards war: 56K modems. The RAND Journal of Economics, 37(4):887–909, 2006.
[3] P. Bajari, H. Hong, and D. Nekipelov. Game theory and econometrics: A survey of some recent research. Working paper, University of Minnesota, Department of Economics, 2010.
[4] S. T. Berry. Estimation of a model of entry in the airline industry. Econometrica: Journal of the Econometric Society, pages 889–917, 1992.
[5] P. A. Bjorn and Q. H. Vuong. Simultaneous equations models for dummy endogenous variables: A game theoretic formulation with an application to labor force participation. Technical Report 527, California Institute of Technology, Division of the Humanities and Social Sciences, 1984.
[6] T. F. Bresnahan and P. C. Reiss. Entry in monopoly markets. The Review of Economic Studies, 57(4):531–553, 1990.
[7] T. F. Bresnahan and P. C. Reiss. Empirical models of discrete games. Journal of Econometrics, 48(1):57–81, 1991.
[8] J. I. Bulow, J. D. Geanakoplos, and P. D. Klemperer. Multimarket oligopoly: Strategic substitutes and complements. The Journal of Political Economy, 93(3):488–511, 1985.
[9] B. de Aghion and J. Morduch. The economics of microfinance. MIT Press, 2005.
[10] P. Dubey, O. Haimanko, and A. Zapechelnyuk. Strategic complements and substitutes, and potential games. Games and Economic Behavior, 54(1):77–94, 2006.
[11] E. Eisenberg and D. Gale. Consensus of subjective probabilities: The pari-mutuel method. Annals Math. Stat., 30:165–168, 1959.
[12] I. Fisher. Mathematical Investigations in the Theory of Value and Prices. Yale University, 1892. books.google.com/books?id=djIoAAAAYAAJ.
[13] M. Ghatak and T. Guinnane. The economics of lending with joint liability: Theory and practice. Journal of Development Economics, 60:195–228, 1999.
[14] M. T. Irfan. Causal Strategic Inference in Social and Economic Networks. PhD thesis, Stony Brook University, August 2013.
[15] S. M. Kakade, M. Kearns, and L. E. Ortiz. Graphical economics. In Proceedings of the 17th Annual Conference on Learning Theory (COLT), pages 17–32. Springer, 2004.
[16] S. M. Kakade, M. Kearns, L. E. Ortiz, R. Pemantle, and S. Suri. Economic properties of social networks. In Advances in Neural Information Processing Systems 17, pages 633–640. MIT Press, 2005.
[17] R. D. Luce. Individual Choice Behavior: A Theoretical Analysis. Wiley, New York, 1959.
[18] D. McFadden. Conditional logit analysis of qualitative choice behavior. In P. Zarembka, editor, Frontiers of Econometrics, pages 105–142. Academic Press, New York, NY, 1974.
[19] R. McKelvey and T. Palfrey. Quantal response equilibria for normal form games. Games and Economic Behavior, 10:6–38, 1995.
[20] J. Morduch. The role of subsidies in microfinance: Evidence from the Grameen Bank. Journal of Development Economics, 60:229–248, 1999.
[21] Institute of Microfinance (InM). Bangladesh Microfinance Statistics. The University Press Limited (UPL), Dhaka 1000, Bangladesh, 2009.
[22] J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, 2000.
[23] D. Porteous. Competition and microcredit interest rates. Focus Note, 33, 2006.
[24] K. Seim. An empirical model of firm entry with endogenous product-type choices. The RAND Journal of Economics, 37(3):619–640, 2006.
[25] V. V. Vazirani. Combinatorial algorithms for market equilibria. In N. Nisan, T. Roughgarden, E. Tardos, and V. V. Vazirani, editors, Algorithmic Game Theory, chapter 5, pages 103–134. Cambridge University Press, 2007.
[26] D. Wright and D. Alamgir. Microcredit interest rates in Bangladesh: "capping v competition". Donors Local Consultative Group on Finance, March 2004.
[27] X. Gine, P. Jakiela, D. Karlan, and J. Morduch. Microfinance games. Discussion paper 936, Yale University Economic Growth Center, 2006.
5,013 | 5,539 | Depth Map Prediction from a Single Image
using a Multi-Scale Deep Network
David Eigen
[email protected]
Christian Puhrsch
[email protected]
Rob Fergus
[email protected]
Dept. of Computer Science, Courant Institute, New York University
Abstract
Predicting depth is an essential component in understanding the 3D geometry of
a scene. While for stereo images local correspondence suffices for estimation,
finding depth relations from a single image is less straightforward, requiring integration of both global and local information from various cues. Moreover, the
task is inherently ambiguous, with a large source of uncertainty coming from the
overall scale. In this paper, we present a new method that addresses this task by
employing two deep network stacks: one that makes a coarse global prediction
based on the entire image, and another that refines this prediction locally. We also
apply a scale-invariant error to help measure depth relations rather than scale. By
leveraging the raw datasets as large sources of training data, our method achieves
state-of-the-art results on both NYU Depth and KITTI, and matches detailed depth
boundaries without the need for superpixelation.
1 Introduction
Estimating depth is an important component of understanding geometric relations within a scene. In
turn, such relations help provide richer representations of objects and their environment, often leading to improvements in existing recognition tasks [18], as well as enabling many further applications
such as 3D modeling [16, 6], physics and support models [18], robotics [4, 14], and potentially reasoning about occlusions.
While there is much prior work on estimating depth based on stereo images or motion [17], there has
been relatively little on estimating depth from a single image. Yet the monocular case often arises in
practice: Potential applications include better understandings of the many images distributed on the
web and social media outlets, real estate listings, and shopping sites. These include many examples
of both indoor and outdoor scenes.
There are likely several reasons why the monocular case has not yet been tackled to the same degree
as the stereo one. Provided accurate image correspondences, depth can be recovered deterministically in the stereo case [5]. Thus, stereo depth estimation can be reduced to developing robust image
point correspondences, which can often be found using local appearance features. By contrast,
estimating depth from a single image requires the use of monocular depth cues such as line angles
and perspective, object sizes, image position, and atmospheric effects. Furthermore, a global view
of the scene may be needed to relate these effectively, whereas local disparity is sufficient for stereo.
Moreover, the task is inherently ambiguous, and a technically ill-posed problem: Given an image, an
infinite number of possible world scenes may have produced it. Of course, most of these are physically implausible for real-world spaces, and thus the depth may still be predicted with considerable
accuracy. At least one major ambiguity remains, though: the global scale. Although extreme cases
(such as a normal room versus a dollhouse) do not exist in the data, moderate variations in room
and furniture sizes are present. We address this using a scale-invariant error in addition to more
common scale-dependent errors. This focuses attention on the spatial relations within a scene rather
than general scale, and is particularly apt for applications such as 3D modeling, where the model is
often rescaled during postprocessing.
In this paper we present a new approach for estimating depth from a single image. We directly
regress on the depth using a neural network with two components: one that first estimates the global
structure of the scene, then a second that refines it using local information. The network is trained
using a loss that explicitly accounts for depth relations between pixel locations, in addition to pointwise error. Our system achieves state-of-the-art estimation rates on NYU Depth and KITTI, as well
as improved qualitative outputs.
2 Related Work
Directly related to our work are several approaches that estimate depth from a single image. Saxena
et al. [15] predict depth from a set of image features using linear regression and a MRF, and later
extend their work into the Make3D [16] system for 3D model generation. However, the system
relies on horizontal alignment of images, and suffers in less controlled settings. Hoiem et al. [6] do
not predict depth explicitly, but instead categorize image regions into geometric structures (ground,
sky, vertical), which they use to compose a simple 3D model of the scene.
More recently, Ladicky et al. [12] show how to integrate semantic object labels with monocular
depth features to improve performance; however, they rely on handcrafted features and use superpixels to segment the image. Karsch et al. [7] use a kNN transfer mechanism based on SIFT Flow
[11] to estimate depths of static backgrounds from single images, which they augment with motion
information to better estimate moving foreground subjects in videos. This can achieve better alignment, but requires the entire dataset to be available at runtime and performs expensive alignment
procedures. By contrast, our method learns an easier-to-store set of network parameters, and can be
applied to images in real-time.
More broadly, stereo depth estimation has been extensively investigated. Scharstein et al. [17] provide a survey and evaluation of many methods for 2-frame stereo correspondence, organized by
matching, aggregation and optimization techniques. In a creative application of multiview stereo,
Snavely et al. [20] match across views of many uncalibrated consumer photographs of the same
scene to create accurate 3D reconstructions of common landmarks.
Machine learning techniques have also been applied in the stereo case, often obtaining better results
while relaxing the need for careful camera alignment [8, 13, 21, 19]. Most relevant to this work is
Konda et al. [8], who train a factored autoencoder on image patches to predict depth from stereo
sequences; however, this relies on the local displacements provided by stereo.
There are also several hardware-based solutions for single-image depth estimation. Levin et al. [10]
perform depth from defocus using a modified camera aperture, while the Kinect and Kinect v2 use
active stereo and time-of-flight to capture depth. Our method makes indirect use of such sensors
to provide ground truth depth targets during training; however, at test time our system is purely
software-based, predicting depth from RGB images.
3 Approach
3.1 Model Architecture
Our network is made of two component stacks, shown in Fig. 1. A coarse-scale network first predicts
the depth of the scene at a global level. This is then refined within local regions by a fine-scale
network. Both stacks are applied to the original input, but in addition, the coarse network?s output
is passed to the fine network as additional first-layer image features. In this way, the local network
can edit the global prediction to incorporate finer-scale details.
3.1.1 Global Coarse-Scale Network
The task of the coarse-scale network is to predict the overall depth map structure using a global view
of the scene. The upper layers of this network are fully connected, and thus contain the entire image
in their field of view. Similarly, the lower and middle layers are designed to combine information
from different parts of the image through max-pooling operations to a small spatial dimension. In
so doing, the network is able to integrate a global understanding of the full scene to predict the
depth. Such an understanding is needed in the single-image case to make effective use of cues such
[Figure 1: the two stacks. Coarse: 11x11 conv (stride 4) + 2x2 pool → 96 maps (Coarse 1); 5x5 conv + 2x2 pool → 256 (Coarse 2); three 3x3 convs → 384, 384, 256 (Coarse 3-5); two fully connected layers → 4096 (Coarse 6) and the coarse depth output (Coarse 7). Fine: 9x9 conv (stride 2) + 2x2 pool → 63 maps (Fine 1), concatenated with the coarse output to 64 maps, then 5x5 convs (Fine 2-4) down to the 1-map refined output.]

Layer         Size (NYUDepth)  Size (KITTI)  Ratio to input
input         304x228          576x172       /1
1             37x27            71x20         /8
2,3,4         18x13            35x9          /16
5             8x6              17x4          /32
6             1x1              1x1           -
7             74x55            142x27        /4
Fine 1,2,3,4  74x55            142x27        /4

Figure 1: Model architecture.
as vanishing points, object locations, and room alignment. A local view (as is commonly used for stereo matching) is insufficient to notice important features such as these.

As illustrated in Fig. 1, the global, coarse-scale network contains five feature extraction layers of convolution and max-pooling, followed by two fully connected layers. The input, feature map and output sizes are given in Fig. 1. The final output is at 1/4-resolution compared to the input (which is itself downsampled from the original dataset by a factor of 2), and corresponds to a center crop containing most of the input (as we describe later, we lose a small border area due to the first layer of the fine-scale network and image transformations).

Note that the spatial dimension of the output is larger than that of the topmost convolutional feature map. Rather than limiting the output to the feature map size and relying on hardcoded upsampling before passing the prediction to the fine network, we allow the top full layer to learn templates over the larger area (74x55 for NYU Depth). These are expected to be blurry, but will be better than the upsampled output of a 8x6 prediction (the top feature map size); essentially, we allow the network to learn its own upsampling based on the features. Sample output weights are shown in Fig. 2.

All hidden layers use rectified linear units for activations, with the exception of the coarse output layer 7, which is linear. Dropout is applied to the fully-connected hidden layer 6. The convolutional layers (1-5) of the coarse-scale network are pretrained on the ImageNet classification task [1]; while developing the model, we found pretraining on ImageNet worked better than initializing randomly, although the difference was not very large.¹

3.1.2 Local Fine-Scale Network

After taking a global perspective to predict the coarse depth map, we make local refinements using a second, fine-scale network. The task of this component is to edit the coarse prediction it receives to align with local details such as object and wall edges. The fine-scale network stack consists of convolutional layers only, along with one pooling stage for the first layer edge features.

While the coarse network sees the entire scene, the field of view of an output unit in the fine network is 45x45 pixels of input. The convolutional layers are applied across feature maps at the target output size, allowing a relatively high-resolution output at 1/4 the input scale.
More concretely, the coarse output is fed in as an additional low-level feature map. By design, the
coarse prediction is the same spatial size as the output of the first fine-scale layer (after pooling),
¹ When pretraining, we stack two fully connected layers with 4096 - 4096 - 1000 output units each, with dropout applied to the two hidden layers, as in [9]. We train the network using random 224x224 crops from the center 256x256 region of each training image, rescaled so the shortest side has length 256. This model achieves a top-5 error rate of 18.1% on the ILSVRC2012 validation set, voting with 2 flips and 5 translations per image.
Figure 2: Weight vectors from layer Coarse 7 (coarse output), for (a) KITTI and (b) NYUDepth.
Red is positive (farther) and blue is negative (closer); black is zero. Weights are selected uniformly
and shown in descending order by l2 norm. KITTI weights often show changes in depth on either
side of the road. NYUDepth weights often show wall positions and doorways.
and we concatenate the two together (Fine 2 in Fig. 1). Subsequent layers maintain this size using
zero-padded convolutions.
All hidden units use rectified linear activations. The last convolutional layer is linear, as it predicts
the target depth. We train the coarse network first against the ground-truth targets, then train the
fine-scale network keeping the coarse-scale output fixed (i.e. when training the fine network, we do
not backpropagate through the coarse one).
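A minimal PyTorch sketch of the two stacks as we read Fig. 1 (NYUDepth sizes); the padding choices, the lazily sized fully connected layer, and the exact fine-stack layout are our assumptions, since the original predates this framework.

```python
import torch
import torch.nn as nn

class CoarseNet(nn.Module):
    """Coarse stack of Fig. 1: five conv/pool feature layers, then two
    fully connected layers; the output layer 7 is linear (log depth)."""
    def __init__(self, out_hw=(55, 74)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, 11, stride=4), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(96, 256, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(256, 384, 3, padding=1), nn.ReLU(),
            nn.Conv2d(384, 384, 3, padding=1), nn.ReLU(),
            nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.out_hw = out_hw
        self.fc6 = nn.Sequential(nn.Flatten(), nn.LazyLinear(4096),
                                 nn.ReLU(), nn.Dropout())
        self.fc7 = nn.Linear(4096, out_hw[0] * out_hw[1])  # linear output

    def forward(self, x):
        y = self.fc7(self.fc6(self.features(x)))
        return y.view(-1, 1, *self.out_hw)  # learned upsampling templates

class FineNet(nn.Module):
    """Fine stack: conv layers only, one pooling stage on the first layer;
    the coarse prediction enters as an extra low-level feature map."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(3, 63, 9, stride=2),
                                   nn.ReLU(), nn.MaxPool2d(2))
        self.conv2 = nn.Sequential(nn.Conv2d(64, 64, 5, padding=2), nn.ReLU())
        self.conv3 = nn.Sequential(nn.Conv2d(64, 64, 5, padding=2), nn.ReLU())
        self.conv4 = nn.Conv2d(64, 1, 5, padding=2)  # linear: refined log depth

    def forward(self, x, coarse_depth):
        f = self.conv1(x)                    # Fine 1: 63 maps at output size
        f = torch.cat([f, coarse_depth], 1)  # concatenate coarse prediction
        return self.conv4(self.conv3(self.conv2(f)))

# coarse, fine = CoarseNet(), FineNet()
# rgb = torch.randn(1, 3, 228, 304)
# refined = fine(rgb, coarse(rgb))  # both outputs are 55x74 log-depth maps
```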
3.2 Scale-Invariant Error
The global scale of a scene is a fundamental ambiguity in depth prediction. Indeed, much of the error
accrued using current elementwise metrics may be explained simply by how well the mean depth is
predicted. For example, Make3D trained on NYUDepth obtains 0.41 error using RMSE in log space
(see Table 1). However, using an oracle to substitute the mean log depth of each prediction with the
mean from the corresponding ground truth reduces the error to 0.33, a 20% relative improvement.
Likewise, for our system, these error rates are 0.28 and 0.22, respectively. Thus, just finding the
average scale of the scene accounts for a large fraction of the total error.
Motivated by this, we use a scale-invariant error to measure the relationships between points in the
scene, irrespective of the absolute global scale. For a predicted depth map y and ground truth y*, each with n pixels indexed by i, we define the scale-invariant mean squared error (in log space) as

$$D(y, y^*) = \frac{1}{2n} \sum_{i=1}^{n} \left( \log y_i - \log y_i^* + \alpha(y, y^*) \right)^2, \qquad (1)$$

where α(y, y*) = (1/n) Σ_i (log y_i* − log y_i) is the value of α that minimizes the error for a given (y, y*). For any prediction y, e^α is the scale that best aligns it to the ground truth. All scalar multiples of y have the same error, hence the scale invariance.
Two additional ways to view this metric are provided by the following equivalent forms. Setting d_i = log y_i − log y_i* to be the difference between the prediction and ground truth at pixel i, we have

$$D(y, y^*) = \frac{1}{2n^2} \sum_{i,j} \left( (\log y_i - \log y_j) - (\log y_i^* - \log y_j^*) \right)^2 \qquad (2)$$

$$= \frac{1}{n} \sum_i d_i^2 - \frac{1}{n^2} \sum_{i,j} d_i d_j \;=\; \frac{1}{n} \sum_i d_i^2 - \frac{1}{n^2} \Big( \sum_i d_i \Big)^2 \qquad (3)$$
Eqn. 2 expresses the error by comparing relationships between pairs of pixels i, j in the output: to
have low error, each pair of pixels in the prediction must differ in depth by an amount similar to that
of the corresponding pair in the ground truth. Eqn. 3 relates the metric to the original l2 error, but
with an additional term, −(1/n²) Σ_{i,j} d_i d_j, that credits mistakes if they are in the same direction and
penalizes them if they oppose. Thus, an imperfect prediction will have lower error when its mistakes
are consistent with one another. The last part of Eqn. 3 rewrites this as a linear-time computation.
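For instance, the linear-time form can be computed directly from the per-pixel log differences; a small numpy sketch (we assume invalid pixels have already been filtered out):

```python
import numpy as np

def scale_invariant_error(pred_depth, gt_depth):
    """Linear-time form of Eqn. 3: (1/n) sum d_i^2 - (1/n^2) (sum d_i)^2,
    where d_i = log y_i - log y_i*."""
    d = np.log(pred_depth) - np.log(gt_depth)
    n = d.size
    return (d ** 2).sum() / n - d.sum() ** 2 / n ** 2

# Scale invariance: scaling the prediction by a constant s shifts every
# d_i by log(s), which leaves this expression unchanged.
```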
In addition to the scale-invariant error, we also measure the performance of our method according
to several error metrics that have been proposed in prior works, as described in Section 4.
3.3 Training Loss
In addition to performance evaluation, we also tried using the scale-invariant error as a training loss.
Inspired by Eqn. 3, we set the per-sample training loss to
$$L(y, y^*) = \frac{1}{n} \sum_i d_i^2 - \frac{\lambda}{n^2} \Big( \sum_i d_i \Big)^2 \qquad (4)$$
where d_i = log y_i − log y_i* and λ ∈ [0, 1]. Note the output of the network is log y; that is, the final
linear layer predicts the log depth. Setting λ = 0 reduces to elementwise l2, while λ = 1 is the
scale-invariant error exactly. We use the average of these, i.e. λ = 0.5, finding that this produces
good absolute-scale predictions while slightly improving qualitative output.
During training, most of the target depth maps will have some missing values, particularly near
object boundaries, windows and specular surfaces. We deal with these simply by masking them out
and evaluating the loss only on valid points, i.e. we replace n in Eqn. 4 with the number of pixels
that have a target depth, and perform the sums excluding pixels i that have no depth value.
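Written out for a single sample, the masked loss of Eqn. 4 (with λ = 0.5) is short; this sketch is ours, with tensors holding log depths and a binary validity mask:

```python
import torch

def training_loss(log_pred, log_gt, valid, lam=0.5):
    """Eqn. 4 with missing depths masked out: n counts only valid pixels,
    and both sums run over those pixels alone."""
    d = (log_pred - log_gt) * valid   # zero the masked-out pixels
    n = valid.sum().clamp(min=1)      # the n of Eqn. 4
    return (d ** 2).sum() / n - lam * d.sum() ** 2 / n ** 2
```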
3.4 Data Augmentation
We augment the training data with random online transformations (values shown for NYUDepth)²:
• Scale: Input and target images are scaled by s ∈ [1, 1.5], and the depths are divided by s.
• Rotation: Input and target are rotated by r ∈ [−5, 5] degrees.
• Translation: Input and target are randomly cropped to the sizes indicated in Fig. 1.
• Color: Input values are multiplied globally by a random RGB value c ∈ [0.8, 1.2]³.
• Flips: Input and target are horizontally flipped with 0.5 probability.
Note that image scaling and translation do not preserve the world-space geometry of the scene. This
is easily corrected in the case of scaling by dividing the depth values by the scale s (making the
image s times larger effectively moves the camera s times closer). Although translations are not
easily fixed (they effectively change the camera to be incompatible with the depth values), we found
that the extra data they provided benefited the network even though the scenes they represent were
slightly warped. The other transforms, flips and in-plane rotation, are geometry-preserving. At test
time, we use a single center crop at scale 1.0 with no rotation or color transforms.
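As an illustration, a sketch of the scale augmentation with its depth correction, plus a flip; the use of scipy for resampling and the interpolation orders are our choices, not the paper's:

```python
import numpy as np
from scipy.ndimage import zoom

def augment(rgb, depth, rng):
    """Scale by s in [1, 1.5] and divide the depths by s (making the image
    s times larger effectively moves the camera s times closer); then flip
    horizontally with probability 0.5. rgb is HxWx3, depth is HxW."""
    s = rng.uniform(1.0, 1.5)
    rgb = zoom(rgb, (s, s, 1), order=1)       # bilinear for the image
    depth = zoom(depth, (s, s), order=0) / s  # nearest keeps missing values
    if rng.random() < 0.5:
        rgb, depth = rgb[:, ::-1], depth[:, ::-1]
    return rgb, depth

# rng = np.random.default_rng(0); rgb2, depth2 = augment(rgb, depth, rng)
```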
4 Experiments
We train our model on the raw versions of both NYU Depth v2 [18] and KITTI [3]. The raw distributions contain many additional images collected from the same scenes as in the more commonly used
small distributions, but with no preprocessing; in particular, points for which there is no depth value
are left unfilled. However, our model?s natural ability to handle such gaps as well as its demand for
large training sets make these fitting sources of data.
4.1 NYU Depth
The NYU Depth dataset [18] is composed of 464 indoor scenes, taken as video sequences using
a Microsoft Kinect camera. We use the official train/test split, using 249 scenes for training and
215 for testing, and construct our training set using the raw data for these scenes. RGB inputs are
downsampled by half, from 640x480 to 320x240. Because the depth and RGB cameras operate at
different variable frame rates, we associate each depth image with its closest RGB image in time,
and throw away frames where one RGB image is associated with more than one depth (such a one-to-many mapping is not predictable). We use the camera projections provided with the dataset to
align RGB and depth pairs; pixels with no depth value are left missing and are masked out. To
remove many invalid regions caused by windows, open doorways and specular surfaces we also
mask out depths equal to the minimum or maximum recorded for each image.
The training set has 120K unique images, which we shuffle into a list of 220K after evening the
scene distribution (1200 per scene). We test on the 694-image NYU Depth v2 test set (with filled-in
depth values). We train the coarse network for 2M samples using SGD with batches of size 32.
We then hold it fixed and train the fine network for 1.5M samples (given outputs from the alreadytrained coarse one). Learning rates are: 0.001 for coarse convolutional layers 1-5, 0.1 for coarse full
layers 6 and 7, 0.001 for fine layers 1 and 3, and 0.01 for fine layer 2. These ratios were found by
trial-and-error on a validation set (folded back into the training set for our final evaluations), and the
global scale of all the rates was tuned to a factor of 5. Momentum was 0.9. Training took 38h for
the coarse network and 26h for fine, for a total of 2.6 days using a NVidia GTX Titan Black. Test
prediction takes 0.33s per batch (0.01s/image).
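These per-layer rates map directly onto optimizer parameter groups; for example, reusing the sketch networks from Section 3.1 (the grouping and attribute names come from that sketch, not the paper):

```python
import torch

coarse_opt = torch.optim.SGD([
    {"params": coarse.features.parameters(), "lr": 0.001},  # conv layers 1-5
    {"params": coarse.fc6.parameters(), "lr": 0.1},         # full layer 6
    {"params": coarse.fc7.parameters(), "lr": 0.1},         # full layer 7
], momentum=0.9)

fine_opt = torch.optim.SGD([
    {"params": fine.conv1.parameters(), "lr": 0.001},  # fine layer 1
    {"params": fine.conv2.parameters(), "lr": 0.01},   # fine layer 2
    {"params": fine.conv3.parameters(), "lr": 0.001},  # fine layer 3
    {"params": fine.conv4.parameters(), "lr": 0.001},  # output (rate assumed)
], momentum=0.9)
```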
² For KITTI, s ∈ [1, 1.2], and rotations are not performed (images are horizontal from the camera mount).
4.2 KITTI
The KITTI dataset [3] is composed of several outdoor scenes captured while driving with car-mounted cameras and a depth sensor. We use 56 scenes from the "city," "residential," and "road"
categories of the raw data. These are split into 28 for training and 28 for testing. The RGB images
are originally 1224x368, and downsampled by half to form the network inputs.
The depth for this dataset is sampled at irregularly spaced points, captured at different times using
a rotating LIDAR scanner. When constructing the ground truth depths for training, there may be
conflicting values; since the RGB cameras shoot when the scanner points forward, we resolve conflicts at each pixel by choosing the depth recorded closest to the RGB capture time. Depth is only
provided within the bottom part of the RGB image, however we feed the entire image into our model
to provide additional context to the global coarse-scale network (the fine network sees the bottom
crop corresponding to the target area).
The training set has 800 images per scene. We exclude shots where the car is stationary (acceleration
below a threshold) to avoid duplicates. Both left and right RGB cameras are used, but are treated
as unassociated shots. The training set has 20K unique images, which we shuffle into a list of 40K
(including duplicates) after evening the scene distribution. We train the coarse model first for 1.5M
samples, then the fine model for 1M. Learning rates are the same as for NYU Depth. Training took
30h for the coarse model and 14h for fine; test prediction takes 0.40s/batch (0.013s/image).
4.3 Baselines and Comparisons
We compare our method against Make3D trained on the same datasets, as well as the published
results of other current methods [12, 7]. As an additional reference, we also compare to the mean
depth image computed across the training set. We trained Make3D on KITTI using a subset of 700
images (25 per scene), as the system was unable to scale beyond this size. Depth targets were filled
in using the colorization routine in the NYUDepth development kit. For NYUDepth, we used the
common distribution training set of 795 images. We evaluate each method using several errors from
prior works, as well as our scale-invariant metric:
• Threshold: % of y_i s.t. max(y_i/y_i*, y_i*/y_i) = δ < thr
• Abs Relative difference: (1/|T|) Σ_{y∈T} |y − y*| / y*
• Squared Relative difference: (1/|T|) Σ_{y∈T} ‖y − y*‖² / y*
• RMSE (linear): sqrt((1/|T|) Σ_{y∈T} ‖y_i − y_i*‖²)
• RMSE (log): sqrt((1/|T|) Σ_{y∈T} ‖log y_i − log y_i*‖²)
• RMSE (log, scale-invariant): The error of Eqn. 1
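Read literally, these metrics amount to the following numpy computations over the valid pixels T of one image (our transcription):

```python
import numpy as np

def eval_metrics(pred, gt):
    """pred, gt: 1-D arrays of valid depths for one image."""
    ratio = np.maximum(pred / gt, gt / pred)
    return {
        "delta<1.25":   (ratio < 1.25).mean(),
        "delta<1.25^2": (ratio < 1.25 ** 2).mean(),
        "delta<1.25^3": (ratio < 1.25 ** 3).mean(),
        "abs_rel":      (np.abs(pred - gt) / gt).mean(),
        "sqr_rel":      ((pred - gt) ** 2 / gt).mean(),
        "rmse_lin":     np.sqrt(((pred - gt) ** 2).mean()),
        "rmse_log":     np.sqrt(((np.log(pred) - np.log(gt)) ** 2).mean()),
    }
```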
Note that the predictions from Make3D and our network correspond to slightly different center crops
of the input. We compare them on the intersection of their regions, and upsample predictions to the
full original input resolution using nearest-neighbor. Upsampling negligibly affects performance
compared to downsampling the ground truth and evaluating at the output resolution.³
5 Results
5.1 NYU Depth
Results for NYU Depth dataset are provided in Table 1. As explained in Section 4.3, we compare
against the data mean and Make3D as baselines, as well as Karsch et al. [7] and Ladicky et al. [12].
(Ladicky et al. uses a joint model which is trained using both depth and semantic labels). Our system
achieves the best performance on all metrics, obtaining an average 35% relative gain compared to
the runner-up. Note that our system is trained using the raw dataset, which contains many more
example instances than the data used by other approaches, and is able to effectively leverage it to
learn relevant features and their associations.
This dataset breaks many assumptions made by Make3D, particularly horizontal alignment of the
ground plane; as a result, Make3D has relatively poor performance in this task. Importantly, our
method improves over it on both scale-dependent and scale-invariant metrics, showing that our system is able to predict better relations as well as better means.
Qualitative results are shown on the left side of Fig. 4, sorted top-to-bottom by scale-invariant MSE.
Although the fine-scale network does not improve in the error measurements, its effect is clearly
visible in the depth maps ? surface boundaries have sharper transitions, aligning to local details.
However, some texture edges are sometimes also included. Fig. 3 compares Make3D as well as
³ On NYUDepth, log RMSE is 0.285 vs 0.286 for upsampling and downsampling, respectively, and scale-invariant RMSE is 0.219 vs 0.221. The intersection is 86% of the network region and 100% of Make3D for
NYUDepth, and 100% of the network and 82% of Make3D for KITTI.
                         Mean    Make3D  Ladicky&al  Karsch&al  Coarse  Coarse+Fine
threshold δ < 1.25       0.418   0.447   0.542       -          0.618   0.611
threshold δ < 1.25²      0.711   0.745   0.829       -          0.891   0.887
threshold δ < 1.25³      0.874   0.897   0.940       -          0.969   0.971
abs relative difference  0.408   0.349   -           0.350      0.228   0.215
sqr relative difference  0.581   0.492   -           -          0.223   0.212
RMSE (linear)            1.244   1.214   -           1.2        0.871   0.907
RMSE (log)               0.430   0.409   -           -          0.283   0.285
RMSE (log, scale inv.)   0.304   0.325   -           -          0.221   0.219

(Higher is better for the threshold rows; lower is better for the rest.)

Table 1: Comparison on the NYUDepth dataset
[Figure 3 panels, left to right: input, Make3D, coarse, l2, l2 + scale-inv, ground truth.]

Figure 3: Qualitative comparison of Make3D, our method trained with l2 loss (λ = 0), and our
method trained with both l2 and scale-invariant loss (λ = 0.5).
outputs from our network trained using losses with λ = 0 and λ = 0.5. While we did not observe
numeric gains using λ = 0.5, it did produce slight qualitative improvements in more detailed areas.
5.2 KITTI
We next examine results on the KITTI driving dataset. Here, the Make3D baseline is well-suited
to the dataset, being composed of horizontally aligned images, and achieves relatively good results.
Still, our method improves over it on all metrics, by an average 31% relative gain. Just as importantly, there is a 25% gain in both the scale-dependent and scale-invariant RMSE errors, showing
there is substantial improvement in the predicted structure. Again, the fine-scale network does not
improve much over the coarse one in the error metrics, but differences between the two can be seen
in the qualitative outputs.
The right side of Fig. 4 shows examples of predictions, again sorted by error. The fine-scale network
produces sharper transitions here as well, particularly near the road edge. However, the changes are
somewhat limited. This is likely caused by uncorrected alignment issues between the depth map
and input in the training data, due to the rotating scanner setup. This dissociates edges from their
true position, causing the network to average over their more random placements. Fig. 3 shows
Make3D performing much better on this data, as expected, while using the scale-invariant error as a
loss seems to have little effect in this case.
                         Mean    Make3D  Coarse  Coarse+Fine
threshold δ < 1.25       0.556   0.601   0.679   0.692
threshold δ < 1.25²      0.752   0.820   0.897   0.899
threshold δ < 1.25³      0.870   0.926   0.967   0.967
abs relative difference  0.412   0.280   0.194   0.190
sqr relative difference  5.712   3.012   1.531   1.515
RMSE (linear)            9.635   8.734   7.216   7.156
RMSE (log)               0.444   0.361   0.273   0.270
RMSE (log, scale inv.)   0.359   0.327   0.248   0.246

(Higher is better for the threshold rows; lower is better for the rest.)

Table 2: Comparison on the KITTI dataset.

6 Discussion
Predicting depth estimates from a single image is a challenging task. Yet by combining information
from both global and local views, it can be performed reasonably well. Our system accomplishes
this through the use of two deep networks, one that estimates the global depth structure, and another
that refines it locally at finer resolution. We achieve a new state-of-the-art on this task for NYU
Depth and KITTI datasets, having effectively leveraged the full raw data distributions.
In future work, we plan to extend our method to incorporate further 3D geometry information,
such as surface normals. Promising results in normal map prediction have been made by Fouhey
et al. [2], and integrating them along with depth maps stands to improve overall performance [16].
We also hope to extend the depth maps to the full original input resolution by repeated application
of successively finer-scaled local networks.
Figure 4: Example predictions from our algorithm. NYUDepth on left, KITTI on right. For each
image, we show (a) input, (b) output of coarse network, (c) refined output of fine network, (d) ground
truth. The fine scale network edits the coarse-scale input to better align with details such as object
boundaries and wall edges. Examples are sorted from best (top) to worst (bottom).
Acknowledgements
The authors are grateful for support from ONR #N00014-13-1-0646, NSF #1116923, #1149633 and
Microsoft Research.
References
[1] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
[2] D. F. Fouhey, A. Gupta, and M. Hebert. Data-driven 3D primitives for single image understanding. In ICCV, 2013.
[3] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun. Vision meets robotics: The KITTI dataset. International Journal of Robotics Research (IJRR), 2013.
[4] R. Hadsell, P. Sermanet, J. Ben, A. Erkan, M. Scoffier, K. Kavukcuoglu, U. Muller, and Y. LeCun. Learning long-range vision for autonomous off-road driving. Journal of Field Robotics, 26(2):120–144, 2009.
[5] R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, ISBN: 0521540518, second edition, 2004.
[6] D. Hoiem, A. A. Efros, and M. Hebert. Automatic photo pop-up. In ACM SIGGRAPH, pages 577–584, 2005.
[7] K. Karsch, C. Liu, S. B. Kang, and N. England. Depth extraction from video using nonparametric sampling. In TPAMI, 2014.
[8] K. Konda and R. Memisevic. Unsupervised learning of depth and motion. In arXiv:1312.3429v2, 2013.
[9] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[10] A. Levin, R. Fergus, F. Durand, and W. T. Freeman. Image and depth from a conventional camera with a coded aperture. In SIGGRAPH, 2007.
[11] C. Liu, J. Yuen, A. Torralba, J. Sivic, and W. Freeman. SIFT Flow: dense correspondence across different scenes. 2008.
[12] L. Ladicky, J. Shi, and M. Pollefeys. Pulling things out of perspective. In CVPR, 2014.
[13] R. Memisevic and C. Conrad. Stereopsis via deep learning. In NIPS Workshop on Deep Learning, 2011.
[14] J. Michels, A. Saxena, and A. Y. Ng. High speed obstacle avoidance using monocular vision and reinforcement learning. In ICML, pages 593–600, 2005.
[15] A. Saxena, S. H. Chung, and A. Y. Ng. Learning depth from single monocular images. In NIPS, 2005.
[16] A. Saxena, M. Sun, and A. Y. Ng. Make3D: Learning 3-D scene structure from a single still image. TPAMI, 2008.
[17] D. Scharstein and R. Szeliski. A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. IJCV, 47:7–42, 2002.
[18] N. Silberman, D. Hoiem, P. Kohli, and R. Fergus. Indoor segmentation and support inference from RGBD images. In ECCV, 2012.
[19] F. H. Sinz, J. Q. Candela, G. H. Bakir, C. E. Rasmussen, and M. O. Franz. Learning depth from stereo. In Pattern Recognition, pages 245–252. Springer, 2004.
[20] N. Snavely, S. M. Seitz, and R. Szeliski. Photo tourism: Exploring photo collections in 3D. 2006.
[21] K. Yamaguchi, T. Hazan, D. McAllester, and R. Urtasun. Continuous Markov random fields for robust stereo estimation. In arXiv:1204.1393v1, 2012.
5,014 | 554 | A Neurocomputer Board Based on the ANNA Neural Network Chip
Eduard Sackinger, Bernhard E. Boser, and Lawrence D. Jackel
AT&T Bell Laboratories
Crawfords Corner Road, Holmdel, NJ 07733
Abstract
A board is described that contains the ANNA neural-network chip and a
DSP32C digital signal processor. The ANNA (Analog Neural Network
Arithmetic unit) chip performs mixed analog/digital processing. The
combination of ANNA with the DSP allows high-speed, end-to-end execution of numerous signal-processing applications, including the preprocessing, the neural-net calculations, and the postprocessing steps. The
ANNA board evaluates neural networks 10 to 100 times faster than the
DSP alone. The board is suitable for implementing large (million connections) networks with sparse weight matrices. Three applications have
been implemented on the board: a convolver network for slant detection
of text blocks, a handwritten digit recognizer, and a neural network for
recognition-based segmentation.
1 INTRODUCTION
Many researchers have built neural-network chips, but few chips have been installed
in board-level systems, even though this next level of integration provides insights
and advantages that can't be attained on a chip testing station. Building a board
demonstrates whether or not the chip can be effectively integrated into the larger
systems required for real applications. A board also exposes bottlenecks in the
system data paths. Most importantly, a working board moves the neural-network
chip from the realm of a research exercise, to that of a practical system, readily
available to users whose primary interest is actual applications. An additional
bonus of carrying the integration to the board level is that the chip designer can
gain the user feedback that will assist in designing new chips with greater utility.
[Figure 1 block diagram: the ANNA chip and DSP32C linked by a 32-bit data bus and a 24-bit address bus, with weight/state SRAM, microcode input, and coder logic.]
Figure 1: Block Diagram of the ANNA Board
2 ARCHITECTURE
The neurocomputer board contains a special purpose chip called ANNA (Boser
et al., 1991), for the parallel evaluation of neuron functions (a squashing function
applied to a weighted sum) and a general purpose digital signal processor, DSP32C.
The board also contains interface and clock synchronization logic as well as 1 MByte
of static memory, SRAM (see Fig. 1). Two versions of this board with two different
bus interfaces have been built: a double-height VME board (see Fig. 2) and a
PC/AT board (see Fig. 3).
The ANNA neural network chip is an ALU (Arithmetic and Logic Unit) specialized for neural network functions. It contains a 12-bit wide state-data input, a
12-bit wide state-data output, a 12-bit wide weight-data input, and a 37-bit microinstruction input. The instructions that can be executed by the chip are the following (parameters are not shown):
RFSH Write weight values from the weight-data input into the dynamic on-chip
weight storage.
SHIFT Shift on-chip barrel shifter to the left and load up to four new state values
from state-data input into the right end of the shifter.
STORE Transfer state vector from the shifter into the on-chip state storage and/or
into the state-data latches of the arithmetic unit.
CALC Calculate eight dot-products between on-chip weight vectors and the contents
of the above mentioned data latches; subsequently evaluate the squashing function.
OUT Transfer the results of the calculation to the state-data output.
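To make the instruction set concrete, the following C fragment sketches how a host program might drive the chip through one layer of computation. The anna_* wrappers are hypothetical stand-ins for the board's actual microcode interface, shown only to make the data flow explicit:

    #include <stdint.h>

    /* Hypothetical wrappers around the board's microcode interface
       (declared elsewhere); names and signatures are illustrative only. */
    extern void anna_rfsh(void);
    extern void anna_shift(const uint8_t *states);  /* up to 4 state values */
    extern void anna_store(void);
    extern void anna_calc(void);
    extern void anna_out(uint8_t *results);         /* 8 neuron outputs     */

    void anna_layer(const uint8_t *in, uint8_t *out, int n_vecs) {
        anna_rfsh();                         /* refresh on-chip weights          */
        for (int v = 0; v < n_vecs; v++) {
            for (int k = 0; k < 4; k++)      /* 16-element state vector          */
                anna_shift(&in[16 * v + 4 * k]);
            anna_store();                    /* latch shifter into data latches  */
            anna_calc();                     /* 8 dot products + squashing       */
            anna_out(&out[8 * v]);           /* read back 8 outputs              */
        }
    }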
Figure 2: ANNA Board with VME Bus Interface
Figure 3: ANNA Board with PC/AT Bus Interface
Figure 4: Photo Micrograph of the ANNA Chip
Some of the instructions (like SHIFT and CALC) can be executed in parallel. The
barrel shifter at the input as well as the on-chip state storage make the ANNA
chip very effective for evaluating locally-connected, weight-sharing networks such
as feature extraction and time-delay neural networks (TDNN).
The ANNA neural network chip, implemented in a 0.9-µm CMOS technology, contains 180,000 transistors on a 4.5 × 7 mm² die (see Fig. 4). The chip implements
4,096 physical synapses which can be time multiplexed in order to realize networks
with many more than 4,096 connections. The resolution of the synaptic weights is
6 bits and that of the states (input/output of the neurons) is 3 bits. Additionally,
a 4-bit scaling factor can be programmed for each neuron to extend the dynamic
range of the weights. The weight values are stored as charge packets on capacitors
and are periodically refreshed by two on-chip 6-bit D/A converters. The synapses
are realized by multiplying 3-bit D/A converters (analog weight times digital state).
The analog results of this multiplication are added by means of current summing
and then converted back to digital by a saturating 3-bit A/D converter. Although
the chip uses analog computing internally, all input/output is digital. This combines
the advantages of the high synaptic density, the high speed, and the low power of
analog with the ease of interfacing to a digital system like a digital signal processor
(DSP).
The 32-bit floating-point digital signal processor (DSP32C) on the same board
runs at 40 MHz without wait states (100 ns per instruction) and is connected to
1 MByte of static RAM. The DSP has several functions: (1) It generates the micro
instructions for the ANNA chip. (2) It is responsible for accessing the pixel, feature,
and weight data from the memory and then storing the results of the chip in the
memory. (3) If the precision of the ANNA chip is not sufficient the DSP can do the
calculations with 32-bit floating-point precision. (4) Learning algorithms can be run
A Neurocomputer Board Based on the ANNA Neural Network Chip
on the DSP. (5) The DSP is useful as a pre- and post-processor for neural networks.
In this way a whole task can be carried out on the board without exchanging
intermediate results with the host.
As shown by Fig. 1 ANNA instructions are supplied over the DSP address bus, while
state and weight data is transferred over the data bus. This arrangement makes
it possible to supply or store ANNA data and execute a micro instruction simultaneously, i.e., using only one DSP instruction. The ANNA clock is automatically
generated whenever the DSP issues a micro instruction to the ANNA chip.
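A sketch of what such a memory-mapped coupling can look like in C; the base address and bit layout below are invented for illustration and do not reflect the board's real memory map:

    #include <stdint.h>

    /* Hypothetical memory map: the ANNA micro instruction is encoded in the
     * address bits, so a single DSP memory access both issues an instruction
     * (address bus) and moves state/weight data (data bus). */
    #define ANNA_BASE   0x00800000u
    #define ANNA(instr) (*(volatile uint32_t *)(ANNA_BASE | ((instr) & 0xFFFFFFu)))

    static inline void     anna_issue(uint32_t instr, uint32_t data) { ANNA(instr) = data; }
    static inline uint32_t anna_read(uint32_t instr)                 { return ANNA(instr); }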
3 PERFORMANCE
Using a DSP for supplying micro instructions as well as accessing the data from the
memory makes the board very flexible and fairly simple. Both data and instruction
flow to and from the ANNA chip are under software control and can be programmed
using the C or DSP32C assembly language.
Because of DSP32C features such as one-instruction 32-bit memory-to-memory
transfer with auto increment and overhead free looping, ANNA instruction sequences can be generated at a rate of approximately 5 MIPS. A similar rate of
5 MByte/s is achieved for reading and writing ANNA data from and to the memory.
The speed of the board depends on the application and how well it makes use
of the chip's parallelism and ranges between 30 MC/s and 400 MC/s. For concrete
examples see the section on Applications. Compared to the DSP32C which performs
at about 3 MC/s (for sparsely connected networks) the board with the ANNA chip
is 10 to 100 times faster.
The speed of the board is not limited by the ANNA chip but by the above mentioned
data rates. The use of a dedicated hardware sequencer will improve the speed by
up to ten times. The board can thus be used for prototyping an application, before
building more specialized hardware.
4 SOFTWARE
To make the board easily usable we implemented a LISP interpreter on the host
computer (a SUN workstation) which allows us to make remote procedure calls
(RPC) to the ANNA board. After starting the LISP interpreter on the host it will
download the DSP object code to the board and start the main program on the
DSP. Then, the DSP will transfer the addresses of all procedures that are available
to the user to the LISP interpreter. From then on, all these procedures can be called
as LISP functions of the form (==> anna procedure parameter(s)) from the host.
Parameters and return value are handled automatically by the LISP interpreter.
Three ways of using the ANNA board are described. The first two methods do not
require DSP programming; everything is controlled from the LISP interpreter. The
third method requires DSP programming and results in maximum speed for any
application.
1. The simplest way to use the board together with this LISP interpreter is to
call existing library functions on the board. For example a neural network for
recognizing handwritten digits can be called as follows:
(==> anna down-weight weight-matrix)
(setq class (==> anna down-rec-up digit-pattern))
The first LISP function activates the down-weight function on the ANNA board
that transfers the LISP matrix, weight-matrix, to the board. This function defines
all the weights of the network and has to be called only once. The second LISP
function calls the down-rec-up function which takes the digit-pattern (pixel image)
as an input, downloads this pattern, runs the recognizer, and uploads the class
number (0 ... 9).
This method requires no knowledge of the ANNA or DSP instruction set. The
library functions are fast since they have been optimized by the implementer. At
the moment library functions for nonlinear convolution, character recognition, and
testing are available.
2. If a function which is not part of the library has to be implemented, an ANNA
program must be written. A collection of LISP functions (ANNANAS) supports the
translation of symbolic ANNA programs into micro code. The micro code is then
run on the ANNA chip by means of a software sequencer implemented on the DSP.
Assembling and running a simple ANNA program using ANNANAS looks like this:
(anna-repeat 16)        REPEAT 16          ; start of loop
(anna-shift 4 0)        SHIFT 4,R0;        ; ANNA shift instruction
(anna-store 0 'a 2)     STORE R0,A.L2;     ; ANNA store instruction
(anna-endrep)           ENDREP             ; end of loop
(anna-stop)             STOP               ; end of program
(anna-run 0)                               ; start sequencer
In this way, all the features of the ANNA chip and board can be used without
DSP programming. This mode is also helpful for testing and debugging ANNA
programs. Besides the assembler, ANNANAS also provides several monitoring and
debugging tools.
3. If maximum speed is imperative, an application specific sequencer has to be
written (as opposed to the slower generic sequencer described above). To do this
a DSP assembler and C compiler are required. A toolbox of assembly macros and
C functions helps implementing this sequencer. Besides the sequencer, pre- and
post-processing software can also be implemented on the fast DSP hardware. After
successfully testing the program it can be added to the library as a new function.
5 APPLICATIONS
5.1 CONVOLVER NETWORK
In this application the ANNA chip is configured for 16 neurons with 256 synapses
each. First, each of these neurons connects to the upper left 16 × 16 field of a
Table 1: Performance of the Recognizer.

IMPLEMENTATION        ERROR RATE    REJECT RATE FOR 1% ERROR
Full Precision        4.9%          9.1%
ANNA/DSP              5.3 ± 0.2%    13.5 ± 0.8%
ANNA/DSP/Retraining   4.9 ± 0.2%    11.5 ± 0.8%
5.3 RECOGNITION-BASED SEGMENTATION
Before individual digits can be passed to a recognizer as described in the previous
section, they typically have to be isolated (segmented) from a string of characters
(e.g. a ZIP code). When characters overlap, segmentation is a difficult problem
and simple algorithms which look for connected components or histograms fail.
A promising solution to this problem is to combine recognition and segmentation
(Keeler et al., 1992; Matan et al., 1992). For instance, recognizers like the one
described above can be replicated horizontally and vertically over the region of interest. This will guarantee that there is a recognizer centered over each character.
It is crucial, however, to train the recognizer such that it rejects partial characters.
Such a replicated version of the recognizer (at 31 × 6 locations) with approximately 2 million connections has been implemented on the ANNA board and was
used to segment ZIP codes.
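A rough sketch of this replication scheme in C; recognize() and window_at() are assumed helpers standing in for the network evaluation on the board and for window extraction, and only the 31 × 6 grid follows the text:

    /* Replicate a single-character recognizer over a grid of window positions
     * and keep, per position, the winning class and its score. */
    #define NX 31
    #define NY 6

    typedef struct { int class_; float score; } Hit;

    extern float recognize(const float *window, int cls);   /* net output for cls */
    extern const float *window_at(const float *image, int x, int y);

    void segment_scores(const float *image, Hit hits[NY][NX]) {
        for (int y = 0; y < NY; y++)
            for (int x = 0; x < NX; x++) {
                const float *w = window_at(image, x, y);
                Hit best = { -1, -1e30f };
                for (int cls = 0; cls < 10; cls++) {         /* digits 0..9 */
                    float s = recognize(w, cls);
                    if (s > best.score) best = (Hit){ cls, s };
                }
                hits[y][x] = best;
            }
    }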
6 CONCLUSION
A board with a neural-network chip and a digital signal processor (DSP) has been
built. Large pattern recognition applications have been implemented on the board
giving a speed advantage of 10 to 100 over the DSP alone.
Acknowledgements
The authors would like to thank Steve Deiss for his excellent job in building the
boards and Yann LeCun and Jane Bromley for their help with the digit recognizer.
References
Bernhard Boser, Eduard Sackinger, Jane Bromley, Yann LeCun, and Lawrence D.
Jackel. An analog neural network processor with programmable network topology.
IEEE J. Solid-State Circuits, 26(12):2017-2025, December 1991.
Yann Le Cun, Bernhard Boser, John S. Denker, Donnie Henderson, Richard E.
Howard, Wayne Hubbard, and Lawrence D. Jackel. Handwritten digit recognition
with a back-propagation network. In David S. Touretzky, editor, Neural Information Processing Systems, volume 2, pages 396-404. Morgan Kaufmann Publishers,
San Mateo, CA, 1990.
Eduard Sackinger, Bernhard Boser, Jane Bromley, Yann LeCun, and Lawrence D.
Jackel. Application of the ANNA neural network chip to high-speed character
recognition. IEEE Trans. Neural Networks, 3(2), March 1992.
J. D. Keeler and D. E. Rumelhart. Self-organizing segmentation and recognition
neural network. In J. M. Moody, S. J. Hanson, and R. P. Lippman, editors, Neural
Information Processing Systems, volume 4. Morgan Kaufmann Publishers, San
Mateo, CA, 1992.
Ofer Matan, Christopher J. C. Burges, Yann LeCun, and John S. Denker. Multi-digit recognition using a space displacement neural network. In J. M. Moody, S. J. Hanson,
and R. P. Lippman, editors, Neural Information Processing Systems, volume 4.
Morgan Kaufmann Publishers, San Mateo, CA, 1992.
| 554 |@word … (bag-of-words token counts omitted) |
5,015 | 5,540 | Optimal decision-making with time-varying evidence reliability
Jan Drugowitsch¹    Rubén Moreno-Bote²    Alexandre Pouget¹
¹ Dépt. des Neurosciences Fondamentales, Université de Genève, CH-1211 Genève 4, Switzerland
² Research Unit, Parc Sanitari Sant Joan de Déu and University of Barcelona, 08950 Barcelona, Spain
[email protected], [email protected], [email protected]
Abstract
Previous theoretical and experimental work on optimal decision-making was restricted to the artificial setting of a reliability of the momentary sensory evidence
that remained constant within single trials. The work presented here describes the
computation and characterization of optimal decision-making in the more realistic
case of an evidence reliability that varies across time even within a trial. It shows
that, in this case, the optimal behavior is determined by a bound in the decision
maker's belief that depends only on the current, but not the past, reliability. We
furthermore demonstrate that simpler heuristics fail to match the optimal performance for certain characteristics of the process that determines the time-course of
this reliability, causing a drop in reward rate by more than 50%.
1 Introduction
Optimal decision-making constitutes making optimal use of sensory information to maximize one's
overall reward, given the current task contingencies. Examples of decision-making are the decision
to cross the road based on the percept of incoming traffic, or the decision of an eagle to dive for
prey based on the uncertain information of the prey's presence and location. Any kind of decision-making based on sensory information requires some temporal accumulation of this information,
which makes such accumulation the first integral component of decision-making. Accumulating
evidence for a longer duration causes higher certainty about the stimulus but comes at the cost of
spending more time to commit to a decision. Thus, the second integral component of such decisionmaking is to decide when enough information has been accumulated to commit to a decision.
Previous work has established that, if the reliability of momentary evidence is constant within a
trial but might vary across trials, optimal decision-making can be implemented by a class of models
known as diffusion models [1, 2, 3]. Furthermore, it has been shown that the behavior of humans
and other animals at least qualitatively follow that predicted by such diffusion models [4, 5, 6, 3].
Our work significantly extends this work by moving from the rather artificial case of constant evidence reliability to allowing the reliability of evidence to change within single trials. Based on
a principled formulation of this problem, we describe optimal decision-making with time-varying
evidence reliability. Furthermore, a comparison to simpler decision-making heuristics demonstrates
when such heuristics fail to feature comparable performance. In particular, we derive Bayes-optimal
evidence accumulation for our task setup, and compute the optimal policy for such cases by dynamic
programming. To do so, we borrow concepts from continuous-time stochastic control to keep the
computational complexity linear in the process space size (rather than quadratic for the naïve approach). Finally, we characterize how the optimal policy depends on parameters that determine the
evidence reliability time-course, and show that simpler, heuristic policies fail to match the optimal
performance for particular sub-regions of this parameter space.
2 Perceptual decision-making with time-varying reliability
Within a single trial, the decision maker's task is to identify the state of a binary hidden variable, z ∈ {−1, 1} (with units s⁻¹, if time is measured in seconds), based on a stream of momentary evidence dx(t), t ≥ 0. This momentary evidence provides uncertain information about z by

    dx = z dt + (1/√τ(t)) dW,    where dτ = φ(μ − τ) dt + √(2σ²φ/μ) √τ dB,    (1)

where dW and dB are independent Wiener processes. In the above, τ(t) controls how informative the momentary evidence dx(t) is about z, such that τ(t) is the reliability of this momentary evidence. We assume its time-course to be described by the Cox-Ingersoll-Ross (CIR) process (τ(t) in Eq. (1)) [7]. Despite the simplicity of this model and its low number of parameters, it is sufficiently flexible in modeling how the evidence reliability changes with time, and ensures that τ ≥ 0, always¹. It is parameterized by the mean reliability, μ, its variance, σ², and its speed of change, φ, all of which we assume to be known to the decision maker. At the beginning of each trial, at t = 0, τ(0) is drawn from the process' steady-state distribution, which is gamma with shape μ²/σ² and scale σ²/μ [7]. It can be shown that, upon observing some momentary evidence, τ(t) can be immediately estimated with infinite precision, such that it is known for all t ≥ 0 (see supplement).
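As a minimal illustration of this generative model, the following C sketch simulates one trial with an Euler-Maruyama discretization of Eq. (1); randn() is an assumed standard-normal sampler, and the clamping of τ away from zero is a numerical convenience rather than part of the model:

    #include <math.h>

    extern double randn(void);   /* assumed standard normal sampler */

    void simulate_trial(int z, double mu, double sigma, double phi,
                        double dt, int n_steps, double *dx, double *tau) {
        double s = sqrt(2.0 * sigma * sigma * phi / mu);   /* CIR noise scale */
        tau[0] = mu;             /* or a draw from the gamma steady state     */
        for (int n = 0; n < n_steps; n++) {
            dx[n] = z * dt + sqrt(dt / tau[n]) * randn();
            if (n + 1 < n_steps) {
                double t_next = tau[n] + phi * (mu - tau[n]) * dt
                              + s * sqrt(tau[n] * dt) * randn();
                tau[n + 1] = t_next > 1e-8 ? t_next : 1e-8;  /* keep positive */
            }
        }
    }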
Optimal decision-making requires in each trial computing the posterior over z, given all evidence dx₀:t from trial onset to some time t. Assuming a uniform prior over z's, this posterior is given by

    g(t) ≡ p(z = 1 | dx₀:t) = 1 / (1 + e^(−2X(t))),    where X(t) = ∫₀ᵗ τ(s) dx(s),    (2)

(this has already been established in [8]; see supplement for derivation). Thus, at time t, the decision maker's belief g(t) that z = 1 is the sigmoid of the accumulated, reliability-weighted, momentary evidence up until that time.
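The belief computation of Eq. (2) reduces to a weighted sum followed by a sigmoid; a minimal sketch, with inputs coming e.g. from the simulation above:

    #include <math.h>

    double belief_from_evidence(const double *dx, const double *tau,
                                int n_steps, double *X_out) {
        double X = 0.0;
        for (int n = 0; n < n_steps; n++)
            X += tau[n] * dx[n];               /* X(t): reliability-weighted sum */
        if (X_out) *X_out = X;
        return 1.0 / (1.0 + exp(-2.0 * X));    /* g(t) = sigmoid of 2 X(t)       */
    }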
We consider two possible tasks. In the ER task, the decision maker is faced with a single trial in which correct (incorrect) decisions are rewarded by r⁺ (r⁻), and the accumulation of evidence comes at a constant cost (for example, attentional effort) of c per unit time. The decision maker's aim is then to maximize her expected reward, ER, including the cost for accumulating evidence. In the RR task, we consider a long sequence of trials, separated on average by the inter-trial interval tᵢ, which might be extended by the penalty time tₚ for wrong decisions. Maximizing reward in such a sequence equals maximizing the reward rate, RR, per unit time [9]. Thus, the objective function for either task is given by

    ER(PC, DT) = PC·r⁺ + (1 − PC)·r⁻ − c·DT,    RR(PC, DT) = ER(PC, DT) / (DT + tᵢ + (1 − PC)·tₚ),    (3)

where PC is the probability of performing a correct decision, and DT is the expected decision time. For notational convenience we assume r⁺ = 1 and r⁻ = 0. The work can be easily generalized to any choice of r⁺ and r⁻.
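Both objectives translate directly into code; a sketch with r⁺ = 1 and r⁻ = 0 as assumed in the text, useful e.g. inside a Monte Carlo evaluation of a candidate bound:

    /* Objective functions of Eq. (3). */
    double expected_reward(double PC, double DT, double c) {
        return PC - c * DT;
    }

    double reward_rate(double PC, double DT, double c, double ti, double tp) {
        return expected_reward(PC, DT, c) / (DT + ti + (1.0 - PC) * tp);
    }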
3 Finding the optimal policy by Dynamic Programming
3.1 Dynamic Programming formulation
Focusing first on the ER task of maximizing the expected reward in a single trial, the optimal policy can be described by bounds in belief² at g_θ(τ) and 1 − g_θ(τ) as functions of the current reliability, τ. Once either of these bounds is crossed, the decision maker chooses z = 1 (for g_θ(τ)) or z = −1 (for 1 − g_θ(τ)). The bounds are found by solving Bellman's equation [10, 9],

    V(g, τ) = max { V_d(g), ⟨V(g + δg, τ + δτ)⟩_p(δg,δτ|g,τ) − c·δt },    (4)

where V_d(g) = max {g, 1 − g}. Here, the value function V(g, τ) denotes the expected return for current state (g, τ) (i.e. holding belief g, and current reliability τ), which is the expected reward at

¹ We restrict ourselves to μ > σ, in which case τ(t) > 0 (excluding τ = 0) is guaranteed for all t ≥ 0.
² The subscript θ indicates the relation to the optimal decision bound θ.
Figure 1: Finding the optimal policy by dynamic programming. (a) illustrates the approach for the ER task. Here, V_d(g) and V_c(g, τ) denote the expected return for immediate decisions and that for continuing to accumulate evidence, respectively. (b) shows the same approach for RR tasks, in which, in an outer loop, the reward rate ρ is found by root finding.
this state within a trial, given that optimal choices are performed in all future states. The right-hand side of Bellman's equation is the maximum of the expected returns for either making a decision immediately, or continuing to accumulate more evidence and deciding later. When deciding immediately, one expects reward g (or 1 − g) when choosing z = 1 (or z = −1), such that the expected return for this choice is V_d(g). Continuing to accumulate evidence for another small time step δt comes at cost c·δt, but promises future expected return ⟨V(g + δg, τ + δτ)⟩_p(δg,δτ|g,τ), as expressed by the second term in max{·, ·} in Eq. (4). Given a V(g, τ) that satisfies Bellman's equation, it is easy to see that the optimal policy is to accumulate evidence until the expected return for doing so is exceeded by that for making immediate decisions. The belief g at which this happens differs for different reliabilities τ, such that the optimal policy is determined by a bound in belief, g_θ(τ), that depends on the current reliability.
We find the solution to Bellman's equation itself by value iteration on a discretized (g, τ)-space, as illustrated in Fig. 1(a). Value iteration is based on a sequence of value functions V⁰(g, τ), V¹(g, τ), …, where Vⁿ(g, τ) is given by the solution to the right-hand side of Eq. (4) with ⟨V(g + δg, τ + δτ)⟩ based on the previous value function Vⁿ⁻¹(g, τ). With n → ∞, this procedure guarantees convergence to the solution of Eq. (4). In practice, we terminate value iteration once max_{g,τ} |Vⁿ(g, τ) − Vⁿ⁻¹(g, τ)| drops below a pre-defined threshold. The only remaining difficulty is how to compute the expected future return ⟨V(·, ·)⟩ on the discretized (g, τ)-space, which we describe in more detail in the next section.
The RR task, in which the aim is to maximize the reward rate, requires the use of average-reward Dynamic Programming [9, 11], based on the average-adjusted expected return, Ṽ(g, τ). If ρ denotes the reward rate (avg. reward per unit time, RR in Eq. (3)), this expected return penalizes the passage of some time δt by −ρ·δt, and can be interpreted as how much better or worse the current state is than the average. It is relative to an arbitrary baseline, such that adding a constant to this return for all states does not change the resulting policy [11]. We remove this additional degree of freedom by fixing the average Ṽ(·, ·) at the beginning of a trial (where g = 1/2) to ⟨Ṽ(1/2, τ)⟩_p(τ) = 0, where the expectation is with respect to the steady-state distribution of τ. Overall, this leads to Bellman's equation,

    Ṽ(g, τ) = max { Ṽ_d(g), ⟨Ṽ(g + δg, τ + δτ)⟩_p(δg,δτ|g,τ) − (c + ρ)·δt }    (5)

with the average-adjusted expected return for immediate decisions given by

    Ṽ_d(g) = max { g − ρ(tᵢ + (1 − g)tₚ), 1 − g − ρ(tᵢ + g·tₚ) }.    (6)

The latter results from a decision being followed by the inter-trial interval tᵢ and an eventual penalty time tₚ for incorrect choices, after which the average-adjusted expected return is ⟨Ṽ(1/2, τ)⟩ = 0, as previously chosen. The value function is again computed by value iteration, assuming a known ρ. The correct ρ itself is found in an outer loop, by root-finding on the consistency condition, ⟨Ṽ(1/2, τ)⟩ = 0, as illustrated in Fig. 1(b).
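The outer loop of Fig. 1(b) is a one-dimensional root-finding problem in ρ; a bisection sketch, assuming a helper that runs value iteration on Eqs. (5)-(6) for a fixed ρ and returns the average-adjusted value at trial start, which decreases in ρ:

    double avg_adjusted_value(double rho);   /* assumed: <V~(1/2, tau)> */

    double find_reward_rate(double lo, double hi, double tol) {
        while (hi - lo > tol) {
            double mid = 0.5 * (lo + hi);
            if (avg_adjusted_value(mid) > 0.0)
                lo = mid;   /* rho too small: start-of-trial value positive */
            else
                hi = mid;
        }
        return 0.5 * (lo + hi);
    }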
3.2 Finding ⟨V(g + δg, τ + δτ)⟩ as solution to a PDE
Performing value iteration on Eq. (4) requires computing the expectation ⟨V(g + δg, τ + δτ)⟩_p(δg,δτ|g,τ) on a discretized (g, τ) space. Naïvely, we could perform the
required integration by the rectangle method or related methods, but this has several disadvantages. First, the method scales quadratically in the size of the (g, τ) space. Second, with δt → 0, p(δg, δτ | g, τ) becomes singular, such that small time discretization requires even smaller state discretization. Third, it requires explicit computation of p(δg, δτ | g, τ), which might be cumbersome.
Instead, we borrow methods from stochastic optimal control [12] to find the expectation as the solution to a partial differential equation (PDE). To do so, we link V(g, τ) to ⟨V(g + δg, τ + δτ)⟩, by considering how g and τ evolve from some time t to time t + δt. Defining u(g, τ, t) ≡ V(g, τ) and u(g, τ, t + δt) ≡ ⟨V(g + δg, τ + δτ)⟩, and replacing this expectation by its second-order Taylor expansion around (g, τ), we find that, with δt → 0, we have

    ∂u/∂t = ( (⟨dg⟩/dt) ∂/∂g + (⟨dτ⟩/dt) ∂/∂τ + (⟨dg²⟩/2dt) ∂²/∂g² + (⟨dτ²⟩/2dt) ∂²/∂τ² + (⟨dg dτ⟩/dt) ∂²/∂g∂τ ) u,    (7)
with all expectations implicitly conditional on g and τ. If we approximate the partial derivatives with respect to g and τ by their central finite differences, and denote uⁿₖⱼ ≡ u(gₖ, τⱼ, t) and uⁿ⁺¹ₖⱼ ≡ u(gₖ, τⱼ, t + δt) (gₖ and τⱼ are the discretized state nodes), applying the Crank-Nicolson method [13] to the above PDE results in the linear system

    Lⁿ⁺¹ uⁿ⁺¹ = Lⁿ uⁿ    (8)

where both Lⁿ and Lⁿ⁺¹ are sparse matrices, and the u's are vectors that contain all uₖⱼ. Computing
⟨V(g + δg, τ + δτ)⟩ now conforms to solving the above linear system with respect to uⁿ⁺¹. As the process on g and τ only appears through its infinitesimal moments in Eq. (7), this approach neither requires explicit computation of p(δg, δτ | g, τ) nor suffers from singularities in this density. It still scales quadratically with the state space discretization, but we achieve linear scaling by switching from the Crank-Nicolson to the Alternating Direction Implicit (ADI) method [13] (see supplement for details). This method splits the computation into two steps of size δt/2, in each of which the partial derivatives are only implicit with respect to one of the two state space dimensions. This results in a tri-diagonal structure of the linear system, and an associated reduction of the computational complexity while preserving the numerical robustness of the Crank-Nicolson method [13].
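The tridiagonal systems arising in each ADI half-step can be solved in linear time with the Thomas algorithm; a minimal sketch without pivoting, which assumes the diagonally dominant matrices typical of such discretizations:

    /* Solve a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i] for i = 0..n-1. */
    void thomas_solve(int n, const double *a, const double *b,
                      const double *c, double *d, double *x) {
        double cp[n], dp[n];                 /* forward-sweep scratch (C99 VLA) */
        cp[0] = c[0] / b[0];
        dp[0] = d[0] / b[0];
        for (int i = 1; i < n; i++) {
            double m = b[i] - a[i] * cp[i - 1];
            cp[i] = c[i] / m;
            dp[i] = (d[i] - a[i] * dp[i - 1]) / m;
        }
        x[n - 1] = dp[n - 1];
        for (int i = n - 2; i >= 0; i--)     /* back substitution */
            x[i] = dp[i] - cp[i] * x[i + 1];
    }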
The PDE approach requires us to specify how V (and thus u) behaves at the boundaries, g ∈ {0, 1} and τ ∈ {0, ∞}. Beliefs g ∈ {0, 1} imply complete certainty about the latent variable z, such that a decision is imminent. This implies that, at these beliefs, we have V(g, τ) = V_d(g) for all τ. With τ → ∞, the reliability of the momentary evidence becomes overwhelming, such that the latent variable z is again immediately known, resulting in V(g, τ) → V_d(1) (= V_d(0)) for all g. For τ = 0, the infinitesimal moments are ⟨dg⟩ = ⟨dg²⟩ = ⟨dτ²⟩ = 0, and ⟨dτ⟩ = φμ dt, such that g remains unchanged and τ drifts deterministically towards positive values. Thus, there is no leakage of V towards τ < 0, which makes this lower boundary well-defined.
4 Results
We first provide an example of an optimal policy and how it shapes behavior, followed by how different parameters of the process on the evidence reliability τ and different task parameters influence the shape of the optimal bound g_θ(τ). Then, we compare the performance of these bounds to the performance that can be achieved by simple heuristics, like the diffusion model with a constant bound, or a bound in belief independent of τ.
In all cases, we computed the optimal bounds by dynamic programming on a 200 × 200 grid on (g, τ), using δt = 0.005. g spanned its whole [0, 1] range, and τ ranged from 0 to twice the 99th percentile of its steady-state distribution. We used max_{g,τ} |Vⁿ(g, τ) − Vⁿ⁻¹(g, τ)| ≤ 10⁻³·δt as convergence criterion for value iteration.
4.1 Decision-making with reliability-dependent bounds
Figure 2(a) shows one example of an optimal policy (black lines) for an ER task with evidence accumulation cost of c = 0.1 and τ-process parameters μ = 0.4, σ = 0.2, and φ = 1. This policy can be understood as follows. At the beginning of each trial, the decision maker starts at
Figure 2: Decision-making with the optimal policy. (a) shows the optimal bounds, at g_θ(τ) and 1 − g_θ(τ) (black), and an example trajectory (grey). The dashed curve shows the steady-state distribution of the τ-process. (b) shows the τ-component (evidence reliability) of this example trajectory over time. Even though not a jump-diffusion process, the CIR process can feature jump-like transitions, here at around 1s. (c) shows the g-component (belief) of this trajectory over time (grey), and how the change in evidence reliability changes the bounds on this belief (black). Note that the bound fluctuates rapidly due to the rapid fluctuation of τ, even though the bound itself is continuous in τ.
g(0) = 1/2 and some τ(0) drawn from the steady-state distribution over τ's (dashed curve in Fig. 2(a)). When accumulating evidence, the decision maker's belief g(t) starts diffusing and drifting towards either 1 or 0, following the dynamics described in Eqs. (1) and (2). At the same time, the reliability τ(t) changes according to the CIR process, Eq. (1) (Fig. 2(b)). In combination, this leads to a two-dimensional trajectory in the (g, τ) space (Fig. 2(a), grey line). A decision is reached once this trajectory reaches either g_θ(τ) or 1 − g_θ(τ) (Fig. 2(a), black lines). In belief space, this corresponds to a bound that changes with the current reliability. For the example trajectory in Fig. 2, this reliability jumps to higher values after around 1s (Fig. 2(b)), which leads to a corresponding jump of the bound to higher levels of confidence (black line in Fig. 2(c)).
In general, the optimal bound is an increasing function in τ. Thus, the larger the current reliability of the momentary evidence, the more sense it makes to accumulate evidence to a higher level of confidence before committing to a choice. This is because a low evidence reliability implies that, at least in the close future, this reliability will remain low, such that it does not make sense to pay the cost for accumulating evidence without the associated gain in choice accuracy. A higher evidence reliability implies that high levels of confidence, and associated choice accuracy, are reached more quickly, and thus at a lower cost. This also indicates that a decision bound increasing in τ does not imply that high-reliability evidence will lead to slower choices. In fact, the opposite is true, as a faster move towards higher confidence for high reliability causes faster decisions in such cases.
4.2 Optimal bounds for different reliability/task parameters
To see how different parameters of the CIR process on the reliability influence the optimal decision bound, we compared bounds where one of its parameters is systematically varied. In all cases, we assumed an ER task with c = 0.1, and default CIR process parameters μ = 0.4, σ = 0.2, φ = 2.
Figure 3(a) shows how the bound differs for different means μ of the CIR process. A lower mean implies that, on average, the task will be harder, such that more evidence needs to be accumulated to reach the same level of performance. This accumulation comes at a cost, such that the optimal policy is to stop accumulating earlier in harder tasks. This causes lower decision bounds for smaller μ. Fig. 3(b) shows that the optimal bound only very weakly depends on the standard deviation σ of the reliability process. This standard deviation determines how far τ can deviate from its mean, μ. The weak dependence of the bound on this parameter shows that it is not that important to which degree τ fluctuates, as long as it fluctuates with the same speed, φ. This speed has a strong influence on the optimal bound, as shown in Fig. 3(c). For a slowly changing τ (low φ), the current τ is likely to remain the same in the future, such that the optimal bound strongly depends on τ. For a rapidly changing τ, in contrast, the current τ does not provide much information about future reliabilities, such that the optimal bound features only a very weak dependence on the current evidence reliability.
Similar observations can be made for changes in task parameters. Figure 3(d) illustrates that a larger cost c generally causes lower bounds, as it pays less to accumulate evidence. In RR tasks, the
Figure 3: Optimal bounds for different reliability process / task parameters. In the top row, we vary (a) the mean, μ, (b) the standard deviation, σ, or (c) the speed, φ, of the CIR process that describes the reliability time-course. In the bottom row, we vary (d) the momentary cost c in an ER task, and, in an RR task, (e) the inter-trial interval tᵢ, or (f) the penalty time tₚ. In all panels, solid lines show optimal bounds, and dashed lines show steady-state densities of τ (vertically re-scaled).
inter-trial timing also plays an important role. If the inter-trial interval tᵢ is long, performing well in single trials is more important, as there are fewer opportunities per unit time to gather reward. In fact, for tᵢ → ∞, the optimal bound in RR tasks becomes equivalent to that of an ER task [3]. For short tᵢ's, in contrast, quick, uninformed decisions are better, as many of them can be performed in quick succession, and they are bound to be correct in at least half of the trials. This is reflected in optimal bounds that are significantly lower for shorter tᵢ's (Fig. 3(e)). A larger penalty time, tₚ, in contrast, causes a rise in the optimal bound (Fig. 3(f)), as it is better to make slower, more accurate decisions if incorrect decisions are penalized by longer waits between consecutive trials.
4.3 Performance comparison with alternative heuristics
As previous examples have shown, the optimal policy is, due to its two-dimensional nature, not only hard to compute but might also be hard to implement. For these reasons we investigated if simpler, one-dimensional heuristics were able to achieve comparable performance. We focused on two heuristics in particular. First, we considered standard diffusion models [1, 2] that trigger decisions as soon as the accumulated evidence, x(t) (Eq. (1)), not weighted by τ, reaches one of the time-invariant bounds at x_θ and −x_θ. These models have been shown to feature optimal performance when the evidence reliability is constant within single trials [2, 3], and electrophysiological recordings have provided support for their implementation in neural substrate [14, 15]. Diffusion models use the unweighted x(t) in Eq. (1) and thus do not perform Bayes-optimal inference if the evidence reliability varies within single trials. For this reason, we considered a second heuristic that performs Bayes-optimal inference by Eq. (2), with time-invariant bounds X_θ and −X_θ on X(t). This heuristic deviates from the optimal policy only by not taking into account the bound's dependence on the current reliability, τ.
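Both heuristics amount to a one-dimensional stopping rule on a simulated trial; a sketch suitable for Monte Carlo evaluation, where the weighted flag selects between the diffusion-model accumulator x(t) and the reliability-weighted X(t):

    #include <math.h>

    /* Returns the +1/-1 choice when a bound is hit, or 0 if no bound is hit
     * within n_steps; dx and tau come e.g. from the simulation of Eq. (1). */
    int run_heuristic(const double *dx, const double *tau, int n_steps,
                      double bound, int weighted, int *decision_step) {
        double acc = 0.0;
        for (int n = 0; n < n_steps; n++) {
            acc += weighted ? tau[n] * dx[n] : dx[n];
            if (fabs(acc) >= bound) {
                if (decision_step) *decision_step = n;
                return acc > 0.0 ? 1 : -1;
            }
        }
        return 0;
    }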
We compared the performance of the optimal bound with the two heuristics exhaustively by discretizing a subspace of all possible reliability process parameters. The comparison is shown only for the ER task with accumulation cost c = 0.1, but we observed qualitatively similar results for other accumulation costs, and RR tasks with various combinations of c, tᵢ and tₚ. For a fair comparison, we tuned for each set of reliability process parameters the bound of each of the heuristics such that it maximized the associated ER / RR. This optimization was performed by the Subplex algorithm [16] in the NLopt toolkit [17], where the ER / RR was found by Monte Carlo simulations.
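A sketch of such a tuning loop using NLopt's C interface with the Subplex algorithm; monte_carlo_er() is an assumed helper that estimates the objective by simulation:

    #include <nlopt.h>

    extern double monte_carlo_er(double bound);   /* assumed simulator */

    static double objective(unsigned n, const double *x, double *grad, void *data) {
        (void)n; (void)grad; (void)data;          /* derivative-free method */
        return monte_carlo_er(x[0]);
    }

    double tune_bound(double init) {
        nlopt_opt opt = nlopt_create(NLOPT_LN_SBPLX, 1);
        nlopt_set_max_objective(opt, objective, NULL);
        nlopt_set_xtol_rel(opt, 1e-4);
        double x = init, f;
        nlopt_optimize(opt, &x, &f);
        nlopt_destroy(opt);
        return x;
    }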
Figure 4: Expected reward comparison between optimal bound and heuristics. (a) shows the reward rate difference (white = no difference, dark green = optimal bound ≥ 2× higher expected reward) between optimal bound and diffusion model for different τ-process parameters. The process SD is shown as fraction of the mean (e.g. μ = 1.4, σ̃ = 0.8 implies σ = 1.4 × 0.8 = 1.12). (b) The optimal bound (black, for φ = 0 independent of μ and σ) and effective tuned diffusion model bounds (blue, dotted curves) for speed φ = 0 and two different mean / SD combinations (blue, dotted rectangles in (a)). The dashed curves show the associated τ steady-state distributions. (c) same as (a), but comparing optimal bound to constant bound on belief. (d) The optimal bounds (solid curves) and tuned constant bounds (dotted curves) for different φ and the same μ / σ combination (red rectangles in (c)). The dashed curve shows the steady-state distribution of τ.
4.3.1 Comparison to diffusion models
Figure 4(a) shows that for very slow process speeds (e.g. φ = 0), the diffusion model performance is comparable to the optimal bound found by dynamic programming. At higher speeds (e.g. φ = 16), however, diffusion models are no match for the optimal bound anymore. Their performance degrades most strongly when the reliability SD is large, and close to the reliability's mean (dark green area for φ = 16, large σ̃, in Fig. 4(a)). This pattern can be explained as follows. In the extreme case of φ = 0, the evidence reliability remains unchanged within single trials. Then, by Eq. (2), we have X(t) = τ·x(t), such that a constant bound x_θ on x(t) corresponds to a τ-dependent bound X_θ = τ·x_θ on X(t). Mapped into belief by Eq. (2), this results in a sigmoidal bound that closely follows the similarly rising optimal bound. Figure 4(b) illustrates that, depending on the steady-state distribution of τ, the tuned diffusion model bound focuses on approximating different regions of the optimal bound.
For a non-stationary evidence reliability, φ > 0, the relation between X(t) and x(t) changes for different trajectories of τ(t). In this case, the diffusion model bounds cannot be directly related to a bound in X(t) (or, equivalently, in belief g(t)). As a result, the effective diffusion model bounds in belief fluctuate strongly, causing possibly strong deviations from the optimal bound. This is illustrated in Fig. 4(a) by a significant loss in performance for larger process speeds. This loss is most pronounced for large spreads of τ (i.e. a large σ). For small spreads, in contrast, τ(t) remains mostly stationary, which is again well approximated by a stationary τ whose associated optimal policy is well captured by a diffusion model bound. To summarize, diffusion models approximate well the optimal bound as long as the reliability within single trials is close to stationary. As soon as this reliability starts to fluctuate significantly within single trials (e.g. large φ and σ), the performance of diffusion models deteriorates.
4.3.2 Comparison to a bound that does not depend on evidence reliability
In contrast to diffusion models, a heuristic, constant bound in belief (i.e. either in X(t) or g(t)), as used in [8], causes a drop in performance for slow rather than fast changes of the evidence reliability.
This is illustrated in Fig. 4(c), where the performance loss is largest for φ = 0 and large σ, and drops with an increase in φ and μ, and a decrease in σ.
Figure 4(d) shows why this performance loss is particularly pronounced for slow changes in evidence reliability (i.e. low φ). As can be seen, the optimal bound becomes flatter as a function of τ when the process speed φ increases. As previously mentioned, for large φ, this is due to the current reliability providing little information about future reliability. As a consequence, the optimal bound is in these cases well approximated by a constant bound in belief that completely ignores the current reliability. For smaller φ, the optimal bound becomes more strongly dependent on the current reliability τ, such that a constant bound provides a worse approximation, and thus a larger loss in performance.
The dependence of performance loss on the mean μ and standard deviation σ of the steady-state reliability arises similarly. As has been shown in Fig. 3(a), a larger mean reliability μ causes the optimal bound to become flatter as a function of the current reliability, such that a constant bound approximation performs better for larger μ, as confirmed in Fig. 4(c). The smaller performance loss for smaller spreads of τ (i.e. smaller σ) is not explained by a change in the optimal bound, which is mostly independent of the exact value of σ (Fig. 3(b)). Instead, it arises from the constant bound focusing its approximation on regions of the optimal bound where the steady-state distribution of τ has high density (dashed curves in Fig. 3(b)). The size of this region shrinks with shrinking σ, thus improving the approximation of the optimal bound by a constant, and the associated performance of this approximation. Overall, a constant bound in belief features competitive performance compared to the optimal bound if the evidence reliability changes rapidly (large φ), if the task is generally easy (large μ), and if the reliability does not fluctuate strongly within single trials (small σ). For a widely and rapidly changing evidence reliability τ in difficult tasks, in contrast, a constant bound in belief provides a poor approximation to the optimal bound.
5 Discussion
Our work offers the following contributions. First, it pushes the boundaries of the theory of optimal human and animal decision-making by moving towards more realistic tasks in which the reliability changes over time within single trials. Second, it shows how to derive the optimal policy while avoiding the methodological caveats that have plagued previous, related approaches [3]. Third, it demonstrates that optimal behavior is achieved by a bound on the decision maker's belief that depends on the current evidence reliability. Fourth, it explains how the shape of the bound depends on task contingencies and the parameters that determine how the evidence reliability changes with time (in contrast to, e.g., [18], where the utilized heuristic policy is independent of the τ process). Fifth, it shows that alternative decision-making heuristics can match the optimal bound's performance only for a particular subset of these parameters, outside of which their performance deteriorates.
As derived in Eq. (2), optimal evidence accumulation with time-varying reliability is achieved by weighting the momentary evidence by its current reliability [8]. Previous work has shown that humans and other animals optimally accumulate evidence if its reliability remains constant within a trial [5, 3], or changes with a known time-course [8]. It remains to be clarified if humans and other animals can optimally accumulate evidence if the time-course of its reliability is not known in advance. They have the ability to estimate this reliability on a trial-by-trial basis [19, 20], but how quickly this estimate is formed remains unclear. In this respect, our model predicts that access to the momentary evidence is sufficient to estimate its reliability immediately and with high precision. This property arises from the Wiener process being only an approximation of physical realism. Further work will extend our approach to processes where this reliability is not known with absolute certainty, and that can feature jumps. We do not expect such process modifications to induce qualitative changes to our predictions.
Our theory predicts that, for optimal decision-making, the decision bounds need to be a function of the current evidence reliability that depends on the parameters that describe the reliability time-course. This prediction can be used to guide the design of experiments that test if humans and other animals are optimal in the increasingly realistic scenarios addressed in this work. While we do not expect our quantitative prediction to be a perfect match to the observed behavior, we expect the decision makers to qualitatively change their decision strategies according to the optimal strategy for different reliability process parameters. Then, having shown in which cases simpler heuristics fail to match the optimal performance allows us to focus on such cases to validate our theory.
References
[1] Roger Ratcliff. A theory of memory retrieval. Psychological Review, 85(2):59–108, 1978.
[2] Rafal Bogacz, Eric Brown, Jeff Moehlis, Philip J. Holmes, and Jonathan D. Cohen. The physics of optimal decision making: A formal analysis of models of performance in two-alternative forced-choice tasks. Psychological Review, 113(4):700–765, 2006.
[3] Jan Drugowitsch, Rubén Moreno-Bote, Anne K. Churchland, Michael N. Shadlen, and Alexandre Pouget. The cost of accumulating evidence in perceptual decision making. The Journal of Neuroscience, 32(11):3612–3628, 2012.
[4] John Palmer, Alexander C. Huk, and Michael N. Shadlen. The effect of stimulus strength on the speed and accuracy of a perceptual decision. Journal of Vision, 5:376–404, 2005.
[5] Roozbeh Kiani, Timothy D. Hanks, and Michael N. Shadlen. Bounded integration in parietal cortex underlies decisions even when viewing duration is dictated by the environment. The Journal of Neuroscience, 28(12):3017–3029, 2008.
[6] Rafal Bogacz, Peter T. Hu, Philip J. Holmes, and Jonathan D. Cohen. Do humans produce the speed-accuracy trade-off that maximizes reward rate? The Quarterly Journal of Experimental Psychology, 63(5):863–891, 2010.
[7] John C. Cox, Jonathan E. Ingersoll Jr., and Stephen A. Ross. A theory of the term structure of interest rates. Econometrica, 53(2):385–408, 1985.
[8] Jan Drugowitsch, Gregory C. DeAngelis, Eliana M. Klier, Dora E. Angelaki, and Alexandre Pouget. Optimal multisensory decision-making in a reaction-time task. eLife, 2014. doi: 10.7554/eLife.03005.
[9] Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley Series in Probability and Statistics. John Wiley & Sons, Inc., 2005.
[10] Richard E. Bellman. Dynamic Programming. Princeton University Press, 1957.
[11] Sridhar Mahadevan. Average reward reinforcement learning: Foundations, algorithms, and empirical results. Machine Learning, 22:159–195, 1996.
[12] Wendell H. Fleming and Raymond W. Rishel. Deterministic and Stochastic Optimal Control. Stochastic Modelling and Applied Probability. Springer-Verlag, 1975.
[13] William H. Press, Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery. Numerical Recipes: The Art of Scientific Computing. Cambridge University Press, 3rd edition, 2007.
[14] Jamie D. Roitman and Michael N. Shadlen. Response of neurons in the lateral intraparietal area during a combined visual discrimination reaction time task. The Journal of Neuroscience, 22(21):9475–9489, 2002.
[15] Mark E. Mazurek, Jamie D. Roitman, Jochen Ditterich, and Michael N. Shadlen. A role for neural integrators in perceptual decision making. Cerebral Cortex, 13:1257–1269, 2003.
[16] Thomas Harvey Rowan. Functional Stability Analysis of Numerical Algorithms. PhD thesis, Department of Computer Sciences, University of Texas at Austin, 1990.
[17] Steven G. Johnson. The NLopt nonlinear-optimization package. URL http://ab-initio.mit.edu/nlopt.
[18] Sophie Deneve. Making decisions with unknown sensory reliability. Frontiers in Neuroscience, 6(75), 2012. ISSN 1662-453X. doi: 10.3389/fnins.2012.00075.
[19] Marc O. Ernst and Martin S. Banks. Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 415:429–433, 2002.
[20] Christopher R. Fetsch, Amanda H. Turner, Gregory C. DeAngelis, and Dora E. Angelaki. Dynamic reweighting of visual and vestibular cues during self-motion perception. The Journal of Neuroscience, 29(49):15601–15612, 2009.
| 5540 |@word … (bag-of-words token counts truncated; omitted)
absolute:1 boundary:3 dimension:1 curve:8 transition:1 default:1 unweighted:1 drugowitsch:2 sensory:4 ignores:1 qualitatively:3 avg:1 jump:5 made:1 reinforcement:1 far:1 approximate:2 implicitly:1 keep:1 incoming:1 assumed:1 continuous:2 maxg:2 un:4 latent:2 why:1 terminate:1 nature:2 improving:1 adi:1 expansion:1 huk:1 investigated:1 marc:1 spread:3 whole:1 edition:1 sridhar:1 fair:1 angelaki:2 fig:20 en:2 fashion:1 slow:3 wiley:2 precision:2 sub:1 shrinking:1 momentary:13 explicit:2 deterministically:1 perceptual:4 third:2 weighting:1 remained:1 er:13 evidence:59 adding:1 supplement:3 phd:1 illustrates:3 push:1 flannery:1 timothy:1 likely:1 visual:3 expressed:1 springer:1 ch:2 corresponds:2 determines:2 satisfies:1 ukj:1 teukolsky:1 conditional:1 towards:5 eventual:1 jeff:1 change:19 hard:2 determined:2 infinite:1 sophie:1 experimental:2 multisensory:1 timeinvariant:1 support:1 mark:1 latter:1 dx0:2 arises:3 jonathan:3 alexander:1 princeton:1 avoiding:1 |
5,016 | 5,541 | Optimal Teaching for
Limited-Capacity Human Learners
Xiaojin Zhu
Department of Computer Sciences
University of Wisconsin-Madison
[email protected]
Kaustubh Raosaheb Patil
Affective Brain Lab, UCL
& MIT Sloan Neuroeconomics Lab
[email protected]
Łukasz Kopeć
Experimental Psychology
University College London
[email protected]
Bradley C. Love
Experimental Psychology
University College London
[email protected]
Abstract
Basic decisions, such as judging a person as a friend or foe, involve categorizing
novel stimuli. Recent work finds that people's category judgments are guided by
a small set of examples that are retrieved from memory at decision time. This
limited and stochastic retrieval places limits on human performance for probabilistic classification decisions. In light of this capacity limitation, recent work
finds that idealizing training items, such that the saliency of ambiguous cases is
reduced, improves human performance on novel test items. One shortcoming of
previous work in idealization is that category distributions were idealized in an ad
hoc or heuristic fashion. In this contribution, we take a first principles approach
to constructing idealized training sets. We apply a machine teaching procedure
to a cognitive model that is either limited capacity (as humans are) or unlimited
capacity (as most machine learning systems are). As predicted, we find that the
machine teacher recommends idealized training sets. We also find that human
learners perform best when training recommendations from the machine teacher
are based on a limited-capacity model. As predicted, to the extent that the learning
model used by the machine teacher conforms to the true nature of human learners,
the recommendations of the machine teacher prove effective. Our results provide a
normative basis (given capacity constraints) for idealization procedures and offer
a novel selection procedure for models of human learning.
1 Introduction
Judging a person as a friend or foe, a mushroom as edible or poisonous, or a sound as an \l\ or
\r\ are examples of categorization tasks. Category knowledge is often acquired based on examples that are either provided by a teacher or drawn from past experience. One important research challenge
is determining the best set of examples to provide a human learner to facilitate learning and use
of knowledge when making decisions, such as classifying novel stimuli. Such a teacher would be
helpful in a pedagogical setting for curriculum design [1, 2].
Recent work suggests that people?s categorization decisions are guided by a small set of examples
retrieved at the time of decision [3]. This limited and stochastic retrieval places limits on human performance for probabilistic classification decisions, such as predicting the winner of a sports contest
or classifying a mammogram as normal or tumorous [4]. In light of these capacity limits, Giguère
and Love [3] determined and empirically verified that humans perform better at test after being
trained on idealized category distributions that minimize the saliency of ambiguous cases during
training. Unlike machine learning systems that can have unlimited retrieval capacity, people performed better when trained on non-representative samples of category members, which is contrary
to common machine learning practices where the aim is to match training and test distributions [5].
One shortcoming of previous work in idealization is that category distributions were idealized in
an ad hoc or heuristic fashion, guided only by the intuitions of the experimenters in contrast to a
rigorous systematic approach. In this contribution, we take a first principles approach to constructing
idealized training sets. We apply a machine teaching procedure [6] to a cognitive model that is
either limited capacity (as humans are) or unlimited capacity (as most machine learning systems
are). One general prediction is that the machine teacher will idealize training sets. Such a result
would establish a conceptual link between idealization manipulations from psychology and optimal
teaching procedures from machine learning [7, 6, 8, 2, 9, 10, 11]. A second prediction is that
human learners will perform best with training sets recommended by a machine teacher that adopts
a limited capacity model of the learner. To the extent that the learning model used by the machine
teacher conforms to the true nature of human learners, the recommendations of the machine teacher
should prove more effective. This latter prediction advances a novel method to evaluate theories of
human learning. Overall, our work aims to provide a normative basis (given capacity constraints)
for idealization procedures.
2 Limited- and Infinite-Capacity Models
Although there are many candidate models of human learning (see [12] for a review), to cement the
connection with prior work [3] and to facilitate evaluation of model variants differing in capacity
limits, we focus on exemplar models of human learning. Exemplar models have proven successful
in accounting for human learning performance [13, 14], are consistent with neural representations of
acquired categories [15], and share strong theoretical connections with machine learning approaches
[16, 17]. Exemplar models represent categories as a collection of experienced training examples. At
the time of decision, category examples (i.e., exemplars) are activated (i.e., retrieved) in proportion
to their similarity to the stimulus. The category with the greatest total similarity across members
tends to be chosen as the category response. Formally, the categorization problem is to estimate the
label ŷ of a test item x from its similarity to the training exemplars {(x_1, y_1), …, (x_n, y_n)}.
Exemplar models are consistent with the notion that people stochastically and selectively sample
from memory at the time of decision. For example, in the Exemplar-Based Random Walk (EBRW)
model [18], exemplars are retrieved sequentially and stochastically as a function of their similarity
to the stimulus. Retrieved exemplars provide evidence for category responses. When accumulated
evidence (i.e., retrieved exemplars) for a response exceeds a threshold, the corresponding response
is made. The number of steps in the diffusion process is the predicted response time.
One basic feature of EBRW is that not all exemplars in memory need feed into the decision process.
As discussed by Giguère and Love [3], finite decision thresholds in EBRW can be interpreted as
a capacity limit in memory retrieval. When decision thresholds are finite, a limited number of
exemplars are retrieved from memory. When capacity is limited in this fashion, models perform
better when training sets are idealized. Idealization reduces the noise injected into the decision
process by limited and stochastic sampling of information in memory.
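To make these retrieval dynamics concrete, the following is a minimal simulation sketch of the random-walk retrieval process described above. The unit evidence increments, the exponential similarity kernel, and the symmetric threshold are our simplifying assumptions, not the exact parameterization of [18]:

```python
import numpy as np

def ebrw_decide(x, exemplars, labels, c=3.0, threshold=5, rng=None):
    """Simulate one EBRW-style decision for stimulus x.

    Exemplars are retrieved stochastically in proportion to their
    similarity exp(-c * |x - x_i|); each retrieved exemplar steps a
    random walk by its label (+1 or -1).  The walk stops when the
    accumulated evidence reaches +/- threshold; the step count is the
    predicted response time.  A finite threshold means only a limited
    number of exemplars feed the decision -- the capacity limit above.
    """
    if rng is None:
        rng = np.random.default_rng()
    exemplars = np.asarray(exemplars, dtype=float)
    labels = np.asarray(labels)
    sim = np.exp(-c * np.abs(exemplars - x))
    probs = sim / sim.sum()               # retrieval probabilities
    evidence, steps = 0, 0
    while abs(evidence) < threshold:
        i = rng.choice(len(exemplars), p=probs)
        evidence += labels[i]
        steps += 1
    return int(np.sign(evidence)), steps  # (category in {+1,-1}, RT)
```

Raising the threshold lets more exemplars feed the decision, approaching an unlimited-capacity learner; training sets whose categories are well separated make opposing-category retrievals rare, which is exactly the effect idealization exploits.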
We aim to show that a machine teacher, particularly one using a limited-capacity model of the
learner, will idealize training sets. Such a result would provide a normative basis (given capacity
constraints) for idealization procedures. To evaluate our predictions, we formally specify a limited- and unlimited-capacity exemplar model. Rather than work with EBRW, we instead choose a simpler
mathematical model, the Generalized Context Model (GCM, [14]), which offers numerous advantages for our purposes. As discussed below, a parameter in GCM can be interpreted as specifying
capacity and can be related to decision threshold placement in EBRW's drift-diffusion process.
Given a finite training set (or a teaching set, we will use the two terms interchangeably) D =
{(x1 , y1 ), . . . , (xn , yn )} and a test item (i.e., stimulus) x, GCM estimates the label probability as:
    \hat{p}(y = 1 \mid x, D) =
      \frac{\big(b + \sum_{i \in D:\, y_i = 1} e^{-c\, d(x, x_i)}\big)^{\gamma}}
           {\big(b + \sum_{i \in D:\, y_i = 1} e^{-c\, d(x, x_i)}\big)^{\gamma}
            + \big(b + \sum_{i \in D:\, y_i = -1} e^{-c\, d(x, x_i)}\big)^{\gamma}}   (1)
where d is the distance function that specifies the distance (e.g., the difference in length between
two line stimuli) between the stimulus x and exemplar xi , c is a scaling parameter that specifies
the rate at which similarity decreases with distance (i.e. the bandwidth parameter for a kernel),
and the parameter b is background similarity, which is related to irrelevant information activated
in memory. Critically, the response scaling parameter, γ, has been shown to bear a relationship
to decision threshold placement in EBRW [18]. In particular, Equation 1 is equivalent to EBRW's
mean response (averaged over many trials) with decision threshold bounds placed γ units away from
the starting point for evidence accumulation. Thus, GCM with a low value of γ can be viewed as
a limited-capacity model, whereas GCM with a high value of γ converges to the predictions of
an infinite-capacity model. These two model variations (low and high γ, as surrogates for low and
high capacity) will figure prominently in our study and analyses.
To select a binary response, the learner samples a label according to the probability ŷ ~
Bernoulli(p̂(y = 1 | x, D)). Therefore, the learner makes stochastic predictions. When measuring the classification error of the learner, we will take the expectation over this randomness. Let the
distance function be d(x_i, x_j) = |x_i − x_j|. Thus, a GCM learner can be represented using three
parameters {b, c, γ}.
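As a concrete reference, here is a direct implementation sketch of Equation (1); the default parameter values are the low-capacity fit reported later in Section 4.1, and the 1D absolute-distance metric follows the definition above:

```python
import numpy as np

def gcm_prob(x, X, y, b=5.066, c=2.964, gamma=4.798):
    """GCM estimate p̂(y = 1 | x, D) from Equation (1).

    X: training stimuli in [0, 1]; y: labels in {+1, -1}.
    b: background similarity; c: similarity gradient; gamma: response
    scaling (low gamma = limited capacity; high gamma approaches a
    deterministic, unlimited-capacity responder).
    """
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    sim = np.exp(-c * np.abs(X - x))       # similarity to each exemplar
    s_pos = (b + sim[y == 1].sum()) ** gamma
    s_neg = (b + sim[y == -1].sum()) ** gamma
    return s_pos / (s_pos + s_neg)
```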
3 Machine Teaching for the GCM Learners
Machine teaching is an inverse problem of machine learning. Given a learner and a test distribution,
machine teaching designs a small (typically non-iid) teaching set D such that the learner trained on
D has the smallest test error [6]. The machine teaching framework poses an optimization problem:
    \min_{D \in \mathbb{D}} \; \text{loss}(D) + \text{effort}(D).   (2)
The optimization is over D, the teaching set that we present to the learner. For our task, D =
(x_1, y_1), …, (x_n, y_n), where x_i ∈ [0, 1] represents the 1D feature of the i-th stimulus and y_i ∈
{−1, 1} represents the i-th label. The search space 𝔻 = {(X × Y)^n : n ∈ ℕ} is the (infinite)
set of finite teaching sets. Importantly, D is not required to consist of iid items drawn from the
test distribution p(x, y). Rather, D will usually contain specially arranged items. This is a major
difference from standard machine learning.
Since we want to minimize classification error on future test items, we define the teaching loss
function to be the generalization error:
    \text{loss}(D) = \mathbb{E}_{(x,y) \sim p(x,y)} \, \mathbb{E}_{\hat{y} \sim \hat{p}(y \mid x, D)} \, \mathbf{1}_{y \neq \hat{y}}.   (3)
The first expectation is with respect to the test distribution p(x, y). That is, we still assume that
test items are drawn iid from the test distribution. The second expectation is w.r.t. the stochastic
predictions that the GCM learner makes. Note that the teaching set D enters the loss() function
through the GCM model p̂(y | x, D) in (1). We observe that:
    \text{loss}(D)
      = \mathbb{E}_{x \sim p(x)} \big[ p(y = 1 \mid x)\, \hat{p}(y = -1 \mid x, D)
        + p(y = -1 \mid x)\, \hat{p}(y = 1 \mid x, D) \big]
      = \int \left[ \frac{1 - 2\, p(y = 1 \mid x)}
          {1 + \Big( \frac{b + \sum_{i \in D:\, y_i = -1} e^{-c\, d(x, x_i)}}
                          {b + \sum_{i \in D:\, y_i = 1} e^{-c\, d(x, x_i)}} \Big)^{\gamma}}
        + p(y = 1 \mid x) \right] p(x)\, dx.   (4)
The teaching effort function effort(D) is a powerful way to specify certain preferences on the teaching set space D. For example, if we use effort(D) = |D| the size of D then the machine teaching
problem (2) will prefer smaller teaching sets. In this paper, we use a simple definition of effort():
effort(D) = 0 if |D| = n, and ∞ otherwise. This infinity indicator function simply acts as a hard
constraint so that D must have exactly n items. Equivalently, we may drop this effort() term from (2)
altogether while requiring the search space D to consist of teaching sets of size exactly n.
In this paper, we consider test distributions p(x, y) whose marginal on x has a special form. Specifically, we assume that p(x) is a uniform distribution over m distinct test stimuli z_1, …, z_m ∈ [0, 1].
In other words, there are only m distinct test stimuli. The test label y for stimulus z_j in any given
test set is randomly sampled from p(y | z_j). Besides matching the actual behavioral experiments,
this discrete marginal test distribution affords a further simplification to our teaching problem: the
integral in (4) is replaced with summation:
    \min_{x_1, \ldots, x_n \in [0,1];\; y_1, \ldots, y_n \in \{-1, 1\}} \;
      \frac{1}{m} \sum_{j=1}^{m} \left[
        \frac{1 - 2\, p(y = 1 \mid z_j)}
             {1 + \Big( \frac{b + \sum_{i:\, y_i = -1} e^{-c\, d(z_j, x_i)}}
                             {b + \sum_{i:\, y_i = 1} e^{-c\, d(z_j, x_i)}} \Big)^{\gamma}}
        + p(y = 1 \mid z_j) \right].   (5)
It is useful to keep in mind that y1 . . . yn are the training item labels that we can design, while y is a
dummy variable for the stochastic test label.
In fact, equation (5) is a mixed integer program because we design both the continuous training
stimuli x1 . . . xn and the discrete training labels y1 . . . yn . It is computationally challenging. We
will relax this problem to arrive at our final optimization problem. We consider a smaller search
space D where each training item label y_i is uniquely determined by the position of x_i w.r.t. the
true decision boundary θ* = 0.5. That is, y_i = 1 if x_i ≥ θ* and y_i = −1 if x_i < θ*. We
do not have evidence that this reduced freedom in training labels adversely affect the power of the
teaching set solution. We have now removed the difficult discrete optimization aspect and arrive at the
following continuous optimization problem to find an optimal teaching set (note the changes to
selector variables i):
    \min_{x_1, \ldots, x_n \in [0,1]} \;
      \frac{1}{m} \sum_{j=1}^{m} \left[
        \frac{1 - 2\, p(y = 1 \mid z_j)}
             {1 + \Big( \frac{b + \sum_{i:\, x_i < 0.5} e^{-c\, d(z_j, x_i)}}
                             {b + \sum_{i:\, x_i \geq 0.5} e^{-c\, d(z_j, x_i)}} \Big)^{\gamma}}
        + p(y = 1 \mid z_j) \right].   (6)
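The objective in (6) is a smooth function of the training positions except where an item crosses the boundary at 0.5 (its implied label flips), so a generic bound-constrained optimizer with random restarts is a natural fit. The sketch below implements (6) and hands it to SciPy's L-BFGS-B; the choice of solver, the random restarts, and the logistic stand-in for p(y = 1 | z) are our assumptions — the paper does not specify its solver:

```python
import numpy as np
from scipy.optimize import minimize

def teaching_loss(x_train, z, p1, b=5.066, c=2.964, gamma=4.798):
    """Objective (6): expected test error of a GCM learner trained on
    x_train, with labels implied by the side of the boundary 0.5.
    z: the m test stimuli; p1[j] = p(y = 1 | z_j)."""
    x_train = np.asarray(x_train)
    sim = np.exp(-c * np.abs(z[:, None] - x_train[None, :]))  # m x n
    s_neg = b + sim[:, x_train < 0.5].sum(axis=1)
    s_pos = b + sim[:, x_train >= 0.5].sum(axis=1)
    p_hat = 1.0 / (1.0 + (s_neg / s_pos) ** gamma)            # p̂(y=1|z_j)
    return np.mean((1 - 2 * p1) * p_hat + p1)

# Example: a 20-item teaching set for a 60-point test grid.
z = np.linspace(0, 1, 60)
p1 = 1 / (1 + np.exp(-10 * (z - 0.5)))   # hypothetical stand-in for Fig. 1
rng = np.random.default_rng(0)
best = min((minimize(teaching_loss, rng.random(20), args=(z, p1),
                     method="L-BFGS-B", bounds=[(0, 1)] * 20)
            for _ in range(10)), key=lambda r: r.fun)
print(best.fun, np.sort(best.x))   # loss and the learned item positions
```

Because the label assignment flips discontinuously at 0.5, multiple random restarts guard against poor local optima.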
4 Experiments
Using the machine teacher, we derive a variety of optimal training sets for low- and high-capacity
GCM learners. We then evaluate how humans perform when trained on these recommended items
(i.e. training sets). The main predictions are that the machine teacher will idealize training sets
and that humans will perform better on optimal training sets calculated using the low-capacity GCM
variant. In what follows, we first specify parameter values for the GCM variants, present the optimal
teaching sets we calculate, and then discuss human experiments.
4.1 Specifying GCM parameters
The machine teacher requires a full specification of the learner, including its parameters. Parameters
were set for the low-capacity GCM model by fitting the behavioral data from Experiment 2 of
Giguère and Love [3]. GCM was fit to the aggregated data representing an average human learner
by solving the following optimization problem:
    \{\hat{b}, \hat{c}, \hat{\gamma}\} = \arg\min_{b, c, \gamma}
      \sum_{i \in X^{(1)}} \big( g^{(1)}(x_i) - f^{(1)}(x_i) \big)^2
      + \sum_{j \in X^{(2)}} \big( g^{(2)}(x_j) - f^{(2)}(x_j) \big)^2   (7)
where X^(1) and X^(2) are sets of unique test stimuli for the two training conditions (actual and idealized) in Experiment 2. We define two functions to describe the estimated and empirical probabilities, respectively:

    g^{(\text{cond})}(x_i) = \hat{p}(y_i = 1 \mid x_i, D^{(\text{cond})}), \qquad
    f^{(\text{cond})}(x_i) = \frac{\sum_{j \in D^{(\text{cond})}:\, y_j = 1} \mathbf{1}(x_j = x_i)}
                                  {\sum_{j' \in D^{(\text{cond})}} \mathbf{1}(x_{j'} = x_i)}.

The function g above is defined using GCM in Equation 1. We solved Equation 7 to obtain the low-capacity GCM parameters that best capture human performance: {b̂, ĉ, γ̂} = {5.066, 2.964, 4.798}.
We define a high-capacity GCM by only changing the γ̂ parameter, which is set an order of magnitude higher, at γ̂ = 47.98.
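For reference, a sketch of this least-squares fit, reusing gcm_prob from the earlier sketch. The tuple format for the condition data and the Nelder-Mead solver are our assumptions; the empirical response proportions would come from the behavioral data of [3], which we do not reproduce here:

```python
from scipy.optimize import minimize

def fit_gcm(conds):
    """Fit {b, c, gamma} by minimizing the squared error in Equation (7).

    conds: one (D_x, D_y, test_x, test_prop) tuple per training
    condition, where test_prop[i] is the empirical proportion of y = 1
    responses at test stimulus test_x[i]."""
    def sse(params):
        b, c, gamma = params
        return sum((gcm_prob(x, D_x, D_y, b, c, gamma) - p) ** 2
                   for D_x, D_y, test_x, test_prop in conds
                   for x, p in zip(test_x, test_prop))
    return minimize(sse, x0=[1.0, 1.0, 1.0], method="Nelder-Mead").x
```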
4.2 Optimal Teaching Sets
The machine teacher was used to generate a variety of training sets that we evaluated on human
learners. All training sets had size n = 20, which was chosen to maximize expected differences
in human test performance across training sets. All conditions involved the same test conditional
Figure 1: The test conditional distribution. Each point shows a test item z_i and its conditional probability of being in category y = 1; the horizontal axis is z ∈ [0, 1] and the vertical axis is p(y = 1 | z) ∈ [0, 1]. The vertical dashed line shows the location of the true decision boundary θ* = 0.5.
distribution p(y | x) (see Figure 1). The test set consisted of m = 60 representative items evenly
spaced over the stimulus domain [0, 1] with a probabilistic category structure. The conditional distribution p(y = 1 | x = zj ) for j = 1 . . . 60 was adapted from a related study [3]. We then solved
the machine teaching problem (6) to obtain the optimal teaching sets for low- and high-capacity
learners.
The optimal training set for the low-capacity GCM places items for each category in a clump far
from the boundary (see Figure 2 for the optimal training sets). We refer to this optimal training set as
Clump-Far. The placement of these items far from the boundary reflects the low capacity (i.e., low
γ value) of the GCM. By separating the items from the two categories, the machine teacher makes
it less likely that low-capacity GCM will erroneously retrieve items from the opposing category at
the time of test. As predicted, the machine teacher idealized the Clump-Far training set.
A mathematical property of the high-capacity GCM suggests that it is sensitive only to the placement
of training items adjacent to the decision boundary θ* (all other training items have exponentially
small influence). Therefore, for the high-capacity model up to computer precision, there is no unique
optimal teaching set but rather a family of optimal sets (i.e., multiple teaching sets with the same
loss or expected test error). We generated two training sets that are both optimal for the high-capacity model. The Clump-Near training set has one clump of similar items for each category close
to the boundary. In contrast, the Spread training set uniformly spaces items outward, mimicking
the idealization procedure in Giguère and Love [3]. We also generated Random teaching sets by
sampling from the joint distribution U (x)p(y | x), where U (x) is uniform in [0, 1] and p(y | x) is
the test conditional distribution. Note Random is the traditional iid training set in machine learning.
The test error of the low- and high-capacity GCM under Random teaching sets was estimated by
generating 10,000 random teaching sets.
Table 1 shows that Clump-Far outperforms other training sets for the low-capacity GCM. In contrast, Clump-Far, Clump-Near, and Spread are all optimal for high-capacity GCM, reflecting the
fact that for high-capacity GCM the symmetry of the inner-most training item pair about the true
decision boundary θ* determines the learned model. Not surprisingly, Random teaching sets lead to
suboptimal test errors on both low- and high-capacity GCM.
Table 1: Loss (i.e. test error) for different teaching sets on low- and high-capacity GCM. Note the
smallest loss 0.216 matches the optimal Bayes error rate.
GCM Model      Clump-Far   Spread   Clump-Near   Random
Low-capacity   0.245       0.261    0.397        M = 0.332, SD = 0.040
High-capacity  0.216       0.216    0.216        M = 0.262, SD = 0.066
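The Random column of Table 1 can be reproduced in the same spirit: draw many random teaching sets from U(x)·p(y | x), as the text describes, and average the resulting GCM test error. The variant below allows explicit (possibly inconsistent) labels, which the boundary-implied teaching_loss above cannot represent; it continues the names (np, z, p1, rng) from the sketch after Equation (6), and interpolating p(y = 1 | x) from the test grid is our assumption:

```python
def gcm_test_loss(x_train, y_train, z, p1, b=5.066, c=2.964, gamma=4.798):
    """Like teaching_loss, but with explicit labels y_train in {+1, -1}."""
    x_train, y_train = np.asarray(x_train), np.asarray(y_train)
    sim = np.exp(-c * np.abs(z[:, None] - x_train[None, :]))
    s_neg = b + sim[:, y_train == -1].sum(axis=1)
    s_pos = b + sim[:, y_train == 1].sum(axis=1)
    p_hat = 1.0 / (1.0 + (s_neg / s_pos) ** gamma)
    return np.mean((1 - 2 * p1) * p_hat + p1)

losses = []
for _ in range(10_000):                  # matches the paper's 10,000 sets
    x = rng.random(20)                   # x ~ U[0, 1]
    p = np.interp(x, z, p1)              # p(y = 1 | x), interpolated
    y_lab = np.where(rng.random(20) < p, 1, -1)
    losses.append(gcm_test_loss(x, y_lab, z, p1))
print(np.mean(losses), np.std(losses))   # cf. Table 1, Random column
```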
In summary, we produced four kinds of teaching sets: (1) Clump-Far, which is the optimal teaching
set for the low-capacity GCM; (2) Spread and (3) Clump-Near, which, together with Clump-Far,
are all optimal teaching sets for the high-capacity GCM; and (4) Random. The next section discusses how human participants
fare with each of these four training sets. Consistent with our predictions, the machine teacher's
choices idealized the training sets, paralleling the idealization procedures used in Giguère and
Love [3]. They found that human learners benefited when within-category variance was reduced
(akin to clumping in Clump-Far and Clump-Near), training items were shifted away from the category boundary (akin to Clump-Far), and feedback was idealized (as in all the machine teaching
sets considered). Their actual condition in which training sets were not idealized resembles the
Random condition here. As hoped, low-capacity and high-capacity GCM make radically different
Figure 2: (A) The teaching sets (panels, top to bottom: Clump-Far, Spread, Clump-Near, Random; horizontal axis x ∈ [0, 1]). The points show the machine teaching sets; overlapping training points are shown as clumps along with the number of items, and a particular Random teaching set is shown. All training labels y were in {1, −1}, but dithered vertically for viewing clarity. (B) The predictive distribution p̂(y = 1 | z, D) produced by the low-capacity GCM given a teaching set D (vertical axis in [0, 1]). The vertical dashed lines show the position of the true decision boundary θ*. The curves for the high-capacity GCM were omitted for space.
predictions. Whereas high-capacity GCM is insensitive to variations across the machine teaching
sets, low-capacity GCM should perform better under Clump-Far and Spread. The Clump-Near set
leads to more errors in low-capacity GCM because items are confusable in memory and therefore
limited samples from memory can lead to suboptimal classification decisions. In the next section,
we evaluate how humans perform with these four training sets, and compare human performance to
that of low- and high-capacity GCM.
4.3 Human Study
Human participants were trained on one of the four training sets: Clump-Far, Spread, Clump-Near,
and Random. Participants in all four conditions were tested (no corrective feedback provided) on
the m = 60 grid test items z1 . . . zm in [0, 1].
Participants. US-based participants (N = 600) were recruited via Amazon Mechanical Turk, a
paid online crowd-sourcing platform, which is an effective method for recruiting demographically
diverse samples [19] and has been shown to yield results consistent with decision making studies in
the laboratory [20]. In our sample, 297 of the 600 participants were female and the average age was
34.86. Participants were paid $1.00 for completing the study with the highest performing participant
receiving a $20 bonus.
Design. Participants were randomly assigned to one of the four teaching conditions (see Figure 2).
Notice that feedback was deterministic in all the teaching sets provided by the machine teacher, but
was probabilistic as a function of stimulus for the Random condition. For the Random condition,
each participant received a different sample of training items. The test set always consisted of 60
stimuli (see Figure 1). In both training and test trials, stimuli were presented sequentially in a
random order (without replacement) determined for each participant.
Materials and Procedure. The stimuli were horizontal lines of various lengths. Participants learned
to categorize these stimuli. The teaching set values x_i ∈ [0, 1] were converted into pixels by
multiplying them by 400 and adding an offset. The offset for each participant was a uniformly selected
random number from 30 to 100. As the study was performed online (see below), screen size varied
across participants (height: x̄ = 879.16, s = 143.34; width: x̄ = 1479.6, s = 271.04).
During the training phase, on every trial, participants were instructed to fixate on a small cross
appearing in a random position on the screen. After 1000 ms, a line stimulus replaced the cross at
the same position. Participants were then to indicate their category decision by pressing a key ("F" or
"J") as quickly as possible without sacrificing accuracy. Once the participant responded, the stimulus
Figure 3: Human experiment results. Each bar corresponds to one of the training conditions (Clump-Far, Spread, Clump-Near, Random). (A) The proportion of agreement between the individual training responses and the Bayes classifier (panel: training performance). (B) The proportion of agreement between the individual test responses and the Bayes classifier (panel: test performance). (C) Inconsistency in individual test responses (panel: test inconsistency; vertical scale 0 to 9). The error bars are 95% confidence intervals.
was immediately replaced by a feedback message ("Correct" or "Wrong"), which was displayed for
2000 ms. The screen coordinates (horizontal/vertical) defining the stimulus (i.e., fixation cross and
line) position were randomized on each trial to prevent participants from using marks or smudges
on the screen as an aid. Participants completed 20 training trials.
The procedure was identical for test trials, except corrective feedback was not provided. Instead,
"Thank You!" was displayed following a response. The test phase consisted of 60 trials. At the
end of the test phase each subject was asked to discriminate between the short and long lines from
the Clump-Near training set (i.e. x = 0.435 and x = 0.565, closest stimuli in the deterministically
labeled training sets). Both lines were presented side-by-side, with their order counterbalanced
between participants. Each participant was asked to indicate which one of those is longer.
Results. It is important that people could perceptually discriminate the categories for the exemplars
close to the boundary, especially for the Clump-Near condition in which all the exemplars are close
to the boundary. At the end of the main study, this was measured by asking each participant to
indicate the longer line between the two. Overall, 97% of participants correctly indicated the longer
line. This did not differ across conditions, F(3, 596) < 0.84, p ≥ 0.47.
The optimal (i.e., Bayes) classifier deterministically assigns the correct class label y* = sign(x − θ*) to
an item x. The agreement between training responses and the optimal classifier was significantly
different across the four teaching conditions, F (3, 596) = 66.97, p < 0.05. As expected, the random
sets resulted in the lowest accuracy (M =65.2%) and the Clump-Far condition resulted in the highest
accuracy (M =89.9%) (Figure 3A).
Figure 3B shows how well the test responses agree with the Bayes classifier. The proportional
agreement was significantly different across conditions, F (3, 596) = 9.16, p < 0.05. The
Clump-Far and Spread conditions were significantly different from the Clump-Near condition,
t(228.05) = 3.22, p < 0.05 and t(243.84) = 4.21, p < 0.05, respectively and the Random condition, t(290.84) = 2.39, p < 0.05 and t(297.37) = 3.71, p < 0.05, respectively. The Clump-Far
and the Spread conditions did not differ, t(294.32) = 1.55, p ≥ 0.12. This result shows that the
subjects in the Clump-Far and Spread conditions performed more similarly to the Bayes classifier than
the subjects in the other two conditions.
Individual test response inconsistency can be calculated using the number of neighboring stimuli that
are categorized in opposite categories [3]. This measure of inconsistency attempts to quantify the
stochastic memory retrieval; higher inconsistency reflects noisier memory sampling. The inconsistency significantly differed between the conditions, F(3, 596) = 7.73, p < 0.05 (Figure 3C).
Both Clump-Far and Spread teaching sets showed lower inconsistency, suggesting that those teaching sets lead to less noisy memory sampling. The inconsistencies for these two conditions did
not differ significantly, two-sample t test, t(290.42) = 1.54, p ≥ 0.12. Inconsistencies in conditions Clump-Far and Spread significantly differed from Clump-Near, t(281.7) = −2.53, p < 0.05
and t(291.04) = −2.58, p < 0.05, respectively, and Random, t(259.18) = −3.98, p < 0.05 and
t(272.12) = −4.14, p < 0.05, respectively.
We then calculated the test loss for each subject as \sum_{i=1}^{m} (1 − p(h_i | z_i)), where h_i is the response for
the stimulus z_i. Figure 4 compares the observed and estimated test performance (i.e., 1 − loss()) in
four conditions. Overall, human performance is more closely followed by the low-capacity GCM.
The human performance across four conditions was significantly different, F (3, 596) = 11.15, p <
0.05. The conditions Clump-Far and Spread did not significantly differ, t(295.96) = −0.8, p ≥
Figure 4: Empirical test performance of human learners and of the low- and high-capacity GCM on the four teaching conditions (Clump-Far, Spread, Clump-Near, Random). Test performance is measured as 1 − loss() (see (3)); vertical scale 0.50 to 0.75. Humans follow the low-capacity GCM more closely. The error bars are 95% confidence intervals.
0.42. Test performance in the Clump-Far and Spread conditions significantly differed from the Clump-Near condition, t(226.9) = 4.12, p < 0.05 and t(287.97) = 2.19, p < 0.05, respectively, and
the Random condition, t(238.41) = 4.59, p < 0.05 and t(294.72) = 2.85, p < 0.05, respectively.
Humans performed significantly worse in the Clump-Near condition than in the Random condition,
t(253.94) = −2.394, p < 0.05. A similar pattern was observed for the low-capacity GCM, while the
opposite for the high-capacity GCM. Inconsistency, as defined above, significantly correlated with
the test loss, Pearson's r = 0.56, t(148) = 8.34, p < 0.05. Taken together, these results provide
support for the low-capacity account of human decision making [3].
In order to check whether the variability within the training set is predictive of test performance, we
correlated the observed test loss with the estimated loss for the subjects in the Random condition.
We observed a significant correlation between the test loss and the estimated loss for both low- and
high-capacity models, Pearson's r = 0.273, t(148) = 3.45, p < 0.05 and r = 0.203, t(148) =
2.52, p < 0.05, respectively. This result indicates that, due to their limited capacity, human learners
benefit from lower variability in the training sets, i.e. idealization.
The individual median reaction time in the training phase significantly differed across teaching conditions, F (3, 596) = 10.66, p < 0.05. The training median reaction time for the Clump-Far condition was the shortest (M =761 ms, SD=223) and differed significantly from all other conditions,
two-sample t tests, all p < 0.05. Other conditions did not differ significantly from each other.
The individual median reaction times in the test phase (M =767 ms, SD=187) did not differ across
teaching conditions, F(3, 596) = 0.95, p ≥ 0.42.
Taken together, our results suggest that the recommendations of the machine teacher for the low-capacity GCM are indeed effective for human learners. Furthermore, the observed lower inconsistency in this condition suggests that the machine teacher is performing idealization, which aids by
reducing noise in the stochastic memory sampling process.
5 Discussion
A major aim of cognitive science is to understand human learning and to improve learning performance. We devised an optimal teacher for human category learning, a fundamental problem in
cognitive science. Based on recent research, we focused on GCM, which models the limited human capacity for exemplar retrieval during decision making. We developed the optimal teaching sets for
the low- and high-capacity variants of the GCM learner. By using a 1D category learning task, we
have shown that the optimal teaching set for the low-capacity GCM is clumped, symmetrical and
located far from the decision boundary, which is intuitively easy to learn. This provides a normative
basis (given capacity limits) for the idealization procedures that reduce saliency of ambiguous cases
[2, 3]. The optimal teaching set indeed proved effective for human learning.
Future work will pursue several extensions. One interesting topic not considered here is how the
order of training examples affects learning. One possibility is that the optimal teacher will recommend easy examples earlier in training and then gradually progress to harder cases [2, 21]. Another
important extension is the use of multi-dimensional stimuli.
Acknowledgments
The authors are thankful to the anonymous reviewers for their comments. This work is partly supported by the Leverhulme Trust grant RPG-2014-075 to BCL, National Science Foundation grant
IIS-0953219 to XZ and WT-MIT fellowship 103811AIA to KRP.
References
[1] P Shafto and N Goodman. A Bayesian Model of Pedagogical Reasoning. In AAAI Fall Symposium: Naturally-Inspired Artificial Intelligence '08, pages 101-102, 2008.
[2] Y Bengio, J Louradour, R Collobert, and J Weston. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning - ICML '09, pages 1-8, New York, USA, June 2009. ACM Press.
[3] G Giguère and B C Love. Limits in decision making arise from limits in memory retrieval. Proceedings of the National Academy of Sciences of the United States of America, 110(19):7613-8, May 2013.
[4] A N Hornsby and B C Love. Improved classification of mammograms following idealized training. Journal of Applied Research in Memory and Cognition, 3:72-76, 2014.
[5] J Q Candela, M Sugiyama, A Schwaighofer, and N D Lawrence, editors. Dataset Shift in Machine Learning. MIT Press, first edition, 2009.
[6] X Zhu. Machine Teaching for Bayesian Learners in the Exponential Family. In Advances in Neural Information Processing Systems, pages 1905-1913, 2013.
[7] S A Goldman and M J Kearns. On the Complexity of Teaching. Journal of Computer and System Sciences, 50(1):20-31, 1995.
[8] F Khan, X Zhu, and B Mutlu. How Do Humans Teach: On Curriculum Learning and Teaching Dimension. In Advances in Neural Information Processing Systems, pages 1449-1457, 2011.
[9] F J Balbach and T Zeugmann. Recent Developments in Algorithmic Teaching. In A H Dediu, A M Ionescu, and C Martín-Vide, editors, Language and Automata Theory and Applications, volume 5457 of Lecture Notes in Computer Science, pages 1-18. Springer, Berlin-Heidelberg, March 2009.
[10] M Cakmak and M Lopes. Algorithmic and Human Teaching of Sequential Decision Tasks. In AAAI Conference on Artificial Intelligence (AAAI-12), July 2012.
[11] R Lindsey, M Mozer, W J Huggins, and H Pashler. Optimizing Instructional Policies. In Advances in Neural Information Processing Systems, pages 2778-2786, 2013.
[12] B C Love. Categorization. In K N Ochsner and S M Kosslyn, editors, Oxford Handbook of Cognitive Neuroscience, pages 342-358. Oxford University Press, 2013.
[13] D L Medin and M M Schaffer. Context theory of classification learning. Psychological Review, 85(3):207-238, 1978.
[14] R M Nosofsky. Attention, similarity, and the identification-categorization relationship. Journal of Experimental Psychology: General, 115(1):39-61, March 1986.
[15] M L Mack, A R Preston, and B C Love. Decoding the brain's algorithm for categorization from its neural implementation. Current Biology, 23:2023-2027, 2013.
[16] Y Chen, E K Garcia, M R Gupta, A Rahimi, and L Cazzanti. Similarity-based Classification: Concepts and Algorithms. The Journal of Machine Learning Research, 10:747-776, December 2009.
[17] F Jäkel, B Schölkopf, and F A Wichmann. Does cognitive science need kernels? Trends in Cognitive Sciences, 13(9):381-388, 2009.
[18] R M Nosofsky and T J Palmeri. An exemplar-based random walk model of speeded classification. Psychological Review, 104(2):266-300, April 1997.
[19] M Buhrmester, T Kwang, and S D Gosling. Amazon's Mechanical Turk: A New Source of Inexpensive, Yet High-Quality, Data? Perspectives on Psychological Science, 6(1):3-5, February 2011.
[20] M J C Crump, J V McDonnell, and T M Gureckis. Evaluating Amazon's Mechanical Turk as a tool for experimental behavioral research. PLoS ONE, 8(3):e57410, January 2013.
[21] H Pashler and M C Mozer. When does fading enhance perceptual category learning? Journal of Experimental Psychology: Learning, Memory, and Cognition, 39(4):1162-73, July 2013.
5,017 | 5,542 | Recurrent Models of Visual Attention
Volodymyr Mnih
Nicolas Heess Alex Graves
Google DeepMind
Koray Kavukcuoglu
{vmnih,heess,gravesa,korayk}@google.com
Abstract
Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of
image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting
a sequence of regions or locations and only processing the selected regions at
high resolution. Like convolutional neural networks, the proposed model has a
degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model
is non-differentiable, it can be trained using reinforcement learning methods to
learn task-specific policies. We evaluate our model on several image classification
tasks, where it significantly outperforms a convolutional neural network baseline
on cluttered images, and on a dynamic visual control problem, where it learns to
track a simple object without an explicit training signal for doing so.
1 Introduction
Neural network-based architectures have recently had great success in significantly advancing the
state of the art on challenging image classification and object detection datasets [8, 12, 19]. Their
excellent recognition accuracy, however, comes at a high computational cost both at training and
testing time. The large convolutional neural networks typically used currently take days to train on
multiple GPUs even though the input images are downsampled to reduce computation [12]. In the
case of object detection processing a single image at test time currently takes seconds when running
on a single GPU [8, 19] as these approaches effectively follow the classical sliding window paradigm
from the computer vision literature where a classifier, trained to detect an object in a tightly cropped
bounding box, is applied independently to thousands of candidate windows from the test image at
different positions and scales. Although some computations can be shared, the main computational
expense for these models comes from convolving filter maps with the entire input image, therefore
their computational complexity is at least linear in the number of pixels.
One important property of human perception is that one does not tend to process a whole scene
in its entirety at once. Instead humans focus attention selectively on parts of the visual space to
acquire information when and where it is needed, and combine information from different fixations
over time to build up an internal representation of the scene [18], guiding future eye movements
and decision making. Focusing the computational resources on parts of a scene saves "bandwidth"
as fewer "pixels" need to be processed. But it also substantially reduces the task complexity, as
the object of interest can be placed in the center of the fixation and irrelevant features of the visual
environment ("clutter") outside the fixated region are naturally ignored.
In line with its fundamental role, the guidance of human eye movements has been extensively studied
in neuroscience and cognitive science literature. While low-level scene properties and bottom up
processes (e.g. in the form of saliency; [11]) play an important role, the locations on which humans
fixate have also been shown to be strongly task specific (see [9] for a review and also e.g. [15, 22]). In
this paper we take inspiration from these results and develop a novel framework for attention-based
task-driven visual processing with neural networks. Our model considers attention-based processing
of a visual scene as a control problem and is general enough to be applied to static images, videos,
or as a perceptual module of an agent that interacts with a dynamic visual environment (e.g. robots,
computer game playing agents).
The model is a recurrent neural network (RNN) which processes inputs sequentially, attending to
different locations within the images (or video frames) one at a time, and incrementally combines
information from these fixations to build up a dynamic internal representation of the scene or environment. Instead of processing an entire image or even bounding box at once, at each step, the model
selects the next location to attend to based on past information and the demands of the task. Both
the number of parameters in our model and the amount of computation it performs can be controlled
independently of the size of the input image, which is in contrast to convolutional networks whose
computational demands scale linearly with the number of image pixels. We describe an end-to-end
optimization procedure that allows the model to be trained directly with respect to a given task and
to maximize a performance measure which may depend on the entire sequence of decisions made by
the model. This procedure uses backpropagation to train the neural-network components and policy
gradient to address the non-differentiabilities due to the control problem.
We show that our model can learn effective task-specific strategies for where to look on several
image classification tasks as well as a dynamic visual control problem. Our results also suggest that
an attention-based model may be better than a convolutional neural network at both dealing with
clutter and scaling up to large input images.
2 Previous Work
Computational limitations have received much attention in the computer vision literature. For instance, for object detection, much work has been dedicated to reducing the cost of the widespread
sliding window paradigm, focusing primarily on reducing the number of windows for which the
full classifier is evaluated, e.g. via classifier cascades (e.g. [7, 24]), removing image regions from
consideration via a branch and bound approach on the classifier output (e.g. [13]), or by proposing
candidate windows that are likely to contain objects (e.g. [1, 23]). Even though substantial speedups
may be obtained with such approaches, and some of these can be combined with or used as an add-on
to CNN classifiers [8], they remain firmly rooted in the window classifier design for object detection
and only exploit past information to inform future processing of the image in a very limited way.
A second class of approaches that has a long history in computer vision and is strongly motivated
by human perception is that of saliency detectors (e.g., [11]). These approaches prioritize the processing
of potentially interesting ("salient") image regions, which are typically identified based on some
measure of local low-level feature contrast. Saliency detectors indeed capture some of the properties
of human eye movements, but they typically do not to integrate information across fixations, their
saliency computations are mostly hardwired, and they are based on low-level image properties only,
usually ignoring other factors such as semantic content of a scene and task demands (but see [22]).
Some works in the computer vision literature and elsewhere e.g. [2, 4, 6, 14, 16, 17, 20] have embraced vision as a sequential decision task as we do here. There, as in our work, information about
the image is gathered sequentially and the decision of where to attend next is based on previous fixations of the image. [4] applies the learned Bayesian observer model from [5] to the task of object
detection. The learning framework of [5] is related to ours as they also employ a policy gradient
formulation (cf. section 3) but their overall setup is considerably more restrictive than ours and only
some parts of the system are learned.
Our work is perhaps the most similar to the other attempts to implement attentional processing in a
deep learning framework [6, 14, 17]. Our formulation which employs an RNN to integrate visual
information over time and to decide how to act is, however, more general, and our learning procedure
allows for end-to-end optimization of the sequential decision process instead of relying on greedy
action selection. We further demonstrate how the same general architecture can be used for efficient
object recognition in still images as well as to interact with a dynamic visual environment in a
task-driven way.
3 The Recurrent Attention Model (RAM)
In this paper we consider the attention problem as the sequential decision process of a goal-directed
agent interacting with a visual environment. At each point in time, the agent observes the environment only via a bandwidth-limited sensor, i.e. it never senses the environment in full. It may extract
Figure 1: A) Glimpse Sensor: Given the coordinates of the glimpse and an input image, the sensor extracts a retina-like representation ρ(x_t, l_{t-1}) centered at l_{t-1} that contains multiple resolution patches. B) Glimpse Network: Given the location (l_{t-1}) and input image (x_t), uses the glimpse sensor to extract the retina representation ρ(x_t, l_{t-1}). The retina representation and glimpse location are then mapped into a hidden space using independent linear layers parameterized by θ_g^0 and θ_g^1 respectively, using rectified units, followed by another linear layer θ_g^2 to combine the information from both components. The glimpse network f_g(·; {θ_g^0, θ_g^1, θ_g^2}) defines a trainable bandwidth-limited sensor for the attention network, producing the glimpse representation g_t. C) Model Architecture: Overall, the model is an RNN. The core network of the model f_h(·; θ_h) takes the glimpse representation g_t as input and, combining it with the internal representation at the previous time step h_{t-1}, produces the new internal state of the model h_t. The location network f_l(·; θ_l) and the action network f_a(·; θ_a) use the internal state h_t of the model to produce the next location to attend to, l_t, and the action/classification a_t, respectively. This basic RNN iteration is repeated for a variable number of steps.
information only in a local region or in a narrow frequency band. The agent can, however, actively
control how to deploy its sensor resources (e.g. choose the sensor location). The agent can also
affect the true state of the environment by executing actions. Since the environment is only partially
observed the agent needs to integrate information over time in order to determine how to act and
how to deploy its sensor most effectively. At each step, the agent receives a scalar reward (which
depends on the actions the agent has executed and can be delayed), and the goal of the agent is to
maximize the total sum of such rewards.
This formulation encompasses tasks as diverse as object detection in static images and control problems like playing a computer game from the image stream visible on the screen. For a game, the
environment state would be the true state of the game engine and the agent's sensor would operate
on the video frame shown on the screen. (Note that for most games, a single frame would not fully
specify the game state.) The environment actions here would correspond to joystick controls, and
the reward would reflect points scored. For object detection in static images the state of the environment would be fixed and correspond to the true contents of the image. The environment action
would correspond to the classification decision (which may be executed only after a fixed number
of fixations), and the reward would reflect whether the decision is correct.
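To make this interaction protocol concrete, here is a minimal Python sketch of one episode; the Environment and Agent interfaces (reset, sense, act, step) are hypothetical placeholders of our own, not part of the model code or any library.

def run_episode(env, agent, max_steps=10):
    """Roll out one episode: sense, act, collect reward; returns R = sum_t r_t."""
    obs = env.reset()                          # initial partial observation x_1
    total_reward = 0.0
    for _ in range(max_steps):
        glimpse = agent.sense(obs)             # bandwidth-limited sensor (e.g. a retina patch)
        action, location = agent.act(glimpse)  # update internal state, choose a_t and l_t
        obs, reward, done = env.step(action, location)
        total_reward += reward
        if done:
            break
    return total_reward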
3.1 Model
The agent is built around a recurrent neural network as shown in Fig. 1. At each time step, it
processes the sensor data, integrates information over time, and chooses how to act and how to
deploy its sensor at the next time step:
Sensor: At each step t the agent receives a (partial) observation of the environment in the form of
an image x_t. The agent does not have full access to this image but rather can extract information
from x_t via its bandwidth-limited sensor ρ, e.g. by focusing the sensor on some region or frequency
band of interest.
In this paper we assume that the bandwidth-limited sensor extracts a retina-like representation
ρ(x_t, l_{t−1}) around location l_{t−1} from image x_t. It encodes the region around l at high resolution
but uses a progressively lower resolution for pixels further from l, resulting in a vector of much
lower dimensionality than the original image x. We will refer to this low-resolution representation
as a glimpse [14]. The glimpse sensor is used inside what we call the glimpse network f_g to produce
the glimpse feature vector g_t = f_g(x_t, l_{t−1}; θ_g) where θ_g = {θ_g^0, θ_g^1, θ_g^2} (Fig. 1B).
Internal state: The agent maintains an internal state which summarizes information extracted from
the history of past observations; it encodes the agent's knowledge of the environment and is instrumental in deciding how to act and where to deploy the sensor. This internal state is formed
by the hidden units h_t of the recurrent neural network and updated over time by the core network:
h_t = f_h(h_{t−1}, g_t; θ_h). The external input to the network is the glimpse feature vector g_t.
Actions: At each step, the agent performs two actions: it decides how to deploy its sensor via the
sensor control l_t, and an environment action a_t which might affect the state of the environment.
The nature of the environment action depends on the task. In this work, the location actions are
chosen stochastically from a distribution parameterized by the location network f_l(h_t; θ_l) at time t:
l_t ∼ p(·|f_l(h_t; θ_l)). The environment action a_t is similarly drawn from a distribution conditioned
on a second network output, a_t ∼ p(·|f_a(h_t; θ_a)). For classification it is formulated using a softmax
output, and for dynamic environments its exact formulation depends on the action set defined for
that particular environment (e.g. joystick movements, motor control, ...). Finally, our model can
also be augmented with an additional action that decides when it will stop taking glimpses. This
could, for example, be used to learn a cost-sensitive classifier by giving the agent a negative reward
for each glimpse it takes, forcing it to trade off making correct classifications with the cost of taking
more glimpses.
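For concreteness, a small numpy sketch of how the two actions could be sampled as just described, with a fixed-variance Gaussian for the location and a softmax for a discrete environment action; all weight matrices and sizes here are illustrative placeholders:

import numpy as np

rng = np.random.default_rng(0)

def sample_actions(h, W_l, W_a, sigma=0.1):
    """Sample l_t ~ N(f_l(h), sigma^2 I) and a_t ~ softmax(f_a(h))."""
    mean_l = W_l @ h                   # location network f_l(h): a linear layer
    l = rng.normal(mean_l, sigma)      # stochastic 2-d location
    logits = W_a @ h                   # action network f_a(h)
    p = np.exp(logits - logits.max())
    p /= p.sum()                       # softmax over discrete actions
    a = rng.choice(len(p), p=p)
    return l, a

# usage: h is the RNN state
h = rng.standard_normal(256)
W_l = rng.standard_normal((2, 256)) * 0.01
W_a = rng.standard_normal((10, 256)) * 0.01
l_t, a_t = sample_actions(h, W_l, W_a)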
Reward: After executing an action the agent receives a new visual observation of the environment
x_{t+1} and a reward signal r_{t+1}. The goal of the agent is to maximize the sum of the reward signal,
R = Σ_{t=1}^T r_t, which is usually very sparse and delayed.¹ In the case of object recognition, for
example, r_T = 1 if the object is classified correctly after T steps and 0 otherwise.
The above setup is a special instance of what is known in the RL community as a Partially Observable Markov Decision Process (POMDP). The true state of the environment (which can be static or
dynamic) is unobserved. In this view, the agent needs to learn a (stochastic) policy π((l_t, a_t)|s_{1:t}; θ)
with parameters θ that, at each step t, maps the history of past interactions with the environment,
s_{1:t} = x_1, l_1, a_1, . . . , x_{t−1}, l_{t−1}, a_{t−1}, x_t, to a distribution over actions for the current time step,
subject to the constraint of the sensor. In our case, the policy π is defined by the RNN outlined above,
and the history s_t is summarized in the state of the hidden units h_t. We will describe the specific
choices for the above components in Section 4.
3.2 Training
The parameters of our agent are given by the parameters of the glimpse network, the core network
(Fig. 1C), and the action network, θ = {θ_g, θ_h, θ_a}, and we learn these to maximize the total reward
the agent can expect when interacting with the environment.
More formally, the policy of the agent, possibly in combination with the dynamics of the environment (e.g. for game-playing), induces a distribution over possible interaction sequences s_{1:T}, and we
aim to maximize the reward under this distribution:

J(θ) = E_{p(s_{1:T}; θ)}[ Σ_{t=1}^T r_t ] = E_{p(s_{1:T}; θ)}[R],

where p(s_{1:T}; θ) depends on the policy.
Maximizing J exactly is non-trivial since it involves an expectation over the high-dimensional interaction sequences which may in turn involve unknown environment dynamics. Viewing the problem
as a POMDP, however, allows us to bring techniques from the RL literature to bear: As shown by
Williams [26], a sample approximation to the gradient is given by

∇_θ J = Σ_{t=1}^T E_{p(s_{1:T}; θ)}[ ∇_θ log π(u_t | s_{1:t}; θ) R ] ≈ (1/M) Σ_{i=1}^M Σ_{t=1}^T ∇_θ log π(u_t^i | s_{1:t}^i; θ) R^i,   (1)
where the s^i are interaction sequences obtained by running the current agent π_θ for i = 1 . . . M
episodes.
¹ Depending on the scenario it may be more appropriate to consider a sum of discounted rewards, where
rewards obtained in the distant future contribute less: R = Σ_{t=1}^T γ^{t−1} r_t. In this case we can have T → ∞.
The learning rule (1) is also known as the REINFORCE rule, and it involves running the agent with
its current policy to obtain samples of interaction sequences s_{1:T} and then adjusting the parameters
θ of our agent such that the log-probability of chosen actions that have led to high cumulative reward
is increased, while that of actions having produced low reward is decreased.
Eq. (1) requires us to compute ∇_θ log π(u_t^i | s_{1:t}^i; θ). But this is just the gradient of the RNN that
defines our agent evaluated at time step t and can be computed by standard backpropagation [25].
Variance Reduction: Equation (1) provides us with an unbiased estimate of the gradient but it may
have high variance. It is therefore common to consider a gradient estimate of the form

(1/M) Σ_{i=1}^M Σ_{t=1}^T ∇_θ log π(u_t^i | s_{1:t}^i; θ) (R_t^i − b_t),   (2)
where R_t^i = Σ_{t'=1}^T r_{t'}^i is the cumulative reward obtained following the execution of action u_t^i, and
b_t is a baseline that may depend on s_{1:t}^i (e.g. via h_t^i) but not on the action u_t^i itself. This estimate
is equal to (1) in expectation but may have lower variance. It is natural to select b_t = E_π[R_t] [21],
and this form of baseline is known as the value function in the reinforcement learning literature. The
resulting algorithm increases the log-probability of an action that was followed by a larger than
expected cumulative reward, and decreases the probability if the obtained cumulative reward was
smaller. We use this type of baseline and learn it by reducing the squared error between the R_t^i's and b_t.
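As an illustration of Eqs. (1) and (2), the numpy sketch below computes the baseline-subtracted gradient estimate for a plain linear-softmax policy π(u|s) = softmax(Ws); this toy policy stands in for the RNN, whose log-probability gradients would instead come from backpropagation:

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reinforce_gradient(W, states, actions, rewards, baseline):
    """Monte Carlo estimate of Eq. (2) for pi(u|s) = softmax(W s).

    states[i]:  array (T, d) of inputs s_t for episode i
    actions[i]: int array (T,) of sampled actions u_t
    rewards[i]: float array (T,) of rewards r_t
    baseline:   float array (T,) with b_t (e.g. a running estimate of E[R_t])
    Returns the ascent direction for W.
    """
    grad = np.zeros_like(W)
    for s_ep, u_ep, r_ep in zip(states, actions, rewards):
        R = r_ep.sum()                    # cumulative reward R^i of the episode
        for t in range(len(u_ep)):
            p = softmax(W @ s_ep[t])
            g = -np.outer(p, s_ep[t])     # d/dW log pi(u|s) = (onehot(u) - p) s^T
            g[u_ep[t]] += s_ep[t]
            grad += g * (R - baseline[t])
    return grad / len(states)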
Using a Hybrid Supervised Loss: The algorithm described above allows us to train the agent when
the 'best' actions are unknown, and the learning signal is only provided via the reward. For instance,
we may not know a priori which sequence of fixations provides the most information about an unknown
image, but the total reward at the end of an episode will give us an indication of whether the tried
sequence was good or bad.
However, in some situations we do know the correct action to take: For instance, in an object
detection task the agent has to output the label of the object as the final action. For the training
images this label will be known and we can directly optimize the policy to output the correct label
associated with a training image at the end of an observation sequence. This can be achieved, as is
common in supervised learning, by maximizing the conditional probability of the true label given
the observations from the image, i.e. by maximizing log π(a*_T | s_{1:T}; θ), where a*_T corresponds to the
ground-truth label(-action) associated with the image from which observations s_{1:T} were obtained.
We follow this approach for classification problems, where we optimize the cross-entropy loss to
train the action network f_a and backpropagate the gradients through the core and glimpse networks.
The location network f_l is always trained with REINFORCE.
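A hedged sketch of how the two learning signals could be combined in an autodiff framework such as PyTorch (function and variable names are ours, not the authors'): cross-entropy for the final classification action, REINFORCE for the sampled locations:

import torch
import torch.nn.functional as F

def hybrid_loss(class_logits, true_label, location_log_probs, reward, baseline):
    """Supervised loss on the final classification plus REINFORCE for the locations.

    class_logits:       tensor (1, n_classes), output of the action network at t = N
    true_label:         tensor (1,) with the ground-truth class index
    location_log_probs: list of scalar tensors log pi(l_t | ...) for the sampled l_t
    reward, baseline:   floats; reward is 1.0/0.0 for correct/incorrect classification
    """
    supervised = F.cross_entropy(class_logits, true_label)  # backprops through f_a, f_h, f_g
    reinforce = -(reward - baseline) * torch.stack(location_log_probs).sum()
    return supervised + reinforce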
4 Experiments
We evaluated our approach on several image classification tasks as well as a simple game. We first
describe the design choices that were common to all our experiments:
Retina and location encodings: The retina encoding ρ(x, l) extracts k square patches centered at
location l, with the first patch being g_w × g_w pixels in size, and each successive patch having twice
the width of the previous. The k patches are then all resized to g_w × g_w and concatenated. Glimpse
locations l were encoded as real-valued (x, y) coordinates² with (0, 0) being the center of the image
x and (−1, −1) being the top left corner of x.
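A minimal numpy sketch of such a retina encoding ρ(x, l) under the stated conventions; the index arithmetic and the nearest-neighbour subsampling used for resizing are our own simplifications:

import numpy as np

def retina(x, l, k=3, gw=12):
    """Multi-scale retina rho(x, l): k concentric square patches of widths
    gw, 2*gw, ..., each subsampled back to gw x gw and concatenated."""
    H, W = x.shape
    pad = gw * 2 ** (k - 1)                    # generous padding keeps patches in bounds
    padded = np.pad(x, pad)
    cy = pad + int((l[1] + 1) / 2 * (H - 1))   # map (-1, 1) coordinates to pixels;
    cx = pad + int((l[0] + 1) / 2 * (W - 1))   # the exact convention is our assumption
    patches, size = [], gw
    for _ in range(k):
        half = size // 2
        p = padded[cy - half: cy - half + size, cx - half: cx - half + size]
        patches.append(p[:: size // gw, :: size // gw])  # crude nearest-neighbour resize
        size *= 2
    return np.concatenate([p.ravel() for p in patches])

# e.g. retina(np.zeros((60, 60)), np.array([0.0, 0.0])) has length 3 * 12 * 12 = 432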
Glimpse network: The glimpse network f_g(x, l) had two fully connected layers. Let Linear(x) denote a linear transformation of the vector x, i.e. Linear(x) = Wx + b for some weight matrix W and
bias vector b, and let Rect(x) = max(x, 0) be the rectifier nonlinearity. The output g of the glimpse
network was defined as g = Rect(Linear(h_g) + Linear(h_l)) where h_g = Rect(Linear(ρ(x, l)))
and h_l = Rect(Linear(l)). The dimensionality of h_g and h_l was 128 while the dimensionality of
g was 256 for all attention models trained in this paper.
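Written out in numpy, the glimpse network forward pass under these definitions reads as follows (biases omitted and all weights random placeholders; dimensions match the 3-scale, 12 × 12 retina):

import numpy as np

rng = np.random.default_rng(0)
d_rho, d_hid, d_g = 432, 128, 256                 # retina, hidden, and glimpse dimensions

W0 = rng.standard_normal((d_hid, d_rho)) * 0.01   # theta_g^0: acts on rho(x, l)
W1 = rng.standard_normal((d_hid, 2)) * 0.01       # theta_g^1: acts on the location l
W2g = rng.standard_normal((d_g, d_hid)) * 0.01    # theta_g^2: combines h_g ...
W2l = rng.standard_normal((d_g, d_hid)) * 0.01    # ... and h_l

def rect(v):
    return np.maximum(v, 0.0)

def glimpse_network(rho_xl, l):
    """g = Rect(Linear(h_g) + Linear(h_l)) with h_g, h_l as defined in the text."""
    h_g = rect(W0 @ rho_xl)   # h_g = Rect(Linear(rho(x, l)))
    h_l = rect(W1 @ l)        # h_l = Rect(Linear(l))
    return rect(W2g @ h_g + W2l @ h_l)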
Location network: The policy for the locations l was defined by a two-component Gaussian with a
fixed variance. The location network outputs the mean of the location policy at time t and is defined
as f_l(h) = Linear(h), where h is the state of the core network/RNN.
² We also experimented with using a discrete representation for the locations l but found that it was difficult
to learn policies over more than 25 possible discrete locations.
(a) 28x28 MNIST

Model                                Error
FC, 2 layers (256 hiddens each)      1.69%
Convolutional, 2 layers              1.21%
RAM, 2 glimpses, 8 × 8, 1 scale      3.79%
RAM, 3 glimpses, 8 × 8, 1 scale      1.51%
RAM, 4 glimpses, 8 × 8, 1 scale      1.54%
RAM, 5 glimpses, 8 × 8, 1 scale      1.34%
RAM, 6 glimpses, 8 × 8, 1 scale      1.12%
RAM, 7 glimpses, 8 × 8, 1 scale      1.07%

(b) 60x60 Translated MNIST

Model                                Error
FC, 2 layers (64 hiddens each)       6.42%
FC, 2 layers (256 hiddens each)      2.63%
Convolutional, 2 layers              1.62%
RAM, 4 glimpses, 12 × 12, 3 scales   1.54%
RAM, 6 glimpses, 12 × 12, 3 scales   1.22%
RAM, 8 glimpses, 12 × 12, 3 scales   1.2%
Table 1: Classification results on the MNIST and Translated MNIST datasets. FC denotes a fully-connected network with two layers of rectifier units. The convolutional network had one layer of 8
10 × 10 filters with stride 5, followed by a fully connected layer with 256 units, with rectifiers after
each layer. Instances of the attention model are labeled with the number of glimpses, the number of
scales in the retina, and the size of the retina.
Figure 2: Examples of test cases for the Translated and Cluttered Translated MNIST tasks.
(a) Translated MNIST inputs. (b) Cluttered Translated MNIST inputs.
Core network: For the classification experiments that follow, the core f_h was a network of rectifier
units defined as h_t = f_h(h_{t−1}) = Rect(Linear(h_{t−1}) + Linear(g_t)). The experiment done on a
dynamic environment used a core of LSTM units [10].
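Continuing the numpy sketch from the glimpse network above, the corresponding rectifier core update is (weights again random placeholders):

import numpy as np

rng = np.random.default_rng(1)
Wh = rng.standard_normal((256, 256)) * 0.01   # acts on h_{t-1}
Wg2 = rng.standard_normal((256, 256)) * 0.01  # acts on g_t

def core_step(h_prev, g):
    """h_t = f_h(h_{t-1}) = Rect(Linear(h_{t-1}) + Linear(g_t)); biases omitted."""
    return np.maximum(Wh @ h_prev + Wg2 @ g, 0.0)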
4.1 Image Classification
The attention network used in the following classification experiments made a classification decision
only at the last timestep t = N. The action network f_a was simply a linear softmax classifier defined
as f_a(h) = exp(Linear(h))/Z, where Z is a normalizing constant. The RNN state vector h had
dimensionality 256. All methods were trained using stochastic gradient descent with minibatches
of size 20 and momentum of 0.9. We annealed the learning rate linearly from its initial value to 0
over the course of training. Hyperparameters such as the initial learning rate and the variance of the
location policy were selected using random search [3]. The reward at the last time step was 1 if the
agent classified correctly and 0 otherwise. The rewards for all other timesteps were 0.
Centered Digits: We first tested the ability of our training method to learn successful glimpse
policies by using it to train RAM models with up to 7 glimpses on the MNIST digits dataset. The
'retina' for this experiment was simply an 8 × 8 patch, which is only big enough to capture a part of
a digit, hence the experiment also tested the ability of RAM to combine information from multiple
glimpses. We also trained standard feedforward and convolutional neural networks with two hidden
layers as baselines. The error rates achieved by the different models on the test set are shown in
Table 1a. We see that the performance of RAM generally improves with more glimpses, and that
it eventually outperforms the baseline models trained on the full 28 × 28 centered digits. This
demonstrates that the model can successfully learn to combine information from multiple glimpses.
Non-Centered Digits: The second problem we considered was classifying non-centered digits. We
created a new task called Translated MNIST, for which data was generated by placing an MNIST
digit in a random location of a larger blank patch. Training cases were generated on the fly so the
effective training set size was 50000 (the size of the MNIST training set) multiplied by the possible
number of locations. Figure 2a contains a random sample of test cases for the 60 by 60 Translated
MNIST task. Table 1b shows the results for several different models trained on the Translated
MNIST task with 60 by 60 patches. In addition to RAM and two fully-connected networks we
also trained a network with one convolutional layer of 16 10 × 10 filters with stride 5 followed
by a rectifier nonlinearity and then a fully-connected layer of 256 rectifier units. The convolutional
network, the RAM networks, and the smaller fully connected model all had roughly the same number
of parameters. Since the convolutional network has some degree of translation invariance built in, it
(a) 60x60 Cluttered Translated MNIST

Model                                Error
FC, 2 layers (64 hiddens each)       28.58%
FC, 2 layers (256 hiddens each)      11.96%
Convolutional, 2 layers              8.09%
RAM, 4 glimpses, 12 × 12, 3 scales   4.96%
RAM, 6 glimpses, 12 × 12, 3 scales   4.08%
RAM, 8 glimpses, 12 × 12, 3 scales   4.04%
RAM, 8 random glimpses               14.4%

(b) 100x100 Cluttered Translated MNIST

Model                                Error
Convolutional, 2 layers              14.35%
RAM, 4 glimpses, 12 × 12, 4 scales   9.41%
RAM, 6 glimpses, 12 × 12, 4 scales   8.31%
RAM, 8 glimpses, 12 × 12, 4 scales   8.11%
RAM, 8 random glimpses               28.4%
Table 2: Classification on the Cluttered Translated MNIST dataset. FC denotes a fully-connected
network with two layers of rectifier units. The convolutional network had one layer of 8 10 × 10
filters with stride 5, followed by a fully connected layer with 256 units in the 60 × 60 case and
86 units in the 100 × 100 case, with rectifiers after each layer. Instances of the attention model are
labeled with the number of glimpses, the size of the retina, and the number of scales in the retina.
All models except for the big fully connected network had roughly the same number of parameters.
Figure 3: Examples of the learned policy on the 60 × 60 Cluttered Translated MNIST task. Column 1:
The input image with glimpse path overlaid in green. Columns 2-7: The six glimpses the network
chooses. The center of each image shows the full resolution glimpse, the outer low resolution areas
are obtained by upscaling the low resolution glimpses back to full image size. The glimpse paths
clearly show that the learned policy avoids computation in empty or noisy parts of the input space
and directly explores the area around the object of interest.
attains a significantly lower error rate of 1.62% than the fully connected networks. However, RAM
with 4 glimpses gets slightly better performance than the convolutional network and outperforms
it further for 6 and 8 glimpses, reaching 1.2% error. This is possible because the attention model
can focus its retina on the digit and hence learn a translation invariant policy. This experiment also
shows that the attention model is able to successfully search for an object in a big image when the
object is not centered.
Cluttered Non-Centered Digits: One of the most challenging aspects of classifying real-world
images is the presence of a wide range of clutter. Systems that operate on the entire image at full
resolution are particularly susceptible to clutter and must learn to be invariant to it. One possible
advantage of an attention mechanism is that it may make it easier to learn in the presence of clutter
by focusing on the relevant part of the image and ignoring the irrelevant part. We test this hypothesis
with several experiments on a new task we call Cluttered Translated MNIST. Data for this task was
generated by first placing an MNIST digit in a random location of a larger blank image and then
adding random 8 by 8 subpatches from other random MNIST digits to random locations of the
image. The goal is to classify the complete digit present in the image. Figure 2b shows a random
sample of test cases for the 60 by 60 Cluttered Translated MNIST task.
Table 2a shows the classification results for the models we trained on 60 by 60 Cluttered Translated
MNIST with 4 pieces of clutter. The presence of clutter makes the task much more difficult but the
performance of the attention model is affected less than the performance of the other models. RAM
with 4 glimpses reaches 4.96% error, which outperforms fully-connected models by a wide margin
and the convolutional neural network by over 3%, and RAM trained with 6 and 8 glimpses achieves
even lower error. Since RAM achieves larger relative error improvements over a convolutional
network in the presence of clutter, these results suggest that attention-based models may be better at
dealing with clutter than convolutional networks because they can simply ignore it by not looking at
it. Two samples of the learned policy are shown in Figure 3 and more are included in the supplementary
materials. The first column shows the original data point with the glimpse path overlaid. The
location of the first glimpse is marked with a filled circle and the location of the final glimpse is
marked with an empty circle. The intermediate points on the path are traced with solid straight lines.
Each consecutive image to the right shows a representation of the glimpse that the network sees. It
can be seen that the learned policy can reliably find and explore around the object of interest while
avoiding clutter at the same time. Finally, Table 2a also includes results for an 8-glimpse RAM
model that selects glimpse locations uniformly at random. RAM models that learn the glimpse
policy achieve much lower error rates even with half as many glimpses.
To further test this hypothesis we also performed experiments on 100 by 100 Cluttered Translated
MNIST with 8 pieces of clutter. The test errors achieved by the models we compared are shown
in Table 2b. The results show similar improvements of RAM over a convolutional network. It has
to be noted that the overall capacity and the amount of computation of our model do not change
from 60 × 60 images to 100 × 100, whereas the hidden layer of the convolutional network that is
connected to the linear layer grows linearly with the number of pixels in the input.
4.2 Dynamic Environments
One appealing property of the recurrent attention model is that it can be applied to videos or interactive problems with a visual input just as easily as to static image tasks. We test the ability of our
approach to learn a control policy in a dynamic visual environment while perceiving the environment
through a bandwidth-limited retina by training it to play a simple game. The game is played on a 24
by 24 screen of binary pixels and involves two objects: a single pixel that represents a ball falling
from the top of the screen while bouncing off the sides of the screen and a two-pixel paddle positioned at the bottom of the screen which the agent controls with the aim of catching the ball. When
the falling pixel reaches the bottom of the screen the agent gets a reward of 1 if the paddle
overlaps with the ball and a reward of 0 otherwise. The game then restarts from the beginning.
We trained the recurrent attention model to play the game of 'Catch' using only the final reward
as input. The network had a 6 by 6 retina at three scales as its input, which means that the agent
had to capture the ball in the 6 by 6 highest resolution region in order to know its precise position.
In addition to the two location actions, the attention model had three game actions (left, right, and
do nothing) and the action network fa used a linear softmax to model a distribution over the game
actions. We used a core network of 256 LSTM units.
We performed random search to find suitable hyper-parameters and trained each agent for 20 million frames. A video of the best agent, which catches the ball roughly 85% of the time, can
be downloaded from http://www.cs.toronto.edu/?vmnih/docs/attention.mov.
The video shows that the recurrent attention model learned to play the game by tracking the ball
near the bottom of the screen. Since the agent was not in any way told to track the ball and was
only rewarded for catching it, this result demonstrates the ability of the model to learn effective
task-specific attention policies.
5 Discussion
This paper introduced a novel visual attention model that is formulated as a single recurrent neural
network which takes a glimpse window as its input and uses the internal state of the network to
select the next location to focus on as well as to generate control signals in a dynamic environment.
Although the model is not differentiable, the proposed unified architecture is trained end-to-end
from pixel inputs to actions using a policy gradient method. The model has several appealing properties. First, both the number of parameters and the amount of computation RAM performs can
be controlled independently of the size of the input images. Second, the model is able to ignore
clutter present in an image by centering its retina on the relevant regions. Our experiments show that
RAM significantly outperforms a convolutional architecture with a comparable number of parameters on a cluttered object classification task. Additionally, the flexibility of our approach allows for
a number of interesting extensions. For example, the network can be augmented with another action
that allows it to terminate at any time point and make a final classification decision. Our preliminary
experiments show that this allows the network to learn to stop taking glimpses once it has enough information to make a confident classification. The network can also be allowed to control the scale at
which the retina samples the image allowing it to fit objects of different size in the fixed size retina.
In both cases, the extra actions can be simply added to the action network fa and trained using the
policy gradient procedure we have described. Given the encouraging results achieved by RAM, applying the model to large scale object recognition and video classification is a natural direction for
future work.
References
[1] Bogdan Alexe, Thomas Deselaers, and Vittorio Ferrari. What is an object? In CVPR, 2010.
[2] Bogdan Alexe, Nicolas Heess, Yee Whye Teh, and Vittorio Ferrari. Searching for objects driven by context. In NIPS, 2012.
[3] James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. The Journal of Machine Learning Research, 13:281-305, 2012.
[4] Nicholas J. Butko and Javier R. Movellan. Optimal scanning for faster object detection. In CVPR, 2009.
[5] N.J. Butko and J.R. Movellan. I-POMDP: An infomax model of eye movement. In Proceedings of the 7th IEEE International Conference on Development and Learning, ICDL '08, pages 139-144, 2008.
[6] Misha Denil, Loris Bazzani, Hugo Larochelle, and Nando de Freitas. Learning where to attend with deep architectures for image tracking. Neural Computation, 24(8):2151-2184, 2012.
[7] Pedro F. Felzenszwalb, Ross B. Girshick, and David A. McAllester. Cascade object detection with deformable part models. In CVPR, 2010.
[8] Ross B. Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. CoRR, abs/1311.2524, 2013.
[9] Mary Hayhoe and Dana Ballard. Eye movements in natural behavior. Trends in Cognitive Sciences, 9(4):188-194, 2005.
[10] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
[11] L. Itti, C. Koch, and E. Niebur. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(11):1254-1259, 1998.
[12] Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, pages 1106-1114, 2012.
[13] Christoph H. Lampert, Matthew B. Blaschko, and Thomas Hofmann. Beyond sliding windows: Object localization by efficient subwindow search. In CVPR, 2008.
[14] Hugo Larochelle and Geoffrey E. Hinton. Learning to combine foveal glimpses with a third-order Boltzmann machine. In NIPS, 2010.
[15] Stefan Mathe and Cristian Sminchisescu. Action from still image dataset and inverse optimal control to learn task specific visual scanpaths. In NIPS, 2013.
[16] Lucas Paletta, Gerald Fritz, and Christin Seifert. Q-learning of sequential attention for visual object recognition from informative local descriptors. In CVPR, 2005.
[17] M. Ranzato. On Learning Where To Look. ArXiv e-prints, 2014.
[18] Ronald A. Rensink. The dynamic representation of scenes. Visual Cognition, 7(1-3):17-42, 2000.
[19] Pierre Sermanet, David Eigen, Xiang Zhang, Michaël Mathieu, Rob Fergus, and Yann LeCun. OverFeat: Integrated recognition, localization and detection using convolutional networks. CoRR, abs/1312.6229, 2013.
[20] Kenneth O. Stanley and Risto Miikkulainen. Evolving a roving eye for Go. In GECCO, 2004.
[21] Richard S. Sutton, David McAllester, Satinder Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In NIPS, pages 1057-1063. MIT Press, 2000.
[22] Antonio Torralba, Aude Oliva, Monica S. Castelhano, and John M. Henderson. Contextual guidance of eye movements and attention in real-world scenes: the role of global features in object search. Psychol Rev, pages 766-786, 2006.
[23] K.E.A. van de Sande, J.R.R. Uijlings, T. Gevers, and A.W.M. Smeulders. Segmentation as selective search for object recognition. In ICCV, 2011.
[24] Paul A. Viola and Michael J. Jones. Rapid object detection using a boosted cascade of simple features. In CVPR, 2001.
[25] Daan Wierstra, Alexander Foerster, Jan Peters, and Juergen Schmidhuber. Solving deep memory POMDPs with recurrent policy gradients. In ICANN, 2007.
[26] R.J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3):229-256, 1992.
Unsupervised learning of an efficient short-term
memory network
Pietro Vertechi
Wieland Brendel*
Christian K. Machens
Champalimaud Neuroscience Programme
Champalimaud Centre for the Unknown
Lisbon, Portugal
[email protected]
Abstract
Learning in recurrent neural networks has been a topic fraught with difficulties
and problems. We here report substantial progress in the unsupervised learning of
recurrent networks that can keep track of an input signal. Specifically, we show
how these networks can learn to efficiently represent their present and past inputs,
based on local learning rules only. Our results are based on several key insights.
First, we develop a local learning rule for the recurrent weights whose main aim
is to drive the network into a regime where, on average, feedforward signal inputs
are canceled by recurrent inputs. We show that this learning rule minimizes a cost
function. Second, we develop a local learning rule for the feedforward weights
that, based on networks in which recurrent inputs already predict feedforward
inputs, further minimizes the cost. Third, we show how the learning rules can be
modified such that the network can directly encode non-whitened inputs. Fourth,
we show that these learning rules can also be applied to a network that feeds a
time-delayed version of the network output back into itself. As a consequence,
the network starts to efficiently represent both its signal inputs and their history.
We develop our main theory for linear networks, but then sketch how the learning
rules could be transferred to balanced, spiking networks.
1 Introduction
Many brain circuits are known to maintain information over short periods of time in the firing of
their neurons [15]. Such 'persistent activity' is likely to arise through reverberation of activity due
to recurrent synapses. While many recurrent network models have been designed that remain active
after transient stimulation, such as hand-designed attractor networks [21, 14] or randomly generated
reservoir networks [10, 13], how neural networks can learn to remain active is less well understood.
The problem of learning to remember the input history has mostly been addressed in supervised
learning of recurrent networks. The classical approaches are based on backpropagation through
time [22, 6]. However, apart from convergence issues, backpropagation through time is not a feasible method for biological systems. More recent work has drawn attention to random recurrent
neural networks, which already provide a reservoir of time constants that allow to store and read out
memories [10, 13]. Several studies have focused on the question of how to optimize such networks
to the task at hand (see [12] for a review), however, the generality of the underlying learning rules
is often not fully understood, since many rules are not based on analytical results or convergence
proofs.
* Current address: Centre for Integrative Neuroscience, University of Tübingen, Germany
The unsupervised learning of short-term memory systems, on the other hand, is largely uncharted
territory. While there have been several 'bottom-up' studies that use biologically realistic learning
rules and simulations (see e.g. [11]), we are not aware of any analytical results based on local
learning rules.
Here we report substantial progress in following through a normative, 'top-down' approach that
results in a recurrent neural network with local synaptic plasticity. This network learns how to
efficiently remember an input and its history. The learning rules are largely Hebbian or covariance-based, but separate recurrent and feedforward inputs. Based on recent progress in deriving
integrate-and-fire neurons from optimality principles [3, 4], we furthermore sketch how an equivalent spiking
network with local learning rules could be derived. Our approach generalizes analogous work in the
setting of efficient coding of an instantaneous signal, as developed in [16, 19, 23, 4, 1].
2 The autoencoder revisited
We start by recapitulating the autoencoder network shown in Fig. 1a. The autoencoder transforms a
K-dimensional input signal, x, into a set of N firing rates, r, while obeying two constraints. First,
the input signal should be reconstructable from the output firing rates. A common assumption is that
the input can be recovered through a linear decoder, D, so that
x ≈ x̂ = Dr.   (1)
Second, the output firing rates, r, should provide an optimal or efficient representation of the input
signals. This optimality can be measured by defining a cost C(r) for the representation r. For
simplicity, we will in the following assume that the costs are quadratic (L2), although linear (L1)
costs in the firing rates could easily be accounted for as well. We note that autoencoder networks
are sometimes assumed to reduce the dimensionality of the input (undercomplete case, N < K) and
sometimes assumed to increase the dimensionality (overcomplete case, N > K). Our results apply
to both cases.
The optimal set of firing rates for a given input signal can then be found by minimizing the loss
function,

L = (1/2) ‖x − Dr‖² + (μ/2) ‖r‖²,   (2)

with respect to the firing rates r. Here, the first term is the error between the reconstructed input
signal, x̂ = Dr, and the actual stimulus, x, while the second term corresponds to the 'cost' of
the signal representation. The minimization can be carried out via gradient descent, resulting in the
differential equation

ṙ = −∂L/∂r = −μr + Dᵀx − DᵀDr.   (3)
This differential equation can be interpreted as a neural network with a 'leak', −μr, feedforward
connections, F = Dᵀ, and recurrent connections, Ω = DᵀD. The derivation of neural networks
from quadratic loss functions was first introduced by Hopfield [7, 8], and the link to the autoencoder
was pointed out in [19]. Here, we have chosen a quadratic cost term which results in a linear
differential equation. Depending on the precise nature of the cost term, one can also obtain nonlinear differential equations, such as the Wilson-Cowan equations [19, 8]. Here, we will first focus
on linear networks, in which case 'firing rates' can be both positive and negative. Further below,
we will also show how our results can be generalized to networks with positive firing rates and to
networks in which neurons spike.
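To make Eq. (3) concrete, the following numpy sketch integrates the rate dynamics to equilibrium for a random decoder; all sizes and constants are illustrative:

import numpy as np

rng = np.random.default_rng(0)
K, N, mu, dt = 10, 40, 0.1, 0.01
D = rng.standard_normal((K, N)) / np.sqrt(N)       # random decoder
x = rng.standard_normal(K)                         # one (slowly varying) input signal

r = np.zeros(N)
for _ in range(5000):                              # fast time scale: rates equilibrate
    r += dt * (-mu * r + D.T @ x - D.T @ (D @ r))  # Eq. (3)

print(np.linalg.norm(x - D @ r))                   # reconstruction error; small for small mu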
In the case of arbitrarily small costs, the network can be understood as implementing predictive
coding [17]. The reconstructed ('predicted') input signal, x̂ = Dr, is subtracted from the actual
input signal, x, see Fig. 1b. Predictive coding here enforces a cancellation or 'balance' between the
feedforward and recurrent synaptic inputs. If we assume that the actual input is excitatory, for
instance, then the predicted input is mediated through recurrent lateral inhibition. Recent work has
shown that this cancellation can be mediated by the detailed balance of currents in spiking networks
[3, 1], a result we will return to later on.
[Figure 1 diagram: panels (a)-(c); see caption below.]
Figure 1: Autoencoders. (a) Feedforward network. The input signal x is multiplied with the feedforward
weights F. The network generates output firing rates r. (b) Recurrent network. The left panel shows how the
reconstructed input signal x̂ = Dr is fed back and subtracted from the original input signal x. The right panel
shows that this subtraction can also be performed through recurrent connections FD. For the optimal network,
we set F = Dᵀ. (c) Recurrent network with delayed feedback. Here, the output firing rates are fed back
with a delay. This delayed feedback acts as just another input signal, and is thereby re-used, thus generating
short-term memory.
3 Unsupervised learning of the autoencoder with local learning rules
The transformation of the input signal, x, into the output firing rate, r, is largely governed by the
decoder, D, as can be seen in Eq. (3). When the inputs are drawn from a particular distribution, p(x),
such as the distribution of natural images or natural sounds, some decoders will lead to a smaller
average loss and better performance. The average loss is given by
⟨L⟩ = (1/2) ⟨‖x − Dr‖²⟩ + (μ/2) ⟨‖r‖²⟩   (4)
where the angular brackets denote an average over many signal presentations. In practice, x will
generally be centered and whitened. While it is straightforward to minimize this average loss with
respect to the decoder, D, biological networks face a different problem.¹ A general recurrent neural
network is governed by the firing rate dynamics

ṙ = −μr + Fx − Ωr,   (5)

and therefore has no access to the decoder, D, but only to its feedforward weights, F, and its
recurrent weights, Ω. Furthermore, any change in F and Ω must rely solely on information that is
locally available to each synapse.
We will assume that matrix Ω is initially chosen such that the dynamical system is stable, in which
case its equilibrium state is given by

Fx = μr + Ωr.   (6)
If the dynamics of the input signal x are slow compared to the firing rate dynamics of the autoencoder, the network will generally operate close to equilibrium. We will assume that this is the case,
and show that this assumption helps us to bridge from these firing rate networks to spiking networks
further below.
A priori, it is not clear how to change the feedforward weights, F, or the recurrent weights, Ω, since
neither appears in the average loss function, Eq. (4). We might be inclined to solve Eq. (6) for r
and plug the result into Eq. (4). However, we then have to operate on matrix inverses, the resulting
gradients imply heavily non-local synaptic operations, and we would still need to somehow eliminate
the decoder, D, from the picture.
Here, we follow a different approach. We note that the optimal target network in the previous section
implements a form of predictive coding. We therefore suggest a two-step approach to the learning
problem. First, we fix the feedforward weights and we set up learning rules for the recurrent weights
such that the network moves into a regime where the inputs, Fx, are predicted or 'balanced' by the
recurrent weights, Ωr, see Fig. 1b. In this case, Ω = FD, and this will be our first target for
learning. Second, once Ω is learnt, we change the feedforward weights F to decrease the average
loss even further. We then return to step 1 and iterate.
¹ Note that minimization of the average loss with respect to D requires either a hard or a soft normalization
constraint on D.
Since F is assumed constant in step 1, we can reach the target Ω = FD by investigating how the
decoder D needs to change. The respective learning equation for D can then be translated into
a learning equation for Ω, which will directly link the learning of Ω to the minimization of the
loss function, Eq. (4). One thing to keep in mind, however, is that any change in Ω will cause a
compensatory change in r such that Eq. (6) remains fulfilled. These changes are related through the
equation

δΩ r + (Ω + μI) δr = 0,   (7)

which is obtained by taking the derivative of Eq. (6) and remembering that x changes on much
slower time scales, and can therefore be considered a constant. In consequence, we have to consider
the combined change of the recurrent weights, Ω, and the equilibrium firing rate, r, in order to
reduce the average loss.
Let us assume a small change of D in the direction ΔD = xrᵀ, which is equivalent to simply
decreasing x in the first term of Eq. (4). Such a small change can be translated into the following
learning rule for D,

Ḋ = ε (xrᵀ − αD),   (8)

where ε is sufficiently small to make the learning slow compared to the dynamics of the input signals
x = x(t). The 'weight decay' term, −αD, acts as a soft normalization or regularizer on D. In turn,
to have the recurrent weights Ω move towards FD, we multiply with F from the left to obtain the
learning rule²

Ω̇ = ε (Fxrᵀ − αΩ).   (9)

Importantly, this learning rule is completely local: it only rests on information that is available to
each synapse, namely the presynaptic firing rates, r, and the postsynaptic input signal, Fx.
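For illustration, one step of the recurrent learning rule, Eq. (9), in numpy, with the rates taken at the equilibrium of Eq. (6); ε and α are small illustrative constants (the feedforward rule, Eq. (13) below, would be added analogously on a slower time scale):

import numpy as np

def equilibrium_rates(F, Omega, x, mu):
    """Solve Fx = mu r + Omega r, Eq. (6), for the equilibrium rates r."""
    N = Omega.shape[0]
    return np.linalg.solve(mu * np.eye(N) + Omega, F @ x)

def recurrent_learning_step(F, Omega, x, mu, eps=1e-3, alpha=1.0):
    """One Euler step of Eq. (9): Omega_dot = eps (Fx r^T - alpha Omega)."""
    r = equilibrium_rates(F, Omega, x, mu)
    Omega += eps * (np.outer(F @ x, r) - alpha * Omega)
    return Omega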
Finally, we show that the 'unnormalized' learning rule decreases the loss function. As noted above,
any change of Ω causes a change in the equilibrium firing rate, see Eq. (7). By plugging the unnormalized learning rule for Ω, namely Fxrᵀ, into Eq. (7), and by remembering that Fx = μr + Ωr,
we obtain

ṙ = −‖r‖² r.   (10)

So, to first order, the firing rates decay in the direction of r. In turn, the temporal derivative of the
loss function,

d⟨L⟩/dt = ⟨ (−Ḋr − Dṙ)ᵀ (x − Dr) + μ ṙᵀr ⟩   (11)
        = ⟨ −‖r‖² (x − Dr)ᵀ(x − Dr) − μ‖r‖⁴ ⟩,   (12)

is always negative so that the unnormalized learning rule for Ω decreases the error. We then subtract
the term αΩ (thus reducing the norm of the matrix but not changing the direction) as a 'soft normalization' to prevent it from going to infinity. Note that the argument here rests on the parallelism
of the learning of D and Ω. The decoder, D, however, is merely a hypothetical quantity that does
not have a physical counterpart in the network.
In step 2, we assume that the recurrent weights have reached their target, Ω = FD, and we learn
the feedforward weights. For that we notice that in the absolute minimum, as shown in the previous
section, the feedforward weights become F = Dᵀ. Hence, the target for the feedforward weights
should be the transpose of the decoder. Over long time intervals, the expected decoder is simply
D = ⟨xrᵀ⟩/α, since that is the fixed point of the decoder learning rule, Eq. (8). Hence, we suggest
to learn the feedforward weights on a yet slower time scale η, according to

Ḟ = η (rxᵀ − αF),   (13)

where −αF is once more a soft normalization term. The fixed point of the learning rule is then
F = Dᵀ. We emphasize that this learning rule is also local, based solely on the presynaptic input
signal and postsynaptic firing rates.
In summary, we note that the autoencoder operates on four separate time scales. On a very fast,
almost instantaneous time scale, the firing rates run into equilibrium for a given input signal, Eq. (6).
On a slower time scale, the input signal, x, changes. On a yet slower time scale, the recurrent
weights, Ω, are learnt, and their learning therefore uses many input signal values. On the final and
slowest time scale, the feedforward weights, F, are optimized.
² Note that the fixed point of the decoder learning rule is D = ⟨xrᵀ⟩/α. Hence, the fixed point of the
recurrent learning is Ω = FD.
4 Unsupervised learning for non-whitened inputs
Algorithms for efficient coding are generally applied to whitened and centered data (see e.g. [2, 16]).
Indeed, if the data are not centered, the read-out of the neurons will concentrate in the direction of
the mean input signal in order to represent it, even though the mean may not carry any relevant information about the actual, time-varying signal. If the data are not whitened, the choice of the decoder
will be dominated by second-order statistical dependencies, at the cost of representing higher-order
dependencies. The latter are often more interesting to represent, as shown by applications of efficient
or sparse coding algorithms to the visual system [20].
While whitening and centering are therefore common pre-processing steps, we note that, with a
simple correction, our autoencoder network can take care of the pre-processing steps autonomously.
This extra step will be crucial later on, when we feed the time-delayed (and non-whitened) network
activity back into the network. The main idea is simple: we suggest to use a cost function that is
invariant under affine transformations and equals the cost function we have been using until now
in case of centered and whitened data. To do so, we introduce the short-hands x_c = x − ⟨x⟩ and
r_c = r − ⟨r⟩ for the centered input and the centered firing rates, and we write C = cov(x, x) for
the covariance matrix of the input signal. The corrected loss function is then

L = (1/2) (x_c − Dr_c)ᵀ C⁻¹ (x_c − Dr_c) + (μ/2) ‖r‖².   (14)
The loss function reduces to Eq. (2) if the data are centered and if C = I. Furthermore, the value of
the loss function remains constant if we apply any affine transformation x → Ax + b.³ In turn, we
can interpret the loss function as the likelihood function of a Gaussian.
From hereon, we can follow through exactly the same derivations as in the previous sections. We
first notice that the optimal firing rate dynamics becomes

V = DᵀC⁻¹x − DᵀC⁻¹Dr − μr   (15)
ṙ = V − ⟨V⟩   (16)

where V is a placeholder for the overall input. The dynamics differ in two ways from those in
Eq. (3). First, the dynamics now require the subtraction of the averaged input, ⟨V⟩. Biophysically,
this subtraction could correspond to a slower intracellular process, such as adaptation through hyperpolarization. Second, the optimal feedforward weights are now F = DᵀC⁻¹, and the optimal
recurrent weights become Ω = DᵀC⁻¹D.
The derivation of the learning rules follows the outline of the previous section. Initially, the network
starts with some random connectivity, and obeys the dynamical equations,

V = Fx − Ωr − μr   (17)
ṙ = V − ⟨V⟩.   (18)
We then apply the following modified learning rules for D and Ω,

Ḋ = ε (xrᵀ − ⟨x⟩⟨r⟩ᵀ − αD)   (19)
Ω̇ = ε (Fxrᵀ − ⟨Fx⟩⟨r⟩ᵀ − αΩ).   (20)
We note that in both cases, the learning remains local. However, similar to the rate dynamics, the
dynamics of learning now requires a slower synaptic process that computes the averaged signal
inputs and presynaptic firing rates. Synapses are well-known to operate on a large range of time
scales (e.g., [5]), so that such slower processes are in broad agreement with physiology.
The target for learning the feedforward weights becomes F → DᵀC⁻¹. The matrix inverse can be
eliminated by noticing that the differential equation Ḟ = η(−FC + Dᵀ) has the required target as
its fixed point. The covariance matrix C can be estimated by averaging over x_c x_cᵀ, and the decoder
transpose Dᵀ can be estimated by averaging over r_c x_cᵀ, just as in the previous section, or as follows from
Eq. (19). Hence, the learning of the feedforward weights becomes

Ḟ = η ((r − Fx)xᵀ − ⟨r − Fx⟩⟨xᵀ⟩ − αF).   (21)
As for the recurrent weights, the learning rests on local information, but requires a slower time scale
that computes the mean input signal and presynaptic firing rates.
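A sketch of the centered learning rules, Eqs. (20) and (21), in numpy, with exponential running averages standing in for the slower synaptic process; the smoothing constant beta is our choice:

import numpy as np

class RunningMean:
    """Exponential running average: the slow process computing <x>, <r>, <Fx>."""
    def __init__(self, shape, beta=0.01):
        self.m, self.beta = np.zeros(shape), beta
    def update(self, v):
        self.m += self.beta * (v - self.m)
        return self.m

def centered_learning_step(F, Omega, x, r, mx, mr, mFx,
                           eps=1e-3, eta=1e-4, alpha=1.0):
    """One step of Eqs. (20) and (21); mx, mr, mFx are RunningMean objects."""
    x_bar, r_bar, Fx_bar = mx.update(x), mr.update(r), mFx.update(F @ x)
    Omega += eps * (np.outer(F @ x, r) - np.outer(Fx_bar, r_bar) - alpha * Omega)
    F += eta * (np.outer(r - F @ x, x) - np.outer(r_bar - Fx_bar, x_bar) - alpha * F)
    return F, Omega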
³ Under an affine transformation, y = Ax + b and ŷ = Ax̂ + b, we obtain: (y − ŷ)ᵀ cov(y, y)⁻¹ (y − ŷ) = (Ax − Ax̂)ᵀ cov(Ax, Ax)⁻¹ (Ax − Ax̂) = (x − x̂)ᵀ cov(x, x)⁻¹ (x − x̂).
5 The autoencoder with memory
We are finally in a position to tackle the problem we started out with, how to build a recurrent
network that efficiently represents not just its present input, but also its past inputs. The objective
function used so far, however, completely neglects the input history: even if the dimensionality of
the input is much smaller than the number of neurons available to code it, the network will not try
to use the extra "space" available to remember the input history.
5.1 An objective function for short-term memory
Ideally, we would want to be able to read out both the present input and the past inputs, such that $x_{t-n} \approx D_n r_t$, where n is an elementary time step, and $D_n$ are appropriately chosen readouts. We will in the following assume that there is a matrix M such that $D_n M = D_{n+1}$ for all n. In other words, the input history should be accessible via $\hat x_{t-n} = D_n r_t = D_0 M^n r_t$. Then the cost function we would like to minimize is a straightforward generalization of Eq. (2),

$$L = \frac{1}{2} \sum_{n=0}^{\infty} \gamma^n \, \| x_{t-n} - D M^n r_t \|^2 + \frac{\mu}{2} \| r_t \|^2 , \qquad (22)$$

where we have set $D = D_0$. We tacitly assume that x and r are centered and that the L2 norm
is defined with respect to the input signal covariance matrix C, so that we can work in the full
generality of Eq. (14) without keeping the additional notational baggage.
Unfortunately, the direct minimization of this objective is impossible, since the network has no
access to the past inputs $x_{t-n}$ for $n \ge 1$. Rather, information about past inputs will have to be retrieved from the network activity itself. We can enforce that by replacing the past input signal at time t with its estimate in the previous time step, which we will denote by a prime. In other words, instead of asking that $x_{t-n} \approx \hat x_{t-n}$, we ask that $\hat x'_{(t-1)-(n-1)} \approx \hat x_{t-n}$, so that the estimates of the input (and its history) are properly propagated through the network. Given the iterative character of the respective errors, $\| \hat x'_{(t-1)-(n-1)} - \hat x_{t-n} \| = \| D M^{n-1} (r_{t-1} - M r_t) \|$, we can define a loss function for one time step only,

$$L = \frac{1}{2} \| x_t - D r_t \|^2 + \frac{\gamma}{2} \| r_{t-1} - M r_t \|^2 + \frac{\mu}{2} \| r_t \|^2 . \qquad (23)$$
Here, the first term enforces that the instantaneous input signal is properly encoded, while the second
term ensures that the network is remembering past information. The last term is a cost term that
makes the system more stable and efficient.
Note that a network which minimizes this loss function is maximizing its information content, even if the number of neurons, N, far exceeds the input dimension K, so that $N \gg K$. As becomes clear from inspecting the loss function, the network is trying to code an N + K dimensional signal
with only N neurons. Consequently, just as in the undercomplete autoencoder, all of its information
capacity will be used.
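The one-step objective of Eq. (23) is straightforward to evaluate. In this sketch, γ and μ denote the (assumed) weighting constants of the memory and cost terms:

    import numpy as np

    def memory_loss(x_t, r_t, r_prev, D, M, gamma, mu):
        rec = 0.5 * np.sum((x_t - D @ r_t) ** 2)              # encode current input
        mem = 0.5 * gamma * np.sum((r_prev - M @ r_t) ** 2)   # propagate the history
        cost = 0.5 * mu * np.sum(r_t ** 2)                    # firing-rate cost
        return rec + mem + cost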
5.2 Dynamics and learning
Conceptually, the loss function in Eq. (23) is identical to Eq. (2), or rather, to Eq. (14), if we keep full generality. We only need to vertically stack the feedforward input and the delayed recurrent input into a single high-dimensional vector $x'_t = (x_t ;\, \gamma r_{t-1})$. Similarly, we can horizontally combine the decoder D and the "time travel" matrix M into a single decoder matrix $D' = (D \;\; \gamma M)$. The above loss function then reduces to

$$L = \| x'_t - D' r_t \|^2 + \mu \| r_t \|^2 , \qquad (24)$$

and all of our derivations, including the learning rules, can be directly applied to this system. Note that the "input" to the network now combines the actual input signal, $x_t$, and the delayed recurrent input, $r_{t-1}$. Consequently, this extended input is neither white nor centered, and we will need to work with the generalized dynamics and generalized learning rules derived in the previous section.
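The stacking can be made explicit as below; the exact placement of the scaling constant γ in x' and D' is our assumption:

    import numpy as np

    def extended_problem(x_t, r_prev, D, M, gamma):
        # x' = (x_t ; gamma * r_{t-1}) stacks the input and the delayed rates
        x_ext = np.concatenate([x_t, gamma * r_prev])
        # D' stacks the decoder D (shape K x N) and the "time travel" matrix M
        # (shape N x N) so that D' r = (D r ; gamma * M r)
        D_ext = np.vstack([D, gamma * M])
        return x_ext, D_ext

    # then L = ||x_ext - D_ext @ r||^2 + mu * ||r||^2, which is Eq. (24)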
[Figure 2 here. Panels: (A) population rate before learning and (B) after learning (Rate [Hz] vs. Time [s]); (C) relative distance to $F^\top F + M^\top M$ vs. time; (D) history reconstruction before and (E) after learning (reconstruction performance vs. relative time in past); (F) scatter of learned vs. optimal weights.]
Figure 2: Emergence of working memory in a network of 10 neurons with random initial connectivity. (A) Rates of all neurons for the first 50 inputs at the beginning of learning. (B) Same as (A), but after learning. (C) Distance of fast recurrent weights to optimal configuration, $F^\top F + M^\top M$, relative to L2-norm of optimal weights. (D) Squared error of optimal linear reconstruction of inputs at time $t - k$ from rates at time t, relative to variance of the input before learning; for $k \in [0, \dots, 20]$. (E) Same as (D) but after learning. (F) Scatter plot of fast recurrent weights after learning against optimal configuration, $F^\top F + M^\top M$.
The network dynamics will initially follow the differential equation⁴

$$V = F x_t + \Omega^d r_{t-1} - \Omega^f r_t - \mu r_t \qquad (25)$$
$$\dot r = V - \langle V \rangle . \qquad (26)$$
Compared to our previous network, we now have effectively three inputs into the network: the feedforward inputs with weight F, a delayed recurrent input with weight $\Omega^d$ and a fast recurrent input with weight $\Omega^f$, see Fig. 1c. The optimal connectivities can be derived from the loss function and are (see also Fig. 1c)

$$F = D^\top \qquad (27)$$
$$\Omega^d = M^\top \qquad (28)$$
$$\Omega^f = D^\top D + M^\top M . \qquad (29)$$
Consequently, there are also three learning rules: one for the fast recurrent weights, which follows Eq. (20), one for the feedforward weights, which follows Eq. (21), and one for the delayed recurrent weights, which also follows Eq. (21). In summary,

$$\dot \Omega^f = \epsilon \big( F x_t\, r_t^\top - \langle F x_t \rangle \langle r_t \rangle^\top - \alpha \Omega^f \big) \qquad (30)$$
$$\dot F = \epsilon \big( (r_t - F x_t)\, x_t^\top - \langle r_t - F x_t \rangle \langle x_t^\top \rangle - \alpha F \big) \qquad (31)$$
$$\dot \Omega^d = \epsilon \big( (r_t - \Omega^d r_{t-1})\, r_{t-1}^\top - \langle r_t - \Omega^d r_{t-1} \rangle \langle r_{t-1}^\top \rangle - \alpha \Omega^d \big) . \qquad (32)$$
We note that the learning of the slow connections does not strictly minimize the expected loss in
every time step, due to potential non-stationarities in the distribution of firing rates throughout the
course of learning. In practice, we therefore find that the improvement in memory performance is
often dominated by the learning of the fast connectivity (see example below).
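Putting the pieces together, a single Euler step of the dynamics (25)-(26) combined with the learning rules (30)-(32) might look as follows; the step size dt, learning rate eps, decay alpha and averaging constant tau are assumptions on our part, and avg is a dict of slow averages initialized elsewhere:

    import numpy as np

    def step(x_t, r, r_prev, F, Om_f, Om_d, avg, mu,
             dt=0.1, eps=1e-3, alpha=1e-2, tau=1e-2):
        # Eq. (25): total input; Eq. (26): rate dynamics with mean subtraction
        V = F @ x_t + Om_d @ r_prev - Om_f @ r - mu * r
        avg['V'] += tau * (V - avg['V'])
        r_new = r + dt * (V - avg['V'])

        # slow averages used by the learning rules
        for key, val in (('Fx', F @ x_t), ('x', x_t), ('r', r_new),
                         ('r_prev', r_prev), ('Odr', Om_d @ r_prev)):
            avg[key] += tau * (val - avg[key])

        # Eq. (30): fast recurrent weights
        Om_f += eps * (np.outer(F @ x_t, r_new)
                       - np.outer(avg['Fx'], avg['r']) - alpha * Om_f)
        # Eq. (31): feedforward weights
        F += eps * (np.outer(r_new - F @ x_t, x_t)
                    - np.outer(avg['r'] - avg['Fx'], avg['x']) - alpha * F)
        # Eq. (32): delayed recurrent weights
        Om_d += eps * (np.outer(r_new - Om_d @ r_prev, r_prev)
                       - np.outer(avg['r'] - avg['Odr'], avg['r_prev']) - alpha * Om_d)
        return r_new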
⁴ We are now dealing with a delay-differential equation, which may be obscured by our notation. In practice, the term $r_{t-1}$ would be replaced by a term of the form $r(t - \tau)$, where τ is the actual value of the "time step".
6 Simulations
We simulated a firing rate network of ten neurons that learns to remember a one-dimensional, temporally uncorrelated white noise stimulus (Fig. 2). Firing rates were constrained to be positive.
We initialized all feedforward weights to one, whereas the matrices $\Omega^f$ and $\Omega^d$ were initialized by drawing numbers from centered Gaussian distributions with variance 1 and 0.2, respectively. All matrices were then divided by $N^2 = 100$. At the onset, the network has some memory, similar to
random networks based on reservoir computing. However, the recurrent inputs are generally not
cancelling out the feedforward inputs. The effect of such an imprecise balance is initially high firing rates and poor coding properties (Fig. 2A,D). At the end of learning, neurons are firing less, and the
coding properties are close to the information-theoretic limit (10 time steps), see Fig. 2B,E. We note
that, although the signal input was white noise for simplicity, the total input into the network (i.e.,
including the delayed firing rates) is neither white nor zero-mean, due to the positivity constraint on
the firing rates. The network converges to the derived connectivity (Fig. 2C,F); we note, however,
that the bulk of the improvements is due to the learning of the fast connections.
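For reference, the initialization described above can be written down directly; the random seed and variable names are our additions:

    import numpy as np

    rng = np.random.default_rng(0)
    N, K = 10, 1
    F = np.ones((N, K))                                   # feedforward weights set to one
    Om_f = rng.normal(0.0, 1.0, (N, N)) / N**2            # fast recurrent, variance 1
    Om_d = rng.normal(0.0, np.sqrt(0.2), (N, N)) / N**2   # delayed recurrent, variance 0.2
    x = rng.normal(size=(K,))                             # white-noise stimulus sample
    r = np.maximum(rng.normal(size=N), 0.0)               # rates constrained positive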
7 Towards learning in spiking recurrent networks
While we have shown how a recurrent network can learn to efficiently represent an input and its
history using only local learning rules, our network is still far from being biologically realistic. A
quite obvious discrepancy with biological networks is that the neurons are not spiking, but rather emit "firing rates" that can be both positive and negative. How can we make the connection to
spiking networks? Standard solutions have bridged from rate to spiking networks using mean-field
approaches [18]. However, more recent work has shown that there is a direct link from the types of
loss functions considered in this paper to balanced spiking networks.
Recently, Hu et al. pointed out that the minimization of Eq. (2) can be done by a network of
neurons that fires both positive and negative spikes [9], and then argued that these networks can be
translated into real spiking networks. A similar, but more direct approach was introduced in [3, 1]
which suggested minimizing the loss function, Eq. (2), under the constraint that $r \ge 0$. The resulting
networks consist of recurrently connected integrate-and-fire neurons that balance their feedforward
and recurrent inputs [3, 1, 4]. Importantly, Eq. (2) remains a convex function of r, and Eq. (3) still
applies (except that r cannot become negative).
The precise match between the spiking network implementation and the firing rate minimization
[1] opens up the possibility to apply our learning rules to the spiking networks. We note, though,
that this holds only strictly in the regime where the spiking networks are balanced. (For unbalanced
networks, there is no direct link to the firing rate formalism.) If the initial network is not balanced,
we need to first learn how to bring it into the balanced state. For white-noise Gaussian inputs, [4]
showed how this can be done. For more general inputs, this problem will have to be solved in the
future.
8 Discussion
In summary, we have shown how a recurrent neural network can learn to efficiently represent both
its present and past inputs. A key insight has been the link between balancing of feedforward and
recurrent inputs and the minimization of the cost function. If neurons can compensate both external
feedforward and delayed recurrent excitation with lateral inhibition, then, to some extent, they must
be coding the temporal trajectory of the stimulus. Indeed, in order to be able to compensate an input,
the network must be coding it at some level. Furthermore, if synapses are linear, then so must be the
decoder.
We have shown that this "balance" can be learnt through local synaptic plasticity of the lateral connections, based only on the presynaptic input signals and postsynaptic firing rates of the neurons.
Performance can then be further improved by learning the feedforward connections (as well as the
"time travel" matrix), which thereby take the input statistics into account. In our network simulations,
these connections only played a minor role in the overall improvements. Since the learning rules for
the time-travel matrix do not strictly minimize the expected loss (see above), there may still be room
for future improvements.
References
[1] D. G. Barrett, S. Denève, and C. K. Machens. "Firing rate predictions in optimal balanced networks". In: Advances in Neural Information Processing Systems 26. 2013, pp. 1538-1546.
[2] A. J. Bell and T. J. Sejnowski. "An information-maximization approach to blind separation and blind deconvolution". In: Neural Computation 7 (1995), pp. 1129-1159.
[3] M. Boerlin, C. K. Machens, and S. Denève. "Predictive coding of dynamical variables in balanced spiking networks". In: PLoS Computational Biology 9.11 (2013), e1003258.
[4] R. Bourdoukan et al. "Learning optimal spike-based representations". In: Advances in Neural Information Processing Systems 25. MIT Press, 2012, epub.
[5] S. Fusi, P. J. Drew, and L. F. Abbott. "Cascade models of synaptically stored memories". In: Neuron 45.4 (2005), pp. 599-611.
[6] S. Hochreiter and J. Schmidhuber. "Long short-term memory". In: Neural Computation 9.8 (1997), pp. 1735-1780.
[7] J. J. Hopfield. "Neural networks and physical systems with emergent collective computational abilities". In: Proceedings of the National Academy of Sciences 79.8 (1982), pp. 2554-2558.
[8] J. J. Hopfield. "Neurons with graded response have collective computational properties like those of two-state neurons". In: Proc. Natl. Acad. Sci. USA 81 (1984), pp. 3088-3092.
[9] T. Hu, A. Genkin, and D. B. Chklovskii. "A network of spiking neurons for computing sparse representations in an energy-efficient way". In: Neural Computation 24.11 (2012), pp. 2852-2872.
[10] H. Jaeger. "The 'echo state' approach to analysing and training recurrent neural networks." In: German National Research Center for Information Technology. Vol. 48. 2001.
[11] A. Lazar, G. Pipa, and J. Triesch. "SORN: a self-organizing recurrent neural network". In: Frontiers in Computational Neuroscience 3 (2009), p. 23.
[12] M. Lukoševičius and H. Jaeger. "Reservoir computing approaches to recurrent neural network training". In: Computer Science Review 3.3 (2009), pp. 127-149.
[13] W. Maass, T. Natschläger, and H. Markram. "Real-time computing without stable states: A new framework for neural computation based on perturbations". In: Neural Computation 14.11 (2002), pp. 2531-2560.
[14] C. K. Machens, R. Romo, and C. D. Brody. "Flexible control of mutual inhibition: A neural model of two-interval discrimination". In: Science 307 (2005), pp. 1121-1124.
[15] G. Major and D. Tank. "Persistent neural activity: prevalence and mechanisms". In: Curr. Opin. Neurobiol. 14 (2004), pp. 675-684.
[16] B. A. Olshausen and D. J. Field. "Sparse coding with an overcomplete basis set: A strategy employed by V1?" In: Vision Research 37.23 (1997), pp. 3311-3325.
[17] R. P. N. Rao and D. H. Ballard. "Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects". In: Nature Neuroscience 2.1 (1999), pp. 79-87.
[18] A. Renart, N. Brunel, and X.-J. Wang. "Mean-field theory of irregularly spiking neuronal populations and working memory in recurrent cortical networks". In: Computational Neuroscience: A Comprehensive Approach (2004), pp. 431-490.
[19] C. J. Rozell et al. "Sparse coding via thresholding and local competition in neural circuits". In: Neural Computation 20.10 (2008), pp. 2526-2563.
[20] E. P. Simoncelli and B. A. Olshausen. "Natural image statistics and neural representation". In: Ann. Rev. Neurosci. 24 (2001), pp. 1193-1216.
[21] X.-J. Wang. "Probabilistic decision making by slow reverberation in cortical circuits". In: Neuron 36.5 (2002), pp. 955-968.
[22] P. J. Werbos. "Backpropagation through time: what it does and how to do it". In: Proceedings of the IEEE 78.10 (1990), pp. 1550-1560.
[23] J. Zylberberg, J. T. Murphy, and M. R. DeWeese. "A sparse coding model with synaptically local plasticity and spiking neurons can account for the diverse shapes of V1 simple cell receptive fields". In: PLoS Computational Biology 7.10 (2011), e1002250.
Exploiting Linear Structure Within Convolutional
Networks for Efficient Evaluation
Emily Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun and Rob Fergus
Dept. of Computer Science, Courant Institute, New York University
{denton, zaremba, bruna, lecun, fergus}@cs.nyu.edu
Abstract
We present techniques for speeding up the test-time evaluation of large convolutional networks, designed for object recognition tasks. These models deliver
impressive accuracy, but each image evaluation requires millions of floating point
operations, making their deployment on smartphones and Internet-scale clusters
problematic. The computation is dominated by the convolution operations in the
lower layers of the model. We exploit the redundancy present within the convolutional filters to derive approximations that significantly reduce the required
computation. Using large state-of-the-art models, we demonstrate speedups of
convolutional layers on both CPU and GPU by a factor of 2×, while keeping the
accuracy within 1% of the original model.
1 Introduction
Large neural networks have recently demonstrated impressive performance on a range of speech and
vision tasks. However, the size of these models can make their deployment at test time problematic.
For example, mobile computing platforms are limited in their CPU speed, memory and battery life.
At the other end of the spectrum, Internet-scale deployment of these models requires thousands
of servers to process the 100's of millions of images per day. The electrical and cooling costs of
these servers is significant. Training large neural networks can take weeks, or even months. This
hinders research and consequently there have been extensive efforts devoted to speeding up the training procedure. However, there are relatively few efforts aimed at improving the test-time performance
of the models.
We consider convolutional neural networks (CNNs) used for computer vision tasks, since they are
large and widely used in commercial applications. These networks typically require a huge number
of parameters ($\approx 10^8$ in [1]) to produce state-of-the-art results. While these networks tend to be hugely over-parameterized [2], this redundancy seems necessary in order to overcome a highly nonconvex optimization [3]. As a byproduct, the resulting network wastes computing resources. In this
paper we show that this redundancy can be exploited with linear compression techniques, resulting
in significant speedups for the evaluation of trained large scale networks, with minimal compromise
to performance.
We follow a relatively simple strategy: we start by compressing each convolutional layer by finding
an appropriate low-rank approximation, and then we fine-tune the upper layers until the prediction
performance is restored. We consider several elementary tensor decompositions based on singular
value decompositions, as well as filter clustering methods to take advantage of similarities between
learned features.
Our main contributions are the following: (1) We present a collection of generic methods to exploit
the redundancy inherent in deep CNNs. (2) We report experiments on state-of-the-art Imagenet
CNNs, showing empirical speedups on convolutional layers by a factor of 2-3× and a reduction of parameters in fully connected layers by a factor of 5-10×.
Notation: Convolution weights can be described as a 4-dimensional tensor: $W \in \mathbb{R}^{C \times X \times Y \times F}$. C is the number of input channels, X and Y are the spatial dimensions of the kernel, and F is the target number of feature maps. It is common for the first convolutional layer to have a stride associated with the kernel, which we denote by Δ. Let $I \in \mathbb{R}^{C \times N \times M}$ denote an input signal, where C is the number of input maps, and N and M are the spatial dimensions of the maps. The target value, $T = I * W$, of a generic convolutional layer, with Δ = 1, for a particular output feature, f, and spatial location, (x, y), is

$$T(f, x, y) = \sum_{c=1}^{C} \sum_{x'=1}^{X} \sum_{y'=1}^{Y} I(c, x - x', y - y')\, W(c, x', y', f) .$$

If W is a tensor, $\|W\|$ denotes its operator norm, $\sup_{\|x\|=1} \|Wx\|_F$, and $\|W\|_F$ denotes its Frobenius norm.
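A direct, unoptimized implementation of this convolution (for Δ = 1, with "valid" boundary handling, and up to the usual index shift between convolution and correlation; these conventions are our assumptions) is:

    import numpy as np

    def conv_layer(I, W):
        """I: (C, N, M) input maps, W: (C, X, Y, F) weights -> T: (F, N-X+1, M-Y+1)."""
        C, X, Y, F = W.shape
        _, N, M = I.shape
        T = np.zeros((F, N - X + 1, M - Y + 1))
        for f in range(F):
            # flip the kernel so the summation matches the convolution above
            k = W[:, ::-1, ::-1, f]
            for x in range(T.shape[1]):
                for y in range(T.shape[2]):
                    T[f, x, y] = np.sum(I[:, x:x + X, y:y + Y] * k)
        return T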
2 Related Work
Vanhoucke et al. [4] explored the properties of CPUs to speed up execution. They present many
solutions specific to Intel and AMD CPUs and some of their techniques are general enough to be
used for any type of processor. They describe how to align memory, and use SIMD operations
(vectorized operations on CPU) to boost the efficiency of matrix multiplication. Additionally, they
propose the linear quantization of the network weights and input. This involves representing weights
as 8-bit integers (range [-127, 128]), rather than 32-bit floats. This approximation is similar in spirit
to our approach, but differs in that it is applied to each weight element independently. By contrast,
our approximation approach models the structure within each filter. Potentially, the two approaches
could be used in conjunction.
The most expensive operations in CNNs are the convolutions in the first few layers. The complexity
of this operation is linear in the area of the receptive field of the filters, which is relatively large for
these layers. However, Mathieu et al. [5] have shown that convolution can be efficiently computed
in Fourier domain, where it becomes element-wise multiplication (and there is no cost associated
with size of receptive field). They report a forward-pass speed up of around 2× for convolution
layers in state-of-the-art models. Importantly, the FFT method can be used jointly with most of the
techniques presented in this paper.
The use of low-rank approximations in our approach is inspired by work of Denil et al. [2] who
demonstrate the redundancies in neural network parameters. They show that the weights within a
layer can be accurately predicted from a small (e.g. ≈ 5%) subset of them. This indicates that
neural networks are heavily over-parametrized. All the methods presented here focus on exploiting
the linear structure of this over-parametrization.
Finally, a recent preprint [6] also exploits low-rank decompositions of convolutional tensors to speed
up the evaluation of CNNs, applied to scene text character recognition. This work was developed
simultaneously with ours, and provides further evidence that such techniques can be applied to a
variety of architectures and tasks. Our work differs in several ways. First, we consider a significantly
larger model. This makes it more challenging to compute efficient approximations since there are
more layers to propagate through and thus a greater opportunity for error to accumulate. Second, we
present different compression techniques for the hidden convolutional layers and provide a method
of compressing the first convolutional layer. Finally, we present GPU results in addition to CPU
results.
3 Convolutional Tensor Compression
In this section we describe techniques for compressing 4 dimensional convolutional weight tensors
and fully connected weight matrices into a representation that permits efficient computation and
storage. Section 3.1 describes how to construct a good approximation criterion. Section 3.2 describes
techniques for low-rank tensor approximations. Sections 3.3 and 3.4 describe how to apply these
techniques to approximate weights of a convolutional neural network.
3.1 Approximation Metric
Our goal is to find an approximation, $\widetilde{W}$, of a convolutional tensor W that facilitates more efficient computation while maintaining the prediction performance of the network. A natural choice for an approximation criterion is to minimize $\| \widetilde{W} - W \|_F$. This criterion yields efficient compression schemes using elementary linear algebra, and also controls the operator norm of each linear convolutional layer. However, this criterion assumes that all directions in the space of weights equally affect prediction performance. We now present two methods of improving this criterion while keeping the same efficient approximation algorithms.
Mahalanobis distance metric: The first distance metric we propose seeks to emphasize coordinates more prone to produce prediction errors over coordinates whose effect is less harmful for the overall system. We can obtain such measurements as follows. Let $\Theta = \{W_1, \dots, W_S\}$ denote the set of all parameters of the S-layer network, and let $U(I; \Theta)$ denote the output after the softmax layer of input image I. We consider a given input training set $(I_1, \dots, I_N)$ with known labels $(y_1, \dots, y_N)$. For each pair $(I_n, y_n)$, we compute the forward propagation pass $U(I_n, \Theta)$, and define as $\{\kappa_n\}$ the indices of the h largest values of $U(I_n, \Theta)$ different from $y_n$. Then, for a given layer s, we compute

$$d_{n,l,s} = \nabla_{W_s} \big( U(I_n, \Theta) - \delta(i - l) \big) , \quad n \le N , \ l \in \{\kappa_n\} , \ s \le S , \qquad (1)$$
where $\delta(i - l)$ is the Dirac distribution centered at l. In other words, for each input we back-propagate the difference between the current prediction and the h "most dangerous" mistakes.

The Mahalanobis distance is defined from the covariance of d: $\|W\|_{maha}^2 = w \Sigma^{-1} w^\top$, where w is the vector containing all the coordinates of W, and Σ is the covariance of $(d_{n,l,s})_{n,l}$. We do not report results using this metric, since it requires inverting a matrix of size equal to the number of parameters, which can be prohibitively expensive in large networks. Instead we use an approximation that considers only the diagonal of the covariance matrix. In particular, we propose the following, approximate, Mahalanobis distance metric:

$$\|W\|_{\widetilde{maha}} := \Big( \sum_p \big( \alpha_p\, W(p) \big)^2 \Big)^{1/2} , \quad \text{where} \quad \alpha_p = \Big( \sum_{n,l} d_{n,l,s}(p)^2 \Big)^{1/2} , \qquad (2)$$

where the sum runs over the tensor coordinates. Since (2) is a reweighted Euclidean metric, we can simply compute $W' = \alpha \,.\!*\, W$, where $.\!*$ denotes element-wise multiplication, then compute the approximation $\widetilde{W}'$ of $W'$ using the standard L2 norm, and finally output $\widetilde{W} = \alpha^{-1} .\!*\, \widetilde{W}'$.

Data covariance distance metric: One can view the Frobenius norm of W as $\|W\|_F^2 = \mathbb{E}_{x \sim \mathcal{N}(0, I)} \|Wx\|_F^2$. Another alternative, similar to the one considered in [6], is to replace the isotropic covariance assumption by the empirical covariance of the input of the layer. If $W \in \mathbb{R}^{C \times X \times Y \times F}$ is a convolutional layer, and $\widehat{\Sigma} \in \mathbb{R}^{CXY \times CXY}$ is the empirical estimate of the input data covariance, it can be efficiently computed as

$$\|W\|_{data} = \| \widehat{\Sigma}^{1/2} W_F \|_F , \qquad (3)$$

where $W_F$ is the matrix obtained by folding the first three dimensions of W. As opposed to [6], this approach adapts to the input distribution without the need to iterate through the data.
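A sketch of the data-covariance criterion of Eq. (3); the collection of vectorized CXY-dimensional input patches is assumed to happen elsewhere:

    import numpy as np

    def data_metric(W, patches):
        """W: (C, X, Y, F) weights, patches: (num_samples, C*X*Y)."""
        C, X, Y, F = W.shape
        W_F = W.reshape(C * X * Y, F)              # fold the first three dimensions
        Sigma = np.cov(patches, rowvar=False)      # empirical input covariance
        # symmetric square root of Sigma via its eigendecomposition
        evals, evecs = np.linalg.eigh(Sigma)
        Sigma_half = evecs @ np.diag(np.sqrt(np.maximum(evals, 0))) @ evecs.T
        return np.linalg.norm(Sigma_half @ W_F, 'fro')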
3.2 Low-rank Tensor Approximations

3.2.1 Matrix Decomposition
Matrices are 2-tensors which can be linearly compressed using the Singular Value Decomposition. If $W \in \mathbb{R}^{m \times k}$ is a real matrix, the SVD is defined as $W = USV^\top$, where $U \in \mathbb{R}^{m \times m}$, $S \in \mathbb{R}^{m \times k}$, $V \in \mathbb{R}^{k \times k}$. S is a diagonal matrix with the singular values on the diagonal, and U, V are orthogonal matrices. If the singular values of W decay rapidly, W can be well approximated by keeping only the t largest entries of S, resulting in the approximation $\widetilde{W} = \widetilde{U} \widetilde{S} \widetilde{V}^\top$, where $\widetilde{U} \in \mathbb{R}^{m \times t}$, $\widetilde{S} \in \mathbb{R}^{t \times t}$, $\widetilde{V} \in \mathbb{R}^{k \times t}$. Then, for $I \in \mathbb{R}^{n \times m}$, the approximation error $\|I\widetilde{W} - IW\|_F$ satisfies $\|I\widetilde{W} - IW\|_F \le s_{t+1} \|I\|_F$, and thus is controlled by the decay along the diagonal of S. Now the computation $I\widetilde{W}$ can be done in $O(nmt + nt^2 + ntk)$, which, for sufficiently small t, is significantly smaller than $O(nmk)$.
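A minimal sketch of the rank-t compression; the factored form is what would be stored and multiplied at test time (function and variable names are our own):

    import numpy as np

    def svd_compress(W, t):
        U, s, Vt = np.linalg.svd(W, full_matrices=False)
        U_t, s_t, Vt_t = U[:, :t], s[:t], Vt[:t, :]   # keep the t largest singular values
        return U_t * s_t, Vt_t                        # W ~ (U_t S_t) @ Vt_t

    # I @ W_tilde is computed as (I @ A) @ B with A: (m, t), B: (t, k),
    # i.e. O(nmt + ntk) instead of O(nmk)
    A, B = svd_compress(np.random.randn(64, 128), t=8)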
3.2.2 Higher Order Tensor Approximations
SVD can be used to approximate a tensor $W \in \mathbb{R}^{m \times n \times k}$ by first folding all but two dimensions together to convert it into a 2-tensor, and then considering the SVD of the resulting matrix. For example, we can approximate $W_m \in \mathbb{R}^{m \times (nk)}$ as $\widetilde{W}_m \approx \widetilde{U} \widetilde{S} \widetilde{V}^\top$. W can be compressed even further by applying SVD to $\widetilde{V}$. We refer to this approximation as the SVD decomposition and use $K_1$ and $K_2$ to denote the rank used in the first and second application of SVD respectively.

Alternatively, we can approximate a 3-tensor, $W_S \in \mathbb{R}^{m \times n \times k}$, by a rank 1 3-tensor by finding a decomposition that minimizes

$$\| W - \alpha \otimes \beta \otimes \gamma \|_F , \qquad (4)$$

where $\alpha \in \mathbb{R}^m$, $\beta \in \mathbb{R}^n$, $\gamma \in \mathbb{R}^k$ and ⊗ denotes the outer product operation. Problem (4) is solved efficiently by performing alternating least squares on α, β and γ respectively, although more efficient algorithms can also be considered [7].

This easily extends to a rank K approximation using a greedy algorithm: Given a tensor W, we compute (α, β, γ) using (4), and we update $W^{(k+1)} \leftarrow W^{(k)} - \alpha \otimes \beta \otimes \gamma$. Repeating this operation K times results in

$$\widetilde{W}_S = \sum_{k=1}^{K} \alpha_k \otimes \beta_k \otimes \gamma_k . \qquad (5)$$

We refer to this approximation as the outer product decomposition and use K to denote the rank of the approximation.
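A sketch of the greedy rank-K outer product decomposition of Eq. (5), solving (4) on each residual by alternating least squares; the fixed iteration count and random initialization are arbitrary choices of ours:

    import numpy as np

    def rank1_als(W, n_iter=50):
        m, n, k = W.shape
        a, b, c = np.random.randn(m), np.random.randn(n), np.random.randn(k)
        for _ in range(n_iter):
            # each update is the closed-form least-squares solution for one factor
            a = np.einsum('ijk,j,k->i', W, b, c) / (b @ b * (c @ c))
            b = np.einsum('ijk,i,k->j', W, a, c) / (a @ a * (c @ c))
            c = np.einsum('ijk,i,j->k', W, a, b) / (a @ a * (b @ b))
        return a, b, c

    def outer_product_decomposition(W, K):
        residual, terms = W.copy(), []
        for _ in range(K):
            a, b, c = rank1_als(residual)
            terms.append((a, b, c))
            residual -= np.einsum('i,j,k->ijk', a, b, c)  # W^(k+1) = W^(k) - a (x) b (x) c
        return terms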
[Figure 1 here: schematic panels (a)-(c) showing the RGB input, the intermediate representation and the 2D monochromatic spatial convolution; the biclustering of input and output features; and the pointwise matrix multiplications approximating each input-output cluster pair. See caption below.]
Figure 1: A visualization of monochromatic and biclustering approximation structures. (a) The
monochromatic approximation, used for the first layer. Input color channels are projected onto a set
of intermediate color channels. After this transformation, output features need only to look at one
intermediate color channel. (b) The biclustering approximation, used for higher convolution layers.
Input and output features are clustered into equal sized groups. The weight tensor corresponding
to each pair of input and output clusters is then approximated. (c) The weight tensors for each
input-output pair in (b) are approximated by a sum of rank 1 tensors using techniques described in
3.2.2
3.3 Monochromatic Convolution Approximation
Let $W \in \mathbb{R}^{C \times X \times Y \times F}$ denote the weights of the first convolutional layer of a trained network. We found that the color components of trained CNNs tend to have low dimensional structure. In particular, the weights can be well approximated by projecting the color dimension down to a 1D subspace. The low-dimensional structure of the weights is illustrated in Figure 2.

The monochromatic approximation exploits this structure and is computed as follows. First, for every output feature, f, we consider the matrix $W_f \in \mathbb{R}^{C \times (XY)}$, where the spatial dimensions of the filter corresponding to the output feature have been combined, and find the SVD, $W_f = U_f S_f V_f^\top$,

Approximation technique | Number of operations
No approximation | $XYCFNM\,\Delta^{-2}$
Monochromatic | $C'CNM + XYFNM\,\Delta^{-2}$
Biclustering + outer product decomposition | $GHK\big( \frac{C}{G} NM + XY\,NM\,\Delta^{-2} + \frac{F}{H} NM\,\Delta^{-2} \big)$
Biclustering + SVD | $GHNM\big( \frac{C}{G} K_1 + K_1 XY K_2\,\Delta^{-2} + K_2 \frac{F}{H}\,\Delta^{-2} \big)$

Table 1: Number of operations required for various approximation methods.

where $U_f \in \mathbb{R}^{C \times C}$, $S_f \in \mathbb{R}^{C \times XY}$, and $V_f \in \mathbb{R}^{XY \times XY}$. We then take the rank 1 approximation of $W_f$, $\widetilde{W}_f = \widetilde{U}_f \widetilde{S}_f \widetilde{V}_f^\top$, where $\widetilde{U}_f \in \mathbb{R}^{C \times 1}$, $\widetilde{S}_f \in \mathbb{R}$, $\widetilde{V}_f \in \mathbb{R}^{1 \times XY}$. We can further exploit the regularity in the weights by sharing the color component basis between different output features. We do this by clustering the F left singular vectors, $\widetilde{U}_f$, of each output feature f into C′ clusters, for C′ < F. We constrain the clusters to be of equal size as discussed in section 3.4. Then, for each of the $F/C'$ output features, f, that are assigned to cluster $c_f$, we can approximate $W_f$ with $\widetilde{W}_f = U_{c_f} \widetilde{S}_f \widetilde{V}_f^\top$ where $U_{c_f} \in \mathbb{R}^{C \times 1}$ is the cluster center for cluster $c_f$ and $\widetilde{S}_f$ and $\widetilde{V}_f$ are as before. This monochromatic approximation is illustrated in the left panel of Figure 1(c). Table 1 shows the number of operations required for the standard and monochromatic versions.
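A compact sketch of the monochromatic approximation. For brevity we use SciPy's plain k-means rather than the equal-size variant of Section 3.4, and the function name and seed are our own:

    import numpy as np
    from scipy.cluster.vq import kmeans2

    def monochromatic(W, n_colors):
        """W: (C, X, Y, F) first-layer weights -> approximation plus its factors."""
        C, X, Y, F = W.shape
        U1 = np.zeros((C, F))        # leading left singular vector per filter
        SV = np.zeros((F, X * Y))    # s_f * v_f^T, the spatial part per filter
        for f in range(F):
            U, s, Vt = np.linalg.svd(W[:, :, :, f].reshape(C, X * Y),
                                     full_matrices=False)
            U1[:, f] = U[:, 0]
            SV[f] = s[0] * Vt[0]
        # share the color basis: cluster the F color vectors into n_colors centers
        centers, assign = kmeans2(U1.T, n_colors, minit='++', seed=0)
        # W_f ~ outer(centers[assign[f]], SV[f]); each filter needs only one
        # of the n_colors intermediate color channels
        W_approx = np.stack([np.outer(centers[assign[f]], SV[f]).reshape(C, X, Y)
                             for f in range(F)], axis=-1)
        return W_approx, centers, assign, SV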
3.4 Biclustering Approximations
We exploit the redundancy within the 4-D weight tensors in the higher convolutional layers by clustering the filters, such that each cluster can be accurately approximated by a low-rank factorization. We start by clustering the rows of $W_C \in \mathbb{R}^{C \times (XYF)}$, which results in clusters $C_1, \dots, C_a$. Then we cluster the columns of $W_F \in \mathbb{R}^{(CXY) \times F}$, producing clusters $F_1, \dots, F_b$. These two operations break the original weight tensor W into ab sub-tensors $\{ W_{C_i, F_j} \}_{i=1,\dots,a,\ j=1,\dots,b}$ as shown in Figure 1(b). Each sub-tensor contains similar elements, and thus is easier to fit with a low-rank approximation.
In order to exploit the parallelism inherent in CPU and GPU architectures it is useful to constrain
clusters to be of equal sizes. We therefore perform the biclustering operations (or clustering for
monochromatic filters in Section 3.3) using a modified version of the k-means algorithm which balances the cluster count at each iteration. It is implemented with the Lloyd algorithm, by modifying the Euclidean distance with a subspace projection distance.
After the input and output clusters have been obtained, we find a low-rank approximation of each
sub-tensor using either the SVD decomposition or the outer product decomposition as described in
Section 3.2.2. We concatenate the X and Y spatial dimensions of the sub-tensors so that the decomposition is applied to the 3-tensor, $W_S \in \mathbb{R}^{C \times (XY) \times F}$. While we could look for a separable approximation along the spatial dimensions as well, we found the resulting gain to be minimal. Using these approximations, the target output can be computed with significantly fewer operations. The number of operations required is a function of the number of input clusters, G, the output clusters H and the rank of the sub-tensor approximations ($K_1$, $K_2$ for the SVD decomposition; K for the outer product decomposition). The number of operations required for each approximation is described in Table 1.
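The biclustering step can be sketched as follows. Plain k-means again stands in for the balanced clustering, and each sub-tensor is compressed with the matrix SVD of Section 3.2.1 after merging the spatial dimensions:

    import numpy as np
    from scipy.cluster.vq import kmeans2

    def bicluster_compress(W, G, H, K1):
        """W: (C, X, Y, F) weights -> per-block rank-K1 factors."""
        C, X, Y, F = W.shape
        _, in_assign = kmeans2(W.reshape(C, -1), G, minit='++', seed=0)    # input rows
        _, out_assign = kmeans2(W.reshape(-1, F).T, H, minit='++', seed=0)  # output cols
        blocks = {}
        for g in range(G):
            for h in range(H):
                sub = W[in_assign == g][:, :, :, out_assign == h]  # (Cg, X, Y, Fh)
                mat = sub.reshape(sub.shape[0] * X * Y, -1)        # merge C, X, Y
                U, s, Vt = np.linalg.svd(mat, full_matrices=False)
                blocks[(g, h)] = (U[:, :K1] * s[:K1], Vt[:K1])     # rank-K1 factors
        return in_assign, out_assign, blocks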
3.5 Fine-tuning
Many of the approximation techniques presented here can efficiently compress the weights of a
CNN with negligible degradation of classification performance provided the approximation is not
too harsh. Alternatively, one can use a harsher approximation that gives greater speedup gains but
hurts the performance of the network. In this case, the approximated layer and all those below it can
be fixed and the upper layers can be fine-tuned until the original performance is restored.
4 Experiments
We use the 15 layer convolutional architecture of [8], trained on the ImageNet 2012 dataset [9]. The
network contains 5 convolutional layers, 3 fully connected layers and a softmax output layer. We
Figure 2: Visualization of the 1st layer filters. (Left) Each component of the 96 7x7 filters is plotted
in RGB space. Points are colored based on the output filter they belong to. Hence, there are 96
colors and 72 points of each color. Leftmost plot shows the original filters and the right plot shows
the filters after the monochromatic approximation, where each filter has been projected down to a
line in colorspace. (Right) Original and approximate versions of a selection of 1st layer filters.
evaluated the network on both CPU and GPU platforms. All measurements of prediction performance are with respect to the 50K validation images from the ImageNet12 dataset.
We present results showing the performance of the approximations described in Section 3 in terms
of prediction accuracy, speedup gains and reduction in memory overhead. All of our fine-tuning
results were achieved by training with less than 2 passes using the ImageNet12 training dataset.
Unless stated otherwise, classification numbers refer to those of fine-tuned models.
4.1 Speedup
The majority of forward propagation time is spent on the first two convolutional layers (see Supplementary Material for breakdown of time across all layers). Because of this, we restrict our attention
to the first and second convolutional layers in our speedup experiments. However, our approximations could easily be applied to convolutions in upper layers as well.
We implemented several CPU and GPU approximation routines in an effort to achieve empirical
speedups. Both the baseline and approximation CPU code is implemented in C++ using the Eigen3 library [10] compiled with Intel MKL. We also use Intel's implementation of OpenMP and multithreading. The baseline gives comparable performance to highly optimized MATLAB convolution routines and all of our CPU speedup results are computed relative to this. We used Alex Krizhevsky's CUDA convolution routines¹ as a baseline for GPU comparisons. The approximation versions are
written in CUDA. All GPU code was run on a standard nVidia Titan card.
We have found that in practice it is often difficult to achieve speedups close to the theoretical gains
based on the number of arithmetic operations (see Supplementary Material for discussion of theoretical gains). Moreover, different computer architectures and CNN architectures afford different
optimization strategies making most implementations highly specific. However, regardless of implementation details, all of the approximations we present reduce both the number of operations and
number of weights required to compute the output by at least a factor of two, often more.
4.1.1 First Layer
The first convolutional layer has 3 input channels, 96 output channels and 7x7 filters. We approximated the weights in this layer using the monochromatic approximation described in Section 3.3.
The monochromatic approximation works well if the color components span a small number of one
dimensional subspaces. Figure 2 illustrates the effect of the monochromatic approximation on the
first layer filters.
The only parameter in the approximation is C′, the number of color channels used for the intermediate representation. As expected, the network performance begins to degrade as C′ decreases. The number of floating point operations required to compute the output of the monochromatic convolution is reduced by a factor of 2-3×, with the larger gain resulting for small C′. Figure 3 shows the empirical speedups we achieved on CPU and GPU and the corresponding network performance for various numbers of colors used in the monochromatic approximation. Our CPU and GPU imple-
¹ https://code.google.com/p/cuda-convnet/
[Figure 3 here: percent loss in performance vs. empirical gain in speed on (Left) CPU and (Right) GPU for the first layer approximation, with curves for C′ = 4, 6, 8, 12, 16, 24 under the $\|W\|_F$ and $\|W\|_{data}$ distance metrics, with and without fine-tuning.]
Figure 3: Empirical speedups on (Left) CPU and (Right) GPU for the first layer. C′ is the number of colors used in the approximation.
[Figure 4 here: percent loss in performance vs. empirical gain in speed on (Left) CPU and (Right) GPU for the second layer approximation, with curves for various ranks ($K_1$, $K_2$ for the SVD decomposition; K for the outer product decomposition) under the $\|W\|_F$, $\|W\|_{data}$ and $\|W\|_{maha}$ distance metrics, with and without fine-tuning.]
Figure 4: Empirical speedups for the second convolutional layer. (Left) Speedups on CPU using biclustering (G = 2 and H = 2) with the SVD approximation. (Right) Speedups on GPU using biclustering (G = 48 and H = 2) with the outer product decomposition approximation.
mentations achieve empirical speedups of 2-2.5× relative to the baseline with less than 1% drop
in classification performance.
4.1.2 Second Layer
The second convolutional layer has 96 input channels, 256 output channels and 5x5 filters. We
approximated the weights using the techniques described in Section 3.4. We explored various configurations of the approximations by varying the number of input clusters G, the number of output
clusters H and the rank of the approximation (denoted by $K_1$ and $K_2$ for the SVD decomposition and K for the outer product decomposition).
Figure 4 shows our empirical speedups on CPU and GPU and the corresponding network performance for various approximation configurations. For the CPU implementation we used the biclustering with SVD approximation. For the GPU implementation we used the biclustering with outer product decomposition approximation. We achieved promising results and present speedups of 2-2.5× relative to the baseline with less than a 1% drop in performance.
4.2 Combining approximations
The approximations can also be cascaded to provide greater speedups. The procedure is as follows. Compress the first convolutional layer weights and then fine-tune all the layers above until
performance is restored. Next, compress the second convolutional layer weights that result from
the fine-tuning. Fine-tune all the layers above until performance is restored and then continue the
process.
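In pseudocode, the cascade reads as below; compress_fns, finetune_above and accuracy are hypothetical callables supplied by the surrounding training code, and nothing here is specific to the paper's implementation:

    def cascade(model, compress_fns, finetune_above, accuracy, target):
        for layer, compress in compress_fns:        # e.g. [(1, monochromatic), (2, bicluster)]
            model[layer] = compress(model[layer])   # approximate this layer's weights
            while accuracy(model) < target:         # restore performance before moving on
                finetune_above(model, layer)        # fine-tune only the upper layers
        return model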
We applied this procedure to the first two convolutional layers. Using the monochromatic approximation with 6 colors for the first layer and the biclustering with outer product decomposition approximation

Approximation method | Number of parameters | Approximation hyperparameters | Reduction in weights | Increase in error
Standard convolution | $CXYF$ | -- | -- | --
Conv layer 1: Monochromatic | $CC' + XYF$ | $C' = 6$ | 3× | 0.43%
Conv layer 2: Biclustering + outer product decomposition | $GHK(\frac{C}{G} + XY + \frac{F}{H})$ | $G = 48;\ H = 2;\ K = 6$ | 5.3× | 0.68%
Conv layer 2: Biclustering + SVD | $GH(\frac{C}{G} K_1 + K_1 XY K_2 + K_2 \frac{F}{H})$ | $G = 2;\ H = 2;\ K_1 = 19;\ K_2 = 24$ | 3.9× | 0.9%
Standard FC | $NM$ | -- | -- | --
FC layer 1: Matrix SVD | $NK + KM$ | $K = 250$ | 13.4× | 0.8394%
FC layer 1: Matrix SVD | $NK + KM$ | $K = 950$ | 3.5× | 0.09%
FC layer 2: Matrix SVD | $NK + KM$ | $K = 350$ | 5.8× | 0.19%
FC layer 2: Matrix SVD | $NK + KM$ | $K = 650$ | 3.14× | 0.06%
FC layer 3: Matrix SVD | $NK + KM$ | $K = 250$ | 8.1× | 0.67%
FC layer 3: Matrix SVD | $NK + KM$ | $K = 850$ | 2.4× | 0.02%

Table 2: Number of parameters expressed as a function of hyperparameters for various approximation methods and empirical reduction in parameters with corresponding network performance.

for the second layer (G = 48; H = 2; K = 8) and fine-tuning with a single pass through
the training set we are able to keep accuracy within 1% of the original model. This procedure could
be applied to each convolutional layer, in this sequential manner, to achieve overall speedups much
greater than any individual layer can provide. A more comprehensive summary of these results can
be found in the Supplementary Material.
4.3 Reduction in memory overhead
In many commercial applications memory conservation and storage are a central concern. This
mainly applies to embedded systems (e.g. smartphones), where available memory is limited, and
users are reluctant to download large files. In these cases, being able to compress the neural network
is crucial for the viability of the product.
In addition to requiring fewer operations, our approximations require significantly fewer parameters when compared to the original model. Since the majority of parameters come from the fully
connected layers, we include these layers in our analysis of memory overhead. We compress the
fully connected layers using standard SVD as described in 3.2.2, using K to denote the rank of the
approximation.
Table 2 shows the number of parameters for various approximation methods as a function of hyperparameters for the approximation techniques. The table also shows the empirical reduction of
parameters and the corresponding network performance for specific instantiations of the approximation parameters.
5 Discussion
In this paper we have presented techniques that can speed up the bottleneck convolution operations
in the first layers of a CNN by a factor of 2-3×, with negligible loss of performance. We also show that our methods reduce the memory footprint of weights in the first two layers by a factor of 2-3× and the fully connected layers by a factor of 5-13×. Since the vast majority of weights reside in
the fully connected layers, compressing only these layers translates into a significant savings, which
would facilitate mobile deployment of convolutional networks. These techniques are orthogonal to
other approaches for efficient evaluation, such as quantization or working in the Fourier domain.
Hence, they can potentially be used together to obtain further gains.
An interesting avenue of research to explore in further work is the ability of these techniques to
aid in regularization either during or post training. The low-rank projections effectively decrease
number of learnable parameters, suggesting that they might improve generalization ability. The
regularization potential of the low-rank approximations is further motivated by two observations.
The first is that the approximated filters for the first convolutional layer appear to be cleaned up
versions of the original filters. Additionally, we noticed that we sporadically achieve better test error
with some of the more conservative approximations.
Acknowledgments
The authors are grateful for support from ONR #N00014-13-1-0646, NSF #1116923, #1149633 and
Microsoft Research.
8
References
[1] Sermanet, P., Eigen, D., Zhang, X., Mathieu, M., Fergus, R., LeCun, Y.: Overfeat: Integrated recognition, localization and detection using convolutional networks. arXiv preprint arXiv:1312.6229 (2013)
[2] Denil, M., Shakibi, B., Dinh, L., Ranzato, M., de Freitas, N.: Predicting parameters in deep learning. arXiv preprint arXiv:1306.0543 (2013)
[3] Hinton, G.E., Srivastava, N., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.R.: Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580 (2012)
[4] Vanhoucke, V., Senior, A., Mao, M.Z.: Improving the speed of neural networks on cpus. In: Proc. Deep Learning and Unsupervised Feature Learning NIPS Workshop. (2011)
[5] Mathieu, M., Henaff, M., LeCun, Y.: Fast training of convolutional networks through ffts. arXiv preprint arXiv:1312.5851 (2013)
[6] Jaderberg, M., Vedaldi, A., Zisserman, A.: Speeding up convolutional neural networks with low rank expansions. arXiv preprint arXiv:1405.3866 (2014)
[7] Zhang, T., Golub, G.H.: Rank-one approximation to high order tensors. SIAM J. Matrix Anal. Appl. 23(2) (February 2001) 534-550
[8] Zeiler, M.D., Fergus, R.: Visualizing and understanding convolutional neural networks. arXiv preprint arXiv:1311.2901 (2013)
[9] Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: A Large-Scale Hierarchical Image Database. In: CVPR09. (2009)
[10] Guennebaud, G., Jacob, B., et al.: Eigen v3. http://eigen.tuxfamily.org (2010)
[11] Zeiler, M.D., Taylor, G.W., Fergus, R.: Adaptive deconvolutional networks for mid and high level feature learning. In: Computer Vision (ICCV), 2011 IEEE International Conference on, IEEE (2011) 2018-2025
[12] Le, Q.V., Ngiam, J., Chen, Z., Chia, D., Koh, P.W., Ng, A.Y.: Tiled convolutional neural networks. In: Advances in Neural Information Processing Systems. (2010)
[13] Le, Q.V., Ranzato, M., Monga, R., Devin, M., Chen, K., Corrado, G.S., Dean, J., Ng, A.Y.: Building high-level features using large scale unsupervised learning. arXiv preprint arXiv:1112.6209 (2011)
[14] Lowe, D.G.: Object recognition from local scale-invariant features. In: Computer Vision, 1999. The Proceedings of the Seventh IEEE International Conference on. Volume 2, IEEE (1999) 1150-1157
[15] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems 25. (2012) 1106-1114
5,020 | 5,545 | Unsupervised Deep Haar Scattering on Graphs
Xu Chen1,2, Xiuyuan Cheng2, and Stéphane Mallat2
1 Department of Electrical Engineering, Princeton University, NJ, USA
2 Département d'Informatique, École Normale Supérieure, Paris, France
Abstract
The classification of high-dimensional data defined on graphs is particularly difficult when the graph geometry is unknown. We introduce a Haar scattering transform on graphs, which computes invariant signal descriptors. It is implemented
with a deep cascade of additions, subtractions and absolute values, which iteratively compute orthogonal Haar wavelet transforms. Multiscale neighborhoods of
unknown graphs are estimated by minimizing an average total variation, with a
pair matching algorithm of polynomial complexity. Supervised classification with
dimension reduction is tested on data bases of scrambled images, and for signals
sampled on unknown irregular grids on a sphere.
1 Introduction
The geometric structure of a data domain can be described with a graph [11], where neighbor data
points are represented by vertices related by an edge. For sensor networks, this connectivity depends
upon the sensor physical locations, but in social networks it may correspond to strong interactions
or similarities between two nodes. In many applications, the connectivity graph is unknown and
must therefore be estimated from data. We introduce an unsupervised learning algorithm to classify
signals defined on an unknown graph.
An important source of variability on graphs results from displacement of signal values. It may
be due to movements of physical sources in a sensor network, or to propagation phenomena in social networks. Classification problems are often invariant to such displacements. Image pattern
recognition or characterization of communities in social networks are examples of invariant problems. They require to compute locally or globally invariant descriptors, which are sufficiently rich
to discriminate complex signal classes.
Section 2 introduces a Haar scattering transform which builds an invariant representation of graph
data, by cascading additions, subtractions and absolute values in a deep network. It can be factorized as a product of Haar wavelet transforms on the graph. Haar wavelet transforms are flexible
representations which characterize multiscale signal patterns on graphs [6, 10, 11]. Haar scattering transforms are extensions on graphs of wavelet scattering transforms, previously introduced for
uniformly sampled signals [1].
For unstructured signals defined on an unknown graph, recovering the full graph geometry is an
NP complete problem. We avoid this complexity by only learning connected multiresolution graph
approximations. This is sufficient to compute Haar scattering representations. Multiscale neighborhoods are calculated by minimizing an average total signal variation over training examples.
It involves a pair matching algorithm of polynomial complexity. We show that this unsupervised
learning algorithms computes sparse scattering representations.
This work was supported by the ERC grant InvariantClass 320959.
Figure 1: A Haar scattering network computes each coefficient of a layer $S_{j+1}x$ by adding or subtracting a pair of coefficients in the previous layer $S_j x$.
For classification, the dimension of unsupervised Haar scattering representations is reduced with
supervised partial least square regressions [12]. It amounts to computing a last layer of reduced
dimensionality, before applying a Gaussian kernel SVM classifier. The performance of a Haar scattering classification is tested on scrambled images, whose graph geometry is unknown. Results
are provided for MNIST and CIFAR-10 image data bases. Classification experiments are also performed on scrambled signals whose samples are on an irregular grid of a sphere. All computations
can be reproduced with a software available at www.di.ens.fr/data/scattering/haar.
2 Orthogonal Haar Scattering on a Graph

2.1 Deep Networks of Permutation Invariant Operators
We consider signals x defined on an unweighted graph G = (V, E), with V = {1, ..., d}. Edges
relate neighbor vertices. We suppose that d is a power of 2 to simplify explanations. A Haar
scattering is calculated by iteratively applying the following permutation invariant operator
$$(\alpha, \beta) \longrightarrow (\alpha + \beta, |\alpha - \beta|) . \tag{1}$$
Its values are not modified by a permutation of $\alpha$ and $\beta$, and both values are recovered by
$$\max(\alpha, \beta) = \frac{1}{2}\big(\alpha + \beta + |\alpha - \beta|\big) \quad \text{and} \quad \min(\alpha, \beta) = \frac{1}{2}\big(\alpha + \beta - |\alpha - \beta|\big) . \tag{2}$$
An orthogonal Haar scattering transform computes progressively more invariant signal descriptors
by applying this invariant operator at multiple scales. This is implemented along a deep network
illustrated in Figure 1. The network layer j is a two-dimensional array $S_j x(n, q)$ of $d = 2^{-j}d \times 2^j$ coefficients, where n is a node index and q is a feature type.

The input network layer is $S_0 x(n, 0) = x(n)$. We compute $S_{j+1} x$ by regrouping the $2^{-j}d$ nodes of $S_j x$ in $2^{-j-1}d$ pairs $(a_n, b_n)$, and applying the permutation invariant operator (1) to each pair $(S_j x(a_n, q), S_j x(b_n, q))$:
$$S_{j+1} x(n, 2q) = S_j x(a_n, q) + S_j x(b_n, q) \tag{3}$$
and
$$S_{j+1} x(n, 2q+1) = |S_j x(a_n, q) - S_j x(b_n, q)| . \tag{4}$$
This transform is iterated up to a maximum depth $J \le \log_2(d)$. It computes $S_J x$ with $Jd/2$ additions, subtractions and absolute values. Since $S_j x \ge 0$ for $j > 0$, one can put an absolute value on the sum in (3) without changing $S_{j+1} x$. It results that $S_{j+1} x$ is calculated from the previous layer $S_j x$ by applying a linear operator followed by a non-linearity, as in most deep neural network architectures. In our case this non-linearity is an absolute value, as opposed to the rectifiers used in most deep networks [4].
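As a concrete illustration, here is a minimal NumPy sketch of one such layer and of the cascade; the function names and array layout are our own and not taken from the software released with the paper.

```python
import numpy as np

def haar_scattering_layer(S, pairs):
    """One Haar scattering layer, implementing Eqs. (3)-(4).

    S     : array of shape (n_nodes, n_features) holding S_j x(n, q).
    pairs : sequence of (a_n, b_n) index pairs regrouping the rows of S.
    Returns S_{j+1} x of shape (n_nodes // 2, 2 * n_features).
    """
    a = np.fromiter((p[0] for p in pairs), dtype=int)
    b = np.fromiter((p[1] for p in pairs), dtype=int)
    out = np.empty((len(pairs), 2 * S.shape[1]))
    out[:, 0::2] = S[a] + S[b]           # S_{j+1} x(n, 2q)
    out[:, 1::2] = np.abs(S[a] - S[b])   # S_{j+1} x(n, 2q+1)
    return out

def haar_scattering(x, pairings):
    """Cascade the layer over a list of pairings, one per scale 2^j."""
    S = np.asarray(x, dtype=float).reshape(-1, 1)   # S_0 x(n, 0) = x(n)
    for pairs in pairings:
        S = haar_scattering_layer(S, pairs)
    return S
```

Each layer halves the number of nodes and doubles the number of feature types, so the total number of coefficients remains d, computed with Jd/2 additions, subtractions and absolute values as stated above.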
For each n, the $2^j$ scattering coefficients $\{S_j x(n, q)\}_{0 \le q < 2^j}$ are calculated from the values of x in a vertex set $V_{j,n}$ of size $2^j$. One can verify by induction on (3) and (4) that $V_{0,n} = \{n\}$ for $0 \le n < d$, and for any $j \ge 0$
$$V_{j+1,n} = V_{j,a_n} \cup V_{j,b_n} . \tag{5}$$
Figure 2: A connected multiresolution is a partition of vertices with embedded connected sets $V_{j,n}$ of size $2^j$. (a): Example of partition for the graph of a square image grid, for $1 \le j \le 3$. (b): Example on an irregular graph.
The embedded subsets {Vj,n }j,n form a multiresolution approximation of the vertex set V . At each
scale 2j , different pairings (an , bn ) define different multiresolution approximations. A small graph
displacement propagates signal values from a node to its neighbors. To build nearly invariant representations over such displacements, a Haar scattering transform must regroup connected vertices. It
is thus computed over multiresolution vertex sets Vj,n which are connected in the graph G. It results
from (5) that a necessary and sufficient condition is that each pair (an , bn ) regroups two connected
sets Vj,an and Vj,bn .
Figure 2 shows two examples of connected multiresolution approximations. Figure 2(a) illustrates
the graph of an image grid, where pixels are connected to 8 neighbors. In this example, each Vj+1,n
regroups two subsets Vj,an and Vj,bn which are connected horizontally if j is even and connected
vertically if j is odd. Figure 2(b) illustrates a second example of connected multiresolution approximation on an irregular graph. There are many different connected multiresolution approximations
resulting from different pairings at each scale 2j . Different multiresolution approximations correspond to different Haar scattering transforms. In the following, we compute several Haar scattering
transforms of a signal x, by defining different multiresolution approximations.
The following theorem proves that a Haar scattering preserves the norm and that it is contractive up to a normalization factor $2^{j/2}$. The contraction is due to the absolute value, which suppresses the sign and hence reduces the amplitude of differences. The proof is in Appendix A.

Theorem 2.1. For any $j \ge 0$, and any $x, x'$ defined on V,
$$\|S_j x - S_j x'\| \le 2^{j/2} \|x - x'\| \quad \text{and} \quad \|S_j x\| = 2^{j/2} \|x\| .$$
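The norm identity is easy to check numerically: each pair $(\alpha, \beta)$ is mapped to $(\alpha + \beta, |\alpha - \beta|)$ and $(\alpha + \beta)^2 + |\alpha - \beta|^2 = 2(\alpha^2 + \beta^2)$, so every layer doubles the squared norm. A small self-contained check, with our own variable names:

```python
import numpy as np

rng = np.random.default_rng(0)
d, J = 16, 4
x = rng.standard_normal(d)

S = x.reshape(-1, 1)
for _ in range(J):
    perm = rng.permutation(S.shape[0]).reshape(-1, 2)  # a random pairing
    sums = S[perm[:, 0]] + S[perm[:, 1]]
    diffs = np.abs(S[perm[:, 0]] - S[perm[:, 1]])
    # stacking instead of interleaving only permutes the feature index q,
    # which leaves the norm unchanged
    S = np.concatenate([sums, diffs], axis=1)

assert np.isclose(np.linalg.norm(S), 2 ** (J / 2) * np.linalg.norm(x))
```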
2.2 Iterated Haar Wavelet Transforms
We show that a Haar scattering transform can be written as a cascade of orthogonal Haar wavelet
transforms and absolute value non-linearities. It is a particular example of scattering transforms introduced in [1]. It computes coefficients measuring signal variations at multiple scales and multiple
orders. We prove that the signal can be recovered from Haar scattering coefficients computed over
enough multiresolution approximations.
A scattering operator is contractive because of the absolute value. When coefficients have an arbitrary sign, suppressing the sign reduces by a factor 2 the volume of the signal space. We say that
SJ x(n, q) is a coefficient of order m if its computation includes m absolute values of differences.
The amplitude of scattering coefficients typically decreases exponentially when the scattering order
m increases, because of the contraction produced by the absolute value. We verify from (3) and (4)
that $S_J x(n, q)$ is a coefficient of order m = 0 if q = 0, and of order m > 0 if
$$q = \sum_{k=1}^{m} 2^{J-j_k} \quad \text{for } 0 \le j_k < j_{k+1} \le J .$$
It results that there are $\binom{J}{m} 2^{-J} d$ coefficients $S_J x(n, q)$ of order m.
We now show that Haar scattering coefficients of order m are obtained by cascading m orthogonal Haar wavelet transforms defined on the graph G. A Haar wavelet at a scale $2^j$ is defined over each $V_{j,n} = V_{j-1,a_n} \cup V_{j-1,b_n}$ by
$$\psi_{j,n} = 1_{V_{j-1,a_n}} - 1_{V_{j-1,b_n}} .$$
For any $J \ge 0$, one can verify [10, 6] that
$$\{1_{V_{J,n}}\}_{0 \le n < 2^{-J}d} \;\cup\; \{\psi_{j,n}\}_{0 \le n < 2^{-j}d,\; 0 \le j < J}$$
is a non-normalized orthogonal Haar basis of the space of signals defined on V. Let us denote $\langle x, x' \rangle = \sum_{v \in V} x(v)\, x'(v)$. Order m = 0 scattering coefficients sum the values of x in each $V_{J,n}$:
$$S_J x(n, 0) = \langle x, 1_{V_{J,n}} \rangle .$$
Order m = 1 scattering coefficients are sums of absolute values of orthogonal Haar wavelet coefficients. They measure the variation amplitude of x at each scale $2^{j_1}$, in each $V_{J,n}$:
$$S_J x(n, 2^{J-j_1}) = \sum_{p \,:\, V_{j_1,p} \subset V_{J,n}} |\langle x, \psi_{j_1,p} \rangle| .$$
Appendix B proves that second order scattering coefficients $S_J x(n, 2^{J-j_1} + 2^{J-j_2})$ are computed by applying a second orthogonal Haar wavelet transform to first order scattering coefficients. A coefficient $S_J x(n, 2^{J-j_1} + 2^{J-j_2})$ is an averaged second order increment over $V_{J,n}$, calculated from the variations at the scale $2^{j_2}$ of the increments of x at the scale $2^{j_1}$. More generally, Appendix B also proves that order m coefficients measure multiscale variations of x at the order m, and are obtained by applying a Haar wavelet transform on scattering coefficients of order $m-1$.
A single Haar scattering transform loses information since it applies a cascade of permutation invariant operators. However, the following theorem proves that x can be recovered from scattering
transforms computed over 2J different multiresolution approximations.
Theorem 2.2. There exist 2J multiresolution approximations such that almost all $x \in \mathbb{R}^d$ can be reconstructed from their scattering coefficients on these multiresolution approximations.
This theorem is proved in Appendix C. The key idea is that Haar scattering transforms are computed with permutation invariant operators. Inverting these operators allows one to recover the values of signal pairs but not their locations. However, recombining these values on enough overlapping sets allows one to recover their locations and hence the original signal x. This is done with multiresolutions which are interlaced at each scale $2^j$, in the sense that if one multiresolution pairs $(a_n, b_n)$ and $(a'_n, b'_n)$, then another multiresolution approximation pairs $(a'_n, b_n)$. Connectivity conditions are needed on the graph G to guarantee the existence of "interlaced" multiresolution approximations which are all connected.
3 Learning

3.1 Sparse Unsupervised Learning of Multiscale Connectivity
Haar scattering transforms compute multiscale signal variations of multiple orders, over non-overlapping sets of size $2^J$. To build signal descriptors which are nearly invariant to signal displacements on the graph, we want to compute scattering transforms over connected sets in the graph, which a priori requires knowing the graph connectivity. However, in many applications, the graph
of correlation between neighbor signal values, and may thus be estimated from a training set of
unlabeled examples {xi }i [7].
Instead of estimating the full graph geometry, which is an NP-complete problem, we estimate multiresolution approximations which are connected. This is a hierarchical clustering problem [19]. A multiresolution approximation is connected if at each scale $2^j$, each pair $(a_n, b_n)$ regroups two vertex sets $(V_{j,a_n}, V_{j,b_n})$ which are connected. This connection is estimated by minimizing the total variation within each set $V_{j,n}$, which are clusters of size $2^j$ [19]. It is done with a fine-to-coarse aggregation strategy. Given $\{V_{j,n}\}_{0 \le n < 2^{-j}d}$, we compute $V_{j+1,n}$ at the next scale by finding an optimal pairing $\{a_n, b_n\}_n$ which minimizes the total variation of scattering vectors, averaged over the training set $\{x_i\}_i$:
$$\sum_{n=0}^{2^{-j-1}d - 1} \; \sum_{q=0}^{2^j - 1} \; \sum_i |S_j x_i(a_n, q) - S_j x_i(b_n, q)| . \tag{6}$$
This is a weighted matching problem which can be solved by the Blossom Algorithm of Edmonds [8] with $O(d^3)$ operations. We use the implementation in [9]. Iterating on this algorithm for $0 \le j < J$ thus computes a multiresolution approximation at the scale $2^J$, with a hierarchical aggregation of graph vertices.
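The optimal pairing step can be sketched as follows, assuming the pairwise total-variation costs are accumulated over the training set. We use NetworkX's blossom-based matching on negated costs as a stand-in for the dedicated implementation of [9], so the function below is illustrative rather than the paper's actual code.

```python
import numpy as np
import networkx as nx

def learn_pairing(S_list):
    """One aggregation step minimizing Eq. (6).

    S_list : scattering layers S_j x_i of the training signals,
             each an array of shape (n_nodes, n_features).
    Returns a perfect pairing of the n_nodes current nodes.
    """
    n = S_list[0].shape[0]
    cost = np.zeros((n, n))
    for S in S_list:  # cost[a,b] = sum_i sum_q |S_j x_i(a,q) - S_j x_i(b,q)|
        for a in range(n):
            cost[a] += np.abs(S[a] - S).sum(axis=1)
    G = nx.Graph()
    for a in range(n):
        for b in range(a + 1, n):
            G.add_edge(a, b, weight=-cost[a, b])  # negate to minimize cost
    # maxcardinality=True forces a perfect matching (n is even)
    matching = nx.max_weight_matching(G, maxcardinality=True)
    return [tuple(sorted(p)) for p in matching]
```

In practice one would restrict candidate pairs to vertex sets believed to be close, to keep the matching graph sparse rather than fully connected.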
Observe that
$$\|S_{j+1} x\|_1 = \|S_j x\|_1 + \sum_q \sum_n |S_j x(a_n, q) - S_j x(b_n, q)| .$$
Given $S_j x$, it results that the minimization of (6) is equivalent to the minimization of $\sum_i \|S_{j+1} x_i\|_1$. This can be interpreted as finding a multiresolution approximation which yields an optimally sparse scattering transform. It operates with a greedy layerwise strategy across the network layers, similarly to sparse autoencoders for unsupervised deep learning [4].
As explained in the previous section, several Haar scattering transforms are needed to obtain a complete signal representation. The unsupervised learning computes N multiresolution approximations
by dividing the training set {xi }i in N non-overlapping subsets, and learning a different multiresolution approximation from each training subset.
3.2 Supervised Feature Selection and Classification
The unsupervised learning computes a vector of scattering coefficients which is typically much
larger than the dimension d of x. However, only a subset of these invariants are needed for any
particular classification task. The classification is improved by a supervised dimension reduction
which selects a subset of scattering coefficients. In this paper, the feature selection is implemented
with a partial least square regression [12, 13, 14]. The final supervised classifier is a Gaussian kernel
SVM.
Let us denote by $\Phi x = \{\phi_p x\}_p$ the set of all scattering coefficients at a scale $2^J$, computed from N multiresolution approximations. We perform a feature selection adapted to each class c, with a partial least square regression of the one-versus-all indicator function
$$f_c(x) = \begin{cases} 1 & \text{if } x \text{ belongs to class } c \\ 0 & \text{otherwise} \end{cases} .$$
A partial least square greedily selects and orthogonalizes each feature, one at a time. At the k-th iteration, it selects a $\phi_{p_k} x$, and a Gram-Schmidt orthogonalization yields a normalized $\tilde\phi_{p_k} x$, which is uncorrelated relative to all previously selected features:
$$\forall r < k, \quad \sum_i \tilde\phi_{p_k}(x_i)\, \tilde\phi_{p_r}(x_i) = 0 \quad \text{and} \quad \sum_i |\tilde\phi_{p_k}(x_i)|^2 = 1 .$$
The k-th feature $\phi_{p_k} x$ is selected so that the linear regression of $f_c(x)$ on $\{\tilde\phi_{p_r} x\}_{1 \le r \le k}$ has a minimum mean-square error, computed on the training set. This is equivalent to finding $p_k$ so that $\sum_i f_c(x_i)\, \tilde\phi_{p_k}(x_i)$ is maximum.
Figure 3: MNIST images (left) and images after random pixel permutations (right).

The partial least square regression thus selects and computes K decorrelated scattering features $\{\tilde\phi_{p_k} x\}_{k<K}$ for each class c. For a total of C classes, the union of all these feature sets defines a dictionary of size M = KC. They are linear combinations of the original Haar scattering coefficients $\{\phi_p x\}_p$. This dimension reduction can thus be interpreted as a last fully connected network layer, which outputs a vector of size M. The parameter M allows one to optimize the bias versus variance trade-off. It can be adjusted from the decay of the regression error of each $f_c$ [12]. In our numerical experiments, it is set to a fixed size for all data bases.
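A minimal sketch of the greedy selection described above, with our own names and no attempt to match the exact implementation of [12]: at each step, normalize the deflated candidates, pick the one most correlated with the class indicator, and remove its component from the remaining candidates by Gram-Schmidt.

```python
import numpy as np

def pls_select(Phi, f, K):
    """Greedy partial least squares feature selection for one class.

    Phi : array (n_samples, n_features) of scattering coefficients.
    f   : array (n_samples,), one-versus-all indicator f_c(x_i).
    K   : number of decorrelated features to keep.
    Returns the selected feature indices and the orthonormalized features.
    """
    selected, basis = [], []
    R = Phi.astype(float).copy()              # deflated candidate features
    for _ in range(K):
        norms = np.linalg.norm(R, axis=0) + 1e-12
        scores = np.abs(f @ (R / norms))      # correlation with the indicator
        k = int(np.argmax(scores))
        phi = R[:, k] / norms[k]              # normalized selected feature
        selected.append(k)
        basis.append(phi)
        R -= np.outer(phi, phi @ R)           # Gram-Schmidt deflation
    return selected, np.stack(basis, axis=1)
```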
4 Numerical Experiments
Unsupervised Haar scattering representations are tested on classification problems, over scrambled
images and scrambled data on a sphere, for which the geometry is therefore unknown. Classification
results are compared with a Haar scattering algorithm computed over the known signal geometry,
and with state of the art algorithms.
A Haar scattering representation involves few parameters, which we review here. The scattering scale $2^J \le d$ is the invariance scale. Scattering coefficients are computed up to a maximum order m, which is set to 4 in all experiments. Indeed, higher order scattering coefficients have a negligible relative energy, which is below 1%. The unsupervised learning algorithm computes N multiresolution approximations, corresponding to N different scattering transforms. Increasing N decreases the classification error but increases computations. The error decay becomes negligible for $N \ge 40$.
We set M = 1000 in all numerical experiments.
For signals defined on an unknown graph, the unsupervised learning computes an estimation of
connected multiresolution sets by minimizing an average total variation. For each data basis of
scrambled signals, the precision of this estimation is evaluated by computing the percentage of
multiscale sets which are indeed connected in the original topology (an image grid or a grid on the
sphere).
4.1 MNIST Digit Recognition
MNIST is a data basis with $6 \times 10^4$ hand-written digit images of size $d \le 2^{10}$, with $5 \times 10^4$ images for training and $10^4$ for testing. Examples of MNIST images before and after pixel scrambling are shown in Figure 3. The best classification results are obtained with a maximum invariance scale $2^J = 2^{10}$. The classification error is 0.9%, with an unsupervised learning of N = 40 multiresolution approximations. Table 1 shows that it is below but close to state-of-the-art results obtained with fully supervised deep convolutional networks, which are optimized with supervised backpropagation algorithms.
The unsupervised learning computes multiresolution sets $V_{j,n}$ from scrambled images. At scales $1 \le 2^j \le 2^3$, 100% of these multiresolution sets are connected in the original image grid, which
proves that the geometry is well estimated at these scales. This is only evaluated on meaningful
pixels which do not remain zero on all training images. For j = 4 and j = 5 the percentages of
connected sets are respectively 85% and 67%. The percentage of connected sets decreases because
long range correlations are weaker.
One can reduce the Haar scattering classification error from 0.9% to 0.59% with a known image
geometry. The Haar scattering transform is then computed over multiresolution approximations
which are directly constructed from the image grid as in Figure 2(a). Rotations and translations
define N = 64 different connected multiresolution approximations, which yield a reduced error of
0.59%. State of the art classification errors on MNIST, for non-augmented data basis (without elastic
deformations), are respectively 0.46% with a Gabor scattering [2] and 0.53% with a supervised
training of deep convolution networks [5]. This shows that without any learning, a Haar scattering
using geometry is close to the state of the art.
Table 1: Percentage of errors for the classification of scrambled MNIST images, obtained by different algorithms.

Maxout MLP + dropout [15]   0.94
Deep convex net. [16]       0.83
DBM + dropout [17]          0.79
Haar Scattering             0.90
Figure 4: Images of digits mapped on a sphere.
4.2 CIFAR-10 Images
CIFAR-10 images are color images of 32 × 32 pixels, which are much more complex than MNIST digit images. It includes 10 classes, such as "dogs", "cars", and "ships", with a total of $5 \times 10^4$ training examples and $10^4$ testing examples. The 3 color bands are represented with Y, U, V channels and
scattering coefficients are computed independently in each channel.
The Haar scattering is first applied to scrambled CIFAR images whose geometry is unknown. The
minimum classification error is obtained at the scale $2^J = 2^7$, which is below the maximum scale $d = 2^{10}$. It maintains some localization information on the image features. With N = 10 multiresolution
approximations, a Haar scattering transform has an error of 27.3%. It is 10% below previous results
obtained on this data basis, given in Table 2.
Nearly 100% of the multiresolution sets Vj,n computed from scrambled images are connected in the
original image grid, for $1 \le j \le 4$, which shows that the multiscale geometry is well estimated
at these fine scales. For j = 5, 6 and 7, the proportions of connected sets are 98%, 93% and 83%
respectively. As for MNIST images, the connectivity is not as precisely estimated at large scales.
Table 2: Percentage of errors for the classification of scrambled CIFAR-10 images, with different algorithms.

Fastfood [18]               36.9
Random Kitchen Sinks [18]   37.6
Haar Scattering             27.3
The Haar scattering classification error is reduced from 27.3% to 21.3% if the image geometry is
known. Same as for MNIST, we compute N = 64 multiresolution approximations obtained by
translating and rotating. After dimension reduction, the classification error is 21.3%. This error is
above the state of the art obtained by a supervised convolutional network [15] (11.68%), but the
Haar scattering representation involves no learning.
4.3 Signals on a Sphere
A data basis of irregularly sampled signals on a sphere is constructed in [3], by projecting the
MNIST image digits on d = 4096 points randomly sampled on the 3D sphere, and by randomly
rotating these images on the sphere. The random rotation is either uniformly distributed on the
sphere or restricted with a smaller variance (small rotations) [3]. The digit '9' is removed from the data set because it cannot be distinguished from a '6' after rotation. Examples of the dataset are
shown in Figure 4.
The classification algorithms introduced in [3] use the known distribution of points on the sphere,
by computing a representation based on the graph Laplacian. Table 3 gives the results reported in
[3], with a fully connected neural network, and a spectral graph Laplacian network.
As opposed to these algorithms, the Haar scattering algorithm uses no information on the positions of
points on the sphere. Computations are performed from a scrambled set of signal values, without any
geometric information. Scattering transforms are calculated up to the maximum scale $2^J = d = 2^{12}$. A total of N = 10 multiresolution approximations are estimated by unsupervised learning, and the classification is performed from $M = 10^3$ selected coefficients. Despite the fact that the geometry
is unknown, the Haar scattering reduces the error rate both for small and large 3D random rotations.
In order to evaluate the precision of our geometry estimation, we use the neighborhood information
based on the 3D coordinates of the 4096 points on the sphere of radius 1. We say that two points
are connected if their geodesic distance is smaller than 0.1. Each point on the sphere has on average
8 connected points. For small rotations, the percentage of learned multiresolution sets which are
connected is 92%, 92%, 88% and 83% for j going from 1 to 4. It is computed on meaningful points with non-negligible energy. For large rotations, it is 97%, 96%, 95% and 95%. This shows that the
multiscale geometry on the sphere is well estimated.
Table 3: Percentage of errors for the classification of MNIST images rotated and sampled on a sphere [3], with a nearest neighbor classifier, a fully connected two layer neural network, a spectral network [3], and a Haar scattering.

                  Nearest Neighbors   Fully Connect.   Spectral Net. [3]   Haar Scattering
Small rotations   19                  5.6              6                   2.2
Large rotations   80                  52               50                  47.7
5 Conclusion
A Haar scattering transform computes invariant data representations by iterating over a hierarchy
of permutation invariant operators, calculated with additions, subtractions and absolute values. The
geometry of unstructured signals is estimated with an unsupervised learning algorithm, which minimizes the average total signal variation over multiscale neighborhoods. This shows that unsupervised deep learning can be implemented with a polynomial complexity algorithm. The supervised
classification includes a feature selection implemented with a partial least square regression. State
of the art results have been shown on scrambled images as well as random signals sampled on a
sphere. The two important parameters of this architecture are the network depth, which corresponds
to the invariance scale, and the dimension reduction of the final layer, set to 103 in all experiments.
It can thus easily be applied to any data set.
This paper concentrates on scattering transforms of real valued signals. For a boolean vector x,
a boolean scattering transform is computed by replacing the operator (1) by a boolean permutation
invariant operator which transforms $(\alpha, \beta)$ into $(\alpha \text{ and } \beta,\; \alpha \text{ xor } \beta)$. Iteratively applying this operator
defines a boolean scattering transform Sj x having similar properties.
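A two-line sketch of this boolean variant of operator (1):

```python
def boolean_haar(a: bool, b: bool):
    # permutation invariant: (a AND b, a XOR b); (a, b) and (b, a) give the same output
    return a and b, a != b
```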
References
[1] S. Mallat, "Recursive interferometric representations". Proc. of EUSIPCO Conf. 2010, Denmark.
[2] J. Bruna, S. Mallat, "Invariant Scattering Convolution Networks," IEEE Trans. PAMI, 35(8): 1872-1886, 2013.
[3] J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun, "Spectral Networks and Deep Locally Connected Networks on Graphs," ICLR 2014.
[4] Y. Bengio, A. Courville, P. Vincent, "Representation Learning: A Review and New Perspectives", IEEE Trans. on PAMI, no. 8, vol. 35, pp. 1798-1828, 2013.
[5] Y. LeCun, K. Kavukcuoglu, and C. Farabet, "Convolutional Networks and Applications in Vision," Proc. IEEE Int. Symp. Circuits and Systems 2010.
[6] M. Gavish, B. Nadler, and R. R. Coifman, "Multiscale wavelets on trees, graphs and high dimensional data: Theory and applications to semi supervised learning", in ICML, pages 367-374, 2010.
[7] N. L. Roux, Y. Bengio, P. Lamblin, M. Joliveau and B. Kégl, "Learning the 2-D topology of images", in NIPS, pages 841-848, 2008.
[8] J. Edmonds. Paths, trees, and flowers. Canadian Journal of Mathematics, 1965.
[9] E. Rothberg's implementation of H. Gabow's "An Efficient Implementation of Edmonds' Algorithm for Maximum Matching on Graphs." JACM, 23, 1976.
[10] R. Rustamov, L. Guibas, "Wavelets on Graphs via Deep Learning," NIPS 2013.
[11] D. Shuman, S. Narang, P. Frossard, A. Ortega, P. Vandergheynst, "The Emerging Field of Signal Processing on Graphs," IEEE Signal Proc. Magazine, May 2013.
[12] T. Mehmood, K. H. Liland, L. Snipen and S. Sæbø, "A Review of Variable Selection Methods in Partial Least Squares Regression", Chemometrics and Intelligent Laboratory Systems, vol. 118, pages 62-69, 2012.
[13] H. Zhang, S. Kiranyaz and M. Gabbouj, "Cardinal Sparse Partial Least Square Feature Selection and its Application in Face Recognition", Signal Processing Conference (EUSIPCO), 2014 Proceedings of the 22nd European, Sep. 2014.
[14] W. R. Schwartz, A. Kembhavi, D. Harwood and L. S. Davis, "Human Detection Using Partial Least Squares Analysis", Computer Vision, ICCV 2009.
[15] I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville and Y. Bengio, "Maxout Networks", arXiv preprint, arXiv:1302.4389, 2013.
[16] D. Yu and L. Deng, "Deep Convex Net: A Scalable Architecture for Speech Pattern Classification", in Proc. INTERSPEECH, 2011, pp. 2285-2288.
[17] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Improving neural networks by preventing co-adaptation of feature detectors", Technical report, arXiv:1207.0580, 2012.
[18] Q. Le, T. Sarlos and A. Smola, "Fastfood - Approximating Kernel Expansions in Loglinear Time", ICML, 2013.
[19] M. Hein and S. Setzer, "Beyond Spectral Clustering - Tight Relaxations of Balanced Graph Cuts," NIPS 2011.
5,021 | 5,546 | Multi-View Perceptron: a Deep Model for Learning Face Identity and View Representations
Zhenyao Zhu1,3, Ping Luo3,1, Xiaogang Wang2,3, Xiaoou Tang1,3
1 Department of Information Engineering, The Chinese University of Hong Kong
2 Department of Electronic Engineering, The Chinese University of Hong Kong
3 Shenzhen Key Lab of CVPR, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
{zz012,lp011}@ie.cuhk.edu.hk [email protected] [email protected]
Abstract
Various factors, such as identity, view, and illumination, are coupled in face
images. Disentangling the identity and view representations is a major challenge
in face recognition. Existing face recognition systems either use handcrafted
features or learn features discriminatively to improve recognition accuracy. This
is different from the behavior of primate brain. Recent studies [5, 19] discovered
that primate brain has a face-processing network, where view and identity are
processed by different neurons. Taking into account this instinct, this paper
proposes a novel deep neural net, named multi-view perceptron (MVP), which can
untangle the identity and view features, and meanwhile infer a full spectrum
of multi-view images, given a single 2D face image. The identity features of MVP
achieve superior performance on the MultiPIE dataset. MVP is also capable of interpolating and predicting images under viewpoints that are unobserved in the training
data.
1 Introduction
The performance of face recognition systems depends heavily on facial representation, which is
naturally coupled with many types of face variations, such as view, illumination, and expression. As
face images are often observed in different views, a major challenge is to untangle the face identity
and view representations. Substantial efforts have been dedicated to extract identity features by
hand, such as LBP [1], Gabor [14], and SIFT [15]. The best practice of face recognition extracts
the above features on the landmarks of face images with multiple scales and concatenates them into
high dimensional feature vectors [4, 21]. Deep learning methods, such as Boltzmann machine [9],
sum product network [17], and deep neural net [16, 25, 22, 23, 24, 26] have been applied to face
recognition. For instance, Sun et al. [25, 22] employed deep neural net to learn identity features
from raw pixels by predicting 10,000 identities.
Deep neural net is inspired by the understanding of hierarchical cortex in the primate brain and
mimicking some aspects of its activities. Recent studies [5, 19] discovered that macaque monkeys
have a face-processing network that was made of six interconnected face-selective regions, where
neurons in some of these regions were view-specific, while some others were tuned to identity across
views, making face recognition in the primate brain robust to view variation. This intriguing function of the primate brain inspires us to develop a novel deep neural net, called multi-view perceptron (MVP),
which can disentangle identity and view representations, and also reconstruct images under multiple
views. Specifically, given a single face image of an identity under an arbitrary view, it can generate
a sequence of output face images of the same identity, one at a time, under a full spectrum of
viewpoints. Examples of the input images and the generated multi-view outputs of two identities
are illustrated in Fig. 1. The images in the last two rows are from the same person. The extracted
features of MVP with respect to identity and view are plotted correspondingly in blue and orange.
Figure 1: The inputs (first column) and the multi-view outputs (remaining columns) of two identities. The first
input is from one identity and the last two inputs are from the other. Each reconstructed multi-view image (left)
has its ground truth (right) for comparison. The extracted identity features of the inputs (the second column),
and the view features of both the inputs and outputs are plotted in blue and orange, respectively. The identity
features of the same identity are similar, even though the inputs are captured in diverse views, while the view
features of the same viewpoint are similar, although they are from different identities. The two persons look
similar in the frontal view, but can be better distinguished in other views.
We can observe that the identity features of the same identity are similar, even though the inputs are
captured in very different views, whilst the view features of images in the same view are similar,
although they are across different identities.
Unlike other deep networks that produce a deterministic output from an input, MVP employs the
deterministic hidden neurons to learn the identity features, whilst using the random hidden neurons
to capture the view representation. By sampling distinct values of the random neurons, output
images in distinct views are generated. Moreover, to yield images of different viewpoints, we
add regularization that images under similar viewpoints should have similar view representations
on the random neurons. The two types of neurons are modeled in a probabilistic way. In the
training stage, the parameters of MVP are updated by back-propagation, where the gradient is
calculated by maximizing a variational lower bound of the complete data log-likelihood. With our
proposed learning algorithm, the EM updates on the probabilistic model are converted to forward
and backward propagation. In the testing stage, given an input image, MVP can extract its identity
and view features. In addition, if an order of viewpoints is also provided, MVP can sequentially
reconstruct multiple views of the input image by following this order.
This paper has several key contributions. (i) We propose a multi-view perceptron (MVP) and its
learning algorithm to factorize the identity and view representations with different sets of neurons,
making the learned features more discriminative and robust. (ii) MVP can reconstruct a full spectrum
of views given a single 2D image. The full spectrum of views can better distinguish identities, since
different identities may look similar in a particular view but differently in others as illustrated in Fig.
1. (iii) MVP can interpolate and predict images under viewpoints that are unobserved in the training
data, in some sense imitating the reasoning ability of humans.
Related Works. In the literature of computer vision, existing methods that deal with view (pose)
variation can be divided into 2D- and 3D-based methods. For example, the 2D methods, such as [6],
infer the deformation (e.g. thin plate splines) between 2D images across poses. The 3D methods,
such as [2, 12], capture 3D face models in different parametric forms. The above methods have
their inherent shortcomings. Extra cost and resources are needed to capture and process 3D data.
Because of lacking one degree of freedom, inferring 3D deformation from 2D transformation is
often ill-posed. More importantly, none of the existing approaches simulates how the primate brain
encodes view representations. In our approach, instead of employing any geometric models, view
information is encoded with a small number of neurons, which can recover the full spectrum of
views together with identity neurons. This representation of encoding identity and view information
into different neurons is closer to the face-processing system in the primate brain and new to the
deep learning literature. Our previous work [28] learned identity features by using CNN to recover a
single frontal view face image, which is a special case of MVP after removing the random neurons.
[28] did not learn the view representation as we do. Experimental results show that our approach not
only provides rich multi-view representation but also learns better identity features compared with
[28]. Fig. 1 shows examples that different persons may look similar in the front view, but are better
distinguished in other views. Thus it improves the performance of face recognition significantly.
More recently, Reed et al. [20] untangled factors of image variation by using a high-order Boltzmann
machine, where all the neurons are stochastic and it is solved by gibbs sampling. MVP contains both
stochastic and deterministic neurons and thus can be efficiently solved by back-propagation.
2 Multi-View Perceptron
The training data is a set of image pairs, $I = \{x_{ij}, (y_{ik}, v_{ik})\}_{i=1,j=1,k=1}^{N,M,M}$, where $x_{ij}$ is the input image of the i-th identity under the j-th view, $y_{ik}$ denotes the output image of the same identity in the k-th view, and $v_{ik}$ is the view label of the output. $v_{ik}$ is an M-dimensional binary vector, with the k-th element as 1 and the remaining zeros. MVP is learned from the training data such that given an input x, it can output images y of the same identity in different views and their view labels v. Then, the output v and y are generated as¹
$$v = F(y, h^v; \Theta), \quad y = F(x, h^{id}, h^v, h^r; \Theta) + \epsilon , \tag{1}$$
where F is a non-linear function and $\Theta$ is a set of weights and biases to be learned. There are three types of hidden neurons, $h^{id}$, $h^v$, and $h^r$, which respectively extract identity features, view features, and the features to reconstruct the output face image. $\epsilon$ signifies a noise variable.
Figure 2: Network structure of MVP, which has six layers, including three layers with only the deterministic neurons (i.e. the layers parameterized by the weights $U_0, U_1, U_4$), and three layers with both the deterministic and random neurons (i.e. the weights $U_2, V_2, W_2, U_3, V_3, U_5, W_5$). This structure is used throughout the experiments.
Fig. 2 shows the architecture² of MVP, which is a directed graphical model with six layers, where the nodes with and without filling represent the observed and hidden variables, and the nodes in green and blue indicate the deterministic and random neurons, respectively. The generation process of y and v starts from x, flows through the neurons that extract the identity feature $h^{id}$, which combines with the hidden view representation $h^v$ to yield the feature $h^r$ for face recovery. Then, $h^r$ generates y. Meanwhile, both $h^v$ and y are united to generate v. $h^{id}$ and $h^r$ are deterministic binary hidden neurons, while $h^v$ are random binary hidden neurons sampled from a distribution $q(h^v)$. Different sampled $h^v$ generate different y, making the perception of multiple views possible. $h^v$ usually has a low dimensionality, approximately ten, as ten binary neurons can ideally model $2^{10}$ distinct views.
For clarity of derivation, we take an example of MVP that contains only one hidden layer of $h^{id}$ and $h^v$. More layers can be added and derived in a similar fashion. We consider a joint distribution, which marginalizes out the random hidden neurons,
$$p(y, v \,|\, h^{id}; \Theta) = \sum_{h^v} p(y, v, h^v \,|\, h^{id}; \Theta) = \sum_{h^v} p(v \,|\, y, h^v; \Theta)\, p(y \,|\, h^{id}, h^v; \Theta)\, p(h^v) , \tag{2}$$
where $\Theta = \{U_0, U_1, V_1, U_2, V_2\}$, the identity feature is extracted from the input image, $h^{id} = f(U_0 x)$, and f is the sigmoid activation function, $f(x) = 1/(1 + \exp(-x))$. Other activation functions, such as the rectified linear function [18] and tangent [11], can be used as well. To model continuous values of the output, we assume y follows a conditional diagonal Gaussian distribution, $p(y \,|\, h^{id}, h^v; \Theta) = N(y \,|\, U_1 h^{id} + V_1 h^v, \sigma_y^2)$. The probability of y belonging to the j-th view is modeled with the softmax function,
$$p(v_j = 1 \,|\, y, h^v; \Theta) = \frac{\exp(U_{2,j\cdot}\, y + V_{2,j\cdot}\, h^v)}{\sum_{k=1}^{K} \exp(U_{2,k\cdot}\, y + V_{2,k\cdot}\, h^v)} ,$$
where $U_{2,j\cdot}$ indicates the j-th row of the matrix $U_2$.

¹ The subscripts i, j, k are omitted for clearness.
² For clarity, the biases are omitted.
2.1 Learning Procedure
The weights and biases of MVP are learned by maximizing the data log-likelihood. The lower bound of the log-likelihood can be written as
$$\log p(y, v \,|\, h^{id}; \Theta) = \log \sum_{h^v} p(y, v, h^v \,|\, h^{id}; \Theta) \ge \sum_{h^v} q(h^v) \log \frac{p(y, v, h^v \,|\, h^{id}; \Theta)}{q(h^v)} . \tag{3}$$
Eq. (3) is attained by decomposing the log-likelihood into two terms,
$$\log p(y, v \,|\, h^{id}; \Theta) = -\sum_{h^v} q(h^v) \log \frac{p(h^v \,|\, y, v; \Theta)}{q(h^v)} + \sum_{h^v} q(h^v) \log \frac{p(y, v, h^v \,|\, h^{id}; \Theta)}{q(h^v)} ,$$
which can be easily verified by substituting the product $p(y, v, h^v \,|\, h^{id}) = p(y, v \,|\, h^{id})\, p(h^v \,|\, y, v)$ into the right hand side of the decomposition. In particular, the first term is the KL-divergence [10] between the true posterior and the distribution $q(h^v)$. As the KL-divergence is non-negative, the second term is regarded as the variational lower bound on the log-likelihood.
The above lower bound can be maximized by using the Monte Carlo Expectation Maximization (MCEM) algorithm recently introduced by [27], which approximates the true posterior by using importance sampling with the conditional prior as the proposal distribution. With the Bayes' rule, the true posterior of MVP is
$$p(h^v \,|\, y, v) = \frac{p(y, v \,|\, h^v)\, p(h^v)}{p(y, v)} ,$$
where $p(y, v \,|\, h^v)$ represents the multi-view perception error, $p(h^v)$ is the prior distribution over $h^v$, and $p(y, v)$ is a normalization constant. Since we do not assume any prior information on the view distribution, $p(h^v)$ is chosen as a uniform distribution between zero and one. To estimate the true posterior, we let $q(h^v) = p(h^v \,|\, y, v; \Theta^{old})$. It is approximated by sampling $h^v$ from the uniform distribution, i.e. $h^v \sim U(0, 1)$, weighted by the importance weight $p(y, v \,|\, h^v; \Theta^{old})$. With the EM algorithm, the lower bound of the log-likelihood turns into
$$L(\Theta, \Theta^{old}) = \sum_{h^v} p(h^v \,|\, y, v; \Theta^{old}) \log p(y, v, h^v \,|\, h^{id}; \Theta) \simeq \frac{1}{S} \sum_{s=1}^{S} w_s \log p(y, v, h^v_s \,|\, h^{id}; \Theta) , \tag{4}$$
where $w_s = p(y, v \,|\, h^v_s; \Theta^{old})$ is the importance weight. The E-step samples the random hidden neurons, i.e. $h^v_s \sim U(0, 1)$, while the M-step calculates the gradient,
$$\frac{\partial L}{\partial \Theta} \simeq \frac{1}{S} \sum_{s=1}^{S} \frac{\partial L_s(\Theta, \Theta^{old})}{\partial \Theta} = \frac{1}{S} \sum_{s=1}^{S} w_s \frac{\partial}{\partial \Theta} \big\{ \log p(v \,|\, y, h^v_s) + \log p(y \,|\, h^{id}, h^v_s) \big\} , \tag{5}$$
where the gradient is computed by averaging over all the gradients with respect to the importance samples.
The two steps have to be iterated. When more samples are needed to estimate the posterior, the
space complexity will increase significantly, because we need to store a batch of data, the proposed
samples, and their corresponding outputs at each layer of the deep network. When implementing the
algorithm with GPU, one needs to make a tradeoff between the size of the data and the accurateness
of the approximation, if the GPU memory is not sufficient for large scale training data. Our empirical
study (Sec. 3.1) shows that the M-step of MVP can be computed by using only one sample, because
the uniform prior typically leads to sparse weights during training. Therefore, the EM process
develops into the conventional back-propagation.
In the forward pass, we sample a number of $h^v_s$ based on the current parameters $\Theta$, such that only the sample with the largest weight needs to be stored. We demonstrate in the experiment (Sec. 3.1) that a small number of samples (e.g. < 20) is sufficient to find a good proposal. In the backward pass, we seek to update the parameters by the gradient,
$$\frac{\partial L(\Theta)}{\partial \Theta} \simeq \frac{\partial}{\partial \Theta}\, w_s \big( \log p(v \,|\, y, h^v_s) + \log p(y \,|\, h^{id}, h^v_s) \big) , \tag{6}$$
where $h^v_s$ is the sample that has the largest weight $w_s$. We need to optimize the following two terms:
$$\log p(y \,|\, h^{id}, h^v_s) = -\log \sigma_y - \frac{\|\hat{y} - (U_1 h^{id} + V_1 h^v_s)\|_2^2}{2\sigma_y^2}$$
and
$$\log p(v \,|\, y, h^v_s) = \sum_j \hat{v}_j \log \frac{\exp(U_{2,j\cdot}\, y + V_{2,j\cdot}\, h^v_s)}{\sum_{k=1}^{K} \exp(U_{2,k\cdot}\, y + V_{2,k\cdot}\, h^v_s)} ,$$
where $\hat{y}$ and $\hat{v}$ are the ground truth.
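A schematic sketch of this E-step for the one-hidden-layer case of Eq. (2); all parameter names are ours and the code is an illustration under those assumptions, not the authors' implementation. It draws samples of $h^v$, scores each with the importance weight $p(y, v \,|\, h^v_s; \Theta^{old})$ built from the Gaussian and softmax terms above, and keeps the best sample for back-propagation.

```python
import numpy as np

def log_p_y(y, h_id, h_v, U1, V1, sigma_y):
    # log of the diagonal Gaussian N(y | U1 h_id + V1 h_v, sigma_y^2), up to constants
    mu = U1 @ h_id + V1 @ h_v
    return -y.size * np.log(sigma_y) - np.sum((y - mu) ** 2) / (2 * sigma_y ** 2)

def log_p_v(v, y, h_v, U2, V2):
    # log-softmax probability of the ground-truth view label
    logits = U2 @ y + V2 @ h_v
    log_z = logits.max() + np.log(np.exp(logits - logits.max()).sum())
    return logits[np.argmax(v)] - log_z

def e_step(x, y, v, params, n_samples=20, rng=np.random.default_rng(0)):
    U0, U1, V1, U2, V2, sigma_y = params
    h_id = 1.0 / (1.0 + np.exp(-U0 @ x))       # deterministic identity features
    best = (-np.inf, None)
    for _ in range(n_samples):
        h_v = rng.random(V1.shape[1])          # h^v_s ~ U(0, 1)
        log_w = log_p_y(y, h_id, h_v, U1, V1, sigma_y) + log_p_v(v, y, h_v, U2, V2)
        if log_w > best[0]:
            best = (log_w, h_v)
    return h_id, best[1]                        # keep only the best-weighted sample
```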
• Continuous View. In the previous discussion, v is assumed to be a binary vector. Note that v can also be modeled as a continuous variable with a Gaussian distribution,
$$p(v \,|\, y, h^v) = N(v \,|\, U_2 y + V_2 h^v, \sigma_v) , \tag{7}$$
where v is a scalar corresponding to different views from −90° to +90°. In this case, we can generate views not presented in the training data by interpolating v, as shown in Fig. 6.
• Difference with multi-task learning. Our model, which only has a single task, is also different
from multi-task learning (MTL), where reconstruction of each view could be treated as a different
task, although MTL has not been used for multi-view reconstruction in literature to the best of our
knowledge. In MTL, the number of views to be reconstructed is predefined, equivalent to the number
of tasks, and it encounters problems when the training data of different views are unbalanced;
while our approach can sample views continuously and generate views not presented in the training
data by interpolating v as described above. Moreover, the model complexity of MTL increases
as the number of views and its training is more difficult since different tasks may have difference
convergence rates.
2.2 Testing Procedure
Given the view label v and the input x, we generate the face image y under the viewpoint of v in the testing stage. A set of $h^v$ are first sampled, $\{h^v_s\}_{s=1}^S \sim U(0, 1)$, which corresponds to a set of outputs $\{y_s\}_{s=1}^S$. For example, in a simple network with only one hidden layer, $y_s = U_1 h^{id} + V_1 h^v_s$ and $h^{id} = f(U_0 x)$. Then, the desired face image in view v is the output $y_s$ that produces the largest probability $p(v \,|\, y_s, h^v_s)$. A full spectrum of multi-view images is reconstructed for all the possible view labels v.
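A sketch of this generation loop, again for the one-hidden-layer case with our own names: every sampled $h^v_s$ yields one candidate image, and each view label keeps the candidate it explains best.

```python
import numpy as np

def generate_views(x, params, view_labels, n_samples=100, rng=np.random.default_rng(0)):
    U0, U1, V1, U2, V2 = params
    h_id = 1.0 / (1.0 + np.exp(-U0 @ x))
    samples = []
    for _ in range(n_samples):
        h_v = rng.random(V1.shape[1])           # h^v_s ~ U(0, 1)
        y = U1 @ h_id + V1 @ h_v                # y_s = U1 h^id + V1 h^v_s
        logits = U2 @ y + V2 @ h_v
        m = logits.max()
        log_probs = logits - (m + np.log(np.exp(logits - m).sum()))
        samples.append((y, log_probs))
    # for each view label, keep the sample with the largest p(v | y_s, h^v_s)
    outputs = []
    for v in view_labels:
        j = int(np.argmax(v))
        outputs.append(max(samples, key=lambda s: s[1][j])[0])
    return outputs
```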
2.3 View Estimation
Our model can also be used to estimate the viewpoint of the input image x. First, given all possible values of viewpoint v, we can generate a set of corresponding output images $\{y_z\}$, where z indicates the index of the values of view we generated (or interpolated). Then, to estimate the viewpoint, we assign the view label of the z-th output $y_z$ to x, such that $y_z$ is the most similar image to x. The above procedure is formulated as below. If v is discrete, the problem is
$$\arg\min_{j,z} \big\| p(v_j = 1 \,|\, x, h^v_z) - p(v_j = 1 \,|\, y_z, h^v_z) \big\|_2^2 = \arg\min_{j,z} \left\| \frac{\exp(U_{2,j\cdot}\, x + V_{2,j\cdot}\, h^v_z)}{\sum_{k=1}^{K} \exp(U_{2,k\cdot}\, x + V_{2,k\cdot}\, h^v_z)} - \frac{\exp(U_{2,j\cdot}\, y_z + V_{2,j\cdot}\, h^v_z)}{\sum_{k=1}^{K} \exp(U_{2,k\cdot}\, y_z + V_{2,k\cdot}\, h^v_z)} \right\|_2^2 .$$
If v is continuous, the problem is defined as
$$\arg\min_z \big\| (U_2 x + V_2 h^v_z) - (U_2 y_z + V_2 h^v_z) \big\|_2^2 = \arg\min_z \| x - y_z \|_2^2 .$$
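For the continuous case, the view estimate thus reduces to a nearest-reconstruction search over the generated spectrum, which can be sketched in a few lines (names ours):

```python
import numpy as np

def estimate_view(x, generated):
    """generated: list of (v_z, y_z) pairs produced by the model for a grid
    of interpolated view values; returns the label of the nearest y_z."""
    dists = [np.sum((x - y) ** 2) for _, y in generated]
    return generated[int(np.argmin(dists))][0]
```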
3 Experiments
Several experiments are designed for evaluation and comparison³. In Sec. 3.1, MVP is evaluated on a large face recognition dataset to demonstrate the effectiveness of the identity representation. Sec. 3.2 presents a quantitative evaluation, showing that the reconstructed face images are of good quality and that the multi-view spectrum retains discriminative information for face recognition. Sec. 3.3 shows that MVP can be used for view estimation and achieves results comparable to discriminative methods specially designed for this task. An interesting experiment in Sec. 3.4
shows that by modeling the view as a continuous variable, MVP can analyze and reconstruct views
not seen in the training data.
3.1 Multi-View Face Recognition
MVP on multi-view face recognition is evaluated on the MultiPIE dataset [7], which contains 754,204 images of 337 identities. Each identity was captured under 15 viewpoints from −90° to +90° and 20 different illuminations. It is the largest and most challenging dataset for evaluating face recognition under view and lighting variations. We conduct the following three experiments to demonstrate the effectiveness of MVP.
³ http://mmlab.ie.cuhk.edu.hk/projects/MVP.htm. For more technical details of this work, please contact the corresponding author Ping Luo ([email protected]).
• Face recognition across views. This setting follows the existing methods, e.g. [2, 12, 28], which employ the same subset of MultiPIE that covers images from −45° to +45° with neutral illumination. The first 200 identities are used for training and the remaining 137 identities for test. In the testing stage, the gallery is constructed by choosing one canonical view image (0°) from each testing identity. The remaining images of the testing identities from −45° to +45° are selected as probes. The number of neurons in MVP can be expressed as 32×32 - 512 - 512(10) - 512(10) - 1024 - 32×32[7], where the input and output images have the size of 32×32, [7] denotes the length of the view label vector (v), and (10) indicates that the third and fourth layers each have ten random neurons.
We examine the performance of using the identity features, i.e. h^id_2 (denoted as MVP_{h^id_2}), and compare it with seven state-of-the-art methods in Table 1. The first three methods are based on 3D face models and the remaining ones are 2D feature extraction methods, including deep models, such as FIP [28] and RL [28], which employed a traditional convolutional network to recover the frontal-view face image. As the existing methods did, LDA is applied to all the 2D methods to reduce the features' dimension. The first and second best results are highlighted for each viewpoint, as shown in Table 1. The two deep models (MVP and RL) outperform all the existing methods, including the 3D face models. RL achieves the best results on three viewpoints, whilst MVP is the best on four viewpoints. The extracted feature dimensions of MVP and RL are 512 and 9216, respectively. In summary, MVP obtains comparable averaged accuracy to RL under this setting, while the learned feature representation is more compact.
Table 1: Face recognition accuracies across views. The first and the second best performances are in bold.

Method            | Avg.  | −15°  | +15°  | −30°  | +30°  | −45° | +45°
VAAM [2]          | 86.9  | 95.7  | 95.7  | 89.5  | 91.0  | 74.1 | 74.8
FA-EGFC [12]      | 92.7  | 99.3  | 99.0  | 92.9  | 95.0  | 84.7 | 85.2
SA-EGFC [12]      | 97.2  | 99.7  | 99.7  | 98.3  | 98.7  | 93.0 | 93.6
LE [3]+LDA        | 93.2  | 99.9  | 99.7  | 95.5  | 95.5  | 86.9 | 81.8
CRBM [9]+LDA      | 87.6  | 94.9  | 96.4  | 88.3  | 90.5  | 80.3 | 75.2
FIP [28]+LDA      | 95.6  | 100.0 | 98.5  | 96.4  | 95.6  | 93.4 | 89.8
RL [28]+LDA       | 98.3  | 100.0 | 99.3  | 98.5  | 98.5  | 95.6 | 97.8
MVP h^id_2 +LDA   | 98.1  | 100.0 | 100.0 | 100.0 | 99.3  | 93.4 | 95.6
Table 2: Face recognition accuracies across views and illuminations. The first and the second best performances are in bold.

Method                 | Avg. |  0°  | −15° | +15° | −30° | +30° | −45° | +45° | −60° | +60°
Raw Pixels+LDA         | 36.7 | 81.3 | 59.2 | 58.3 | 35.5 | 37.3 | 21.0 | 19.7 | 12.8 | 7.63
LBP [1]+LDA            | 50.2 | 89.1 | 77.4 | 79.1 | 56.8 | 55.9 | 35.2 | 29.7 | 16.2 | 14.6
Landmark LBP [4]+LDA   | 63.2 | 94.9 | 83.9 | 82.9 | 71.4 | 68.2 | 52.8 | 48.3 | 35.5 | 32.1
CNN+LDA                | 58.1 | 64.6 | 66.2 | 62.8 | 60.7 | 63.6 | 56.4 | 57.9 | 46.4 | 44.2
FIP [28]+LDA           | 72.9 | 94.3 | 91.4 | 90.0 | 78.9 | 82.5 | 66.1 | 62.0 | 49.3 | 42.5
RL [28]+LDA            | 70.8 | 94.3 | 90.5 | 89.8 | 77.5 | 80.0 | 63.6 | 59.5 | 44.6 | 38.9
MTL+RL+LDA             | 74.8 | 93.8 | 91.7 | 89.6 | 80.1 | 83.3 | 70.4 | 63.8 | 51.5 | 50.2
MVP h^id_1 +LDA        | 61.5 | 92.5 | 85.4 | 84.9 | 64.3 | 67.0 | 51.6 | 45.4 | 35.1 | 28.3
MVP h^id_2 +LDA        | 79.3 | 95.7 | 93.3 | 92.2 | 83.4 | 83.9 | 75.2 | 72.6 | 62.3 | 57.3
MVP h^r_3 +LDA         | 74.2 | 91.0 | 83.4 | 86.7 | 77.3 | 84.1 | 73.1 | 74.6 | 62.0 | 63.9
MVP h^r_4 +LDA         | 56.0 | 70.6 | 68.5 | 60.2 | 60.0 | 63.8 | 53.2 | 55.7 | 44.4 | 46.9
• Face recognition across views and illuminations. To examine the robustness of different feature representations under more challenging conditions, we extend the first setting by employing a larger subset of MultiPIE, which contains images from −60° to +60° and 20 illuminations. The other experimental settings are the same as above. In Table 2, feature representations of different layers in MVP are compared with seven existing features, including raw pixels, LBP [1] on an image grid, LBP on facial landmarks [4], CNN features, FIP [28], RL [28], and MTL+RL. LDA is applied to all the feature representations. Note that the last four methods are built on convolutional neural networks; the only distinction is that they adopted different objective functions to learn features. Specifically, CNN uses a cross-entropy loss to classify face identity as in [26]. FIP and RL utilized a least-squares loss to recover the frontal-view image. MTL+RL is an extension of RL: it employs multiple tasks, each of which is formulated as a least-squares loss, to recover multi-view images, and all the tasks share feature layers. To achieve fair comparisons, CNN, FIP, and MTL+RL adopt the same convolutional structure as RL [28], since RL achieves competitive results in our first experiment.
The first and second best results are emphasized in bold in Table 2. The identity feature h^id_2 of MVP outperforms all the other methods on all the views with large margins. MTL+RL achieves the second best results except on −60°. These results demonstrate the superiority of modeling multi-view perception. For the features at different layers of MVP, the performance can be summarized as h^id_2 > h^r_3 > h^id_1 > h^r_4, which conforms to our expectation. h^id_2 performs the best because it is the highest level of identity features. h^id_2 performs better than h^id_1 because pose factors coupled in the input image x have been further removed, after one more forward mapping from h^id_1 to h^id_2. h^id_2 also outperforms h^r_3 and h^r_4, because some randomly generated view factors (h^v_2 and h^v_3) have been incorporated into these two layers during the construction of the full view spectrum. Please refer to Fig. 2 for a better understanding.
• Effectiveness of the BP Procedure. Fig. 3 (a) compares the convergence rates during training when using different numbers of samples to estimate the true posterior. We observe that a small number of samples, such as twenty, can lead to reasonably good convergence. Fig. 3 (b) empirically shows that the uniform prior leads to sparse weights during training; in other words, we can calculate the gradient of BP using only one sample, as in Eq. (6). Fig. 3 (b) demonstrates that 20 samples are sufficient, since only 6 percent of the samples' weights approximate one (all the others are zeros). Furthermore, as shown in Fig. 3 (c), the convergence rates of the one-sample gradient and the weighted summation are comparable.

Figure 3: Analysis of MVP on the MultiPIE dataset. (a) Comparison of convergence, using different numbers of samples to estimate the true posterior. (b) Comparison of sparsity of the samples' weights. (c) Comparison of convergence, using the largest weighted sample and using the weighted average over all the samples to compute the gradient.
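The one-sample gradient trick can be illustrated with a toy NumPy snippet: after normalizing the S importance weights of the sampled view vectors, nearly all of the mass typically sits on one sample (Fig. 3 (b)), so taking the arg-max sample approximates the weighted average. The helper name and the toy numbers below are our assumptions, not the authors' code.

```python
import numpy as np

def pick_sample_by_weight(weights):
    """Normalize the S importance weights; with a uniform prior most normalized
    weights collapse to ~0 and one to ~1, so the gradient can be estimated
    from the single largest-weight sample."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return int(np.argmax(w)), w

# toy check: 20 samples, one dominant weight
raw = np.full(20, 1e-3)
raw[7] = 0.5
idx, w = pick_sample_by_weight(raw)
assert idx == 7 and w[7] > 0.9   # one weight approximates one, the rest near zero
```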
3.2 Reconstruction Quality

Another experiment is designed to quantitatively evaluate the multi-view reconstruction results. The setting is the same as the first experiment in Sec. 3.1. The gallery images are all in the frontal view (0°). Differently, LDA is applied to the raw pixels of the original images (OI) and of the reconstructed images (RI) under the same view, respectively. Fig. 4 plots the accuracies of face recognition with respect to distinct viewpoints. Not surprisingly, under the viewpoints of +30° and −45° the accuracies of RI decrease compared to OI. Nevertheless, this decrease is comparatively small (< 5%), which implies that the reconstructed images are of reasonably good quality. We notice that the reconstructed images in Fig. 1 lose some detailed textures, while well preserving the shapes of the profile and the facial components.

Figure 4: Face recognition accuracies. LDA is applied to the raw pixels of the original images and the reconstructed images.
3.3 Viewpoint Estimation

This experiment is conducted to evaluate the performance of viewpoint estimation. MVP is compared to Linear Regression (LR) and Support Vector Regression (SVR), both of which have been used in viewpoint estimation, e.g. [8, 13]. Similarly, we employ the first setting as introduced in Sec. 3.1, implying that we train the models using images of a set of identities, and then estimate the poses of the images of the remaining identities. For training LR and SVR, the features are obtained by applying PCA on the raw image pixels. Fig. 5 reports the view estimation errors, which are measured by the differences between the pose degrees of the ground truth and the predicted degrees. The averaged errors of MVP, LR, and SVR are 5.03°, 9.79°, and 5.45°, respectively. MVP achieves slightly better results compared to the discriminative model, i.e. SVR, demonstrating that it is also capable of view estimation, even though it is not designated for this task.

Figure 5: Errors of view estimation (in degrees) for LR, SVR, and MVP, plotted over viewpoints from −45° to +45°.

Figure 6: We adopt the images in 0°, 30°, and 60° for training, and test whether MVP can analyze and reconstruct images under 15° and 45°. The reconstructed images (left) and the ground truths (right) are shown in (a). (b) visualizes the full spectrum of the reconstructed images, when the images in unobserved views are used as inputs (first column).
3.4 Viewpoint Interpolation

When the viewpoint is modeled as a continuous variable as described in Sec. 2.1, MVP implicitly captures a 3D face model, such that it can analyze and reconstruct images under viewpoints that have not been seen before, while this cannot be achieved with MTL. In order to verify such capability, we conduct two tests. First, we adopt the images from MultiPIE in 0°, 30°, and 60° for training, and test whether MVP can generate images under 15° and 45°. For each testing identity, the result is obtained by using the image in 0° as input and reconstructing images in 15° and 45°. Several synthesized images (left) compared with the ground truth (right) are visualized in Fig. 6 (a). Although the interpolated images have noise and blurring effects, they have similar views as the ground truth and, more importantly, the identity information is preserved. Second, under the same training setting as above, we further examine, when the images of the testing identities in 15° and 45° are employed as inputs, whether MVP can still generate a full spectrum of multi-view images and preserve identity information in the meanwhile. The results are illustrated in Fig. 6 (b), where the first image is the input and the remaining are the reconstructed images in 0°, 30°, and 60°.

These two experiments show that MVP essentially models a continuous space of multi-view images, such that, first, it can predict images in unobserved views, and second, given an image under an unseen viewpoint, it can correctly extract identity information and then produce a full spectrum of multi-view images. In some sense, it performs multi-view reasoning, which is an intriguing function of the human brain.
4 Conclusions

In this paper, we have presented a generative deep network, called the Multi-View Perceptron (MVP), to mimic the ability of multi-view perception in the primate brain. MVP can disentangle the identity and view representations from an input image, and can also generate a full spectrum of views of the input image. Experiments demonstrated that the identity features of MVP achieve better performance on face recognition compared to state-of-the-art methods. We also showed that modeling the view factor as a continuous variable enables MVP to interpolate and predict images under viewpoints that are not observed in the training data, imitating the reasoning capacity of humans.
Acknowledgement. This work is partly supported by the Natural Science Foundation of China (91320101, 61472410), the Shenzhen Basic Research Program (JCYJ20120903092050890, JCYJ20120617114614438, JCYJ20130402113127496), and the Guangdong Innovative Research Team Program (201001D0104648280).
References
[1] T. Ahonen, A. Hadid, and M. Pietikainen. Face description with local binary patterns: Application to face recognition. TPAMI, 28:2037–2041, 2006.
[2] A. Asthana, T. K. Marks, M. J. Jones, K. H. Tieu, and M. Rohith. Fully automatic pose-invariant face recognition via 3D pose normalization. In ICCV, 2011.
[3] Z. Cao, Q. Yin, X. Tang, and J. Sun. Face recognition with learning-based descriptor. In CVPR,
2010.
[4] D. Chen, X. Cao, F. Wen, and J. Sun. Blessing of dimensionality: High-dimensional feature
and its efficient compression for face verification. In CVPR, 2013.
[5] W. A. Freiwald and D. Y. Tsao. Functional compartmentalization and viewpoint generalization within the macaque face-processing system. Science, 330(6005):845–851, 2010.
[6] D. González-Jiménez and J. L. Alba-Castro. Toward pose-invariant 2-D face recognition through point distribution models and facial symmetry. IEEE Transactions on Information Forensics and Security, 2:413–429, 2007.
[7] R. Gross, I. Matthews, J. F. Cohn, T. Kanade, and S. Baker. Multi-PIE. In Image and Vision
Computing, 2010.
[8] Y. Hu, L. Chen, Y. Zhou, and H. Zhang. Estimating face pose by facial asymmetry and
geometry. In AFGR, 2004.
[9] G. B. Huang, H. Lee, and E. Learned-Miller. Learning hierarchical representations for face
verification with convolutional deep belief networks. In CVPR, 2012.
[10] S. Kullback and R. A. Leibler. On information and sufficiency. In Annals of Mathematical
Statistics, 1951.
[11] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[12] S. Li, X. Liu, X. Chai, H. Zhang, S. Lao, and S. Shan. Morphable displacement field based
image matching for face recognition across pose. In ECCV, 2012.
[13] Y. Li, S. Gong, and H. Liddell. Support vector regression and classification based multi-view
face detection and recognition. In AFGR, 2000.
[14] C. Liu and H. Wechsler. Gabor feature based classification using the enhanced fisher linear discriminant model for face recognition. TIP, 11:467–476, 2002.
[15] D. G. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60:91–110, 2004.
[16] P. Luo, X. Wang, and X. Tang. Hierarchical face parsing via deep learning. In CVPR, 2012.
[17] P. Luo, X. Wang, and X. Tang. A deep sum-product architecture for robust facial attributes
analysis. In ICCV, 2013.
[18] V. Nair and G. E. Hinton. Rectified linear units improve restricted boltzmann machines. In
ICML, 2010.
[19] S. Ohayon, W. A. Freiwald, and D. Y. Tsao. What makes a cell face selective? the importance of contrast. Neuron, 74:567–581, 2013.
[20] S. Reed, K. Sohn, Y. Zhang, and H. Lee. Learning to disentangle factors of variation with
manifold interaction. In ICML, 2014.
[21] K. Simonyan, O. M. Parkhi, A. Vedaldi, and A. Zisserman. Fisher vector faces in the wild. In
BMVC, 2013.
[22] Y. Sun, X. Wang, and X. Tang. Hybrid deep learning for face verification. In ICCV, 2013.
[23] Y. Sun, X. Wang, and X. Tang. Deep convolutional network cascade for facial point detection.
In CVPR, 2013.
[24] Y. Sun, Y. Chen, X. Wang, and X. Tang. Deep learning face representation by joint
identification-verification. In NIPS, 2014.
[25] Y. Sun, X. Wang, and X. Tang. Deep learning face representation from predicting 10,000
classes. In CVPR, 2014.
[26] Y. Taigman, M. Yang, M. A. Ranzato, and L. Wolf. Deepface: Closing the gap to human-level
performance in face verification. In CVPR, 2014.
[27] Y. Tang and R. Salakhutdinov. Learning stochastic feedforward neural networks. In NIPS,
2013.
[28] Z. Zhu, P. Luo, X. Wang, and X. Tang. Deep learning identity preserving face space. In ICCV,
2013.
Extraction
Xiaolong Wang1,4 , Liliang Zhang1 , Liang Lin1,3?, Zhujin Liang1 , Wangmeng Zuo2
1
Sun Yat-sen University, Guangzhou 510006, China
2
School of Computer Science and Technology, Harbin Institute of Technology, China
3
SYSU-CMU Shunde International Joint Research Institute, Shunde, China
4
The Robotics Institute, Carnegie Mellon University, Pittsburgh, U.S.
[email protected], [email protected]
Abstract
This paper investigates how to extract objects-of-interest without relying on hand-crafted features and sliding-window approaches, aiming to jointly solve two subtasks: (i) rapidly localizing salient objects from images, and (ii) accurately segmenting the objects based on the localizations. We present a general joint task
learning framework, in which each task (either object localization or object segmentation) is tackled via a multi-layer convolutional neural network, and the two
networks work collaboratively to boost performance. In particular, we propose to
incorporate latent variables bridging the two networks in a joint optimization manner. The first network directly predicts the positions and scales of salient objects
from raw images, and the latent variables adjust the object localizations to feed
the second network that produces pixelwise object masks. An EM-type method is
presented for the optimization, iterating with two steps: (i) by using the two networks, it estimates the latent variables by employing an MCMC-based sampling
method; (ii) it optimizes the parameters of the two networks unitedly via back
propagation, with the fixed latent variables. Extensive experiments suggest that
our framework significantly outperforms other state-of-the-art approaches in both
accuracy and efficiency (e.g. 1000 times faster than competing approaches).
1 Introduction
A typical vision problem usually comprises several subproblems, which tend to be tackled jointly to achieve superior capability. In this paper, we focus on a general joint task learning framework based on deep neural networks, and demonstrate its effectiveness and efficiency on generic (i.e., category-independent) object extraction.

Generally speaking, object extraction comprises two sequential subtasks: rapidly localizing the objects-of-interest from images and further generating segmentation masks based on the localizations. Despite acknowledged progress, previous approaches often tackle these two tasks independently, and most of them apply sliding windows over all image locations and scales [17, 22], which could limit their performance. Recently, several works [33, 18, 5] utilized the interdependencies of
object localization and segmentation, and showed promising results. For example, Yang et al. [33]
introduced a joint framework for object segmentation, in which the segmentation benefits from the object detectors and the object detections are consistent with the underlying segmentation of the
∗ Corresponding author is Liang Lin. This work was supported by the National Natural Science Foundation of China (no. 61173082), the Hi-Tech Research and Development Program of China (no. 2012AA011504), the Guangdong Science and Technology Program (no. 2012B031500006), the Special Project on Integration of Industry, Education and Research of Guangdong (no. 2012B091000101), and the Fundamental Research Funds for the Central Universities (no. 14lgjc11).
image. However, these methods still rely on exhaustive search to localize objects. On the other hand, deep learning methods have achieved superior capabilities in classification [21, 19, 23] and representation learning [4], and they also demonstrate good potential on several complex vision
tasks [29, 30, 20, 25]. Motivated by these works, we build a deep learning architecture to jointly
solve the two subtasks in object extraction, in which each task (either object localization or object
segmentation) is tackled by a multi-layer convolutional neural network. Specifically, the first network (i.e., localization network) directly predicts the positions and scales of salient objects from raw
images, upon which the second network (i.e., segmentation network) generates the pixelwise object
masks.
Figure 1: Motivation of introducing latent variables in object extraction. Treating predicted object localizations (the dashed red boxes) as the inputs for segmentation may lead to unsatisfactory segmentation results, and we can make improvements by enlarging or shrinking the localizations (the solid blue boxes) with the latent variables. Two examples are shown in (a) and (b), respectively; each compares the groundtruth mask with the segmentation results.
Rather than being simply stacked up, the two networks are collaboratively integrated with latent variables to boost performance. In general, the two networks optimized for different tasks might have
inconsistent interests. For example, the object localizations predicted by the first network probably
indicate incomplete object (foreground) regions or include a lot of background, which may lead
to unsatisfactory pixelwise segmentation. This observation is well illustrated in Fig. 1, where we
can obtain better segmentation results through enlarging or shrinking the input object localizations
(denoted by the bounding boxes). To overcome this problem, we propose to incorporate the latent
variables between the two networks explicitly indicating the adjustments of object localizations,
and jointly optimize them with learning the parameters of networks. It is worth mentioning that our
framework can be generally extended to other applications of joint tasks in similar ways. For concise
description, we use the term ?segmentation reference? to represent the predicted object localization
plus the adjustment in the following.
For the framework training, we present an EM-type algorithm, which alternately estimates the latent variables and learns the network parameters. The latent variables are treated as intermediate auxiliary variables during training: we search for the optimal segmentation reference, and back-tune the two networks accordingly. The latent variable estimation is, however, non-trivial in this work, as it is intractable to analytically model the distribution of the segmentation reference. To avoid exhaustive enumeration, we design a data-driven MCMC method to effectively sample the latent variables, inspired by [24, 31]. In sum, we conduct the training algorithm iterating with two steps: (i) Fixing the
network parameters, we estimate the latent variables and determine the optimal segmentation reference under the sampling method. (ii) Fixing the segmentation reference, the segmentation network
can be tuned according to the pixelwise segmentation errors, while the localization network tuned
by taking the adjustment of object localizations into account.
2 Related Works
Extracting pixelwise objects-of-interest from an image, our work is related to salient region/object detection [26, 9, 10, 32]. These methods mainly focused on feature engineering and
graph-based segmentation. For example, Cheng et al. [9] proposed a regional contrast based saliency
extraction algorithm and further segmented objects by applying an iterative version of GrabCut.
Some approaches [22, 27] trained object appearance models and utilized spatial or geometric priors
to address this task. Kuettel et al. [22] proposed to transfer segmentation masks from training data
2
into testing images by searching and matching visually similar objects within the sliding windows.
Other related approaches [28, 7] simultaneously processed a batch of images for object discovery
and co-segmentation, but they often required category information as priors.
Recently resurgent deep learning methods have also been applied in object detection and image
segmentation [30, 14, 29, 20, 11, 16, 2, 25]. Among these works, Sermanet et al. [29] detected
objects by training category-level convolutional neural networks. Ouyang et al. [25] proposed to
combine multiple components (e.g., feature extraction, occlusion handling, and classification) within
a deep architecture for human detection. Huang et al. [20] presented the multiscale recursive neural
networks for robust image segmentation. These mentioned methods generally achieved impressive
performances, but they usually rely on sliding detection windows over the scales and positions of testing images. Very recently, Erhan et al. [14] adopted neural networks to recognize object categories while predicting potential object localizations without exhaustive enumeration. This work inspired us to design the first network to localize objects. To the best of our knowledge, our framework is the first to make the different tasks collaboratively optimized by introducing latent variables together with network parameter learning.
3 Deep Model
In this section, we introduce a joint deep model for object extraction (i.e., extracting the segmentation mask for a salient object in the image). Our model is presented as comprising two convolutional neural networks: the localization network and the segmentation network, as shown in Fig. 2. Given an image as input, our first network generates a 4-dimensional output, which specifies the salient object bounding box (i.e., the object localization). With the localization result, our segmentation network extracts an m × m binary mask for segmentation in its last layer. Both of these networks are stacked up from convolutional layers, max-pooling operators and full connection layers. In the following, we introduce the detailed definitions of these two networks.
Figure 2: The architecture of our joint deep model. It is stacked up by two convolutional neural networks: the localization network and the segmentation network. Given an image, we can generate its object bounding box and further extract its segmentation mask accordingly. (The localization network maps a 224×224×3 image through five convolution layers and three full connection layers to 4 outputs; the segmentation network maps the cropped and resized 55×55×3 region through four convolution layers and one full connection layer to 50×50 outputs.)
Localization Network. The architecture of the localization network contains eight layers: five convolutional layers and three full connection layers. For the parameter settings of the first seven layers, we refer to the network used by Krizhevsky et al. [21]. It takes an image with size 224 × 224 × 3 as input, where the dimensions represent height, width and channel numbers. The eighth layer of the network contains 4 output neurons, indicating the coordinates (x1, y1, x2, y2) of a salient object bounding box. Note that the coordinates are normalized w.r.t. the image dimensions into the range of 0 ∼ 224.
Segmentation Network. Our segmentation network is a five-layer neural network with four convolutional layers and one full connection layer. To simplify the description, we denote C(k, h × w × c) as a convolutional layer, where k represents the number of kernels, and h, w, c represent the height, width and channel numbers of each kernel. We also denote FC as a full connection layer, RN as a response normalization layer, and MP as a max-pooling layer. The size of the max-pooling operator is set as 3 × 3 and the stride for pooling is 2. Then the network architecture can be described as: C(256, 5 × 5 × 3) − RN − MP − C(384, 3 × 3 × 256) − C(384, 3 × 3 × 384) − C(256, 3 × 3 × 384) − MP − FC. Taking an image with size 55 × 55 × 3 as input, the segmentation network generates a binary mask with size 50 × 50 as the output of its full connection layer.
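As an illustration, the segmentation network above can be written down in a few lines of PyTorch. The kernel sizes, channel counts, pooling sizes and the 50 × 50 output follow the specification in this section, while the paddings, the ReLU nonlinearities, the sigmoid output and the response-normalization parameters are our assumptions (the paper does not list them). With "same" convolutions and 3 × 3 pooling at stride 2, a 55 × 55 input yields 13 × 13 feature maps before the full connection layer.

```python
import torch
import torch.nn as nn

class SegmentationNet(nn.Module):
    """Sketch of C(256,5x5x3)-RN-MP-C(384,3x3x256)-C(384,3x3x384)-C(256,3x3x384)-MP-FC.
    Paddings, activations, and RN parameters are assumptions, not the authors' exact setup."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 256, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.LocalResponseNorm(size=5),                   # RN (parameters assumed)
            nn.MaxPool2d(kernel_size=3, stride=2),          # MP: 55 -> 27
            nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),          # MP: 27 -> 13
        )
        self.fc = nn.Linear(256 * 13 * 13, 50 * 50)         # FC to the 50x50 mask

    def forward(self, x):                                   # x: (B, 3, 55, 55)
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.fc(h))                    # (B, 2500) per-pixel probs
```

The localization network would analogously reuse the Krizhevsky-style convolutional stack with three full connection layers ending in 4 outputs.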
We then introduce the inference process for object extraction. Formally, we define the parameters of the localization network and the segmentation network as θ^l and θ^s, respectively. Given an input image I_i, we first resize it to 224 × 224 × 3 as the input for the localization network. Then the output of this network via forward propagation is represented as F_{θ^l}(I_i), which indicates a 4-dimensional vector b_i for the salient object bounding box. We crop the image data for the salient object according to b_i, and resize it to 55 × 55 × 3 as the input for the segmentation network. By performing forward propagation, the output of the segmentation network is represented as F_{θ^s}(I_i, b_i), which is a vector with 50 × 50 = 2500 dimensions, indicating the binary segmentation result for object extraction.
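Putting the two forward passes together, a hedged sketch of the inference pipeline might look as follows; the interpolation mode, the box clipping, and the helper name are our assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def extract_object(image, loc_net, seg_net):
    """Sketch: image -> bounding box (localization net) -> 50x50 binary mask
    (segmentation net).  `image` is a (3, H, W) float tensor."""
    x = F.interpolate(image[None], size=(224, 224), mode='bilinear',
                      align_corners=False)
    x1, y1, x2, y2 = loc_net(x)[0].clamp(0, 224).tolist()    # coords in [0, 224]
    H, W = image.shape[1], image.shape[2]
    sx, sy = W / 224.0, H / 224.0                            # map box to original size
    crop = image[:, int(y1 * sy):max(int(y2 * sy), int(y1 * sy) + 1),
                    int(x1 * sx):max(int(x2 * sx), int(x1 * sx) + 1)]
    crop = F.interpolate(crop[None], size=(55, 55), mode='bilinear',
                         align_corners=False)
    mask = seg_net(crop)[0].view(50, 50)
    return mask > 0.5                                        # threshold of 0.5 (Sec. 5)
```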
4 Learning Algorithm
We propose a joint deep learning approach to estimate the parameters of the two networks. As the object bounding boxes indicated by the groundtruth object masks might not provide the best references for segmentation, we embed this domain-specific prior as latent variables in learning. We adjust the object bounding boxes via the latent variables to mine optimal segmentation references for training. For optimization, an EM-type algorithm is proposed to iteratively estimate the latent variables and the model parameters.
4.1 Optimization Formulation
Suppose a set of N training images is I = {I_1, ..., I_N}, and the segmentation masks for the salient objects in them are Y = {Y_1, ..., Y_N}. For each Y_i, we use Y_ij to represent its jth pixel; Y_ij = 1 indicates the foreground, while Y_ij = 0 indicates the background. According to the given object masks Y, we can obtain the object bounding boxes tightly around them as L = {L_1, ..., L_N}, where L_i is a 4-dimensional vector representing the coordinates (x1, y1, x2, y2). For each sample, we introduce a latent variable ∆L_i as the adjustment for L_i. We name the adjusted bounding box the segmentation reference, which is represented as L̃_i = L_i + ∆L_i. The learning objective is defined as maximizing the probability:

P(θ^l, θ^s, L̃ | Y, I) = P(θ^l, L̃ | Y, I) · P(θ^s, L̃ | Y, I),    (1)

where we need to jointly optimize the model parameters θ^l, θ^s and the segmentation references L̃ = {L̃_1, ..., L̃_N} indicated by the latent variables. The probability P(θ^l, θ^s, L̃ | Y, I) can be decomposed into the probability for the localization network, P(θ^l, L̃ | Y, I), and the one for the segmentation network, P(θ^s, L̃ | Y, I).
For the localization network, we optimize the model parameters by minimizing the Euclidean distance between the output F_{θ^l}(I_i) and the segmentation reference L̃_i = L_i + ∆L_i. Thus the probability for θ^l and L̃ can be represented as

P(θ^l, L̃ | Y, I) = (1/Z) exp( − Σ_{i=1}^N || F_{θ^l}(I_i) − L_i − ∆L_i ||_2^2 ),    (2)

where Z is a normalization term.
For the segmentation network, we specify each neuron of the last layer as a binary classification output. Then the parameters θ^s are estimated via logistic regression,

P(θ^s, L̃ | Y, I) = Π_{i=1}^N ( Π_{j : Y_ij = 1} F^j_{θ^s}(I_i, L_i + ∆L_i) · Π_{j : Y_ij = 0} (1 − F^j_{θ^s}(I_i, L_i + ∆L_i)) ),    (3)

where F^j_{θ^s}(I_i, L_i + ∆L_i) is the jth element of the network output, given image I_i and segmentation reference L_i + ∆L_i as input.
To optimize the model parameters and latent variables, maximizing the probability P(θ^l, θ^s, L̃ | Y, I) is equivalent to minimizing the cost

J(θ^l, θ^s, L̃) = − (1/N) log P(θ^l, θ^s, L̃ | Y, I)    (4)
  = (1/N) Σ_{i=1}^N [ || F_{θ^l}(I_i) − L_i − ∆L_i ||_2^2    (5)
      − Σ_j ( Y_ij log F^j_{θ^s}(I_i, L_i + ∆L_i) + (1 − Y_ij) log(1 − F^j_{θ^s}(I_i, L_i + ∆L_i)) ) ],    (6)

where the first term (5) represents the cost for localization network training and the second term (6) is the cost for segmentation network training.
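For concreteness, the cost of Eqs. (4)–(6) can be written as a single differentiable loss. This PyTorch sketch assumes the segmentation network emits sigmoid probabilities and that ∆L is held fixed, as in step (ii) of the iterative algorithm of Sec. 4.2; the function name and the numerical clamp are ours.

```python
import torch

def joint_cost(loc_out, seg_out, L, dL, Y):
    """Eqs. (4)-(6) as one loss: squared localization error against the
    segmentation reference L + dL, plus a pixelwise logistic loss.
    Shapes: loc_out, L, dL: (N, 4); seg_out, Y: (N, 2500)."""
    loc_loss = ((loc_out - (L + dL)) ** 2).sum(dim=1)               # term (5)
    eps = 1e-7
    seg_out = seg_out.clamp(eps, 1 - eps)                           # numerical guard
    seg_loss = -(Y * seg_out.log()
                 + (1 - Y) * (1 - seg_out).log()).sum(dim=1)        # term (6)
    return (loc_loss + seg_loss).mean()                             # (1/N) sum_i [...]
```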
4.2 Iterative Joint Optimization
We propose an EM-type algorithm to optimize the learning cost J(θ^l, θ^s, L̃). As Fig. 3 illustrates, it includes two iterative steps: (i) fixing the model parameters, apply MCMC-based sampling to estimate the latent variables, which indicate the segmentation references L̃; (ii) given the segmentation references, compute the model parameters of the two networks jointly via back propagation. We explain these two steps as follows.
Figure 3: The EM-type learning algorithm with two steps: (i) K moves of MCMC sampling (gray arrows), where the latent variable ∆L_i is sampled considering both the localization cost (indicated by the dashed gray arrow, via square-error minimization against the localization network) and the segmentation cost (via logistic regression on the segmentation network); (ii) given the selected segmentation reference and segmentation result after K moves of sampling, we apply back propagation (blue arrows) to estimate the parameters of both networks.
(i) Latent variable estimation. Given a training image I_i and the current model parameters, we estimate the latent variable ∆L_i. As there are no groundtruth labels for the latent variables, it is intractable to estimate their distribution. It is also time-consuming to enumerate ∆L_i for evaluation, given the large search space. Thus we propose an MCMC Metropolis-Hastings method [24] for latent variable sampling, which proceeds in K moves. In each step, a new latent variable is sampled from the proposal distribution and is accepted with an acceptance rate. For fast and effective search, we design the proposal distribution with a data-driven term, based on the fact that segmentation boundaries are often aligned with the boundaries of superpixels [1] generated by over-segmentation.

We first initialize the latent variable as ∆L_i = 0. To find a better latent variable ∆L'_i and achieve a reversible transition, we define the acceptance rate of the transition from ∆L_i to ∆L'_i as

α(∆L_i → ∆L'_i) = min( 1, [π(∆L'_i) · q(∆L'_i → ∆L_i)] / [π(∆L_i) · q(∆L_i → ∆L'_i)] ),    (7)

where π(∆L_i) is the invariant distribution and q(∆L_i → ∆L'_i) is the proposal distribution.
By replacing the dataset with a single sample in Eq. (1), we define the invariant distribution as π(∆L_i) = P(θ^l, θ^s, L̃_i | Y_i, I_i), which can be decomposed into two probabilities: P(θ^l, L̃_i | Y_i, I_i) constrains the segmentation reference to be close to the output of the localization network, while P(θ^s, L̃_i | Y_i, I_i) encourages a segmentation reference contributing to a better segmentation mask. To calculate these probabilities, we need to perform forward propagation in both networks.

The proposal distribution is defined as a combination of a Gaussian distribution and a data-driven term,

q(∆L_i → ∆L'_i) = N(∆L'_i | µ_i, Σ_i) · P_c(∆L'_i | Y_i, I_i),    (8)

where µ_i and Σ_i are the mean vector and covariance matrix of the optimal ∆L'_i in the previous iterations. This is based on the observation that the current optimal ∆L'_i has a high possibility of having been selected before. The data-driven term P_c(∆L'_i | Y_i, I_i) is computed depending on the given
image I_i. After over-segmenting I_i into superpixels, we define v_j = 1 if the jth image pixel is on the boundary of a superpixel and v_j = 0 if it is inside a superpixel. We then sample c pixels at equal distances along the segmentation reference L̃'_i = L_i + ∆L'_i; the data-driven term is then represented as P_c(∆L' | Y, I) = (1/c) Σ_{j=1}^c v_j. Thus we encourage avoiding cutting through possible foreground superpixels with the bounding box edges, which leads to more plausible proposals. We set c = 200 in our experiments, and we only need to perform the over-segmentation into superpixels once, as preprocessing for training.
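A minimal sketch of this sampler is given below: a generic Metropolis–Hastings move implementing Eq. (7), plus the data-driven term P_c computed by walking the box perimeter. The densities π and q are passed in as callables because evaluating them requires forward passes through both networks; the helper names, the integer-pixel perimeter walk, and the numerical guard are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mh_sample_dL(pi, q_sample, q_density, dL, K=20):
    """K Metropolis-Hastings moves over the latent adjustment dL (Eq. (7)).
    pi(dL): invariant density; q_sample(dL): draw dL' ~ q(dL -> .);
    q_density(a, b): evaluate q(a -> b)."""
    for _ in range(K):
        dL_new = q_sample(dL)
        alpha = min(1.0, (pi(dL_new) * q_density(dL_new, dL)) /
                         max(pi(dL) * q_density(dL, dL_new), 1e-12))
        if rng.uniform() < alpha:        # accept the transition with rate alpha
            dL = dL_new
    return dL

def data_driven_term(box, boundary_map, c=200):
    """P_c(dL'|Y, I): fraction of c points, taken at equal spacing along the
    adjusted box perimeter, that fall on superpixel boundaries (v_j = 1).
    boundary_map is an (H, W) 0/1 array from the over-segmentation."""
    x1, y1, x2, y2 = [int(round(t)) for t in box]   # assumes x2 > x1, y2 > y1
    perimeter = ([(x, y1) for x in range(x1, x2)] +
                 [(x2, y) for y in range(y1, y2)] +
                 [(x, y2) for x in range(x2, x1, -1)] +
                 [(x1, y) for y in range(y2, y1, -1)])
    idx = np.linspace(0, len(perimeter) - 1, num=c).astype(int)
    return float(np.mean([boundary_map[y, x] for (x, y) in (perimeter[i] for i in idx)]))
```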
(ii) Model parameter estimation. As shown in Fig. 3, given the optimal latent variables ∆L after K moves of sampling, we can obtain the corresponding segmentation references L̃ and the segmentation results. Then the parameters of the segmentation network θ^s are optimized via back propagation with logistic regression (the second term (6) of Eq. (4)), and the parameters of the localization network θ^l are tuned by minimizing the square error between the segmentation references and the localization output (the first term (5) of Eq. (4)).

During back propagation, we apply stochastic gradient descent to update the model parameters. For the segmentation network, we use an equal learning rate λ_1 for all layers. For localization, we first pre-train the network discriminatively for classifying the 1000 object categories of the ImageNet dataset [12]. With this pre-training, we can borrow the information learned from a large dataset to improve our performance. We maintain the parameters of the convolutional layers and reset the parameters of the full connection layers randomly as initialization. The learning rate is set as λ_2 for the full connection layers and λ_2/100 for the convolutional layers.
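The outer loop then alternates the two steps. Below is a compact, deliberately abstract sketch; both step callables are hypothetical placeholders for the machinery above (the MH sampler of step (i) and the joint back propagation of step (ii)).

```python
import numpy as np

def em_train(samples, sample_dL, update_networks, T=100):
    """EM-type training loop of Sec. 4.2: step (i) re-estimates each latent
    adjustment dL_i with K MH moves; step (ii) back-propagates the joint cost
    with the segmentation references fixed."""
    dLs = [np.zeros(4) for _ in samples]                         # initialize dL_i = 0
    for _ in range(T):
        dLs = [sample_dL(s, dL) for s, dL in zip(samples, dLs)]  # step (i)
        update_networks(samples, dLs)                            # step (ii)
    return dLs
```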
5 Experiment
We validate our approach on the Saliency dataset [9, 8] and a more challenging dataset newly collected by us, namely the Object Extraction (OE) dataset.1 We compare our approach with state-of-the-art
methods and empirical analyses are also presented in the experiment.
The Saliency dataset is a combination of THUR15000 [8] and THUS10000 [9] datasets, which
includes 16233 images with pixelwise groundtruth masks. Most of the images contain one salient
object, and we do not utilize the category information in training and testing. We randomly split the
dataset into 14233 images for training and 2000 images for testing. The OE dataset collected by us
is more comprehensive, including 10183 images with groundtruth masks. We select the images from
the PASCAL [15], iCoseg [3], Internet [28] datasets as well as other data (most of them are about
people and clothes) from the web. Compared to traditional segmentation datasets, the salient objects in the OE dataset are more variant in appearance and shape (or pose), and they often appear in complicated scenes with background clutter. For the evaluation on the OE dataset, 8230 samples
are randomly selected for training and the remaining 1953 ones are applied in testing.
Experiment Settings. During training, the domain of each element of the 4-dimensional latent variable vector ∆L_i is set to {−10, −5, 0, 5, 10}; thus there are 5^4 = 625 possible proposals for each ∆L_i. We set the number of MCMC sampling moves as K = 20 during searching. The learning rate is λ_1 = 1.0 × 10^{−6} for the segmentation network and λ_2 = 1.0 × 10^{−8} for the localization network. For testing, as each pixelwise output of our method is well discriminated towards 1 or 0, we simply classify it as foreground or background by setting a threshold of 0.5. The experiments
are performed on a desktop with an Intel I7 3.7GHz CPU, 16GB RAM and GTX TITAN GPU.
5.1 Results and Comparisons
We now quantitatively evaluate the performance of our method. For evaluation metrics, we adopt the Precision, P (the average fraction of pixels which are correctly labeled, in both foreground and background), and the Jaccard similarity, J (the average intersection-over-union score |S ∩ G| / |S ∪ G|, where S is the set of foreground pixels obtained via our algorithm and G is the set of groundtruth foreground pixels). We then compare the results of our approach with machine-learning-based methods such as figure-ground segmentation [22], CPMC [6] and Object Proposals [13]. As CPMC and Object Proposals generate multiple ranked segments intended to cover objects, we follow the process applied in [22]
to evaluate its result. We use the union of the top K ranked segments as salient object prediction.
1 http://vision.sysu.edu.cn/projects/deep-joint-task-learning/
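For reference, the two metrics can be computed per image as follows; treating Precision as overall pixel accuracy matches the definition given above, and the function name and toy check are ours.

```python
import numpy as np

def precision_and_jaccard(S, G):
    """Sec. 5.1 metrics for one image. S and G are boolean masks (predicted and
    groundtruth foreground). P = fraction of correctly labeled pixels over
    foreground and background; J = |S intersect G| / |S union G|."""
    S, G = np.asarray(S, bool), np.asarray(G, bool)
    p = np.mean(S == G)                               # pixelwise accuracy
    union = np.logical_or(S, G).sum()
    j = np.logical_and(S, G).sum() / union if union else 1.0
    return 100.0 * p, 100.0 * j

# toy check: a 2x2 prediction against a 2x3 groundtruth region
S = np.zeros((4, 4), bool); S[1:3, 1:3] = True
G = np.zeros((4, 4), bool); G[1:3, 1:4] = True
print(precision_and_jaccard(S, G))   # -> (87.5, 66.66...)
```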
Table 1: The evaluation on the Saliency dataset with Precision (P) and Jaccard similarity (J). Ours(full) indicates our joint learning method and Ours(sim) means learning the two networks separately.

  | Ours(full) | Ours(sim) | FgSeg [22] | CPMC [6] | ObjProp [13] | HS [32] | GC [10] | RC [9] | HC [9]
P | 97.81      | 96.62     | 91.92      | 83.64    | 72.60        | 89.99   | 89.23   | 90.16  | 89.24
J | 87.02      | 81.10     | 70.85      | 56.14    | 54.12        | 64.72   | 58.30   | 63.69  | 58.42
Table 2: The evaluation on the OE dataset with Precision (P) and Jaccard similarity (J). Ours(full) indicates our joint learning method and Ours(sim) means learning the two networks separately.

  | Ours(full) | Ours(sim) | FgSeg [22] | CPMC [6] | ObjProp [13] | HS [32] | GC [10] | RC [9] | HC [9]
P | 93.12      | 91.25     | 90.42      | 76.33    | 72.14        | 87.42   | 85.53   | 86.25  | 83.37
J | 77.69      | 71.50     | 70.93      | 53.76    | 54.70        | 62.83   | 54.83   | 59.34  | 50.61
We evaluate the performance for all K ∈ {1, ..., 100} and report the best result for each sample in our experiment. Besides the machine-learning-based methods, we also report the results of salient region detection methods [10, 32, 9]. Note that there are two approaches mentioned in [9], utilizing histogram-based contrast (HC) and region-based contrast (RC). Given the saliency maps from these methods, an iterative GrabCut proposed in [9] is utilized to generate binary segmentation results.

Saliency dataset. We report the experiment results on this dataset in Table 1. Our result with joint task learning (namely Ours(full)) reaches 97.81% in Precision (P) and 87.02% in Jaccard similarity (J). Compared to the figure-ground segmentation method [22], we obtain improvements of 5.89% in P and 16.17% in J. For the saliency region detection methods, the best results are P: 89.99% and J: 64.72%, from [32]. Our method demonstrates superior performance compared to these approaches.

OE dataset. The evaluation of our method on the OE dataset is shown in Table 2. By jointly learning the localization and segmentation networks, our approach, with 93.12% in P and 77.69% in J, achieves the highest performance compared to the state-of-the-art methods.

One spotlight of our work is its high efficiency in testing. As Table 3 illustrates, the average time for object extraction from an image with our method is 0.014 seconds, while figure-ground segmentation [22] requires 94.3 seconds, CPMC [6] requires 59.6 seconds and Object Proposals [13] requires 37.4 seconds. For most of the saliency region detection methods, the runtime is dominated by the iterative GrabCut process; thus we report its time as the average testing time for the saliency region detection methods, which is 0.711 seconds. As a result, our approach is 50 ∼ 6000 times faster than the state-of-the-art methods.

During training, it requires around 20 hours for convergence on the Saliency dataset and 13 hours for the OE dataset. For latent variable sampling, we also tried to exhaustively enumerate the 625 possible proposals for each image. This achieves similar accuracy as our approach while costing about 30 times the runtime in each iteration of training.
5.2 Empirical Analysis
For further evaluation, we conduct the following two empirical analyses to illustrate the effectiveness of our method.

(I) To clarify the significance of joint learning instead of learning the two networks separately, we discard the latent variable sampling and set all ∆L_i = 0 during training, namely Ours(sim). We illustrate the training cost J(θ^l, θ^s, L̃) (Eq. (4)) for these two methods in Fig. 4. We plot the average loss over all training samples through the training iterations, and it is shown that our joint learning
Table 3: Testing time per image. The saliency methods refer to the saliency region detection methods [32, 10, 9].

     | Ours(full) | FgSeg [22] | CPMC [6] | ObjProp [13] | Saliency methods
Time | 0.014s     | 94.3s      | 59.6s    | 37.4s        | 0.711s
Table 4: We compare our method with two object discovery and segmentation methods on the Internet dataset. We train our model with other data besides the images in the Internet dataset.

Method                 | Car P | Car J | Horse P | Horse J | Airplane P | Airplane J
Ours(full)             | 87.95 | 68.86 | 88.11   | 53.80   | 92.12      | 60.10
Chen et al. [7]        | 87.09 | 64.67 | 89.00   | 57.58   | 90.24      | 59.97
Rubinstein et al. [28] | 83.38 | 63.36 | 83.69   | 53.89   | 86.14      | 55.62
method can achieve lower costs than the one without latent variable adjustment. We also compare these two methods with Precision and Jaccard similarity on both datasets. As Table 1 illustrates, there are 1.19% and 5.92% improvements in P and J when we learn the two networks jointly on the Saliency dataset. For the OE dataset, joint learning performs 1.87% higher in P and 6.19% higher in J than learning the two networks separately, as shown in Table 2.

(II) We demonstrate that our method can be well generalized across different datasets. Given the OE dataset, we train our model with all the data except for the ones collected from the Internet dataset [28]. Then the newly trained model is applied for testing on the Internet dataset. We compare the performance of this deep model with two object discovery and co-segmentation methods [28, 7] on the Internet dataset. As Table 4 illustrates, our method achieves higher performance on the Car and Airplane classes, and a comparable result on the Horse class. Thus our model can be well generalized to handle other datasets which are not applied in training and achieves state-of-the-art performance. It is also worth mentioning that testing via the co-segmentation methods [28, 7] requires a few seconds per image, which is much slower than our approach at 0.014 seconds per image.
Figure 4: The training cost across iterations (training loss plotted against training iterations, comparing joint task learning with separate task learning). The cost is evaluated over all the training samples in each dataset: (a) Saliency dataset; (b) OE dataset.
6 Conclusion
This paper studies joint task learning via deep neural networks for generic object extraction, in which two networks work collaboratively to boost performance. Our joint deep model has been shown to handle realistic data from the internet well. More importantly, the approach extracts object segmentation masks very efficiently, with a speed 1000 times faster than competing state-of-the-art methods. The proposed framework can be extended to handle other joint tasks in similar ways.
References
[1] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Susstrunk, SLIC Superpixels
Compared to State-of-the-art Superpixel Methods, In IEEE Trans. Pattern Anal. Mach. Intell.,
34(11):2274-2282, 2012.
[2] J. M. Alvarez, T. Gevers, Y. LeCun, and A. M. Lopez, Road Scene Segmentation from a Single
Image, In ECCV, 2012.
[3] D. Batra, A. Kowdle, D. Parikh, J. Luo, and T. Chen, iCoseg: Interactive Co-segmentation with
Intelligent Scribble Guidance, In CVPR, 2010.
[4] Y. Bengio, A. Courville, and P. Vincent, Representation Learning: A Review and New Perspectives, In IEEE Trans. Pattern Anal. Mach. Intell., 35(8): 1798-1828, 2013.
[5] T. Brox, L. Bourdev, S. Maji, and J. Malik, Object Segmentation by Alignment of Poselet Activations to Image Contours, In CVPR, 2011.
[6] J. Carreira and C. Sminchisescu, Constrained Parametric Min-Cuts for Automatic Object Segmentation, In CVPR, 2010.
5,023 | 5,548 | Discriminative Unsupervised Feature Learning with
Convolutional Neural Networks
Alexey Dosovitskiy, Jost Tobias Springenberg, Martin Riedmiller and Thomas Brox
Department of Computer Science
University of Freiburg
79110, Freiburg im Breisgau, Germany
{dosovits,springj,riedmiller,brox}@cs.uni-freiburg.de
Abstract
Current methods for training convolutional neural networks depend on large
amounts of labeled samples for supervised training. In this paper we present an
approach for training a convolutional neural network using only unlabeled data.
We train the network to discriminate between a set of surrogate classes. Each
surrogate class is formed by applying a variety of transformations to a randomly
sampled “seed” image patch. We find that this simple feature learning algorithm
is surprisingly successful when applied to visual object recognition. The feature
representation learned by our algorithm achieves classification results matching
or outperforming the current state-of-the-art for unsupervised learning on several
popular datasets (STL-10, CIFAR-10, Caltech-101).
1 Introduction
Convolutional neural networks (CNNs) trained via backpropagation were recently shown to perform
well on image classification tasks with millions of training images and thousands of categories [1,
2]. The feature representation learned by these networks achieves state-of-the-art performance not
only on the classification task for which the network was trained, but also on various other visual
recognition tasks, for example: classification on Caltech-101 [2, 3], Caltech-256 [2] and the CaltechUCSD birds dataset [3]; scene recognition on the SUN-397 database [3]; detection on the PASCAL
VOC dataset [4]. This capability to generalize to new datasets makes supervised CNN training an
attractive approach for generic visual feature learning.
The downside of supervised training is the need for expensive labeling, as the amount of required
labeled samples grows quickly the larger the model gets. The large performance increase achieved
by methods based on the work of Krizhevsky et al. [1] was, for example, only possible due to
massive efforts on manually annotating millions of images. For this reason, unsupervised learning
, although currently underperforming, remains an appealing paradigm, since it can make use of
raw unlabeled images and videos. Furthermore, on vision tasks outside classification it is not even
certain whether training based on object class labels is advantageous. For example, unsupervised
feature learning is known to be beneficial for image restoration [5] and recent results show that it
outperforms supervised feature learning also on descriptor matching [6].
In this work we combine the power of a discriminative objective with the major advantage of unsupervised feature learning: cheap data acquisition. We introduce a novel training procedure for
convolutional neural networks that does not require any labeled data. It rather relies on an automatically generated surrogate task. The task is created by taking the idea of data augmentation, which is commonly used in supervised learning, to the extreme. Starting with trivial surrogate
classes consisting of one random image patch each, we augment the data by applying a random set
of transformations to each patch. Then we train a CNN to classify these surrogate classes. We refer
to this method as exemplar training of convolutional neural networks (Exemplar-CNN).
The feature representation learned by Exemplar-CNN is, by construction, discriminative and invariant to typical transformations. We confirm this both theoretically and empirically, showing that
this approach matches or outperforms all previous unsupervised feature learning methods on the
standard image classification benchmarks STL-10, CIFAR-10, and Caltech-101.
1.1 Related Work
Our approach is related to a large body of work on unsupervised learning of invariant features and
training of convolutional neural networks.
Convolutional training is commonly used in both supervised and unsupervised methods to utilize
the invariance of image statistics to translations (e.g. LeCun et al. [7], Kavukcuoglu et al. [8],
Krizhevsky et al. [1]). Similar to our approach the current surge of successful methods employing
convolutional neural networks for object recognition often rely on data augmentation to generate
additional training samples for their classification objective (e.g. Krizhevsky et al. [1], Zeiler and
Fergus [2]). While we share the architecture (a convolutional neural network) with these approaches,
our method does not rely on any labeled training data.
In unsupervised learning, several studies on learning invariant representations exist. Denoising autoencoders [9], for example, learn features that are robust to noise by trying to reconstruct data from
randomly perturbed input samples. Zou et al. [10] learn invariant features from video by enforcing
a temporal slowness constraint on the feature representation learned by a linear autoencoder. Sohn
and Lee [11] and Hui [12] learn features invariant to local image transformations. In contrast to our
discriminative approach, all these methods rely on directly modeling the input distribution and are
typically hard to use for jointly training multiple layers of a CNN.
The idea of learning features that are invariant to transformations has also been explored for supervised training of neural networks. The research most similar to ours is early work on tangent propagation [13] (and the related double backpropagation [14]) which aims to learn invariance to small
predefined transformations in a neural network by directly penalizing the derivative of the output
with respect to the magnitude of the transformations. In contrast, our algorithm does not regularize
the derivative explicitly. Thus it is less sensitive to the magnitude of the applied transformation.
This work is also loosely related to the use of unlabeled data for regularizing supervised algorithms,
for example self-training [15] or entropy regularization [16]. In contrast to these semi-supervised
methods, Exemplar-CNN training does not require any labeled data.
Finally, the idea of creating an auxiliary task in order to learn a good data representation was used
by Ahmed et al. [17], Collobert et al. [18].
2 Creating Surrogate Training Data
The input to the training procedure is a set of unlabeled images, which come from roughly the same
distribution as the images to which we later aim to apply the learned features. We randomly sample
N ∈ [50, 32000] patches of size 32×32 pixels from different images at varying positions and scales
forming the initial training set X = {x_1, …, x_N}. We are interested in patches containing objects
or parts of objects, hence we sample only from regions containing considerable gradients.
We define a family of transformations {T_α | α ∈ A} parameterized by vectors α ∈ A, where A is the set of all possible parameter vectors. Each transformation T_α is a composition of elementary transformations from the following list:
• translation: vertical or horizontal translation by a distance within 0.2 of the patch size;
• scaling: multiplication of the patch scale by a factor between 0.7 and 1.4;
• rotation: rotation of the image by an angle up to 20 degrees;
• contrast 1: multiply the projection of each patch pixel onto the principal components of the set of all pixels by a factor between 0.5 and 2 (factors are independent for each principal component and the same for all pixels within a patch);
• contrast 2: raise saturation and value (S and V components of the HSV color representation) of all pixels to a power between 0.25 and 4 (same for all pixels within a patch), multiply these values by a factor between 0.7 and 1.4, add to them a value between −0.1 and 0.1;
Figure 1: Exemplary patches sampled from
the STL unlabeled dataset which are later
augmented by various transformations to obtain surrogate data for the CNN training.
Figure 2: Several random transformations
applied to one of the patches extracted from
the STL unlabeled dataset. The original
(“seed”) patch is in the top left corner.
• color: add a value between −0.1 and 0.1 to the hue (H component of the HSV color representation) of all pixels in the patch (the same value is used for all pixels within a patch).
All numerical parameters of elementary transformations, when concatenated together, form a single
parameter vector α. For each initial patch x_i ∈ X we sample K ∈ [1, 300] random parameter vectors {α_i^1, …, α_i^K} and apply the corresponding transformations T_i = {T_{α_i^1}, …, T_{α_i^K}} to the patch x_i. This yields the set of its transformed versions S_{x_i} = T_i x_i = {T x_i | T ∈ T_i}. Afterwards
we subtract the mean of each pixel over the whole resulting dataset. We do not apply any other
preprocessing. Exemplary patches sampled from the STL-10 unlabeled dataset are shown in Fig. 1.
Examples of transformed versions of one patch are shown in Fig. 2.
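As a concrete illustration (our own sketch, not the authors' released code), the following NumPy/SciPy snippet shows how such surrogate classes could be generated. The parameter ranges follow the list above, but the helper names (random_transform, make_surrogate_dataset) are hypothetical; the patches are assumed grayscale with values in [0, 1], images are assumed larger than the patch size, and both the gradient-based sampling of seed regions and the HSV-based color transformations are omitted for brevity.

```python
import numpy as np
from scipy import ndimage

rng = np.random.RandomState(0)

def random_transform(patch):
    """One random composition of elementary transformations.
    Rotation, translation and a crude multiplicative contrast change are
    shown; scaling and the HSV-based color changes would be added analogously."""
    out = ndimage.rotate(patch, rng.uniform(-20, 20), reshape=False, mode='reflect')
    max_shift = 0.2 * patch.shape[0]
    out = ndimage.shift(out, (rng.uniform(-max_shift, max_shift),
                              rng.uniform(-max_shift, max_shift)), mode='reflect')
    return np.clip(out * rng.uniform(0.7, 1.4), 0.0, 1.0)

def make_surrogate_dataset(images, n_classes=8000, k_samples=100, size=32):
    """Sample 'seed' patches and declare the set of transformed versions of
    each seed patch a surrogate class with label i."""
    data, labels = [], []
    for i in range(n_classes):
        img = images[rng.randint(len(images))]
        r = rng.randint(img.shape[0] - size)
        c = rng.randint(img.shape[1] - size)
        seed = img[r:r + size, c:c + size]
        for _ in range(k_samples):
            data.append(random_transform(seed))
            labels.append(i)   # surrogate label = index of the seed patch
    return np.stack(data), np.array(labels)
```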
3 Learning Algorithm
Given the sets of transformed image patches, we declare each of these sets to be a class by assigning
label i to the class S_{x_i}. We next train a CNN to discriminate between these surrogate classes.
Formally, we minimize the following loss function:
L(X) = Σ_{x_i ∈ X} Σ_{T ∈ T_i} l(i, T x_i),     (1)
where l(i, T x_i) is the loss on the transformed sample T x_i with (surrogate) true label i. We use
a CNN with a softmax output layer and optimize the multinomial negative log likelihood of the
network output, hence in our case
l(i, T x_i) = M(e_i, f(T x_i)),     M(y, f) = −⟨y, log f⟩ = −Σ_k y_k log f_k,     (2)
where f(·) denotes the function computing the values of the output layer of the CNN given the input data, and e_i is the i-th standard basis vector. We note that in the limit of an infinite number of
transformations per surrogate class, the objective function (1) takes the form
L̂(X) = Σ_{x_i ∈ X} E_α[l(i, T_α x_i)],     (3)
which we shall analyze in the next section.
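A minimal sketch of this objective (our own illustration, not the authors' implementation; a plain linear softmax classifier stands in for the CNN output function f, and all names are hypothetical):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def surrogate_loss(W, batch, labels):
    """Loss (1) with the negative log likelihood (2) on a minibatch.

    batch  : (n, d) flattened transformed patches T x_i
    labels : (n,)   surrogate labels i (index of the seed patch)
    W      : (d, n_classes) weights of a toy linear model standing in
             for the CNN output function f."""
    f = softmax(batch @ W)                  # network outputs f(T x_i)
    nll = -np.log(f[np.arange(len(labels)), labels] + 1e-12)
    return nll.mean()
```

With (data, labels) as produced by the data-generation sketch above, W would then be trained by stochastic gradient descent on this loss.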
Intuitively, the classification problem described above serves to ensure that different input samples
can be distinguished. At the same time, it enforces invariance to the specified transformations. In the
following sections we provide a foundation for this intuition. We first present a formal analysis of
the objective, separating it into a well defined classification problem and a regularizer that enforces
invariance (resembling the analysis in Wager et al. [19]). We then discuss the derived properties of
this classification problem and compare it to common practices for unsupervised feature learning.
3.1 Formal Analysis
We denote by α ∈ A the random vector of transformation parameters, by g(x) the vector of activations of the second-to-last layer of the network when presented the input patch x, by W the matrix of the weights of the last network layer, by h(x) = Wg(x) the last layer activations before applying
the softmax, and by f (x) = softmax (h(x)) the output of the network. By plugging in the definition
of the softmax activation function
softmax(z) = exp(z) / ‖exp(z)‖₁     (4)
the objective function (3) with loss (2) takes the form
Σ_{x_i ∈ X} E_α[ −⟨e_i, h(T_α x_i)⟩ + log ‖exp(h(T_α x_i))‖₁ ].     (5)
With ĝ_i = E_α[g(T_α x_i)] being the average feature representation of transformed versions of the image patch x_i we can rewrite Eq. (5) as
Σ_{x_i ∈ X} [ −⟨e_i, Wĝ_i⟩ + log ‖exp(Wĝ_i)‖₁ ]
  + Σ_{x_i ∈ X} [ E_α[log ‖exp(h(T_α x_i))‖₁] − log ‖exp(Wĝ_i)‖₁ ].     (6)
The first sum is the objective function of a multinomial logistic regression problem with input-target
pairs (ĝ_i, e_i). This objective falls back to the transformation-free instance classification problem L(X) = Σ_{x_i ∈ X} l(i, x_i) if g(x_i) = E_α[g(T_α x_i)]. In general, this equality does not hold and thus the first sum enforces correct classification of the average representation E_α[g(T_α x_i)] for a given input sample. For a truly invariant representation, however, the equality is achieved. Similarly, if we suppose that T_α x = x for α = 0, that for small values of α the feature representation g(T_α x_i) is approximately linear with respect to α and that the random variable α is centered, i.e. E_α[α] = 0, then ĝ_i = E_α[g(T_α x_i)] ≈ E_α[g(x_i) + ∇_α(g(T_α x_i))|_{α=0} α] = g(x_i).
The second sum in Eq. (6) can be seen as a regularizer enforcing all h(T_α x_i) to be close to their average value, i.e., the feature representation is sought to be approximately invariant to the transformations T_α. To show this we use the convexity of the function log ‖exp(·)‖₁ and Jensen's inequality, which yields (proof in supplementary material)
E_α[log ‖exp(h(T_α x_i))‖₁] − log ‖exp(Wĝ_i)‖₁ ≥ 0.     (7)
If the feature representation is perfectly invariant, then h(T_α x_i) = Wĝ_i and inequality (7) turns to equality, meaning that the regularizer reaches its global minimum.
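Inequality (7) is easy to verify numerically. The following self-contained check (our own, with hypothetical toy dimensions) exploits that h is linear in g, so the mean of h over transformations equals Wĝ:

```python
import numpy as np

rng = np.random.RandomState(0)

def log_sum_exp(v):
    m = v.max()
    return m + np.log(np.exp(v - m).sum())

# Pre-softmax activations h(T_alpha x_i) for one patch:
# 10 sampled transformations, 5 surrogate classes.
h = rng.randn(10, 5)

lhs = np.mean([log_sum_exp(row) for row in h])   # E_alpha[log||exp(h)||_1]
rhs = log_sum_exp(h.mean(axis=0))                # log||exp(W g_hat)||_1 (h linear in g)
assert lhs - rhs >= 0                            # inequality (7), by Jensen
print(lhs - rhs)                                 # nonnegative regularizer value
```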
3.2 Conceptual Comparison to Previous Unsupervised Learning Methods
Suppose we want to unsupervisedly learn a feature representation useful for a recognition task, for
example classification. The mapping from input images x to a feature representation g(x) should
then satisfy two requirements: (1) there must be at least one feature that is similar for images of the
same category y (invariance); (2) there must be at least one feature that is sufficiently different for
images of different categories (ability to discriminate).
Most unsupervised feature learning methods aim to learn such a representation by modeling the
input distribution p(x). This is based on the assumption that a good model of p(x) contains information about the category distribution p(y|x). That is, if a representation is learned, from which
a given sample can be reconstructed perfectly, then the representation is expected to also encode
information about the category of the sample (ability to discriminate). Additionally, the learned
representation should be invariant to variations in the samples that are irrelevant for the classification task, i.e., it should adhere to the manifold hypothesis (see e.g. Rifai et al. [20] for a recent
discussion). Invariance is classically achieved by regularization of the latent representation, e.g., by
enforcing sparsity [8] or robustness to noise [9].
In contrast, the discriminative objective in Eq. (1) does not directly model the input distribution
p(x) but learns a representation that discriminates between input samples. The representation is not
required to reconstruct the input, which is unnecessary in a recognition or matching task. This leaves
more degrees of freedom to model the desired variability of a sample. As shown in our analysis (see
Eq. (7)), we achieve partial invariance to transformations applied during surrogate data creation by
forcing the representation g(T_α x_i) of the transformed image patch to be predictive of the surrogate
label assigned to the original image patch xi .
It should be noted that this approach assumes that the transformations T_α do not change the identity
of the image content. If we, for example, use a color transformation we will force the network to be
invariant to this change and cannot expect the extracted features to perform well in a task relying on
color information (such as differentiating black panthers from pumas)¹.
4 Experiments
To compare our discriminative approach to previous unsupervised feature learning methods, we report classification results on the STL-10 [21], CIFAR-10 [22] and Caltech-101 [23] datasets. Moreover, we assess the influence of the augmentation parameters on the classification performance and
study the invariance properties of the network.
4.1 Experimental Setup
The datasets we test on differ in the number of classes (10 for CIFAR and STL, 101 for Caltech)
and the number of samples per class. STL is especially well suited for unsupervised learning as it
contains a large set of 100,000 unlabeled samples. In all experiments (except for the dataset transfer
experiment in the supplementary material) we extracted surrogate training data from the unlabeled
subset of STL-10. When testing on CIFAR-10, we resized the images from 32 × 32 pixels to 64 × 64
pixels so that the scale of depicted objects roughly matches the two other datasets.
We worked with two network architectures. A ?small? network was used to evaluate the influence
of different components of the augmentation procedure on classification performance. It consists of
two convolutional layers with 64 filters each followed by a fully connected layer with 128 neurons.
This last layer is succeeded by a softmax layer, which serves as the network output. A ?large?
network, consisting of three convolutional layers with 64, 128 and 256 filters respectively followed
by a fully connected layer with 512 neurons, was trained to compare our method to the state-of-the-art. In both models all convolutional filters are connected to a 5 × 5 region of their input. 2 × 2 max-pooling was performed after the first and second convolutional layers. Dropout [24] was applied to
the fully connected layers. We trained the networks using an implementation based on Caffe [25].
Details on the training, the hyperparameter settings, and an analysis of the performance depending
on the network architecture are provided in the supplementary material. Our code and training data are available at http://lmb.informatik.uni-freiburg.de/resources.
We applied the feature representation to images of arbitrary size by convolutionally computing the
responses of all the network layers except the top softmax. To each feature map, we applied the pooling method that is commonly used for the respective dataset: 1) 4-quadrant max-pooling, resulting in
4 values per feature map, which is the standard procedure for STL-10 and CIFAR-10 [26, 10, 27, 12];
2) 3-layer spatial pyramid, i.e. max-pooling over the whole image as well as within 4 quadrants and
within the cells of a 4 × 4 grid, resulting in 1 + 4 + 16 = 21 values per feature map, which is the
standard for Caltech-101 [28, 10, 29]. Finally, we trained a linear support vector machine (SVM) on
the pooled features.
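For concreteness, here is a sketch of the two pooling schemes applied to one stack of feature maps (our own illustration with hypothetical function names, not the authors' code):

```python
import numpy as np

def quadrant_max_pool(fmap):
    """4-quadrant max-pooling: one maximum per quadrant of each feature map.
    fmap: (channels, height, width) -> vector of length channels * 4."""
    c, h, w = fmap.shape
    quads = [fmap[:, :h // 2, :w // 2], fmap[:, :h // 2, w // 2:],
             fmap[:, h // 2:, :w // 2], fmap[:, h // 2:, w // 2:]]
    return np.concatenate([q.max(axis=(1, 2)) for q in quads])

def spatial_pyramid_pool(fmap, grids=(1, 2, 4)):
    """3-level spatial pyramid: max-pooling over the whole map, 4 quadrants
    and a 4x4 grid -> 1 + 4 + 16 = 21 values per feature map."""
    c, h, w = fmap.shape
    feats = []
    for n in grids:
        for i in range(n):
            for j in range(n):
                cell = fmap[:, i * h // n:(i + 1) * h // n,
                               j * w // n:(j + 1) * w // n]
                feats.append(cell.max(axis=(1, 2)))
    return np.concatenate(feats)
```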
On all datasets we used the standard training and test protocols. On STL-10 the SVM was trained on
10 pre-defined folds of the training data. We report the mean and standard deviation achieved on the
fixed test set. For CIFAR-10 we report two results: (1) training the SVM on the whole CIFAR-10
training set (“CIFAR-10”); (2) the average over 10 random selections of 400 training samples per class (“CIFAR-10(400)”). For Caltech-101 we followed the usual protocol of selecting 30 random
samples per class for training and not more than 50 samples per class for testing. This was repeated
10 times.
4.2 Classification Results
In Table 1 we compare Exemplar-CNN to several unsupervised feature learning methods, including
the current state-of-the-art on each dataset. We also list the state-of-the-art for supervised learning
(which is not directly comparable). Additionally we show the dimensionality of the feature vectors
¹ Such cases could be covered either by careful selection of applied transformations or by combining features from multiple networks trained with different sets of transformations and letting the final classifier choose which features to use.
Table 1: Classification accuracies on several datasets (in percent). † Average per-class accuracy² 78.0% ± 0.4%. ‡ Average per-class accuracy 84.4% ± 0.6%.

Algorithm | STL-10 | CIFAR-10(400) | CIFAR-10 | Caltech-101 | #features
Convolutional K-means Network [26] | 60.1 ± 1 | 70.7 ± 0.7 | 82.0 | — | 8000
Multi-way local pooling [28] | — | — | — | 77.3 ± 0.6 | 1024 × 64
Slowness on videos [10] | 61.0 | — | — | 74.6 | 556
Hierarchical Matching Pursuit (HMP) [27] | 64.5 ± 1 | — | — | — | 1000
Multipath HMP [29] | — | — | — | 82.5 ± 0.5 | 5000
View-Invariant K-means [12] | 63.7 | 72.6 ± 0.7 | 81.9 | — | 6400
Exemplar-CNN (64c5-64c5-128f) | 67.1 ± 0.3 | 69.7 ± 0.3 | 75.7 | 79.8 ± 0.5† | 256
Exemplar-CNN (64c5-128c5-256c5-512f) | 72.8 ± 0.4 | 75.3 ± 0.2 | 82.0 | 85.5 ± 0.4‡ | 960
Supervised state of the art | 70.1 [30] | — | 91.2 [31] | 91.44 [32] | —
produced by each method before final pooling. The small network was trained on 8000 surrogate
classes containing 150 samples each and the large one on 16000 classes with 100 samples each.
The features extracted from the larger network match or outperform the best prior result on all
datasets. This is despite the fact that the dimensionality of the feature vector is smaller than that of
most other approaches and that the networks are trained on the STL-10 unlabeled dataset (i.e. they
are used in a transfer learning manner when applied to CIFAR-10 and Caltech 101). The increase in
performance is especially pronounced when only few labeled samples are available for training the
SVM (as is the case for all the datasets except full CIFAR-10). This is in agreement with previous
evidence that with increasing feature vector dimensionality and number of labeled samples, training
an SVM becomes less dependent on the quality of the features [26, 12]. Remarkably, on STL-10 we
achieve an accuracy of 72.8%, which is a large improvement over all previously reported results.
4.3 Detailed Analysis
We performed additional experiments (using the “small” network) to study the effect of three design
choices in Exemplar-CNN training and validate the invariance properties of the learned features.
Experiments on sampling “seed” patches from different datasets can be found in the supplementary.
4.3.1 Number of Surrogate Classes
We varied the number N of surrogate classes between 50 and 32000. As a sanity check, we also
tried classification with random filters. The results are shown in Fig. 3.
Clearly, the classification accuracy increases with the number of surrogate classes until it reaches
an optimum at about 8000 surrogate classes after which it did not change or even decreased. This
is to be expected: the larger the number of surrogate classes, the more likely it is to draw very
similar or even identical samples, which are hard or impossible to discriminate. Few such cases are
not detrimental to the classification performance, but as soon as such collisions dominate the set
of surrogate labels, the discriminative loss is no longer reasonable and training the network to the
surrogate task no longer succeeds. To check the validity of this explanation we also plot in Fig. 3 the
classification error on the validation set (taken from the surrogate data) computed after training the
network. It rapidly grows as the number of surrogate classes increases. We also observed that the
optimal number of surrogate classes increases with the size of the network (not shown in the figure),
but eventually saturates. This demonstrates the main limitation of our approach to randomly sample
“seed” patches: it does not scale to arbitrarily large amounts of unlabeled data. However, we do not see this as a fundamental restriction and discuss possible solutions in Section 5.
4.3.2 Number of Samples per Surrogate Class
Fig. 4 shows the classification accuracy when the number K of training samples per surrogate class
varies between 1 and 300. The performance improves with more samples per surrogate class and
² On Caltech-101 one can either measure average accuracy over all samples (average overall accuracy) or calculate the accuracy for each class and then average these values (average per-class accuracy). These differ, as some classes contain fewer than 50 test samples. Most researchers in ML use average overall accuracy.
Figure 3: Influence of the number of surrogate training classes (classification accuracy on STL-10 and validation error on the surrogate data vs. the number of classes, log scale). The validation error on the surrogate data is shown in red. Note the different y-axes for the two curves.

Figure 4: Classification performance on STL for different numbers of samples per class (log scale), shown for 1000, 2000 and 4000 surrogate classes as well as for random filters. Random filters can be seen as “0 samples per class”.
saturates at around 100 samples. This indicates that this amount is sufficient to approximate the
formal objective from Eq. (3), hence further increasing the number of samples does not significantly
change the optimization problem. On the other hand, if the number of samples is too small, there is
insufficient data to learn the desired invariance properties.
4.3.3 Types of Transformations
We varied the transformations used for creating
the surrogate data to analyze their influence on
the final classification performance. The set of
“seed” patches was fixed. The result is shown
in Fig. 5. The value “0” corresponds to applying random compositions of all elementary
transformations: scaling, rotation, translation,
color variation, and contrast variation. Different columns of the plot show the difference in
classification accuracy as we discarded some
types of elementary transformations.
Figure 5: Influence of removing groups of transformations during generation of the surrogate training data (difference in classification accuracy vs. removed transformations: rotation, scaling, translation, color, contrast, rot+sc+tr, col+con, all; bars for STL-10, CIFAR-10 and Caltech-101). Baseline (“0” value) is applying all transformations. Each group of three bars corresponds to removing some of the transformations.

Several tendencies can be observed. First, rotation and scaling have only a minor impact on the performance, while translations, color variations and contrast variations are significantly more important. Secondly, the results on STL-10 and CIFAR-10 consistently show that spatial invariance and color-contrast invariance are approximately of equal importance for the classification performance. This indicates that variations in color and contrast, though often neglected, may also improve performance in a supervised learning scenario. Thirdly, on Caltech-101 color and contrast transformations are much more important compared to spatial transformations than on the two other datasets. This is not surprising, since Caltech-101 images are often well aligned, and this dataset bias makes spatial invariance less useful.
4.3.4 Invariance Properties of the Learned Representation
In a final experiment, we analyzed to which extent the representation learned by the network is
invariant to the transformations applied during training. We randomly sampled 500 images from the
STL-10 test set and applied a range of transformations (translation, rotation, contrast, color) to each
image. To avoid empty regions beyond the image boundaries when applying spatial transformations,
we cropped the central 64 × 64 pixel sub-patch from each 96 × 96 pixel image. We then applied two
measures of invariance to these patches.
First, as an explicit measure of invariance, we calculated the normalized Euclidean distance between normalized feature vectors of the original image patch and the transformed one [10] (see the
supplementary material for details). The downside of this approach is that the distance between
extracted features does not take into account how informative and discriminative they are.
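A minimal sketch of how this first measure could be computed (our own reading of [10]; the function name is hypothetical, and the paper's exact normalization is specified in its supplementary material):

```python
import numpy as np

def normalized_distance(f_orig, f_trans, eps=1e-12):
    """Mean Euclidean distance between length-normalized feature vectors of
    original patches and their transformed versions.
    f_orig, f_trans: (n_patches, n_features) arrays."""
    a = f_orig / (np.linalg.norm(f_orig, axis=1, keepdims=True) + eps)
    b = f_trans / (np.linalg.norm(f_trans, axis=1, keepdims=True) + eps)
    return np.linalg.norm(a - b, axis=1).mean()
```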
Figure 6: Invariance properties of the feature representation learned by Exemplar-CNN. (a): Normalized Euclidean distance between feature vectors of the original and the translated image patches vs. the magnitude of the translation (shown for the first, second and third layer, the 4-quadrant pooled features and a HOG baseline); (b)-(d): classification performance on transformed image patches vs. the magnitude of the transformation, for various magnitudes of transformations applied for creating surrogate data. (b): rotation, (c): additive color change, (d): multiplicative contrast change.
We therefore evaluated a second measure, classification performance depending on the magnitude of the transformation applied to the classified patches, which does not come with this problem. To compute the classification accuracy, we trained an SVM on the central 64 × 64 pixel patches from one fold of the STL-10 training set and measured classification performance on all transformed versions of 500 samples from the test set.
The results of both experiments are shown in Fig. 6. Due to space restrictions we show only a few
representative plots. Overall the experiment empirically confirms that the Exemplar-CNN objective leads to learning invariant features. Features in the third layer and the final pooled feature
representation compare favorably to a HOG baseline (Fig. 6 (a)). Furthermore, adding stronger
transformations in the surrogate training data leads to more invariant classification with respect to
these transformations (Fig. 6 (b)-(d)). However, adding too much contrast variation may deteriorate
classification performance (Fig. 6 (d)). One possible reason is that level of contrast can be a useful
feature: for example, strong edges in an image are usually more important than weak ones.
5 Discussion
We have proposed a discriminative objective for unsupervised feature learning by training a CNN
without class labels. The core idea is to generate a set of surrogate labels via data augmentation.
The features learned by the network yield a large improvement in classification accuracy compared
to features obtained with previous unsupervised methods. These results strongly indicate that a
discriminative objective is superior to objectives previously used for unsupervised feature learning.
One potential shortcoming of the proposed method is that in its current state it does not scale to arbitrarily large datasets. Two probable reasons for this are that (1) as the number of surrogate classes
grows larger, many of them become similar, which contradicts the discriminative objective, and (2)
the surrogate task we use is relatively simple and does not allow the network to learn invariance to
complex variations, such as 3D viewpoint changes or inter-instance variation. We hypothesize that
the presented approach could learn more powerful higher-level features, if the surrogate data were
more diverse. This could be achieved by using additional weak supervision, for example, by means
of video or a small number of labeled samples. Another possible way of obtaining richer surrogate training data and at the same time avoiding similar surrogate classes would be (unsupervised)
merging of similar surrogate classes. We see these as interesting directions for future work.
Acknowledgements
We acknowledge funding by the ERC Starting Grant VideoLearn (279401); the work was also partly
supported by the BrainLinks-BrainTools Cluster of Excellence funded by the German Research
Foundation (DFG, grant number EXC 1086).
References
[1] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural
networks. In NIPS, pages 1106–1114, 2012.
[2] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In ECCV, 2014.
[3] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. In ICML, 2014.
[4] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection
and semantic segmentation. In CVPR, 2014.
[5] K. Cho. Simple sparsification improves sparse denoising autoencoders in denoising highly corrupted
images. In ICML. JMLR Workshop and Conference Proceedings, 2013.
[6] P. Fischer, A. Dosovitskiy, and T. Brox. Descriptor matching with convolutional neural networks: a
comparison to SIFT. 2014. pre-print, arXiv:1405.5769v1 [cs.CV].
[7] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541–551, 1989.
[8] K. Kavukcuoglu, P. Sermanet, Y. Boureau, K. Gregor, M. Mathieu, and Y. LeCun. Learning convolutional
feature hierarchies for visual recognition. In NIPS, 2010.
[9] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol. Extracting and composing robust features with
denoising autoencoders. In ICML, pages 1096–1103, 2008.
[10] W. Y. Zou, A. Y. Ng, S. Zhu, and K. Yu. Deep learning of invariant features via simulated fixations in
video. In NIPS, pages 3212–3220, 2012.
[11] K. Sohn and H. Lee. Learning invariant representations with local transformations. In ICML, 2012.
[12] K. Y. Hui. Direct modeling of complex invariances for visual object features. In ICML, 2013.
[13] P. Simard, B. Victorri, Y. LeCun, and J. S. Denker. Tangent Prop - A formalism for specifying selected
invariances in an adaptive network. In NIPS, 1992.
[14] H. Drucker and Y. LeCun. Improving generalization performance using double backpropagation. IEEE
Transactions on Neural Networks, 3(6):991–997, 1992.
[15] M.-R. Amini and P. Gallinari. Semi supervised logistic regression. In ECAI, pages 390–394, 2002.
[16] Y. Grandvalet and Y. Bengio. Entropy regularization. In Semi-Supervised Learning, pages 151–168. MIT
Press, 2006.
[17] A. Ahmed, K. Yu, W. Xu, Y. Gong, and E. Xing. Training hierarchical feed-forward visual recognition
models using transfer learning from pseudo-tasks. In ECCV (3), pages 69–82, 2008.
[18] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493–2537, 2011.
[19] S. Wager, S. Wang, and P. Liang. Dropout training as adaptive regularization. In NIPS. 2013.
[20] S. Rifai, Y. N. Dauphin, P. Vincent, Y. Bengio, and X. Muller. The manifold tangent classifier. In NIPS.
2011.
[21] A. Coates, H. Lee, and A. Y. Ng. An analysis of single-layer networks in unsupervised feature learning.
AISTATS, 2011.
[22] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Master's thesis,
Department of Computer Science, University of Toronto, 2009.
[23] L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: An
incremental bayesian approach tested on 101 object categories. In CVPR WGMBV, 2004.
[24] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural
networks by preventing co-adaptation of feature detectors. 2012. pre-print, arxiv:cs/1207.0580v3.
[25] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe:
Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[26] A. Coates and A. Y. Ng. Selecting receptive fields in deep networks. In NIPS, pages 2528–2536, 2011.
[27] L. Bo, X. Ren, and D. Fox. Unsupervised feature learning for RGB-D based object recognition. In ISER,
June 2012.
[28] Y. Boureau, N. Le Roux, F. Bach, J. Ponce, and Y. LeCun. Ask the locals: multi-way local pooling for
image recognition. In ICCV'11. IEEE, 2011.
[29] L. Bo, X. Ren, and D. Fox. Multipath sparse coding using hierarchical matching pursuit. In CVPR, pages
660–667, 2013.
[30] K. Swersky, J. Snoek, and R. P. Adams. Multi-task bayesian optimization. In NIPS, 2013.
[31] M. Lin, Q. Chen, and S. Yan. Network in network. In ICLR, 2014.
[32] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual
recognition. In ECCV, 2014.
5,024 | 5,549 | Modeling Deep Temporal Dependencies with
Recurrent “Grammar Cells”
Roland Memisevic
University of Montreal, Canada
[email protected]
Vincent Michalski
Goethe University Frankfurt, Germany
[email protected]
Kishore Konda
Goethe University Frankfurt, Germany
[email protected]
Abstract
We propose modeling time series by representing the transformations that take a
frame at time t to a frame at time t+1. To this end we show how a bi-linear model
of transformations, such as a gated autoencoder, can be turned into a recurrent network, by training it to predict future frames from the current one and the inferred
transformation using backprop-through-time. We also show how stacking multiple layers of gating units in a recurrent pyramid makes it possible to represent the
“syntax” of complicated time series, and that it can outperform standard recurrent
neural networks in terms of prediction accuracy on a variety of tasks.
1 Introduction
The predominant paradigm of modeling time series is based on state-space models, in which a
hidden state evolves according to some predefined dynamical law, and an observation model maps
the state to the dataspace. In this work, we explore an alternative approach to modeling time series,
where learning amounts to finding an explicit representation of the transformation that takes an
observation at time t to the observation at time t + 1.
Modeling a sequence in terms of transformations makes it very easy to exploit redundancies that
would be hard to capture otherwise. For example, very little information is needed to specify an
element of the signal class sine-wave, if it is represented in terms of a linear mapping that takes a
snippet of signal to the next snippet: given an initial “seed”-frame, any two sine-waves differ only
by the amount of phase shift that the linear transformation has to repeatedly apply at each time step.
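To make this intuition concrete, here is a small NumPy toy example of our own (not taken from the paper): a single fixed 2×2 orthogonal transformation, applied repeatedly to a two-dimensional seed frame, generates an entire sine-wave, and the per-sequence information reduces to the phase-shift parameter of that transformation (cf. the relation x(2) = L x(1) formalized in Section 2).

```python
import numpy as np

phase_shift = 0.1                        # the only per-sequence parameter
L = np.array([[ np.cos(phase_shift), np.sin(phase_shift)],
              [-np.sin(phase_shift), np.cos(phase_shift)]])  # orthogonal map

frame = np.array([0.0, 1.0])             # seed frame: (sin 0, cos 0)
wave = []
for t in range(100):
    wave.append(frame[0])                # read out the sine component
    frame = L @ frame                    # next frame = L applied to current frame

# wave[t] == sin(phase_shift * t); changing phase_shift changes the frequency.
```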
In order to model a signal as a sequence of transformations, it is necessary to make transformations
“first-class objects” that can be passed around and picked up by higher layers in the network. To
this end, we use bilinear models (e.g. [1, 2, 3]) which use multiplicative interactions to extract transformations from pairs of observations. We show that deep learning which is proven to be effective in
learning structural hierarchies can also learn to capture hierarchies of relations or transformations.
A deep model can be built by stacking multiple layers of the transformation model, so that higher
layers capture higher-oder transformations (that is, transformations between transformations). To be
able to model multiple steps of a time-series, we propose a training scheme called predictive training: after computing a deep representation of the dynamics from the first frames of a time series, the
model predicts future frames by repeatedly applying the transformations passed down by higher layers, assuming constancy of the transformation in the top-most layer. Derivatives are computed using
back-prop through time (BPTT) [4]. We shall refer to this model as a predictive gating pyramid
(PGP) in the following.
Since hidden units at each layer encode transformations, not content of their inputs, they capture only
structural dependencies and we refer to them as “grammar cells.”¹ The model can also be viewed as a
higher-order partial difference equation whose parameters are estimated from data. Generating from
the model amounts to providing boundary conditions in the form of seed-frames, whose number
corresponds to the number of layers (the order of the difference equation). We demonstrate that
a two-layer model is already surprisingly effective at capturing whole classes of complicated time
series, including frequency-modulated sine-waves (also known as “chirps”) which we found hard to
represent using standard recurrent networks.
1.1 Related Work
LSTM units [5] also use multiplicative interactions, in conjunction with self-connections of weight
1, to model long-term dependencies and to avoid vanishing gradients problems [6]. Instead of constant self-connections, the lower-layer units in our model can represent long-term structure by using
dynamically changing orthogonal transformations as we shall show. Other related work includes
[7], where multiplicative interactions are used to let inputs modulate connections between successive hidden states of a recurrent neural network (RNN), with application to modeling text. Our model
also bears some similarity to [3] who model MOCAP data using a three-way Restricted Boltzmann
Machine, where a second layer of hidden units can be used to model more ?abstract? features of
the time series. In contrast to that work, our higher-order units, which are also bi-linear, are used to
explicitly model higher-order transformations. More importantly, we use predictive training using
backprop through time for our model, which is crucial for achieving good performance as we show
in our experiments. Other approaches to sequence modeling include [8], who compress sequences
using a two-layer RNN, where the second layer predicts residuals, which the first layer fails to predict well. In our model, compression amounts to exploiting redundancies in the relations between
successive sequence elements. In contrast to [9] who introduce a recursive bi-linear autoencoder
for modeling language, our model is recurrent and trained to predict, not reconstruct. The model
by [10] is similar to our model in that it learns the dynamics of sequences, but assumes a simple
autoregressive, rather than deep, compositional dependence, on the past. An early version of our
work is described in [11].
Our work is also loosely related to sequence based invariance [12] and slow feature analysis [13],
because hidden units are designed to extract structure that is invariant in time. In contrast to that
work, our multi-layer models assume higher-order invariances, that is, invariance of velocity in the
case of one hidden layer, of acceleration in the case of two, of jerk (the rate of change of acceleration)
in the case of three, etc.
2 Background on Relational Feature Learning
In order to learn transformation features, m, that represent the relationship between two observations x(1) and x(2) it is necessary to learn a basis that can represent the correlation structure across
the observations. In a time series, knowledge of one frame, x(1) , typically highly constrains the
distribution over possible next frames, x(2) . This suggests modeling x(2) using a feature learning
model whose parameters are a function of x(1) [14], giving rise to bi-linear models of transformations, such as the Gated Boltzmann Machine [15, 3], Gated Autoencoder [16], and similar models
(see [14] for an overview). Formally, bi-linear models learn to represent a linear transformation, L,
between two observations x(1) and x(2) , where
x(2) = Lx(1) .    (1)
Bi-linear models encode the transformation in a layer of mapping units that get tuned to rotation
angles in the invariant subspaces of the transformation class [14]. We shall focus on the gated
autoencoder (GAE) in the following but our description could be easily adapted to other bi-linear
models. Formally, the response of a layer of mapping units in the GAE takes the form²

m = σ(W(Ux(1) ⊙ Vx(2))) .    (2)
¹ We dedicate this paper to the venerable grandmother cell, a grandmother of the grammar cell.
² We are only using "factored" [15] bi-linear models in this work, but the framework presented in this work could be applied to unfactored models, too.
where U, V and W are parameter matrices, ⊙ denotes elementwise multiplication, and σ is an elementwise non-linearity, such as the logistic sigmoid. Given mapping unit activations, m, and the first observation, x(1), the second observation can be reconstructed using

x̂(2) = Vᵀ(Ux(1) ⊙ Wᵀm) ,    (3)

which amounts to applying the transformation encoded in m to x(1) [16]. As the model is symmetric, the reconstruction of the first observation, given the second, is similarly given by

x̂(1) = Uᵀ(Vx(2) ⊙ Wᵀm) .    (4)

For training one can minimize the symmetric reconstruction error

L = ||x(1) − x̂(1)||² + ||x(2) − x̂(2)||² .    (5)
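To make Equations 2-5 concrete, here is a minimal NumPy sketch of GAE inference and symmetric reconstruction. This is not the authors' implementation; the dimensions, initialization scale and function names are illustrative assumptions only.

import numpy as np

def sigma(a):
    # Elementwise logistic sigmoid used for the mapping units.
    return 1.0 / (1.0 + np.exp(-a))

def gae_mappings(U, V, W, x1, x2):
    # Equation (2): factor responses are multiplied elementwise and pooled by W.
    return sigma(W @ ((U @ x1) * (V @ x2)))

def gae_reconstruct_x2(U, V, W, x1, m):
    # Equation (3): apply the transformation encoded in m to x1.
    return V.T @ ((U @ x1) * (W.T @ m))

def gae_reconstruct_x1(U, V, W, x2, m):
    # Equation (4): symmetric reconstruction of the first observation.
    return U.T @ ((V @ x2) * (W.T @ m))

def symmetric_loss(U, V, W, x1, x2):
    # Equation (5): symmetric reconstruction error.
    m = gae_mappings(U, V, W, x1, x2)
    e1 = x1 - gae_reconstruct_x1(U, V, W, x2, m)
    e2 = x2 - gae_reconstruct_x2(U, V, W, x1, m)
    return float(np.sum(e1**2) + np.sum(e2**2))

# Toy shapes (assumed): 13x13-pixel frames, 256 factors, 128 mapping units.
rng = np.random.default_rng(0)
D, F, M = 169, 256, 128
U, V = rng.normal(0, 0.01, (F, D)), rng.normal(0, 0.01, (F, D))
W = rng.normal(0, 0.01, (M, F))
x1, x2 = rng.normal(size=D), rng.normal(size=D)
print(symmetric_loss(U, V, W, x1, x2))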
Training turns the rows of U and V into filter pairs which reside in the invariant subspaces of the
transformation class on which the model was trained. After learning, each pair is tuned to a particular
rotation angle in the subspace, and the components of m are consequently tuned to subspace rotation
angles. Due to the pooling layer, W, they are furthermore independent of the absolute angles in the
subspaces [14].
3 Higher-Order Relational Features
Alternatively, one can think of the bilinear model as performing a first-order Taylor approximation
of the input sequence, where the hidden representation models the partial first-order derivatives of
the inputs with respect to time. If we assume constancy of the first-order derivatives (or higher-order
derivatives, as we shall discuss), the complete sequence can be encoded using information about a
single frame and the derivatives. This is a very different way of addressing long-range correlations
than assuming memory units that explicitly keep state [5]. Instead, here we assume that there is
structure in the temporal evolution of the input stream and we focus on capturing this structure.
As an intuitive example, consider a sinusoidal signal with unknown frequency and phase. The
complete signal can be specified exactly and completely after having seen a few seed frames, making
it possible in principle to generate the rest of the signal ad infinitum.
3.1 Learning of Higher-Order Relational Features
The first-order partial derivative of a multidimensional discrete-time dynamical system describes
the correspondences between observations at subsequent time steps. The fact that relational feature
learning applied to subsequent frames may be viewed as a way to learn these derivatives, suggests
modeling higher-order derivatives with another layer of relational features.
To this end, we suggest cascading relational features in a "pyramid" as depicted in Figure 1 on the left.³ Given a sequence of inputs x(t−2), x(t−1), x(t), first-order relational features m1(t−1:t) describe the transformations between two subsequent inputs x(t−1) and x(t). Second-order relational features m2(t−2:t) describe correspondences between two first-order relational features m1(t−2:t−1) and m1(t−1:t), modeling the "second-order derivatives" of the signal with respect to time.
To learn the higher-order features, we can first train a bottom-layer GAE module to represent correspondences between frame pairs using filter matrices U1, V1 and W1 (the subscript index refers to the layer). From the first-layer module we can infer mappings m1(t−2:t−1) and m1(t−1:t) for overlapping input pairs (x(t−2), x(t−1)) and (x(t−1), x(t)), and use these as inputs to a second-layer GAE module. A second GAE can then learn to represent relations between mappings of the first layer using parameters U2, V2 and W2.
Inference of second-order relational features amounts to computing first- and second-order mappings according to

m1(t−2:t−1) = σ(W1((U1x(t−2)) ⊙ (V1x(t−1)))) ,    (6)
m1(t−1:t) = σ(W1((U1x(t−1)) ⊙ (V1x(t)))) ,    (7)
m2(t−2:t) = σ(W2((U2m1(t−2:t−1)) ⊙ (V2m1(t−1:t)))) .    (8)
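The following NumPy fragment sketches this bottom-up inference (Equations 6-8) in the same style as the sketch above; the parameter packing and names are hypothetical, not taken from the paper.

import numpy as np

def sigma(a):
    return 1.0 / (1.0 + np.exp(-a))

def pgp_infer(layer1, layer2, x_tm2, x_tm1, x_t):
    # layer1 = (U1, V1, W1) and layer2 = (U2, V2, W2): assumed parameter packing.
    U1, V1, W1 = layer1
    U2, V2, W2 = layer2
    m1_a = sigma(W1 @ ((U1 @ x_tm2) * (V1 @ x_tm1)))  # eq. (6)
    m1_b = sigma(W1 @ ((U1 @ x_tm1) * (V1 @ x_t)))    # eq. (7)
    m2 = sigma(W2 @ ((U2 @ m1_a) * (V2 @ m1_b)))      # eq. (8)
    return m1_a, m1_b, m2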
³ Images taken from the NORB data set described in [17].
Figure 1: Left: A two-layer model encodes a sequence by assuming constant ?acceleration?. Right:
Prediction using first-order relational features.
Like a mixture of experts, a bi-linear model represents a highly non-linear mapping from x(1) to
x(2) as a mixture of linear (and thereby possibly orthogonal) transformations. Similar to the LSTM,
this facilitates error back-propagation, because orthogonal transformations do not suffer from vanishing/exploding gradient problems. This may be viewed as a way of generalizing LSTM [5] which
uses the identity matrix as the orthogonal transformation. "Grammar units", in contrast, try to model
long-term structure that is dynamic and compositional rather than remembering a fixed value.
Cascading GAE modules in this way can also be motivated from the view of orthogonal transformations as subspace rotations: summing over filter-response products can yield transformation
detectors which are sensitive to relative angles (phases in the case of translations) and invariant to
the absolute angles [14]. The relative rotation angle (or phase delta) between two projections is itself
an angle, and the relation between two such angles represents an "angular acceleration" that can be
picked up by another layer.
In contrast to a single-layer, two-frame model, the reconstruction error is no longer directly applicable (although a naive way to train the model would be to minimize reconstruction error for each pair
of adjacent nodes in each layer). However, a natural way of training the model on sequential data is
to replace the reconstruction task with the objective of predicting future frames as we discuss next.
4 Predictive Training

4.1 Single-Step Prediction
In the GAE model, given two frames x(1) and x(2), one can compute a prediction of the third frame by first inferring mappings m(1,2) from x(1) and x(2) (see Equation 2) and using these to compute a prediction x̂(3) by applying the inferred transformation m(1,2) to frame x(2):

x̂(3) = Vᵀ(Ux(2) ⊙ Wᵀm(1,2)) .    (9)
See Figure 1 (right side) for an outline of the prediction scheme. The prediction of x(3) is a good
prediction under the assumption that frame-to-frame transformations from x(1) to x(2) and from
x(2) to x(3) are approximately the same, in other words if transformations themselves are assumed
to be approximately constant in time. We shall show later how to relax the assumption of constancy
of the transformation by adding layers to the model.
The training criterion for this predictive gating pyramid (PGP) is the prediction error

L = ||x̂(3) − x(3)||₂² .    (10)
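A hedged sketch of single-step prediction and its training loss (Equations 9 and 10) follows; sigma is the logistic sigmoid from the earlier sketches, and the function names are assumptions.

import numpy as np

def sigma(a):
    return 1.0 / (1.0 + np.exp(-a))

def pgp_predict_next(U, V, W, x1, x2):
    # Infer the transformation between x1 and x2 (eq. 2), then apply it to x2 (eq. 9).
    m = sigma(W @ ((U @ x1) * (V @ x2)))
    return V.T @ ((U @ x2) * (W.T @ m))

def prediction_loss(U, V, W, x1, x2, x3):
    # Equation (10): squared single-step prediction error.
    x3_hat = pgp_predict_next(U, V, W, x1, x2)
    return float(np.sum((x3_hat - x3)**2))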
Besides allowing us to apply bilinear models to sequences, this training objective, in contrast to the
reconstruction objective, can guide the mapping representation to be invariant to the content of each
frame, because encoding the content of x(2) will not help predicting x(3) well.
4.2 Multi-Step Prediction and Non-Constant Transformations
We can iterate the inference-prediction process in order to look ahead more than one frame in time.
To compute a prediction x̂(4) with the PGP, for example, we can infer the mappings and prediction:

m(2:3) = σ(W(Ux(2) ⊙ Vx̂(3))) ,    x̂(4) = Vᵀ(Ux̂(3) ⊙ Wᵀm(2:3)) .    (11)
Figure 2: Left: Prediction with a 2-layer PGP. Right: Multi-step prediction with a 3-layer PGP.
Then mappings can be inferred again from x̂(3) and x̂(4) to compute a prediction of x̂(5), and so on.
When the assumption of constancy of the transformations is violated, one can use an additional
layer to model how transformations themselves change over time as described in Section 3. The
assumption behind the two-layer PGP is that the second-order relational structure in the sequence
is constant. Under this assumption, we compute a prediction x̂(t+1) in two steps after inferring m2(t−2:t) according to Equation 8: First, first-order relational features describing the correspondence between x(t) and x(t+1) are inferred top-down as

m̂1(t:t+1) = V2ᵀ(U2m1(t−1:t) ⊙ W2ᵀm2(t−2:t)) ,    (12)

from which we can compute x̂(t+1) as

x̂(t+1) = V1ᵀ(U1x(t) ⊙ W1ᵀm̂1(t:t+1)) .    (13)
See Figure 2 (left side) for an illustration of the two-layer prediction scheme. To predict multiple steps ahead we repeat the inference-prediction process on x(t−1), x(t) and x̂(t+1), i.e. by appending the prediction to the sequence and increasing t by one.
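A sketch of this generation loop is given below, holding the top-level mapping constant as in the generation experiments; the bottom-up re-inference after each predicted frame follows the description above, and all names and shapes are illustrative assumptions.

import numpy as np

def sigma(a):
    return 1.0 / (1.0 + np.exp(-a))

def pgp2_generate(layer1, layer2, seed, n_steps):
    # seed: three frames [x(t-2), x(t-1), x(t)]; layerk = (Uk, Vk, Wk).
    U1, V1, W1 = layer1
    U2, V2, W2 = layer2
    frames = list(seed)
    m1_a = sigma(W1 @ ((U1 @ frames[-3]) * (V1 @ frames[-2])))
    m1_b = sigma(W1 @ ((U1 @ frames[-2]) * (V1 @ frames[-1])))
    m2 = sigma(W2 @ ((U2 @ m1_a) * (V2 @ m1_b)))  # held fixed while generating
    for _ in range(n_steps):
        m1_hat = V2.T @ ((U2 @ m1_b) * (W2.T @ m2))            # eq. (12)
        x_next = V1.T @ ((U1 @ frames[-1]) * (W1.T @ m1_hat))  # eq. (13)
        frames.append(x_next)
        # Re-infer the most recent first-order mapping from the appended frame.
        m1_b = sigma(W1 @ ((U1 @ frames[-2]) * (V1 @ frames[-1])))
    return frames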
As outlined in Figure 2 (right side), the concept can be generalized to more than two layers by
recursion to yield higher-order relational features. Weights can be shared across layers, but we used
untied weights in our experiments.
To summarize, the prediction process consists of iteratively computing predictions of the next lower level's activations, beginning from the top. To infer the top-level activations themselves, one needs a
number of seed frames corresponding to the depth of the model. The models can be trained using
BPTT to compute gradients of the k-step prediction error (the sum of prediction errors) with respect
to the parameters. We observed that starting with few prediction steps and iteratively increasing the
number of prediction steps as training progresses considerably stabilizes the learning.
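The schedule can be sketched as follows; model, its k_step_loss_grad routine (standing in for BPTT) and sgd_update are hypothetical interfaces introduced only for illustration, not the authors' API.

def predictive_training(model, sequences, max_k=10, epochs_per_stage=100):
    # Start with single-step prediction and grow the predict-ahead horizon k,
    # which was observed to stabilize learning considerably.
    for k in range(1, max_k + 1):
        for _ in range(epochs_per_stage):
            for seq in sequences:
                grads = model.k_step_loss_grad(seq, k)  # BPTT over k predictions
                model.sgd_update(grads)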
5 Experiments
We tested and compared the models on sequences and videos with varying degrees of complexity, from synthetic constant to synthetic accelerated transformations to more complex real-world
transformations. A description of the synthetic shift and rotation data sets is provided in the supplementary material.
5.1 Preprocessing and Initialization
For all data sets, except for chirps and bouncing balls, PCA whitening was used for dimensionality
reduction, retaining around 95% of the variance. The chirps-data was normalized by subtracting
the mean and dividing by the standard deviation of the training set. For the multi-layer models we
used greedy layerwise pretraining before predictive training. We found pretraining to be crucial for
the predictive training to work well. Each layer was pretrained using a simple GAE, the first layer
on input frames, the next layer on the inferred mappings. Stochastic gradient descent (SGD) with
learning rate 0.001 and momentum 0.9 was used for all pretraining.
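For reference, one SGD-with-momentum parameter update of the kind used here might look as follows; this is a generic sketch, and the parameter/velocity dictionaries are assumptions.

def sgd_momentum_step(params, grads, velocity, lr=0.001, momentum=0.9):
    # One update with the pretraining settings (learning rate 0.001, momentum 0.9).
    for key in params:
        velocity[key] = momentum * velocity[key] - lr * grads[key]
        params[key] += velocity[key]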
Table 1: Classification accuracies (%) on accelerated transformation data using mappings from different layers in the PGP (accuracies after pretraining shown in parentheses).

Data set   | m1(1:2)     | m1(2:3)     | (m1(1:2), m1(2:3)) | m2(1:3)
ACC ROT    | 18.1 (19.4) | 29.3 (30.9) | 74.0 (64.9)        | 74.4 (53.7)
ACC SHIFT  | 20.9 (20.6) | 34.4 (33.3) | 42.7 (38.4)        | 80.6 (63.4)

5.2 Comparison of Predictive and Reconstructive Training
To evaluate whether predictive training (PGP) yields better representations of transformations than
training with a reconstruction objective (GAE), we first performed a classification experiment on
videos showing artificially transformed natural images. 13 ? 13 patches were cropped from the
Berkeley Segmentation data set (BSDS300) [18]. Two data sets with videos featuring constant velocity shifts (CONST SHIFT) and rotations (CONST ROT) were generated. The shift vectors (for CONST SHIFT) and rotation angles (for CONST ROT) were each grouped into 8 bins to generate labels for classification.
The numbers of filter pairs and mapping units were chosen using a grid search. The setting with the
best performance on the validation set was 256 filters and 256 mapping units for both training objectives on both data sets. The models were each trained for 1 000 epochs using SGD with learning rate
0.001 and momentum 0.9. Mappings of the first two inputs were used as input to a logistic regression classifier. The experiment was performed three times on both data sets. The mean accuracy (%)
on CONST SHIFT after predictive training was 79.4, compared to 76.4 after reconstructive training. For CONST ROT, mean accuracies were 98.2 after predictive and 97.6 after reconstructive training.
This confirms that predictive training yields a more explicit representation of transformations, that
is less dependent on image content, as discussed in Section 4.1.
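The classification protocol can be sketched as below; scikit-learn is an assumed choice of library, since the paper does not name one, and the function name is hypothetical.

from sklearn.linear_model import LogisticRegression

def mapping_accuracy(train_maps, train_labels, test_maps, test_labels):
    # Mappings inferred from the first two frames serve as classifier inputs;
    # the labels are the 8 shift/rotation bins.
    clf = LogisticRegression(max_iter=1000)
    clf.fit(train_maps, train_labels)
    return clf.score(test_maps, test_labels)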
5.3 Detecting Acceleration
To test the hypothesis that the PGP learns to model second-order correspondences in sequences, image sequences with accelerated shifts (ACC SHIFT) and rotations (ACC ROT) of natural image patches were generated. The acceleration vectors (for ACC SHIFT) and angular rotations (for ACC ROT) were
each grouped into 8 bins to generate output labels for classification.
Numbers of filter pairs and mapping units were set to 512 and 256, respectively, after performing a
grid search. After pretraining, the PGP was trained using SGD with learning rate 0.0001 and momentum 0.9, for 400 epochs on single-step prediction and then 500 epochs on two-step prediction.
After training, first- and second-layer mappings were inferred from the first three frames of the test
sequences. The classification accuracies using logistic regression with second-layer mappings of the PGP (m2(1:3)), with individual first-layer mappings (m1(1:2) and m1(2:3)), and with their concatenation (m1(1:2), m1(2:3)) as classifier inputs are compared in Table 1 for both data sets (before and after
predictive finetuning). The second-layer mappings achieved a significantly higher accuracy for both
data sets after predictive training. For ACC ROT, the concatenation of first-layer mappings performs
almost as well as the second-layer mappings, which may be because rotations have fewer degrees of
freedom than shifts making them easier to model. Note that the accuracy for the first layer mappings
also improved with predictive finetuning.
These results show that the PGP can learn a better representation of the second-order relational
structure in the data than the single-layer model. They further show that predictive training improves
performances of both models and is crucial for the PGP.
5.4 Sequence Prediction
In these experiments we test the capability of the models to predict previously unseen sequences
multiple steps into the future. This allows us to assess to what degree modeling higher-order "derivatives" makes it possible to capture the temporal evolution of a signal without resorting to an explicit
Figure 3: Multi-step predictions by the PGP trained on accelerated rotations (left) and shifts (right).
From top to bottom: ground truth, predictions before and after predictive finetuning.
Figure 4: Left: Chirp signal and the predictions of the CRBM, RNN and PGP after seeing the first
five 10-frame vectors. Right: The MSE of the three models for each step.
representation of a hidden state. Unless mentioned otherwise, the presented sequences were seeded
with frames from test data (not seen during training).
Accelerated Transformations
Figure 3 shows predictions with the PGP on the data sets introduced in Section 5.3 after different
stages of training. As can be seen in the figures, the prediction accuracy increases significantly with
multi-step training.
Chirps
Performances of the PGP were compared with that of a standard RNN (trained with BPTT) and a
CRBM (trained with contrastive divergence) [19] on a dataset containing chirps (sinusoidal waves
that increase or decrease in frequency over time). Training and test set each contain 20,000 sequences. The 160 frames of each sequence are grouped into 16 non-overlapping 10-frame windows,
yielding 10-dimensional input vectors. Given the first 5 windows, the remaining 11 windows have to
be predicted. Second-order mappings of the PGP are averaged for the seed windows and then held
fixed for prediction. Predictions for one test sequence are shown in Figure 4 (left). Mean-squared
errors (MSE) on the test set are 1.159 for the RNN, 1.624 for the CRBM and 0.323 for the PGP. A
plot of per-step MSEs is shown in Figure 4 (right).
NORBvideos
The NORBvideos data set introduced in [20] contains videos of objects from the NORB dataset
[17]. The 5-frame videos each show incrementally changed viewpoints of one object. One- and two-hidden-layer PGP models were trained on this data using the authors' original split. Both models
used 2000 features and 1000 mapping units (per layer). The performance of the one-hidden layer
model stopped improving at 2000 features, while the two-hidden layer model was able to make use
of the additional parameters. Two-step MSEs on test data were 448.4 and 582.1, respectively.
Figure 6 shows predictions made by both models. The second-order PGP generates predictions that
reflect the 3-D structure in the data. In contrast to the first-order PGP, it is able to extrapolate the
observed transformations.
Bouncing Balls
The PGP is also able to capture the highly non-linear dynamics in the bouncing balls data set.⁴ The
sequence shown in Figure 5 contains 56 frames, where the first 5 are from the training sequences
and are used as seed for sequence generation (similar to the chirps experiment the average top-layer
mapping vector for the seed frames is fixed). Note that the sequences used for training were only
⁴ The training and test sequences were generated using the script released with [21].
Figure 5: PGP generated sequence of bouncing balls (left-to-right, top-to-bottom).
Figure 6: Two-step PGP test predictions on NORBvideos.
20 frames long. The model's predictions look qualitatively better than most published generated sequences.⁵ Further results and data can be found on the project website at http://www.ccc.
cs.uni-frankfurt.de/people/vincent-michalski/grammar-cells
6 Discussion
A major long-standing problem in sequence modeling is dealing with long-range correlations. It
has been proposed that deep learning may help address this problem by finding representations that
capture better the abstract, semantic content of the inputs [22]. In this work we propose learning
representations with the explicit goal of enabling the prediction of the temporal evolution of the
input stream multiple time steps ahead. Thus we seek a hidden representation that captures those
aspects of the input data which allow us to make predictions about the future.
As we discussed, learning the long-term evolution of a sequence can be simplified by modeling
it as a sequence of temporally varying orthogonal (and thus, in particular, linear) transformations.
Since gating networks are like mixtures-of-experts, the PGP does model its input using a sequence
of linear transformations in the lowest layer; it is thus "horizontally linear". At the same time, it is "vertically compressive", because its sigmoidal units are encouraged to compute non-linear,
sparse representations, like the hidden units in any standard feed-forward neural network. From an
optimization perspective this is a very sensible way to model time-series, since gradients have to
be back-propagated through many more layers horizontally (in time) than vertically (through the
non-linear network).
It is interesting to note that predictive training can also be viewed as an analogy making task [15].
It amounts to relating the transformation from frame t − 1 to t with the transformation between a
later pair of observations, e.g. those at time t and t + 1. The difference is that in a genuine analogy
making task, the target observation may be unrelated to the source observation pair, whereas here
target and source are related. It would be interesting to apply the model to word representations,
or language in general, as this is a domain where both, sequentially structured data and analogical
relationships play central roles.
Acknowledgments
This work was supported by the German Federal Ministry of Education and Research (BMBF)
in project 01GQ0841 (BFNT Frankfurt), by an NSERC Discovery grant and by a Google faculty
research award.
⁵ Compare with http://www.cs.utoronto.ca/~ilya/pubs/2007/multilayered/index.html and http://www.cs.utoronto.ca/~ilya/pubs/2008/rtrbm_vid.tar.gz
References
[1] R. Memisevic and G. E. Hinton. Unsupervised learning of image transformations. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, 2007.
[2] B. A. Olshausen, C. Cadieu, J. Culpepper, and D. K. Warland. Bilinear models of natural images. 2007.
[3] G. W. Taylor, G. E. Hinton, and S. T. Roweis. Two distributed-state models for generating high-dimensional time series. The Journal of Machine Learning Research, 12:1025–1068, 2011.
[4] P. J. Werbos. Generalization of backpropagation with application to a recurrent gas market model. Neural Networks, 1(4):339–356, 1988.
[5] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[6] S. Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, Institut für Informatik, Lehrstuhl Prof. Brauer, Technische Universität München, 1991.
[7] I. Sutskever, J. Martens, and G. E. Hinton. Generating text with recurrent neural networks. In Proceedings of the 2011 International Conference on Machine Learning, 2011.
[8] J. Schmidhuber. Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2):234–242, 1992.
[9] R. Socher, A. Perelygin, J. Wu, J. Chuang, C. D. Manning, A. Y. Ng, and C. Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing.
[10] J. Luttinen, T. Raiko, and A. Ilin. Linear state-space model with time-varying dynamics. In Machine Learning and Knowledge Discovery in Databases, pages 338–353. Springer, 2014.
[11] V. Michalski. Neural networks for motion understanding. Master's thesis, Goethe-Universität Frankfurt, Frankfurt, Germany, 2013.
[12] P. Földiák. Learning invariance from transformation sequences. Neural Computation, 3(2):194–200, 1991.
[13] L. Wiskott and T. Sejnowski. Slow feature analysis: Unsupervised learning of invariances. Neural Computation, 14(4):715–770, 2002.
[14] R. Memisevic. Learning to relate images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1829–1846, 2013.
[15] R. Memisevic and G. E. Hinton. Learning to represent spatial transformations with factored higher-order Boltzmann machines. Neural Computation, 22(6):1473–1492, 2010.
[16] R. Memisevic. Gradient-based learning of higher-order image features. In 2011 IEEE International Conference on Computer Vision, pages 1591–1598. IEEE, 2011.
[17] Y. LeCun, F. J. Huang, and L. Bottou. Learning methods for generic object recognition with invariance to pose and lighting. In Proceedings of the 2004 IEEE Conference on Computer Vision and Pattern Recognition, 2004.
[18] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proceedings of the Eighth IEEE International Conference on Computer Vision, volume 2, pages 416–423, July 2001.
[19] G. W. Taylor, G. E. Hinton, and S. T. Roweis. Modeling human motion using binary latent variables. In Advances in Neural Information Processing Systems 20, pages 1345–1352, 2007.
[20] R. Memisevic and G. Exarchakis. Learning invariant features by harnessing the aperture problem. In Proceedings of the 30th International Conference on Machine Learning, 2013.
[21] I. Sutskever, G. E. Hinton, and G. W. Taylor. The recurrent temporal restricted Boltzmann machine. In Advances in Neural Information Processing Systems 21, pages 1601–1608, 2008.
[22] Y. Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1–127, 2009. Also published as a book, Now Publishers, 2009.
Extracting and Learning an Unknown Grammar with
Recurrent Neural Networks
C.L. Giles*, C.B. Miller
NEC Research Institute
4 Independence Way
Princeton, NJ 08540
[email protected]
D. Chen, G.Z. Sun, H.H. Chen, Y.C. Lee
*Institute for Advanced Computer Studies
Dept of Physics and Astronomy
University of Maryland
College Park, MD 20742
Abstract
Simple second-order recurrent networks are shown to readily learn small known regular grammars when trained with positive and negative string examples. We show that similar methods are appropriate for learning unknown grammars from examples of their strings. The training algorithm is an incremental real-time, recurrent learning (RTRL) method that computes the complete gradient and updates the weights at the end of each string. After or during training, a dynamic clustering algorithm extracts the production rules that the neural network has learned. The methods are illustrated by extracting rules from unknown deterministic regular grammars. For many cases the extracted grammar outperforms the neural net from which it was extracted in correctly classifying unseen strings.
1 INTRODUCTION
For many reasons, there has been a long interest in "language" models of neural networks; see [Elman 1991] for an excellent discussion. The orientation of this work is somewhat different. The focus here is on what are good measures of the computational capabilities of recurrent neural networks. Since currently there is little theoretical knowledge, what problems would be "good" experimental benchmarks? For discrete inputs, a natural choice would be the problem of learning formal grammars - a "hard" problem even for regular grammars [Angluin, Smith 1982]. Strings of grammars can be presented one character at a time and strings can be of arbitrary length. However, the strings themselves would be, for the most part, feature independent. Thus, the learning capabilities would be, for the most part, feature independent and, therefore, insensitive to feature extraction choice.
The learning of known grammars by recurrent neural networks has shown promise, for example [Cleeremans, et al 1989], [Giles, et al 1990, 1991, 1992], [Pollack 1991], [Sun, et al 1990], [Watrous, Kuhn 1992a,b], [Williams, Zipser 1988]. But what about learning unknown grammars? We demonstrate in this paper that not only can unknown grammars be learned, but it is possible to extract the grammar from the neural network, both during and after training. Furthermore, the extraction process requires no a priori knowledge about the
grammar, except that the grammar's representation can be regular, which is always true for
a grammar of bounded string length, which is the grammatical "training sample."
2 FORMAL GRAMMARS
We give a brief introduction to grammars; for a more detailed explanation see [Hopcroft & Ullman, 1979]. We define a grammar as a 4-tuple (N, V, P, S) where N and V are nonterminal and terminal vocabularies, P is a finite set of production rules and S is the start symbol. All grammars we discuss are deterministic and regular. For every grammar there exists a language - the set of strings the grammar generates - and an automaton - the machine that recognizes (classifies) the grammar's strings. For regular grammars, the recognizing machine is a deterministic finite automaton (DFA). There exists a one-to-one mapping between a DFA and its grammar. Once the DFA is known, the production rules are the ordered triples (node, arc, node).
Grammatical inference [Fu 1982] is defined as the problem of finding (learning) a grammar from a finite set of strings, often called the training sample. One can interpret this problem as devising an inference engine that learns and extracts the grammar; see Figure 1.
[Figure 1 shows labelled strings flowing from an UNKNOWN GRAMMAR into an INFERENCE ENGINE (NEURAL NETWORK), with an extraction process yielding an INFERRED GRAMMAR.]
Figure 1: Grammatical inference
For a training sample of positive and negative strings and no knowledge of the unknown
regular grammar, the problem is NP-complete (for a summary, see [Angluin, Smith 1982]).
It is possible to construct an inference engine that consists of a recurrent neural network and
a rule extraction process that yields an inferred grammar.
3 RECURRENT NEURAL NETWORK

3.1 ARCHITECTURE
Our recurrent neural network is quite simple and can be considered as a simplified version of the model by [Elman 1991]. For an excellent discussion of recurrent networks, full of references that we don't have room for here, see [Hertz, et al 1991].
A fairly general expression for a recurrent network (which has the same computational power as a DFA) is:

S^{t+1} = F(S^t, I^t; W) ,
where F is a nonlinearity that maps the state neurons S^t and the input neurons I^t at time t to the next state S^{t+1} at time t+1. The weight matrix W parameterizes the mapping and is usually learned (however, it can be totally or partially programmed). A DFA has an analogous
mapping but does not use W. For a recurrent neural network we define the mapping F and
order of the mapping in the following manner [Lee, et al 1986]. For a first-order recurrent net:

S_i^{t+1} = σ( Σ_{j=1}^{N} W_ij S_j^t + Σ_{j=1}^{L} Y_ij I_j^t ) ,

where N is the number of hidden state neurons and L the number of input neurons; W_ij and Y_ij are the real-valued weights for, respectively, the state and input neurons; and σ is a stan-
dard sigmoid discriminant function. The values of the hidden state neurons S^t are defined in the finite N-dimensional space [0,1]^N. Assuming all weights are connected and the net is fully recurrent, the weight space complexity is bounded by O(N²+NL). Note that the input and state neurons are not the same neurons. This representation has the capability, assuming sufficiently large N and L, to represent any state machine. Note that there are non-trainable unit weights on the recurrent feedback connections.
The natural second-order extension of this recurrent net is:

S_i^{t+1} = σ( Σ_{j,k} W_ijk S_j^t I_k^t ) ,
where certain state neurons become input neurons. Note that the weights W_ijk modify a product of the hidden S_j and input I_k neurons. This quadratic form directly represents the state transition diagrams of a state automata process -- (input, state) ⇒ (next-state) -- and thus makes the state transition mapping very easy to learn. It also permits the net to be directly programmed to be a particular DFA. Unpublished experiments comparing first- and second-order recurrent nets confirm this ease-in-learning hypothesis. The space complexity (number of weights) is O(LN²). For L ≪ N, both first- and second-order are of the same complexity, O(N²).
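For illustration, a short NumPy sketch of the second-order state update follows; it is not the authors' code, and the sizes and random weights are assumptions only.

import numpy as np

def second_order_step(W, s, x):
    # s_i(t+1) = sigmoid( sum_{j,k} W[i,j,k] * s_j(t) * x_k(t) )
    return 1.0 / (1.0 + np.exp(-np.einsum('ijk,j,k->i', W, s, x)))

# Toy run: N=4 state neurons, L=3 input neurons (0, 1 and an end symbol).
rng = np.random.default_rng(0)
N, L = 4, 3
W = rng.uniform(-1.0, 1.0, (N, N, L))
s = np.zeros(N); s[0] = 1.0                  # initial state
for symbol in [0, 1, 1, 2]:                  # 2 plays the role of the end symbol
    x = np.zeros(L); x[symbol] = 1.0         # one-hot input encoding
    s = second_order_step(W, s, x)
print(s)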
3.2 SUPERVISED TRAINING & ERROR FUNCTION
The error function is defined by a special recurrent output neuron which is checked at the
end of each string presentation to see if it is on or off. By convention this output neuron
should be on if the string is a positive example of the grammar and off if negative. In practice an error tolerance decides the on and off criteria; see [Giles, et al 1991] for detail. [If a multiclass recognition is desired, another error scheme using many output neurons can be constructed.] We define two error cases: (1) the network fails to reject a negative string (the output neuron is on); (2) the network fails to accept a positive string (the output neuron is off). This accept or reject occurs at the end of each string - we define this problem as inference versus prediction. There is no prediction of the next character in the string sequence. As such, inference is a more difficult problem than prediction. If knowledge of the classification of every substring of every string exists and alphabetical training order is preserved, then the prediction and inference problems are equivalent.
The training method is real-time recurrent training (RTRL). For more details see [Williams,
Zipser 1988]. The error function is defined as:

E = (1/2) (Target − S_0^f)² ,
where S_0^f is the output neuron value at the final time step t = f, when the final character is presented, and Target is the desired value of (1, 0) for (positive, negative) examples. Using
gradient descent training, the weight update rule for a second-order recurrent net becomes:

ΔW_lmn = −α ∂E/∂W_lmn = α (Target − S_0^f) ∂S_0^f/∂W_lmn ,
where α is the learning rate. From the recursive network state equation we obtain the relationship between the derivatives of S^t and S^{t+1}:
∂S_i^t/∂W_lmn = σ′ [ δ_il S_m^{t−1} I_n^{t−1} + Σ_{j,k} W_ijk I_k^{t−1} ∂S_j^{t−1}/∂W_lmn ] ,
where σ′ is the derivative of the discriminant function. This permits on-line learning with partial derivatives calculated iteratively at each time step. Let ∂S^{t=0}/∂W_lmn = 0. Note that the space complexity is O(L²N⁴), which can be prohibitive for large N and full connectivity. It is important to note that for all training discussed here, the full gradient is calculated as given above.
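A hedged sketch of this forward gradient recursion is given below; P holds the table of partial derivatives ∂S_i/∂W_lmn, and the code is an illustration of the update, not the original implementation.

import numpy as np

def rtrl_step(W, s, x, P):
    # Forward state update and derivative recursion for the second-order net.
    # Shapes: W (N,N,L), s (N,), x (L,), P (N,N,N,L) with P[i,l,m,n] = dS_i/dW_lmn.
    s_new = 1.0 / (1.0 + np.exp(-np.einsum('ijk,j,k->i', W, s, x)))
    dsig = s_new * (1.0 - s_new)                       # sigmoid derivative
    term = np.einsum('ijk,k,jlmn->ilmn', W, x, P)      # sum_jk W_ijk x_k dS_j/dW_lmn
    delta = np.outer(s, x)                             # s_m * x_n
    for i in range(len(s)):
        term[i, i] += delta                            # delta_il * s_m * x_n
    return s_new, dsig[:, None, None, None] * term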
3.3 PRESENTATION OF TRAINING SAMPLES
The training data consists of a series of stimulus-response pairs, where the stimulus is a string of 0's and 1's, and the response is either "1" for positive examples or "0" for negative examples. The positive and negative strings are generated by an unknown source grammar
(created by a program that creates random grammars) prior to training. At each discrete
time step, one symbol from the string activates one input neuron, the other input neurons
are zero (one-hot encoding). Training is on-line and occurs after each string presentation;
there is no total error accumulation as in batch learning; contrast this to the batch method
of [Watrous, Kuhn 1992]. An extra end symbol is added to the string alphabet to give the
network more power in deciding the best final neuron state configuration. This requires another input neuron and does not increase the complexity of the DFA (only N² more
weights). The sequence of strings presented during training is very important and certainly
gives a bias in learning. We have performed many experiments that indicate that training
with alphabetical order with an equal distribution of positive and negative examples is
much faster and converges more often than random order presentation.
The training algorithm is on-line and incremental. A small portion of the training set is preselected and presented to the network. The net is trained at the end of each string presentation. Once the net has learned this small set or reaches a maximum number of epochs (set
before training, 1000 for experiments reported), a small number of strings (10) classified
incorrectly are chosen from the rest of the training set and added to the pre-selected set. This
small string increment prevents the training procedure from driving the network too far towards any local minima that the misclassified strings may represent. Another cycle of epoch training begins with the augmented training set. If the net correctly classifies all the
training data, the net is said to converge. The total number of cycles that the network is permitted to run is also limited, usually to about 20.
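The cycle structure can be sketched as follows; net and its fit/classifies_correctly methods are hypothetical stand-ins for the RTRL-trained network, and the default sizes are assumptions.

def incremental_training(net, train_set, start=16, add=10,
                         max_epochs=1000, max_cycles=20):
    # Train on a small working set, then repeatedly add a few misclassified
    # strings from the remaining sample until everything is classified.
    working = list(train_set[:start])
    for _ in range(max_cycles):
        net.fit(working, max_epochs=max_epochs)  # weight updates after each string
        wrong = [s for s in train_set if not net.classifies_correctly(s)]
        if not wrong:
            return True                           # the net is said to converge
        working += wrong[:add]
    return False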
4 RULE EXTRACTION (DFA GENERATION)
As the network is training (or after training), we apply a procedure we call dynamic state
partitioning (dsp) for extracting the network's current conception of the DFA it is learning
or has learned. The rule extraction process has the following steps: 1) clustering of DFA
states, 2) constructing a transition diagram by connecting these states together with the alphabet-labelled transitions, 3) putting these transitions together to make the full digraph forming cycles, and 4) reducing the digraph to a minimal representation. The hypothesis is
that during training, the network begins to partition (or quantize) its state space into fairly
well-separated, distinct regions or clusters, which represent corresponding states in some
DFA. See [Cleeremans, et al 1989] and [Watrous and Kuhn 1992a] for other clustering
methods. A simple way of finding these clusters is to divide each neuron's range [0,1] into
q partitions of equal size. For N state neurons, this gives q^N partition blocks. For example, for q=2, the values of S^t ≥ 0.5 are 1 and S^t < 0.5 are 0, and there are 2^N regions with 2^N possible values. Thus, for N hidden neurons, there exist q^N possible regions. The DFA is constructed by generating
a state transition diagram -- associating an input symbol with the set of hidden neuron partitions that it is currently in and the set of neuron partitions it activates. This ordered triple is also a production rule. The initial partition, or start state of the DFA, is determined from the initial value of S^{t=0}. If the next input symbol maps to the same partition, we assume a loop in the DFA. Otherwise, a new state in the DFA is formed. This constructed DFA may contain a maximum of q^N states; in practice it is usually much less, since not all neuron partition sets are ever reached. This is basically a tree-pruning method, and different DFA could be generated based on the choice of branching order. The extracted DFA can then be reduced to its minimal size using standard minimization algorithms (an O(N²) algorithm, where N is the number of DFA states) [Hopcroft, Ullman 1979]. [This minimization procedure does not change the grammar of the DFA; the unminimized DFA has the same time complexity as the minimized DFA. The process just rids the DFA of redundant, unnecessary states and reduces the space complexity.] Once the DFA is known, the production rules are easily extracted.
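A sketch of the extraction loop is shown below; net_step, which advances the analog state for one input symbol, is a hypothetical interface, and following a single analog state per quantized bin reflects the branching-order dependence noted above.

import numpy as np

def extract_dfa(net_step, s0, alphabet, q=2, max_states=10000):
    # Quantize each neuron's [0,1] range into q bins; each visited bin pattern
    # becomes a DFA state, and each (state, symbol, next-state) a production rule.
    quantize = lambda s: tuple(np.minimum((np.asarray(s) * q).astype(int), q - 1))
    start = quantize(s0)
    states = {start: 0}
    rules = {}
    frontier = [(start, np.asarray(s0))]
    while frontier and len(states) < max_states:
        key, s = frontier.pop()
        for a in alphabet:
            s2 = net_step(s, a)
            key2 = quantize(s2)
            if key2 not in states:
                states[key2] = len(states)
                frontier.append((key2, s2))
            rules[(states[key], a)] = states[key2]
    return states, rules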
Since many partition values of q are available, many DFA can be extracted. How is the q
that gives the best DFA chosen? Or viewed in another way, using different q, what DFA
gives the best representation of the grammar of the training set? One approach is to use different q's (starting with q=2), different branching orders, different runs with different numbers of neurons and different initial conditions, and see if any similar sets of DFA emerge. Choose the DFA whose similarity set has the smallest number of states and appears most often - an Occam's razor assumption. Define the guess of the DFA as DFAg. This method seems to work fairly well. Another is to see which of the DFA give the best performance on the training set, assuming that the training set is not perfectly learned. We have little experience with this method since we usually train to perfection on the training set. It should be noted that this DFA extraction method may be applied to any discrete-time recurrent net, regardless of network order or number of hidden layers. Preliminary results on first-order recurrent networks show that the same DFA are extracted as second-order, but the first-order nets are less likely to converge and take longer to converge than second-order.
5 SIMULATIONS - GRAMMARS LEARNED
Many different small (< 15 states) regular known grammars have been learned successfully with both first-order [Cleeremans, et al 1989] and second-order recurrent models [Giles, et al 91] and [Watrous, Kuhn 1992a]. In addition, [Giles, et al 1990 & 1991] and [Watrous, Kuhn 1992b] show how corresponding DFA and production rules can be extracted. However, for all of the above work, the grammars to be learned were already known. What is more interesting is the learning of unknown grammars.
In Figure 2b is a randomly generated minimal 10-state regular grammar created by a program in which the only inputs are the number of states of the unminimized DFA and the alphabet size p. (A good estimate of the number of possible unique DFA is n2ⁿn²ⁿ/n! [Alon, et al 1991], where n is the number of DFA states.) The shaded state is the start state, filled and dashed arcs represent 1 and 0 transitions, and all final states have a shaded outer circle. This unknown (honestly, we didn't look) DFA was learned with both 6 and 10 hidden state neuron second-order recurrent nets using the first 1000 strings in alphabetical training order (we could ask the unknown grammar for strings). Of two runs for both 10 and 6 neurons, both of the 10 and one of the 6 converged in less than 1000 epochs. (The initial weights were all randomly chosen between [1,−1] and the learning rate and momentum were both 0.5.) Figure 2a shows one of the unminimized DFA that was extracted for a partition parameter of q=2. The minimized 10-state DFA, Figure 2b, appeared for q=2 for one 10 neuron net and for q=2,3,4 of the converged 6 neuron net. Consequently, using our previous criteria, we chose this DFA as DFAg, our guess at the unknown grammar. We then asked
Figures 2a & 2b: Unminimized and minimized 10-state random grammar.
the program what the grammar was and discovered we were correct in our guess. The other
minimized DFA for different q's were all unique and usually very large (number of states > 100).
The trained recurrent nets were then checked for generalization errors on all strings up to
length 15. All made a small number of errors, usually less than 1% of the total of 65,535
strings. However, the correct extracted DFA was perfect and, of course, makes no errors on
strings of any length. Again, as in [Giles, et al 1991, 1992], the extracted DFA outperforms the
trained neural net from which the DFA was extracted.
In Figures 3a and 3b, we see the dynamics of DFA extraction as a 4 hidden neuron neural network is learning, as a function of epoch and partition size. This is for grammar Tomita-4 [Giles, et al 1991, 1992] - a 4-state grammar that rejects any string which has more than three 0's in a row. The number of states of the extracted DFA starts out small, then increases, and finally decreases to a constant value as the grammar is learned. As the partition q of
the neuron space increases, the number of minimized and unminimized states increases.
When the grammar is learned, the number of minimized states becomes constant and, as
expected, the number of minimized states, independent of q, becomes the number of states
in the grammar's DFA - 4.
6 CONCLUSIONS
Simple recurrent neural networks are capable of learning small regular unknown grammars
rather easily and generalize fairly well on unseen grammatical strings. The training results
are fairly independent of the initial values of the weights and numbers of neurons. For a
well-trained neural net, the generalization performance on long unseen strings can be perfect.
[Figure 3 plots: number of extracted DFA states, unminimized (left, 3a) and minimized (right, 3b), versus training epoch for several partition parameters q; triangles mark q=4.]
Figures 3a & 3b: Size of number of states (unminimized and minimized) of DFA versus training epoch for different partition parameter q. The correct state size is 4.
A heuristic algorithm called dynamic state partitioning was created to extract deterministic finite state automata (DFA) from the neural network, both during and after training. Using a standard DFA minimization algorithm, the extracted DFA can be reduced to an equivalent minimal-state DFA which has reduced space (not time) complexity. When the source or generating grammar is unknown, a good guess of the unknown grammar DFAg can be obtained from the minimal DFA that is most often extracted from different runs with different numbers of neurons and initial conditions. From the extracted DFA, minimal or not, the
production rules of the learned grammar are evident.
There are some interesting aspects of the extracted DFA. Each of the unminimized DFA seems to be unique, even those with the same number of states. For recurrent nets that converge, it is often possible to extract DFA that are perfect, i.e. the grammar of the unknown source grammar. For these cases all unminimized DFA whose minimal sizes have the same number of states constitute a large equivalence class of neural-net-generated DFA, and have the same performance on string classification. This equivalence class extends across neural networks which vary both in size (number of neurons) and initial conditions. Thus, the extracted DFA gives a good indication of how well the neural network learns the grammar.
In fact, for most of the trained neural nets, the extracted DFAg outperforms the trained neural networks in classification of unseen strings. (By definition, a perfect DFA will correctly classify all unseen strings.) This is not surprising due to the possibility of error accumulation as the neural network classifies long unseen strings [Pollack 1991]. However, when the neural network has learned the grammar well, its generalization performance can be perfect on all strings tested [Giles, et al 1991, 1992]. Thus, the neural network can be considered as a tool for extracting a DFA that is representative of the unknown grammar. Once the DFAg is obtained, it can be used independently of the trained neural network.
The learning of small DFA using second-order techniques and the full gradient computation reported here and elsewhere [Giles, et al 1991, 1992], [Watrous, Kuhn 1992a, 1992b] gives a strong impetus to using these techniques for learning DFA. The question of DFA state capacity and scalability is unresolved. Further work must show how well these ap-
proaches can model grammars with large numbers of states and establish a theoretical and
experimental relationship between DFA state capacity and neural net size.
Acknowledgments
The authors acknowledge useful and helpful discussions with E. Baum, M. Goudreau, G.
Kuhn, K. Lang, L. Valiant, and R. Watrous. The University of Maryland authors gratefully
acknowledge partial support from AFOSR and DARPA.
References
N. Alon, A.K. Dewdney, and T.J. Ott, Efficient Simulation of Finite Automata by Neural Nets, Journal of the ACM, Vol 38, p. 495 (1991).
D. Angluin, C.H. Smith, Inductive Inference: Theory and Methods, ACM Computing Surveys, Vol 15, No 3, p. 237 (1983).
A. Cleeremans, D. Servan-Schreiber, J. McClelland, Finite State Automata and Simple Recurrent Networks, Neural Computation, Vol 1, No 3, p. 372 (1989).
J.L. Elman, Distributed Representations, Simple Recurrent Networks, and Grammatical Structure, Machine Learning, Vol 7, No 2/3, p. 91 (1991).
K.S. Fu, Syntactic Pattern Recognition and Applications, Prentice-Hall, Englewood Cliffs, NJ, Ch. 10 (1982).
C.L. Giles, G.Z. Sun, H.H. Chen, Y.C. Lee, D. Chen, Higher Order Recurrent Networks & Grammatical Inference, Advances in Neural Information Processing Systems 2, D.S. Touretzky (ed), Morgan Kaufmann, San Mateo, CA, p. 380 (1990).
C.L. Giles, D. Chen, C.B. Miller, H.H. Chen, G.Z. Sun, Y.C. Lee, Grammatical Inference Using Second-Order Recurrent Neural Networks, Proceedings of the International Joint Conference on Neural Networks, IEEE 91CH3049-4, Vol 2, p. 357 (1991).
C.L. Giles, C.B. Miller, D. Chen, H.H. Chen, G.Z. Sun, Y.C. Lee, Learning and Extracting Finite State Automata with Second-Order Recurrent Neural Networks, Neural Computation, accepted for publication (1992).
J. Hertz, A. Krogh, R.G. Palmer, Introduction to the Theory of Neural Computation, Addison-Wesley, Redwood City, CA, Ch. 7 (1991).
J.E. Hopcroft, J.D. Ullman, Introduction to Automata Theory, Languages, and Computation, Addison-Wesley, Reading, MA (1979).
Y.C. Lee, G. Doolen, H.H. Chen, G.Z. Sun, T. Maxwell, H.Y. Lee, C.L. Giles, Machine Learning Using a Higher Order Correlational Network, Physica D, Vol 22-D, No 1-3, p. 276 (1986).
J.B. Pollack, The Induction of Dynamical Recognizers, Machine Learning, Vol 7, No 2/3, p. 227 (1991).
G.Z. Sun, H.H. Chen, C.L. Giles, Y.C. Lee, D. Chen, Connectionist Pushdown Automata that Learn Context-Free Grammars, Proceedings of the International Joint Conference on Neural Networks, Washington D.C., Lawrence Erlbaum Pub., Vol I, p. 577 (1990).
R.L. Watrous, G.M. Kuhn, Induction of Finite-State Languages Using Second-Order Recurrent Networks, Neural Computation, accepted for publication (1992a) and these proceedings (1992b).
R.J. Williams, D. Zipser, A Learning Algorithm for Continually Running Fully Recurrent Neural Networks, Neural Computation, Vol 1, No 2, p. 270 (1989).
Convolutional Neural Network Architectures for
Matching Natural Language Sentences
Baotian Hu†*    Zhengdong Lu‡    Hang Li‡    Qingcai Chen†
† Department of Computer Science & Technology, Harbin Institute of Technology Shenzhen Graduate School, Xili, China
‡ Noah's Ark Lab, Huawei Technologies Co. Ltd., Sha Tin, Hong Kong
[email protected]  [email protected]  [email protected]  [email protected]
Abstract
Semantic matching is of central importance to many natural language tasks [2, 28].
A successful matching algorithm needs to adequately model the internal structures
of language objects and the interaction between them. As a step toward this goal,
we propose convolutional neural network models for matching two sentences, by
adapting the convolutional strategy in vision and speech. The proposed models
not only nicely represent the hierarchical structures of sentences with their layer-by-layer composition and pooling, but also capture the rich matching patterns at
different levels. Our models are rather generic, requiring no prior knowledge on
language, and can hence be applied to matching tasks of different nature and in
different languages. The empirical study on a variety of matching tasks demonstrates the efficacy of the proposed models and their
superiority to competitor models.
1 Introduction
Matching two potentially heterogeneous language objects is central to many natural language applications [28, 2]. It generalizes the conventional notion of similarity (e.g., in paraphrase identification
[19]) or relevance (e.g., in information retrieval [27]), since it aims to model the correspondence between "linguistic objects" of different nature at different levels of abstractions. Examples include
top-k re-ranking in machine translation (e.g., comparing the meanings of a French sentence and an
English sentence [5]) and dialogue (e.g., evaluating the appropriateness of a response to a given
utterance [26]).
Natural language sentences have complicated structures, both sequential and hierarchical, that are
essential for understanding them. A successful sentence-matching algorithm therefore needs to
capture not only the internal structures of sentences but also the rich patterns in their interactions.
Towards this end, we propose deep neural network models, which adapt the convolutional strategy
(proven successful on image [11] and speech [1]) to natural language. To further explore the relation
between representing sentences and matching them, we devise a novel model that can naturally
host both the hierarchical composition for sentences and the simple-to-comprehensive fusion of
matching patterns with the same convolutional architecture. Our model is generic, requiring no
prior knowledge of natural language (e.g., parse tree) and putting essentially no constraints on the
matching tasks. This is part of our continuing effort¹ in understanding natural language objects and
the matching between them [13, 26].
* The work was done when the first author worked as an intern at Noah's Ark Lab, Huawei Technologies.
¹ Our project page: http://www.noahlab.com.hk/technology/Learning2Match.html
Our main contributions can be summarized as follows. First, we devise novel deep convolutional network architectures that can naturally combine 1) the hierarchical sentence modeling through
layer-by-layer composition and pooling, and 2) the capturing of the rich matching patterns at different levels of abstraction; Second, we perform extensive empirical study on tasks with different
scales and characteristics, and demonstrate the superior power of the proposed architectures over
competitor methods.
Roadmap We start by introducing a convolution network in Section 2 as the basic architecture for
sentence modeling, and how it is related to existing sentence models. Based on that, in Section 3,
we propose two architectures for sentence matching, with a detailed discussion of their relation. In
Section 4, we briefly discuss the learning of the proposed architectures. Then in Section 5, we report
our empirical study, followed by a brief discussion of related work in Section 6.
2 Convolutional Sentence Model
We start with proposing a new convolutional architecture for modeling sentences. As illustrated
in Figure 1, it takes as input the embedding of words (often trained beforehand with unsupervised
methods) in the sentence aligned sequentially, and summarizes the meaning of a sentence through
layers of convolution and pooling, until reaching a fixed length vectorial representation in the final
layer. As in most convolutional models [11, 1], we use convolution units with a local "receptive
field" and shared weights, but we design a large feature map to adequately model the rich structures
in the composition of words.
Figure 1: The overall architecture of the convolutional sentence model. A box with dashed lines
indicates all-zero padding turned off by the gating function (see top of Page 3).
Convolution As shown in Figure 1, the convolution in Layer-1 operates on sliding windows of
words (width $k_1$), and the convolutions in deeper layers are defined in a similar way. Generally, with
sentence input $\mathbf{x}$, the convolution unit for feature map of type-$f$ (among $F_\ell$ of them) on Layer-$\ell$ is
$$z_i^{(\ell,f)} \overset{\text{def}}{=} z_i^{(\ell,f)}(\mathbf{x}) = \sigma\big(\mathbf{w}^{(\ell,f)}\,\hat{\mathbf{z}}_i^{(\ell-1)} + b^{(\ell,f)}\big), \quad f = 1, 2, \cdots, F_\ell \qquad (1)$$
and its matrix form is $\mathbf{z}_i^{(\ell)} \overset{\text{def}}{=} \mathbf{z}_i^{(\ell)}(\mathbf{x}) = \sigma\big(\mathbf{W}^{(\ell)}\,\hat{\mathbf{z}}_i^{(\ell-1)} + \mathbf{b}^{(\ell)}\big)$, where
- $z_i^{(\ell,f)}(\mathbf{x})$ gives the output of the feature map of type-$f$ for location $i$ in Layer-$\ell$;
- $\mathbf{w}^{(\ell,f)}$ is the parameters for $f$ on Layer-$\ell$, with matrix form $\mathbf{W}^{(\ell)} \overset{\text{def}}{=} [\mathbf{w}^{(\ell,1)}, \cdots, \mathbf{w}^{(\ell,F_\ell)}]$;
- $\sigma(\cdot)$ is the activation function (e.g., Sigmoid or ReLU [7]);
- $\hat{\mathbf{z}}_i^{(\ell-1)}$ denotes the segment of Layer-$\ell{-}1$ for the convolution at location $i$, while
  $$\hat{\mathbf{z}}_i^{(0)} \overset{\text{def}}{=} \mathbf{x}_{i:i+k_1-1} = [\mathbf{x}_i^\top, \mathbf{x}_{i+1}^\top, \cdots, \mathbf{x}_{i+k_1-1}^\top]^\top$$
  concatenates the vectors for $k_1$ (width of sliding window) words from sentence input $\mathbf{x}$.
Max-Pooling We take a max-pooling in every two-unit window for every $f$, after each convolution:
$$z_i^{(\ell,f)} = \max\big(z_{2i-1}^{(\ell-1,f)},\; z_{2i}^{(\ell-1,f)}\big), \quad \ell = 2, 4, \cdots.$$
The effects of pooling are two-fold: 1) it shrinks the size of the representation by half, thus quickly
absorbs the differences in length for sentence representation, and 2) it filters out undesirable composition of words (see Section 2.1 for some analysis).
Length Variability The variable length of sentences in a fairly broad range can be readily handled
with the convolution and pooling strategy. More specifically, we put all-zero padding vectors after
the last word of the sentence until the maximum length. To eliminate the boundary effect caused
by the great variability of sentence lengths, we add to the convolutional unit a gate which sets the
output vectors to all-zeros if the input is all zeros. For any given sentence input x, the output of
the type-$f$ filter for location $i$ in the $\ell$th layer is given by
$$z_i^{(\ell,f)} \overset{\text{def}}{=} z_i^{(\ell,f)}(\mathbf{x}) = g(\hat{\mathbf{z}}_i^{(\ell-1)}) \cdot \sigma\big(\mathbf{w}^{(\ell,f)}\,\hat{\mathbf{z}}_i^{(\ell-1)} + b^{(\ell,f)}\big), \qquad (2)$$
where g(v) = 0 if all the elements in vector v equals 0, otherwise g(v) = 1. This gate, working
with max-pooling and positive activation function (e.g., Sigmoid), keeps away the artifacts from
padding in all layers. Actually it creates a natural hierarchy of all-zero padding (as illustrated in
Figure 1), consisting of nodes in the neural net that would not contribute in the forward process (as
in prediction) and backward propagation (as in learning).
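To make the computation above concrete, here is a minimal NumPy sketch of one convolution-gate-pooling stage, i.e., Eq. (2) followed by the two-unit max-pooling. It is an illustrative reading of the equations, not the authors' implementation; all names are ours, and ReLU is used for the activation.

```python
# Minimal sketch of one stage of the convolutional sentence model: gated
# convolution over k-word windows (Eq. 2) followed by max-pooling in every
# two-unit window. Illustrative only; not the authors' code.
import numpy as np

def conv_gate_pool(z_prev, W, b, k=3):
    """z_prev: (n, d) array, one d-dim vector per location, with all-zero
    rows as padding. W: (F, k*d) filter bank. b: (F,) bias. Returns the
    pooled feature maps, one row per pooling window."""
    n = z_prev.shape[0]
    rows = []
    for i in range(n - k + 1):
        z_hat = z_prev[i:i + k].reshape(-1)       # concatenate the k-window
        g = 1.0 if z_hat.any() else 0.0           # gate: all-zero input -> 0
        rows.append(g * np.maximum(W @ z_hat + b, 0.0))
    z = np.array(rows)
    if len(z) % 2:                                # pad so windows pair up
        z = np.vstack([z, np.zeros((1, z.shape[1]))])
    return np.maximum(z[0::2], z[1::2])           # two-unit max-pooling

# toy run: 6 words of 4-dim embeddings, padded to length 8
x = np.vstack([np.random.randn(6, 4), np.zeros((2, 4))])
W, b = np.random.randn(5, 3 * 4), np.random.randn(5)
print(conv_gate_pool(x, W, b).shape)              # -> (3, 5)
```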
2.1 Some Analysis on the Convolutional Architecture
The convolutional unit, when combined with max-pooling, can act as the compositional operator with local selection mechanism as in the recursive autoencoder [21]. Figure 2 gives an example of what could happen on the first two layers with input sentence "The cat sat on the mat". Just for illustration purposes, we present a dramatic choice of parameters (by turning off some elements in $\mathbf{W}^{(1)}$) to make the convolution units focus on different segments within a 3-word window. For example, some feature maps (group 2) give compositions for "the cat" and "cat sat", each being a vector. Different feature maps offer a variety of compositions, with confidence encoded in the values (color coded in the output of the convolution layer in Figure 2). The pooling then chooses, for each composition type, between two adjacent sliding windows, e.g., between "on the" and "the mat" for feature maps group 2 from the rightmost two sliding windows.

Figure 2: The cat example, where in the convolution layer, gray color indicates less confidence in composition.
Relation to Recursive Models Our convolutional model differs from Recurrent Neural Network
(RNN, [15]) and Recursive Auto-Encoder (RAE, [21]) in several important ways. First, unlike
RAE, it does not take a single path of word/phrase composition determined either by a separate
gating function [21], an external parser [19], or just natural sequential order [20]. Instead, it takes
multiple choices of composition via a large feature map (encoded in $\mathbf{w}^{(\ell,f)}$ for different $f$), and
leaves the choices to the pooling afterwards to pick the more appropriate segments (in every adjacent
two) for each composition. With any window width $k_\ell \geq 3$, the type of composition would be much
richer than that of RAE. Second, our convolutional model can take supervised training and tune
the parameters for a specific task, a property vital to our supervised learning-to-match framework.
However, unlike recursive models [20, 21], the convolutional architecture has a fixed depth, which
bounds the level of composition it could do. For tasks like matching, this limitation can be largely
compensated with a network afterwards that can take a "global" synthesis on the learned sentence
representation.
Relation to "Shallow" Convolutional Models The proposed convolutional sentence model subsumes simple architectures such as [18, 10] (essentially the same convolutional architecture as SENNA [6]), which consist of a convolution layer and a max-pooling over the entire sentence for each feature map. This type of model, with local convolutions and a global pooling, essentially does a "soft" local template matching and is able to detect local features useful for a certain task. Since the sentence-level sequential order is inevitably lost in the global pooling, the model is incapable of modeling more complicated structures. It is not hard to see that our convolutional model degenerates to the SENNA-type architecture if we limit the number of layers to two and set the pooling window infinitely large.
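For contrast with the architecture above, here is a sketch of such a SENNA-type "shallow" model: one convolution whose feature maps are max-pooled over the entire sentence, so sentence-level order is discarded. The names are ours, not from [6, 18].

```python
# Sketch of a SENNA-type sentence vector: local convolution, then a single
# global max over all window positions per feature map. Illustrative only.
import numpy as np

def senna_vector(x, W, b, k=3):
    """x: (n, d) word embeddings; W: (F, k*d); b: (F,). Returns (F,)."""
    windows = np.array([np.maximum(W @ x[i:i + k].reshape(-1) + b, 0.0)
                        for i in range(len(x) - k + 1)])
    return windows.max(axis=0)   # global max-pooling loses word order
```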
3 Convolutional Matching Models
Based on the discussion in Section 2, we propose two related convolutional architectures, namely Architecture-I (ARC-I) and Architecture-II (ARC-II), for matching two sentences.
3.1 Architecture-I (ARC-I)
Architecture-I (ARC-I), as illustrated in Figure 3, takes a conventional approach: It first finds the representation of each sentence, and then compares the representations for the two sentences with a multi-layer perceptron (MLP) [3]. It is essentially the Siamese architecture introduced in [2, 11], which has been applied to different tasks as a nonlinear similarity function [23]. Although ARC-I enjoys the flexibility brought by the convolutional sentence model, it suffers from a drawback inherited from the Siamese architecture: it defers the interaction between two sentences (in the final MLP) until their individual representations mature (in the convolution model), and therefore runs the risk of losing details (e.g., a city name) important for the matching task in representing the sentences. In other words, in the forward phase (prediction), the representation of each sentence is formed without knowledge of the other. This cannot be adequately circumvented in the backward phase (learning), when the convolutional model learns to extract structures informative for matching on a population level.

Figure 3: Architecture-I for matching two sentences.
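A small sketch of ARC-I's final stage, under the assumption of a single hidden layer in the MLP (the MLP size is a hyperparameter, not fixed by the text above): the two fixed-length sentence vectors are concatenated and scored. All names are illustrative.

```python
# Sketch of the ARC-I scorer: sentence vectors vx, vy (e.g., from stacked
# conv_gate_pool stages, flattened) are compared by an MLP. Illustrative.
import numpy as np

def arc1_score(vx, vy, W1, b1, w2, b2):
    """s(x, y) = w2 . ReLU(W1 [vx; vy] + b1) + b2."""
    h = np.maximum(W1 @ np.concatenate([vx, vy]) + b1, 0.0)
    return float(w2 @ h + b2)
```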
3.2 Architecture-II (ARC-II)
In view of the drawback of Architecture-I, we propose Architecture-II (ARC-II) that is built directly on the interaction space between two sentences. It has the desirable property of letting two sentences meet before their own high-level representations mature, while still retaining the space for the individual development of abstraction of each sentence. Basically, in Layer-1, we take sliding windows on both sentences, and model all the possible combinations of them through "one-dimensional" (1D)
convolutions. For segment $i$ on $S_X$ and segment $j$ on $S_Y$, we have the feature map
$$z_{i,j}^{(1,f)} \overset{\text{def}}{=} z_{i,j}^{(1,f)}(\mathbf{x}, \mathbf{y}) = g(\hat{\mathbf{z}}_{i,j}^{(0)}) \cdot \sigma\big(\mathbf{w}^{(1,f)}\,\hat{\mathbf{z}}_{i,j}^{(0)} + b^{(1,f)}\big), \qquad (3)$$
where $\hat{\mathbf{z}}_{i,j}^{(0)} \in \mathbb{R}^{2 k_1 D_e}$ simply concatenates the vectors for sentence segments for $S_X$ and $S_Y$:
$$\hat{\mathbf{z}}_{i,j}^{(0)} = [\mathbf{x}_{i:i+k_1-1}^\top,\; \mathbf{y}_{j:j+k_1-1}^\top]^\top.$$
Clearly the 1D convolution preserves the location information about both segments. After that in
Layer-2, it performs a 2D max-pooling in non-overlapping $2 \times 2$ windows (illustrated in Figure 5):
$$z_{i,j}^{(2,f)} = \max\big(\{z_{2i-1,2j-1}^{(1,f)},\; z_{2i-1,2j}^{(1,f)},\; z_{2i,2j-1}^{(1,f)},\; z_{2i,2j}^{(1,f)}\}\big). \qquad (4)$$
In Layer-3, we perform a 2D convolution on $k_3 \times k_3$ windows of output from Layer-2:
$$z_{i,j}^{(3,f)} = g(\hat{\mathbf{z}}_{i,j}^{(2)}) \cdot \sigma\big(\mathbf{W}^{(3,f)}\,\hat{\mathbf{z}}_{i,j}^{(2)} + b^{(3,f)}\big). \qquad (5)$$
This could go on for more layers of 2D convolution and 2D max-pooling, analogous to that of
convolutional architecture for image input [11].
The 2D-Convolution After the first convolution, we obtain a low-level representation of the interaction between the two sentences, and from then on we obtain a high-level representation $z_{i,j}^{(\ell)}$ which encodes the information from both sentences. The general two-dimensional convolution is formulated as
$$z_{i,j}^{(\ell)} = g(\hat{\mathbf{z}}_{i,j}^{(\ell-1)}) \cdot \sigma\big(\mathbf{W}^{(\ell)}\,\hat{\mathbf{z}}_{i,j}^{(\ell-1)} + \mathbf{b}^{(\ell)}\big), \quad \ell = 3, 5, \cdots \qquad (6)$$
where $\hat{\mathbf{z}}_{i,j}^{(\ell-1)}$ concatenates the corresponding vectors from its 2D receptive field in Layer-$\ell{-}1$. This
pooling has a different mechanism than in the 1D case, for it selects not only among compositions on
different segments but also among different local matchings. This pooling strategy resembles the
dynamic pooling in [19] in a similarity learning context, but with two distinctions: 1) it happens on
a fixed architecture and 2) it has much richer structure than just similarity.
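The NumPy sketch below spells out Eqs. (3)-(4): Layer-1 builds the gated interaction tensor over all segment pairs of the two sentences, and Layer-2 applies non-overlapping 2x2 max-pooling; deeper layers (Eqs. 5-6) repeat convolution and pooling on the resulting 3D tensor. Names and sizes are illustrative assumptions, not the authors' code.

```python
# Sketch of ARC-II Layer-1 (gated 1D convolution on segment pairs, Eq. 3)
# and Layer-2 (2x2 max-pooling, Eq. 4). Illustrative only.
import numpy as np

def arc2_layer1(x, y, W, b, k=3):
    """x: (nx, d), y: (ny, d) embeddings; W: (F, 2*k*d); b: (F,).
    Returns the (nx-k+1, ny-k+1, F) interaction tensor."""
    nx, ny = len(x), len(y)
    z = np.zeros((nx - k + 1, ny - k + 1, W.shape[0]))
    for i in range(nx - k + 1):
        for j in range(ny - k + 1):
            z_hat = np.concatenate([x[i:i + k].reshape(-1),
                                    y[j:j + k].reshape(-1)])
            if z_hat.any():                       # gate g(.)
                z[i, j] = np.maximum(W @ z_hat + b, 0.0)
    return z

def pool_2x2(z):
    """Non-overlapping 2x2 max-pooling over the first two axes."""
    a, c = (z.shape[0] // 2) * 2, (z.shape[1] // 2) * 2
    z = z[:a, :c]
    return np.maximum.reduce([z[0::2, 0::2], z[0::2, 1::2],
                              z[1::2, 0::2], z[1::2, 1::2]])

x, y = np.random.randn(8, 4), np.random.randn(10, 4)
W, b = np.random.randn(6, 2 * 3 * 4), np.random.randn(6)
print(pool_2x2(arc2_layer1(x, y, W, b)).shape)    # -> (3, 4, 6)
```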
Figure 4: Architecture-II (ARC-II) of the convolutional matching model
3.3 Some Analysis on ARC-II
Order Preservation Both the convolution and pooling operations in Architecture-II have this order-preserving property. Generally, $z_{i,j}^{(\ell)}$ contains information about the words in $S_X$ before those in $z_{i+1,j}^{(\ell)}$, although they may be generated with slightly different segments in $S_Y$, due to the 2D pooling (illustrated in Figure 5). The order is, however, retained in a "conditional" sense. Our experiments show that when ARC-II is trained on the $(S_X, S_Y, \tilde{S}_Y)$ triples where $\tilde{S}_Y$ randomly shuffles the words in $S_Y$, it consistently gains some ability of finding the correct $S_Y$ in the usual contrastive negative sampling setting, which however does not happen with ARC-I.

Figure 5: Order preserving in 2D-pooling.
Model Generality It is not hard to show that ARC-II actually subsumes ARC-I as a special case. Indeed, in ARC-II if we choose (by turning off some parameters in $\mathbf{W}^{(\ell,\cdot)}$) to keep the representations of the two sentences separated until the final MLP, ARC-II can actually act fully like ARC-I, as illustrated in Figure 6. More specifically, if we let the feature maps in the first convolution layer be either devoted to $S_X$ or devoted to $S_Y$ (instead of taking both as in the general case), the output of each segment-pair is naturally divided into two corresponding groups. As a result, the output for each filter $f$, denoted $z_{1:n,1:n}^{(1,f)}$ ($n$ is the number of sliding windows), will be of rank one, possessing essentially the same information as the result of the first convolution layer in ARC-I. Clearly the 2D pooling that follows will reduce to 1D pooling, with this separateness preserved. If we further limit the parameters in the second convolution units (more specifically $\mathbf{w}^{(2,f)}$) to those for $S_X$ and $S_Y$, we can ensure the individual development of different levels of abstraction on each side, and fully recover the functionality of ARC-I.

Figure 6: ARC-I as a special case of ARC-II. Better viewed in color.
As suggested by the order-preserving property and the generality of ARC-II, this architecture offers not only the capability but also the inductive bias for the individual development of internal abstraction on each sentence, despite the fact that it is built on the interaction between two sentences. As a result, ARC-II can naturally blend two seemingly diverging processes: 1) the successive composition within each sentence, and 2) the extraction and fusion of matching patterns between them, and hence is powerful for matching linguistic objects with rich structures. This intuition is verified by the superior performance of ARC-II in experiments (Section 5) on different matching tasks.
4 Training
We employ a discriminative training strategy with a large margin objective. Suppose that we are given the following triples $(\mathbf{x}, \mathbf{y}^+, \mathbf{y}^-)$ from the oracle, with $\mathbf{x}$ matched with $\mathbf{y}^+$ better than with $\mathbf{y}^-$. We have the following ranking-based loss as objective:
$$e(\mathbf{x}, \mathbf{y}^+, \mathbf{y}^-; \Theta) = \max\big(0,\; 1 + s(\mathbf{x}, \mathbf{y}^-) - s(\mathbf{x}, \mathbf{y}^+)\big),$$
where $s(\mathbf{x}, \mathbf{y})$ is the predicted matching score for $(\mathbf{x}, \mathbf{y})$, and $\Theta$ includes the parameters for the convolution layers and those for the MLP. The optimization is relatively straightforward for both architectures with standard back-propagation. The gating function (see Section 2) can be easily adopted into the gradient by discounting the contribution from convolution units that have been turned off by the gating function. In other words, we use stochastic gradient descent for the optimization of models. All the proposed models perform better with mini-batches (100~200 in size), which can be easily parallelized on a single machine with multiple cores. For regularization, we find that for both architectures, early stopping [16] is enough for models with medium size and large training sets (with over 500K instances). For small datasets (less than 10K training instances) however, we have to combine early stopping and dropout [8] to deal with the serious overfitting problem.
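The loss itself is compact enough to state in code; the snippet below is a direct transcription with illustrative names, where `s_pos` and `s_neg` are the scores $s(\mathbf{x}, \mathbf{y}^+)$ and $s(\mathbf{x}, \mathbf{y}^-)$ from either architecture.

```python
# Large-margin ranking loss over a triple (x, y+, y-); zero-loss triples
# contribute no gradient, so training focuses on violated rankings.
def ranking_hinge(s_pos, s_neg, margin=1.0):
    return max(0.0, margin + s_neg - s_pos)
```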
We use 50-dimensional word embeddings trained with Word2Vec [14]: the embedding for English words (Sections 5.2 & 5.4) is learnt on Wikipedia (~1B words), while that for Chinese words (Section 5.3) is learnt on Weibo data (~300M words). Our other experiments (results omitted here) suggest that fine-tuning the word embedding can further improve the performances of all models, at the cost of longer training. We vary the maximum length (in words) for different tasks to cope with their longest sentences. We use a 3-word window throughout all experiments², but test various numbers of feature maps (typically from 200 to 500) for optimal performance. ARC-II models for all tasks have eight layers (three for convolution, three for pooling, and two for MLP), while ARC-I performs better with fewer layers (two for convolution, two for pooling, and two for MLP) and more hidden nodes. We use ReLU [7] as the activation function for all models (convolution and MLP), which yields comparable or better results than sigmoid-like functions, but converges faster.
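As a hedged pointer for reproduction: 50-dimensional Word2Vec embeddings of the kind described above can be trained, for example, with the gensim library. The corpus below is a placeholder and every parameter other than the dimensionality is our assumption, not the authors' setting.

```python
# Example of training 50-dim Word2Vec vectors with gensim (>=4.0 API).
from gensim.models import Word2Vec

corpus = [["the", "cat", "sat", "on", "the", "mat"]]   # placeholder corpus
model = Word2Vec(corpus, vector_size=50, min_count=1)
vec = model.wv["cat"]                                   # a 50-dim vector
```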
5 Experiments
We report the performance of the proposed models on three matching tasks of different nature, and
compare it with that of other competitor models. Among them, the first two tasks (namely, Sentence
Completion and Tweet-Response Matching) are about matching of language objects of heterogeneous
natures, while the third one (paraphrase identification) is a natural example of matching homogeneous objects. Moreover, the three tasks involve two languages, different types of matching, and
distinctive writing styles, proving the broad applicability of the proposed models.
5.1 Competitor Methods
- WordEmbed: We first represent each short text as the sum of the embeddings of the words it contains. The matching score of two short texts is calculated with an MLP with the embeddings of the two documents as input;
- DeepMatch: We take the matching model in [13] and train it on our datasets with 3 hidden layers and 1,000 hidden nodes in the first hidden layer;
- uRAE+MLP: We use the Unfolding Recursive Autoencoder [19]³ to get a 100-dimensional vector representation of each sentence, and put an MLP on the top as in WordEmbed;
- SENNA+MLP/sim: We use the SENNA-type sentence model for sentence representation;
- SenMLP: We take the whole sentence as input (with word embeddings aligned sequentially), and use an MLP to obtain the score of coherence.

² Our other experiments suggest that the performance can be further increased with wider windows.
³ Code from: http://nlp.stanford.edu/~socherr/classifyParaphrases.zip
All the competitor models are trained on the same training set as the proposed models, and we report
the best test performance over different choices of models (e.g., the number and size of hidden layers
in MLP).
5.2 Experiment I: Sentence Completion
This is an artificial task designed to elucidate how different matching models can capture the correspondence between two clauses within a sentence. Basically, we take a sentence from Reuters [12] with two "balanced" clauses (with 8~28 words) divided by one comma, and use the first
clause as SX and the second as SY . The task is then to recover the original second clause for any
given first clause. The matching here is considered heterogeneous since the relation between the
two is nonsymmetrical on both lexical and semantic levels. We deliberately make the task harder
by using negative second clauses similar to the original ones⁴, both in training and testing. One
representative example is given as follows:
SX: Although the state has only four votes in the Electoral College,
SY+: its loss would be a symbolic blow to republican presidential candidate Bob Dole.
SY−: but it failed to garner enough votes to override an expected veto by president Clinton.

Table 1: Sentence Completion.
Model | P@1(%)
Random Guess | 20.00
DeepMatch | 32.5
WordEmbed | 37.63
SenMLP | 36.14
SENNA+MLP | 41.56
uRAE+MLP | 25.76
ARC-I | 47.51
ARC-II | 49.62

All models are trained on 3 million triples (from 600K positive pairs), and tested on 50K positive pairs, each accompanied by four negatives, with results shown in Table 1. The two proposed models get nearly half of the cases right⁵, with large margins over other sentence models and models without explicit sequence
modeling. ARC-II outperforms ARC-I significantly, showing the power of joint modeling of matching and sentence meaning. As another convolutional model, SENNA+MLP performs fairly well on this task, although still running behind the proposed convolutional architectures since it is too shallow to adequately model the sentence. It is a bit surprising that uRAE comes last on this task,
which might be caused by the facts that 1) the representation model (including word-embedding) is
not trained on Reuters, and 2) the split-sentence setting hurts the parsing, which is vital to the quality
of learned sentence representation.
5.3 Experiment II: Matching A Response to A Tweet
We trained our model with 4.5 million original (tweet, response) pairs collected from Weibo, a major Chinese microblog service [26]. Compared to Experiment I, the writing style is obviously more free and informal. For each positive pair, we find ten random responses as negative examples, rendering 45 million triples for training. One example (translated to English) is given below, with SX standing for the tweet, SY+ the original response, and SY− the randomly selected response:

SX: Damn, I have to work overtime this weekend!
SY+: Try to have some rest buddy.
SY−: It is hard to find a job, better start polishing your resume.

Table 2: Tweet Matching.
Model | P@1(%)
Random Guess | 20.00
DeepMatch | 49.85
WordEmbed | 54.31
SenMLP | 52.22
SENNA+MLP | 56.48
ARC-I | 59.18
ARC-II | 61.95
We hold out 300K original (tweet, response) pairs and test the matching model on their ability to
pick the original response from four random negatives, with results reported in Table 2. This task
is slightly easier than Experiment I, with more training instances and purely random negatives. It requires less grammatical rigor but more detailed modeling of loose and local matching patterns (e.g., work-overtime → rest). Again ARC-II beats the other models with large margins, while the two convolutional sentence models ARC-I and SENNA+MLP come next.
⁴ We select from a random set the clauses that have 0.7~0.8 cosine similarity with the original. The dataset and more information can be found from http://www.noahlab.com.hk/technology/Learning2Match.html
⁵ Actually ARC-II can achieve 74+% accuracy with random negatives.
5.4 Experiment III: Paraphrase Identification
Paraphrase identification aims to determine whether two sentences have the same meaning, a problem considered a touchstone of natural language understanding. This experiment is included to test our methods on matching homogeneous objects. Here we use the benchmark MSRP dataset [17], which contains 4,076 instances for training and 1,725 for test. We use all the training instances and report the test performance from early stopping. As stated earlier, our model is not specially tailored for modeling synonymy, and generally requires ~100K instances to work favorably. Nevertheless, our generic matching models still manage to perform reasonably well, achieving an accuracy and F1 score close to the best performer in 2008 based on hand-crafted features [17], but still significantly lower than the state-of-the-art (76.8%/83.6%), achieved with unfolding-RAE and other features designed for this task [19].

Table 3: The results on Paraphrase.
Model | Acc. (%) | F1 (%)
Baseline | 66.5 | 79.90
Rus et al. (2008) | 70.6 | 80.5
WordEmbed | 68.7 | 80.49
SENNA+MLP | 68.4 | 79.7
SenMLP | 68.4 | 79.5
ARC-I | 69.6 | 80.27
ARC-II | 69.9 | 80.91
5.5 Discussions
ARC-II outperforms the others significantly when the training instances are relatively abundant (as in Experiments I & II). Its superiority over ARC-I, however, is less salient when the sentences have deep grammatical structures and the matching relies less on the local matching patterns, as in Experiment I. This therefore raises the interesting question about how to balance the representation of matching
and the representations of objects, and whether we can guide the learning process through something
like curriculum learning [4].
As another important observation, convolutional models (ARC-I & II, SENNA+MLP) perform
favorably over bag-of-words models, indicating the importance of utilizing sequential structures in
understanding and matching sentences. Quite interestingly, as shown by our other experiments,
ARC-I and ARC-II trained purely with random negatives automatically gain some ability in telling
whether the words in a given sentence are in right sequential order (with around 60% accuracy for
both). It is therefore a bit surprising that an auxiliary task on identifying the correctness of word
order in the response does not enhance the ability of the model on the original matching tasks.
We noticed that simple sum of embedding learned via Word2Vec [14] yields reasonably good results
on all three tasks. We hypothesize that the Word2Vec embedding is trained in such a way that the
vector summation can act as a simple composition, and hence retains a fair amount of meaning in
the short text segment. This is in contrast with other bag-of-words models like DeepMatch [13].
6 Related Work
Matching structured objects rarely goes beyond estimating the similarity of objects in the same domain [23, 24, 19], with few exceptions like [2, 18]. When dealing with language objects, most
methods still focus on seeking vectorial representations in a common latent space, and calculating
the matching score with an inner product [18, 25]. Little work has been done on building a deep architecture on the interaction space for text pairs, and it is largely based on a bag-of-words representation
of text [13].
Our models are related to the long thread of work on sentence representation. Aside from the models
with recursive nature [15, 21, 19] (as discussed in Section 2.1), it is fairly common practice to use
the sum of word-embedding to represent a short-text, mostly for classification [22]. There is very
little work on convolutional modeling of language. In addition to [6, 18], there is a very recent model
on sentence representation with dynamic convolutional neural network [9]. This work relies heavily
on a carefully designed pooling strategy to handle the variable length of sentence with a relatively
small feature map, tailored for classification problems with modest sizes.
7 Conclusion
We propose deep convolutional architectures for matching natural language sentences, which can
nicely combine the hierarchical modeling of individual sentences and the patterns of their matching.
Empirical study shows our models can outperform competitors on a variety of matching tasks.
Acknowledgments: B. Hu and Q. Chen are supported in part by National Natural Science Foundation of
China 61173075. Z. Lu and H. Li are supported in part by China National 973 project 2014CB340301.
References
[1] O. Abdel-Hamid, A. Mohamed, H. Jiang, and G. Penn. Applying convolutional neural networks concepts
to hybrid nn-hmm model for speech recognition. In Proceedings of ICASSP, 2012.
[2] B. Antoine, X. Glorot, J. Weston, and Y. Bengio. A semantic matching energy function for learning with
multi-relational data. Machine Learning, 94(2):233-259, 2014.
[3] Y. Bengio. Learning deep architectures for AI. Found. Trends Mach. Learn., 2(1):1-127, 2009.
[4] Y. Bengio, J. Louradour, R. Collobert, and J. Weston. Curriculum learning. In Proceedings of ICML,
2009.
[5] P. F. Brown, S. A. D. Pietra, V. J. D. Pietra, and R. L. Mercer. The mathematics of statistical machine
translation: Parameter estimation. Computational Linguistics, 19(2):263-311, 1993.
[6] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493-2537, 2011.
[7] G. E. Dahl, T. N. Sainath, and G. E. Hinton. Improving deep neural networks for lvcsr using rectified
linear units and dropout. In Proceedings of ICASSP, 2013.
[8] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580, 2012.
[9] N. Kalchbrenner, E. Grefenstette, and P. Blunsom. A convolutional neural network for modelling sentences. In Proceedings of ACL, Baltimore and USA, 2014.
[10] Y. Kim. Convolutional neural networks for sentence classification. In Proceedings of EMNLP, 2014.
[11] Y. LeCun and Y. Bengio. Convolutional networks for images, speech and time series. The Handbook of
Brain Theory and Neural Networks, 3361, 1995.
[12] D. D. Lewis, Y. Yang, T. G. Rose, and F. Li. RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5:361-397, 2004.
[13] Z. Lu and H. Li. A deep architecture for matching short texts. In Advances in NIPS, 2013.
[14] T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in vector
space. CoRR, abs/1301.3781, 2013.
[15] T. Mikolov and M. Karafiát. Recurrent neural network based language model. In Proceedings of INTERSPEECH, 2010.
[16] R. Caruana, S. Lawrence, and C. L. Giles. Overfitting in neural nets: Backpropagation, conjugate gradient, and early
stopping. In Advances in NIPS, 2000.
[17] V. Rus, P. M. McCarthy, M. C. Lintean, D. S. McNamara, and A. C. Graesser. Paraphrase identification
with lexico-syntactic graph subsumption. In Proceedings of FLAIRS Conference, 2008.
[18] Y. Shen, X. He, J. Gao, L. Deng, and G. Mesnil. Learning semantic representations using convolutional
neural networks for web search. In Proceedings of WWW, 2014.
[19] R. Socher, E. H. Huang, and A. Y. Ng. Dynamic pooling and unfolding recursive autoencoders for
paraphrase detection. In Advances in NIPS, 2011.
[20] R. Socher, C. C. Lin, A. Y. Ng, and C. D. Manning. Parsing Natural Scenes and Natural Language with
Recursive Neural Networks. In Proceedings of ICML, 2011.
[21] R. Socher, J. Pennington, E. H. Huang, A. Y. Ng, and C. D. Manning. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of EMNLP, 2011.
[22] Y. Song and D. Roth. On dataless hierarchical text classification. In Proceedings of AAAI, 2014.
[23] Y. Sun, X. Wang, and X. Tang. Hybrid deep learning for face verification. In Proceedings of ICCV, 2013.
[24] S. V. N. Vishwanathan, N. N. Schraudolph, R. Kondor, and K. M. Borgwardt. Graph kernels. Journal of
Machine Learning Research (JMLR), 11:1201-1242, 2010.
[25] B. Wang, X. Wang, C. Sun, B. Liu, and L. Sun. Modeling semantic relevance for question-answer pairs
in web social communities. In Proceedings of ACL, 2010.
[26] H. Wang, Z. Lu, H. Li, and E. Chen. A dataset for research on short-text conversations. In Proceedings
of EMNLP, Seattle, Washington, USA, 2013.
[27] W. Wu, Z. Lu, and H. Li. Learning bilinear model for matching queries and documents. The Journal of
Machine Learning Research, 14(1):2519-2548, 2013.
[28] X. Xue, J. Jeon, and W. B. Croft. Retrieval models for question and answer archives. In Proceedings
of SIGIR '08, New York, NY, USA, 2008.
Deep Recursive Neural Networks
for Compositionality in Language
Ozan Irsoy
Department of Computer Science
Cornell University
Ithaca, NY 14853
[email protected]
Claire Cardie
Department of Computer Science
Cornell University
Ithaca, NY 14853
[email protected]
Abstract
Recursive neural networks comprise a class of architecture that can operate on
structured input. They have been previously successfully applied to model compositionality in natural language using parse-tree-based structural representations.
Even though these architectures are deep in structure, they lack the capacity for
hierarchical representation that exists in conventional deep feed-forward networks
as well as in recently investigated deep recurrent neural networks. In this work we
introduce a new architecture, a deep recursive neural network (deep RNN),
constructed by stacking multiple recursive layers. We evaluate the proposed model
on the task of fine-grained sentiment classification. Our results show that deep
RNNs outperform associated shallow counterparts that employ the same number
of parameters. Furthermore, our approach outperforms previous baselines on the
sentiment analysis task, including a multiplicative RNN variant as well as the recently introduced paragraph vectors, achieving new state-of-the-art results. We
provide exploratory analyses of the effect of multiple layers and show that they
capture different aspects of compositionality in language.
1 Introduction
Deep connectionist architectures involve many layers of nonlinear information processing [1]. This
allows them to incorporate meaning representations such that each succeeding layer potentially has
a more abstract meaning. Recent advancements in efficiently training deep neural networks enabled
their application to many problems, including those in natural language processing (NLP). A key
advance for application to NLP tasks was the invention of word embeddings that represent a single
word as a dense, low-dimensional vector in a meaning space [2], from which numerous problems
have benefited [3, 4].
Recursive neural networks, comprise a class of architecture that operates on structured inputs, and
in particular, on directed acyclic graphs. A recursive neural network can be seen as a generalization
of the recurrent neural network [5], which has a specific type of skewed tree structure (see Figure 1).
They have been applied to parsing [6], sentence-level sentiment analysis [7, 8], and paraphrase detection [9]. Given the structural representation of a sentence, e.g. a parse tree, they recursively
generate parent representations in a bottom-up fashion, by combining tokens to produce representations for phrases, eventually producing the whole sentence. The sentence-level representation (or,
alternatively, its phrases) can then be used to make a final classification for a given input sentence
? e.g. whether it conveys a positive or a negative sentiment.
Similar to how recurrent neural networks are deep in time, recursive neural networks are deep in
structure, because of the repeated application of recursive connections. Recently, the notions of
depth in time ? the result of recurrent connections, and depth in space ? the result of stacking
Figure 1: Operation of a recursive net (a), untied recursive net (b) and a recurrent net (c) on an
example sentence. Black, orange and red dots represent input, hidden and output layers, respectively.
Directed edges having the same color-style combination denote shared connections.
multiple layers on top of one another) are distinguished for recurrent neural networks. In order to
combine these concepts, deep recurrent networks were proposed [10, 11, 12]. They are constructed
by stacking multiple recurrent layers on top of each other, which allows this extra notion of depth
to be incorporated into temporal processing. Empirical investigations showed that this results in a
natural hierarchy for how the information is processed [12]. Inspired by these recent developments,
we make a similar distinction between depth in structure and depth in space, and to combine these
concepts, propose the deep recursive neural network, which is constructed by stacking multiple
recursive layers.
The architecture we study in this work is essentially a deep feedforward neural network with an
additional structural processing within each layer (see Figure 2). During forward propagation, information travels through the structure within each layer (because of the recursive nature of the
network, weights regarding structural processing are shared). In addition, every node in the structure (i.e. in the parse tree) feeds its own hidden state to its counterpart in the next layer. This can
be seen as a combination of feedforward and recursive nets. In a shallow recursive neural network,
a single layer is responsible for learning a representation of composition that is both useful and
sufficient for the final decision. In a deep recursive neural network, a layer can learn some parts
of the composition to apply, and pass this intermediate representation to the next layer for further
processing for the remaining parts of the overall composition.
To evaluate the performance of the architecture and make exploratory analyses, we apply deep recursive neural networks to the task of fine-grained sentiment detection on the recently published
Stanford Sentiment Treebank (SST) [8]. SST includes a supervised sentiment label for every node
in the binary parse tree, not just at the root (sentence) level. This is especially important for deep
learning, since it allows a richer supervised error signal to be backpropagated across the network,
potentially alleviating vanishing gradients associated with deep neural networks [13].
We show that our deep recursive neural networks outperform shallow recursive nets of the same size
in the fine-grained sentiment prediction task on the Stanford Sentiment Treebank. Furthermore, our
models outperform multiplicative recursive neural network variants, achieving new state-of-the-art
performance on the task. We conduct qualitative experiments that suggest that each layer handles
a different aspect of compositionality, and representations at each layer capture different notions of
similarity.
2 Methodology
2.1 Recursive Neural Networks
Recursive neural networks (e.g. [6]) (RNNs) comprise an architecture in which the same set of
weights is recursively applied within a structural setting: given a positional directed acyclic graph,
it visits the nodes in topological order, and recursively applies transformations to generate further
representations from previously computed representations of children. In fact, a recurrent neural
network is simply a recursive neural network with a particular structure (see Figure 1c). Even though
RNNs can be applied to any positional directed acyclic graph, we limit our attention to RNNs over
positional binary trees, as in [6].
Given a binary tree structure with leaves having the initial representations, e.g. a parse tree with
word vector representations at the leaves, a recursive neural network computes the representations
at each internal node ? as follows (see also Figure 1a):
x? = f (WL xl(?) + WR xr(?) + b)
(1)
where l(?) and r(?) are the left and right children of ?, WL and WR are the weight matrices that
connect the left and right children to the parent, and b is a bias vector. Given that WL and WR
are square matrices, and not distinguishing whether l(?) and r(?) are leaf or internal nodes, this
definition has an interesting interpretation: initial representations at the leaves and intermediate
representations at the nonterminals lie in the same space. In the parse tree example, a recursive
neural network combines the representations of two subphrases to generate a representation for the
larger phrase, in the same meaning space [6]. We then have a task-specific output layer above the
representation layer:
$y_\eta = g(U x_\eta + c)$    (2)
where U is the output weight matrix and c is the bias vector to the output layer. In a supervised task,
$y_\eta$ is simply the prediction (class label or response value) for the node η, and supervision occurs at this layer. As an example, for the task of sentiment classification, $y_\eta$ is the predicted sentiment label of the phrase given by the subtree rooted at η. Thus, during supervised learning, initial external
errors are incurred on y, and backpropagated from the root, toward leaves [14].
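As a concrete illustration, the following minimal sketch (ours, not the authors' code; names and dimensions are illustrative) applies Equations 1 and 2 over a binary tree given as nested tuples of word ids, with tanh standing in for f as in [6]:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(node, E, WL, WR, b, U, c):
    """Compute x_eta (Eq. 1) and y_eta (Eq. 2) for one node of a binary tree.
    A tree is a nested tuple of word ids, e.g. ((0, 1), (2, 3))."""
    if isinstance(node, int):
        x = E[node]                                   # leaf: look up the word vector
    else:
        xl = forward(node[0], E, WL, WR, b, U, c)["x"]
        xr = forward(node[1], E, WL, WR, b, U, c)["x"]
        x = np.tanh(WL @ xl + WR @ xr + b)            # Eq. (1)
    return {"x": x, "y": softmax(U @ x + c)}          # Eq. (2)

d, n_classes, vocab = 8, 5, 4
rng = np.random.default_rng(0)
E = rng.normal(size=(vocab, d)) * 0.1                 # word vectors
WL, WR = rng.normal(size=(d, d)) * 0.1, rng.normal(size=(d, d)) * 0.1
b, U, c = np.zeros(d), rng.normal(size=(n_classes, d)), np.zeros(n_classes)
print(forward(((0, 1), (2, 3)), E, WL, WR, b, U, c)["y"])  # root sentiment distribution
```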
2.2 Untying Leaves and Internals
Even though the aforementioned definition, which treats the leaf nodes and internal nodes the same,
has some attractive properties (such as mapping individual words and larger phrases into the same
meaning space), in this work we use an untied variant that distinguishes between a leaf and an
internal node. We do this by a simple parametrization of the weights W with respect to whether the
incoming edge emanates from a leaf or an internal node (see Figure 1b, in contrast to 1a; the colors of the edges emanating from leaves and internal nodes differ):
$h_\eta = f(W_L^{l(\eta)}\, h_{l(\eta)} + W_R^{r(\eta)}\, h_{r(\eta)} + b)$    (3)
where $h_\eta = x_\eta \in X$ if η is a leaf and $h_\eta \in H$ otherwise, and the superscripted weights are $W^{\nu} = W^{xh}$ if node ν is a leaf and $W^{\nu} = W^{hh}$ otherwise. X and H are vector spaces of words and phrases, respectively. The weights $W^{xh}$ act as a transformation from word space to phrase space, and $W^{hh}$ as a transformation from phrase space to itself.
With this untying, a recursive network becomes a generalization of the Elman type recurrent neural
network with h being analogous to the hidden layer of the recurrent network (memory) and x being analogous to the input layer (see Figure 1c). Benefits of this untying are twofold: (1) Now the
weight matrices $W^{xh}$ and $W^{hh}$ are of size |h| × |x| and |h| × |h|, which means that we can use
large pretrained word vectors and a small number of hidden units without a quadratic dependence on
the word vector dimensionality |x|. Therefore, small but powerful models can be trained by using
pretrained word vectors with a large dimensionality. (2) Since words and phrases are represented
in different spaces, we can use rectifier activation units for f , which have previously been shown to
yield good results when training deep neural networks [15]. Word vectors are dense and generally
have positive and negative entries whereas rectifier activation causes the resulting intermediate vectors to be sparse and nonnegative. Thus, when leaves and internals are represented in the same space,
a discrepancy arises, and the same weight matrix is applied to both leaves and internal nodes and
is expected to handle both sparse and dense cases, which might be difficult. Therefore separating
leaves and internal nodes allows the use of rectifiers in a more natural manner.
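A short sketch of this untying (again ours, with illustrative sizes): the only change to the composition step is that the matrix applied to a child depends on whether that child is a leaf, so leaves can live in a 300-dimensional word space while phrases live in a 50-dimensional hidden space:

```python
import numpy as np

def compose_untied(h_left, left_is_leaf, h_right, right_is_leaf, W, b):
    """Eq. (3): use W^{xh} for a leaf child and W^{hh} for an internal child."""
    WL = W["xh_L"] if left_is_leaf else W["hh_L"]
    WR = W["xh_R"] if right_is_leaf else W["hh_R"]
    return np.maximum(0.0, WL @ h_left + WR @ h_right + b)   # rectifier f

dx, dh = 300, 50                                  # large word vectors, small hidden size
rng = np.random.default_rng(1)
W = {"xh_L": rng.normal(size=(dh, dx)), "xh_R": rng.normal(size=(dh, dx)),
     "hh_L": rng.normal(size=(dh, dh)), "hh_R": rng.normal(size=(dh, dh))}
h = compose_untied(rng.normal(size=dx), True, rng.normal(size=dx), True, W, np.zeros(dh))
print(h.shape)                                    # (50,): a phrase-space vector
```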
2.3 Deep Recursive Neural Networks
Recursive neural networks are deep in structure: with the recursive application of the nonlinear
information processing they become as deep as the depth of the tree (or in general, DAG). However,
this notion of depth is unlikely to involve a hierarchical interpretation of the data. By applying
Figure 2: Operation of a 3-layer deep recursive neural network. Red and black points denote
output and input vectors, respectively; other colors denote intermediate memory representations.
Connections denoted by the same color-style combination are shared (i.e. share the same set of
weights).
the same computation recursively to compute the contribution of children to their parents, and the
same computation to produce an output response, we are, in fact, representing every internal node
(phrase) in the same space [6, 8]. However, in the more conventional stacked deep learners (e.g. deep
feedforward nets), an important benefit of depth is the hierarchy among hidden representations:
every hidden layer conceptually lies in a different representation space and potentially is a more
abstract representation of the input than the previous layer [1].
To address these observations, we propose the deep recursive neural network, which is constructed
by stacking multiple layers of individual recursive nets:
$h_\eta^{(i)} = f\left(W_L^{(i)}\, h_{l(\eta)}^{(i)} + W_R^{(i)}\, h_{r(\eta)}^{(i)} + V^{(i)}\, h_\eta^{(i-1)} + b^{(i)}\right)$    (4)

where i indexes the stacked layers, $W_L^{(i)}$, $W_R^{(i)}$, and $b^{(i)}$ are defined as before within each layer i, and $V^{(i)}$ is the weight matrix that connects the (i−1)th hidden layer to the ith hidden layer.
Note that the untying that we described in Section 2.2 is only necessary for the first layer, since we can map both $x \in X$ and $h^{(1)} \in H^{(1)}$ in the first layer to $h^{(2)} \in H^{(2)}$ in the second layer using separate $V^{(2)}$ for leaves and internals ($V^{xh(2)}$ and $V^{hh(2)}$). Therefore every node is represented in the same space at layers above the first, regardless of its "leafness". Figure 2 provides a visualization of weights that are untied or shared.
For prediction, we connect the output layer to only the final hidden layer:
$y_\eta = g\left(U h_\eta^{(\ell)} + c\right)$    (5)
where ℓ is the total number of layers. Intuitively, connecting the output layer to only the last hidden
layer forces the network to represent enough high level information at the final layer to support the
supervised decision. Connecting the output layer to all hidden layers is another option; however, in
that case multiple hidden layers can have synergistic effects on the output and make it more difficult
to qualitatively analyze each layer.
Learning a deep RNN can be conceptualized as interleaved applications of the conventional backpropagation across multiple layers, and backpropagation through structure within a single layer.
During backpropagation a node ? receives error terms from both its parent (through structure), and
from its counterpart in the higher layer (through space). Then it further backpropagates that error
signal to both of its children, as well as to its counterpart in the lower layer.
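The following sketch (ours; for brevity it gives all layers a single width and omits the leaf/internal untying of Section 2.2, which only matters for the first layer) shows one possible reading of this forward pass: each node combines its children within every layer (Equation 4), receives its own state from the layer below through $V^{(i)}$, and the prediction is read off the top layer only (Equation 5):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def deep_forward(node, E, layers, U, c, outputs):
    """Per-node hidden states h^(1)..h^(L) (Eq. 4) and a top-layer prediction (Eq. 5)."""
    if isinstance(node, int):                      # leaf
        hs = [E[node]]                             # layer 1: the word vector itself
        for L in layers[1:]:
            hs.append(relu(L["V"] @ hs[-1] + L["b"]))
    else:                                          # internal node
        left = deep_forward(node[0], E, layers, U, c, outputs)
        right = deep_forward(node[1], E, layers, U, c, outputs)
        L0 = layers[0]
        hs = [relu(L0["WL"] @ left[0] + L0["WR"] @ right[0] + L0["b"])]
        for i, L in enumerate(layers[1:], start=1):
            hs.append(relu(L["WL"] @ left[i] + L["WR"] @ right[i]
                           + L["V"] @ hs[i - 1] + L["b"]))   # Eq. (4)
    outputs.append(softmax(U @ hs[-1] + c))        # Eq. (5): supervision at every node
    return hs

d, n_classes, vocab, depth = 8, 5, 4, 3
rng = np.random.default_rng(0)
E = rng.normal(size=(vocab, d)) * 0.1
layers = [{"WL": rng.normal(size=(d, d)) * 0.1, "WR": rng.normal(size=(d, d)) * 0.1,
           "V": rng.normal(size=(d, d)) * 0.1, "b": np.zeros(d)} for _ in range(depth)]
U, c = rng.normal(size=(n_classes, d)), np.zeros(n_classes)  # V of layer 1 is unused here
outs = []
deep_forward(((0, 1), (2, 3)), E, layers, U, c, outs)
print(outs[-1])                                    # prediction at the root
```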
3 Experiments
3.1 Setting
Data. For experimental evaluation of our models, we use the recently published Stanford Sentiment Treebank (SST) [8], which includes labels for 215,154 phrases in the parse trees of 11,855
sentences, with an average sentence length of 19.1 tokens. Real-valued sentiment labels are converted to an integer ordinal label in {0, . . . , 4} by simple thresholding. Therefore the supervised
task is posed as a 5-class classification problem. We use the single training-validation-test set partitioning provided by the authors.
Baselines. In addition to experimenting among deep RNNs of varying width and depth, we compare our models to previous work on the same data. We use baselines from [8]: a naive Bayes classifier that operates on bigram counts (BINB); a shallow RNN [6, 7] that, in contrast to our shallow RNNs, learns the word vectors from the supervised data and uses tanh units (RNN); a matrix-vector RNN in which every word is assigned a matrix-vector pair instead of a vector, and composition is defined with matrix-vector multiplications (MV-RNN) [16]; and the multiplicative recursive net (or recursive neural tensor network), in which composition is defined as a bilinear tensor product (RNTN) [8]. Additionally, we use a method that is capable of generating representations for larger pieces of text (PARAGRAPH VECTORS) [17], and the dynamic convolutional neural network (DCNN) [18]. We use the previously published results for comparison, using the same training-development-test partitioning of the data.
Activation Units. For the output layer, we employ the standard softmax activation: $g(x)_i = e^{x_i} / \sum_j e^{x_j}$. For the hidden layers we use the rectifier linear activation: $f(x) = \max\{0, x\}$.
Experimentally, rectifier activation gives better performance, faster convergence, and sparse representations. Previous work with rectifier units reported good results when training deep neural
networks, with no pre-training step [15].
Word Vectors. In all of our experiments, we keep the word vectors fixed and do not fine-tune them, for simplicity of our models. We use the publicly available 300-dimensional word vectors of [19], trained on part of the Google News dataset (~100B words).
Regularizer. For regularization of the networks, we use the recently proposed dropout technique,
in which we randomly set entries of hidden representations to 0, with a probability called the dropout
rate [20]. Dropout rate is tuned over the development set out of {0, 0.1, 0.3, 0.5}. Dropout prevents
learned features from co-adapting, and it has been reported to yield good results when training deep
neural networks [21, 22]. Note that dropped units are shared: for a single sentence and a layer, we
drop the same units of the hidden layer at each node.
Since we are using a non-saturating activation function, intermediate representations are not bounded from above; hence, they can explode even with strong regularization over the connections, which was confirmed by preliminary experiments. Therefore, for stability reasons, we use a small fixed additional L2 penalty (10⁻⁵) over both the connection weights and the unit activations, which resolves the explosion problem.
Network Training. We use stochastic gradient descent with a fixed learning rate (.01). We use a
diagonal variant of AdaGrad for parameter updates [23]. AdaGrad yields a smooth and fast convergence. Furthermore, it can be seen as a natural tuning of individual learning rates per each parameter.
This is beneficial for our case since different layers have gradients at different scales because of the
scale of non-saturating activations at each layer (grows bigger at higher layers). We update weights
after minibatches of 20 sentences. We run 200 epochs for training. Recursive weights within a layer
($W^{hh}$) are initialized as 0.5I + ε, where I is the identity matrix and ε is a small uniformly random noise matrix. This means that initially, the representation of each node is approximately the mean of its two children. All other weights are initialized as ε. We experiment with networks of various sizes,
however we have the same number of hidden units across multiple layers of a single RNN. When
we increase the depth, we keep the overall number of parameters constant, therefore deeper networks become narrower. We do not employ a pre-training step; deep architectures are trained with
the supervised error signal, even when the output layer is connected to only the final hidden layer.
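For reference, here is a minimal sketch of the diagonal AdaGrad update described above (ours; the fixed base learning rate of 0.01 comes from the text, everything else is illustrative):

```python
import numpy as np

def adagrad_update(w, grad, cache, lr=0.01, eps=1e-8):
    """Diagonal AdaGrad [23]: each parameter's effective step size shrinks with its
    accumulated squared gradient, yielding per-parameter learning rates."""
    cache += grad ** 2
    w -= lr * grad / (np.sqrt(cache) + eps)
    return w, cache

rng = np.random.default_rng(0)
w, cache = np.zeros(10), np.zeros(10)
for _ in range(200):                 # one update per minibatch, as in the text
    grad = rng.normal(size=10)       # stand-in for a backpropagated gradient
    w, cache = adagrad_update(w, grad, cache)
```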
(a) Results for RNNs. ℓ and |h| denote the depth and width of the networks, respectively.

ℓ   |h|   Fine-grained   Binary
1    50       46.1        85.3
2    45       48.0        85.5
3    40       43.1        83.5
1   340       48.1        86.4
2   242       48.3        86.4
3   200       49.5        86.7
4   174       49.8        86.6
5   157       49.0        85.5

(b) Results for previous work and our best model (DRNN).

Method              Fine-grained   Binary
Bigram NB               41.9        83.1
RNN                     43.2        82.4
MV-RNN                  44.4        82.9
RNTN                    45.7        85.4
DCNN                    48.5        86.8
Paragraph Vectors       48.7        87.8
DRNN (4, 174)           49.8        86.6

Table 1: Accuracies for 5-class predictions over SST, at the sentence level.
Additionally, we employ early stopping: out of all iterations, the model with the best development
set performance is picked as the final model to be evaluated.
3.2 Results
Quantitative Evaluation. We evaluate on both fine-grained sentiment score prediction (5-class
classification) and binary (positive-negative) classification. For binary classification, we do not train
a separate network, we use the network trained for fine-grained prediction, and then decode the 5
dimensional posterior probability vector into a binary decision which also effectively discards the
neutral cases from the test set. This approach solves a harder problem. Therefore there might be
room for improvement on binary results by separately training a binary classifier.
Experimental results of our models and previous work are given in Table 1. Table 1a shows our
models with varying depth and width (while keeping the overall number of parameters constant
within each group). ℓ denotes the depth and |h| denotes the width of the networks (i.e. number of
hidden units in a single hidden layer).
We observe that shallow RNNs get an improvement just by using pretrained word vectors, rectifiers,
and dropout, compared to previous work (48.1 vs. 43.2 for the fine-grained task, see our shallow
RNN with |h| = 340 in Table 1a and the RNN from [8] in Table 1b). This validates untying leaves and internal nodes in the RNN, as described in Section 2.2, and using pretrained word vectors.
Results on RNNs of various depths and sizes show that deep RNNs outperform single layer RNNs
with approximately the same number of parameters, which quantitatively validates the benefits of
deep networks over shallow ones (see Table 1a). We see a consistent improvement as we use deeper
and narrower networks up to a certain depth. The 2-layer RNN for the smaller networks and the 4-layer RNN for the larger networks give the best performance with respect to the fine-grained score. Increasing the depth further starts to degrade performance; an explanation might be that the decrease in width dominates the gains from the increased depth.
Furthermore, our best deep RNN outperforms previous work on both the fine-grained and binary
prediction tasks, and outperforms Paragraph Vectors on the fine-grained score, achieving a new
state-of-the-art (see Table 1b).
We attribute an important contribution of the improvement to dropout. In a preliminary experiment
with simple L2 regularization, a 3-layer RNN with 200 hidden units each achieved a fine-grained
score of 46.06 (not shown here), compared to our current score of 49.5 with the dropout regularizer.
Input Perturbation. In order to assess the scale at which different layers operate, we investigate
the response of all layers to a perturbation in the input. One way of perturbing the input would be to add noise; however, with a large amount of noise, the resulting noisy input vector may lie outside the manifold of meaningful word vectors. Therefore, instead, we simply
pick a word from the sentence that carries positive sentiment, and alter it to a set of words that have
sentiment values shifting towards the negative direction.
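A sketch of this response measure (ours; it assumes hidden states per node such as those returned by the deep_forward sketch above):

```python
import numpy as np

def layer_responses(hs_orig, hs_pert):
    """One-norm change of each layer's hidden state at the nodes on the perturbed path;
    hs_* map a node id to its list of per-layer hidden vectors."""
    return {node: [np.abs(h - hp).sum() for h, hp in zip(hs, hs_pert[node])]
            for node, hs in hs_orig.items()}
```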
Figure 3: An example sentence with its parse tree (left) and the response measure of every layer (right) in a three-layered deep recursive net. We change the word "best" in the input to one of the words "coolest", "good", "average", "bad", "worst" (denoted by blue, light blue, black, orange and red, respectively) and measure the change of hidden layer representations in one-norm for every node in the path.
Phrase: charming results
  Layer 1: charming , | charming and | appealingly manic and energetic | refreshingly adult take on adultery | unpretentious , sociologically pointed
  Layer 2: interesting results | riveting performances | gripping performances | joyous documentary | an amazing slapstick instrument
  Layer 3: charming chemistry | perfect ingredients | brilliantly played | perfect medium | engaging film

Phrase: not great
  Layer 1: as great | a great | is great | Is n't it great | be great
  Layer 2: nothing good | not compelling | only good | too great | completely numbing experience
  Layer 3: not very informative | not really funny | not quite satisfying | thrashy fun | fake fun
Table 2: Example shortest phrases and their nearest neighbors across three layers.
In Figure 3, we give an example sentence, "Roger Dodger is one of the best variations on this theme", with its parse tree. We change the word "best" into the set of words "coolest", "good", "average", "bad", "worst", and measure the response of this change along the path that connects the leaf to the root (labeled from 1 to 8). Note that all other nodes have the same representations, since a node is completely determined by its subtree. For each node, the response is measured as the change of its hidden representation in one-norm, for each of the three layers in the network, with respect to the hidden representations using the original word ("best").
In the first layer (bottom) we observe a shared trend change as we go up in the tree. Note that "good" and "bad" are almost on top of each other, which suggests that there is not necessarily enough information captured in the first layer yet to make the correct sentiment decision. In the second layer (middle) an interesting phenomenon occurs: paths with "coolest" and "good" start close together, as do "worst" and "bad". However, as we move up in the tree, paths with "worst" and "coolest" come closer together, as do the paths with "good" and "bad". This suggests that the second layer remembers the intensity of the sentiment, rather than its direction. The third layer (top) is the most consistent one as we traverse upward the tree, and correct sentiment decisions persist across the path.
Nearest Neighbor Phrases. In order to evaluate the different notions of similarity in the meaning
space captured by multiple layers, we look at nearest neighbors of short phrases. For a three layer
deep recursive neural network we compute hidden representations for all phrases in our data. Then,
for a given phrase, we find its nearest neighbor phrases across each layer, with the one-norm distance
measure. Two examples are given in Table 2.
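A sketch of this retrieval step (ours; phrase representations are assumed to be precomputed for one layer):

```python
import numpy as np

def nearest_phrases(query_vec, phrase_vecs, phrases, k=5):
    """k nearest phrases under the one-norm, for one layer's representations.
    phrase_vecs: (n_phrases, d) array of hidden states at that layer."""
    dists = np.abs(phrase_vecs - query_vec).sum(axis=1)   # L1 distance to every phrase
    return [phrases[j] for j in np.argsort(dists)[:k]]
```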
For the first layer, we observe that similarity is dominated by one of the words being composed, i.e. "charming" for the phrase "charming results" (and "appealing", "refreshing" for some neighbors), and "great" for the phrase "not great". This effect is so strong that it even discards the negation in the second case: "as great" and "is great" are considered similar to "not great".
In the second layer, we observe a semantically more diverse set of phrases. On the other hand, this layer seems to take syntactic similarity more into account: in the first example, the nearest neighbors of "charming results" are adjective-noun combinations that also exhibit some similarity in meaning (e.g. "interesting results", "riveting performances"). The account is similar for "not great": its nearest neighbors are adverb-adjective combinations in which the adjectives exhibit some semantic overlap (e.g. "good", "compelling"). Sentiment is still not properly captured in this layer, however, as seen with the neighbor "too great" for the phrase "not great".
In the third and final layer, we see a higher level of semantic similarity, in the sense that phrases
are mostly related to one another in terms of sentiment. Note that since this is a supervised task
on sentiment detection, it is sufficient for the network to capture only the sentiment (and how it is
composed in context) in the last layer. Therefore, we should expect an even more diverse set of neighbors that are related only through sentiment.
4 Conclusion
In this work we propose the deep recursive neural network, which is constructed by stacking multiple
recursive layers on top of each other. We apply this architecture to the task of fine-grained sentiment
classification using binary parse trees as the structure. We empirically evaluated our models against
shallow recursive nets. Additionally, we compared with previous work on the task, including a
multiplicative RNN and the more recent Paragraph Vectors method. Our experiments show that deep
models outperform their shallow counterparts of the same size. Furthermore, the deep RNN outperforms
the baselines, achieving state-of-the-art performance on the task.
We further investigate our models qualitatively by performing input perturbation, and examining
nearest neighboring phrases of given examples. These results suggest that adding depth to a recursive
net is different from adding width. Each layer captures a different aspect of compositionality. Phrase
representations focus on different aspects of meaning at each layer, as seen by nearest neighbor
phrase examples.
Since our task was supervised, learned representations seemed to be focused on sentiment, as in
previous work. An important future direction might be an application of the deep RNN to a broader,
more general task, even an unsupervised one (e.g. as in [9]). This might provide better insights on the
operation of different layers and their contribution, with a more general notion of composition. The
effects of fine-tuning word vectors on the performance of deep RNN is also open to investigation.
Acknowledgments
This work was supported in part by NSF grant IIS-1314778 and DARPA DEFT FA8750-13-2-0015.
The views and conclusions contained herein are those of the authors and should not be interpreted as
necessarily representing the official policies or endorsements, either expressed or implied, of NSF,
DARPA or the U.S. Government.
References
[1] Yoshua Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1–127, 2009.
[2] Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. In Advances in Neural Information Processing Systems, 2001.
[3] Ronan Collobert and Jason Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, pages 160–167. ACM, 2008.
[4] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493–2537, November 2011.
[5] Jeffrey L. Elman. Finding structure in time. Cognitive Science, 14(2):179–211, 1990.
[6] Richard Socher, Cliff C. Lin, Andrew Ng, and Chris Manning. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 129–136, 2011.
[7] Richard Socher, Jeffrey Pennington, Eric H. Huang, Andrew Y. Ng, and Christopher D. Manning. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 151–161. Association for Computational Linguistics, 2011.
[8] Richard Socher, Alex Perelygin, Jean Y. Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '13, 2013.
[9] Richard Socher, Eric H. Huang, Jeffrey Pennington, Christopher D. Manning, and Andrew Ng. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in Neural Information Processing Systems, pages 801–809, 2011.
[10] Jürgen Schmidhuber. Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2):234–242, 1992.
[11] Salah El Hihi and Yoshua Bengio. Hierarchical recurrent neural networks for long-term dependencies. In Advances in Neural Information Processing Systems, pages 493–499, 1995.
[12] Michiel Hermans and Benjamin Schrauwen. Training and analysing deep recurrent neural networks. In Advances in Neural Information Processing Systems, pages 190–198, 2013.
[13] Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166, 1994.
[14] Christoph Goller and Andreas Küchler. Learning task-dependent distributed representations by backpropagation through structure. In IEEE International Conference on Neural Networks, volume 1, pages 347–352. IEEE, 1996.
[15] Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier networks. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics, JMLR W&CP volume 15, pages 315–323, 2011.
[16] Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1201–1211. Association for Computational Linguistics, 2012.
[17] Quoc V. Le and Tomas Mikolov. Distributed representations of sentences and documents. arXiv preprint arXiv:1405.4053, 2014.
[18] Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. A convolutional neural network for modelling sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, June 2014.
[19] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119, 2013.
[20] Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
[21] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, volume 1, page 4, 2012.
[22] George E. Dahl, Tara N. Sainath, and Geoffrey E. Hinton. Improving deep neural networks for LVCSR using rectified linear units and dropout. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 8609–8613. IEEE, 2013.
[23] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011.
Algorithm selection by rational metareasoning as a
model of human strategy selection
Falk Lieder
Helen Wills Neuroscience Institute, UC Berkeley
[email protected]
Dillon Plunkett
Department of Psychology, UC Berkeley
[email protected]
Stuart J. Russell
EECS Department, UC Berkeley
[email protected]
Jessica B. Hamrick
Department of Psychology, UC Berkeley
[email protected]
Nicholas J. Hay
EECS Department, UC Berkeley
[email protected]
Thomas L. Griffiths
Department of Psychology, UC Berkeley
tom [email protected]
Abstract
Selecting the right algorithm is an important problem in computer science, because the algorithm often has to exploit the structure of the input to be efficient.
The human mind faces the same challenge. Therefore, solutions to the algorithm
selection problem can inspire models of human strategy selection and vice versa.
Here, we view the algorithm selection problem as a special case of metareasoning
and derive a solution that outperforms existing methods in sorting algorithm selection. We apply our theory to model how people choose between cognitive strategies and test its prediction in a behavioral experiment. We find that people quickly
learn to adaptively choose between cognitive strategies. People?s choices in our
experiment are consistent with our model but inconsistent with previous theories
of human strategy selection. Rational metareasoning appears to be a promising
framework for reverse-engineering how people choose among cognitive strategies
and translating the results into better solutions to the algorithm selection problem.
1 Introduction
To solve complex problems in real-time, intelligent agents have to make efficient use of their finite
computational resources. Although there are general purpose algorithms, particular problems can
often be solved more efficiently by specialized algorithms. The human mind can take advantage of
this fact: People appear to have a toolbox of cognitive strategies [1] from which they choose adaptively [2, 3]. How these choices are made is an important, open question in cognitive science [4].
At an abstract level, choosing a cognitive strategy is equivalent to the algorithm selection problem
in computer science [5]: given a set of possible inputs I, a set of possible algorithms A, and a performance metric, find the selection mapping from I to A that maximizes the expected performance.
Here, we draw on rational metareasoning [6], a theoretical framework from artificial intelligence, and on Bayesian machine learning to develop a mathematical theory of how people should choose
between cognitive strategies and test its predictions in a behavioral experiment.
In the first section, we apply rational metareasoning to the algorithm selection problem and derive how the optimal algorithm selection mapping can be efficiently approximated by model-based
learning when a small number of features is predictive of the algorithm's runtime and accuracy. In
Section 2, we evaluate the performance of our solution against state-of-the-art methods for sorting
algorithm selection. In Sections 3 and 4, we apply our theory to cognitive modeling and report a behavioral experiment demonstrating that people quickly learn to adaptively choose between cognitive
strategies in a manner predicted by our model but inconsistent with previous theories. We conclude
with future directions at the interface of psychology and artificial intelligence.
2 Algorithm selection by rational metareasoning
Metareasoning is the problem of deciding which computations to perform given a problem and a
computational architecture [6]. Algorithm selection is a special case of metareasoning in which
the choice is limited to a few sequences of computations that generate complete results. According
to rational metareasoning [6], the optimal solution maximizes the value of computation (VOC).
The VOC is the expected utility of acting after having performed the computation (and additional
computations) minus the expected utility of acting immediately. In the general case, determining
the VOC requires solving a Markov decision problem [7]. Yet, in the special case of algorithm
selection, the hard problem of planning which computations to perform how often and in which
order reduces to the simpler one-shot choice between a small number of algorithms. We can therefore
use the following approximation to the VOC from [6] as the performance metric to be maximized:
$\mathrm{VOC}(a; i) \approx E_{P(S \mid a,i)}[S] - E_{P(T \mid a,i)}[TC(T)]$    (1)

$m(i) = \arg\max_{a \in A} \mathrm{VOC}(a; i),$    (2)
where $a \in A$ is one of the available algorithms, $i \in I$ is the input, S and T are the score and runtime of algorithm a on input i, and TC(T) is the opportunity cost of running the algorithm for T units
of time. The score S can be binary (correct vs. incorrect output) or numeric (e.g., error penalty).
The selection mapping m defined in Equation 2 depends on the conditional distributions of score
and runtime (P (S|a, i) and P (T |a, i)). These distributions are generally unknown, but they can be
learned. Learning an approximation to the VOC from experience, i.e. meta-level learning [6], is a
hard technical challenge [8], but it is tractable in the special case of algorithm selection.
Learning the conditional distributions of score and runtime separately for every possible input is
generally intractable. However, in many domains the inputs are structured and can be approximately
represented by a small number of features. Concretely, the effect of the input on score and runtime
is mediated by its features $f = (f_1(i), \dots, f_N(i))$:

$P(S \mid a, i) = P(S \mid f, a) = P(S \mid f_1(i), \dots, f_N(i), a)$    (3)
$P(T \mid a, i) = P(T \mid f, a) = P(T \mid f_1(i), \dots, f_N(i), a).$    (4)
If the features are observable and the distributions $P(S \mid f_1(i), \dots, f_N(i), a)$ and $P(T \mid f_1(i), \dots, f_N(i), a)$ have been learned, then one can very efficiently compute an estimate of the expected value of applying the algorithm to a novel input. To learn the distributions $P(S \mid f_1(i), \dots, f_N(i), a)$ and $P(T \mid f_1(i), \dots, f_N(i), a)$ from examples, we assume simple parametric forms for these distributions and estimate their parameters from the scores and runtimes of
the algorithms on previous problem instances.
As a first approximation, we assume that the runtime of an algorithm on problems with features f is normally distributed with mean μ(f; a) and standard deviation σ(f; a). We further assume that the mean is a second-order polynomial in the extended features $\tilde{f} = (f_1(i), \dots, f_N(i), \log(f_1(i)), \dots, \log(f_N(i)))$ and that the variance is independent of the mean:

$P(T \mid f; a, \theta) = \mathcal{N}(\mu_T(f; a, \theta), \sigma_T(a))$    (5)

$\mu_T(f; a, \theta) = \sum_{k_1=0}^{2} \cdots \sum_{k_N=0}^{2 - \sum_{i=1}^{N-1} k_i} \theta_{k_1,\dots,k_N;a} \cdot \tilde{f}_1^{k_1} \cdots \tilde{f}_N^{k_N}$    (6)

$P(\sigma_T(a)) = \mathrm{Gamma}(\sigma_T^{-1}; 0.01, 0.01),$    (7)
where θ are the regression coefficients. Similarly, we model the probability that the algorithm returns the correct answer by a logistic function of a second-order polynomial of the extended features:

$P(S = 1 \mid a, f, \rho) = \left[1 + \exp\left(-\sum_{k_1=0}^{2} \cdots \sum_{k_N=0}^{2 - \sum_{i=1}^{N-1} k_i} \rho_{k_1,\dots,k_N;a} \cdot \tilde{f}_1^{k_1} \cdots \tilde{f}_N^{k_N}\right)\right]^{-1},$    (8)
with regression coefficients ρ. The conditional distribution of a continuous score can be modeled analogously to Equation 5, and we use ν to denote its regression coefficients.
If the time cost is a linear function of the algorithm's runtime, i.e. $TC(t) = c \cdot t$ for some constant c, then the value of applying the algorithm depends only on the expectations of the runtime and score distributions. For linear scores

$E_{P(S,T \mid a,i)}[S - TC(T)] = \mu_S(f(i); a, \nu) - c \cdot \mu_T(f(i); a, \theta),$    (9)

and for binary scores

$E_{P(S,T \mid a,i)}[S - TC(T)] = E_{P(\rho \mid s,a,i)}[P(S = 1; i, \rho)] - c \cdot \mu_T(f(i); a, \theta).$    (10)

We approximated $E_{P(\rho \mid s,a,i)}[P(S = 1; i, \rho)]$ according to Equation 10 in [9].
Thus, the algorithm selection mapping m can be learned by estimating the parameters θ and ρ or ν. Our method estimates θ by Bayesian linear regression. When the score is binary, ρ is estimated by variational Bayesian logistic regression [9], and when the score is continuous, ν is estimated by Bayesian linear regression. For Bayesian linear regression, we use conjugate Gaussian priors with
mean zero and unit variance, so that the posterior distributions can be computed very efficiently
by analytic update equations. Given the posterior distributions on the parameters, we compute the
expected VOC by marginalization. When the score is continuous, $\mu_S(f(i); a, \nu)$ is linear in ν and $\mu_T(f(i); a, \theta)$ is linear in θ. Thus integrating out ν and θ with respect to the posterior yields

$\mathrm{VOC}(a; i) = \mu_S(f(i); a, \bar{\nu}_{|i,s}) - c \cdot \mu_T(f(i); a, \bar{\theta}_{|i,t}),$    (11)

where $\bar{\nu}$ and $\bar{\theta}$ are the posterior means of ν and θ, respectively. This implies the following simple solution to the algorithm selection problem:

$a(i; c) = \arg\max_{a \in A} \left(\mu_S(f(i); a, \bar{\nu}_{|i_{\mathrm{train}}, s_{\mathrm{train}}}) - c \cdot \mu_T(f(i); a, \bar{\theta}_{|i_{\mathrm{train}}, t_{\mathrm{train}}})\right).$    (12)
For binary scores, the runtime component is predicted in exactly the same way, and a variational
approximation to the posterior predictive density can be used for the score component [9].
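To make the pipeline concrete, here is a minimal sketch (ours, not the authors' implementation) of three pieces: the degree-≤2 monomials of the extended features (cf. Equation 6), a conjugate Bayesian linear regression posterior mean with the N(0, I) prior described above (the noise variance is treated as known here for brevity, whereas the paper places a Gamma prior on it), and the selection rule of Equation 12:

```python
import numpy as np
from itertools import combinations_with_replacement

def phi(feats):
    """Extended features (f, log f) and all monomials of total degree <= 2 (cf. Eq. 6).
    Features are assumed positive (e.g. list length, presortedness)."""
    f = np.concatenate([feats, np.log(feats)])
    terms = [1.0] + list(f)
    terms += [f[i] * f[j] for i, j in combinations_with_replacement(range(len(f)), 2)]
    return np.array(terms)

def posterior_mean(Phi, y, noise_var=1.0):
    """Bayesian linear regression with an N(0, I) prior on the coefficients; the
    noise variance is taken as known in this sketch to keep everything analytic."""
    d = Phi.shape[1]
    A = Phi.T @ Phi / noise_var + np.eye(d)            # posterior precision
    return np.linalg.solve(A, Phi.T @ y / noise_var)   # posterior mean

def select_algorithm(feats, score_coef, time_coef, c):
    """Eq. (12): maximize predicted score minus time cost over the algorithms.
    score_coef/time_coef map an algorithm to its posterior-mean coefficients."""
    x = phi(feats)
    return max(score_coef, key=lambda a: x @ score_coef[a] - c * (x @ time_coef[a]))
```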
To discover the best model of an algorithm's runtime and score, our method performs feature selection by Bayesian model choice [10]. We consider all possible combinations of the regressors defined above. To efficiently find the optimal set of features in this exponentially large model space, we exploit the fact that all models are nested within the full model. This allows us to efficiently compute Bayes factors using Savage-Dickey ratios [11].
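For a single regressor, the Savage-Dickey ratio reduces to evaluating the marginal posterior and the prior of that coefficient at zero. A minimal sketch (ours), under the same simplifying assumption of a known noise variance:

```python
import numpy as np

def savage_dickey_bf(Phi, y, k, noise_var=1.0):
    """Bayes factor for the nested model with coefficient k fixed to zero versus the
    full model: posterior density over prior density at zero (N(0, 1) prior)."""
    d = Phi.shape[1]
    cov = np.linalg.inv(Phi.T @ Phi / noise_var + np.eye(d))   # posterior covariance
    mean = cov @ (Phi.T @ y / noise_var)                       # posterior mean
    post_at_0 = np.exp(-0.5 * mean[k] ** 2 / cov[k, k]) / np.sqrt(2 * np.pi * cov[k, k])
    prior_at_0 = 1.0 / np.sqrt(2 * np.pi)
    return post_at_0 / prior_at_0        # > 1 favors dropping regressor k
```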
3 Performance evaluation against methods for selecting sorting algorithms
Our goal was to evaluate rational metareasoning not only against existing methods but also against
human performance. To facilitate the comparison with how people choose between cognitive strategies, we chose to evaluate our method in the domain of sorting. Algorithm selection is relevant to
sorting, because there are many sorting algorithms with very different characteristics. In sorting,
the input i is the sequence to be sorted. Conventional sorting algorithms are guaranteed to return
the elements in correct order. Thus, the critical difference between them is in their runtimes, and
runtime depends primarily on the number of elements to be sorted and their presortedness. The
number of elements determines the relative importance of the coefficients of low (e.g., constant and
linear) versus high-order terms (e.g., $n^2$ or $n \log n$) whose weights differ between algorithms.
Presortedness is important because it determines the relative performance of algorithms that exploit
pre-existing order, e.g., insertion sort, versus algorithms that do not, e.g., quicksort.
According to recent reviews [12, 13], there are two key methods for sorting algorithm selection:
Guo's decision-tree method [14] and Lagoudakis et al.'s recursive algorithm selection method [15].
We thus evaluated the performance of rational metareasoning against these two approaches.
3.1 Evaluation against Guo's method
Guo's method learns a decision tree, i.e. a sequence of logical rules that are applied to the list's features to determine the sorting algorithm [14]. Guo's method and our method represent inputs by
test set                 performance   95% CI            Guo's performance   p-value
Dsort5                      99.78%     [99.7%, 99.9%]         98.5%          p < 10⁻¹⁵
nearly sorted lists         99.99%     [99.3%, 100%]          99.4%          p < 10⁻¹⁵
inversely sorted lists      83.37%     [82.7%, 84.1%]         77.0%          p < 10⁻¹⁵
random permutations         99.99%     [99.2%, 100%]          85.3%          p < 10⁻¹⁵

Table 1: Evaluation of rational metareasoning against Guo's method. Performance was measured by the percentage of problems for which the method chose the fastest algorithm.
the same pair of features: $f_1 = |i|$, the length of the list to be sorted, and $f_2$, a measure of presortedness. Concretely, $f_2$ estimates the number of inversions from the number of runs in the sequence, i.e. $f_2 = \frac{f_1}{2} \cdot \mathrm{RUNS}(i)$, where $\mathrm{RUNS}(i) = |\{m : i_m > i_{m+1}\}|$. This measure of presortedness can be computed much more efficiently than the number of inversions.
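A sketch of these two features (ours):

```python
def runs(seq):
    """RUNS(i) = |{m : i_m > i_{m+1}}|, the number of descents in the sequence."""
    return sum(1 for a, b in zip(seq, seq[1:]) if a > b)

def features(seq):
    """f1 = list length; f2 = (f1 / 2) * RUNS(i), a cheap estimate of the inversions."""
    f1 = len(seq)
    return f1, (f1 / 2) * runs(seq)

print(features([3, 1, 4, 1, 5, 9, 2, 6]))   # (8, 12.0): three descents
```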
Our method learns the conditional distributions of runtime and score given these two features, and
uses them to approximate the conditional distributions given the input (Equations 3–4). We verified that our method can learn how runtime depends on sequence length and presortedness (data not shown). Next, we subjected our method to Guo's performance evaluation [14]. We thus evaluated rational metareasoning on the problem of choosing between insertion sort, shell sort, heapsort, merge sort, and quicksort. We matched our training sets to Guo's DSort4 in the number of lists (i.e. 1875) and the distributions of length and presortedness. We provided the run-time of all algorithms rather than the index of the fastest algorithm. Otherwise, the training sets were equivalent. For each of Guo's four test sets, we trained and evaluated rational metareasoning on 100 randomly generated pairs of training and test sets. The first test set mimicked Guo's Dsort5 problem set [14]. It comprised 1000 permutations of the numbers 1 to 1000. Of the 1000 sequences, 950 were random permutations and 50 were nearly-sorted. The nearly-sorted lists were created by applying 10 random pair-wise permutations to the numbers 1–1000. The sequences contained between 1 and 520 runs
(mean=260, SD=110). The second test set comprised 1000 nearly-sorted lists of length 1000. Each
list was created by applying 10 different random pair-wise permutations to the numbers 1 to 1000.
The third test set comprised 100 lists in reverse order. The fourth test set comprised 1000 random
permutations.
Table 1 compares how frequently rational metareasoning chose the best algorithm on each test set to
the results reported by Guo [14]. We estimated our method's expected performance by its average
performance and 95% credible intervals. Credible intervals (CI) were computed by Bayesian inference with a uniform prior, and they comprise the values with highest posterior density whose total
probability is 0.95. In brief, rational metareasoning significantly outperformed Guo's decision-tree
method on all four test sets. The performance gain was highest on random permutations: rational
metareasoning chose the best algorithm 99.99% rather than only 85.3% of the time.
3.2 Evaluation against Lagoudakis et al.'s method
Depending on a list's length, Lagoudakis et al.'s method chooses either insertion sort, merge sort, or quicksort [15]. If merge sort or quicksort is chosen, the same decision rule is applied to each of the two sublists it creates. The selection mapping from lengths to algorithms is determined by minimizing the expected runtime [15]. We evaluated rational metareasoning against Lagoudakis et al.'s recursive method on 21 versions of Guo's Dsort5 test set [14] with 0%, 5%, ..., 100% nearly-sorted sequences. To accommodate differences in implementation and architecture, we recomputed Lagoudakis et al.'s solution for the runtimes measured on our system. Rational metareasoning chose between the five algorithms used by Guo and was trained on Guo's Dsort4 [14]. We compare the performance of the two methods in terms of their runtime, because none of the numerous choices of recursive algorithm selection corresponds to our method's algorithm choice.
On average, our implementation of Lagoudakis et al.'s method took 102.5 ± 0.83 seconds to sort the 21 test sets, whereas rational metareasoning finished in only 27.96 ± 0.02 seconds. Rational metareasoning was thus significantly faster (p < 10⁻¹⁵). Next, we restricted the sorting algorithms available to rational metareasoning to those used by Lagoudakis et al.'s method. The runtime increased to 47.90 ± 0.02 seconds, but rational metareasoning remained significantly faster than Lagoudakis et al.'s method (p < 10⁻¹⁵). These comparisons highlight two advantages of our method: i) it can
exploit presortedness, and ii) it can be used with arbitrarily many algorithms of any kind.
3.3 Discussion
Rational metareasoning outperformed two state-of-the-art methods for sorting algorithm selection.
Our results in the domain of sorting should be interpreted as a lower bound on the performance gain
that rational metareasoning can achieve on harder problems such as combinatorial optimization,
planning, and search, where the runtimes of different algorithms are more variable [12]. Future
research might explore the application of our theory to these harder problems, take into account
heavy-tailed runtime distributions, use better representations, and incorporate active learning.
Our results show that rational metareasoning is not just theoretically sound, but it is also competitive.
We can therefore use it as a normative model of human strategy selection learning.
4 Rational metareasoning as a model of human strategy selection
Most previous theories of how humans learn when to use which cognitive strategy assume basic
model-free reinforcement learning [16–18]. The REinforcement Learning among Cognitive Strategies model (RELACS [17]) and the Strategy Selection Learning model (SSL [18]) each postulate
that people learn just one number for each cognitive strategy: the expected reward of applying it to
an unknown problem and the sum of past rewards, respectively. These theories therefore predict that
people cannot learn to instantly adapt their strategy to the characteristics of a new problem. By contrast, the Strategy Choice And Discovery Simulation (SCADS [16]) postulates that people separately
learn about a strategy's performance on particular types of problems and its overall performance and
integrate the resulting predictions by multiplication.
Our theory makes critically different assumptions about the mental representation of problems and
each strategy?s performance than the three previous psychological theories. First, rational metareasoning assumes that problems are represented by multiple features that can be continuous or binary. Second, rational metareasoning postulates that people maintain separate representations of
a strategy's execution time and the quality of its solution. Third, rational metareasoning can discover non-additive interactions between features. Furthermore, rational metareasoning postulates
that learning, prediction, and strategy choice are more rational than previously modeled. Since our
model formalizes substantially different assumptions about mental representation and information
processing, determining which theory best explains human behavior will teach us more about how
the human brain represents and solves strategy selection problems.
To understand when and how the predictions of our theory differ from the predictions of the three
existing psychological theories, we performed computer simulations of how people would choose
between sorting strategies. In order to apply the psychological theories to the selection among sorting strategies, we had to define the reward (r). We considered three notions of reward: i) correctness
(r ? {?0.1, +0.1}; these numbers are based on the SCADS model [16]), ii) correctness minus time
cost (r ? c ? t, where t is the execution time and c is a constant), and iii) reward rate (r/t). We evaluated all nine combinations of the three theories with the three notions of reward. We provided the
SCADS model with reasonable problem types: short lists (length ? 16), long lists (length ? 32),
nearly-sorted lists (less than 10% inversions), and random lists (more than 25% inversions). We
evaluated the performance of these nine models against the rational metareasoning in the selection
between seven sorting algorithms: insertion sort, selection sort, bubble sort, shell sort, heapsort,
merge sort, and quicksort. To do so, we trained each model on 1000 randomly generated lists, fixed
the learned parameters and evaluated how many lists each model could sort per second. Training and
test lists were generated by sampling. Sequence lengths were sampled from a Uniform({2, ? ? ? , u})
distribution where u was 10, 100, 1000, or 10000 with equal probability. The fraction of inversions
between subsequent numbers was drawn from a Beta(2, 1) distribution. We performed 100 trainand-test episodes. Sorting time was measured by selection time plus execution time. We estimated
the expected sorting speed for each model by averaging. We found that while rational metareasoning achieved 88.1 ? 0.7% of the highest possible sorting speed, none of the nine alternative models
achieved more than 30% of the maximal sorting speed. Thus, the time invested in metareasoning
was more than offset by the time saved with the chosen strategy.
5 How do people choose cognitive strategies?
Given that rational metareasoning outperformed the nine psychological models in strategy selection,
we asked whether the mind is more adaptive than those theories assume. To answer this question,
we designed an experiment for which rational metareasoning predicts distinctly different choices.
5.1 Pilot studies and simulations
To design an experiment that could distinguish between our competing hypotheses, we ran two
pilot studies measuring the execution time characteristics of cocktail sort (CS) and merge sort (MS), respectively. For each pilot study we recruited 100 participants on Amazon Mechanical Turk. In the
first pilot study, the interface shown in Figure 1(a) required participants to follow the step-by-step
instructions of the cocktail sort algorithm. In the second pilot study, participants had to execute
merge sort with the computer interface shown in Figure 1(b). We measured their sorting times
for lists of varying length and presortedness. Then, based on this data, we estimated how long
comparisons and moves take using each strategy. This led to the following sorting time models:
$T_{CS} = \hat{t}_{CS} + \epsilon_{CS}, \quad \hat{t}_{CS} = 19.59 + 0.19 \cdot n_{\mathrm{comparisons}} + 0.31 \cdot n_{\mathrm{moves}}, \quad \epsilon_{CS} \sim \mathcal{N}(0,\, 0.21 \cdot \hat{t}_{CS}^2)$    (13)
$T_{MS} = \hat{t}_{MS} + \epsilon_{MS}, \quad \hat{t}_{MS} = 13.98 + 1.10 \cdot n_{\mathrm{comparisons}} + 0.52 \cdot n_{\mathrm{moves}}, \quad \epsilon_{MS} \sim \mathcal{N}(0,\, 0.15 \cdot \hat{t}_{MS}^2)$    (14)
We then used these sorting time models to simulate 10⁴ candidate strategy selection experiments according to each of the 10 models. We found several potential experiments for which rational metareasoning makes qualitatively different predictions than all of the alternative psychological theories,
and we chose the one that achieved the best compromise between discriminability and duration.
According to the two runtime models (Equations 13–14) and how many comparisons and moves
each algorithm would perform, people should choose merge sort for long and nearly inversely sorted
sequences and cocktail sort for sequences that are either nearly-sorted or short. For the chosen experimental design, the three existing psychological theories predicted that people would fail to learn
this contingency; see Figure 2. By contrast, rational metareasoning predicted that adaptive strategy
selection would be evident from the choices of more than 70% of our participants. Therefore, the
chosen experimental design was well suited to discriminate rational metareasoning from previous
theories. The next section describes the strategy choice experiment in detail.
5.2 Methods
The experiment was run online¹ with 100 participants recruited on Amazon Mechanical Turk and
paid $1.25. The experiment comprised three stages: training, choice, and execution. In the training
stage, each participant was taught to sort lists of numbers by executing the two contrasting strategies tested in the pilot studies: cocktail sort and merge sort. On each of the 11 training trials, the
participant was instructed which strategy to use. The interface enforced that he or she correctly
performed each step of that strategy. The interfaces were the same as in the pilot studies (see Figure
1). For both strategies, the chosen lists comprised nearly inversely sorted lists of length 4, 8, and
16 and nearly-sorted lists of length 16 and 32. For the cocktail sort strategy, each participant was
also trained on a nearly inversely sorted list with 32 elements. Participants first practiced cocktail
sort for five trials and then practiced merge sort. The last two trials contrasted the two strategies
on long, nearly-sorted sequences with identical length. Nearly-sorted lists were created by inserting
a randomly selected element at a different random location of an ascending list. Nearly inversely
sorted lists were created by applying the same procedure to a descending list. In the choice phase,
participants were shown 18 test lists. For each list, they were asked to choose which sorting strategy
they would use, if they had to sort this sequence. Participants were told that they would have to sort
one randomly selected list with the strategy they chose for it. The test lists comprised six instances
of each of three kinds of sequences: long and nearly inversely sorted, long and nearly-sorted, and
short and nearly-sorted. The order of these sequences was randomized across participants. In the
execution phase, one of the 12 short lists was randomly selected, and the participant had to sort it
using the strategy he or she had previously chosen for that list.
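For concreteness, a minimal sketch of the list-generation procedure described above (the helper name and seed are ours, not part of the experiment):

```python
import random

# Take an ascending (or descending) list, remove one randomly chosen element,
# and reinsert it at a different random position, as described above.
def nearly_sorted(n, descending=False, rng=random.Random(0)):
    lst = list(range(n, 0, -1)) if descending else list(range(1, n + 1))
    i = rng.randrange(n)
    item = lst.pop(i)
    j = rng.randrange(n - 1)
    if j >= i:                 # ensure the new position differs from the old one
        j += 1
    lst.insert(j, item)
    return lst

print(nearly_sorted(8))                    # nearly-sorted ascending list
print(nearly_sorted(8, descending=True))   # nearly inversely sorted list
```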
To derive theoretical predictions, we gave each model the same information as our participants.
¹ http://cocosci.berkeley.edu/mturk/falk/StrategyChoice/consent.html
Figure 1: Interfaces used to train participants to perform (a) cocktail sort and (b) merge sort in the
behavioral experiment.
5.3 Results
Our participants took 24.7 ± 6.7 minutes to complete the experiment (mean ± standard deviation).
The median number of errors per training sequence was 2.45, and 95% of our participants made
between 0.73 and 12.55 errors per training sequence. In the choice phase, 83% of our participants
were more likely to choose merge sort when it was the superior strategy (compared to trials when
it was not). We can thus be 95% confident that the population frequency of this adaptive strategy
choice pattern lies between 74.9% and 89.4%; see Figure 2(b). This adaptive choice pattern was
significantly more frequent than could be expected if strategy choice was independent of the lists'
features (p < 10⁻¹¹). This is consistent with our model's predictions but inconsistent with the
predictions of the RELACS, SSL, and SCADS models. Only rational metareasoning correctly predicted that the frequency of the adaptive strategy choice pattern would be above chance (p < 10⁻⁵
for our model and p ≈ 0.46 for all other models). Figure 2(b) compares the proportion of participants exhibiting this pattern with the models' predictions. The non-overlapping credible intervals
suggest that we can be 95% confident that the choices of people and rational metareasoning are more
adaptive than those predicted by the three previous theories (all p < 0.001). Yet we can also be 95%
confident that, at least in our experiment, people choose their strategy even more adaptively than
rational metareasoning (p ≈ 0.02).
On average, our participants chose merge sort for 4.9 of the 6 long and nearly inversely sorted
sequences (81.67% of the time, 95% credible interval: [77.8%, 93.0%]), but for only 1.79 of the
6 nearly-sorted long sequences (29.83% of the time, 95% credible interval: [12.9%, 32.4%]), and
for only 1.62 of the 6 nearly-sorted short sequences (27.00% of the time, 95% credible interval:
[16.7%, 40.4%]); see Figure 2(a). Thus, when merge sort was superior, our participants chose it
significantly more often than cocktail sort (p < 10⁻¹⁰). But, when merge sort was inferior, they
chose cocktail sort more often than merge sort (p < 10⁻⁷).
5.4 Discussion
We evaluated our rational metareasoning model of human strategy selection against nine models
instantiating three psychological theories. While those nine models completely failed to predict our
participants' adaptive strategy choices, the predictions of rational metareasoning were qualitatively
correct, and its choices came close to human performance. The RELACS and the SSL model failed
because they do not represent problem features and do not learn about how those features affect each
strategy's performance. The model-free learning assumed by SSL and RELACS was maladaptive
because cocktail sort was faster for most training sequences, but was substantially slower for the
long, nearly inversely sorted test sequences. The SCADS model failed mainly because its suboptimal learning mechanism was fooled by the slight imbalance between the training examples for
cocktail sort and merge sort, but also because it can neither extrapolate nor capture the non-additive
interaction between length and presortedness. Instead, human-like adaptive strategy selection can
be achieved by learning to predict each strategy's execution time and accuracy given features of
the problem. To further elucidate the human mind's strategy selection learning algorithm, future
research will evaluate our theory against an instance-based learning model [19].

Figure 2: Pattern of strategy choices: (a) Relative frequency with which humans and models chose
merge sort by list type. (b) Percentage of participants who chose merge sort more often when it was
superior than when it was not. Error bars indicate 95% credible intervals.
Our participants outperformed the RELACS, SSL, and SCADS models, as well as rational metareasoning, in our strategy selection task. This suggests that neither psychology nor AI can yet fully
account for people's adaptive strategy selection. People's superior performance could be enabled by
a more powerful representation of the sequences, perhaps one that includes reverse-sortedness, or
the ability to choose strategies based on mental simulations of their execution on the presented list.
These are just two of many possibilities and more experiments are needed to unravel people's superior performance. In contrast to the sorting strategies in our experiment, most cognitive strategies
operate on internal representations. However, there are two reasons to expect our conclusions to
transfer: First, the metacognitive principles of strategy selection might be domain general. Second,
the strategies people use to order things mentally might be based on their sorting strategies in the
same way in which mental arithmetic is based on calculating with fingers or on paper.
6 Conclusions
Since neither psychology nor AI can yet fully account for people's adaptive strategy selection, further research into how people learn to select cognitive strategies may yield not only a better understanding of human intelligence, but also better solutions to the algorithm selection problem in
computer science and artificial intelligence. Our results suggest that reasoning about which strategy
to use might contribute to people's adaptive intelligence and can save more time than it takes. Since
our framework is very general, it can be applied to strategy selection in all areas of human cognition,
including judgment and decision-making [1, 3], as well as to the discovery of novel strategies [2].
Future research will investigate human strategy selection learning in more ecological domains such
as mental arithmetic, decision-making, and problem solving, where people have to trade off speed
versus accuracy. In conclusion, rational metareasoning is a promising theoretical framework for
reverse-engineering people's capacity for adaptive strategy selection.
Acknowledgments. This work was supported by ONR MURI N00014-13-1-0341.
References
[1] G. Gigerenzer and R. Selten, Bounded Rationality: The Adaptive Toolbox. MIT Press, 2002.
[2] R. S. Siegler, "Strategic development," Trends in Cognitive Sciences, vol. 3, pp. 430-435, Nov. 1999.
[3] J. W. Payne, J. R. Bettman, and E. J. Johnson, "Adaptive strategy selection in decision making," Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 14, no. 3, p. 534, 1988.
[4] J. N. Marewski and D. Link, "Strategy selection: An introduction to the modeling challenge," Wiley Interdisciplinary Reviews: Cognitive Science, vol. 5, no. 1, pp. 39-59, 2014.
[5] J. R. Rice, "The algorithm selection problem," Advances in Computers, vol. 15, pp. 65-118, 1976.
[6] S. Russell and E. Wefald, "Principles of metareasoning," Artificial Intelligence, vol. 49, no. 1-3, pp. 361-395, 1991.
[7] N. Hay, S. Russell, D. Tolpin, and S. Shimony, "Selecting computations: Theory and applications," in Uncertainty in Artificial Intelligence: Proceedings of the Twenty-Eighth Conference (N. de Freitas and K. Murphy, eds.), Corvallis, Oregon, USA: AUAI Press, 2012.
[8] D. Harada and S. Russell, "Meta-level reinforcement learning," in NIPS'98 Workshop on Abstraction and Hierarchy in Reinforcement Learning, 1998.
[9] T. Jaakkola and M. Jordan, "A variational approach to Bayesian logistic regression models and their extensions," in Sixth International Workshop on Artificial Intelligence and Statistics, 1997.
[10] R. E. Kass and A. E. Raftery, "Bayes factors," Journal of the American Statistical Association, vol. 90, pp. 773-795, June 1995.
[11] W. D. Penny and G. R. Ridgway, "Efficient posterior probability mapping using Savage-Dickey ratios," PLoS ONE, vol. 8, no. 3, e59655, 2013.
[12] L. Kotthoff, "Algorithm selection for combinatorial search problems: A survey," AI Magazine, 2014.
[13] K. A. Smith-Miles, "Cross-disciplinary perspectives on meta-learning for algorithm selection," ACM Comput. Surv., vol. 41, Jan. 2009.
[14] H. Guo, Algorithm Selection for Sorting and Probabilistic Inference: A Machine Learning-Based Approach. PhD thesis, Kansas State University, 2003.
[15] M. G. Lagoudakis, M. L. Littman, and R. Parr, "Selecting the right algorithm," in Proceedings of the 2001 AAAI Fall Symposium Series: Using Uncertainty within Computation, Cape Cod, MA, 2001.
[16] J. Shrager and R. S. Siegler, "SCADS: A model of children's strategy choices and strategy discoveries," Psychological Science, vol. 9, pp. 405-410, Sept. 1998.
[17] I. Erev and G. Barron, "On adaptation, maximization, and reinforcement learning among cognitive strategies," Psychological Review, vol. 112, pp. 912-931, Oct. 2005.
[18] J. Rieskamp and P. E. Otto, "SSL: A theory of how people learn to select strategies," Journal of Experimental Psychology: General, vol. 135, pp. 207-236, May 2006.
[19] C. Gonzalez and V. Dutt, "Instance-based learning: Integrating sampling and repeated decisions from experience," Psychological Review, vol. 118, no. 4, pp. 523-551, 2011.
5,029 | 5,553 | A Framework for Testing Identifiability
of Bayesian Models of Perception
Luigi Acerbi¹,²   Wei Ji Ma²   Sethu Vijayakumar¹
¹ School of Informatics, University of Edinburgh, UK
² Center for Neural Science & Department of Psychology, New York University, USA
{luigi.acerbi,weijima}@nyu.edu   [email protected]
Abstract
Bayesian observer models are very effective in describing human performance in
perceptual tasks, so much so that they are trusted to faithfully recover hidden mental representations of priors, likelihoods, or loss functions from the data. However,
the intrinsic degeneracy of the Bayesian framework, as multiple combinations of
elements can yield empirically indistinguishable results, prompts the question of
model identifiability. We propose a novel framework for a systematic testing of
the identifiability of a significant class of Bayesian observer models, with practical applications for improving experimental design. We examine the theoretical
identifiability of the inferred internal representations in two case studies. First,
we show which experimental designs work better to remove the underlying degeneracy in a time interval estimation task. Second, we find that the reconstructed
representations in a speed perception task under a slow-speed prior are fairly robust.
1 Motivation
Bayesian Decision Theory (BDT) has been traditionally used as a benchmark of ideal perceptual
performance [1], and a large body of work has established that humans behave close to Bayesian
observers in a variety of psychophysical tasks (see e.g. [2, 3, 4]). The efficacy of the Bayesian
framework in explaining a huge set of diverse behavioral data suggests a stronger interpretation
of BDT as a process model of perception, according to which the formal elements of the decision
process (priors, likelihoods, loss functions) are independently represented in the brain and shared
across tasks [5, 6]. Importantly, such mental representations, albeit not directly accessible to the
experimenter, can be tentatively recovered from the behavioral data by 'inverting' a model of the
decision process (e.g., priors [7, 8, 9, 10, 11, 12, 13, 14], likelihood [9], and loss functions [12, 15]).
The ability to faithfully reconstruct the observer's internal representations is key to the understanding
of several outstanding issues, such as the complexity of statistical learning [11, 12, 16], the nature
of mental categories [10, 13], and linking behavioral to neural representations of uncertainty [4, 6].
In spite of these successes, the validity of the conclusions reached by fitting Bayesian observer
models to the data can be questioned [17, 18]. A major issue is that the inverse mapping from
observed behavior to elements of the decision process is not unique [19]. To see this degeneracy,
consider a simple perceptual task in which the observer is exposed to stimulus s that induces a noisy
sensory measurement x. The Bayesian observer reports the optimal estimate $\hat{s}$ that minimizes his
or her expected loss, where the loss function $L(s, \hat{s})$ encodes the loss (or cost) for choosing $\hat{s}$ when
the real stimulus is s. The optimal estimate for a given measurement x is computed as follows [20]:
$$\hat{s}(x) = \arg\min_{\hat{s}} \int q_\text{meas}(x|s)\, q_\text{prior}(s)\, L(s, \hat{s})\, ds \qquad (1)$$
where $q_\text{prior}(s)$ is the observer's prior density over stimuli and $q_\text{meas}(x|s)$ the observer's sensory
likelihood (as a function of s). Crucially, for a given x, the solution of Eq. 1 is the same for any
triplet of prior $q_\text{prior}(s)\,\varphi_1(s)$, likelihood $q_\text{meas}(x|s)\,\varphi_2(s)$, and loss function $L(\hat{s}, s)\,\varphi_3(s)$, where
the $\varphi_i(s)$ are three generic functions such that $\prod_{i=1}^{3} \varphi_i(s) = c$, for a constant $c > 0$. This analysis
shows that the 'inverse problem' is ill-posed, as multiple combinations of priors, likelihoods and
loss functions yield identical behavior [19], even before considering other confounding issues, such
as latent states. If uncontrolled, this redundancy of solutions may condemn the Bayesian models of
perception to a severe form of model non-identifiability that prevents the reliable recovery of model
components, and in particular the sought-after internal representations, from the data.
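The degeneracy can be checked numerically. Below is a minimal sketch (illustrative priors, likelihoods, and grid, not the paper's code) showing that a φ-transformed triplet yields the same optimal estimate as the base triplet:

```python
import numpy as np

# Discretize s on a grid and compare the optimal estimate under a base triplet
# (prior, likelihood, quadratic loss) with the estimate under a transformed
# triplet whose factors phi_i multiply to a constant (here phi_3 = 1).
s = np.linspace(0.1, 10.0, 500)                   # stimulus grid
x = 4.0                                           # one fixed sensory measurement

prior = np.exp(-0.5 * ((s - 5.0) / 2.0) ** 2)     # Gaussian prior (unnormalized)
lik = np.exp(-0.5 * ((x - s) / 1.0) ** 2)         # Gaussian likelihood in s

def optimal_estimate(prior, lik):
    """Grid minimizer of the expected quadratic loss in Eq. 1."""
    expected = [np.sum(lik * prior * (c - s) ** 2) for c in s]
    return s[int(np.argmin(expected))]

g = 1.0 + 0.5 * np.sin(s)                         # phi_1 = g, phi_2 = 1/g
print(optimal_estimate(prior, lik))               # base triplet
print(optimal_estimate(prior * g, lik / g))       # transformed triplet: identical
```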
In practice, the degeneracy of Eq. 1 can be prevented by enforcing constraints on the shape that the
internal representations are allowed to take. Such constraints include: (a) theoretical considerations
(e.g., that the likelihood emerges from a specific noise model [21]); (b) assumptions related to
the experimental layout (e.g., that the observer will adopt the loss function imposed by the reward
system of the task [3]); (c) additional measurements obtained either in independent experiments or
in distinct conditions of the same experiment (e.g., through Bayesian transfer [5]). Crucially, both
(b) and (c) are under partial control of the experimenter, as they depend on the experimental design
(e.g., choice of reward system, number of conditions, separate control experiments). Although
several approaches have been used or proposed to suppress the degeneracy of Bayesian models of
perception [12, 19], there has been no systematic analysis, neither empirical nor theoretical, of
their effectiveness, nor a framework to perform such a study a priori, before running an experiment.
This paper aims to fill this gap for a large class of psychophysical tasks. Similar issues of model
non-identifiability are not new to psychology [22], and generic techniques of analysis have been
proposed (e.g., [23]). Here we present an efficient method that exploits the common structure shared
by many Bayesian models of sensory estimation. First, we provide a general framework that allows a
modeller to perform a systematic, a priori investigation of identifiability, that is the ability to reliably
recover the parameters of interest, for a chosen Bayesian observer model. Second, we show how,
by comparing identifiability within distinct ideal experimental setups, our framework can be used
to improve experimental design. In Section 2 we introduce a novel class of observer models that is
both flexible and efficient, key requirements for the subsequent analysis. In Section 3 we describe
a method to efficiently explore identifiability of a given observer model within our framework. In
Section 4 we show an application of our technique to two well-known scenarios in time perception
[24] and speed perception [9]. We conclude with a few remarks in Section 5.
2 Bayesian observer model
Here we introduce a continuous class of Bayesian observer models parametrized by vector θ. Each
value of θ corresponds to a specific observer that can be used to model the psychophysical task of
interest. The current model (class) extends previous work [12, 14] by encompassing any sensorimotor estimation task in which a one-dimensional stimulus magnitude variable s, such as duration,
distance, speed, etc., is directly estimated by the observer. This is a fundamental experimental condition representative of several studies in the field (e.g., [7, 9, 12, 24, 14]). With minor modifications,
the model can also cover angular variables such as orientation (for small errors) [8, 11] and multidimensional variables when symmetries make the actual inference space one-dimensional [25]. The
main novel feature of the presented model is that it covers a large representational basis with a single parametrization, while still allowing fast computation of the observer's behavior, both necessary
requirements to permit an exploration of the complex model space, as described in Section 3.
The generic observer model is constructed in four steps (Figure 1 a & b): 1) the sensation stage
describes how the physical stimulus s determines the internal measurement x; 2) the perception stage
describes how the internal measurement x is combined with the prior to yield a posterior distribution;
3) the decision-making stage describes how the posterior distribution and loss function guide the
choice of an 'optimal' estimate $\hat{s}$ (possibly corrupted by lapses); and finally 4) the response stage
describes how the optimal estimate leads to the observed response r.
2.1 Sensation stage
For computational convenience, we assume that the stimulus $s \in \mathbb{R}^+$ (the task space) comes from
a discrete experimental distribution of stimuli $s_i$ with frequencies $P_i$, with $P_i > 0$, $\sum_i P_i = 1$
for $1 \le i \le N_\text{exp}$. Discrete distributions of stimuli are common in psychophysics, and continuous
distributions can be 'binned' and approximated up to the desired precision by increasing $N_\text{exp}$.
[Figure 1: diagram panels omitted. (a) Generative model: $s \to x \to \hat{s} \to r$ via $p_\text{meas}(x|s)$, $p_\text{est}(\hat{s}|x)$, $p_\text{report}(r|\hat{s})$. (b) Observer's internal model: $q_\text{meas}(x|t)$, $q_\text{prior}(t)$, expected-loss minimization with lapse probability λ, yielding estimate $\hat{t}$.]
Figure 1: Observer model. Graphical model of a sensorimotor estimation task, as seen from the
outside (a), and from the subjective point of view of the observer (b). a: Objective generative
model of the task. Stimulus s induces a noisy sensory measurement x in the observer, who decides
for estimate $\hat{s}$ (see b). The recorded response r is further perturbed by reporting noise. Shaded
nodes denote experimentally accessible variables. b: Observer's internal model of the task. The
observer performs inference in an internal measurement space in which the unknown stimulus is
denoted by t (with $t = f(s)$). The observer either chooses the subjectively optimal value of t, given
internal measurement x, by minimizing the expected loss, or simply lapses with probability λ. The
observer's chosen estimate $\hat{t}$ is converted to task space through the inverse mapping $\hat{s} = f^{-1}(\hat{t})$.
The whole process in this panel is encoded in (a) by the estimate distribution $p_\text{est}(\hat{s}|x)$.
Due to noise in the sensory systems, stimulus s induces an internal measurement $x \in \mathbb{R}$ according to measurement distribution $p_\text{meas}(x|s)$ [20]. In general, the magnitude of sensory noise may be
stimulus-dependent in task space, in which case the shape of the likelihood would change from point
to point, which is unwieldy for subsequent computations. We want instead to find a transformed
space in which the scale of the noise is stimulus-independent and the likelihood translationally invariant [9] (see Supplementary Material). We assume that such change of variables is performed by
a function $f(s): s \to t$ that monotonically maps stimulus s from task space into $t = f(s)$, which
lives with x in an internal measurement space. We assume for $f(s)$ the following parametric form:
$$f(s) = A \ln\left[1 + \left(\frac{s}{s_0}\right)^d\right] + B \quad \text{with inverse} \quad f^{-1}(t) = s_0\left(e^{\frac{t - B}{A}} - 1\right)^{1/d} \qquad (2)$$
where A and B are chosen, without loss of generality, such that the discrete distribution of stimuli
mapped in internal space, $\{f(s_i)\}$ for $1 \le i \le N_\text{exp}$, has range $[-1, 1]$. The parametric form of the
sensory map in Eq. 2 can approximate both the Weber-Fechner law and Stevens' law, for different
values of base noise magnitude $s_0$ and power exponent d (see Supplementary Material).
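As an illustration, a minimal sketch of the sensory mapping of Eq. 2 (the parameter values are placeholders; in the paper A and B are fixed from the stimulus range):

```python
import numpy as np

# Sensory mapping f and its inverse, following Eq. 2.
def make_sensory_map(s0=0.35, d=1.0, A=1.0, B=0.0):
    def f(s):
        return A * np.log1p((s / s0) ** d) + B
    def f_inv(t):
        return s0 * np.expm1((t - B) / A) ** (1.0 / d)
    return f, f_inv

f, f_inv = make_sensory_map()
s = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # e.g., speeds in deg/s
assert np.allclose(f_inv(f(s)), s)         # the inverse recovers s
# For s >> s0 the map is approximately logarithmic (Weber-Fechner regime).
```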
We determine the shape of $p_\text{meas}(x|s)$ with a maximum-entropy approach by fixing the first four
moments of the distribution, and under the rather general assumptions that the sensory measurement is unimodal and centered on the stimulus in internal measurement space. For computational
convenience, we express $p_\text{meas}(x|s)$ as a mixture of (two) Gaussians in internal measurement space:
$$p_\text{meas}(x|s) = \pi\, N\!\left(x \,\middle|\, f(s) + \mu_1, \sigma_1^2\right) + (1 - \pi)\, N\!\left(x \,\middle|\, f(s) + \mu_2, \sigma_2^2\right) \qquad (3)$$
where $N(x|\mu, \sigma^2)$ is a normal distribution with mean μ and variance σ² (in this paper we consider
a two-component mixture but derivations easily generalize to more components). The parameters in
Eq. 3 are partially determined by specifying the first four central moments: $\mathbb{E}[x] = f(s)$, $\text{Var}[x] = \sigma^2$, $\text{Skew}[x] = \gamma$, $\text{Kurt}[x] = \kappa$; where σ, γ, κ are free parameters. The remaining degrees of freedom
(one, for two Gaussians) are fixed by picking a distribution that satisfies unimodality and locally
maximizes the differential entropy (see Supplementary Material). The sensation model represented
by Eqs. 2 and 3 allows to express a large class of sensory models in the psychophysics literature,
including for instance stimulus-dependent noise [9, 12, 24] and 'robust' mixture models [21, 26].
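A minimal sketch of the two-Gaussian measurement distribution of Eq. 3 follows; the mixture parameters are passed in directly for illustration, whereas the paper derives them from target moments plus a maximum-entropy condition:

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def p_meas(x, t, pi=0.7, mu1=-0.05, s1=0.15, mu2=0.12, s2=0.3):
    """Measurement density p_meas(x|s) of Eq. 3, with t = f(s)."""
    return pi * normal_pdf(x, t + mu1, s1) + (1 - pi) * normal_pdf(x, t + mu2, s2)

xs = np.linspace(-2.0, 2.0, 1001)
density = p_meas(xs, t=0.0)
print(np.sum(density) * (xs[1] - xs[0]))   # ~1.0: a proper density
```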
2.2 Perceptual stage
Without loss of generality, we represent the observer's prior distribution $q_\text{prior}(t)$ as a mixture of M
dense, regularly spaced Gaussian distributions in internal measurement space:
$$q_\text{prior}(t) = \sum_{m=1}^{M} w_m\, N\!\left(t \,\middle|\, \mu_\text{min} + (m - 1)a,\, a^2\right), \qquad a \equiv \frac{\mu_\text{max} - \mu_\text{min}}{M - 1} \qquad (4)$$
where $w_m$ are the mixing weights, a the lattice spacing and $[\mu_\text{min}, \mu_\text{max}]$ the range in internal space
over which the prior is defined (chosen 50% wider than the true stimulus range). Eq. 4 allows
the modeller to approximate any observer's prior, where M regulates the fine-grainedness of the
representation and is determined by computational constraints (for all our analyses we fix M = 15).
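For concreteness, a sketch of the lattice prior of Eq. 4 with arbitrary, normalized mixing weights (the random weights and seed are ours, purely for illustration):

```python
import numpy as np

def lattice_prior(mu_min=-1.5, mu_max=1.5, M=15, seed=0):
    rng = np.random.default_rng(seed)
    a = (mu_max - mu_min) / (M - 1)               # lattice spacing (Eq. 4)
    centers = mu_min + a * np.arange(M)
    w = rng.random(M)
    w /= w.sum()                                  # normalized mixing weights
    def q_prior(t):
        t = np.atleast_1d(t)[:, None]
        comp = np.exp(-0.5 * ((t - centers) / a) ** 2) / (a * np.sqrt(2 * np.pi))
        return comp @ w
    return q_prior

q_prior = lattice_prior()
ts = np.linspace(-2.0, 2.0, 1001)
print(np.sum(q_prior(ts)) * (ts[1] - ts[0]))      # ~1.0: integrates to one
```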
For simplicity, we assume that the observer's internal representation of the likelihood, $q_\text{meas}(x|t)$, is
expressed in the same measurement space and takes again the form of a unimodal mixture of two
Gaussians, Eq. 3, although with possibly different variance, skewness and kurtosis (respectively,
$\tilde{\sigma}^2$, $\tilde{\gamma}$ and $\tilde{\kappa}$) than the true likelihood. We write the observer's posterior distribution as $q_\text{post}(t|x) = \frac{1}{Z} q_\text{prior}(t)\, q_\text{meas}(x|t)$, with Z the normalization constant.
2.3 Decision-making stage
According to Bayesian Decision Theory (BDT), the observer's 'optimal' estimate corresponds to the
value of the stimulus that minimizes the expected loss, with respect to loss function $L(t, \hat{t})$, where
t is the true value of the stimulus and $\hat{t}$ its estimate. In general the loss could depend on t and $\hat{t}$ in
different ways, but for now we assume a functional dependence only on the stimulus difference in
internal measurement space, $\hat{t} - t$. The (subjectively) optimal estimate is:
$$\hat{t}(x) = \arg\min_{\hat{t}} \int q_\text{post}(t|x)\, L(\hat{t} - t)\, dt \qquad (5)$$
where the integral on the r.h.s. represents the expected loss. We make the further assumption that
the loss function is well-behaved, that is smooth, with a unique minimum at zero (i.e., the loss is
minimal when the estimate matches the true stimulus), and with no other local minima. As before,
we adopt a maximum-entropy approach and we restrict ourselves to the class of loss functions that
can be described as mixtures of two (inverted) Gaussians:
$$L(\hat{t} - t) = -\pi^\ell\, N\!\left(\hat{t} - t \,\middle|\, \mu_1^\ell, {\sigma_1^\ell}^2\right) - (1 - \pi^\ell)\, N\!\left(\hat{t} - t \,\middle|\, \mu_2^\ell, {\sigma_2^\ell}^2\right). \qquad (6)$$
Although the loss function is not a distribution, we find it convenient to parametrize it in terms
of statistics of a corresponding unimodal distribution obtained by flipping Eq. 6 upside down:
$\text{Mode}[t'] = 0$, $\text{Var}[t'] = \sigma_\ell^2$, $\text{Skew}[t'] = \gamma_\ell$, $\text{Kurt}[t'] = \kappa_\ell$; with $t' \equiv \hat{t} - t$. Note that we fix
the location of the mode of the mixture of Gaussians so that the global minimum of the loss is at
zero. As before, the remaining free parameter is fixed by taking a local maximum-entropy solution.
A single inverted Gaussian already allows to express a large variety of losses, from a delta function
(MAP strategy) for $\sigma_\ell \to 0$ to a quadratic loss for $\sigma_\ell \to \infty$ (in practice, for $\sigma_\ell \gtrsim 1$), and it has been
shown to capture human sensorimotor behavior quite well [15]. Eq. 6 further extends the range of
describable losses to asymmetric and more or less peaked functions. Crucially, Eqs. 3, 4, 5 and 6
combined yield an analytical expression for the expected loss that is a mixture of Gaussians (see
Supplementary Material) that allows for a fast numerical solution [14, 27].
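A minimal numerical sketch of the decision stage (Eq. 5), using a single inverted Gaussian as a special case of the loss in Eq. 6 (grid and parameter values are illustrative):

```python
import numpy as np

def inverted_gaussian_loss(delta, sigma_l=0.5):
    # One-component special case of the mixture loss in Eq. 6.
    return -np.exp(-0.5 * (delta / sigma_l) ** 2)

def optimal_t_hat(t_grid, q_post, loss=inverted_gaussian_loss):
    """Brute-force grid minimization of the expected loss (Eq. 5)."""
    dt = t_grid[1] - t_grid[0]
    deltas = t_grid[:, None] - t_grid[None, :]    # deltas[j, i] = t_hat_j - t_i
    expected = (loss(deltas) * q_post[None, :]).sum(axis=1) * dt
    return t_grid[np.argmin(expected)]

t_grid = np.linspace(-2.0, 2.0, 801)
q_post = np.exp(-0.5 * ((t_grid - 0.3) / 0.2) ** 2)
q_post /= q_post.sum() * (t_grid[1] - t_grid[0])  # normalize the posterior
print(optimal_t_hat(t_grid, q_post))              # ~0.3 for this symmetric case
```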
We allow the possibility that the observer may occasionally deviate from BDT due to lapses with
probability $\lambda \ge 0$. In the case of lapse, the observer's estimate $\hat{t}$ is drawn randomly from the prior
[11, 14]. The combined stochastic estimator with lapse in task space has distribution:
$$p_\text{est}(\hat{s}|x) = (1 - \lambda)\, \delta\!\left(\hat{s} - f^{-1}(\hat{t}(x))\right) + \lambda\, q_\text{prior}(\hat{s})\, |f'(\hat{s})| \qquad (7)$$
where $f'(\hat{s})$ is the derivative of the mapping in Eq. 2 (see Supplementary Material).
2.4 Response stage
We assume that the observer's response r is equal to the observer's estimate corrupted by independent normal noise in task space, due to motor error and other residual sources of variability:
$$p_\text{report}(r|\hat{s}) = N\!\left(r \,\middle|\, \hat{s}, \sigma_\text{report}^2(\hat{s})\right) \qquad (8)$$
where we choose a simple parametric form for the variance: $\sigma_\text{report}^2(s) = \rho_0^2 + \rho_1^2 s^2$, that is the sum
of two independent noise terms (constant noise plus some noise that grows with the magnitude of
the stimulus). In our current analysis we are interested in observer models of perception, so we do
not explicitly model details of the motor aspect of the task and we do not include the consequences
of response error into the decision-making part of the model (Eq. 5).
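The generative chain of Eqs. 3, 7 and 8 can be summarized by a forward simulator. The sketch below is illustrative: the mapping, estimator and prior sampler are injected as callables, and all numeric defaults are placeholders rather than fitted values:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_trial(s, f, f_inv, t_hat_of_x, sample_prior_t,
                   sigma=0.1, lam=0.03, rho0=0.01, rho1=0.07):
    # Sensation: single-Gaussian special case of Eq. 3 in internal space.
    x = f(s) + sigma * rng.normal()
    if rng.random() < lam:                        # lapse: estimate drawn from prior (Eq. 7)
        s_hat = f_inv(sample_prior_t())
    else:                                         # BDT estimate (Eq. 5)
        s_hat = f_inv(t_hat_of_x(x))
    sigma_rep = np.sqrt(rho0 ** 2 + (rho1 * s_hat) ** 2)
    return s_hat + sigma_rep * rng.normal()       # reported response r (Eq. 8)
```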
Finally, the main observable that the experimenter can measure is the response probability density,
$p_\text{resp}(r|s; \theta)$, of a response r for a given stimulus s and observer's parameter vector θ [12]:
$$p_\text{resp}(r|s; \theta) = \int N\!\left(r \,\middle|\, \hat{s}, \sigma_\text{report}^2(\hat{s})\right) p_\text{est}(\hat{s}|x)\, p_\text{meas}(x|s)\, d\hat{s}\, dx, \qquad (9)$$
obtained by marginalizing over unobserved variables (see Figure 1 a), and which we can compute
through Eqs. 3-8. An observer model is fully characterized by parameter vector θ:
$$\theta = \left\{ \sigma, \gamma, \kappa, s_0, d, \tilde{\sigma}, \tilde{\gamma}, \tilde{\kappa}, \sigma_\ell, \gamma_\ell, \kappa_\ell, \{w_m\}_{m=1}^{M}, \rho_0, \rho_1, \lambda \right\}. \qquad (10)$$
An experimental design is specified by a reference observer model $\theta^*$, an experimental distribution
of stimuli (a discrete set of $N_\text{exp}$ stimuli $s_i$, each with relative frequency $P_i$), and possibly a subset
of parameters that are assumed to be equal to some a priori or experimentally measured values
during the inference. For experiments with multiple conditions, an observer model typically shares
several parameters across conditions. The reference observer $\theta^*$ represents a 'typical' observer for
the idealized task under examination; its parameters are determined from pilot experiments, the
literature, or educated guesses. We are ready now to tackle the problem of identifiability of the
parameters of $\theta^*$ within our framework for a given experimental design.
3 Mapping a priori identifiability
Two observer models θ and $\theta^*$ are a priori practically non-identifiable if they produce similar response probability densities $p_\text{resp}(r|s_i; \theta)$ and $p_\text{resp}(r|s_i; \theta^*)$ for all stimuli $s_i$ in the experiment.
Specifically, we assume that data are generated by the reference observer $\theta^*$ and we ask what is
the chance that a randomly generated dataset D of a fixed size $N_\text{tr}$ will instead provide support for
observer θ. For one specific dataset D, a natural way to quantify support would be the posterior
probability of a model given the data, $\Pr(\theta|D)$. However, randomly generating a large number of
datasets so as to approximate the expected value of $\Pr(\theta|D)$ over all datasets, in the spirit of previous
work on model identifiability [23], becomes intractable for complex models such as ours.
Instead, we define the support for observer model θ, given dataset D, as its log likelihood,
$\log \Pr(D|\theta)$. The log (marginal) likelihood is a widespread measure of evidence in model comparison, from sampling algorithms to metrics such as AIC, BIC and DIC [28]. Since we know the
generative model of the data, $\Pr(D|\theta^*)$, we can compute the expected support for model θ as:
$$\langle \log \Pr(D|\theta) \rangle = \int_{|D| = N_\text{tr}} \log \Pr(D|\theta)\, \Pr(D|\theta^*)\, dD. \qquad (11)$$
The formal integration over all possible datasets with fixed number of trials $N_\text{tr}$ yields:
$$\langle \log \Pr(D|\theta) \rangle = -N_\text{tr} \sum_{i=1}^{N_\text{exp}} P_i \cdot D_\text{KL}\!\left( p_\text{resp}(r|s_i; \theta^*)\, \big\|\, p_\text{resp}(r|s_i; \theta) \right) + \text{const} \qquad (12)$$
where $D_\text{KL}(\cdot\|\cdot)$ is the Kullback-Leibler (KL) divergence between two distributions, and the constant is an entropy term that does not affect our subsequent analysis, not depending on θ (see Supplementary Material for the derivation). Crucially, $D_\text{KL}$ is non-negative, and zero only when the
two distributions are identical. The asymmetry of the KL-divergence captures the different status of
$\theta^*$ and θ (that is, we measure differences only on datasets generated by $\theta^*$). Eq. 12 quantifies the
average support for model θ given true model $\theta^*$, which we use as a proxy to assess model identifiability. As an empirical tool to explore the identifiability landscape, we define the approximate
expected posterior density as:
$$E(\theta|\theta^*) \propto e^{\langle \log \Pr(D|\theta) \rangle} \qquad (13)$$
and we sample from Eq. 13 via MCMC. Clearly, $E(\theta|\theta^*)$ is maximal for $\theta = \theta^*$ and generally
high for regions of the parameter space empirically close to the predictions of $\theta^*$. Moreover, the
peakedness of $E(\theta|\theta^*)$ is modulated by the number of trials $N_\text{tr}$ (the more the trials, the more
information to discriminate between models).
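A sketch of how Eq. 12 translates into code follows; the response densities are placeholder callables standing in for Eq. 9, and the response grid is assumed uniform:

```python
import numpy as np

def kl_divergence(p, q, dr, eps=1e-12):
    # Discrete approximation of the KL divergence between two densities.
    p = np.maximum(p, eps)
    q = np.maximum(q, eps)
    return np.sum(p * np.log(p / q)) * dr

def expected_support(p_resp_star, p_resp, stimuli, P, n_trials, r_grid):
    """<log Pr(D|theta)> + const for a candidate theta, following Eq. 12."""
    dr = r_grid[1] - r_grid[0]
    kl = sum(P[i] * kl_divergence(p_resp_star(s, r_grid), p_resp(s, r_grid), dr)
             for i, s in enumerate(stimuli))
    return -n_trials * kl

# Sampling from E(theta|theta*) in Eq. 13 then amounts to running a standard
# MCMC scheme (e.g., Metropolis-Hastings) with expected_support as log-density.
```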
4 Results
We apply our framework to two case studies: the inference of priors in a time interval estimation
task (see [24]) and the reconstruction of prior and noise characteristics in speed perception [9].
[Figure 2: plot panels omitted. Panel (b) top row reads $P^*$ = 0.06, 0.13, 0.02, 0.79 across the setups BSL, SRT, MAP, MTR; see the caption below for panel descriptions.]
Figure 2: Internal representations in interval timing (Short condition). Accuracy of the reconstructed priors in the Short range; each row corresponds to a different experimental design. a: The
first column shows the reference prior (thick red line) and the recovered mean prior ± 1 SD (black
line and shaded area). The other columns display the distributions of the recovered central moments
of the prior. Each panel shows the median (black line), the interquartile range (dark-shaded area)
and the 95% interval (light-shaded area). The green dashed line marks the true value. b: Box plots
of the symmetric KL-divergence between the reconstructed priors and the prior of the reference observer. At top, the primacy probability $P^*$ of each setup having less reconstruction error than all
the others (computed by bootstrap). c: Joint posterior density of sensory noise σ and motor noise
$\rho_1$ in setup BSL (gray contour plot; colored plots are marginal distributions). The parameters are
anti-correlated, and discordant with the true value (star and dashed lines). d: Marginal posterior
density for loss width parameter $\sigma_\ell$, suitably rescaled.
4.1 Temporal context and interval timing
We consider a time interval estimation and reproduction task very similar to [24]. In each trial, the
stimulus s is a time interval (e.g., the interval between two flashes), drawn from a fixed experimental
distribution, and the response r is the reproduced duration (e.g., the interval between the second
flash and a mouse click). Subjects perform in one or two conditions, corresponding to two different
discrete uniform distributions of durations, either on a Short (494-847 ms) or a Long (847-1200
ms) range. Subjects are trained separately on each condition till they (roughly) learn the underlying
distribution, at which point their performance is measured in a test session; here we only simulate the
test sessions. We assume that the experimenter's goal is to faithfully recover the observer's priors,
and we analyze the effect of different experimental designs on the reconstruction error.
To cast the problem within our framework, we need first to define the reference observer $\theta^*$. We
make the following assumptions: (a) the observer's priors (or prior, in only one condition) are
smoothed versions of the experimental uniform distributions; (b) the sensory noise is affected by
the scalar property of interval timing, so that the sensory mapping is logarithmic ($s_0 \approx 0$, $d = 1$);
(c) we take average sensorimotor noise parameters from [24]: $\sigma = 0.10$, $\gamma = 0$, $\kappa = 0$, and $\rho_0 \approx 0$,
$\rho_1 = 0.07$; (d) for simplicity, the internal likelihood coincides with the measurement distribution;
(e) the loss function in internal measurement space is almost-quadratic, with $\sigma_\ell = 0.5$, $\gamma_\ell = 0$,
$\kappa_\ell = 0$; (f) we assume a small lapse probability $\lambda = 0.03$; (g) in case the observer performs in two
conditions, all the observer's parameters are shared across conditions (except for the priors). For the
inferred observer θ we allow all model parameters to change freely, keeping only assumptions (d)
and (g). We compare the following variations of the experimental setup:
1. BSL: The baseline version of the experiment, the observer performs in both the Short and
Long conditions ($N_\text{tr} = 500$ each);
2. SRT or LNG: The observer performs more trials ($N_\text{tr} = 1000$), but only either in the Short
(SRT) or in the Long (LNG) condition;
3. MAP: As BSL, but we assume a difference in the performance feedback of the task such
that the reference observer adopts a narrower loss function, closer to MAP ($\sigma_\ell = 0.1$);
4. MTR: As BSL, but the observer's motor noise parameters $\rho_0$, $\rho_1$ are assumed to be known
(e.g. measured in a separate experiment), and therefore fixed during the inference.
We sample from the approximate posterior density (Eq. 13), obtaining a set of sampled priors for
each distinct experimental setup (see Supplementary Material for details). Figure 2 a shows the
reconstructed priors and their central moments for the Short condition (results are analogous for
the Long condition; see Supplementary Material). We summarize the reconstruction error of the
recovered priors in terms of symmetric KL-divergence from the reference prior (Figure 2 b). Our
analysis suggests that the baseline setup BSL does a relatively poor job at inferring the observers'
priors. Mean and skewness of the inferred prior are generally acceptable, but for example the SD
tends to be considerably lower than the true value. Examining the posterior density across various
dimensions, we find that this mismatch emerges from a partial non-identifiability of the sensory
noise, σ, and the motor noise, $\rho_1$ (Figure 2 c).¹ Limiting the task to a single condition with double
number of trials (SRT) only slightly improves the quality of the inference. Surprisingly, we find that
a design that encourages the observer to adopt a loss function closer to MAP considerably worsens
the quality of the reconstruction in our model. In fact, the loss width parameter $\sigma_\ell$ is only weakly
identifiable (Figure 2 d), with severe consequences for the recovery of the priors in the MAP case.
Finally, we find that if we can independently measure the motor parameters of the observer (MTR),
the degeneracy is mostly removed and the priors can be recovered quite reliably.
Our analysis suggests that the reconstruction of internal representations in interval timing requires
strong experimental constraints and validations [12]. This worked example also shows how our
framework can be used to rank experimental designs by the quality of the inferred features of interest
(here, the recovered priors), and to identify parameters that may critically affect the inference. Some
findings align with our intuitions (e.g., measuring the motor parameters) but others may be non-obvious, such as the bad impact that a narrow loss function may have on the inferred priors within
our model. Incidentally, the low identifiability of $\sigma_\ell$ that we found in this task suggests that claims
about the loss function adopted by observers in interval timing (see [24]), without independent
validation, might deserve additional investigation. Finally, note that the analysis we performed
is theoretical, as the effects of each experimental design are formulated in terms of changes in the
parameters of the ideal reference observer. Nevertheless, the framework allows to test the robustness
of our conclusions as we modify our assumptions about the reference observer.
4.2 Slow-speed prior in speed perception
As a further demonstration, we use our framework to re-examine a well-known finding in visual
speed perception, that observers have a heavy-tailed prior expectation for slow speeds [9, 29]. The
original study uses a 2AFC paradigm [9], which we convert for our analysis into an equivalent estimation task (see e.g. [30]). In each trial, the stimulus magnitude s is speed of motion (e.g., the speed
of a moving dot in deg/s), and the response r is the perceived speed (e.g., measured by interception
timing). Subjects perform in two conditions, with different contrast levels of the stimulus, either
High ($c_\text{High} = 0.5$) or Low ($c_\text{Low} = 0.075$), corresponding to different levels of estimation noise.
Note that in a real speed estimation experiment subjects quickly develop a prior that depends on the
experimental distribution of speeds [30], but here we assume no learning of that kind, in agreement
with the underlying 2AFC task. Instead, we assume that observers use their 'natural' prior over
speeds. Our goal is to probe the reliability of the inference of the slow-speed prior and of the noise
characteristics of the reference observer (see [9]).
We define the reference observer $\theta^*$ as follows: (a) the observer's prior is defined in task space by
a parametric formula: $p_\text{prior}(s) = (s^2 + s_\text{prior}^2)^{-k_\text{prior}}$, with $s_\text{prior} = 1$ deg/s and $k_\text{prior} = 2.4$ [29];
(b) the sensory mapping has parameters $s_0 = 0.35$ deg/s, $d = 1$ [29]; (c) the amount of sensory
noise depends on the contrast level, as per [9]: $\sigma_\text{High} = 0.2$, $\sigma_\text{Low} = 0.4$, and $\gamma = 0$, $\kappa = 0$; (d)
the internal likelihood coincides with the measurement distribution; (e) the loss function in internal
measurement space is almost-quadratic, with $\sigma_\ell = 0.5$, $\gamma_\ell = 0$, $\kappa_\ell = 0$; (f) we assume a considerable amount of reporting noise, with $\rho_0 = 0.3$ deg/s, $\rho_1 = 0.21$; (g) we assume a contrast-dependent
lapse probability ($\lambda_\text{High} = 0.01$, $\lambda_\text{Low} = 0.05$); (h) all parameters that are not contrast-dependent
are shared across the two conditions. For the inferred observer θ we allow all model parameters to
change freely, keeping only assumptions (d) and (h). We consider the standard experimental setup
described above (STD), and an 'uncoupled' variant (UNC) in which we do not take the usual assumption that the internal representation of the likelihoods is coupled to the experimental one (so,
$\tilde{\sigma}_\text{High}$, $\tilde{\sigma}_\text{Low}$, $\tilde{\gamma}$ and $\tilde{\kappa}$ are free parameters). As a sanity check, we also consider an observer with a
uniformly flat speed prior (FLA), to show that in this case the algorithm can correctly infer back the
absence of a prior for slow speeds (see Supplementary Material).

¹ This degeneracy is not surprising, as both sensory and motor noise of the reference observer $\theta^*$ are approximately Gaussian in internal measurement space (≈ log task space). This lack of identifiability also affects
the prior since the relative weight between prior and likelihood needs to remain roughly the same.

[Figure 3: plot panels omitted. (a) Log prior reconstructions with approximate posteriors over $k_\text{prior}$ and $s_\text{prior}$; (b) KL-divergence box plots for STD and UNC; (c) posteriors over $s_0$, $\sigma_\text{High}$, $\sigma_\text{Low}$, $\tilde{\sigma}_\text{High}$, $\tilde{\sigma}_\text{Low}$; see the caption below.]

Figure 3: Internal representations in speed perception. Accuracy of the reconstructed internal
representations (priors and likelihoods). Each row corresponds to different assumptions during the
inference. a: The first column shows the reference log prior (thick red line) and the recovered mean
log prior ± 1 SD (black line and shaded area). The other two columns display the approximate
posteriors of $k_\text{prior}$ and $s_\text{prior}$, obtained by fitting the reconstructed 'non-parametric' priors with a
parametric formula (see text). Each panel shows the median (black line), the interquartile range
(dark-shaded area) and the 95% interval (light-shaded area). The green dashed line marks the
true value. b: Box plots of the symmetric KL-divergence between the reconstructed and reference
prior. c: Approximate posterior distributions for sensory mapping and sensory noise parameters. In
experimental design STD, the internal likelihood parameters ($\tilde{\sigma}_\text{High}$, $\tilde{\sigma}_\text{Low}$) are equal to their objective
counterparts ($\sigma_\text{High}$, $\sigma_\text{Low}$).
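Returning to the reference observer defined above, a minimal sketch of its slow-speed prior (normalization is numerical; the grid bounds are ours):

```python
import numpy as np

# Slow-speed prior in task space, p_prior(s) = (s^2 + s_prior^2)^(-k_prior),
# with the parameter values of the reference observer above.
def slow_speed_prior(s_grid, s_prior=1.0, k_prior=2.4):
    unnorm = (s_grid ** 2 + s_prior ** 2) ** (-k_prior)
    return unnorm / (unnorm.sum() * (s_grid[1] - s_grid[0]))

s_grid = np.linspace(0.01, 16.0, 4000)            # speeds in deg/s
prior = slow_speed_prior(s_grid)
# The heavy tail toward slow speeds pulls noisy estimates downward, more so in
# the Low-contrast (noisier) condition, producing the classic slow-speed bias.
```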
Unlike the previous example, our analysis shows that here the reconstruction of both the prior and the
characteristics of sensory noise is relatively reliable (Figure 3 and Supplementary Material), without
major biases, even when we decouple the internal representation of the noise from its objective
counterpart (except for underestimation of the noise lower bound $s_0$, and of the internal noise $\tilde{\sigma}_\text{High}$,
Figure 3 c). In particular, in all cases the exponent $k_\text{prior}$ of the prior over speeds can be recovered
with good accuracy. Our results provide theoretical validation, in addition to existing empirical
support, for previous work that inferred internal representations in speed perception [9, 29].
5 Conclusions
We have proposed a framework for studying a priori identifiability of Bayesian models of perception.
We have built a fairly general class of observer models and presented an efficient technique to
explore their vast identifiability landscape. In one case study, a time interval estimation task, we have
demonstrated how our framework could be used to rank candidate experimental designs depending
on their ability to resolve the underlying degeneracy of parameters of interest. The obtained ranking
is non-trivial: for example, it suggests that experimentally imposing a narrow loss function may
be detrimental, under certain assumptions. In a second case study, we have shown instead that the
inference of internal representations in speed perception, at least when cast as an estimation task in
the presence of a slow-speed prior, is generally robust and in theory not prone to major degeneracies.
Several modifications can be implemented to increase the scope of the psychophysical tasks covered
by the framework. For example, the observer model could include a generalization to arbitrary loss
spaces (see Supplementary Material), the generative model could be extended to allow multiple
cues (to analyze cue-integration studies), and a variant of the model could be developed for discrete-choice paradigms, such as 2AFC, whose identifiability properties are largely unknown.
References
[1] Geisler, W. S. (2011) Contributions of ideal observer theory to vision research. Vision Res 51, 771-781.
[2] Knill, D. C. & Richards, W. (1996) Perception as Bayesian Inference. (Cambridge University Press).
[3] Trommershäuser, J., Maloney, L., & Landy, M. (2008) Decision making, movement planning and statistical decision theory. Trends Cogn Sci 12, 291-297.
[4] Pouget, A., Beck, J. M., Ma, W. J., & Latham, P. E. (2013) Probabilistic brains: knowns and unknowns. Nat Neurosci 16, 1170-1178.
[5] Maloney, L., Mamassian, P., et al. (2009) Bayesian decision theory as a model of human visual perception: testing Bayesian transfer. Vis Neurosci 26, 147-155.
[6] Vilares, I., Howard, J. D., Fernandes, H. L., Gottfried, J. A., & Körding, K. P. (2012) Differential representations of prior and likelihood uncertainty in the human brain. Curr Biol 22, 1641-1648.
[7] Körding, K. P. & Wolpert, D. M. (2004) Bayesian integration in sensorimotor learning. Nature 427, 244-247.
[8] Girshick, A., Landy, M., & Simoncelli, E. (2011) Cardinal rules: visual orientation perception reflects knowledge of environmental statistics. Nat Neurosci 14, 926-932.
[9] Stocker, A. A. & Simoncelli, E. P. (2006) Noise characteristics and prior expectations in human visual speed perception. Nat Neurosci 9, 578-585.
[10] Sanborn, A. & Griffiths, T. L. (2008) Markov chain Monte Carlo with people. Adv Neural Inf Process Syst 20, 1265-1272.
[11] Chalk, M., Seitz, A., & Seriès, P. (2010) Rapidly learned stimulus expectations alter perception of motion. J Vis 10, 1-18.
[12] Acerbi, L., Wolpert, D. M., & Vijayakumar, S. (2012) Internal representations of temporal statistics and feedback calibrate motor-sensory interval timing. PLoS Comput Biol 8, e1002771.
[13] Houlsby, N. M., Huszár, F., Ghassemi, M. M., Orbán, G., Wolpert, D. M., & Lengyel, M. (2013) Cognitive tomography reveals complex, task-independent mental representations. Curr Biol 23, 2169-2175.
[14] Acerbi, L., Vijayakumar, S., & Wolpert, D. M. (2014) On the origins of suboptimality in human probabilistic inference. PLoS Comput Biol 10, e1003661.
[15] Körding, K. P. & Wolpert, D. M. (2004) The loss function of sensorimotor learning. Proc Natl Acad Sci U S A 101, 9839-9842.
[16] Gekas, N., Chalk, M., Seitz, A. R., & Seriès, P. (2013) Complexity and specificity of experimentally-induced expectations in motion perception. J Vis 13, 1-18.
[17] Jones, M. & Love, B. (2011) Bayesian Fundamentalism or Enlightenment? On the explanatory status and theoretical contributions of Bayesian models of cognition. Behav Brain Sci 34, 169-188.
[18] Bowers, J. S. & Davis, C. J. (2012) Bayesian just-so stories in psychology and neuroscience. Psychol Bull 138, 389.
[19] Mamassian, P. & Landy, M. S. (2010) It's that time again. Nat Neurosci 13, 914-916.
[20] Simoncelli, E. P. (2009) in The Cognitive Neurosciences, ed. Gazzaniga, M. (MIT Press), pp. 525-535.
[21] Knill, D. C. (2003) Mixture models and the probabilistic structure of depth cues. Vision Res 43, 831-854.
[22] Anderson, J. R. (1978) Arguments concerning representations for mental imagery. Psychol Rev 85, 249.
[23] Navarro, D. J., Pitt, M. A., & Myung, I. J. (2004) Assessing the distinguishability of models and the informativeness of data. Cognitive Psychol 49, 47-84.
[24] Jazayeri, M. & Shadlen, M. N. (2010) Temporal context calibrates interval timing. Nat Neurosci 13, 1020-1026.
[25] Tassinari, H., Hudson, T., & Landy, M. (2006) Combining priors and noisy visual cues in a rapid pointing task. J Neurosci 26, 10154-10163.
[26] Natarajan, R., Murray, I., Shams, L., & Zemel, R. S. (2009) Characterizing response behavior in multisensory perception with conflicting cues. Adv Neural Inf Process Syst 21, 1153-1160.
[27] Carreira-Perpiñán, M. A. (2000) Mode-finding for mixtures of Gaussian distributions. IEEE T Pattern Anal 22, 1318-1323.
[28] Spiegelhalter, D. J., Best, N. G., Carlin, B. P., & Van Der Linde, A. (2002) Bayesian measures of model complexity and fit. J R Stat Soc B 64, 583-639.
[29] Hedges, J. H., Stocker, A. A., & Simoncelli, E. P. (2011) Optimal inference explains the perceptual coherence of visual motion stimuli. J Vis 11, 14, 1-16.
[30] Kwon, O. S. & Knill, D. C. (2013) The brain uses adaptive internal models of scene statistics for sensorimotor estimation and planning. Proc Natl Acad Sci U S A 110, E1064-E1073.
9
5,030 | 5,554 | Automatic Discovery of Cognitive Skills
to Improve the Prediction of Student Learning
Robert V. Lindsey, Mohammad Khajah, Michael C. Mozer
Department of Computer Science and Institute of Cognitive Science
University of Colorado, Boulder
Abstract
To master a discipline such as algebra or physics, students must acquire
a set of cognitive skills. Traditionally, educators and domain experts use
intuition to determine what these skills are and then select practice exercises to hone a particular skill. We propose a technique that uses student
performance data to automatically discover the skills needed in a discipline. The technique assigns a latent skill to each exercise such that a
student's expected accuracy on a sequence of same-skill exercises improves
monotonically with practice. Rather than discarding the skills identified by
experts, our technique incorporates a nonparametric prior over the exerciseskill assignments that is based on the expert-provided skills and a weighted
Chinese restaurant process. We test our technique on datasets from five
different intelligent tutoring systems designed for students ranging in age
from middle school through college. We obtain two surprising results. First,
in three of the five datasets, the skills inferred by our technique support
significantly improved predictions of student performance over the expertprovided skills. Second, the expert-provided skills have little value: our
technique predicts student performance nearly as well when it ignores the
domain expertise as when it attempts to leverage it. We discuss explanations for these surprising results and also the relationship of our skill-discovery technique to alternative approaches.
1 Introduction
With the advent of massively open online courses (MOOCs) and online learning platforms
such as Khan Academy and Reasoning Mind, large volumes of data are collected from
students as they solve exercises, acquire cognitive skills, and achieve a conceptual understanding. A student's data provides clues as to his or her knowledge state: the specific facts,
concepts, and operations that the student has mastered, as well as the depth and robustness
of the mastery. Knowledge state is dynamic and evolves as the student learns and forgets.
Tracking a student's time-varying knowledge state is essential to an intelligent tutoring system. Knowledge state pinpoints the student's strengths and deficiencies and helps determine
what material the student would most benefit from studying or practicing. In short, efficient
and effective personalized instruction requires inference of knowledge state [20, 25].
Knowledge state can be decomposed into atomic elements, often referred to as knowledge
components [7, 13], though we prefer the term skills. Skills include retrieval of specific facts,
e.g., the translation of "dog" into Spanish is perro, as well as operators and rules in a domain,
e.g., dividing each side of an algebraic equation by a constant to transform 3(x + 2) = 15
into x + 2 = 5, or calculating the area of a circle with radius r by applying the formula πr². When an exercise or question is posed, students must apply one or more skills, and
the probability of correctly applying a skill is dependent on their knowledge state.
To predict a student?s performance on an exercise, we thus must: (1) determine which skill
or skills are required to solve the exercise, and (2) infer the student?s knowledge state for
those skills. With regard to (1), the correspondence between exercises and skills, which
we will refer to as an expert labeling, has historically been provided by human experts.
Automated techniques have been proposed, although they either rely on an expert labeling
which they then refine [5] or treat the student knowledge state as static [3]. With regard
to (2), various dynamical latent state models have been suggested to infer time-varying
knowledge state given an expert labeling. A popular model, Bayesian knowledge tracing, assumes that knowledge state is binary: the skill is either known or not known [6]. Other
models posit that knowledge state is continuous and evolves according to a linear dynamical
system [21].
Only recently have methods been suggested that simultaneously address (1) and (2), and
which therefore perform skill discovery. Nearly all of this work has involved matrix factorization [24, 22, 14]. Consider a student × exercise matrix whose cells indicate whether a
student has answered an exercise correctly. Factorization leads to a vector for each student
characterizing the degree to which the student has learned each of N_skill skills, and a vector for each exercise characterizing the degree to which that exercise requires each of N_skill
skills. Modeling student learning presents a particular challenge because of the temporal
dimension: students? skills improve as they practice. Time has been addressed either via
dynamical models of knowledge state or by extending the matrix into a tensor whose third
dimension represents time.
We present an approach to skill discovery that differs from matrix factorization approaches in
three respects. First, rather than ignoring expert labeling, we adopt a Bayesian formulation
in which the expert labels are incorporated into the prior. Second, we explore a nonparametric approach in which the number of skills is determined from the data. Third, rather than
allowing an exercise to depend on multiple skills and to varying degrees, we make a stronger
assumption that each exercise depends on exactly one skill in an all-or-none fashion. With
this assumption, skill discovery is equivalent to the partitioning of exercises into disjoint
sets. Although this strong assumption is likely to be a simplification of reality, it serves
to restrict the model's degrees of freedom compared to factorization approaches in which
each student and exercise is assigned an N_skill-dimensional vector. Despite the application
of sparsity and nonnegativity constraints, the best models produced by matrix factorization
have had low-dimensional skill spaces, specifically, N_skill ≤ 5 [22, 14]. We conjecture that
the low dimensionality is not due to the domains being modeled requiring at most 5 skills,
but rather to overfitting for N_skill > 5. With our approach of partitioning exercises into
disjoint skill sets, we can afford N_skill ≫ 5 without giving the model undue flexibility. We
are aware of one recent approach to skill discovery [8, 9] which shares our assumption that
each exercise depends on a single skill. However, it differs from our approach in that it does
not try to exploit expert labels and presumes a fixed number of skills. We contrast our work
to various alternative approaches toward the end of this paper.
2 A nonparametric model for automatic skill discovery
We now introduce a generative probabilistic model of student problem-solving in terms of
two components: (1) a prior over the assignment of exercises to skills, and (2) the likelihood
of a sequence of responses produced by a student on exercises requiring a common skill.
2.1 Weighted CRP: A prior on skill assignments
Any instructional domain (e.g., algebra, geometry, physics) has an associated set of exercises
which students must practice to attain domain proficiency. We are interested in the common
situation where an expert has identified, for each exercise, a specific skill which is required
for its solution (the expert labeling). It may seem unrealistic to suppose that each exercise
requires no more than one skill, but in intelligent tutoring systems [7, 13], complex exercises
(e.g., algebra word problems) are often broken down into a series of steps which are small
enough that they could plausibly require only one skill (e.g., adding a constant to both
sides of an algebraic equation). Thus, when we use the term ?exercise?, in some domains we
are actually referring to a step of a compound exercise. In other domains (e.g., elementary
mathematics instruction), the exercises are designed specifically to tap what is being taught
in a lesson and are thus narrowly focused.
We wish to exploit the expert labeling to design a nonparametric prior over assignments
of exercises to skills (hereafter, skill assignments), and we wish to vary the strength of the
bias imposed by the expert labeling. With a strong bias, the prior would assign nonzero
probability to only the expert labeling. With no bias, the expert labeling would be no more
likely than any other. With an intermediate bias, which provides soft constraints on the
skill assignment, a suitable model might improve on the expert labeling.
We considered various methods, including fragmentation-coagulation processes [23] and the
distance-dependent Chinese restaurant process [4]. In this article, we describe a straightforward approach based on the Chinese restaurant process (CRP) [1], which induces a distribution over partitions. The CRP is cast metaphorically in terms of a Chinese restaurant in
which each entering customer chooses a table at which to sit. Denoting the table at which
customer i sits as Y_i, customer i can take a seat at an occupied table y with P(Y_i = y) ∝ n_y, or at an empty table with P(Y_i = N_table + 1) ∝ α, where N_table is the number of occupied tables and n_y is the number of customers currently seated at table y.
The weighted Chinese restaurant process (WCRP) [10] extends this metaphor by supposing that customers each have a fixed affiliation and are biased to sit at tables with other
customers having similar affiliations. The WCRP is nothing more than the posterior over
table assignments given a CRP prior and a likelihood function based on affiliations. In the
mapping of the WCRP to our domain, customers correspond to exercises, tables to distinct
skills, and affiliations to expert labels. The WCRP thus partitions the exercises into groups
sharing a common skill, with a bias to assign the same skill to exercises having the same
expert label.
The WCRP is specified in terms of a set of parameters φ ≡ {φ_1, ..., φ_{N_table}}, where φ_y
represents the affiliation associated with table y. In our domain, the affiliation corresponds
to one of the expert labels: φ_y ∈ {1, ..., N_skill}. From a generative modeling perspective,
the affiliation of a table influences the affiliations of each customer seated at the table. Using
X_i to denote the affiliation of customer i, or equivalently, the expert label associated with
exercise i, we make the generative assumption:

P(X_i = x | Y_i = y, φ) ∝ γ δ_{x,φ_y} + 1 − γ,
where δ is the Kronecker delta and γ is the previously mentioned bias. With γ = 0, a
customer is equally likely to have any affiliation; with γ = 1, all customers at a table will
have the table's affiliation. With uniform priors on φ_y, the conditional distribution on φ_y is:

P(φ_y | X^(y)) ∝ (1 − γ)^(−n_y^{φ_y}),

where X^(y) is the set of affiliations of customers seated at table y and n_y^a ≡ Σ_{X_i ∈ X^(y)} δ_{x_i,a} is the number of customers at table y with affiliation a.
Marginalizing over φ, the WCRP specifies a distribution over table assignments for a new
customer: an occupied table y ∈ {1, ..., N_table} is chosen with probability

P(Y_i = y | X_i, X^(y)) ∝ n_y · [1 + γ(ω_{x_i}^y − 1)] / [1 + γ(N_skill^{−1} − 1)],  with  ω_a^y ≡ (1 − γ)^(−n_y^a) / Σ_{a'=1}^{N_skill} (1 − γ)^(−n_y^{a'}).    (1)

ω_a^y is a softmax function that tends toward 1 if a is the most common affiliation among
customers at table y, and tends toward 0 otherwise. In the WCRP, an empty table N_table + 1
is selected with probability

P(Y_i = N_table + 1) ∝ α.    (2)
We choose to treat α not as a constant but rather define α ≡ α_0(1 − γ), where α_0 becomes
the free parameter of the model that modulates the expected number of occupied tables,
and the term 1 − γ serves to give the model less freedom to assign new tables when the
affiliation bias is high. (We leave the constant in the denominator of Equation 1 so that α_0
has the same interpretation regardless of γ.)
For γ = 0, the WCRP reduces to the CRP and expert labels are ignored. Although the
WCRP is undefined for γ = 1, it is defined in the limit γ → 1, and it produces a seating
arrangement equivalent to the expert labels with probability 1. For intermediate γ, the
expert labels serve as an intermediate constraint. For any γ, the WCRP seating arrangement
specifies a skill assignment over exercises.
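As a concrete illustration of this seating rule, the following Python sketch samples a table for a new exercise from Equations 1 and 2. The function name and data layout are our own, and this is a minimal sketch rather than the authors' implementation; it is valid only for 0 ≤ γ < 1, since the WCRP is undefined at γ = 1.

    import numpy as np

    def sample_table(x_i, counts, gamma, alpha0, n_skills, rng):
        """Sample a table for an exercise whose expert label is x_i.

        counts[y][a] holds n_y^a of Eq. 1: the number of exercises seated at
        table y whose expert label is a. Returns len(counts) for a new table.
        """
        weights = []
        for c in counts:
            log_w = -np.log1p(-gamma) * np.asarray(c)   # log (1 - gamma)^(-n_y^a)
            omega = np.exp(log_w - log_w.max())
            omega /= omega.sum()                        # the softmax omega_a^y of Eq. 1
            num = 1.0 + gamma * (omega[x_i] - 1.0)
            den = 1.0 + gamma * (1.0 / n_skills - 1.0)
            weights.append(np.sum(c) * num / den)       # occupied table, Eq. 1
        weights.append(alpha0 * (1.0 - gamma))          # empty table, Eq. 2 with alpha = alpha0(1 - gamma)
        weights = np.asarray(weights)
        return rng.choice(len(weights), p=weights / weights.sum())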
2.2 BKT: A theory of human skill acquisition
In the previous section, we described a prior over skill assignments. Given an assignment,
we turn to a theory of the temporal dynamics of human skill acquisition. Suppose that a
particular student practices a series of exercises, {e_1, e_2, ..., e_t, ..., e_T}, where the subscript
indicates order and each exercise e_t depends on a corresponding skill, s_t.¹ We assume that
whether or not a student responds correctly to exercise e_t depends solely on the student's
mastery of s_t. We further assume that when a student works on e_t, it has no effect on
the student's mastery of other skills s̃, s̃ ≠ s_t. These assumptions, adopted by nearly
all past models of student learning, allow us to consider each skill independently of the
others. Thus, for skill s̃, we can select its subset of exercises from the sequence, e_s̃ = {e_t |
s_t = s̃}, preserving order in the sequence, and predict whether the student will answer
each exercise correctly or incorrectly. Given the uncertainty in such predictions, models
typically predict the joint likelihood over the sequence of responses, P(R_1, ..., R_{|e_s̃|}), where
the binary random variable R_t indicates the correctness of the response to e_t.
The focus of our research is not on developing novel models of skill acquisition. Instead,
we incorporate a simple model that is a mainstay of the field, Bayesian knowledge tracing
(BKT) [6]. BKT is based on a theory of all-or-none human learning [2] which postulates
that a student's knowledge state following trial t, K_t, is binary: 1 if the skill has been
mastered, 0 otherwise. BKT is a hidden Markov model (HMM) with internal state K_t and
emissions R_t.
Because BKT is typically used to model practice over brief intervals, the model assumes
no forgetting, i.e., K cannot transition from 1 to 0. This assumption constrains the time-varying knowledge state: it can make at most one transition from 0 to 1 over the sequence
of trials. Consequently, the {K_t} can be replaced by a single latent variable, T, that denotes
the trial following which a transition is made, leading to the BKT generative model:

P(T = t | θ_L, θ_M) = θ_L if t = 0;  (1 − θ_L) θ_M (1 − θ_M)^(t−1) if t > 0    (3)

P(R_t = 1 | θ_G, θ_S, T) = θ_G if t ≤ T;  1 − θ_S otherwise,    (4)

where θ_L is the probability that a student has mastered the skill prior to performing the
first exercise, θ_M is the transition probability from the not-mastered to mastered state,
θ_G is the probability of correctly guessing the answer prior to skill mastery, and θ_S is the
probability of answering incorrectly due to a slip following skill mastery.
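Because T is the only latent variable and effectively takes at most n + 1 distinct values for an n-trial sequence, the joint likelihood implied by Equations 3 and 4 can be computed exactly by enumeration. A sketch of this computation (our own code, under the convention that trials t ≤ T are unmastered):

    import numpy as np

    def bkt_log_likelihood(r, theta_L, theta_M, theta_G, theta_S):
        """log P(r_1..r_n) under BKT (Eqs. 3-4), marginalizing the transition trial T."""
        n = len(r)
        lg = np.array([np.log(theta_G if rt else 1 - theta_G) for rt in r])   # unmastered trials
        lm = np.array([np.log(1 - theta_S if rt else theta_S) for rt in r])   # mastered trials
        pre = np.concatenate([[0.0], np.cumsum(lg)])                  # pre[T]: trials 1..T unmastered
        post = np.concatenate([np.cumsum(lm[::-1])[::-1], [0.0]])     # post[T]: trials T+1..n mastered
        terms = []
        for T in range(n):
            p_T = theta_L if T == 0 else (1 - theta_L) * theta_M * (1 - theta_M) ** (T - 1)
            terms.append(np.log(p_T) + pre[T] + post[T])
        p_tail = (1 - theta_L) * (1 - theta_M) ** (n - 1)             # P(T >= n): never mastered in-window
        terms.append(np.log(p_tail) + pre[n])
        return np.logaddexp.reduce(terms)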
Although we have chosen to model student learning with BKT, any other probabilistic
model of student learning could be used in conjunction with our approach to skill discovery, including more sophisticated variants of BKT [11] or models of knowledge state with
continuous dynamics [21]. Further, our approach does not require BKT's assumption that
learning a skill is conditionally independent of the practice history of other skills. However,
the simplicity of BKT allows one to conduct modeling on a relatively large scale.

¹To tie this notation to the notation of the previous section, s_t ≡ y_{e_t}, i.e., the table assignments
of the WCRP correspond to skills, and exercise e_t is seated at table y_{e_t}. Note that i in the previous
section was used as an index over distinct exercises, whereas t in this section is used as an index
over trials. The same exercise may be presented multiple times.
3 Implementation
We perform posterior inference through Markov chain Monte Carlo (MCMC) sampling.
The conditional probability for Y_i given the other variables is proportional to the product
of the WCRP prior term and the likelihood of each student's response sequence. The prior
term is given by Equations 1 and 2, where by exchangeability we can take Y_i to be the
last customer to enter the restaurant and where we analytically marginalize φ. For an
existing table, the likelihood is given by the BKT HMM emission sequence probability. For
a new table, we must add an extra step to calculating the emission sequence probability
because the BKT parameters do not have conjugate priors. We used Algorithm 8 from [16],
which effectively produces a Monte Carlo approximation to the intractable marginal data
likelihood, integrating out over the BKT parameters that could be drawn for the new table.
For lack of conjugacy and any strong prior knowledge, we give each table's θ_L, θ_M, and θ_S
independent uniform priors on [0, 1]. Because we wish to interpret BKT's K = 1 state as a
"learned" state, we parameterize θ_G as being a fraction of 1 − θ_S, where the fraction has a
uniform prior on [0, 1]. We give log(1 − γ) a uniform prior on [−5, 0] based on the simulations
described in Section 4.1, and α_0 is given an improper uniform prior with support on α_0 > 0.
Because of the lack of conjugacy, we explicitly represent each table's BKT parameters during
sampling. In each iteration of the sampler, we update the table assignments of each exercise
and then apply five axis-aligned slice sampling updates to each table's BKT parameters and
to the hyperparameters γ and α_0 [17].
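For concreteness, a single shrinkage-style slice-sampling update on a bounded scalar parameter, in the spirit of [17], can be written as follows; this is our own minimal rendering, not the authors' implementation:

    import numpy as np

    def slice_update(x, log_f, lo, hi, rng):
        """One slice-sampling update for a scalar x with support (lo, hi).

        log_f is the unnormalized log-density of the coordinate being updated
        (all other variables held fixed). A level is drawn under log_f(x), then
        the interval is shrunk around x until a point on the slice is accepted,
        which leaves the target distribution invariant.
        """
        log_u = log_f(x) + np.log(rng.uniform())   # auxiliary slice level
        left, right = lo, hi
        while True:
            x_new = rng.uniform(left, right)
            if log_f(x_new) > log_u:
                return x_new
            if x_new < x:                           # shrink toward the current point
                left = x_new
            else:
                right = x_new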
For all simulations, we run the sampler for 200 iterations and discard the first 100 as the
burn-in period. The seating arrangement is initialized to the expert-provided skills; all other
parameters are initialized by sampling from the generative model. We use the post burn-in
samples to estimate the expected posterior probability of a student correctly responding in
a trial, integrating out over uncertainty in all skill assignments, BKT parameterizations,
and hyperparameters. We explored using more iterations and a longer burn-in period but
found that doing so did not yield appreciable increases in training or test data likelihoods.
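At test time, the quantity averaged over post burn-in samples is a one-step-ahead predictive probability. Under BKT this can be obtained with the standard forward filter over the binary knowledge state, which is equivalent to marginalizing T in Equations 3 and 4; the sketch below is our own code, not the authors':

    import numpy as np

    def bkt_predictions(r, theta_L, theta_M, theta_G, theta_S):
        """P(R_t = 1 | r_1..r_{t-1}) for each trial t of one skill's sequence."""
        p = theta_L                                   # P(mastered when trial 1 is attempted)
        preds = []
        for rt in r:
            preds.append(p * (1 - theta_S) + (1 - p) * theta_G)
            lik_m = (1 - theta_S) if rt else theta_S  # response likelihood if mastered
            lik_u = theta_G if rt else (1 - theta_G)  # ... if not yet mastered
            p = p * lik_m / (p * lik_m + (1 - p) * lik_u)
            p = p + (1 - p) * theta_M                 # no forgetting: mastery only accrues
        return np.array(preds)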
4 Simulations
4.1 Sampling from the WCRP
We generated synthetic exercise-skill assignments via a draw from a CRP prior with α = 3
and N_exercise = 100. Using these assignments as both the ground-truth and expert labels, we
then simulated draws from the WCRP to determine the effect of γ (the expert labeling bias)
and α_0 (concentration scaling parameter; see Equation 2) on the model's behavior. Figure 1a
shows the reconstruction score, a measure of similarity between the induced assignment and
the true labels. This score is the difference between (1) the proportion of pairs of exercises
that belong to the same true skill that are assigned to the same recovered skill, and (2)
the proportion of pairs of exercises that belong to different true skills that are assigned to
the same recovered skill. The score is in [0, 1], with 0 indicating no better than a chance
relationship to the true labels, and 1 indicating the true labels are recovered exactly. The
reported score is the mean over replications of the simulation and MCMC samples. As γ
increases, the recovered skills better approximate the expert (true) skills, independent of
α_0. Figure 1b shows the expected interaction between α_0 and γ on the number of occupied
tables (induced skills): only when the bias is weak does α_0 have an effect.
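A direct pairwise implementation of this reconstruction score (quadratic in the number of exercises; the function name is ours, and the code assumes both same- and different-skill pairs occur):

    from itertools import combinations

    def reconstruction_score(true_skill, found_skill):
        """P(same recovered | same true) - P(same recovered | different true)."""
        same_together = same_total = diff_together = diff_total = 0
        for i, j in combinations(range(len(true_skill)), 2):
            together = found_skill[i] == found_skill[j]
            if true_skill[i] == true_skill[j]:
                same_total += 1
                same_together += together
            else:
                diff_total += 1
                diff_together += together
        return same_together / same_total - diff_together / diff_total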
4.2 Skill recovery from synthetic student data
We generated data for N_student synthetic students responding to N_exercise exercises presented in a random order for each student. Using a draw from the CRP prior with α = 3,
we generated exercise-skill assignments. For each skill, we generated sequences of student
correct/incorrect responses via BKT, with parameters sampled from plausible distributions:
θ_L ∼ Uniform(0, 1), θ_M ∼ Beta(10, 30), θ_G ∼ Beta(1, 9), and θ_S ∼ Beta(1, 9).
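A sketch of this generative pipeline, combining a CRP draw over exercise-skill assignments with BKT response sequences; the distributional choices follow the text, while the scaffolding and names are our own:

    import numpy as np

    def generate_dataset(n_students, n_exercises, alpha=3.0, seed=0):
        rng = np.random.default_rng(seed)
        skill, counts = [], []                         # CRP draw of exercise-skill assignments
        for _ in range(n_exercises):
            w = np.asarray(counts + [alpha], dtype=float)
            y = rng.choice(len(w), p=w / w.sum())
            if y == len(counts):
                counts.append(0)
            counts[y] += 1
            skill.append(y)
        n_skills = len(counts)
        tL = rng.uniform(size=n_skills)                # per-skill BKT parameters
        tM = rng.beta(10, 30, n_skills)
        tG = rng.beta(1, 9, n_skills)
        tS = rng.beta(1, 9, n_skills)
        rows = []                                      # (student, exercise, correct) triples
        for s in range(n_students):
            mastered = rng.uniform(size=n_skills) < tL   # T = 0 with probability theta_L
            for e in rng.permutation(n_exercises):
                k = skill[e]
                p1 = 1 - tS[k] if mastered[k] else tG[k]
                rows.append((s, int(e), int(rng.uniform() < p1)))
                if not mastered[k] and rng.uniform() < tM[k]:
                    mastered[k] = True                 # transition after the trial, as in Eq. 3
        return skill, rows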
Figure 1c shows the model's reconstruction of true skills for 24 replications of the simulation
with N_student = 100 and N_exercise = 200, varying γ, providing a set of expert skill labels
that were either the true labels or a permutation of the true labels. The latter conveys no
information about the true labels. The most striking feature of the result is that the model
does an outstanding job of reconstructing the true labeling whether the expert labels are
correct or not. Only when the bias γ is strong and the expert labels are erroneous does the
model's reconstruction performance falter. The bottom line is that a good expert labeling
can help, whereas a bad expert labeling should be no worse than no expert-provided labels.

[Figure 1: (a, b) Effect of varying expert labeling bias (γ) and α_0 on sampled skill assignments from a WCRP, plotting reconstruction score and number of occupied tables against γ for α_0 ∈ {2.0, 5.0, 10.0}; (c) effect of expert labels (true vs. permuted) and γ on the full model's reconstruction of the true skills from synthetic data.]

[Figure 2: Effect of expert labels, N_student, N_exercise, and N_skill on the model's reconstruction of the true skills from synthetic data; left and right panels use true and permuted expert labels, respectively, with N_skill ∈ {10, 20, 30}.]
In a larger simulation, we systematically varied N_student ∈ {50, 100, 150, 200}, N_exercise ∈
{100, 200, 300}, and assigned the exercises to one of N_skill ∈ {10, 20, 30} skills via uniform
multinomial sampling. Figure 2 shows the result from 30 replications of the simulation
using expert labels that were either true or permuted (left and right panels, respectively).
With a good expert labeling, skill reconstruction is near perfect with N_student ≥ 100 and an
N_exercise : N_skill ratio of at least 10. With a bad expert labeling, more data is required to
obtain accurate reconstructions, say, N_student ≥ 200. As one would expect, a helpful expert
labeling can overcome noisy or inadequate data.
4.3 Evaluation of student performance data
We ran simulations on five student performance datasets (Table 1). The datasets varied
in the number of students, exercises, and expert skill labels; the students in the datasets
ranged in age from middle school to college. Each dataset consists of student identifiers,
exercise identifiers, trial numbers, and binary indicators of response correctness from students undergoing variable-length sequences of exercises over time.² Exercises may appear
in different orders for each student and may occur multiple times for a given student.

²For the DataShop datasets, exercises were identified by concatenating what they call the problem hierarchy, problem name, and the step name columns. Expert-provided skill labels were identified by concatenating the problem hierarchy column with the skill column following the same
practice as in [19, 18]. The expert skill labels infrequently associate an exercise with multiple skills.
For such exercises, we treat the combination of skills as one unique skill.
dataset             | source             | # students | # exercises | # trials | # skills (expert) | # skills (WCRP) | mean γ (WCRP)
fractions game      | PSLC DataShop [12] | 51         | 179         | 4,349    | 45                | 7.9             | 0.886
physics tutor       | PSLC DataShop [12] | 66         | 4,816       | 110,041  | 652               | 49.4            | 0.947
engineering statics | PSLC DataShop [12] | 333        | 1,223       | 189,297  | 156               | 99.2            | 0.981
Spanish vocabulary  | [15]               | 182        | 409         | 578,726  | 221               | 183             | 0.996
geometry tutor      | PSLC DataShop [12] | 59         | 139         | 5,104    | 18                | 19.7            | 0.997

Table 1: Five student performance datasets used in simulations
We compared a set of models which we will describe shortly. For each model, we ran ten
replications of five-fold cross validation on each dataset. In each replication, we randomly
partitioned the set of all students into five equally sized disjoint subsets. In each replication-fold, we collected posterior samples using our MCMC algorithm given the data recorded for
students in four of the five subsets. We then used the samples to predict the response
sequences (correct vs. incorrect) of the remaining students. On occasion, students in the
test set were given exercises that had not appeared in the training set. In those cases, the
model used samples from Equations 1-2 to predict the new exercises' skill assignments.
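The outer evaluation loop amounts to cross-validation over students scored with AUC. A sketch of that harness is below; fit and predict are caller-supplied stand-ins for the MCMC training and posterior-predictive averaging described above, and the row layout is our own hypothetical choice:

    import numpy as np
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import KFold

    def student_level_cv(students, rows_by_student, fit, predict, n_folds=5, seed=0):
        """Mean test AUC over student-level folds.

        fit(train_rows) -> model and predict(model, test_rows) -> probabilities
        are placeholders; rows are assumed to end with the 0/1 correctness bit.
        """
        students = np.asarray(students)
        folds = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
        aucs = []
        for train_idx, test_idx in folds.split(students):
            train_rows = [r for s in students[train_idx] for r in rows_by_student[s]]
            test_rows = [r for s in students[test_idx] for r in rows_by_student[s]]
            model = fit(train_rows)
            probs = predict(model, test_rows)
            aucs.append(roc_auc_score([r[-1] for r in test_rows], probs))
        return float(np.mean(aucs))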
The models we compare differ in how skills are assigned to exercises. However, every model
uses BKT to predict student performance given the skill assignments. Before presenting
results from the models, we first need to verify the BKT assumption that students improve
on a skill over time. We compared BKT to a baseline model which assumes a stationary
probability of a correct response for each skill. Using the expert-provided skills, BKT
achieves a mean 11% relative improvement over the baseline model across the five datasets.
Thus, BKT with expert-provided skills is sensitive to the temporal dynamics of learning.
To evaluate models, we use BKT to predict the test students? data given the model-specified
skill assignment. We calculated several prediction-accuracy metrics, including RMSE and
mean log loss. We report area under the ROC curve (AUC), though all metrics yield the
same pattern of results. Figure 3 shows the mean AUC, where larger AUC values indicate
better performance. Each graph is a different dataset. The five colored bars represent
alternative approaches to determining the exercise-skill assignments. LFA uses skills from
Learning Factors Analysis, a semi-automated technique that refines expert-provided skills
[5]; LFA skills are available for only the Fractions and Geometry datasets. Single assigns
the same skill to all exercises. Exercise specific assigns a different skill to each exercise.
Expert uses the expert-provided skills. WCRP(0) uses the WCRP with no bias toward
the expert-provided skills, i.e., γ = 0, which is equivalent to a CRP. WCRP(γ) is our
technique with the level of bias inferred from the data.

The performance of expert is unimpressive. On Fractions, expert is worse than the single
baseline. On Physics and Statics, expert is worse than the exercise-specific baseline.
WCRP(γ) is consistently better than both the single and exercise-specific baselines
across all five datasets. WCRP(γ) also outperforms expert by doing significantly better
on three datasets and equivalently on two. Finally, WCRP(γ) is about the same as LFA
on Geometry, but substantially better on Fractions. (A comparison between these models
is somewhat inappropriate. LFA has an advantage because it was developed on Geometry
and is provided entire data sets for training, but it has a disadvantage because it was not
designed to improve the performance of BKT.) Surprisingly, WCRP(0), which ignores
the expert-provided skills, performs nearly as well as WCRP(γ). Only for Geometry was
WCRP(γ) reliably better (two-tailed t-test with t(49) = 5.32, p < .00001). The last
column of Table 1, which shows the mean inferred γ value for WCRP(γ), helps explain
the pattern of results. The datasets are arranged in order of smallest to largest inferred
γ, both in Table 1 and Figure 3. The inferred γ values do a good job of indicating where
WCRP(γ) outperforms expert: the model infers that the expert skill assignments are
useful for Geometry and Spanish, but less so for the other datasets. Where the expert skill
assignments are most useful, WCRP(0) suffers. On the datasets where WCRP(γ) is
highly biased, the mean number of inferred skills (Table 1, column 7) closely corresponds
to the number of expert-provided skills.
[Figure 3: Mean AUC on test students' data for six different methods of determining skill assignments in BKT (Single, Exercise specific, Expert, LFA, WCRP(0), WCRP(γ)), shown per dataset (Fractions, Physics, Statics, Geometry, Spanish). Error bars show ±1 standard error of the mean.]
5 Discussion
We presented a technique that discovers a set of cognitive skills which students use for
problem solving in an instructional domain. The technique assumes that when a student
works on a sequence of exercises requiring the same skill, the student's expected performance
should monotonically improve. Our technique addresses two challenges simultaneously: (1)
determining which skill is required to correctly answer each exercise, and (2) modeling a
student's dynamical knowledge state for each skill. We conjectured that a technique which
jointly addresses these two challenges might lead to more accurate predictions of student
performance than a technique which was based on expert skill labels. We found strong
evidence for this conjecture: On 3 of 5 datasets, skill discovery yields significantly improved
predictions over fixed expert-labeled skills; on the other two datasets, the two approaches
obtain comparable results.
Counterintuitively, incorporating expert labels into the prior provided little or no benefit.
Although one expects prior knowledge to play a smaller role as datasets become larger, we
observed that even medium-sized datasets (relative to the scale of today's big data) are
sufficient to support a pure data-driven approach. In simulation studies with both synthetic
data and actual student datasets, 50-100 students and roughly 10 exercises/skill provides
strong enough constraints on inference that expert labels are not essential.
Why should the expert skill labeling ever be worse than an inferred labeling? After all, educators design exercises to help students develop particular cognitive skills. One explanation
is that educators understand the knowledge structure of a domain, but have not parsed the
domain at the right level of granularity needed to predict student performance. For example, a set of exercises may all tap the same skill, but some require a deep understanding
of the skill whereas others require only a superficial or partial understanding. In such a
case, splitting the skill into two subskills may be beneficial. In other cases, combining two
skills which are learned jointly may subserve prediction, because the combination results
in longer exercise histories which provide more context for prediction. These arguments
suggest that fragmentation-coagulation processes [23] may be an interesting approach to
leveraging expert labelings as a prior.
One limitation of the results we report is that we have yet to perform extensive comparisons
of our technique to others that jointly model the mapping of exercises to skills and the
prediction of student knowledge state. Three matrix factorization approaches have been
proposed, two of which are as yet unpublished [24, 22, 14]. The most similar work to ours,
which also assumes each exercise is mapped to a single skill, is the topical HMM [8, 9]. The
topical HMM differs from our technique in that the underlying generative model supposes
that the exercise-skill mapping is inherently stochastic and thus can change from trial to
trial and student to student. (Also, it does not attempt to infer the number of skills or
to leverage expert-provided skills.) We have initated collaborations with several authors of
these alternative approaches, with the goal of testing the various approaches on exactly the
same datasets with the same evaluation metrics.
Acknowledgments This research was supported by NSF grants BCS-0339103 and BCS-720375 and by an NSF Graduate Research Fellowship to R. L.
References
[1] D. Aldous. Exchangeability and related topics. In École d'été de probabilités de Saint-Flour, pages 1-198. Springer, Berlin, 1985.
[2] R. Atkinson. Optimizing the learning of a second-language vocabulary. Journal of Experimental Psychology, 96:124-129, 1972.
[3] T. Barnes. The Q-matrix method: Mining student response data for knowledge. In J. Beck, editor, Proceedings of the 2005 AAAI Educational Data Mining Workshop, 2005.
[4] D. Blei and P. Frazier. Distance dependent Chinese restaurant processes. Journal of Machine Learning Research, 12:2383-2410, 2011.
[5] H. Cen, K. Koedinger, and B. Junker. Learning factors analysis - a general method for cognitive model evaluation and improvement. In M. Ikeda, K. Ashley, and T. Chan, editors, Intell. Tutoring Systems, volume 4053 of Lec. Notes in Comp. Sci., pages 164-175. Springer, 2006.
[6] A. Corbett and J. Anderson. Knowledge tracing: Modeling the acquisition of procedural knowledge. User Modeling & User-Adapted Interaction, 4:253-278, 1995.
[7] A. Corbett, K. Koedinger, and J. Anderson. Intelligent tutoring systems. In M. Helander, T. Landauer, and P. Prabhu, editors, Handbook of Human Computer Interaction, pages 849-874. Elsevier Science, Amsterdam, 1997.
[8] J. González-Brenes and J. Mostow. Dynamic cognitive tracing: Towards unified discovery of student and cognitive models. In Proc. of the 5th Intl. Conf. on Educ. Data Mining, 2012.
[9] J. González-Brenes and J. Mostow. What and when do students learn? Fully data-driven joint estimation of cognitive and student models. In Proc. 6th Intl. Conf. Educ. Data Mining, 2013.
[10] H. Ishwaran and L. James. Generalized weighted Chinese restaurant processes for species sampling mixture models. Statistica Sinica, 13:1211-1235, 2003.
[11] M. Khajah, R. Wing, R. Lindsey, and M. Mozer. Integrating latent-factor and knowledge-tracing models to predict individual differences in learning. EDM 2014, 2014.
[12] K. Koedinger, R. Baker, K. Cunningham, A. Skogsholm, B. Leber, and J. Stamper. A data repository for the EDM community: The PSLC DataShop. In C. Romero, S. Ventura, M. Pechenizkiy, and R. Baker, editors, Handbook of Educ. Data Mining, http://pslcdatashop.org, 2010.
[13] K. Koedinger, A. Corbett, and C. Perfetti. The knowledge-learning-instruction framework: Bridging the science-practice chasm to enhance robust student learning. Cognitive Science, 36(5):757-798, 2012.
[14] A. S. Lan, C. Studer, and R. G. Baraniuk. Time-varying learning and content analytics via sparse factor analysis. In ACM SIGKDD Conf. on Knowledge Disc. and Data Mining, 2014.
[15] R. Lindsey, J. Shroyer, H. Pashler, and M. Mozer. Improving students' long-term knowledge retention with personalized review. Psychological Science, 25:639-647, 2014.
[16] R. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9(2):249-265, 2000.
[17] R. Neal. Slice sampling. The Annals of Statistics, 31(3):705-767, 2003.
[18] Z. A. Pardos and N. T. Heffernan. KT-IDEM: Introducing item difficulty to the knowledge tracing model. In User Modeling, Adaption and Pers., pages 243-254. Springer, 2011.
[19] Z. A. Pardos, S. Trivedi, N. T. Heffernan, and G. N. Sárközy. Clustered knowledge tracing. In S. A. Cerri, W. J. Clancey, G. Papadourakis, and K. Panourgia, editors, ITS, volume 7315 of Lecture Notes in Computer Science, pages 405-410. Springer, 2012.
[20] A. Rafferty, E. Brunskill, T. Griffiths, and P. Shafto. Faster teaching by POMDP planning. In Proc. of the 15th Intl. Conf. on AI in Education, 2011.
[21] A. Smith, L. Frank, S. Wirth, M. Yanike, D. Hu, Y. Kubota, A. Graybiel, W. Suzuki, and E. Brown. Dynamic analysis of learning in behav. experiments. J. Neuro., 24:447-461, 2004.
[22] J. Sohl-Dickstein. Personalized learning and temporal modeling at Khan Academy. Invited talk at NIPS Workshop on Data Driven Education, 2013.
[23] Y. Teh, C. Blundell, and L. Elliott. Modelling genetic variations with fragmentation-coagulation processes. In Advances in Neural Information Processing Systems, 2011.
[24] N. Thai-Nghe, L. Drumond, T. Horváth, A. Krohn-Grimberghe, A. Nanopoulos, and L. Schmidt-Thieme. Factorization techniques for predicting student performance. In O. Santos and J. Botcario, editors, Educ. Rec. Systems and Technologies, pages 129-153. 2011.
[25] J. Whitehill. A stochastic optimal control perspective on affect-sensitive teaching. PhD thesis, Department of Computer Science, UCSD, 2012.
5,031 | 5,555 | Message Passing Inference for Large Scale Graphical
Models with High Order Potentials
Jian Zhang
ETH Zurich
Alexander G. Schwing
University of Toronto
Raquel Urtasun
University of Toronto
[email protected]
[email protected]
[email protected]
Abstract
To keep up with the Big Data challenge, parallelized algorithms based on dual decomposition have been proposed to perform inference in Markov random fields.
Despite this parallelization, current algorithms struggle when the energy has high
order terms and the graph is densely connected. In this paper we propose a partitioning strategy followed by a message passing algorithm which is able to exploit
pre-computations. It only updates the high-order factors when passing messages
across machines. We demonstrate the effectiveness of our approach on the task of
joint layout and semantic segmentation estimation from single images, and show
that our approach is orders of magnitude faster than current methods.
1 Introduction
Graphical models are a very useful tool to capture the dependencies between the variables of interest. In domains such as computer vision, natural language processing and computational biology
they have been very widely used to solve problems such as semantic segmentation [37], depth reconstruction [21], dependency parsing [4, 25] and protein folding [36].
Despite decades of research, finding the maximum a-posteriori (MAP) assignment or the minimum energy configuration remains an open problem, as it is NP-hard in general. Notable exceptions
are specialized solvers such as graph-cuts [7, 3] and dynamic programming [19, 1], which retrieve
the global optima for sub-modular energies and tree-shaped graphs. Algorithms based on message
passing [18, 9], a series of graph cut moves [16] or branch-and-bound techniques [5] are common
choices to perform approximate inference in the more general case. A task closely related to MAP
inference but typically harder is computation of the probability for a given configuration. It requires
computing the partition function, which is typically done via message passing [18], sampling or by
repeatedly using MAP inference to solve tasks perturbed via Gumbel distributions [8].
Of particular difficulty is the case where the involved potentials depend on more than two variables,
i.e., they are high-order, or the graph is densely connected. Several techniques have been developed
to allow current algorithms to handle high-order potentials, but they are typically restricted to potentials of a specific form, e.g., a function of the cardinality [17] or piece-wise linear potentials [11, 10].
For densely connected graphs with Gaussian potentials efficient inference methods based on filtering
have been proposed [14, 33].
Alternating minimization approaches, which iterate between solving for subsets of variables, have
also been studied [32, 38, 29]. However, most approaches lose their guarantees since related subproblems are solved independently. Another method to improve computational efficiency is to
divide the model into smaller tasks, which are solved in parallel using dual decomposition techniques [13, 20, 22]. Contrasting alternating minimization, convergence properties are ensured.
However, these techniques are computationally expensive despite the division of computation, since
global and dense interactions are still present.
In this work we show that for many graphical models it is possible to devise a partitioning strategy followed by a message passing algorithm such that efficiency can be improved significantly.
In particular, our approach adds additional terms to the energy function (i.e., regions to the Hasse
diagram) such that the high-order factors can be pre-computed and remain constant during local
message passing within each machine. As a consequence, high-order factors are only accessed once
before sending messages across machines. This contrasts tightening approaches [27, 28, 2, 26],
where additional regions are added to better approximate the marginal polytope at the cost of additional computations, while we are mainly interested in computational efficiency. In contrast to
re-scheduling strategies [6, 30, 2], our rescheduling is fixed and does not require additional computation.
Our experimental evaluations show that state-of-the-art techniques [9, 22] have difficulties optimizing energy functions that correspond to densely connected graphs with high-order factors. In contrast our approach is able to achieve more than one order of magnitude speed-ups while retrieving
the same solution in the complex task of jointly estimating 3D room layout and image segmentation
from a single RGB-D image.
2 Background: Dual Decomposition for Message Passing
We start by reviewing dual-decomposition approaches for inference in graphical models with high-order factors. To this end, we consider distributions defined over a discrete domain S = ∏_{i=1}^N S_i,
which is composed of a product of N smaller discrete spaces S_i = {1, ..., |S_i|}. We model our distribution to depend log-linearly on a scoring function θ(s) defined over the aforementioned discrete
product space S, i.e., p(s) = (1/Z) exp θ(s), with Z the partition function. Given the scoring function
θ(s) of a configuration s, it is unfortunately generally #P-complete to compute its probability since
the partition function Z is required. Its logarithm equals the following variational program [12]:
log Z = max_{p∈Δ} Σ_s p(s) θ(s) + H(p),    (1)

where H denotes the entropy and Δ indicates the probability simplex.
The variational program in Eq. (1) is challenging as it operates on the exponentially sized domain
S. However, we can make use of the fact that for many relevant applications the scoring function
θ(s) is additively composed of local terms, i.e., θ(s) = Σ_{r∈R} θ_r(s_r). These local scoring functions
θ_r depend on a subset of variables s_r = (s_i)_{i∈r}, defined on a domain S_r ⊆ S, which is specified by
the restriction often referred to as region r ⊆ {1, ..., N}, i.e., S_r = ∏_{i∈r} S_i. We refer to R as the
set of all restrictions required to compute the scoring function θ.
Locality of the scoring function allows us to equivalently rewrite the expected score via
sum_s p(s) theta(s) = sum_{r,s_r} p_r(s_r) theta_r(s_r) by employing marginals p_r(s_r) = sum_{s \ s_r} p(s). Unfortunately, an exact decomposition of the entropy H(p) using marginals is not possible. Instead, the entropy is typically
approximated by a weighted sum of local entropies, H(p) ~ sum_r c_r H(p_r), with c_r the counting
numbers. The task remains intractable despite the entropy approximation since the marginals p_r(s_r)
are required to arise from a valid joint distribution p(s). However, if we require the marginals to be
consistent only locally, we obtain a tractable approximation [34]. We thus introduce local beliefs
b_r(s_r) to denote the approximation, not to be confused with the true marginals p_r. The beliefs are
required to fulfill local marginalization constraints, i.e., sum_{s_p \ s_r} b_p(s_p) = b_r(s_r) for all r, s_r, p in P(r),
where the set P(r) subsumes the set of all parents of region r for which we want marginalization to
hold.
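To make the marginalization constraint concrete, the following short Python sketch (our own illustration; the region contents and belief values are made up) builds a parent belief over two variables and checks that it marginalizes to a child belief:

import numpy as np

# Parent region p = {1, 2} with |S_1| = |S_2| = 3; child region r = {1}.
rng = np.random.default_rng(0)
b_p = rng.random((3, 3))
b_p /= b_p.sum()                 # b_p lies in the probability simplex

# Sum over the parent states not in the child, i.e., over s_p \ s_r.
b_r = b_p.sum(axis=1)

# Local marginalization holds iff the child belief equals this marginal.
assert np.allclose(b_p.sum(axis=1), b_r)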
Putting all this together, we obtain the following approximation:

max_b  sum_{r,s_r} b_r(s_r) theta_r(s_r) + sum_r c_r H(b_r)
s.t.  for all r:  b_r in C = { b_r in Delta : sum_{s_p \ s_r} b_p(s_p) = b_r(s_r)  for all s_r, p in P(r) }.   (2)
The computation and memory requirements can be too demanding when dealing with large graphical models. To address this issue, [13, 22] showed that this task can be distributed onto multiple
Algorithm: Distributed Message Passing Inference
Let a = 1/|M(r)| and repeat until convergence:

1. For every kappa in parallel: iterate T times over r in R(kappa): for all p in P(r), s_r

   mu_{p->r}(s_r) = c_p ln sum_{s_p \ s_r} exp( [ theta_hat_p(s_p) - sum_{p' in P(p)} lambda_{p->p'}(s_p) + sum_{r' in C(p) cap kappa \ r} lambda_{r'->p}(s_{r'}) + nu_{kappa->p}(s_p) ] / c_p )   (3)

   lambda_{r->p}(s_r) propto [ c_hat_p / (c_hat_r + sum_{p in P(r)} c_hat_p) ] * [ theta_hat_r(s_r) + sum_{c in C(r) cap kappa} lambda_{c->r}(s_c) + nu_{kappa->r}(s_r) + sum_{p in P(r)} mu_{p->r}(s_r) ] - mu_{p->r}(s_r)   (4)

2. Exchange information by iterating once over r in G, for all kappa in M(r):

   nu_{kappa->r}(s_r) = a sum_{c in C(r)} lambda_{c->r}(s_c) - sum_{c in C(r) cap kappa} lambda_{c->r}(s_c) + sum_{p in P(r)} lambda_{r->p}(s_r) - a sum_{kappa_hat in M(r), p in P(r)} lambda^{kappa_hat}_{r->p}(s_r)   (5)

Figure 1: A block-coordinate descent algorithm for the distributed inference task.
computers kappa by employing dual decomposition techniques. More specifically, the task is partitioned
into multiple independent tasks with constraints at the boundary ensuring consistency of the parts
upon convergence. Hence, an additional constraint is added to make sure that all beliefs b^kappa_r that
are assigned to multiple computers, i.e., those at the boundary of the parts, are consistent upon
convergence and equal a single region belief b_r. The distributed program is then:
max_{b_r, b^kappa_r in Delta}  sum_{kappa,r,s_r} b^kappa_r(s_r) theta_hat_r(s_r) + sum_{kappa,r} c_hat_r H(b^kappa_r)
s.t.  for all kappa, r in R_kappa, s_r, p in P(r):  sum_{s_p \ s_r} b^kappa_p(s_p) = b^kappa_r(s_r)
      for all kappa, r in R_kappa, s_r:  b^kappa_r(s_r) = b_r(s_r),

where R_kappa refers to regions on computer kappa. We uniformly distribute the scores theta_r(s_r) and the
counting numbers c_r of a region r to all overlapping machines. Thus theta_hat_r = theta_r/|M(r)| and c_hat_r =
c_r/|M(r)|, with M(r) the set of machines that are assigned to region r.
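The uniform splitting of scores and counting numbers across machines is a one-liner per region; a minimal Python sketch (hypothetical machine assignment M and made-up values):

# M[r] is the set of machines holding region r.
M = {"r1": {0}, "r2": {1}, "shared": {0, 1}}
theta = {"r1": 2.0, "r2": -1.0, "shared": 0.5}    # made-up scalar scores
c = {"r1": 1.0, "r2": 1.0, "shared": 1.0}         # counting numbers

theta_hat = {r: v / len(M[r]) for r, v in theta.items()}   # theta_r / |M(r)|
c_hat = {r: v / len(M[r]) for r, v in c.items()}           # c_r / |M(r)|
print(theta_hat["shared"], c_hat["shared"])                # 0.25 0.5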
Note that this program operates on the regions defined by the energy decomposition. To derive an
efficient algorithm making use of the structure incorporated in the constraints we follow [22] and
change to the dual domain. For the marginalization constraints we introduce Lagrange multipliers
lambda^kappa_{r->p}(s_r) for every computer kappa, all regions r in R_kappa assigned to that computer, all its states s_r
and all its parents p. For the consistency constraint we introduce Lagrange multipliers nu_{kappa->r}(s_r)
for all computers, regions and states. The arrows indicate that the Lagrange multipliers can be
interpreted as messages sent between different nodes in a Hasse diagram with nodes corresponding
to the regions.
The resulting distributed inference algorithm [22] is summarized in Fig. 1. It consists of two parts,
the first of which is a standard message passing on the Hasse-diagram defined locally on each computer kappa. The second operation interrupts message passing occasionally to exchange information
between computers. This second task of exchanging messages is often visualized on a graph G with
nodes corresponding to computers and additional vertices denoting shared regions.
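To give a flavour of the message computation in Eq. (3), the sketch below (illustrative Python, not the authors' code) evaluates a single mu message in the simplified case where the parent p has no parents of its own and no other children on the machine, so only the theta_hat_p term survives inside the exponential:

import numpy as np

c_p = 1.0                                           # counting number of p
theta_p = np.random.default_rng(1).random((4, 5))   # theta_hat_p(s_p), p over two variables

def mu_message(theta_p, c_p, axis):
    # mu_{p->r}(s_r) = c_p * log sum_{s_p \ s_r} exp(theta_p(s_p) / c_p),
    # computed with a numerically stabilized log-sum-exp.
    a = theta_p / c_p
    m = a.max(axis=axis, keepdims=True)
    return c_p * (m.squeeze(axis) + np.log(np.exp(a - m).sum(axis=axis)))

mu = mu_message(theta_p, c_p, axis=1)   # one entry per child state s_r
print(mu.shape)                          # (4,)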
Fig. 2(a) depicts a region graph with four unary regions and two high-order ones, i.e., R =
{{1}, {2}, {3}, {4}, {1, 2, 3}, {1, 2, 3, 4}}. We partition this region graph onto two computers
kappa_1, kappa_2 as indicated via the dashed rectangles. The graph G containing as nodes both computers
and the shared region is provided as well. The connections between all regions are labeled with the
corresponding message, i.e., lambda, mu and nu. We emphasize that the consistency messages nu are only
modified when sending information between computers. Investigating the provided example in
Fig. 2(a) more carefully, we observe that the computation of mu as defined in Eq. (3) in Fig. 1 involves summing over the state-space of the third-order region {1, 2, 3} and the fourth-order region
{1, 2, 3, 4}. The presence of those high-order regions makes dual decomposition approaches [22]
Figure 2: Standard distributed message passing operating on an inference task partitioned to two
computers (left) is compared to the proposed approach (right), where newly introduced regions (yellow) ensure constant messages mu from the high-order regions.
impractical. In the next section we show how message passing algorithms can become orders of
magnitude faster when adding additional regions.
3 Efficient Message Passing for High-order Models
The distributed message passing procedure described in the previous section involves summations
over large state-spaces when computing the messages mu. In this section we derive an approach
that can significantly reduce the computation by adding additional regions and performing message passing with a specific message scheduling. Our key observation is that computation can be greatly
reduced if the high-order regions are singly connected, since their outgoing message mu remains constant. Generally, singly-connected high-order regions do not occur in graphical models. However, in
many cases we can use dual decomposition to distribute the computation in a way that the high-order
regions become singly-connected if we introduce additional intermediate regions located between
the high-order regions and the low-order ones (e.g., unary regions).
At first sight, adding regions increases computational complexity since we have to iterate over additional terms. However, we add regions only if they result in constant messages from regions with
even larger state space. By pre-computing those constant messages rather than re-evaluating them at
every iteration, we hence decrease computation time despite augmenting the graph with additional
regions, i.e., additional marginal beliefs br .
Specifically, we observe that there are no marginalization constraints for the singly-connected high-order regions, subsumed in the set H_kappa = {r in R_bar_kappa : P(r) = emptyset, |C(r)| = 1}, since their set
of parents is empty. An important observation, made precise in Claim 1, is that the corresponding
messages mu are constant for high-order regions unless nu_{kappa->r} changes. Therefore we can improve the
message passing algorithm discussed in the previous section by introducing additional regions to
increase the size of the set |H_kappa| as much as possible while not changing the cost function. The latter
is ensured by requiring the additional counting numbers and potentials to equal zero. However, we
note that the program will change since the constraint set is augmented.
More formally, let R_bar_kappa be the set of all regions, i.e., the regions R_kappa of the original task on computer
kappa in addition to the newly added regions r_tilde in R_bar_kappa \ R_kappa. Let H_kappa = {r in R_bar_kappa : P(r) = emptyset, |C(r)| = 1}
be the set of high-order regions on computer kappa that are singly connected and have no parent. Further,
let its complement H_bar_kappa = R_bar_kappa \ H_kappa denote all remaining regions. The inference task is given by
max_{b_r, b^kappa_r in Delta}  sum_{kappa,r,s_r} b^kappa_r(s_r) theta_hat_r(s_r) + sum_{kappa,r} c_hat_r H(b^kappa_r)
s.t.  for all kappa, r in H_bar_kappa, s_r, p in P(r):  sum_{s_p \ s_r} b^kappa_p(s_p) = b^kappa_r(s_r)
      for all kappa, r in R_bar_kappa, s_r:  b^kappa_r(s_r) = b_r(s_r).   (9)
Even though we set theta_r(s_r) = 0 for all states s_r, and c_hat_r = 0 for all newly added regions r in R_bar_kappa \ R_kappa,
the inference task is not identical to the original problem since the constraint set is not the same. Note
that new regions introduce new marginalization constraints. Next we show that messages leaving
singly-connected high-order regions are constant.
Algorithm: Message Passing for Large Scale Graphical Models with High Order Potentials
Let a = 1/|M(r)| and repeat until convergence:

1. For every kappa in parallel: update the singly-connected regions p in H_kappa: let r = C(p) and, for all s_r, pre-compute mu_{p->r}(s_r) via Eq. (6).

2. For every kappa in parallel: iterate T times over r in R_bar_kappa: for all p in P(r) \ H_kappa, s_r

   mu_{p->r}(s_r) = c_p ln sum_{s_p \ s_r} exp( [ theta_hat_p(s_p) - sum_{p' in P(p)} lambda_{p->p'}(s_p) + sum_{r' in C(p) cap kappa \ r} lambda_{r'->p}(s_{r'}) + nu_{kappa->p}(s_p) ] / c_p )   (6)

   and for all p in P(r), s_r

   lambda_{r->p}(s_r) propto [ c_hat_p / (c_hat_r + sum_{p in P(r)} c_hat_p) ] * [ theta_hat_r(s_r) + sum_{c in C(r) cap kappa} lambda_{c->r}(s_c) + nu_{kappa->r}(s_r) + sum_{p in P(r)} mu_{p->r}(s_r) ] - mu_{p->r}(s_r)   (7)

3. Exchange information by iterating once over r in G, for all kappa in M(r):

   nu_{kappa->r}(s_r) = a sum_{c in C(r)} lambda_{c->r}(s_c) - sum_{c in C(r) cap kappa} lambda_{c->r}(s_c) + sum_{p in P(r)} lambda_{r->p}(s_r) - a sum_{kappa_hat in M(r), p in P(r)} lambda^{kappa_hat}_{r->p}(s_r)   (8)

Figure 3: A block-coordinate descent algorithm for the distributed inference task.
Claim 1. During the message passing updates defined in Fig. 1, the multiplier mu_{p->r}(s_r) is constant for
singly-connected high-order regions p.

Proof: More carefully investigating Eq. (3), which defines mu, it follows that sum_{p' in P(p)} lambda_{p->p'}(s_p) =
0 because P(p) = emptyset since p is assumed singly-connected. For the same reason we obtain
sum_{r' in C(p) cap kappa \ r} lambda_{r'->p}(s_{r'}) = 0 because C(p) cap kappa \ r = emptyset, and nu_{kappa->p}(s_p) is constant between
exchanges of information. Therefore, mu_{p->r}(s_r) is constant irrespective of all other messages
and can be pre-computed upon exchange of information.
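In implementation terms, Claim 1 says the mu messages of regions in H_kappa can be computed once per exchange and then cached; a sketch of the control flow (illustrative Python with hypothetical helper names, not the authors' code):

def local_rounds(T, H, regions, compute_mu):
    # H: singly-connected high-order regions on this machine (Claim 1).
    mu_cache = {p: compute_mu(p) for p in H}   # pre-compute once per exchange
    for _ in range(T):                          # T rounds of local message passing
        for r in regions:
            for p in r.parents:
                # Constant messages are read from the cache; all others are
                # recomputed via Eq. (3) and then used to update lambda (Eq. 4).
                mu = mu_cache[p] if p in H else compute_mu(p)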
We can thus pre-compute the constant messages before performing message passing. Our approach
is summarized in Fig. 3. We now provide its convergence properties in the following claim.
Claim 2. The algorithm outlined in Fig. 3 is guaranteed to converge to the global optimum of the
program given in Eq. (9) for c_r > 0 for all r, and is guaranteed to converge in case c_r >= 0 for all r.

Proof: The message passing algorithm is derived as a block-coordinate descent algorithm in the
dual domain. Hence it inherits the properties of block-coordinate descent algorithms [31], which are
guaranteed to converge to a single global optimum in case of strict concavity (c_r > 0 for all r) and which
are guaranteed to converge in case of concavity only (c_r >= 0 for all r), which proves the claim.
We note that Claim 1 nicely illustrates the benefits of working with region graphs rather than factor
graphs. A bi-partite factor graph contains variable nodes connected to possibly high-order factors.
Assume that we distributed the task at hand such that every high-order region of size larger than two
is connected to at most two local variables. By adding a pairwise region in between the original
high-order factor node and the variable nodes we are able to reduce computational complexity since
the high-order factors are now singly connected. Therefore, we can guarantee that the complexity of
the local message-passing steps run in each machine reduces from the state-space size of the largest
factor to the size of the largest newly introduced region in each computer. This is summarized in the
following claim.
Claim 3. Assume we are given a high-order factor-graph representation of a graphical model. By
distributing the model onto multiple computers and by introducing additional regions, we reduce the
complexity of the message passing iterations on every computer, generally dominated by the state-space size of the largest region s_max = max_{r in R_kappa} |S_r|, from O(s_max) to O(s'_max) with s'_max = max_{r in H_bar_kappa} |S_r|.
Figure 4: Parameterization of the layout task is visualized in (a). Compatibility of a superpixel
labeling with a wall parameterization using third-order functions is outlined in (b) and the graphical
model for the joint layout-segmentation task is depicted in (c).
Table 1: Average time in seconds to achieve the specified relative duality gap, for eps = 0 (left) and eps = 1 (right).

                     eps = 0                       eps = 1
rel. duality gap     1       0.1      0.01    |    1        0.1      0.01
Ours [s]             0.78    5.92     51.59   |    15.58    448.26   1150.1
cBP [s]              31.60   986.54   1736.6  |    411.81   4357.9   4479.9
dcBP [s]             19.48   1042.8   1772.6  |    451.71   4506.6   4585.3
Proof: The complexity of standard message passing on a region graph is linear in the largest state-space region, i.e., O(s_max). Since some operations can be pre-computed as per Claim 1, we emphasize that the largest newly introduced region on computer kappa is of state-space size s'_max, which
concludes the proof.
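To see the size of the reduction promised by Claim 3, consider a hypothetical machine whose largest region couples four variables with k states each, and suppose the newly introduced regions are pairwise (a made-up illustration in Python):

k = 25                        # states per variable, as in the layout task below
s_max = k ** 4                # largest state space without pre-computation
s_max_prime = k ** 2          # largest recomputed region after augmentation
print(s_max, s_max_prime, s_max // s_max_prime)   # 390625 625 625

Under these assumptions each local message-passing iteration is cheaper by a factor of k^2.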
Claim 3 indicates that distributing computation in addition to message rescheduling is a powerful
tool to cope with high-order potentials. To gain some insight, we illustrate our idea with a specific
example. Suppose we distribute the inference computation on two computers kappa_1, kappa_2 as shown in
Fig. 2(a). We compare it to a task on R_bar regions, i.e., we introduce additional regions r_tilde in R_bar \ R. The
messages required in the augmented task are visualized in Fig. 2(b). Each computer (box highlighted
with dashed lines) is assigned a task specified by the contained region graph. As before, we also
visualize the messages nu occasionally sent between the computers in a graph containing as nodes
the shared factors and the computers (boxes drawn with dashed lines). The algorithm proceeds by
passing messages lambda, mu on each computer independently for T rounds. Afterwards, messages nu are
exchanged between computers. Importantly, we note that messages for singly-connected high-order
regions within dashed boxes are only required to be computed once upon exchanging message nu.
This is the case for all high-order regions in Fig. 2(b) and for no high-order region in Fig. 2(a),
highlighting the obtained computational benefits.
4 Experimental Evaluation
We demonstrate the effectiveness of our approach in the task of jointly estimating the layout and
semantic labels of indoor scenes from a single RGB-D image. We use the dataset of [38], which is
a subset of the NYU v2 dataset [24]. Following [38], we utilize 202 images for training and 101 for
testing. Given the vanishing points (points where parallel lines meet at infinity), the layout task can
be formulated with four random variables s1 , . . . , s4 , each of which corresponds to angles for rays
originating from two distinct vanishing points [15]. We discretize each ray into |Si | = 25 states. To
define the segmentation task, we partition each image into super pixels. We then define a random
variable with six states for each super pixel, s_i in S_i = {left, front, right, ceiling, floor, clutter}, with
i > 4. We refer the reader to Fig. 4(a) and Fig. 4(b) for an illustration of the parameterization of the
problem. The graphical model for the joint problem is depicted in Fig. 4(c).
The score of the joint model is given by a sum of scores

theta(s) = theta_lay(s_1, . . . , s_4) + theta_label(s_5, . . . , s_{M+4}) + theta_comp(s),

where theta_lay is defined as the sum of scores over the layout faces, which can be decomposed into a
sum of pairwise functions using integral geometry [23]. The labeling score theta_label contains unary
Figure 5: Average normalized primal/dual and factor agreement for eps = 1 and eps = 0, comparing ours, cBP, and dcBP with c = 1 and c = 2.
potentials and pairwise regularization between neighboring superpixels. The third function, theta_comp,
couples the two tasks and encourages the layout and the segmentation to agree in their labels, e.g., a
superpixel on the left wall of the layout is more likely to be assigned the left-wall or the object label.
The compatibility score decomposes into a sum of fifth-order scores, one for each superpixel, i.e.,
theta_comp(s) = sum_{i>4} theta_comp,i(s_1, . . . , s_4, s_i). Using integral geometry [23], we can further decompose
each superpixel score theta_comp,i into a sum of third-order energies. As illustrated in Fig. 4(c), every
superpixel variable s_i, i > 4, is therefore linked to 4-choose-2 third-order functions of state-space
size 6 * 25^2. These functions measure the overlap of each superpixel with a region specified by two
layout ray angles s_i, s_j with i, j in {1, . . . , 4}, i != j. This is illustrated in Fig. 4(b) for the area
highlighted in purple and the blue region defined by s_2 and s_3. Since a typical image has around
250 superpixels, there are approximately 1000 third-order factors.
Following Claim 3 we recognize that the third-order functions are connected to at most two variables if we distribute the inference such that the layout task is assigned to one computer while the
segmentation task is divided onto other machines. Importantly, this corresponds to a roughly equal
split of the problem when using our approach, since all tasks are pairwise and the state-space of
the layout task is larger than that of the semantic segmentation. Despite the third-order regions
involved in the original model, every local inference task contains at most pairwise factors.
We use convex BP [35, 18, 9] and distributed convex BP [22] as baselines. For our method, we assign
layout nodes to the first machine and segmentation nodes to the second one. Without introducing
additional regions and pre-computations the workload of this split is highly unbalanced. This makes
distributed convex BP even slower than convex BP since many messages are exchanged over the
network. To be more fair to distributed convex BP, we split the nodes into two parts, each with 2
layout variables and half of the segmentation variables. For all experiments, we set cr = 1 and
evaluate the settings = 1 and = 0. For a fair comparison we employ a single core for our
approach and convex BP and two cores for distributed convex BP. Note that our approach can be run
in parallel to achieve even faster convergence.
We compare our method to the baselines using two metrics: Normalized primal/dual is a rescaled
version of the original primal and dual normalized by the absolute value of the optimal score. This
allow us to compare different images that might have fairly different energies. In case none of
the algorithms converged we normalize all energies using the mean of the maximal primal and the
minimum dual. The second metric is the factor agreement, which is defined as the proportion of
factors that agree with the connected node marginals.
Fig. 5 depicts the normalized primal/dual as well as the factor agreement for eps = 0 (i.e., MAP)
and eps = 1 (i.e., marginals). We observe that our proposed approach converges significantly faster
[Figure 6: qualitative results for twelve images; each panel is annotated with its layout and segmentation error (layout errors range from 0.90% to 25.89%, segmentation errors from 3.65% to 32.08%).]
Figure 6: Qualitative results (eps = 0): the first column illustrates the inferred layout (blue) and layout
ground truth (red). The second and third columns show estimated and ground truth segmentations, respectively. Failure modes are shown in the last row. They are due to bad vanishing point estimation.
than the baselines. We additionally observe that for densely coupled tasks, the performance of
dcBP degrades when exchanging messages every other iteration (yellow curves). Importantly, in
our experiments we never observed any of the other approaches to converge when our approach
did not converge. Tab. 1 depicts the time in seconds required to achieve a certain relative duality
gap. We observe that our proposed approach outperforms all baselines by more than one order of
magnitude. Fig. 6 shows qualitative results for eps = 0. Note that our approach manages to accurately
predict layouts and corresponding segmentations. Some failure cases are illustrated in the bottom
row. They are largely due to failures in the vanishing point detection which our approach can not
recover from.
5 Conclusions
We have proposed a partitioning strategy followed by a message passing algorithm which is able to
significantly speed up dual decomposition methods for parallel inference in Markov random fields
with high-order terms and dense connections. We demonstrate the effectiveness of our approach on
the task of joint layout and semantic segmentation estimation from single images, and show that our
approach is orders of magnitude faster than existing methods. In the future, we plan to investigate
the applicability of our approach to other scene understanding tasks.
References
[1] A. Amini, T. Wymouth, and R. Jain. Using Dynamic Programming for Solving Variational Problems in
Vision. PAMI, 1990.
[2] D. Batra, S. Nowozin, and P. Kohli. Tighter Relaxations for MAP-MRF Inference: A Local Primal-Dual
Gap based Separation Algorithm. In Proc. AISTATS, 2011.
[3] Y. Boykov, O. Veksler, and R. Zabih. Fast Approximate Energy Minimization via Graph Cuts. PAMI,
2001.
[4] M. Collins. Head-Driven Statistical Models for Natural Language Parsing. Computational Linguistics,
2003.
[5] R. Dechter. Reasoning with Probabilistic and Deterministic Graphical Models: Exact Algorithms. Morgan & Claypool, 2013.
[6] G. Elidan, I. McGraw, and D. Koller. Residual belief propagation: Informed scheduling for asynchronous
message passing. In Proc. UAI, 2006.
[7] L. R. Ford and D. R. Fulkerson. Maximal flow through a network. Canadian Journal of Mathematics,
1956.
[8] T. Hazan and T. Jaakkola. On the Partition Function and Random Maximum A-Posteriori Perturbations.
In Proc. ICML, 2012.
[9] T. Hazan and A. Shashua. Norm-Product Belief Propagation: Primal-Dual Message-Passing for LPRelaxation and Approximate-Inference. Trans. Information Theory, 2010.
8
[10] P. Kohli and P. Kumar. Energy Minimization for Linear Envelope MRFs. In Proc. CVPR, 2010.
[11] P. Kohli, L. Ladicky, and P. H. S. Torr. Robust higher order potentials for enforcing label consistency.
IJCV, 2009.
[12] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press,
2009.
[13] N. Komodakis, N. Paragios, and G. Tziritas. MRF Optimization via Dual Decomposition: MessagePassing Revisited. In Proc. ICCV, 2007.
[14] P. Kr?ahenb?uhl and V. Koltun. Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials.
In Proc. NIPS, 2011.
[15] D. C. Lee, A. Gupta, M. Hebert, and T. Kanade. Estimating Spatial Layout of Rooms using Volumetric
Reasoning about Objects and Surfaces. In Proc. NIPS, 2010.
[16] V. Lempitsky, C. Rother, S. Roth, and A. Blake. Fusion Moves for Markov Random Field Optimization.
PAMI, 2010.
[17] Y. Li, D. Tarlow, and R. Zemel. Exploring compositional high order pattern potentials for structured
output learning. In Proc. CVPR, 2013.
[18] T. Meltzer, A. Globerson, and Y. Weiss. Convergent Message Passing Algorithms: a unifying view. In
Proc. UAI, 2009.
[19] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.
[20] M. Salzmann. Continuous Inference in Graphical Models with Polynomial Energies. In Proc. CVPR,
2013.
[21] M. Salzmann and R. Urtasun. Beyond feature points: structured prediction for monocular non-rigid 3d
reconstruction. In Proc. ECCV, 2012.
[22] A. G. Schwing, T. Hazan, M. Pollefeys, and R. Urtasun. Distributed Message Passing for Large Scale
Graphical Models. In Proc. CVPR, 2011.
[23] A. G. Schwing, T. Hazan, M. Pollefeys, and R. Urtasun. Efficient Structured Prediction for 3D Indoor
Scene Understanding. In Proc. CVPR, 2012.
[24] N. Silberman, D. Hoiem, P. Kohli, and R. Fergus. Indoor Segmentation and Support Inference from
RGBD Images. In Proc. ECCV, 2012.
[25] D. A. Smith and J. Eisner. Dependency parsing by belief propagation. In Proc. EMNLP, 2008.
[26] D. Sontag, D. K. Choe, and Y. Li. Efficiently Searching for Frustrated Cycles in MAP Inference. In Proc.
UAI, 2012.
[27] D. Sontag and T. Jaakkola. New Outer Bounds on the Marginal Polytope. In Proc. NIPS, 2007.
[28] D. Sontag, T. Meltzer, A. Globerson, and T. Jaakkola. Tightening LP Relaxations for MAP using Message
Passing. In Proc. NIPS, 2008.
[29] D. Sun, C. Liu, and H. Pfister. Local Layering for Joint Motion Estimation and Occlusion Detection. In
Proc. CVPR, 2014.
[30] C. Sutton and A. McCallum. Improved dynamic schedules for belief propagation. In Proc. UAI, 2007.
[31] P. Tseng and D. P. Bertsekas. Relaxation Methods for Problems with Strictly Convex Separable Costs and
Linear Constraints. Mathematical Programming, 1987.
[32] L. Valgaerts, A. Bruhn, H. Zimmer, J. Weickert, C. Stroll, and C. Theobalt. Joint Estimation of Motion,
Structure and Geometry from Stereo Sequences. In Proc. ECCV, 2010.
[33] V. Vineet, J. Warrell, and P. H. S. Torr. Filter-based Mean-Field Inference for Random Fields with Higher
Order Terms and Product Label-Spaces. In Proc. ECCV, 2012.
[34] M. J. Wainwright and M. I. Jordan. Graphical Models, Exponential Families and Variational Inference.
Foundations and Trends in Machine Learning, 2008.
[35] Y. Weiss, C. Yanover, and T. Meltzer. MAP Estimation, Linear Programming and Belief Propagation with
Convex Free Energies. In Proc. UAI, 2007.
[36] C. Yanover, O. Schueler-Furman, and Y. Weiss. Minimizing and Learning Energy Functions for SideChain Prediction. J. of Computational Biology, 2008.
[37] J. Yao, S. Fidler, and R. Urtasun. Describing the scene as a whole: Joint object detection, scene classification and semantic segmentation. In Proc. CVPR, 2012.
[38] J. Zhang, K. Chen, A. G. Schwing, and R. Urtasun. Estimating the 3D Layout of Indoor Scenes and its
Clutter from Depth Sensors. In Proc. ICCV, 2013.
A Filtering Approach to Stochastic Variational Inference
Neil M.T. Houlsby*
Google Research
Zurich, Switzerland
[email protected]
David M. Blei
Department of Statistics
Department of Computer Science
Columbia University
[email protected]
Abstract
Stochastic variational inference (SVI) uses stochastic optimization to scale up
Bayesian computation to massive data. We present an alternative perspective on
SVI as approximate parallel coordinate ascent. SVI trades off bias and variance
to step close to the unknown true coordinate optimum given by batch variational
Bayes (VB). We define a model to automate this process. The model infers the location of the next VB optimum from a sequence of noisy realizations. As a consequence of this construction, we update the variational parameters using Bayes rule,
rather than a hand-crafted optimization schedule. When our model is a Kalman
filter this procedure can recover the original SVI algorithm and SVI with adaptive
steps. We may also encode additional assumptions in the model, such as heavy-tailed noise. By doing so, our algorithm outperforms the original SVI schedule
and a state-of-the-art adaptive SVI algorithm in two diverse domains.
1 Introduction
Stochastic variational inference (SVI) is a powerful method for scaling up Bayesian computation to
massive data sets [1]. It has been successfully used in many settings, including topic models [2],
probabilistic matrix factorization [3], statistical network analysis [4, 5], and Gaussian processes [6].
SVI uses stochastic optimization to fit a variational distribution, following cheap-to-compute noisy
natural gradients that arise from repeatedly subsampling the data. The algorithm follows these
gradients with a decreasing step size [7]. One nuisance, as for all stochastic optimization techniques,
is setting the step size schedule.
In this paper we develop variational filtering, an alternative perspective of stochastic variational
inference. We show that this perspective leads naturally to a tracking algorithm?one based on
a Kalman filter?that effectively adapts the step size to the idiosyncrasies of data subsampling.
Without any tuning, variational filtering performs as well or better than the best constant learning
rate chosen in retrospect. Further, it outperforms both the original SVI algorithm and SVI with
adaptive learning rates [8].
In more detail, variational inference optimizes a high-dimensional variational parameter lambda to find
a distribution that approximates an intractable posterior. A concept that is important in SVI is the
parallel coordinate update. This refers to setting each dimension of lambda to its coordinate optimum, but
where these coordinates are computed in parallel. We denote the resulting updated parameters lambda^VB.
With this definition we have a new perspective on SVI. At each iteration it attempts to reach its parallel coordinate update, but one estimated from a randomly sampled data point. (The true coordinate
update requires iterating over all of the data.) Specifically, SVI iteratively updates an estimate of lambda
*Work carried out while a member of the University of Cambridge, visiting Princeton University.
as follows,

lambda_t = (1 - rho_t) lambda_{t-1} + rho_t lambda_hat_t,   (1)

where lambda_hat_t is a random variable whose expectation is lambda^VB_t and rho_t is the learning rate. The original
paper on SVI points out that this iteration works because lambda^VB_t - lambda_t is the natural gradient of the
variational objective, and so Eq 1 is a noisy gradient update. But we can also see the iteration as a
noisy attempt to reach the parallel coordinate optimum lambda^VB_t. While lambda_hat_t is an unbiased estimate of this
quantity, we will show that Eq 1 uses a biased estimate but with reduced variance.

This new perspective opens the door to other ways of updating lambda_t based on the noisy estimates of
lambda^VB_t. In particular, we use a Kalman filter to track the progress of lambda_t based on the sequence of
noisy coordinate updates. This gives us a "meta-model" about the optimal parameter, which we now
estimate through efficient inference. We show that one setting of the Kalman filter corresponds to
SVI; another corresponds to SVI with adaptive learning rates; and others, like using a t-distribution
in place of a Gaussian, account better for noise than any previous methods.
2 Variational Filtering
We first introduce stochastic variational inference (SVI) as approximate parallel coordinate ascent.
We use this view to present variational filtering, a model-based approach to variational optimization
that observes noisy parallel coordinate optima and seeks to infer the true VB optimum. We instantiate this method with a Kalman filter, discuss relationships to other optimization schedules, and
extend the model to handle real-world SVI problems.
Stochastic Variational Inference  Given data x_{1:N}, we want to infer the posterior distribution
over model parameters theta, p(theta|x_{1:N}). For most interesting models exact inference is intractable and
we must use approximations. Variational Bayes (VB) formulates approximate inference as a batch
optimization problem. The intractable posterior distribution p(theta|x_{1:N}) is approximated by a simpler
distribution q(theta; lambda), where lambda are the variational parameters of q.(1) These parameters are adjusted to
maximize a lower bound on the model evidence (the ELBO),

L(lambda) = sum_{i=1}^N E_q[log p(x_i|theta)] + E_q[log p(theta)] - E_q[log q(theta)].   (2)
Maximizing Eq 2 is equivalent to minimizing the KL divergence between the exact and approximate
posterior, KL[q||p]. Successive optima of the ELBO often have closed-form [1], so to maximize
Eq 2 VB can perform successive parallel coordinate updates on the elements in lambda, lambda_{t+1} = lambda^VB_t.

Unfortunately, the sum over all N datapoints in Eq 2 means that lambda^VB_t is too expensive on large
datasets. SVI avoids this difficulty by sampling a single datapoint (or a mini-batch) and optimizing
a cheap, noisy estimate of the ELBO, L_hat(lambda). The optimum of L_hat(lambda) is denoted lambda_hat,

L_hat(lambda) = N E_q[log p(x_i|theta)] + E_q[log p(theta)] - E_q[log q(theta)],   (3)

lambda_hat := argmax_lambda L_hat(lambda) = E_q[N log p(x_i|theta) + log p(theta)].   (4)
The constant N in Eq 4 ensures the noisy parallel coordinate optimum is unbiased with respect to
the full VB optimum, E[lambda_hat_t] = lambda^VB_t. After computing lambda_hat_t, SVI updates the parameters using Eq 1.
This corresponds to using natural gradients [9] to perform stochastic gradient ascent on the ELBO.
We present an alternative perspective on Eq 1. SVI may be viewed as an attempt to reach the true
parallel coordinate optimum lambda^VB_t using the noisy estimate lambda_hat_t. The observation lambda_hat_t is an unbiased
estimator of lambda^VB_t with variance Var[lambda_hat_t]. The variance may be large, so SVI makes a bias/variance
trade-off to reduce the overall error. The bias and variance in lambda_t computed using SVI (Eq 1) are

E[lambda_t - lambda^VB_t] = (1 - rho_t)(lambda_{t-1} - lambda^VB_t),    Var[lambda_t] = rho_t^2 Var[lambda_hat_t],   (5)
respectively. Decreasing the step size reduces the variance but increases the bias. However, as the
algorithm converges, the bias decreases as the VB optima fall closer to the current parameters. Thus,
lambda_{t-1} - lambda^VB_t tends to zero and, as optimization progresses, rho_t should decay. This reduces the variance
given the same level of bias.

(1) To readers familiar with stochastic variational inference, we refer to the global variational parameters,
assuming that the local parameters are optimized at each iteration. Details can be found in [1].
given the same level of bias.
Indeed, most stochastic optimization schedules decay the step size, including the Robbins-Monro
schedule [7] used in SVI. Different schedules yield different bias/variance trade-offs, but the tradeoff is heuristic and these schedules often require hand tuning. Instead we use a model to infer the
location of ?VB
t from the observations, and use Bayes rule to determine the optimal step size.
Probabilistic Filtering for SVI  We described our view of SVI as approximate parallel coordinate
ascent. With this perspective, we can define a model to infer lambda^VB_t. We have three sets of variables:
lambda_t are the current parameters of the approximate posterior q(theta; lambda_t); lambda^VB_t is a hidden variable corresponding to the VB coordinate update at the current time step; and lambda_hat_t is an unbiased, but noisy
observation of lambda^VB_t.

We specify a model that observes the sequence of noisy coordinate optima lambda_hat_{1:t}, and we use it
to compute a distribution over the full VB update p(lambda^VB_t | lambda_hat_{1:t}). When making a parallel coordinate
update at time t we move to the best estimate of the VB optimum under the model, lambda_t = E[lambda^VB_t | lambda_hat_{1:t}].
Using this approach we i) avoid the need to tune the step size because Bayes rule determines how
the posterior mean moves at each iteration; ii) can use a Kalman filter to recover particular static
and adaptive step size algorithms; and iii) can add extra modelling assumptions to vary the step size
schedule in useful ways.
In variational inference, our "target" is lambda^VB_t. It moves because the parameters of the approximate posterior lambda_t change as optimization progresses. Therefore, we use a dynamic tracking model, the
Kalman filter [10]. We compute the posterior over the next VB optimum given previous observations,
p(lambda^VB_t | lambda_hat_{1:t}).(2) In tracking, this is called filtering, so we call our method variational filtering (VF). At
each time t, VF has a current set of model parameters lambda_{t-1} and takes these steps.

1. Sample a datapoint x_t.
2. Compute the noisy estimate of the coordinate update lambda_hat_t using Eq 3.
3. Run Kalman filtering to compute the posterior over the VB optimum, p(lambda^VB_t | lambda_hat_{1:t}).
4. Update the parameters to the posterior mean lambda_t = E[lambda^VB_t | lambda_hat_{1:t}] and repeat.
Variational filtering uses the entire history of observations, encoded by the posterior, to infer the
location of the VB update. Standard optimization schedules use only the current parameters lambda_t to
regularize the noisy coordinate update, and these methods require tuning to balance bias and variance
in the update. In our setting, Bayes rule automatically makes this trade-off.
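When the filter is a scalar Gaussian Kalman filter with known Q and R, the four steps above reduce to a short loop. The following sketch (illustrative Python; the noisy-update function is a stand-in for Eq 3 with a made-up target and noise level) is a minimal instance of variational filtering:

import numpy as np

rng = np.random.default_rng(0)
Q, R = 0.01, 4.0          # transition and observation variances (assumed known)
lam, sigma = 0.0, 1.0     # posterior mean and variance over lambda^VB

def noisy_coordinate_optimum():
    # Stand-in for Eq 3: an unbiased, noisy estimate of the VB update.
    return 3.0 + np.sqrt(R) * rng.normal()

for t in range(200):
    lam_hat = noisy_coordinate_optimum()        # steps 1-2
    P = (sigma + Q) / (sigma + Q + R)           # Kalman gain = step size
    lam = (1 - P) * lam + P * lam_hat           # posterior mean, step 3
    sigma = (1 - P) * (sigma + Q)               # posterior variance
                                                # step 4: lam is the new parameter
print(lam)   # close to the target, with step sizes set by Bayes rule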
To illustrate this perspective we consider a small problem. We fit a variational distribution for latent
Dirichlet allocation on a small corpus of 2.5k documents from the ArXiv. For this problem we can
compute the full parallel coordinate update and thus compute the tracking error ||lambda^VB_t - lambda_t||^2_2 and
the observation noise ||lambda^VB_t - lambda_hat_t||^2_2 for various algorithms. We emphasize that lambda_hat_t is unbiased, and
so the observation noise is completely due to variance. A reduction in tracking error indicates an
advantage to incurring bias for a reduction in variance.
We compared variational filtering (Alg. 1) to the original Robbins-Monro schedule used in SVI [1],
and a large constant step size of 0.5. The same sequence of random documents was handed to each
algorithm. Figs. 1 (a-c) show the tracking error of each algorithm. The large constant step size yields
large error due to high variance, see Eq 5. The SVI updates are too small and the bias dominates.
Here, the bias is even larger than the variance in the noisy observations during early stages, but it
decays as the term (lambda_{t-1} - lambda^VB_t) in Eq 5 slowly decreases. The variational filter automatically balances
bias and variance, yielding the smallest tracking error. As a result of following the VB optima more
closely, the variational filter achieves larger values of the ELBO, shown in Fig. 1 (d).
3 Kalman Variational Filter
We now detail our Kalman filter for SVI. Then we discuss different settings of the parameters and
estimating these online. Finally, we extend the filter to handle heavy-tailed noise.
(2) We do not perform "smoothing" in our dynamical system because we are not interested in old VB coordinate optima after the parameters have been optimized further.
Figure 1: (a-c) Curves show the error in tracking the VB update for (a) variational filtering, (b) SVI with the Robbins-Monro schedule, and (c) a constant rate; markers depict the error in the noisy observations lambda_hat_t to the VB update. (d) Evolution of the ELBO computed on the entire dataset.
The Gaussian Kalman filter (KF) is attractive because inference is tractable and, in SVI, computational time is the limiting factor, not the rate of data acquisition. The model is specified as

p(lambda_hat_t | lambda^VB_t) = N(lambda^VB_t, R),    p(lambda^VB_{t+1} | lambda^VB_t) = N(lambda^VB_t, Q),   (6)
where R models the variance in the noisy coordinate updates and Q models how far the VB optima
move at each iteration. The observation noise has zero mean because the noisy updates are unbiased.
We assume no systematic parameter drift, so E[lambda^VB_{t+1}] = lambda^VB_t. Filtering in this linear-Gaussian model
is tractable; given the current posterior p(lambda^VB_{t-1} | lambda_hat_{1:t-1}) = N(lambda_{t-1}, Sigma_{t-1}) and a noisy coordinate
update lambda_hat_t, the next posterior is computed directly using Gaussian manipulations [11],

p(lambda^VB_t | lambda_hat_{1:t}) = N( [1 - P_t] lambda_{t-1} + P_t lambda_hat_t ,  [1 - P_t][Sigma_{t-1} + Q] ),   (7)

P_t = [Sigma_{t-1} + Q][Sigma_{t-1} + Q + R]^{-1}.   (8)
The variable P_t is known as the Kalman gain. Notice the update to the posterior mean has the same
form as the SVI update in Eq 1. The gain P_t is directly equivalent to the SVI step size rho_t.(3) Different
modelling choices give different optimization schedules. We now present some key cases.
Static Parameters  If the parameters Q and R are fixed, the step size progression in Eq 7 can
be computed a priori as P_{t+1} = [Q/R + P_t][1 + Q/R + P_t]^{-1}. This yields a fixed sequence of
decreasing step sizes. A popular schedule is the Robbins-Monro routine, rho propto (t_0 + t)^{-kappa}, also used in
SVI [1]. If we set Q = 0 the variational filter returns a Robbins-Monro schedule with kappa = 1. This
corresponds to online estimation of the mean of a Gaussian. This is because Q = 0 assumes that the
optimization has converged and the filter simply averages the noisy updates.

In practice, decay rates slower than kappa = 1 perform better [2, 8]. This is because updates which
were computed using old parameter values are forgotten faster. Setting Q > 0 yields the same
schedule but with reduced memory. In this case, the step size tends to a constant,
lim_{t->inf} P_t = [sqrt(1 + 4R/Q) + 1][sqrt(1 + 4R/Q) + 1 + 2R/Q]^{-1}. Larger noise-to-signal ratios R/Q result in smaller limiting
step sizes. This demonstrates the automatic bias/variance trade-off. If R/Q is large, the variance in
the noisy updates Var[lambda_hat_t] is assumed large. Therefore, the filter uses a smaller step size, yielding
more bias (Eq 5), but with lower overall error. Conversely, if there is no noise, R/Q = 0, P_inf = 1
and we recover batch VB.
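The limiting step size can be checked numerically by iterating the gain recursion (a quick illustrative Python check with arbitrary Q and R):

import math

Q, R = 0.1, 1.0
P = 1.0
for _ in range(1000):
    P = (Q / R + P) / (1 + Q / R + P)          # gain recursion for fixed Q, R

root = math.sqrt(1 + 4 * R / Q)
print(P, (root + 1) / (root + 1 + 2 * R / Q))  # both print 0.2701...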
Parameter Estimation Normally the parameters will not be known a priori. Further, if Q is
fixed then the step size does not tend to zero and so Robbins-Monro criteria do not hold [7]. We can
address both issues by estimating Q and R online.
The parameter R models the variance in the noisy optima, and Q measures how near the process is
to convergence. These parameters are unknown and will change as the optimization progresses. Q
will decrease as convergence is approached; R may decrease or increase. In our demonstration in
Fig. 1, it increases during early iterations and then plateaus. Therefore we estimate these parameters
online, similar to [8, 12]. The desired parameter values are

R = E[||lambda_hat_t - lambda^VB_t||^2_2] = E[||lambda_hat_t - lambda^VB_{t-1}||^2_2] - ||lambda^VB_t - lambda^VB_{t-1}||^2_2,   (9)

Q = ||lambda^VB_t - lambda^VB_{t-1}||^2_2.   (10)
(3) In general, P_t is a full-rank matrix update. For simplicity, and to compare to scalar learning rates, we
present the 1D case. The multi-dimensional generalization is straightforward.
Figure 2: Step sizes learned by the Gaussian Kalman filter, the Student's t filter (Alg. 1) and
the adaptive learning rate in [8], on non-stationary ArXiv data. The adaptive algorithms react to the
dataset shift by increasing the step size. The variational filters react even faster than adaptive-SVI
because not only do Q and R adjust, but the posterior variance increases at the shift, which further
augments the next step size.
We estimate these using exponentially weighted moving averages. To estimate the two terms in Eq 9,
we estimate the expected difference between the current state and the observation, g_t = E[lambda_hat_t - lambda^VB_{t-1}],
and the norm of this difference, h_t = E[||lambda_hat_t - lambda^VB_{t-1}||^2_2], using

g_t = (1 - tau^{-1}_{t-1}) g_{t-1} + tau^{-1}_{t-1} (lambda_hat_t - lambda_{t-1}),    h_t = (1 - tau^{-1}_{t-1}) h_{t-1} + tau^{-1}_{t-1} ||lambda_hat_t - lambda_{t-1}||^2_2,   (11)

where tau is the window length and lambda_{t-1} is the current posterior mean. The parameters are estimated
as R = h_t - ||g_t||^2_2 and Q = ||g_t||^2_2. After filtering, the window length is adjusted to tau_{t+1} =
(1 - P_t) tau_t + 1. Larger steps result in shorter memory of old parameter values. Joint parameter
and state estimation can be poorly determined. Initializing the parameters to appropriate values
with Monte Carlo sampling, as in [8], mitigates this issue. In our experiments we avoid this under-specification by tying the filtering parameters across the filters for each variational parameter.
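In code, the online estimators and window update amount to a few lines; a scalar illustrative Python sketch (our own variable names, using the tau^{-1} weighting as reconstructed in Eq 11):

def update_statistics(lam_hat, lam_prev, P, g, h, tau):
    # Eq 11: exponentially weighted moving averages with window length tau.
    w = 1.0 / tau
    g = (1 - w) * g + w * (lam_hat - lam_prev)
    h = (1 - w) * h + w * (lam_hat - lam_prev) ** 2
    Q = g ** 2                    # Eq 10 estimate of the optimum drift
    R = h - g ** 2                # Eq 9 estimate of the observation variance
    tau = (1 - P) * tau + 1       # larger steps shorten the memory
    return g, h, tau, Q, R

g, h, tau, Q, R = update_statistics(lam_hat=2.5, lam_prev=2.0, P=0.3,
                                    g=0.0, h=1.0, tau=10.0)
print(Q, R, tau)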
The variational filter with parameter estimation recovers an automatic step size similar to the
adaptive-SVI algorithm in [8]. Their step size is equivalent to rho_t = Q/[Q + R]. Variational filtering uses P_t = [Sigma_{t-1} + Q]/[Sigma_{t-1} + Q + R], Eqs 7-8. If the posterior variance Sigma_{t-1} is zero the
updates are identical. If Sigma_{t-1} is large, as in early time steps, the filter produces a larger step size.
Fig. 2 demonstrates how these methods react to non-stationary data. LDA was run on ArXiv
abstracts whose category changed every 5k documents. Variational filtering and adaptive-SVI react
to the shift by increasing the step size; the ELBO is similar for both methods.
Student's t Filter  In SVI, the noisy estimates lambda_hat_t are often heavy-tailed. For example, in matrix factorization heavy-tailed parameter distributions [13] produce heavy-tailed noisy updates.
Empirically, we observe similar heavy tails in LDA. Heavy tails may also arise from computing
Euclidean distances between parameter vectors and not using the more natural Fisher information
metric [9]. We add robustness to these sources of noise with a heavy-tailed Kalman filter.

We use a t-distributed noise model, p(lambda_hat_t | lambda^VB_t) = T(lambda^VB_t, R, omega), where T(m, V, d) denotes a t-distribution with mean m, covariance V and d degrees of freedom. For computational convenience
we also use a t-distributed transition model, p(lambda^VB_{t+1} | lambda^VB_t) = T(lambda^VB_t, Q, xi). If the current posterior
is t-distributed, p(lambda^VB_t | lambda_hat_{1:t}) = T(lambda_t, Sigma_t, nu_t), and the degrees of freedom are identical, nu_t = omega = xi,
then filtering has closed-form,

p(lambda^VB_t | lambda_hat_{1:t}) = T( (1 - P_t) lambda_{t-1} + P_t lambda_hat_t ,  (nu_{t-1} + delta^2)/(nu_{t-1} + ||lambda||_0) * (1 - P_t)[Sigma_{t-1} + Q] ,  nu_{t-1} + ||lambda||_0 ),   (12)

where  P_t = (Sigma_{t-1} + Q)/(Sigma_{t-1} + Q + R),  and  delta^2 = ||lambda_hat_t - lambda_{t-1}||^2_2 / (Sigma_{t-1} + Q + R).   (13)
The update to the mean is the same as in the Gaussian KF. The crucial difference is in the update to
the variance in Eq 12. If an outlier lambda_hat_t arrives, then delta^2, and hence Sigma_t, are augmented. The increased
posterior uncertainty at time t + 1 yields an increased gain P_{t+1}. This allows the filter to react
quickly to a large perturbation. The t-filter differs fundamentally from the Gaussian KF in that the step
size is now a direct function of the observations. In the Gaussian KF the dependency is indirect,
through the estimation of R and Q.
Eq 12 has closed-form because the d.o.f. are equal. Unfortunately, this will not generally be the
case because the posterior degrees of freedom grow, so we require an approximation. Following
[14], we approximate the "incompatible" t-distributions by adjusting their degrees of freedom to be
equal. We choose all of these to equal nu_tilde_t = min(nu_t, omega, xi). We match the degrees of freedom in
this way because it prevents the posterior degrees of freedom from growing over time. If nu_t in Eq 12
were allowed to grow large, the t-distributed filter would revert back to a Gaussian KF. This is undesirable
because the heavy-tailed noise does not necessarily disappear at convergence.

To account for adjusting the degrees of freedom, we moment match the old and new t-distributions.
This has closed-form; to match the second moments of T(m, Sigma_tilde, nu_tilde) to T(m, Sigma, nu), the variance is set
to Sigma = [nu_tilde (nu - 2)] / [(nu_tilde - 2) nu] Sigma_tilde. This results in tractable filtering and has the same computational cost as Gaussian
filtering. The routine is summarized in Algorithm 1.
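A scalar version of the resulting update (Eqs 12-13 plus the moment matching, with ||lambda||_0 = 1) is sketched below in Python; this is our own illustration with made-up inputs, and it assumes all degrees of freedom stay above 2:

def t_filter_step(lam_hat, lam, Sigma, nu, Q, R, omega, xi):
    nu_t = min(nu, omega, xi)                  # match degrees of freedom
    def mm(V, d_old, d_new):                   # preserve the second moment V*d/(d-2)
        return V * d_old * (d_new - 2) / ((d_old - 2) * d_new)
    Sigma, Q, R = mm(Sigma, nu, nu_t), mm(Q, xi, nu_t), mm(R, omega, nu_t)

    P = (Sigma + Q) / (Sigma + Q + R)          # gain / step size (Eq 13)
    delta2 = (lam_hat - lam) ** 2 / (Sigma + Q + R)
    lam = (1 - P) * lam + P * lam_hat          # posterior mean (Eq 12)
    Sigma = (nu_t + delta2) / (nu_t + 1) * (1 - P) * (Sigma + Q)
    return lam, Sigma, nu_t + 1, P             # outliers inflate delta2, hence Sigma

lam, Sigma, nu, P = t_filter_step(lam_hat=5.0, lam=0.0, Sigma=1.0, nu=4.0,
                                  Q=0.1, R=1.0, omega=4.0, xi=4.0)
print(lam, Sigma, nu, P)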
Algorithm 1 Variational filtering with Student's t-distributed noise
 1: procedure FILTER(data $x_{1:N}$)
 2:   Initialize filtering distribution $\mu_0, \Sigma_0, \eta_0$, see § 5
 3:   Initialize statistics $g_0, h_0, \Delta_0$ with Monte-Carlo sampling
 4:   Set initial variational parameters $\lambda_0 \leftarrow \mu_0$
 5:   for $t = 1, \ldots, T$ do
 6:     Sample a datapoint $x_t$   ▷ Or a mini-batch of data.
 7:     $\hat\lambda_t \leftarrow f(\lambda_t, x_t)$, $f$ given by Eq 4   ▷ Noisy estimate of the coordinate optimum.
 8:     Compute $g_t$ and $h_t$ using Eq 11.   ▷ Update parameters of the filter.
 9:     $R \leftarrow h_t - g_t^2$,  $Q \leftarrow h_t$
10:     $\hat\eta_{t-1} \leftarrow \min(\eta_{t-1}, \nu, \omega)$   ▷ Match degrees of freedom.
11:     $\hat\Sigma_{t-1} \leftarrow \eta_{t-1}(\hat\eta_{t-1} - 2)\,[(\eta_{t-1} - 2)\,\hat\eta_{t-1}]^{-1}\,\Sigma_{t-1}$, similarly for $\hat R$, $\hat Q$   ▷ Moment match.
12:     $P_t \leftarrow [\hat\Sigma_{t-1} + \hat Q]\,[\hat\Sigma_{t-1} + \hat Q + \hat R]^{-1}$   ▷ Compute gain, or step size.
13:     $\Delta^2 \leftarrow \|\hat\lambda_t - \mu_{t-1}\|_2^2\,[\hat\Sigma_{t-1} + \hat Q + \hat R]^{-1}$
14:     $\mu_t \leftarrow [I - P_t]\,\mu_{t-1} + P_t\,\hat\lambda_t$   ▷ Update filter posterior.
15:     $\Sigma_t \leftarrow \frac{\hat\eta_{t-1} + \Delta^2}{\hat\eta_{t-1} + \|\lambda\|_0}\,[I - P_t]\,[\hat\Sigma_{t-1} + \hat Q]$,  $\eta_t \leftarrow \hat\eta_{t-1} + 1$
16:     $\lambda_t \leftarrow \mu_t$   ▷ Update the variational parameters of q.
17:   end for
18:   return $\lambda_T$
19: end procedure
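To make the per-step computations concrete, the following is a minimal Python sketch of a single filter step for the shared-scalar-step-size setting used in the experiments. This is our illustration, not the authors' code; the noise statistics g and h (Eq 11) and the noisy coordinate optimum (Eq 4) are assumed to be supplied by the caller.

import numpy as np

def t_filter_step(mu, Sigma, eta, lam_hat, g, h, nu=3.0, omega=3.0, dim=1.0):
    """One step of the Student's t variational filter (scalar step-size case).

    (mu, Sigma, eta): current filter posterior T(mu, Sigma, eta)
    lam_hat: noisy estimate of the VB coordinate optimum (Eq 4)
    g, h: running noise statistics (Eq 11), giving R = h - g^2 and Q = h
    nu, omega: transition / observation degrees of freedom
    dim: number of parameters, ||lambda||_0
    """
    R, Q = h - g ** 2, h
    eta_hat = min(eta, nu, omega)                  # match degrees of freedom

    def match(V, d):
        # Moment match T(., V, d) to d.o.f. eta_hat, preserving the variance.
        return V * d * (eta_hat - 2.0) / ((d - 2.0) * eta_hat)

    Sigma_h = match(Sigma, eta)
    R_h, Q_h = match(R, omega), match(Q, nu)
    P = (Sigma_h + Q_h) / (Sigma_h + Q_h + R_h)    # gain = step size
    delta2 = np.sum((lam_hat - mu) ** 2) / (Sigma_h + Q_h + R_h)
    mu_new = (1.0 - P) * mu + P * lam_hat          # mean update, Eq 13
    Sigma_new = (eta_hat + delta2) / (eta_hat + dim) * (1.0 - P) * (Sigma_h + Q_h)
    return mu_new, Sigma_new, eta_hat + 1.0        # degrees of freedom grow by one

Note how an outlying lam_hat inflates delta2 and hence Sigma_new, which in turn raises the gain at the next step, exactly the behavior described after Eq 12.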
4 Related Work
Stochastic and Streamed VB SVI performs fast inference on a fixed dataset of known size N .
Online VB algorithms process an infinite stream of data [15, 16], but these methods cannot use a
re-sampled datapoint. Variational filtering falls between both camps. The noisy observations require
an estimate of N . However, Kalman filtering does not try to optimize a static dataset like a fixed
Robbins-Monro schedule. As observed in Fig. 3, the algorithm can adapt to a regime change, and forgets the old data. The filter simply tries to move to the VB coordinate update at each step, and is not directly concerned about asymptotic convergence on a static dataset.
Kalman filters for parameter learning Kalman filters have been used to learn neural network
parameters. Extended Kalman filters have been used to train supervised networks [17, 18, 19].
The network weights evolve because of data non-stationarity. This problem differs fundamentally from SVI. In the neural network setting, the observations are the fixed data labels, but in SVI the observations are noisy realizations of a moving VB parallel coordinate optimum. If the VF draws the same datapoint, the observations $\hat\lambda$ will still change because $\lambda_t$ will have changed. In the work with neural nets, the same datapoint always yields the same observation for the filter.
Adaptive learning rates Automatic step size schedules have been proposed for online estimation
of the mean of a Gaussian [20], or drifting parameters [21]. The latter work uses a Gaussian KF for
parameter estimation in approximate dynamic programming. Automatic step sizes are derived for
stochastic gradient descent in [12] and SVI in [8]. These methods set the step size to minimize the
expected update error. Our work is the first Bayesian approach to learn the SVI schedule.
Meta-modelling  Variational filtering is a 'meta-model'; these are models that assist the training of a more complex method. They are becoming increasingly popular; examples include Kalman filters
for training neural networks [17], Gaussian process optimization for hyperparameter search [22] and Gaussian process regression to construct Bayesian quasi-Newton methods [23].

[Figure 3: bar charts of final performance. Panels (a) LDA ArXiv, (b) LDA NYT, (c) LDA Wikipedia report the test ELBO; panels (d) BMF WebView, (e) BMF Kosarak, (f) BMF Netflix report recall@10. The methods compared are TVF (this paper), GVF (this paper), SVI [Hof13], Adapt-SVI [Ran13], and Oracle Const.]

Figure 3: Final performance achieved by each algorithm on the two problems. Stars indicate the best performing non-oracle algorithm and those statistically indistinguishable at p = 0.05. (a-c) LDA: Value of the ELBO after observing 0.5M documents. (d-f) BMF: recall@10 after observing 2 × 10^8 cells.
5 Empirical Case Studies
We tested variational filtering on two diverse problems: topic modelling with Latent Dirichlet Allocation (LDA) [24], a popular testbed for scalable inference routines, and binary matrix factorization
(BMF). Variational filtering outperforms Robbins-Monro SVI and a state-of-the-art adaptive method
[8] in both domains. The Student's t filter performs substantially better than the Gaussian KF and is
competitive with an oracle that picks the best constant step size with hindsight.
Models We used 100 topics in LDA and set the Dirichlet hyperparameters to 0.5. This value is
slightly larger than usual because it helps the stochastic routines escape local minima early on. For
BMF we used a logistic matrix factorization model with a Gaussian variational posterior over the
latent matrices [3]. This task differs from LDA in two ways. The variational parameters are Gaussian
and we sample single cells from the matrix to form stochastic updates. We used minibatches of 100
documents in LDA, and 5 times the number of rows in BMF.
Datasets We trained LDA on three large document corpora: 630k abstracts from the ArXiv,
1.73M New York Times articles, and Wikipedia, which has ≈ 4M articles. For BMF we used three
recommendation matrices: clickstream data from the Kosarak news portal; click data from an ecommerce website, BMS-WebView-2 [25]; and the Netflix data, treating 4-5 star ratings as ones.
Following [3] we kept the 1000 items with most ones and sampled up to 40k users.
Algorithms  We ran our Student's t variational filter in Algorithm 1 (TVF) and the Gaussian version in § 3 (GVF). The variational parameters were initialized randomly in LDA and with an SVD-based routine [26] in BMF. The prior variance was set to $\Sigma_0 = 10^3$ and the t-distribution's degrees of freedom to $\eta_0 = 3$ to get the heaviest tails with a finite variance for moment matching.
In general, VF can learn full-rank matrix stepsizes. LDA and BMF, however, have many parameters,
and so we used the simplest setting of VF in which a single step size was learned for all of them;
that is, Q and R are constrained to be proportional to the identity matrix. This choice reduces the
cost of VF from $O(N^3)$ to $O(N)$. Empirically, this computational overhead was negligible. Also
it allows us to aggregate statistics across the variational parameters, yielding more robust estimates.

[Figure 4: example learning curves comparing TVF (this paper), GVF (this paper), SVI [Hof13], Adapt-SVI [Ran13], and Oracle Const. Panel (a) plots the test ELBO against the number of documents seen on LDA ArXiv; panel (b) plots recall@10 against the number of matrix entries seen on BMF WebView.]

Figure 4: Example learning curves of (a) the ELBO (plot smoothed with Lowess' method) and (b) recall@10, on the LDA and BMF problems, respectively.
Finally, we can directly compare our Bayesian adaptive rate to the single adaptive rate in [8].
We compared to the SVI schedule proposed in [1]. This is a Robbins-Monro schedule
$\rho_t = (t_0 + t)^{-\kappa}$; we used $\kappa = 0.7$, $t_0 = 1000$ for LDA as these performed well in [1, 2, 8], and $\kappa = 0.7$, $t_0 = 0$ for BMF, as in [3]. We also compared to the adaptive-SVI routine in [8]. Finally, we used an oracle method that picked the constant learning rate from a grid of rates $10^{-k}$, $k \in \{1, \ldots, 5\}$,
that gave the best final performance. In BMF, the Robbins-Monro SVI schedule learns a different
rate for each row and column. All other methods computed a single rate.
Evaluation In LDA, we evaluated the algorithms using the per-word ELBO, estimated on random
sets of held-out documents. Each algorithm was given 0.5M documents and the final ELBO was
averaged over the final 10% of the iterations. We computed statistical significance between the
algorithms with a t-test on these noisy estimates of the ELBO. Our BMF datasets were from item
recommendation problems, for which recall is a popular metric [27]. We computed recall at N by
removing a single one from each row during training. We then ranked the zeros by their posterior
probability of being a one and computed the fraction of the rows in which the held-out one was in
the top $N$. We used a budget of $2 \times 10^8$ observations and computed statistical significance over 8
repeats of the experiment, including the random train/test split.
Results The final performance levels on both tasks are plotted in Fig. 3. These plots show that over
the six datasets and two tasks the Student's t variational filter is the strongest non-oracle method.
SVI [1] and Adapt-SVI [8] come close on LDA, which they were originally used for, but on the
WebView and Kosarak binary matrices they yield a substantially lower recall. In terms of the ELBO
in BMF (not plotted), TVF was the best non-oracle method on WebView and Kosarak and SVI was
best on Netflix, with TVF second best. The Gaussian Kalman filter worked less well. It produced
high learning rates due to the inaccurate Gaussian noise assumption.
The t-distributed filter appears to be robust to highly non-Gaussian noise. It was even competitive
with the oracle method (2 wins, 2 draws, 1 loss). Note that the oracle picked the best final performance at time T , but at t < T the variational filter converged faster, particularly in LDA. Fig. 4
(a) shows example learning curves on the ArXiv data. Although the oracle just outperforms TVF at
0.5M documents, TVF converged much faster. Fig. 4 (b) shows example learning curves in BMF
on the WebView data. This figure shows that most of the BMF routines converge within the budget.
Again, TVF not only reached the best solution, but also converged fastest.
Conclusions We have presented a new perspective on SVI as approximate parallel coordinate descent. With our model-based approach to this problem, we shift the requirement from hand tuning
optimization schedules to constructing an appropriate tracking model. This approach allows us to
derive a new algorithm for robust SVI that uses a model with Student's t-distributed noise. This Student's t variational filtering algorithm performed strongly on two domains with completely different
variational distributions. Variational filtering is a promising new direction for SVI.
Acknowledgements  NMTH is grateful to the Google European Doctoral Fellowship scheme for funding this research. DMB is supported by NSF CAREER NSF IIS-0745520, NSF BIGDATA NSF IIS-1247664, NSF NEURO NSF IIS-1009542, ONR N00014-11-1-0651 and DARPA FA8750-14-2-0009. We thank James McInerney, Alp Kucukelbir, Stephan Mandt, Rajesh Ranganath, Maxim Rabinovich, David Duvenaud, Thang Bui and the anonymous reviewers for insightful feedback.
References
[1] M.D. Hoffman, D.M. Blei, C. Wang, and J. Paisley. Stochastic variational inference. JMLR, 14:1303–1347, 2013.
[2] M.D. Hoffman, D.M. Blei, and F. Bach. Online learning for latent Dirichlet allocation. NIPS, 23:856–864, 2010.
[3] J.M. Hernandez-Lobato, N.M.T. Houlsby, and Z. Ghahramani. Stochastic inference for scalable probabilistic modeling of binary matrices. ICML, 2014.
[4] P.K. Gopalan and D.M. Blei. Efficient discovery of overlapping communities in massive networks. PNAS, 110(36):14534–14539, 2013.
[5] J. Yin, Q. Ho, and E. Xing. A scalable approach to probabilistic latent space inference of large-scale networks. In NIPS, pages 422–430, 2013.
[6] J. Hensman, N. Fusi, and N.D. Lawrence. Gaussian processes for big data. CoRR, abs/1309.6835, 2013.
[7] H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, 22(3):400–407, 1951.
[8] R. Ranganath, C. Wang, D.M. Blei, and E.P. Xing. An adaptive learning rate for stochastic variational inference. In ICML, pages 298–306, 2013.
[9] Shun-Ichi Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998.
[10] R.E. Kalman. A new approach to linear filtering and prediction problems. Journal of Basic Engineering, 82(1):35–45, 1960.
[11] S. Roweis and Z. Ghahramani. A unifying review of linear Gaussian models. Neural Computation, 11(2):305–345, 1999.
[12] T. Schaul, S. Zhang, and Y. LeCun. No More Pesky Learning Rates. In ICML, 2013.
[13] B. Lakshminarayanan, G. Bouchard, and C. Archambeau. Robust Bayesian matrix factorisation. In AISTATS, pages 425–433, 2011.
[14] M. Roth, E. Ozkan, and F. Gustafsson. A Student's t filter for heavy tailed process and measurement noise. In ICASSP, pages 5770–5774. IEEE, 2013.
[15] Tamara Broderick, Nicholas Boyd, Andre Wibisono, Ashia C Wilson, and Michael Jordan. Streaming variational Bayes. In NIPS, pages 1727–1735, 2013.
[16] Zoubin Ghahramani and H. Attias. Online variational Bayesian learning. In Slides from talk presented at NIPS workshop on Online Learning, 2000.
[17] J.F.G. de Freitas, M. Niranjan, and A.H. Gee. Hierarchical Bayesian models for regularization in sequential learning. Neural Computation, 12(4):933–953, 2000.
[18] S.S. Haykin. Kalman Filtering and Neural Networks. Wiley Online Library, 2001.
[19] Enrico Capobianco. Robust control methods for on-line statistical learning. EURASIP Journal on Advances in Signal Processing, (2):121–127, 2001.
[20] Y.T. Chien and K. Fu. On Bayesian learning and stochastic approximation. IEEE Transactions on Systems Science and Cybernetics, 3(1):28–38, 1967.
[21] A.P. George and W.B. Powell. Adaptive stepsizes for recursive estimation with applications in approximate dynamic programming. Machine Learning, 65(1):167–198, 2006.
[22] Jasper Snoek, Hugo Larochelle, and Ryan P Adams. Practical Bayesian optimization of machine learning algorithms. In NIPS, pages 2960–2968, 2012.
[23] P. Hennig and M. Kiefel. Quasi-Newton methods: A new direction. JMLR, 14(1):843–865, 2013.
[24] D.M. Blei, A.Y. Ng, and M.I. Jordan. Latent Dirichlet allocation. JMLR, 3:993–1022, 2003.
[25] R. Kohavi, C.E. Brodley, B. Frasca, L. Mason, and Z. Zheng. KDD-Cup 2000 organizers' report: peeling the onion. ACM SIGKDD Explorations Newsletter, 2(2):86–93, 2000.
[26] S. Nakajima, M. Sugiyama, and R. Tomioka. Global analytic solution for variational Bayesian matrix factorization. NIPS, 23:1759–1767, 2010.
[27] A. Gunawardana and G. Shani. A survey of accuracy evaluation metrics of recommendation tasks. JMLR, 10:2935–2962, 2009.
5,033 | 5,557 | Smoothed Gradients for
Stochastic Variational Inference
David Blei
Department of Computer Science
Department of Statistics
Columbia University
[email protected]
Stephan Mandt
Department of Physics
Princeton University
[email protected]
Abstract
Stochastic variational inference (SVI) lets us scale up Bayesian computation to
massive data. It uses stochastic optimization to fit a variational distribution, following easy-to-compute noisy natural gradients. As with most traditional stochastic optimization methods, SVI takes precautions to use unbiased stochastic gradients whose expectations are equal to the true gradients. In this paper, we explore
the idea of following biased stochastic gradients in SVI. Our method replaces
the natural gradient with a similarly constructed vector that uses a fixed-window
moving average of some of its previous terms. We will demonstrate the many advantages of this technique. First, its computational cost is the same as for SVI and
storage requirements only multiply by a constant factor. Second, it enjoys significant variance reduction over the unbiased estimates, smaller bias than averaged
gradients, and leads to smaller mean-squared error against the full gradient. We
test our method on latent Dirichlet allocation with three large corpora.
1 Introduction
Stochastic variational inference (SVI) lets us scale up Bayesian computation to massive data [1]. SVI
has been applied to many types of models, including topic models [1], probabilistic factorization [2],
statistical network analysis [3, 4], and Gaussian processes [5].
SVI uses stochastic optimization [6] to fit a variational distribution, following easy-to-compute noisy
natural gradients that come from repeatedly subsampling from the large data set. As with most
traditional stochastic optimization methods, SVI takes precautions to use unbiased, noisy gradients
whose expectations are equal to the true gradients. This is necessary for the conditions of [6] to
apply, and guarantees that SVI climbs to a local optimum of the variational objective. Innovations
on SVI, such as subsampling from data non-uniformly [2] or using control variates [7, 8], have
maintained the unbiasedness of the noisy gradient.
In this paper, we explore the idea of following a biased stochastic gradient in SVI. We are inspired
by the recent work in stochastic optimization that uses biased gradients. For example, stochastic
averaged gradients (SAG) iteratively updates only a subset of terms in the full gradient [9]; averaged
gradients (AG) follows the average of the sequence of stochastic gradients [10]. These methods lead
to faster convergence on many problems.
However, SAG and AG are not immediately applicable to SVI. First, SAG requires storing all of the
terms of the gradient. In most applications of SVI there is a term for each data point, and avoiding
such storage is one of the motivations for using the algorithm. Second, the SVI update has a form
where we update the variational parameter with a convex combination of the previous parameter
and a new noisy version of it. This property falls out of the special structure of the gradient of
the variational objective, and has the significant advantage of keeping the parameter in its feasible
space. (E.g., the parameter may be constrained to be positive or even on the simplex.) Averaged
gradients, as we show below, do not enjoy this property. Thus, we develop a new method to form
biased gradients in SVI.
To understand our method, we must briefly explain the special structure of the SVI stochastic natural
gradient. At any iteration of SVI, we have a current estimate of the variational parameter $\lambda_i$, i.e., the parameter governing an approximate posterior that we are trying to estimate. First, we sample a data point $w_i$. Then, we use the current estimate of variational parameters to compute expected sufficient statistics $\hat S_i$ about that data point. (The sufficient statistics $\hat S_i$ is a vector of the same dimension as $\lambda_i$.) Finally, we form the stochastic natural gradient of the variational objective $\mathcal{L}$ with this simple expression:

$$\nabla_\lambda \mathcal{L} = \eta + N \hat S_i - \lambda_i, \qquad (1)$$

where $\eta$ is a prior from the model and $N$ is an appropriate scaling. This is an unbiased noisy gradient [11, 1], and we follow it with a step size $\rho_i$ that decreases across iterations [6]. Because of its algebraic structure, each step amounts to taking a weighted average,

$$\lambda_{i+1} = (1 - \rho_i)\lambda_i + \rho_i(\eta + N \hat S_i). \qquad (2)$$

Note that this keeps $\lambda_i$ in its feasible set.
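As a concrete illustration of this feasibility property (a minimal sketch of ours, not the authors' code, with made-up numbers), the update in Eq. (2) is a convex combination of two nonnegative quantities, so positivity is preserved automatically:

import numpy as np

def svi_step(lam, S_hat, eta, N, rho):
    """SVI update of Eq. (2): a convex combination of the current parameter
    and a noisy re-estimate. If lam, eta, and S_hat are nonnegative and
    0 < rho < 1, the result stays nonnegative (feasible for a Dirichlet)."""
    return (1.0 - rho) * lam + rho * (eta + N * S_hat)

lam = np.array([0.7, 2.3])        # current variational parameter
S_hat = np.array([0.01, 0.05])    # noisy sufficient statistics (made-up)
print(svi_step(lam, S_hat, eta=0.5, N=100, rho=0.1))  # remains positive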
With these details in mind, we can now describe our method. Our method replaces the natural
gradient in Eq. (1) with a similarly constructed vector that uses a fixed-window moving average
of the previous sufficient statistics. That is, we replace the sufficient statistics with an appropriately scaled sum, $\sum_{j=0}^{L-1} \hat S_{i-j}$. Note this is different from averaging the gradients, which also involves the current iteration's estimate.
We will demonstrate the many advantages of this technique. First, its computational cost is the
same as for SVI and storage requirements only multiply by a constant factor (the window length
L). Second, it enjoys significant variance reduction over the unbiased estimates, smaller bias than
averaged gradients, and leads to smaller mean-squared error against the full gradient. Finally, we
tested our method on latent Dirichlet allocation with three large corpora. We found it leads to faster
convergence and better local optima.
Related work We first discuss the related work from the SVI literature. Both Ref. [8] and Ref. [7]
introduce control variates to reduce the gradient?s variance. The method leads to unbiased gradient
estimates. On the other hand, every few hundred iterations, an entire pass through the data set is
necessary, which makes the performance and expenses of the method depend on the size of the
data set. Ref. [12] develops a method to pre-select documents according to their influence on the
global update. For large data sets, however, it also suffers from high storage requirements. In the
stochastic optimization literature, we have already discussed SAG [9] and AG [10]. Similarly, Ref.
[13] introduces an exponentially fading momentum term. It too suffers from the issues of SAG and
AG, mentioned above.
2 Smoothed stochastic gradients for SVI
Latent Dirichlet Allocation and Variational Inference We start by reviewing stochastic variational inference for LDA [1, 14], a topic model that will be our running example. We are given a
corpus of $D$ documents with words $w_{1:D,1:N}$. We want to infer $K$ hidden topics, defined as multinomial distributions over a vocabulary of size $V$. We define a multinomial parameter $\beta_{1:V,1:K}$, termed the topics. Each document $d$ is associated with a normalized vector of topic weights $\Theta_d$. Furthermore, each word $n$ in document $d$ has a topic assignment $z_{dn}$. This is a $K$-vector of binary entries, such that $z_{dn}^k = 1$ if word $n$ in document $d$ is assigned to topic $k$, and $z_{dn}^k = 0$ otherwise.

In the generative process, we first draw the topics from a Dirichlet, $\beta_k \sim \mathrm{Dirichlet}(\eta)$. For each document, we draw the topic weights, $\Theta_d \sim \mathrm{Dirichlet}(\alpha)$. Finally, for each word in the document, we draw an assignment $z_{dn} \sim \mathrm{Multinomial}(\Theta_d)$, and we draw the word from the assigned topic, $w_{dn} \sim \mathrm{Multinomial}(\beta_{z_{dn}})$. The model has the following joint probability distribution:
$$p(w, \beta, \Theta, z \mid \eta, \alpha) = \prod_{k=1}^{K} p(\beta_k \mid \eta) \prod_{d=1}^{D} p(\Theta_d \mid \alpha) \prod_{n=1}^{N} p(z_{dn} \mid \Theta_d)\, p(w_{dn} \mid \beta_{1:K}, z_{dn}) \qquad (3)$$
Following [1], the topics $\beta$ are global parameters, shared among all documents. The assignments $z$ and topic proportions $\Theta$ are local, as they characterize a single document.
In variational inference [15], we approximate the posterior distribution,
$$p(\beta, \Theta, z \mid w) = \frac{p(\beta, \Theta, z, w)}{\int \sum_z p(\beta, \Theta, z, w)\, d\beta\, d\Theta}, \qquad (4)$$

which is intractable to compute. The posterior is approximated by a factorized distribution,

$$q(\beta, \Theta, z) = q(\beta \mid \lambda) \left( \prod_{d=1}^{D} \prod_{n=1}^{N} q(z_{dn} \mid \varphi_{dn}) \right) \left( \prod_{d=1}^{D} q(\Theta_d \mid \gamma_d) \right) \qquad (5)$$
Here, $q(\beta \mid \lambda)$ and $q(\Theta_d \mid \gamma_d)$ are Dirichlet distributions, and $q(z_{dn} \mid \varphi_{dn})$ are multinomials. The parameters $\lambda$, $\gamma$ and $\varphi$ minimize the Kullback-Leibler (KL) divergence between the variational distribution and the posterior [16]. As shown in Refs. [1, 17], the objective to maximize is the evidence lower bound (ELBO),

$$\mathcal{L}(q) = \mathbb{E}_q[\log p(x, \beta, \Theta, z)] - \mathbb{E}_q[\log q(\beta, \Theta, z)]. \qquad (6)$$
This is a lower bound on the marginal probability of the observations. It is a sensible objective
function because, up to a constant, it is equal to the negative KL divergence between q and the
posterior. Thus optimizing the ELBO with respect to q is equivalent to minimizing its KL divergence
to the posterior.
In traditional variational methods, we iteratively update the local and global parameters. The local
parameters are updated as described in [1, 17]. They are a function of the global parameters, so at iteration $i$ the local parameter is $\varphi_{dn}(\lambda_i)$. We are interested in the global parameters. They are updated based on the (expected) sufficient statistics $S(\lambda_i)$,

$$S(\lambda_i) = \sum_{d \in \{1,\ldots,D\}} \sum_{n=1}^{N} \varphi_{dn}(\lambda_i) \otimes W_{dn}^{T}, \qquad \lambda_{i+1} = \eta + S(\lambda_i). \qquad (7)$$

For fixed $d$ and $n$, the multinomial parameter $\varphi_{dn}$ is $K \times 1$. The binary vector $W_{dn}$ is $V \times 1$; it satisfies $W_{dn}^v = 1$ if word $n$ in document $d$ is $v$, and else contains only zeros. Hence, $S$ is $K \times V$ and therefore has the same dimension as $\lambda$. Alternating updates lead to convergence.
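Concretely, the global statistic of Eq. (7) is a $K \times V$ matrix of expected topic-word counts; the following is a minimal Python sketch (our illustration, with documents assumed given as lists of word ids):

import numpy as np

def sufficient_stats(docs, phi, K, V):
    """Batch sufficient statistics of Eq. (7).
    docs : list of documents, each a list of word ids
    phi  : phi[d][n] is the length-K assignment posterior of word n in doc d
    Returns the K x V matrix S accumulating phi_dn over words equal to v."""
    S = np.zeros((K, V))
    for d, doc in enumerate(docs):
        for n, v in enumerate(doc):
            S[:, v] += phi[d][n]  # phi_dn (K-vector) times one-hot W_dn^T
    return S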
Stochastic variational inference for LDA The computation of the sufficient statistics is inefficient because it involves a pass through the entire data set. In Stochastic Variational Inference for
LDA [1, 14], it is approximated by stochastically sampling a 'minibatch' $B_i \subset \{1, \ldots, D\}$ of $|B_i|$ documents, estimating $S$ on the basis of the minibatch, and scaling the result appropriately,

$$\hat S(\lambda_i, B_i) = \frac{D}{|B_i|} \sum_{d \in B_i} \sum_{n=1}^{N} \varphi_{dn}(\lambda_i) \otimes W_{dn}^{T}.$$
Because it depends on the minibatch, $\hat S_i = \hat S(\lambda_i, B_i)$ is now a random variable. We will denote variables that explicitly depend on the random minibatch $B_i$ at the current time $i$ by circumflexes, such as $\hat g$ and $\hat S$.
In SVI, we update $\lambda$ by admixing the random estimate of the sufficient statistics to the current value of $\lambda$. This involves a learning rate $\rho_i < 1$,

$$\lambda_{i+1} = (1 - \rho_i)\lambda_i + \rho_i(\eta + \hat S(\lambda_i, B_i)). \qquad (8)$$

The case of $\rho = 1$ and $|B_i| = D$ corresponds to batch variational inference (when sampling without replacement). For arbitrary $\rho$, this update is just stochastic gradient ascent, as a stochastic estimate of the natural gradient of the ELBO [1] is

$$\hat g(\lambda_i, B_i) = (\eta - \lambda_i) + \hat S(\lambda_i, B_i). \qquad (9)$$
This interpretation opens the world of gradient smoothing techniques. Note that the above stochastic
gradient is unbiased: its expectation value is the full gradient. However, it has a variance. The goal
of this paper will be to reduce this variance at the expense of introducing a bias.
Algorithm 1: Smoothed stochastic gradients for Latent Dirichlet Allocation
Input: $D$ documents, minibatch size $B$, number of stored sufficient statistics $L$, learning rate $\rho_i$, hyperparameters $\alpha$, $\eta$.
Output: Hidden variational parameters $\lambda$, $\varphi$, $\gamma$.
 1: Initialize $\lambda$ randomly and $\hat g_i^L = 0$.
 2: Initialize empty queue $Q = \{\}$.
 3: for $i = 0$ to $\infty$ do
 4:   Sample minibatch $B_i \subset \{1, \ldots, D\}$ uniformly.
 5:   initialize $\gamma$
 6:   repeat
 7:     For $d \in B_i$ and $n \in \{1, \ldots, N\}$ set
 8:       $\varphi_{dn}^k \propto \exp(\mathbb{E}[\log \Theta_{dk}] + \mathbb{E}[\log \beta_{k,w_d}])$, $k \in \{1, \ldots, K\}$
 9:       $\gamma_d = \alpha + \sum_n \varphi_{dn}$
10:   until $\varphi_{dn}$ and $\gamma_d$ converge.
11:   For each topic $k$, calculate sufficient statistics for minibatch $B_i$:
        $\hat S_i = \frac{D}{|B_i|} \sum_{d \in B_i} \sum_{n=1}^{N} \varphi_{dn} W_{dn}^{T}$
12:   Add new sufficient statistic in front of queue $Q$: $Q \leftarrow \{\hat S_i\} + Q$
13:   Remove last element when length $L$ has been reached:
        if length$(Q) > L$ then $Q \leftarrow Q - \{\hat S_{i-L}\}$ end
14:   Update $\lambda$, using stored sufficient statistics:
        $\hat S_i^L \leftarrow \hat S_{i-1}^L + (\hat S_i - \hat S_{i-L})/L$
        $\hat g_i^L \leftarrow (\eta - \lambda_i) + \hat S_i^L$
        $\lambda_{i+1} = \lambda_i + \rho_i \hat g_i^L$.
15: end for
Smoothed stochastic gradients for SVI Noisy stochastic gradients can slow down the convergence of SVI or lead to convergence to bad local optima. Hence, we propose a smoothing scheme
to reduce the variance of the noisy natural gradient. To this end, we average the sufficient statistics
over the past L iterations. Here is a sketch:
1. Uniformly sample a minibatch $B_i \subset \{1, \ldots, D\}$ of documents. Compute the local variational parameters $\varphi$ from a given $\lambda_i$.
2. Compute the sufficient statistics $\hat S_i = \hat S(\varphi(\lambda_i), B_i)$.
3. Store $\hat S_i$, along with the $L$ most recent sufficient statistics. Compute $\hat S_i^L = \frac{1}{L}\sum_{j=0}^{L-1} \hat S_{i-j}$ as their mean.
4. Compute the smoothed stochastic gradient according to
$$\hat g_i^L = (\eta - \lambda_i) + \hat S_i^L \qquad (10)$$
5. Use the smoothed stochastic gradient to calculate $\lambda_{i+1}$. Repeat.
Details are in Algorithm 1. We now explore its properties. First, note that smoothing the sufficient
statistics comes at almost no extra computational costs. In fact, the mean of the stored sufficient
statistics does not explicitly have to be computed, but rather amounts to the update
$$\hat S_i^L \leftarrow \hat S_{i-1}^L + (\hat S_i - \hat S_{i-L})/L, \qquad (11)$$

after which $\hat S_{i-L}$ is deleted. Storing the sufficient statistics can be expensive for large values of $L$: In the context of LDA involving the typical parameters $K = 10^2$ and $V = 10^4$, using $L = 10^2$ amounts to storing $10^8$ 64-bit floats, which is in the Gigabyte range.
Note that when L = 1 we obtain stochastic variational inference (SVI) in its basic form. This
includes deterministic variational inference for L = 1, B = D in the case of sampling without
replacement within the minibatch.
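The following is a minimal Python sketch (our illustration, not the authors' code) of the constant-time moving-average update of Eq. (11), using a bounded queue for the stored sufficient statistics:

from collections import deque
import numpy as np

class SmoothedSufficientStats:
    """Fixed-window average of the last L sufficient statistics (Eq. 11)."""

    def __init__(self, L, shape):
        self.L = L
        self.queue = deque()            # stores up to L past S-hats
        self.mean = np.zeros(shape)     # running mean S_i^L

    def update(self, S_hat):
        self.queue.appendleft(S_hat)
        if len(self.queue) > self.L:
            oldest = self.queue.pop()
            self.mean += (S_hat - oldest) / self.L   # Eq. (11)
        else:
            # While the queue is still filling, average what we have so far.
            self.mean += (S_hat - self.mean) / len(self.queue)
        return self.mean

stats = SmoothedSufficientStats(L=10, shape=(100, 8000))  # K x V, made-up sizes

The smoothed gradient of Eq. (10) is then formed by adding (eta - lam) to the returned mean, so the per-iteration cost is the same as in plain SVI.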
Biased gradients Let us now investigate the algorithm theoretically. Note that the only noisy part
in the stochastic gradient in Eq. (9) is the sufficient statistics. Averaging over L stochastic sufficient
statistics thus promises to reduce the noise in the gradient. We are interested in the effect of the
additional parameter L.
When we average over the L most recent sufficient statistics, we introduce a bias. As the variational
parameters change during each iteration, the averaged sufficient statistics deviate in expectation
from its current value. This induces biased gradients. In a nutshell, large values of L will reduce the
variance but increase the bias.
To better understand this tradeoff, we need to introduce some notation. We defined the stochastic
gradient $\hat g_i = \hat g(\lambda_i, B_i)$ in Eq. (9) and refer to $g_i = \mathbb{E}_{B_i}[\hat g(\lambda_i, B_i)]$ as the full gradient (FG). We also defined the smoothed stochastic gradient $\hat g_i^L$ in Eq. (10). Now, we need to introduce an auxiliary variable, $g_i^L := (\eta - \lambda_i) + \frac{1}{L}\sum_{j=0}^{L-1} S_{i-j}$. This is the time-averaged full gradient. It involves the full sufficient statistics $S_i = S(\lambda_i)$ evaluated along the sequence $\lambda_1, \lambda_2, \ldots$ generated by our algorithm.
We can expand the smoothed stochastic gradient into three terms:
$$\hat g_i^L = \underbrace{g_i}_{\text{FG}} + \underbrace{(g_i^L - g_i)}_{\text{bias}} + \underbrace{(\hat g_i^L - g_i^L)}_{\text{noise}} \qquad (12)$$
This involves the full gradient (FG), a bias term and a stochastic noise term. We want to minimize
the statistical error between the full gradient and the smoothed gradient by an optimal choice of L.
We will show this the optimal choice is determined by a tradeoff between variance and bias.
For the following analysis, we need to compute expectation values with respect to realizations of
our algorithm, which is a stochastic process that generates a sequence of $\lambda_i$'s. Those expectation values are denoted by $\mathbb{E}[\cdot]$. Notably, not only the minibatches $B_i$ are random variables under this expectation, but also the entire sequences $\lambda_1, \lambda_2, \ldots$. Therefore, one needs to keep in mind that even the full gradients $g_i = g(\lambda_i)$ are random variables and can be studied under this expectation.
We find that the mean squared error of the smoothed stochastic gradient dominantly decomposes
into a mean squared bias and a noise term:
$$\mathbb{E}[(\hat g_i^L - g_i)^2] \approx \underbrace{\mathbb{E}[(\hat g_i^L - g_i^L)^2]}_{\text{variance}} + \underbrace{\mathbb{E}[(g_i^L - g_i)^2]}_{\text{mean squared bias}} \qquad (13)$$
To see this, consider the mean squared error of the smoothed stochastic gradient with respect to the
full gradient, $\mathbb{E}[(\hat g_i^L - g_i)^2]$, adding and subtracting $g_i^L$:

$$\mathbb{E}\big[(\hat g_i^L - g_i^L + g_i^L - g_i)^2\big] = \mathbb{E}\big[(\hat g_i^L - g_i^L)^2\big] + 2\,\mathbb{E}\big[(\hat g_i^L - g_i^L)(g_i^L - g_i)\big] + \mathbb{E}\big[(g_i^L - g_i)^2\big].$$
We encounter a cross-term, which we argue to be negligible. In defining $\Delta \hat S_i = (\hat S_i - S_i)$ we find that $(\hat g_i^L - g_i^L) = \frac{1}{L}\sum_{j=0}^{L-1} \Delta \hat S_{i-j}$. Therefore,

$$\mathbb{E}\big[(\hat g_i^L - g_i^L)(g_i^L - g_i)\big] = \frac{1}{L}\sum_{j=0}^{L-1} \mathbb{E}\big[\Delta \hat S_{i-j}\,(g_i^L - g_i)\big].$$

The fluctuations of the sufficient statistics $\Delta \hat S_i$ are random variables with mean zero, and the randomness of $(g_i^L - g_i)$ enters only via $\lambda_i$. One can assume a very small statistical correlation between those two terms, $\mathbb{E}[\Delta \hat S_{i-j}(g_i^L - g_i)] \approx \mathbb{E}[\Delta \hat S_{i-j}]\,\mathbb{E}[(g_i^L - g_i)] = 0$. Therefore, the cross-term can be expected to be negligible. We confirmed this fact empirically in our numerical experiments:
the top row of Fig. 1 shows that the sum of squared bias and variance is barely distinguishable from
the squared error.
By construction, all bias comes from the sufficient statistics:
$$\mathbb{E}[(g_i^L - g_i)^2] = \mathbb{E}\left[\Big(\tfrac{1}{L}\textstyle\sum_{j=0}^{L-1} (S_{i-j} - S_i)\Big)^2\right]. \qquad (14)$$
At this point, little can be said in general about the bias term, apart from the fact that it should shrink
with the learning rate. We will explore it empirically in the next section. We now consider the
variance term:
$$\mathbb{E}[(\hat g_i^L - g_i^L)^2] = \mathbb{E}\left[\Big(\tfrac{1}{L}\textstyle\sum_{j=0}^{L-1} \Delta \hat S_{i-j}\Big)^2\right] = \frac{1}{L^2}\sum_{j=0}^{L-1} \mathbb{E}\big[(\Delta \hat S_{i-j})^2\big] = \frac{1}{L^2}\sum_{j=0}^{L-1} \mathbb{E}\big[(\hat g_{i-j} - g_{i-j})^2\big].$$
Figure 1: Empirical test of the variance-bias tradeoff on 2,000 abstracts from the Arxiv repository
($\rho = 0.01$, $B = 300$). Top row. For fixed L = 30 (left), L = 100 (middle), and L = 300
(right), we compare the squared bias, variance, variance+bias and the squared error as a function
of iterations. Depending on L, the variance or the bias give the dominant contribution to the error.
Bottom row. Squared bias (left), variance (middle) and squared error (right) for different values of
L. Intermediate values of L lead to the smallest squared error and hence to the best tradeoff between
small variance and small bias.
This can be reformulated as $\mathrm{var}(\hat g_i^L) = \frac{1}{L^2}\sum_{j=0}^{L-1} \mathrm{var}(\hat g_{i-j})$. Assuming that the variance changes little during those $L$ successive updates, we can approximate $\mathrm{var}(\hat g_{i-j}) \approx \mathrm{var}(\hat g_i)$, which yields

$$\mathrm{var}(\hat g_i^L) \approx \frac{1}{L}\,\mathrm{var}(\hat g_i). \qquad (15)$$

The smoothed gradient has therefore a variance that is approximately $L$ times smaller than the variance of the original stochastic gradient.
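This 1/L scaling is easy to verify numerically; here is a small sketch (our illustration, with made-up Gaussian noise around a fixed gradient value):

import numpy as np

rng = np.random.default_rng(0)
L, trials = 10, 100000
g_true = 1.5                                        # a fixed "full gradient" value
noisy = g_true + rng.standard_normal((trials, L))   # L noisy gradients per trial
smoothed = noisy.mean(axis=1)                       # window average over L terms

print(noisy[:, 0].var())   # ~1.0, variance of a single stochastic gradient
print(smoothed.var())      # ~0.1, reduced by a factor of L (Eq. 15)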
Bias-variance tradeoff To understand and illustrate the effect of L in our optimization problem,
we used a small data set of 2000 abstracts from the Arxiv repository. This allowed us to compute
the full sufficient statistics and the full gradient for reference. More details on the data set and the
corresponding parameters will be given below.
We computed squared bias (SB), variance (VAR) and squared error (SE) according to Eq. (13) for a
single stochastic optimization run. More explicitly,
$$\mathrm{SB}_i = \sum_{k=1}^{K}\sum_{v=1}^{V} \big(g_i^L - g_i\big)_{kv}^2, \qquad \mathrm{VAR}_i = \sum_{k=1}^{K}\sum_{v=1}^{V} \big(\hat g_i^L - g_i^L\big)_{kv}^2, \qquad \mathrm{SE}_i = \sum_{k=1}^{K}\sum_{v=1}^{V} \big(\hat g_i^L - g_i\big)_{kv}^2. \qquad (16)$$
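Given arrays holding the three gradients of Eq. (12), these diagnostics are one-liners; a minimal sketch (our notation, not the authors' code):

import numpy as np

def diagnostics(g_hat_L, g_L, g):
    """Squared bias, variance, and squared error of Eq. (16).
    All inputs are K x V arrays: the smoothed stochastic gradient,
    the time-averaged full gradient, and the full gradient."""
    sb = np.sum((g_L - g) ** 2)         # squared bias
    var = np.sum((g_hat_L - g_L) ** 2)  # variance term
    se = np.sum((g_hat_L - g) ** 2)     # squared error
    return sb, var, se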
In Fig. 1, we plot those quantities as a function of iteration steps (time). As argued before, we arrive
at a drastic variance reduction (bottom, middle) when choosing large values of L.
In contrast, the squared bias (bottom, left) typically increases with L. The bias shows a complex
time-evolution as it maintains memory of L previous steps. For example, the kinks in the bias curves
(bottom, left) occur at times 3, 10, 30, 100 and 300, i.e. they correspond to the values of L. Those are
the times from which on the smoothed gradient loses memory of its initial state, typically carrying
a large bias. The variances become approximately stationary at iteration L (bottom, middle). Those
are the times where the initialization process ends and the queue Q in Algorithm 1 has reached its
maximal length L. The squared error (bottom, right) is to a good approximation just the sum of
squared bias and variance. This is also shown in the top panel of Fig. 1.
Due to the long-time memory of the smoothed gradients, one can associate some 'inertia' or 'momentum' to each value of L. The larger L, the smaller the variance and the larger the inertia. In
a non-convex optimization setup with many local optima as in our case, too much inertia can be
harmful. This effect can be seen for the L = 100 and L = 300 runs in Fig. 1 (bottom), where the
mean squared bias and error curves bend upwards at long times. Think of a marble rolling in a wavy
landscape: with too much momentum it runs the danger of passing through a good optimum and
eventually getting trapped in a bad local optimum. This picture suggests that the optimal value of
L depends on the 'ruggedness' of the potential landscape of the optimization problem at hand. Our empirical study suggests that choosing L between 10 and 100 produces the smallest mean squared error.
Aside: connection to gradient averaging  Our algorithm was inspired by various gradient averaging schemes. However, we cannot easily use averaged gradients in SVI. To see the drawbacks of
gradient averaging, let us consider $L$ stochastic gradients $\hat g_i, \hat g_{i-1}, \hat g_{i-2}, \ldots, \hat g_{i-L+1}$ and replace

$$\hat g_i \;\longmapsto\; \tfrac{1}{L}\textstyle\sum_{j=0}^{L-1} \hat g_{i-j}. \qquad (17)$$

One arrives at the following parameter update for $\lambda_i$:

$$\lambda_{i+1} = (1 - \rho_i)\lambda_i + \rho_i\left(\eta + \tfrac{1}{L}\textstyle\sum_{j=0}^{L-1} \hat S_{i-j} - \tfrac{1}{L}\textstyle\sum_{j=0}^{L-1} (\lambda_{i-j} - \lambda_i)\right). \qquad (18)$$
This update can lead to the violation of optimization constraints, namely to a negative variational parameter $\lambda$. Note that for $L = 1$ (the case of SVI), the third term is zero, guaranteeing positivity of the update. This is no longer guaranteed for $L > 1$, and the gradient updates will eventually become negative. We found this in practice. Furthermore, we find that there is an extra contribution to the bias compared to Eq. (14),

$$\mathbb{E}[(g_i^L - g_i)^2] = \mathbb{E}\left[\Big(\tfrac{1}{L}\textstyle\sum_{j=0}^{L-1} (\lambda_i - \lambda_{i-j}) + \tfrac{1}{L}\textstyle\sum_{j=0}^{L-1} (S_{i-j} - S_i)\Big)^2\right]. \qquad (19)$$
Hence, the averaged gradient carries an additional bias in $\lambda$; it is the same term that may violate optimization constraints. In contrast, the variance of the averaged gradient is the same as the variance of the smoothed gradient. Compared to gradient averaging, the smoothed gradient has a smaller bias while profiting from the same variance reduction.
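To see the feasibility issue concretely, here is a toy numeric sketch (made-up scalar values, our illustration): with the smoothed gradient, the update is a convex combination of positive terms, while the averaged gradient of Eq. (18) adds the extra $(\lambda_{i-j} - \lambda_i)$ term, which can push $\lambda$ negative when the parameter has recently decreased a lot.

import numpy as np

eta, rho = 0.1, 0.9
lam_hist = np.array([10.0, 5.0, 0.2])   # lambda_{i-2}, lambda_{i-1}, lambda_i
S_hist = np.array([0.05, 0.05, 0.05])   # recent sufficient statistics (tiny)

lam_i = lam_hist[-1]
S_bar = S_hist.mean()

# Smoothed gradient update: convex combination, stays positive.
lam_smoothed = (1 - rho) * lam_i + rho * (eta + S_bar)

# Averaged gradient update (Eq. 18): the extra term can turn it negative.
extra = (lam_hist - lam_i).mean()
lam_averaged = (1 - rho) * lam_i + rho * (eta + S_bar - extra)

print(lam_smoothed)   # 0.155, positive
print(lam_averaged)   # about -4.2, infeasible for a Dirichlet parameter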
3 Empirical study
We tested SVI for LDA, using the smoothed stochastic gradients, on three large corpora:
- 882K scientific abstracts from the Arxiv repository, using a vocabulary of 14K words.
- 1.7M articles from the New York Times, using a vocabulary of 8K words.
- 3.6M articles from Wikipedia, using a vocabulary of 7.7K words.
We set the minibatch size to B = 300 and furthermore set the number of topics to K = 100, and
the hyper-parameters $\alpha = \eta = 0.5$. We fixed the learning rate to $\rho = 10^{-3}$. We also compared our
results to a decreasing learning rate and found the same behavior.
For a quantitative test of model fitness, we evaluate the predictive probability over the vocabulary [1].
To this end, we separate a test set from the training set. This test set is furthermore split into two
parts: half of it is used to obtain the local variational parameters (i.e., the topic proportions) by fitting LDA with the fixed global parameters $\lambda$. The second part is used to compute the likelihoods of the contained words:

$$p(w_{\text{new}} \mid w_{\text{old}}, D) \approx \int \sum_{k=1}^{K} \bar\theta_k\, \bar\beta_{k,w_{\text{new}}}\; q(\theta)\, q(\beta)\, d\theta\, d\beta = \sum_{k=1}^{K} \mathbb{E}_q[\bar\theta_k]\, \mathbb{E}_q[\bar\beta_{k,w_{\text{new}}}]. \qquad (20)$$
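Under the Dirichlet variational posteriors, both expectations in Eq. (20) are just normalized parameters; a minimal sketch (our illustration, assuming a single held-out document with fitted parameters gamma_d and lam):

import numpy as np

def predictive_prob(word_id, gamma_d, lam):
    """Per-word predictive probability of Eq. (20).
    gamma_d : length-K Dirichlet parameter of the document's topic weights
    lam     : K x V Dirichlet parameters of the topics
    For a Dirichlet(a), E_q[mean_k] = a_k / sum(a)."""
    e_theta = gamma_d / gamma_d.sum()
    e_beta = lam[:, word_id] / lam.sum(axis=1)
    return float(e_theta @ e_beta)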
We show the predictive probabilities as a function of effective passes through the data set in Fig. 2
for the New York Times, Arxiv, and Wikipedia corpus, respectively. Effective passes through the
data set are defined as (minibatch size * iterations / size of corpus). Within each plot, we compare
Figure 2: Per-word predictive probability as a function of the effective number of passes through
the data (minibatch size * iterations / size of corpus). We compare results for the New York Times,
Arxiv, and Wikipedia data sets. Each plot shows data for different values of L. We used a constant
learning rate of $10^{-3}$, and set a time budget of 24 hours. Highest likelihoods are obtained for L
between 10 and 100, after which strong bias effects set in.
different numbers of stored sufficient statistics, $L \in \{1, 10, 100, 1000, 10000, \infty\}$. The last value of $L = \infty$ corresponds to a version of the algorithm where we average over all previous sufficient
statistics, which is related to averaged gradients (AG), but which has a bias too large to compete
with small and finite values of L. The maximal values of 30, 5 and 6 effective passes through the
Arxiv, New York Times and Wikipedia data sets, respectively, approximately correspond to a run
time of 24 hours, which we set as a hard cutoff in our study.
We obtain the highest held-out likelihoods for intermediate values of L. E.g., averaging only over
10 subsequent sufficient statistics results in much faster convergence and higher likelihoods at very
little extra storage costs. As we discussed above, we attribute this fact to the best tradeoff between
variance and bias.
4 Discussion and Conclusions
SVI scales up Bayesian inference, but suffers from noisy stochastic gradients. To reduce the mean
squared error relative to the full gradient, we averaged the sufficient statistics of SVI successively
over L iteration steps. The resulting smoothed gradient is biased, however, and the performance of
the method is governed by the competition between bias and variance. We argued theoretically and
showed empirically that intermediate values of the number of stored sufficient statistics L give the
highest held-out likelihoods.
Proving convergence for our algorithm is still an open problem, which is non-trivial especially because the variational objective is non-convex. To guarantee convergence, however, we can simply
phase out our algorithm and reduce the number of stored gradients to one as we get close to convergence. At this point, we recover SVI.
Acknowledgements We thank Laurent Charlin, Alp Kucukelbir, Prem Gopolan, Rajesh Ranganath, Linpeng Tang, Neil Houlsby, Marius Kloft, and Matthew Hoffman for discussions. We
acknowledge financial support by NSF CAREER NSF IIS-0745520, NSF BIGDATA NSF IIS-1247664, NSF NEURO NSF IIS-1009542, ONR N00014-11-1-0651, the Alfred P. Sloan foundation, DARPA FA8750-14-2-0009 and the NSF MRSEC program through the Princeton Center for
Complex Materials Fellowship (DMR-0819860).
References
[1] Matthew D Hoffman, David M Blei, Chong Wang, and John Paisley. Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303–1347, 2013.
[2] Prem Gopalan, Jake M Hofman, and David M Blei. Scalable recommendation with Poisson factorization. Preprint, arXiv:1311.1704, 2013.
[3] Prem K Gopalan and David M Blei. Efficient discovery of overlapping communities in massive networks. Proceedings of the National Academy of Sciences, 110(36):14534–14539, 2013.
[4] Edoardo M Airoldi, David M Blei, Stephen E Fienberg, and Eric P Xing. Mixed membership stochastic blockmodels. In Advances in Neural Information Processing Systems, pages 33–40, 2009.
[5] James Hensman, Nicolo Fusi, and Neil D Lawrence. Gaussian processes for big data. Uncertainty in Artificial Intelligence, 2013.
[6] Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400–407, 1951.
[7] Chong Wang, Xi Chen, Alex Smola, and Eric Xing. Variance reduction for stochastic gradient optimization. In Advances in Neural Information Processing Systems, pages 181–189, 2013.
[8] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems, pages 315–323, 2013.
[9] Mark Schmidt, Nicolas Le Roux, and Francis Bach. Minimizing finite sums with the stochastic average gradient. Technical report, HAL 00860051, 2013.
[10] Yurii Nesterov. Primal-dual subgradient methods for convex problems. Mathematical Programming, 120(1):221–259, 2009.
[11] Masa-Aki Sato. Online model selection based on the variational Bayes. Neural Computation, 13(7):1649–1681, 2001.
[12] Mirwaes Wahabzada and Kristian Kersting. Larger residuals, less work: Active document scheduling for latent Dirichlet allocation. In Machine Learning and Knowledge Discovery in Databases, pages 475–490. Springer, 2011.
[13] Paul Tseng. An incremental gradient(-projection) method with momentum term and adaptive stepsize rule. SIAM Journal on Optimization, 8(2):506–531, 1998.
[14] Matthew Hoffman, Francis R Bach, and David M Blei. Online learning for latent Dirichlet allocation. In Advances in Neural Information Processing Systems, pages 856–864, 2010.
[15] Martin J Wainwright and Michael I Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008.
[16] Christopher M Bishop et al. Pattern Recognition and Machine Learning, volume 1. Springer New York, 2006.
[17] David M Blei, Andrew Y Ng, and Michael I Jordan. Latent Dirichlet allocation. The Journal of Machine Learning Research, 3:993–1022, 2003.
5,034 | 5,558 | Analysis of Variational Bayesian Latent Dirichlet
Allocation: Weaker Sparsity than MAP
Shinichi Nakajima
Berlin Big Data Center, TU Berlin
Berlin 10587 Germany
[email protected]
Issei Sato
University of Tokyo
Tokyo 113-0033 Japan
[email protected]
Masashi Sugiyama
University of Tokyo
Tokyo 113-0033, Japan
[email protected]
Kazuho Watanabe
Toyohashi University of Technology
Aichi 441-8580 Japan
[email protected]
Hiroko Kobayashi
Nikon Corporation
Kanagawa 244-8533 Japan
[email protected]
Abstract
Latent Dirichlet allocation (LDA) is a popular generative model of various objects
such as texts and images, where an object is expressed as a mixture of latent topics. In this paper, we theoretically investigate variational Bayesian (VB) learning
in LDA. More specifically, we analytically derive the leading term of the VB free
energy under an asymptotic setup, and show that there exist transition thresholds
in Dirichlet hyperparameters around which the sparsity-inducing behavior drastically changes. Then we further theoretically reveal the notable phenomenon that
VB tends to induce weaker sparsity than MAP in the LDA model, which is opposed to other models. We experimentally demonstrate the practical validity of
our asymptotic theory on real-world Last.FM music data.
1
Introduction
Latent Dirichlet allocation (LDA) [5] is a generative model successfully used in various applications
such as text analysis [5], image analysis [15], genometrics [6, 4], human activity analysis [12],
and collaborative filtering [14, 20].^1 Given word occurrences of documents in a corpus, LDA
expresses each document as a mixture of multinomial distributions, each of which is expected to
capture a topic. The extracted topics provide bases in a low-dimensional feature space, in which
each document is compactly represented. This topic expression was shown to be useful for solving
various tasks including classification [15], retrieval [26], and recommendation [14].
Since rigorous Bayesian inference is computationally intractable in the LDA model, various approximation techniques such as variational Bayesian (VB) learning [3, 7] are used. Previous theoretical
studies on VB learning revealed that VB tends to produce sparse solutions, e.g., in mixture models
[24, 25, 13], hidden Markov models [11], Bayesian networks [23], and fully-observed matrix factorization [17]. Here, we mean by sparsity that VB exhibits the automatic relevance determination
^1 For simplicity, we use the terminology in text analysis below. However, the range of application of our theory given in this paper is not limited to texts.
(ARD) effect [19], which automatically prunes irrelevant degrees of freedom under non-informative
or weakly sparse prior. Therefore, it is naturally expected that VB-LDA also produces a sparse solution (in terms of topics). However, it is often observed that VB-LDA does not generally give sparse
solutions.
In this paper, we attempt to clarify this gap by theoretically investigating the sparsity-inducing mechanism of VB-LDA. More specifically, we first analytically derive the leading term of the VB free
energy in some asymptotic limits, and show that there exist transition thresholds in Dirichlet hyperparameters around which the sparsity-inducing behavior changes drastically. We then analyze
the behavior of MAP and its variants in a similar way, and show that the VB solution is less sparse
than the MAP solution in the LDA model. This phenomenon is completely opposite to other models such as mixture models [24, 25, 13], hidden Markov models [11], Bayesian networks [23], and
fully-observed matrix factorization [17], where VB tends to induce stronger sparsity than MAP. We
numerically demonstrate the practical validity of our asymptotic theory using artificial and realworld Last.FM music data for collaborative filtering, and further discuss the peculiarity of the LDA
model in terms of sparsity.
The free energy of VB-LDA was previously analyzed in [16], which evaluated the advantage of
collapsed VB [21] over the original VB learning. However, that work focused on the difference
between VB and collapsed VB, and neither the absolute free energy nor the sparsity was investigated.
The update rules of VB was compared with those of MAP [2]. However, that work is based on
approximation, and rigorous analysis was not made. To the best of our knowledge, our paper is the
first work that theoretically elucidates the sparsity-inducing mechanism of VB-LDA.
2
Formulation
In this section, we introduce the latent Dirichlet allocation model and variational Bayesian learning.
2.1
Latent Dirichlet Allocation
Suppose that we observe M documents, each of which consists of N (m) words. Each word is
included in a vocabulary with size L. We assume that each word is associated with one of the H
topics, which is not observed. We express the word occurrence by an L-dimensional indicator vector
w, where one of the entries is equal to one and the others are equal to zero. Similarly, we express
the topic occurrence as an H-dimensional indicator vector z. We define the following functions that
give the item numbers chosen by w and z, respectively:
$\tilde{l}(\boldsymbol{w}) = l$ if $w_l = 1$ and $w_{l'} = 0$ for $l' \neq l$, $\qquad \tilde{h}(\boldsymbol{z}) = h$ if $z_h = 1$ and $z_{h'} = 0$ for $h' \neq h$.
In the latent Dirichlet allocation (LDA) model [5], the word occurrence $\boldsymbol{w}^{(n,m)}$ of the $n$-th position in the $m$-th document is assumed to follow the multinomial distribution:
$$p(\boldsymbol{w}^{(n,m)} | \Theta, B) = \prod_{l=1}^{L} \big( (B \Theta^\top)_{l,m} \big)^{w_l^{(n,m)}} = (B \Theta^\top)_{\tilde{l}(\boldsymbol{w}^{(n,m)}), m}, \qquad (1)$$
where $\Theta \in [0,1]^{M \times H}$ and $B \in [0,1]^{L \times H}$ are parameter matrices to be estimated. The rows of $\Theta$ and the columns of $B$ are probability mass vectors that sum up to one. We denote a column vector of a matrix by a bold lowercase letter, and a row vector by a bold lowercase letter with a tilde, i.e.,
$$\Theta = (\boldsymbol{\theta}_1, \ldots, \boldsymbol{\theta}_H) = (\tilde{\boldsymbol{\theta}}_1, \ldots, \tilde{\boldsymbol{\theta}}_M)^\top, \qquad B = (\boldsymbol{\beta}_1, \ldots, \boldsymbol{\beta}_H) = (\tilde{\boldsymbol{\beta}}_1, \ldots, \tilde{\boldsymbol{\beta}}_L)^\top.$$
With this notation, $\tilde{\boldsymbol{\theta}}_m$ denotes the topic distribution of the $m$-th document, and $\boldsymbol{\beta}_h$ denotes the word distribution of the $h$-th topic.
Given the topic occurrence latent variable $\boldsymbol{z}^{(n,m)}$, the complete likelihood is written as
$$p(\boldsymbol{w}^{(n,m)}, \boldsymbol{z}^{(n,m)} | \Theta, B) = p(\boldsymbol{w}^{(n,m)} | \boldsymbol{z}^{(n,m)}, B)\, p(\boldsymbol{z}^{(n,m)} | \Theta), \qquad (2)$$
where $p(\boldsymbol{w}^{(n,m)} | \boldsymbol{z}^{(n,m)}, B) = \prod_{l=1}^{L} \prod_{h=1}^{H} (B_{l,h})^{w_l^{(n,m)} z_h^{(n,m)}}$ and $p(\boldsymbol{z}^{(n,m)} | \Theta) = \prod_{h=1}^{H} (\Theta_{m,h})^{z_h^{(n,m)}}$.
We assume the Dirichlet prior on $\Theta$ and $B$:
$$p(\Theta|\alpha) \propto \prod_{m=1}^{M} \prod_{h=1}^{H} (\Theta_{m,h})^{\alpha-1}, \qquad p(B|\eta) \propto \prod_{h=1}^{H} \prod_{l=1}^{L} (B_{l,h})^{\eta-1}, \qquad (3)$$
[Figure 1: Graphical model of LDA.]
where $\alpha$ and $\eta$ are hyperparameters that control the prior sparsity. We can make $\alpha$ dependent on $m$ and/or $h$, and $\eta$ dependent on $l$ and/or $h$, and they can be estimated from observation. However, we fix those hyperparameters as given constants for simplicity in our analysis below. Figure 1 shows the graphical model of LDA.
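To make the generative process of Eqs. (1)-(3) concrete, the following sketch samples a small corpus under the model. It assumes NumPy; the sizes, seed, and variable names are illustrative choices of ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
L, M, H, alpha, eta = 50, 10, 3, 0.5, 0.5   # hypothetical sizes and hyperparameters
N_m = rng.poisson(100, size=M)              # words per document

Theta = rng.dirichlet(alpha * np.ones(H), size=M)   # rows: topic distributions (M x H)
B = rng.dirichlet(eta * np.ones(L), size=H).T       # columns: word distributions (L x H)

docs = []
for m in range(M):
    # For each word position, draw a topic z, then a word w from that topic.
    z = rng.choice(H, size=N_m[m], p=Theta[m])
    w = np.array([rng.choice(L, p=B[:, h]) for h in z])
    docs.append(w)   # word indices; equivalently, one-hot indicator vectors
```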
2.2
Variational Bayesian Learning
The Bayes posterior of LDA is written as
$$p(\Theta, B, \{\boldsymbol{z}^{(n,m)}\} | \{\boldsymbol{w}^{(n,m)}\}, \alpha, \eta) = \frac{p(\{\boldsymbol{w}^{(n,m)}\}, \{\boldsymbol{z}^{(n,m)}\} | \Theta, B)\, p(\Theta|\alpha)\, p(B|\eta)}{p(\{\boldsymbol{w}^{(n,m)}\})}, \qquad (4)$$
where $p(\{\boldsymbol{w}^{(n,m)}\}) = \int p(\{\boldsymbol{w}^{(n,m)}\}, \{\boldsymbol{z}^{(n,m)}\} | \Theta, B)\, p(\Theta|\alpha)\, p(B|\eta)\, d\Theta\, dB\, d\{\boldsymbol{z}^{(n,m)}\}$ is intractable to compute and thus requires some approximation method. In this paper, we focus on the variational Bayesian (VB) approximation and investigate its behavior theoretically.
In the VB approximation, we assume that our approximate posterior is factorized as
$$q(\Theta, B, \{\boldsymbol{z}^{(n,m)}\}) = q(\Theta, B)\, q(\{\boldsymbol{z}^{(n,m)}\}), \qquad (5)$$
and minimize the free energy:
$$F = \left\langle \log \frac{q(\Theta, B, \{\boldsymbol{z}^{(n,m)}\})}{p(\{\boldsymbol{w}^{(n,m)}\}, \{\boldsymbol{z}^{(n,m)}\} | \Theta, B)\, p(\Theta|\alpha)\, p(B|\eta)} \right\rangle_{q(\Theta, B, \{\boldsymbol{z}^{(n,m)}\})}, \qquad (6)$$
where $\langle \cdot \rangle_p$ denotes the expectation over the distribution $p$. This amounts to finding the distribution that is closest to the Bayes posterior (4) under the constraint (5). Using the variational method, we can obtain the following stationary conditions:
$$q(\Theta) \propto p(\Theta|\alpha) \exp \big\langle \log p(\{\boldsymbol{w}^{(n,m)}\}, \{\boldsymbol{z}^{(n,m)}\} | \Theta, B) \big\rangle_{q(B)\, q(\{\boldsymbol{z}^{(n,m)}\})}, \qquad (7)$$
$$q(B) \propto p(B|\eta) \exp \big\langle \log p(\{\boldsymbol{w}^{(n,m)}\}, \{\boldsymbol{z}^{(n,m)}\} | \Theta, B) \big\rangle_{q(\Theta)\, q(\{\boldsymbol{z}^{(n,m)}\})}, \qquad (8)$$
$$q(\{\boldsymbol{z}^{(n,m)}\}) \propto \exp \big\langle \log p(\{\boldsymbol{w}^{(n,m)}\}, \{\boldsymbol{z}^{(n,m)}\} | \Theta, B) \big\rangle_{q(\Theta)\, q(B)}. \qquad (9)$$
From this, we can confirm that $\{q(\tilde{\boldsymbol{\theta}}_m)\}$ and $\{q(\boldsymbol{\beta}_h)\}$ follow the Dirichlet distribution and $\{q(\boldsymbol{z}^{(n,m)})\}$ follows the multinomial distribution:
$$q(\Theta) \propto \prod_{m=1}^{M} \prod_{h=1}^{H} (\Theta_{m,h})^{\hat{\Theta}_{m,h}-1}, \qquad q(B) \propto \prod_{h=1}^{H} \prod_{l=1}^{L} (B_{l,h})^{\hat{B}_{l,h}-1}, \qquad (10)$$
$$q(\{\boldsymbol{z}^{(n,m)}\}) = \prod_{m=1}^{M} \prod_{n=1}^{N^{(m)}} \prod_{h=1}^{H} (\hat{z}_h^{(n,m)})^{z_h^{(n,m)}}, \qquad (11)$$
where, for $\Psi(\cdot)$ denoting the digamma function, the variational parameters satisfy
$$\hat{\Theta}_{m,h} = \alpha + \sum_{n=1}^{N^{(m)}} \hat{z}_h^{(n,m)}, \qquad \hat{B}_{l,h} = \eta + \sum_{m=1}^{M} \sum_{n=1}^{N^{(m)}} w_l^{(n,m)} \hat{z}_h^{(n,m)}, \qquad (12)$$
$$\hat{z}_h^{(n,m)} = \frac{\exp\!\Big( \Psi(\hat{\Theta}_{m,h}) + \sum_{l=1}^{L} w_l^{(n,m)} \big( \Psi(\hat{B}_{l,h}) - \Psi\big(\sum_{l'=1}^{L} \hat{B}_{l',h}\big) \big) \Big)}{\sum_{h'=1}^{H} \exp\!\Big( \Psi(\hat{\Theta}_{m,h'}) + \sum_{l=1}^{L} w_l^{(n,m)} \big( \Psi(\hat{B}_{l,h'}) - \Psi\big(\sum_{l'=1}^{L} \hat{B}_{l',h'}\big) \big) \Big)}. \qquad (13)$$
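The stationary conditions (12)-(13) define the usual coordinate-wise VB-LDA updates. Below is a minimal NumPy/SciPy sketch of one sweep, assuming documents are given as arrays of word indices; the dense responsibility matrix and the function signature are our own simplifications, not the authors' code.

```python
import numpy as np
from scipy.special import digamma

def vb_lda_sweep(docs, L, H, alpha, eta, Theta_hat, B_hat):
    """One update of Eqs. (12)-(13): recompute responsibilities z_hat,
    then the Dirichlet parameters Theta_hat (M x H) and B_hat (L x H)."""
    M = len(docs)
    # Expected log terms: Psi(B_hat_{l,h}) - Psi(sum_l' B_hat_{l',h})
    Elog_B = digamma(B_hat) - digamma(B_hat.sum(axis=0, keepdims=True))
    new_Theta = np.full((M, H), alpha)
    new_B = np.full((L, H), eta)
    for m, words in enumerate(docs):
        Elog_T = digamma(Theta_hat[m])            # Psi(Theta_hat_{m,h}); the common
                                                  # -Psi(sum_h') factor cancels in Eq. (13)
        logr = Elog_T[None, :] + Elog_B[words, :] # one row per word position
        r = np.exp(logr - logr.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)         # z_hat^{(n,m)} of Eq. (13)
        new_Theta[m] += r.sum(axis=0)             # Theta side of Eq. (12)
        np.add.at(new_B, words, r)                # B side of Eq. (12)
    return new_Theta, new_B
```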
2.3
Partially Bayesian Learning and MAP Estimation
We can partially apply VB learning by approximating the posterior of $\Theta$ or $B$ by the delta function. This approach is called the partially Bayesian (PA) learning [18], whose behavior was analyzed and compared with VB in fully-observed matrix factorization. We call it PBA learning if $\Theta$ is marginalized and $B$ is point-estimated, and PBB learning if $B$ is marginalized and $\Theta$ is point-estimated. Note that the original VB algorithm for LDA proposed by [5] corresponds to PBA in our terminology. We also analyze the behavior of MAP estimation, where both $\Theta$ and $B$ are point-estimated. This corresponds to the probabilistic latent semantic analysis (pLSA) model [10], if we assume the flat prior $\alpha = \eta = 1$ [8].
3
Theoretical Analysis
In this section, we first give an explicit form of the free energy in the LDA model. We then investigate
its asymptotic behavior for VB learning, and further conduct similar analyses to the PBA, PBB, and
MAP methods. Finally, we discuss the sparsity-inducing mechanism of these learning methods, and
the relation to previous theoretical studies.
3.1
Explicit Form of Free Energy
We first express the free energy (6) as a function of the variational parameters $\hat{\Theta}$ and $\hat{B}$:
$$F = R + Q, \qquad (14)$$
where
$$R = \left\langle \log \frac{q(\Theta)\, q(B)}{p(\Theta|\alpha)\, p(B|\eta)} \right\rangle_{q(\Theta, B)} = \sum_{m=1}^{M} \left[ \log \left( \frac{\Gamma\big(\sum_{h=1}^{H} \hat{\Theta}_{m,h}\big)}{\prod_{h=1}^{H} \Gamma(\hat{\Theta}_{m,h})} \frac{\Gamma(\alpha)^H}{\Gamma(H\alpha)} \right) + \sum_{h=1}^{H} (\hat{\Theta}_{m,h} - \alpha) \Big( \Psi(\hat{\Theta}_{m,h}) - \Psi\big(\textstyle\sum_{h'=1}^{H} \hat{\Theta}_{m,h'}\big) \Big) \right]$$
$$\phantom{R = } + \sum_{h=1}^{H} \left[ \log \left( \frac{\Gamma\big(\sum_{l=1}^{L} \hat{B}_{l,h}\big)}{\prod_{l=1}^{L} \Gamma(\hat{B}_{l,h})} \frac{\Gamma(\eta)^L}{\Gamma(L\eta)} \right) + \sum_{l=1}^{L} (\hat{B}_{l,h} - \eta) \Big( \Psi(\hat{B}_{l,h}) - \Psi\big(\textstyle\sum_{l'=1}^{L} \hat{B}_{l',h}\big) \Big) \right], \qquad (15)$$
$$Q = \left\langle \log \frac{q(\{\boldsymbol{z}^{(n,m)}\})}{p(\{\boldsymbol{w}^{(n,m)}\}, \{\boldsymbol{z}^{(n,m)}\} | \Theta, B)} \right\rangle_{q(\Theta, B, \{\boldsymbol{z}^{(n,m)}\})} = - \sum_{m=1}^{M} N^{(m)} \sum_{l=1}^{L} V_{l,m} \log \sum_{h=1}^{H} \frac{\exp(\Psi(\hat{\Theta}_{m,h}))\, \exp(\Psi(\hat{B}_{l,h}))}{\exp\big(\Psi(\sum_{h'=1}^{H} \hat{\Theta}_{m,h'})\big)\, \exp\big(\Psi(\sum_{l'=1}^{L} \hat{B}_{l',h})\big)}. \qquad (16)$$
Here, $V \in \mathbb{R}^{L \times M}$ is the empirical word distribution matrix with its entries given by $V_{l,m} = \frac{1}{N^{(m)}} \sum_{n=1}^{N^{(m)}} w_l^{(n,m)}$. Note that we have eliminated the variational parameters $\{\hat{\boldsymbol{z}}^{(n,m)}\}$ for the topic occurrence latent variables by using the stationary condition (13).
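Given a fitted posterior $(\hat{\Theta}, \hat{B})$ and the empirical matrix $V$, the decomposition $F = R + Q$ of Eqs. (14)-(16) can be evaluated directly. The sketch below assumes NumPy/SciPy and follows our reconstruction of Eq. (15); it is illustrative, not the authors' code.

```python
import numpy as np
from scipy.special import digamma, gammaln

def free_energy(Theta_hat, B_hat, V, N_m, alpha, eta):
    """F = R + Q of Eq. (14), with R from Eq. (15) and Q from Eq. (16).
    Theta_hat: (M, H), B_hat: (L, H), V: (L, M) empirical word distributions."""
    def dirichlet_term(A, a0):
        # One bracketed summand of Eq. (15) per row of A (symmetric prior a0).
        s = A.sum(axis=1)
        kl = (gammaln(s) - gammaln(A).sum(axis=1)
              + A.shape[1] * gammaln(a0) - gammaln(A.shape[1] * a0)
              + ((A - a0) * (digamma(A) - digamma(s)[:, None])).sum(axis=1))
        return kl.sum()

    R = dirichlet_term(Theta_hat, alpha) + dirichlet_term(B_hat.T, eta)
    # Eq. (16): "geometric mean" parameters exp(Psi(.) - Psi(sum)).
    gT = np.exp(digamma(Theta_hat) - digamma(Theta_hat.sum(axis=1, keepdims=True)))
    gB = np.exp(digamma(B_hat) - digamma(B_hat.sum(axis=0, keepdims=True)))
    P = gB @ gT.T                        # (L, M): sum over the topics h
    Q = -np.sum(N_m[None, :] * V * np.log(P))
    return R + Q
```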
3.2
Asymptotic Analysis of VB Solution
Below, we investigate the leading term of the free energy in the asymptotic limit when $N \equiv \min_m N^{(m)} \to \infty$. Unlike the previous analysis for latent variable models [24], we do not assume $L, M \ll N$, but $1 \ll L, M, N$ at this point. This amounts to considering the asymptotic limit when $L, M, N \to \infty$ with a fixed mutual ratio, or equivalently, assuming $L, M \sim O(N)$. Throughout the paper, $H$ is set at $H = \min(L, M)$ (i.e., the matrix $B\Theta^\top$ can express any multinomial distribution). We assume that the word distribution matrix $V$ is a sample from the multinomial distribution with the true parameter $U^* \in \mathbb{R}^{L \times M}$ whose rank is $H^* \sim O(1)$, i.e., $U^* = B^* \Theta^{*\top}$ where $\Theta^* \in \mathbb{R}^{M \times H^*}$ and $B^* \in \mathbb{R}^{L \times H^*}$.^2 We assume that $\alpha, \eta \sim O(1)$.
The stationary condition (12) leads to the following lemma (the proof is given in Appendix A):
Lemma 1 Let $\widehat{B\Theta^\top} = \langle B\Theta^\top \rangle_{q(\Theta,B)}$. Then, it holds that
$$\big\langle \big( (B\Theta^\top - \widehat{B\Theta^\top})_{l,m} \big)^2 \big\rangle_{q(\Theta,B)} = O_p(N^{-2}), \qquad (17)$$
$$Q = - \sum_{m=1}^{M} N^{(m)} \sum_{l=1}^{L} V_{l,m} \log (\widehat{B\Theta^\top})_{l,m} + O_p(M), \qquad (18)$$
where $O_p(\cdot)$ denotes the order in probability.
^2 More precisely, $U^* = B^* \Theta^{*\top} + O(N^{-1})$ is sufficient.
Eq. (17) implies the convergence of the posterior. Let
$$\hat{J} = \sum_{l=1}^{L} \sum_{m=1}^{M} \theta\big( (\widehat{B\Theta^\top})_{l,m} \neq (B^* \Theta^{*\top})_{l,m} + O_p(N^{-1}) \big) \qquad (19)$$
be the number of entries of $\widehat{B\Theta^\top}$ that do not converge to the true value. Here, we denote by $\theta(\cdot)$ the indicator function equal to one if the event is true, and zero otherwise. Then, Eq. (18) leads to the following lemma:
Lemma 2 $Q$ is minimized when $\widehat{B\Theta^\top} = B^* \Theta^{*\top} + O_p(N^{-1})$, and it holds that
$$Q = S + O_p(\hat{J} N + M), \quad \text{where} \quad S = - \log p(\{\boldsymbol{w}^{(n,m)}\}, \{\boldsymbol{z}^{(n,m)}\} | \Theta^*, B^*) = - \sum_{m=1}^{M} N^{(m)} \sum_{l=1}^{L} V_{l,m} \log (B^* \Theta^{*\top})_{l,m}.$$
Lemma 2 simply states that $Q/N$ converges to the normalized entropy $S/N$ of the true distribution (which is the lowest achievable value with probability 1), if and only if VB converges to the true distribution (i.e., $\hat{J} = 0$).
Let $\hat{H} = \sum_{h=1}^{H} \theta\big( \frac{1}{M} \sum_{m=1}^{M} \hat{\Theta}_{m,h} \sim O_p(1) \big)$ be the number of topics used in the whole corpus, $\hat{M}^{(h)} = \sum_{m=1}^{M} \theta(\hat{\Theta}_{m,h} \sim O_p(1))$ be the number of documents that contain the $h$-th topic, and $\hat{L}^{(h)} = \sum_{l=1}^{L} \theta(\hat{B}_{l,h} \sim O_p(1))$ be the number of words of which the $h$-th topic consists. We have the following lemma (the proof is given in Appendix B):
Lemma 3 $R$ is written as follows:
$$R = \Big[ M \big( H\alpha - \tfrac{1}{2} \big) + \hat{H} \big( L\eta - \tfrac{1}{2} \big) - \sum_{h=1}^{\hat{H}} \Big( \hat{M}^{(h)} \big( \alpha - \tfrac{1}{2} \big) + \hat{L}^{(h)} \big( \eta - \tfrac{1}{2} \big) \Big) \Big] \log N + (H - \hat{H}) \big( L\eta - \tfrac{1}{2} \big) \log L + O_p(H(M + L)). \qquad (20)$$
Since we assumed that the true matrices $\Theta^*$ and $B^*$ are of the rank of $H^*$, $\hat{H} = H^* \sim O(1)$ is sufficient for the VB posterior to converge to the true distribution. However, $\hat{H}$ can be much larger than $H^*$ with $\langle B\Theta^\top \rangle_{q(\Theta,B)}$ unchanged because of the non-identifiability of matrix factorization: duplicating topics with divided weights, for example, does not change the distribution.
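The counts $\hat{H}$, $\hat{M}^{(h)}$, and $\hat{L}^{(h)}$ can be estimated from a fitted posterior by thresholding which Dirichlet parameters remain of order one. In the sketch below, the threshold `tol` is an arbitrary illustrative stand-in for the $O_p(1)$ condition, not a quantity from the paper.

```python
import numpy as np

def topic_counts(Theta_hat, B_hat, tol=1.0):
    """Count active topics, documents per topic, and words per topic,
    treating parameters above a user-chosen tol as O_p(1)."""
    H_hat = int((Theta_hat.mean(axis=0) > tol).sum())  # topics used in the corpus
    M_h = (Theta_hat > tol).sum(axis=0)                # documents containing topic h
    L_h = (B_hat > tol).sum(axis=0)                    # words making up topic h
    return H_hat, M_h, L_h
```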
Based on Lemma 2 and Lemma 3, we obtain the following theorem (the proof is given in Appendix C):
Theorem 1 In the limit when $N \to \infty$ with $L, M \sim O(1)$, it holds that $\hat{J} = 0$ with probability 1, and
$$F = S + \Big[ M \big( H\alpha - \tfrac{1}{2} \big) + \hat{H} \big( L\eta - \tfrac{1}{2} \big) - \sum_{h=1}^{\hat{H}} \Big( \hat{M}^{(h)} \big( \alpha - \tfrac{1}{2} \big) + \hat{L}^{(h)} \big( \eta - \tfrac{1}{2} \big) \Big) \Big] \log N + O_p(1).$$
In the limit when $N, M \to \infty$ with $\frac{M}{N}, L \sim O(1)$, it holds that $\hat{J} = o_p(\log N)$, and
$$F = S + \Big[ M \big( H\alpha - \tfrac{1}{2} \big) - \sum_{h=1}^{\hat{H}} \hat{M}^{(h)} \big( \alpha - \tfrac{1}{2} \big) \Big] \log N + o_p(N \log N).$$
In the limit when $N, L \to \infty$ with $\frac{L}{N}, M \sim O(1)$, it holds that $\hat{J} = o_p(\log N)$, and
$$F = S + H L \eta \log N + o_p(N \log N).$$
In the limit when $N, L, M \to \infty$ with $\frac{L}{N}, \frac{M}{N} \sim O(1)$, it holds that $\hat{J} = o_p(N \log N)$, and
$$F = S + H (M\alpha + L\eta) \log N + o_p(N^2 \log N).$$
Since Eq. (17) was shown to hold, the predictive distribution converges to the true distribution if $\hat{J} = 0$. Accordingly, Theorem 1 states that consistency holds in the limit when $N \to \infty$ with $L, M \sim O(1)$.
Theorem 1 also implies that, in the asymptotic limits with small $L \sim O(1)$, the leading term depends on $\hat{H}$, meaning that it dominates the topic sparsity of the VB solution. We have the following corollary (the proof is given in Appendix D):
Table 1: Sparsity thresholds of the VB, PBA, PBB, and MAP methods (see Theorem 2). The first four columns show the thresholds $(\alpha_{\mathrm{sparse}}, \alpha_{\mathrm{dense}})$, of which the function forms depend on the range of $\eta$, in the limit when $N \to \infty$ with $L, M \sim O(1)$. A single value is shown if $\alpha_{\mathrm{sparse}} = \alpha_{\mathrm{dense}}$. The last column shows the threshold $\alpha_{M \to \infty}$ in the limit when $N, M \to \infty$ with $\frac{M}{N}, L \sim O(1)$.
$\eta$ range | VB | PBA | PBB | MAP
$0 < \eta \le \frac{1}{2L}$ | $\frac{1}{2} - \frac{1/2 - L\eta}{\min_h M^{*(h)}}$ | $\frac{1}{2}$ | $1 - \frac{1/2 - L\eta}{\min_h M^{*(h)}}$ | $1$
$\frac{1}{2L} < \eta \le \frac{1}{2}$ | $\frac{1}{2} + \frac{L\eta - 1/2}{2\max_h M^{*(h)}}$ | $\frac{1}{2}$ | $1 + \frac{L\eta - 1/2}{2\max_h M^{*(h)}}$ | $1$
$\frac{1}{2} < \eta < 1$ | $\big(\frac{1}{2} + \frac{L-1}{2\max_h M^{*(h)}},\ \frac{1}{2} + \frac{L\eta - 1/2}{\min_h M^{*(h)}}\big)$ | $\frac{1}{2}$ | $\big(1 + \frac{L-1}{2\max_h M^{*(h)}},\ 1 + \frac{L\eta - 1/2}{\min_h M^{*(h)}}\big)$ | $1$
$1 \le \eta < \infty$ | (as for $\frac{1}{2} < \eta < 1$) | $\big(\frac{1}{2},\ \frac{1}{2} + \min_h \frac{L(\eta-1)}{M^{*(h)}}\big)$ | (as for $\frac{1}{2} < \eta < 1$) | $\big(1,\ 1 + \min_h \frac{L(\eta-1)}{M^{*(h)}}\big)$
$\alpha_{M\to\infty}$ ($0 < \eta < \infty$) | $\frac{1}{2}$ | $\frac{1}{2}$ | $1$ | $1$
Corollary 1 Let $M^{*(h)} = \sum_{m=1}^{M} \theta(\Theta^*_{m,h} \sim O(1))$ and $L^{*(h)} = \sum_{l=1}^{L} \theta(B^*_{l,h} \sim O(1))$. Consider the limit when $N \to \infty$ with $L, M \sim O(1)$. When $0 < \eta \le \frac{1}{2L}$, the VB solution is sparse if $\alpha < \frac{1}{2} - \frac{1/2 - L\eta}{\min_h M^{*(h)}}$, and dense if $\alpha > \frac{1}{2} - \frac{1/2 - L\eta}{\min_h M^{*(h)}}$. When $\frac{1}{2L} < \eta \le \frac{1}{2}$, the VB solution is sparse if $\alpha < \frac{1}{2} + \frac{L\eta - 1/2}{2\max_h M^{*(h)}}$, and dense if $\alpha > \frac{1}{2} + \frac{L\eta - 1/2}{2\max_h M^{*(h)}}$. When $\eta > \frac{1}{2}$, the VB solution is sparse if $\alpha < \frac{1}{2} + \frac{L-1}{2\max_h M^{*(h)}}$, and dense if $\alpha > \frac{1}{2} + \frac{L\eta - 1/2}{\min_h M^{*(h)}}$. In the limit when $N, M \to \infty$ with $\frac{M}{N}, L \sim O(1)$, the VB solution is sparse if $\alpha < \frac{1}{2}$, and dense if $\alpha > \frac{1}{2}$.
In the case when $L, M \ll N$ and in the case when $L \ll M, N$, Corollary 1 provides information on the sparsity of the VB solution, which will be compared with the other methods in Section 3.3. On the other hand, although we have successfully derived the leading term of the free energy also in the case when $M \ll L, N$ and in the case when $1 \ll L, M, N$, it unfortunately provides no information on the sparsity of the solution.
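As a numerical sanity check, the VB thresholds can be evaluated directly. The helper below follows our reconstruction of Corollary 1's case analysis and should be read with that caveat; `M_star` stands for the list of true per-topic document counts $M^{*(h)}$.

```python
def vb_thresholds(eta, L, M_star):
    """Return (alpha_sparse, alpha_dense) for VB per our reading of Corollary 1."""
    lo, hi = min(M_star), max(M_star)
    if eta <= 1.0 / (2 * L):
        t = 0.5 - (0.5 - L * eta) / lo
        return t, t
    if eta <= 0.5:
        t = 0.5 + (L * eta - 0.5) / (2 * hi)
        return t, t
    return 0.5 + (L - 1) / (2 * hi), 0.5 + (L * eta - 0.5) / lo
```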
3.3
Asymptotic Analysis of PBA, PBB, and MAP
By applying similar analysis to PBA learning, PBB learning, and MAP estimation, we can obtain
the following theorem (the proof is given in Appendix E):
Theorem 2 In the limit when $N \to \infty$ with $L, M \sim O(1)$, the solution is sparse if $\alpha < \alpha_{\mathrm{sparse}}$, and dense if $\alpha > \alpha_{\mathrm{dense}}$. In the limit when $N, M \to \infty$ with $\frac{M}{N}, L \sim O(1)$, the solution is sparse if $\alpha < \alpha_{M \to \infty}$, and dense if $\alpha > \alpha_{M \to \infty}$. Here, $\alpha_{\mathrm{sparse}}$, $\alpha_{\mathrm{dense}}$, and $\alpha_{M \to \infty}$ are given in Table 1.
A notable finding from Table 1 is that the threshold that determines the topic sparsity of PBB-LDA is (in most cases exactly) $\tfrac{1}{2}$ larger than the threshold of VB-LDA. The same relation is observed between MAP-LDA and PBA-LDA. From these, we can conclude that point-estimating $\Theta$, instead of integrating it out, increases the threshold by $\tfrac{1}{2}$ in the LDA model. We will validate this observation by numerical experiments in Section 4.
3.4
Discussion
The above theoretical analysis (Theorem 2) showed that VB tends to induce weaker sparsity than MAP in the LDA model,^3 i.e., VB requires a sparser prior (smaller $\alpha$) than MAP to give a sparse solution (mean of the posterior). This phenomenon is completely opposite to other models such as mixture models [24, 25, 13], hidden Markov models [11], Bayesian networks [23], and fully-observed matrix factorization [17], where VB tends to induce stronger sparsity than MAP. This phenomenon might be partly explained as follows: In the case of mixture models, the sparsity threshold depends on the degree of freedom of a single component [24]. This is reasonable because
^3 Although this tendency was previously pointed out [2] by using the approximation $\exp(\Psi(n)) \approx n - \frac{1}{2}$ and comparing the stationary conditions, our result has first clarified the sparsity behavior of the solution based on the asymptotic free-energy analysis without using such an approximation.
[Figure 2: Estimated number $\hat{H}$ of topics by (a) VB, (b) PBA, (c) PBB, and (d) MAP, for the artificial data with $L = 100$, $M = 100$, $H^* = 20$, and $N \approx 10000$; each panel is a heatmap over $(\alpha, \eta)$.]
[Figure 3: Estimated number $\hat{H}$ of topics for the Last.FM data with $L = 100$, $M = 100$, and $N \approx 700$; panels (a) VB, (b) PBA, (c) PBB, (d) MAP.]
adding a single component increases the model complexity by this amount. Also, in the case of
LDA, adding a single topic requires additional L + 1 parameters. However, the added topic is
shared over M documents, which could discount the increased model complexity relative to the
increased data fidelity. Corollary 1, which implies the dependency of the threshold for $\alpha$ on $L$ and $M$, might support this conjecture. However, the same applies to the matrix factorization, where VB
was shown to give a sparser solution than MAP [17]. Investigation on related models, e.g., Poisson
MF [9], would help us fully explain this phenomenon.
Technically, our theoretical analysis is based on the previous asymptotic studies on VB learning conducted for latent variable models [24, 25, 13, 11, 23]. However, our analysis is not just a straightforward extension of those works to the LDA model. For example, the previous analysis either
implicitly [24] or explicitly [13] assumed the consistency of VB learning, while we also analyzed
the consistency of VB-LDA, and showed that the consistency does not always hold (see Theorem 1).
Moreover, we derived a general form of the asymptotic free energy, which can be applied to different
asymptotic limits. Specifically, the standard asymptotic theory requires a large number N of words
per document, compared to the number M of documents and the vocabulary size L. This may be
reasonable in some collaborative filtering data such as the Last.FM data used in our experiments in
Section 4. However, L and/or M would be comparable to or larger than N in standard text analysis.
Our general form of the asymptotic free energy also allowed us to elucidate the behavior of the
VB free energy when L and/or M diverges with the same order as N . This attempt successfully
revealed the sparsity of the solution for the case when M diverges while L ? O(1). However, when
L diverges, we found that the leading term of the free energy does not contain interesting insight into
the sparsity of the solution. Higher-order asymptotic analysis will be necessary to further understand
the sparsity-inducing mechanism of the LDA model with large vocabulary.
4
Numerical Illustration
In this section, we conduct numerical experiments on artificial and real data for collaborative filtering.
The artificial data were created as follows. We first sample the true document matrix $\Theta^*$ of size $M \times H^*$ and the true topic matrix $B^*$ of size $L \times H^*$. We assume that each row $\tilde{\boldsymbol{\theta}}^*_m$ of $\Theta^*$ follows the Dirichlet distribution with $\alpha^* = 1/H^*$, while each column $\boldsymbol{\beta}^*_h$ of $B^*$ follows the Dirichlet distribution with $\eta^* = 1/L$. The document length $N^{(m)}$ is sampled from the Poisson distribution with its mean $N$. The word histogram $N^{(m)} \boldsymbol{v}_m$ for each document is sampled from the multinomial
distribution with the parameter specified by the $m$-th column vector of $B^* \Theta^{*\top}$. Thus, we obtain the $L \times M$ matrix $V$, which corresponds to the empirical word distribution over $M$ documents.
[Figure 4: Estimated number $\hat{H}$ of topics by VB-LDA for the artificial data with $H^* = 20$ and $N \approx 10000$; panels (a) $L = 100, M = 100$, (b) $L = 100, M = 1000$, (c) $L = 500, M = 100$, (d) $L = 500, M = 1000$. For the case when $L = 500$, $M = 1000$, the maximum estimated rank is limited to 100 for computational reasons.]
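This data-generation recipe is straightforward to reproduce; a sketch, with seed and helper name of our choosing.

```python
import numpy as np

def make_artificial(L, M, H_star, N_mean, seed=0):
    """Sample (V, N_m, Theta*, B*) following the recipe in the text."""
    rng = np.random.default_rng(seed)
    Theta = rng.dirichlet(np.full(H_star, 1.0 / H_star), size=M)   # alpha* = 1/H*
    B = rng.dirichlet(np.full(L, 1.0 / L), size=H_star).T          # eta*  = 1/L
    N_m = rng.poisson(N_mean, size=M)
    U = B @ Theta.T                                                # true L x M parameter
    cols = []
    for m in range(M):
        p = U[:, m] / U[:, m].sum()                 # guard against rounding error
        cols.append(rng.multinomial(N_m[m], p))
    counts = np.stack(cols, axis=1)
    V = counts / np.maximum(N_m, 1)                 # empirical word distributions (L x M)
    return V, N_m, Theta, B
```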
As a real-world dataset, we used the Last.FM dataset.^4 Last.FM is a well-known social music web site, and the dataset includes the triple ("user", "artist", "Freq"), which was collected from the playlists of users in the community by using a plug-in in users' media players. This triple means that "user" played "artist" music "Freq" times, which indicates users' preferred artists. A user and a played artist are analogous to a document and a word, respectively. We randomly chose $L$ artists from the top 1000 frequent artists, and $M$ users who live in the United States. To find a better local solution (which hopefully is close to the global solution), we adopted a split and merge strategy [22], and chose the local solution giving the lowest free energy among different initialization schemes.
Figure 2 shows the estimated number $\hat{H}$ of topics by different approximation methods, i.e., VB, PBA, PBB, and MAP, for the artificial data with $L = 100$, $M = 100$, $H^* = 20$, and $N \approx 10000$. We can clearly see that the sparsity threshold in PBB and MAP, where $\Theta$ is point-estimated, is larger than that in VB and PBA, where $\Theta$ is marginalized. This result supports the statement of Theorem 2. Figure 3 shows results on the Last.FM data with $L = 100$, $M = 100$, and $N \approx 700$. We see a similar tendency to Figure 2, except in the region where $\eta < 1$ for PBA, in which our theory does not predict the estimated number of topics.
Finally, we investigate how different asymptotic settings affect the topic sparsity. Figure 4 shows
the sparsity dependence on L and M for the artificial data. The graphs correspond to the four
cases mentioned in Theorem 1, i.e., (a) $L, M \ll N$, (b) $L \ll N, M$, (c) $M \ll N, L$, and (d) $1 \ll L, M, N$. Corollary 1 explains the behavior in (a) and (b), and further analysis is required to
explain the behavior in (c) and (d).
5
Conclusion
In this paper, we considered variational Bayesian (VB) learning in the latent Dirichlet allocation
(LDA) model and analytically derived the leading term of the asymptotic free energy. When the
vocabulary size is small, our result theoretically explains the phase-transition phenomenon. On the
other hand, when vocabulary size is as large as the number of words per document, the leading term
tells nothing about sparsity. We need more accurate analysis to clarify the sparsity in such cases.
Throughout the paper, we assumed that the hyperparameters $\alpha$ and $\eta$ are pre-fixed. However, $\alpha$ would often be estimated for each topic $h$, which is one of the advantages of using the LDA model
in practice [5]. In the future work, we will extend the current line of analysis to the empirical
Bayesian setting where the hyperparameters are also learned, and further elucidate the behavior of
the LDA model.
Acknowledgments
The authors thank the reviewers for helpful comments. Shinichi Nakajima thanks the support
from Nikon Corporation, MEXT Kakenhi 23120004, and the Berlin Big Data Center project (FKZ
01IS14013A). Masashi Sugiyama thanks the support from the JST CREST program. Kazuho Watanabe thanks the support from JSPS Kakenhi 23700175 and 25120014.
^4 http://mtg.upf.edu/node/1671
References
[1] H. Alzer. On some inequalities for the Gamma and Psi functions. Mathematics of Computation, 66(217):373-389, 1997.
[2] A. Asuncion, M. Welling, P. Smyth, and Y. W. Teh. On smoothing and inference for topic models. In Proc. of UAI, pages 27-34, 2009.
[3] H. Attias. Inferring parameters and structure of latent variable models by variational Bayes. In Proc. of UAI, pages 21-30, 1999.
[4] M. Bicego, P. Lovato, A. Ferrarini, and M. Delledonne. Biclustering of expression microarray data with topic models. In Proc. of ICPR, pages 2728-2731, 2010.
[5] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
[6] X. Chen, X. Hu, X. Shen, and G. Rosen. Probabilistic topic modeling for genomic data interpretation. In 2010 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pages 149-152, 2010.
[7] Z. Ghahramani and M. J. Beal. Graphical models and variational methods. In Advanced Mean Field Methods, pages 161-177. MIT Press, 2001.
[8] M. Girolami and A. Kaban. On an equivalence between PLSI and LDA. In Proc. of SIGIR, pages 433-434, 2003.
[9] P. Gopalan, J. M. Hofman, and D. M. Blei. Scalable recommendation with Poisson factorization. arXiv:1311.1704 [cs.IR], 2013.
[10] T. Hofmann. Unsupervised learning by probabilistic latent semantic analysis. Machine Learning, 42:177-196, 2001.
[11] T. Hosino, K. Watanabe, and S. Watanabe. Stochastic complexity of hidden Markov models on the variational Bayesian learning. IEICE Trans. on Information and Systems, J89-D(6):1279-1287, 2006.
[12] T. Huynh, M. Fritz, and B. Schiele. Discovery of activity patterns using topic models. In International Conference on Ubiquitous Computing (UbiComp), 2008.
[13] D. Kaji, K. Watanabe, and S. Watanabe. Phase transition of variational Bayes learning in Bernoulli mixture. Australian Journal of Intelligent Information Processing Systems, 11(4):35-40, 2010.
[14] R. Krestel, P. Fankhauser, and W. Nejdl. Latent Dirichlet allocation for tag recommendation. In Proceedings of the Third ACM Conference on Recommender Systems, pages 61-68, 2009.
[15] F.-F. Li and P. Perona. A Bayesian hierarchical model for learning natural scene categories. In Proc. of CVPR, pages 524-531, 2005.
[16] I. Mukherjee and D. M. Blei. Relative performance guarantees for approximate inference in latent Dirichlet allocation. In Advances in NIPS, 2008.
[17] S. Nakajima and M. Sugiyama. Theoretical analysis of Bayesian matrix factorization. Journal of Machine Learning Research, 12:2579-2644, 2011.
[18] S. Nakajima, M. Sugiyama, and S. D. Babacan. On Bayesian PCA: Automatic dimensionality selection and analytic solution. In Proc. of ICML, pages 497-504, 2011.
[19] R. M. Neal. Bayesian Learning for Neural Networks. Springer, 1996.
[20] S. Purushotham, Y. Liu, and C. C. J. Kuo. Collaborative topic regression with social matrix factorization for recommendation systems. In Proc. of ICML, 2012.
[21] Y. W. Teh, D. Newman, and M. Welling. A collapsed variational Bayesian inference algorithm for latent Dirichlet allocation. In Advances in NIPS, 2007.
[22] N. Ueda, R. Nakano, Z. Ghahramani, and G. E. Hinton. SMEM algorithm for mixture models. Neural Computation, 12(9):2109-2128, 2000.
[23] K. Watanabe, M. Shiga, and S. Watanabe. Upper bound for variational free energy of Bayesian networks. Machine Learning, 75(2):199-215, 2009.
[24] K. Watanabe and S. Watanabe. Stochastic complexities of Gaussian mixtures in variational Bayesian approximation. Journal of Machine Learning Research, 7:625-644, 2006.
[25] K. Watanabe and S. Watanabe. Stochastic complexities of general mixture models in variational Bayesian learning. Neural Networks, 20(2):210-219, 2007.
[26] X. Wei and W. B. Croft. LDA-based document models for ad-hoc retrieval. In Proc. of SIGIR, pages 178-185, 2006.
| 5558 |@word achievable:1 stronger:2 plsa:1 hu:1 liu:1 united:1 denoting:1 document:17 current:1 com:1 comparing:1 written:3 numerical:3 informative:1 hofmann:1 analytic:1 update:1 stationary:4 generative:2 item:1 accordingly:1 blei:3 provides:2 node:1 clarified:1 issei:1 consists:1 introduce:1 theoretically:6 expected:2 behavior:12 nor:1 bibm:1 automatically:1 considering:1 project:1 estimating:1 notation:1 moreover:1 mass:1 factorized:1 lowest:2 medium:1 finding:2 corporation:2 guarantee:1 duplicating:1 masashi:2 exactly:1 rm:1 control:1 kobayashi:2 local:2 tends:5 limit:16 merge:1 might:2 chose:2 initialization:1 equivalence:1 factorization:9 limited:2 range:3 practical:2 acknowledgment:1 practice:1 empirical:3 word:15 induce:4 integrating:1 pre:1 pbb:12 close:1 selection:1 collapsed:3 applying:1 live:1 map:23 reviewer:1 center:2 straightforward:1 focused:1 shen:1 sigir:2 simplicity:2 is14013a:1 m2:4 rule:1 insight:1 analogous:1 elucidate:2 suppose:1 user:7 elucidates:1 smyth:1 pa:1 mukherjee:1 observed:6 capture:1 region:1 mentioned:1 complexity:5 schiele:1 weakly:1 solving:1 depend:1 hofman:1 predictive:1 technically:1 completely:2 compactly:1 mh:1 various:4 represented:1 ubicomp:1 artificial:7 tell:1 newman:1 whose:2 larger:4 cvpr:1 kaji:1 otherwise:1 beal:1 hoc:1 advantage:2 frequent:1 tu:2 inducing:6 validate:1 convergence:1 diverges:3 produce:2 converges:3 object:2 help:1 derive:2 ac:3 ard:1 op:17 eq:3 c:2 implies:3 australian:1 girolami:1 tokyo:6 peculiarity:1 stochastic:3 human:1 jst:1 explains:2 fix:1 investigation:1 extension:1 clarify:2 hold:9 around:2 considered:1 exp:10 predict:1 estimation:3 proc:7 wl:6 successfully:3 mit:1 clearly:1 genomic:1 always:1 gaussian:1 corollary:5 derived:3 focus:1 kakenhi:2 rank:3 likelihood:1 indicates:1 bernoulli:1 digamma:1 rigorous:2 helpful:1 inference:4 dependent:2 lowercase:2 vl:4 shiga:1 hidden:4 relation:2 perona:1 playlist:1 germany:1 classification:1 fidelity:1 among:1 smoothing:1 mutual:1 equal:3 field:1 ng:1 eliminated:1 unsupervised:1 icml:2 future:1 minimized:1 others:1 rosen:1 intelligent:1 randomly:1 gamma:1 phase:2 attempt:2 freedom:2 investigate:5 mixture:10 analyzed:3 tut:1 accurate:1 necessary:1 conduct:2 theoretical:6 increased:2 column:5 modeling:1 entry:3 jsps:1 conducted:1 dependency:1 thereom:1 thanks:3 international:2 probabilistic:3 opposed:1 leading:8 li:1 japan:4 de:1 bold:2 includes:1 satisfy:1 notable:2 explicitly:1 depends:2 ad:1 analyze:2 mario:1 bayes:4 identifiability:1 asuncion:1 collaborative:5 minimize:1 ir:1 pba:14 kazuho:2 who:1 correspond:1 bayesian:23 artist:6 minm:1 biomedicine:1 explain:2 bicego:1 energy:18 sugi:1 naturally:1 associated:1 proof:5 psi:1 sampled:2 dataset:3 popular:1 knowledge:1 dimensionality:1 ubiquitous:1 higher:1 follow:2 wei:1 formulation:1 evaluated:1 just:1 hand:2 web:1 hopefully:1 lda:35 reveal:1 ieice:1 effect:1 validity:2 normalized:1 true:10 contain:2 analytically:3 semantic:2 freq:2 neal:1 ll:1 huynh:1 complete:1 demonstrate:2 image:2 variational:19 meaning:1 multinomial:6 rl:3 jp:3 extend:1 interpretation:1 numerically:1 aichi:1 automatic:2 outlined:1 consistency:4 similarly:1 pointed:1 mathematics:1 sugiyama:4 maxh:2 base:1 posterior:7 closest:1 showed:2 plsi:1 irrelevant:1 inequality:1 additional:1 prune:1 converge:2 upf:1 determination:1 plug:1 retrieval:2 divided:1 variant:1 scalable:1 regression:1 itc:1 expectation:1 poisson:3 arxiv:1 histogram:1 nakajima:5 microarray:1 unlike:1 comment:1 jordan:1 call:1 revealed:2 split:1 affect:1 fm:7 opposite:2 fkz:1 attias:1 
expression:2 pca:1 useful:1 generally:1 gopalan:1 amount:3 discount:1 category:1 http:1 exist:2 estimated:12 delta:1 per:2 dbd:1 express:5 four:2 terminology:2 threshold:11 neither:1 nikon:3 graph:1 sum:1 realworld:1 letter:2 throughout:2 reasonable:2 ueda:1 appendix:5 vb:53 comparable:1 bound:1 played:2 mtg:1 activity:2 sato:2 constraint:1 precisely:1 scene:1 flat:1 tag:1 babacan:1 min:5 conjecture:1 icpr:1 smaller:1 nejdl:1 hl:1 explained:1 computationally:1 previously:2 discus:2 mechanism:4 adopted:1 apply:1 observe:1 hierarchical:1 occurrence:6 jn:1 original:2 denotes:4 dirichlet:17 top:1 graphical:3 marginalized:3 nakano:1 music:4 giving:1 ghahramani:2 prof:1 approximating:1 unchanged:1 bl:7 added:1 font:1 strategy:1 dependence:1 exhibit:1 thank:1 berlin:5 topic:32 collected:1 reason:1 assuming:1 length:1 illustration:1 ratio:1 equivalently:1 setup:1 unfortunately:1 statement:1 teh:2 recommender:1 upper:1 observation:2 markov:4 minh:3 tilde:1 hinton:1 shinichi:2 community:1 required:1 specified:1 learned:1 nip:2 trans:1 below:3 pattern:1 sparsity:29 program:1 including:1 max:3 event:1 natural:1 indicator:3 advanced:1 scheme:1 technology:1 smem:1 created:1 hm:2 text:5 prior:5 discovery:1 zh:6 asymptotic:19 relative:2 fully:4 interesting:1 allocation:11 filtering:4 triple:2 degree:2 sufficient:2 row:4 last:8 free:18 drastically:2 weaker:3 understand:1 absolute:1 sparse:17 maxl:2 vocabulary:5 transition:4 world:2 author:1 made:1 social:2 welling:2 crest:1 approximate:2 kaban:1 implicitly:1 preferred:1 confirm:1 global:1 investigating:1 uai:2 corpus:2 assumed:4 conclude:1 latent:19 table:3 kanagawa:1 model3:1 investigated:1 dense:11 big:2 whole:1 hyperparameters:6 nothing:1 allowed:1 site:1 position:1 watanabe:12 explicit:2 inferring:1 third:1 croft:1 theorem:10 dominates:1 dl:1 intractable:2 consist:1 adding:2 gap:1 sparser:2 mf:1 chen:1 entropy:1 simply:1 expressed:1 partially:3 recommendation:4 biclustering:1 applies:1 springer:1 corresponds:3 determines:1 extracted:1 acm:1 shared:1 change:3 experimentally:1 included:1 specifically:3 except:1 lemma:9 called:1 kuo:1 partly:1 tendency:2 player:1 support:5 mext:1 relevance:1 bioinformatics:1 phenomenon:6 |
5,035 | 5,559 | Decoupled Variational Gaussian Inference
Mohammad Emtiyaz Khan
Ecole Polytechnique F?ed?erale de Lausanne (EPFL), Switzerland
[email protected]
Abstract
Variational Gaussian (VG) inference methods that optimize a lower bound to the
marginal likelihood are a popular approach for Bayesian inference. A difficulty
remains in computation of the lower bound when the latent dimensionality L is
large. Even though the lower bound is concave for many models, its computation
requires optimization over O(L2 ) variational parameters. Efficient reparameterization schemes can reduce the number of parameters, but give inaccurate solutions or destroy concavity leading to slow convergence. We propose decoupled
variational inference that brings the best of both worlds together. First, it maximizes a Lagrangian of the lower bound reducing the number of parameters to
O(N ), where N is the number of data examples. The reparameterization obtained
is unique and recovers maxima of the lower-bound even when it is not concave.
Second, our method maximizes the lower bound using a sequence of convex problems, each of which is parallellizable over data examples. Each gradient computation reduces to prediction in a pseudo linear regression model, thereby avoiding
all direct computations of the covariance and only requiring its linear projections.
Theoretically, our method converges at the same rate as existing methods in the
case of concave lower bounds, while remaining convergent at a reasonable rate for
the non-concave case.
1
Introduction
Large-scale Bayesian inference remains intractable for many models, such as logistic regression,
sparse linear models, or dynamical systems with non-Gaussian observations. Approximate Bayesian
inference requires fast, robust, and reliable algorithms. In this context, algorithms based on variational Gaussian (VG) approximations are growing in popularity [17, 3, 13, 6] since they strike a
favorable balance between accuracy, generality, speed, and ease of use.
VG inference remains problematic for models with large latent-dimensionality. While some variants are convex [3], they require O(L2 ) variational parameters to be optimized, where L is the
latent-dimensionality. This slows down the optimization. One solution is to restrict the covariance
representations by naive mean-field [2] or restricted Cholesky [3], but this can result in considerable
loss of accuracy when significant posterior correlations exist. An alternative is to reparameterize
the covariance to obtain O(N ) number of parameters, where N is the number of data examples
[17]. However, this destroys the convexity and converges slowly [12]. A recent approach called
dual variational inference [10] obtains fast convergence while retaining this parameterization, but is
applicable to only some models such as Poisson regression.
In this paper, we propose an approach called decoupled variational Gaussian inference which extends the dual variational inference to a large class of models. Our method relies on the theory
of Lagrangian multiplier methods. While remaining widely applicable, our approach reduces the
number of variational parameters similar to [17, 10] and converges at similar convergence rates as
convex methods such as [3]. Our method is similar in spirit to parallel expectation-propagation (EP)
but has provable convergence guarantees even when likelihoods are not log-concave.
2
The Model
In this paper, we apply our method for Bayesian inference on Latent Gaussian Models (LGMs).
This choice is motivated by a large amount of existing work on VG approximations for LGMs
[16, 17, 3, 10, 12, 11, 7, 2], and because LGMs include many popular models, such as Gaussian
processes, Bayesian regression and classification, Gaussian Markov random field, and probabilistic
PCA. An extensive list of these models is given in Chapter 1 of [9]. We have also included few
examples in the supplementary material.
Given a vector of observations y of length N , LGMs model the dependencies among its components
using a latent Gaussian vector z of length L. The joint distribution is shown below.
$$p(\boldsymbol{y}, \boldsymbol{z}) = \prod_{n=1}^{N} p_n(y_n | \eta_n)\, p(\boldsymbol{z}), \qquad \boldsymbol{\eta} = \mathbf{W} \boldsymbol{z}, \qquad p(\boldsymbol{z}) := \mathcal{N}(\boldsymbol{z} | \boldsymbol{\mu}, \boldsymbol{\Sigma}) \qquad (1)$$
where $\mathbf{W}$ is a known real-valued matrix of size $N \times L$, and is used to define the linear predictors $\boldsymbol{\eta}$. Each $\eta_n$ is used to model the observation $y_n$ using a link function $p_n(y_n | \eta_n)$. The exact form of this function depends on the type of observations, e.g. a Bernoulli-logit distribution can be used for binary data [14, 7]. See the supplementary material for an example. Usually, an exponential family distribution is used, although there are other choices (such as the T-distribution [8, 17]). The parameter set $\boldsymbol{\theta}$ includes $\{\mathbf{W}, \boldsymbol{\mu}, \boldsymbol{\Sigma}\}$ and other parameters of the link function, and is assumed to be known. We suppress $\boldsymbol{\theta}$ in our notation, for simplicity.
In Bayesian inference, we wish to compute expectations with respect to the posterior distribution $p(\boldsymbol{z}|\boldsymbol{y})$, which is shown below. Another important task is the computation of the marginal likelihood $p(\boldsymbol{y})$, which can be maximized to estimate the parameters $\boldsymbol{\theta}$, for example, using empirical Bayes [18].
$$p(\boldsymbol{z}|\boldsymbol{y}) \propto \prod_{n=1}^{N} p(y_n | \eta_n)\, \mathcal{N}(\boldsymbol{z} | \boldsymbol{\mu}, \boldsymbol{\Sigma}), \qquad p(\boldsymbol{y}) = \int \prod_{n=1}^{N} p(y_n | \eta_n)\, \mathcal{N}(\boldsymbol{z} | \boldsymbol{\mu}, \boldsymbol{\Sigma})\, d\boldsymbol{z} \qquad (2)$$
For non-Gaussian likelihoods, both of these tasks are intractable. Applications in practice demand
good approximations that scale favorably in N and L.
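As a concrete instance of Eq. (1), the sketch below draws from a tiny Bernoulli-logit LGM; the dimensions and seed are arbitrary illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(1)
N, L = 5, 3
W = rng.standard_normal((N, L))       # known predictor matrix
mu, Sigma = np.zeros(L), np.eye(L)    # Gaussian prior on z

z = rng.multivariate_normal(mu, Sigma)
eta = W @ z                           # linear predictors
p = 1.0 / (1.0 + np.exp(-eta))        # Bernoulli-logit link p(y_n = 1 | eta_n)
y = rng.binomial(1, p)
```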
3
VG Inference by Lower Bound Maximization
In variational Gaussian (VG) inference [17], we assume the posterior to be a Gaussian $q(\boldsymbol{z}) = \mathcal{N}(\boldsymbol{z} | \mathbf{m}, \mathbf{V})$. The posterior mean $\mathbf{m}$ and covariance $\mathbf{V}$ form the set of variational parameters, and are chosen to maximize the variational lower bound to the log marginal likelihood shown in Eq. (3). To get this lower bound, we first multiply and divide by $q(\boldsymbol{z})$ and then apply Jensen's inequality using the concavity of $\log$.
$$\log p(\boldsymbol{y}) = \log \int q(\boldsymbol{z})\, \frac{\prod_n p(y_n|\eta_n)\, p(\boldsymbol{z})}{q(\boldsymbol{z})}\, d\boldsymbol{z} \ \ge\ \mathbb{E}_{q(\boldsymbol{z})} \log \frac{\prod_n p(y_n|\eta_n)\, p(\boldsymbol{z})}{q(\boldsymbol{z})} \qquad (3)$$
The simplified lower bound is shown in Eq. (4). The detailed derivation can be found in Eqs. (4)-(7) of [11] (and in the supplementary material). Below, we provide a summary of its components.
$$\max_{\mathbf{m}, \mathbf{V} \succ 0}\ -D[q(\boldsymbol{z}) \,\|\, p(\boldsymbol{z})] - \sum_{n=1}^{N} f_n(\tilde{m}_n, \tilde{\sigma}_n), \qquad f_n(\tilde{m}_n, \tilde{\sigma}_n) := \mathbb{E}_{\mathcal{N}(\eta_n | \tilde{m}_n, \tilde{\sigma}_n^2)}[-\log p(y_n|\eta_n)] \qquad (4)$$
The first term is the KL-divergence $D[q \,\|\, p] = \mathbb{E}_q[\log q(\boldsymbol{z}) - \log p(\boldsymbol{z})]$, which is jointly concave in $(\mathbf{m}, \mathbf{V})$. The second term sums over data examples, where each term denoted by $f_n$ is the expectation of $-\log p(y_n|\eta_n)$ with respect to $\eta_n$. Since $\eta_n = \mathbf{w}_n^\top \boldsymbol{z}$, it follows a Gaussian distribution $q(\eta_n) = \mathcal{N}(\tilde{m}_n, \tilde{\sigma}_n^2)$ with mean $\tilde{m}_n = \mathbf{w}_n^\top \mathbf{m}$ and variance $\tilde{\sigma}_n^2 = \mathbf{w}_n^\top \mathbf{V} \mathbf{w}_n$. The terms $f_n$ are not always available in closed form, but can be computed using quadrature or look-up tables [14]. Note that, unlike many other methods such as [2, 11, 10, 7, 21], we do not bound or approximate these terms. Such approximations lead to loss of accuracy.
We denote the lower bound of Eq. (3) by $f$ and expand it below in Eq. (5):
$$f(\mathbf{m}, \mathbf{V}) := \tfrac{1}{2} \big[ \log |\mathbf{V}| - \mathrm{Tr}(\mathbf{V} \boldsymbol{\Sigma}^{-1}) - (\mathbf{m} - \boldsymbol{\mu})^\top \boldsymbol{\Sigma}^{-1} (\mathbf{m} - \boldsymbol{\mu}) + L \big] - \sum_{n=1}^{N} f_n(\tilde{m}_n, \tilde{\sigma}_n) \qquad (5)$$
Here $|\mathbf{V}|$ denotes the determinant of $\mathbf{V}$. We now discuss existing methods and their pros and cons.
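For reference, the bound $f(\mathbf{m}, \mathbf{V})$ of Eq. (5) can be evaluated with Gauss-Hermite quadrature for the $f_n$ terms. The sketch below assumes a Bernoulli-logit link and NumPy; it is our illustration of Eqs. (4)-(5), not code from the paper.

```python
import numpy as np

def vg_lower_bound(m, V, W, mu, Sigma, y, n_quad=20):
    """Evaluate f(m, V) of Eq. (5) for a Bernoulli-logit likelihood."""
    L = len(mu)
    Sinv = np.linalg.inv(Sigma)
    d = m - mu
    kl_part = 0.5 * (np.linalg.slogdet(V)[1] - np.trace(Sinv @ V) - d @ Sinv @ d + L)
    m_t = W @ m                                           # marginal means of eta_n
    s_t = np.sqrt(np.einsum('nl,lk,nk->n', W, V, W))      # marginal std. deviations
    # f_n = E[-log p(y_n | eta_n)] by Gauss-Hermite quadrature under N(m_t, s_t^2).
    x, wts = np.polynomial.hermite_e.hermegauss(n_quad)
    eta = m_t[:, None] + s_t[:, None] * x[None, :]
    nll = np.logaddexp(0.0, -(2.0 * y[:, None] - 1.0) * eta)  # -log sigmoid((2y-1)eta)
    f_n = nll @ wts / np.sqrt(2.0 * np.pi)
    return kl_part - f_n.sum()
```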
3.1
Related Work
A straight-forward approach is to optimize Eq. (5) directly in (m, V) [2, 3, 14, 11]. In practice,
direct methods are slow and memory-intensive because of the very large number L + L(L + 1)/2
of variables. Challis and Barber [3] show that for log-concave likelihoods $p(y_n|\eta_n)$, the original
problem Eq. (4) is jointly concave in m and the Cholesky factor of V. This fact, however, does
not result in any reduction in the number of parameters, and they propose to use factorizations of a
restricted form, which negatively affects the approximation accuracy.
[17] and [16] note that the optimal $\mathbf{V}^*$ must be of the form $\mathbf{V}^* = [\boldsymbol{\Sigma}^{-1} + \mathbf{W}^\top \mathrm{diag}(\boldsymbol{\lambda}) \mathbf{W}]^{-1}$, which suggests reparameterizing Eq. (5) in terms of $L+N$ parameters $(\mathbf{m}, \boldsymbol{\lambda})$, where $\boldsymbol{\lambda}$ is the new variable.
However, the problem is not concave in this alternative parameterization [12]. Moreover, as shown
in [12] and [10], convergence can be exceedingly slow. The coordinate-ascent approach of [12] and
dual variational inference [10] both speed-up convergence, but only for a limited class of models.
A range of different deterministic inference approximations exist as well. The local variational
method is convex for log-concave potentials and can be solved at very large scales [23], but applies
only to models with super-Gaussian likelihoods. The bound it maximizes is provably less tight than
Eq. (4) [22, 3] making it less accurate. Expectation propagation (EP) [15, 21] is more general
and can be more accurate than most other approximations mentioned here. However, it is based
on a saddle-point rather than an optimization problem, and the standard EP algorithm does not always converge and can be numerically unstable. Among these alternatives, the variational Gaussian
approximation stands out as a compromise between accuracy and good algorithmic properties.
4
Decoupled Variational Gaussian Inference using a Lagrangian
We simplify the form of the objective function by decoupling the KL divergence term from the terms including $f_n$. In other words, we separate the prior distribution from the likelihoods. We do so by introducing real-valued auxiliary variables $h_n$ and $\sigma_n > 0$, such that the following constraints hold: $h_n = \tilde{m}_n$ and $\sigma_n = \tilde{\sigma}_n$. This gives us the following (equivalent) optimization problem over $\mathbf{x} := \{\mathbf{m}, \mathbf{V}, \mathbf{h}, \boldsymbol{\sigma}\}$,
$$\max_{\mathbf{x}}\ g(\mathbf{x}) := \tfrac{1}{2} \big[ \log |\mathbf{V}| - \mathrm{Tr}(\mathbf{V} \boldsymbol{\Sigma}^{-1}) - (\mathbf{m} - \boldsymbol{\mu})^\top \boldsymbol{\Sigma}^{-1} (\mathbf{m} - \boldsymbol{\mu}) + L \big] - \sum_{n=1}^{N} f_n(h_n, \sigma_n) \qquad (6)$$
subject to the constraints $c_{1n}(\mathbf{x}) := h_n - \mathbf{w}_n^\top \mathbf{m} = 0$ and $c_{2n}(\mathbf{x}) := \tfrac{1}{2}(\sigma_n^2 - \mathbf{w}_n^\top \mathbf{V} \mathbf{w}_n) = 0$ for all $n$.
For log-concave likelihoods, the function $g(\mathbf{x})$ is concave in $\mathbf{V}$, unlike the original function $f$ (see Eq. (5)), which is concave with respect to the Cholesky factor of $\mathbf{V}$. The difficulty now lies with the non-linear constraints $c_{2n}(\mathbf{x})$. We will now establish that the new problem gives rise to a convenient parameterization, but does not affect the maximum.
The significance of this reformulation lies in its Lagrangian, shown below.
$$\mathcal{L}(\mathbf{x}, \boldsymbol{\alpha}, \boldsymbol{\lambda}) := g(\mathbf{x}) + \sum_{n=1}^{N} \alpha_n (h_n - \mathbf{w}_n^\top \mathbf{m}) + \tfrac{1}{2} \lambda_n (\sigma_n^2 - \mathbf{w}_n^\top \mathbf{V} \mathbf{w}_n) \qquad (7)$$
Here, $\alpha_n, \lambda_n$ are Lagrangian multipliers for the constraints $c_{1n}(\mathbf{x})$ and $c_{2n}(\mathbf{x})$. We will now show that the maximum of $f$ of Eq. (5) can be parameterized in terms of these multipliers, and that this reparameterization is unique. The following theorem states this result along with three other useful relationships between the maxima of Eq. (5), (6), and (7). The proof is in the supplementary material.
Theorem 4.1. The following holds for maxima of Eq. (5), (6), and (7):
1. A stationary point $\mathbf{x}^*$ of Eq. (6) will also be a stationary point of Eq. (5). For every such stationary point $\mathbf{x}^*$, there exist unique $\boldsymbol{\alpha}^*$ and $\boldsymbol{\lambda}^*$ such that
$$\mathbf{V}^* = [\boldsymbol{\Sigma}^{-1} + \mathbf{W}^\top \mathrm{diag}(\boldsymbol{\lambda}^*) \mathbf{W}]^{-1}, \qquad \mathbf{m}^* = \boldsymbol{\mu} - \boldsymbol{\Sigma} \mathbf{W}^\top \boldsymbol{\alpha}^* \qquad (8)$$
2. The $\alpha_n^*$ and $\lambda_n^*$ depend on the gradient of the function $f_n$ and satisfy the following conditions,
$$\nabla_{h_n} f_n(h_n^*, \sigma_n^*) = \alpha_n^*, \qquad \nabla_{\sigma_n} f_n(h_n^*, \sigma_n^*) = \lambda_n^* \sigma_n^* \qquad (9)$$
where $h_n^* = \mathbf{w}_n^\top \mathbf{m}^*$ and $(\sigma_n^*)^2 = \mathbf{w}_n^\top \mathbf{V}^* \mathbf{w}_n$ for all $n$, and $\nabla_{\mathbf{x}} f(\mathbf{x}^*)$ denotes the gradient of $f(\mathbf{x})$ with respect to $\mathbf{x}$ at $\mathbf{x} = \mathbf{x}^*$.
3. When $\{\mathbf{m}^*, \mathbf{V}^*\}$ is a local maximizer of Eq. (5), then the set $\{\mathbf{m}^*, \mathbf{V}^*, \mathbf{h}^*, \boldsymbol{\sigma}^*, \boldsymbol{\alpha}^*, \boldsymbol{\lambda}^*\}$ is a strict maximizer of Eq. (7).
4. When the likelihoods $p(y_n|\eta_n)$ are log-concave, there is only one global maximum of $f$, and any $\{\mathbf{m}^*, \mathbf{V}^*\}$ obtained by maximizing Eq. (7) will be the global maximizer of Eq. (5).
Part 1 establishes the parameterization of $(\mathbf{m}^*, \mathbf{V}^*)$ by $(\boldsymbol{\alpha}^*, \boldsymbol{\lambda}^*)$ and its uniqueness, while part 2 shows the conditions that $(\boldsymbol{\alpha}^*, \boldsymbol{\lambda}^*)$ satisfy. This form has also been used in [12] for Gaussian processes, where a fixed-point iteration was employed to search for $\boldsymbol{\lambda}^*$. Part 3 shows that such a parameterization can be obtained at maxima of the Lagrangian rather than minima or saddle-points. The final part considers the case when $f$ is concave and shows that the global maximum can be obtained by maximizing the Lagrangian. Note that concavity of the lower bound is required for the last part only, and the other three parts are true irrespective of concavity.
Detailed proof of the theorem is given in the supplementary material.
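Given multipliers $(\boldsymbol{\alpha}, \boldsymbol{\lambda})$, the stationary pair $(\mathbf{m}^*, \mathbf{V}^*)$ of Eq. (8) is immediate to form. A sketch, using dense linear algebra for clarity; at scale one would avoid forming $\mathbf{V}$ explicitly, as the paper emphasizes.

```python
import numpy as np

def params_from_multipliers(alpha, lam, W, mu, Sigma):
    """Form (m*, V*) from the multipliers via Eq. (8)."""
    Sinv = np.linalg.inv(Sigma)
    V = np.linalg.inv(Sinv + W.T @ (lam[:, None] * W))  # [Sigma^{-1} + W' diag(lam) W]^{-1}
    m = mu - Sigma @ (W.T @ alpha)                      # mu - Sigma W' alpha
    return m, V
```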
Note that the conditions of Eq. (9) restrict the values that $\alpha_n^*$ and $\lambda_n^*$ can take. Their values will be valid only in the range of the gradients of $f_n$. This is unlike the formulation of [17], which does not constrain these variables, but is similar to the method of [10]. We will see later that our algorithm makes the problem infeasible for values outside this range. Ranges of these variables vary depending on the likelihood $p(y_n|\eta_n)$. However, we show below in Eq. (10) that $\lambda_n^*$ is always strictly positive for log-concave likelihoods. The first equality is obtained using Eq. (9), while the second equality is simply a change of variables from $\sigma_n$ to $\sigma_n^2$. The third equality is obtained using Eq. (19) from [17]. The final inequality is obtained since $f_n$ is convex for all log-concave likelihoods ($\nabla^2_{\mathbf{x}\mathbf{x}} f(\mathbf{x})$ denotes the Hessian of $f(\mathbf{x})$).
$$\lambda_n^* = \sigma_n^{*-1} \nabla_{\sigma_n} f_n(h_n^*, \sigma_n^*) = 2 \nabla_{\sigma_n^2} f_n(h_n^*, \sigma_n^*) = \nabla^2_{h_n h_n} f_n(h_n^*, \sigma_n^*) > 0 \qquad (10)$$
5
Optimization Algorithms for Decoupled Variational Gaussian Inference
Theorem 4.1 suggests that the optimal solution can be obtained by maximizing g(x) or the Lagrangian L. The maximization is difficult for two reasons. First, the constraints c2n (x) are non-linear
and second the function g(x) may not always be concave. Note that it is not easy to apply the augmented Lagrangian method or first-order methods (see Chapter 4 of [1]) because their application
would require storage of V. Instead, we use a method based on linearization of the constraints which
will avoid explicit computation and storage of V. First, we will show that when g(x) is concave,
we can maximize it by minimizing a sequence of convex problems. We will then solve each convex
problem using the dual-variational method of [10].
5.1
Linearly Constrained Lagrangian (LCL) Method
We now derive an algorithm based on the linearly constrained Lagrangian (LCL) method [19]. The
LCL approach involves linearization of the non-linear constraints and is an effective method for
large-scale optimization, e.g. in packages such as MINOS [24]. There are variants of this method
that are globally convergent and robust [4], but we use the variant described in Chapter 17 of [24].
The final algorithm: See Algorithm 1. We start with some $\boldsymbol{\alpha}$, $\boldsymbol{\lambda}$, and $\boldsymbol{\sigma}$. At every iteration $k$, we minimize the following dual:
$$\min_{\boldsymbol{\alpha}, \boldsymbol{\lambda} \in \mathcal{S}}\ -\tfrac{1}{2} \log |\boldsymbol{\Sigma}^{-1} + \mathbf{W}^\top \mathrm{diag}(\boldsymbol{\lambda}) \mathbf{W}| + \tfrac{1}{2} \boldsymbol{\alpha}^\top \tilde{\boldsymbol{\Sigma}} \boldsymbol{\alpha} - \tilde{\boldsymbol{\mu}}^\top \boldsymbol{\alpha} + \sum_{n=1}^{N} f^*_{nk}(\alpha_n, \lambda_n) \qquad (11)$$
Here, $\tilde{\boldsymbol{\Sigma}} = \mathbf{W} \boldsymbol{\Sigma} \mathbf{W}^\top$ and $\tilde{\boldsymbol{\mu}} = \mathbf{W} \boldsymbol{\mu}$. The functions $f^*_{nk}$ are obtained as follows:
$$f^*_{nk}(\alpha_n, \lambda_n) := \max_{h_n, \sigma_n > 0}\ -f_n(h_n, \sigma_n) + \alpha_n h_n + \tfrac{1}{2} \lambda_n \sigma_n^k (2\sigma_n - \sigma_n^k) - \tfrac{1}{2} \lambda_n^k (\sigma_n - \sigma_n^k)^2 \qquad (12)$$
where $\lambda_n^k$ and $\sigma_n^k$ were obtained at the previous iteration.
Algorithm 1 Linearly constrained Lagrangian (LCL) method for VG approximation
  Initialize $\boldsymbol{\alpha}, \boldsymbol{\lambda} \in \mathcal{S}$ and $\boldsymbol{\sigma} \succ 0$.
  for $k = 1, 2, 3, \ldots$ do
    $\boldsymbol{\lambda}^k \leftarrow \boldsymbol{\lambda}$ and $\boldsymbol{\sigma}^k \leftarrow \boldsymbol{\sigma}$.
    repeat
      For all $n$, compute the predictive means $\tilde{m}_n^*$ and variances $\tilde{v}_n^*$ using linear regression (Eq. (13)).
      For all $n$, in parallel, compute the $(h_n^*, \sigma_n^*)$ that maximize Eq. (12).
      Find the next $(\boldsymbol{\alpha}, \boldsymbol{\lambda}) \in \mathcal{S}$ using the gradients $g_n^\alpha = h_n^* - \tilde{m}_n^*$ and $g_n^\lambda = \tfrac{1}{2}[-(\sigma_n^k)^2 + 2\sigma_n^k \sigma_n^* - \tilde{v}_n^*]$.
    until convergence
  end for
The constraint set $\mathcal{S}$ is a box constraint on $\alpha_n$ and $\lambda_n$ such that a global minimum of Eq. (12) exists. We will show some examples later in this section.
Efficient gradient computation: An advantage of this approach is that the gradient at each iteration can be computed efficiently, especially for large $N$ and $L$. The gradient computation is decoupled into two terms. The first term can be computed by computing the $f^*_{nk}$ in parallel, while the second term involves prediction in a linear model. The gradients with respect to $\alpha_n$ and $\lambda_n$ (derived in the supplementary material) are given as $g_n^\alpha := h_n^* - \tilde{m}_n^*$ and $g_n^\lambda := \tfrac{1}{2}[-(\sigma_n^k)^2 + 2\sigma_n^k \sigma_n^* - \tilde{v}_n^*]$, where $(h_n^*, \sigma_n^*)$ are the maximizers of Eq. (12) and $\tilde{v}_n^*$ and $\tilde{m}_n^*$ are computed as follows:
$$\tilde{v}_n^* := \mathbf{w}_n^\top \mathbf{V}^* \mathbf{w}_n = \mathbf{w}_n^\top (\boldsymbol{\Sigma}^{-1} + \mathbf{W}^\top \mathrm{diag}(\boldsymbol{\lambda}) \mathbf{W})^{-1} \mathbf{w}_n = \tilde{\Sigma}_{nn} - \tilde{\boldsymbol{\Sigma}}_{n,:} \big( \tilde{\boldsymbol{\Sigma}} + \mathrm{diag}(\boldsymbol{\lambda})^{-1} \big)^{-1} \tilde{\boldsymbol{\Sigma}}_{n,:}^\top$$
$$\tilde{m}_n^* := \mathbf{w}_n^\top \mathbf{m}^* = \mathbf{w}_n^\top (\boldsymbol{\mu} - \boldsymbol{\Sigma} \mathbf{W}^\top \boldsymbol{\alpha}) = \tilde{\mu}_n - \tilde{\boldsymbol{\Sigma}}_{n,:} \boldsymbol{\alpha} \qquad (13)$$
The quantities $(h_n^*, \sigma_n^*)$ can be computed in parallel over all $n$. Sometimes, this can be done in closed form (as we show in the next section); otherwise we can compute them by numerically optimizing
over two-dimensional functions. Since these problems are only two-dimensional, a Newton method
can be easily implemented to obtain fast convergence.
The other two terms $\tilde{v}_n^*$ and $\tilde{m}_n^*$ can be interpreted as predictive means and variances of a pseudo linear model, e.g. compare Eq. (13) with Eqs. 2.25 and 2.26 of Rasmussen's book [18]. Hence every gradient computation can be expressed as Bayesian prediction in a linear model, for which we can use an existing implementation. For example, for binary or multi-class GP classification, we can reuse an efficient implementation of GP regression. In general, we can use Bayesian inference in a conjugate model to compute the gradient of a non-conjugate model. This way the method also avoids forming $\mathbf{V}^*$ and works only with its linear projections, which can be efficiently computed using vector-matrix-vector products.
The "decoupling" nature of our algorithm should now be clear. The non-linear computations, which depend on the data, are done in parallel to compute $h_n^*$ and $\sigma_n^*$. These are completely decoupled from the linear computations for $\tilde{m}_n$ and $\tilde{v}_n$. This is summarized in Algorithm 1.
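Putting the pieces together, one sweep of the repeat-loop in Algorithm 1 might look as follows. The gradient-descent step on $(\boldsymbol{\alpha}, \boldsymbol{\lambda})$ is a placeholder for whatever projected optimizer one prefers, and `solve_fnk`, the local solver for Eq. (12), is left abstract; both are our simplifications, not the authors' implementation.

```python
import numpy as np

def inner_pass(alpha, lam, W, mu, Sigma, solve_fnk, sigma_k, lam_k, step=0.1):
    """One sweep of the repeat-loop in Algorithm 1 (a sketch).
    solve_fnk(n, alpha_n, lam_n) must return the maximizer (h_n*, sigma_n*)
    of Eq. (12) for data example n."""
    N = len(alpha)
    St = W @ Sigma @ W.T                         # Sigma_tilde = W Sigma W'
    mt = W @ mu                                  # mu_tilde = W mu
    A = np.linalg.inv(St + np.diag(1.0 / lam))   # (Sigma_tilde + diag(lam)^{-1})^{-1}
    v_pred = np.diag(St) - np.einsum('ni,ij,nj->n', St, A, St)  # variances, Eq. (13)
    m_pred = mt - St @ alpha                                    # means, Eq. (13)
    h_opt, s_opt = zip(*(solve_fnk(n, alpha[n], lam[n]) for n in range(N)))
    h_opt, s_opt = np.asarray(h_opt), np.asarray(s_opt)
    g_alpha = h_opt - m_pred                                      # dual gradient in alpha
    g_lam = 0.5 * (-sigma_k**2 + 2.0 * sigma_k * s_opt - v_pred)  # dual gradient in lambda
    # Descend on the dual (11); a projected/line-searched step would be used in practice.
    return alpha - step * g_alpha, np.maximum(lam - step * g_lam, 1e-8)
```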
Derivation: To derive the algorithm, we first linearize the constraints. Given a multiplier $\boldsymbol{\lambda}^k$ and a point $\mathbf{x}^k$ at the $k$-th iteration, we linearize the constraints $c_{2n}(\mathbf{x})$:
$$\bar{c}_{2nk}(\mathbf{x}) := c_{2n}(\mathbf{x}^k) + \nabla c_{2n}(\mathbf{x}^k)^\top (\mathbf{x} - \mathbf{x}^k) \qquad (14)$$
$$= \tfrac{1}{2} \big[ (\sigma_n^k)^2 - \mathbf{w}_n^\top \mathbf{V}^k \mathbf{w}_n + 2\sigma_n^k (\sigma_n - \sigma_n^k) - (\mathbf{w}_n^\top \mathbf{V} \mathbf{w}_n - \mathbf{w}_n^\top \mathbf{V}^k \mathbf{w}_n) \big] \qquad (15)$$
$$= -\tfrac{1}{2} \big[ (\sigma_n^k)^2 - 2\sigma_n^k \sigma_n + \mathbf{w}_n^\top \mathbf{V} \mathbf{w}_n \big] \qquad (16)$$
Since we want the linearized constraint $\bar{c}_{2nk}(\mathbf{x})$ to be close to the original constraint $c_{2n}(\mathbf{x})$, we will penalize the difference between the two.
$$c_{2n}(\mathbf{x}) - \bar{c}_{2nk}(\mathbf{x}) = \tfrac{1}{2} \big\{ \sigma_n^2 - \mathbf{w}_n^\top \mathbf{V} \mathbf{w}_n - \big[ -(\sigma_n^k)^2 + 2\sigma_n^k \sigma_n - \mathbf{w}_n^\top \mathbf{V} \mathbf{w}_n \big] \big\} = \tfrac{1}{2} (\sigma_n - \sigma_n^k)^2 \qquad (17)$$
The key point is that this term is independent of $\mathbf{V}$, allowing us to obtain a closed-form solution for $\mathbf{V}^*$. This will also be crucial for the extension to the non-concave case in the next section.
The new $k$-th subproblem is defined with the linearized constraints and the penalization term:
$$\max_{\mathbf{x}}\ g^k(\mathbf{x}) := g(\mathbf{x}) - \sum_{n=1}^{N} \tfrac{1}{2} \lambda_n^k (\sigma_n - \sigma_n^k)^2 \qquad (18)$$
$$\text{s.t.}\quad h_n - \mathbf{w}_n^\top \mathbf{m} = 0, \qquad -\tfrac{1}{2} \big[ (\sigma_n^k)^2 - 2\sigma_n^k \sigma_n + \mathbf{w}_n^\top \mathbf{V} \mathbf{w}_n \big] = 0, \quad \forall n$$
This is a concave problem with linear constraints and can be optimized using dual variational inference [10]. A detailed derivation is given in the supplementary material.
Convergence: When the LCL algorithm converges, it has quadratic convergence rates [19]. However, it may not always converge. Globally convergent methods do exist (e.g. [4]), although we do not explore them in this paper. Below, we present a simple approach that improves the convergence for non-log-concave likelihoods.
Augmented Lagrangian methods for non-log-concave likelihoods: When the likelihoods $p(y_n|\eta_n)$ are not log-concave, the lower bound can contain multiple local optima, making the optimization difficult for the function $f(\mathbf{m}, \mathbf{V})$. In such scenarios, the algorithm may not converge for all starting values. The convergence of our approach can be improved for such cases. We simply add an augmented Lagrangian term $[\bar{c}_{2nk}(\mathbf{x})]^2$ to the linearly constrained Lagrangian defined in Eq. (18), as shown below [24]. Here, $\delta_i^k > 0$ and $i$ is the $i$-th iteration of the $k$-th subproblem:
$$g_{\mathrm{aug}}^k(\mathbf{x}) := g(\mathbf{x}) - \sum_{n=1}^{N} \Big[ \tfrac{1}{2} \lambda_n^k (\sigma_n - \sigma_n^k)^2 + \tfrac{1}{2} \delta_i^k (\sigma_n - \sigma_n^k)^4 \Big] \qquad (19)$$
subject to the same constraints as Eq. (18).
The sequence $\delta_i^k$ can either be set to a constant or be increased slowly to ensure convergence to a local maximum. More details on setting this sequence and its effect on the convergence can be found in Chapter 4.2 of [1]. It is in fact possible to know a value of $\delta_i^k$ such that the algorithm always converges. This value can be set by examining the primal function, i.e., a function of the deviations in the constraints. It turns out that it should be set larger than the largest eigenvalue of the Hessian of the primal function at 0. A good discussion of this can be found in Chapter 4.2 of [1].
The fact that the linearized constraint $\bar{c}_{2nk}(\mathbf{x})$ does not depend on $\mathbf{V}$ is very useful here, since the addition of this term then only affects the computation of $f^*_{nk}$. We modify the algorithm by simply changing that computation to the optimization of the following function:
$$\max_{h_n, \sigma_n > 0}\ -f_n(h_n, \sigma_n) + \alpha_n h_n + \tfrac{1}{2} \lambda_n \sigma_n^k (2\sigma_n - \sigma_n^k) - \tfrac{1}{2} \lambda_n^k (\sigma_n - \sigma_n^k)^2 - \tfrac{1}{2} \delta_i^k (\sigma_n - \sigma_n^k)^4 \qquad (20)$$
It is clear from this that the augmented Lagrangian term is trying to "convexify" the non-convex function $f_n$, leading to improved convergence.
Computation of $f^*_{nk}(\alpha_n, \lambda_n)$: These functions are obtained by solving the optimization problem shown in Eq. (12). In some cases, we can compute these functions in closed form. For example, as shown in the supplementary material, we can compute $h^*$ and $\sigma^*$ in closed form for the Poisson likelihood, as shown below. We also show the range of $\alpha_n$ and $\lambda_n$ for which $f^*_{nk}$ is finite.
$$\sigma_n^* = \frac{\lambda_n + \lambda_n^k}{y_n + \alpha_n + \lambda_n^k}\, \sigma_n^k, \qquad h_n^* = -\tfrac{1}{2} \sigma_n^{*2} + \log(y_n + \alpha_n), \qquad \mathcal{S} = \{\alpha_n > -y_n,\ \lambda_n > 0,\ \forall n\} \qquad (21)$$
An expression for the Laplace likelihood is also derived in the supplementary material.
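The closed-form maximizers of Eq. (21) for the Poisson case translate directly to code; a sketch with our own argument names.

```python
import numpy as np

def poisson_inner(y, alpha, lam, sigma_k, lam_k):
    """Closed-form maximizers (h*, sigma*) of Eq. (12) for the Poisson
    likelihood, per Eq. (21); requires alpha > -y and lam > 0 (the set S)."""
    s = sigma_k * (lam + lam_k) / (y + alpha + lam_k)
    h = -0.5 * s**2 + np.log(y + alpha)
    return h, s
```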
When we do not have a closed-form expression for $f_n^{k*}$, we can use a 2-D Newton method for the optimization. To facilitate convergence, we must warm-start the optimization. When $f_n$ is concave, this
usually converges in a few iterations, and since we can parallelize over $n$, a significant speed-up can
be obtained. A significant engineering effort is required for this parallelization, and we have not done
so for the experiments in this paper.
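As a generic illustration of a warm-started, damped 2-D Newton iteration (the toy objective below is our own stand-in; the actual $f_n$-dependent problem has likelihood-specific derivatives), here is a minimal sketch:

```python
import numpy as np

def newton_2d(f, grad, hess, x0, tol=1e-8, max_iter=50):
    """Damped Newton iteration for a smooth 2-D minimization problem.
    x0 is the warm start, e.g. the previous subproblem's solution."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        step = np.linalg.solve(hess(x), g)        # Newton direction
        t = 1.0
        while f(x - t * step) > f(x) - 1e-4 * t * (g @ step) and t > 1e-10:
            t *= 0.5                               # backtracking line search
        x = x - t * step
    return x

# Toy strictly convex objective (hypothetical stand-in for one subproblem).
f = lambda x: np.exp(x[0]) - x[0] + (x[1] - x[0]) ** 2
grad = lambda x: np.array([np.exp(x[0]) - 1.0 - 2.0 * (x[1] - x[0]),
                           2.0 * (x[1] - x[0])])
hess = lambda x: np.array([[np.exp(x[0]) + 2.0, -2.0],
                           [-2.0, 2.0]])

x_prev = np.array([0.5, -0.5])                     # pretend previous solution
print(newton_2d(f, grad, hess, x_prev))            # converges to (0, 0)
```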
An issue that remains open is the evaluation of the range $S$ for which each $f_n^{k*}$ is finite. For now,
we have simply set it to the range of gradients of the function $f_n$ as shown by Eq. (9) (also see the last
paragraph in that section). It is not clear whether this will always ensure convergence of the 2-D
optimization.
Prediction: Given $\alpha^*$ and $\lambda^*$, we can compute the predictions using equations similar to GP
regression. See details in Rasmussen and Williams's book [18].
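For concreteness, here is a sketch of the generic latent-Gaussian predictive computation given a Gaussian approximation $N(\mathbf{m}, \mathbf{V})$ over the training latents; the function and variable names are ours, and the paper's exact expressions (including how the dual variables map to $\mathbf{m}, \mathbf{V}$) are in its supplement.

```python
import numpy as np

def gp_predict(K, k_star, k_ss, m, V):
    """Predictive mean/variance of the latent function at one test input.

    K      : (N, N) prior covariance on training latents (Sigma in the text)
    k_star : (N,)   prior covariance between the test and training inputs
    k_ss   : float  prior variance at the test input
    m, V   : approximate Gaussian posterior over the training latents
    """
    A = np.linalg.solve(K, k_star)          # K^{-1} k_*
    mean = A @ m                            # predictive latent mean
    var = k_ss - k_star @ A + A @ V @ A     # prior variance, corrected by q
    return mean, var
```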
6 Results
We demonstrate the advantages of our approach on a binary GP classification problem. We model
the binary data using Bernoulli-logit likelihoods. The functions $f_n$ are computed to a reasonable accuracy
using the piecewise bound [14] with 20 pieces.
We apply this model to a subproblem of the USPS digit data [18]. Here, the task is to classify
between 3's vs. 5's. There are a total of 1540 data examples with feature dimensionality of 256.
Since we want to compare convergence, we show results for different data sizes obtained by subsampling randomly from these examples.
We set $\mu = 0$ and use a squared-exponential kernel, for which the $(i, j)$'th entry of $\Sigma$ is defined as
$\Sigma_{ij} = \sigma^2 \exp\!\big(-\tfrac{1}{2}\|x_i - x_j\|^2 / s\big)$, where $x_i$ is the $i$'th feature vector. We show results for $\log(\sigma) = 4$ and
$\log(s) = -1$, which corresponds to a difficult case where VG approximations converge slowly (due
to the ill-conditioning of the kernel) [18]. Our conclusions hold for other parameter settings as well.
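A sketch of the kernel computation with the stated hyperparameters (names are ours):

```python
import numpy as np

def se_kernel(X, sigma=np.exp(4.0), s=np.exp(-1.0)):
    """Squared-exponential kernel Sigma_ij = sigma^2 exp(-||x_i - x_j||^2/(2s)),
    with log(sigma) = 4 and log(s) = -1 as in the experiments."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # pairwise squared distances
    return sigma ** 2 * np.exp(-0.5 * np.maximum(d2, 0.0) / s)
```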
We compare our algorithm with the approach of Opper and Archambeau [17] and Challis and Barber
[3]. We refer to them as "Opper" and "Cholesky", respectively. We call our approach "Decoupled".
For all methods, we use the L-BFGS method for optimization (implemented in minFunc by Mark
Schmidt), since a Newton method would be too expensive for large N. All algorithms were stopped
when the subsequent changes in the lower bound value of Eq. (5) were less than 10^-4. All methods
were randomly initialized; our results are not sensitive to initialization. We compare convergence
in terms of the value of the lower bound. The prediction errors show a very similar trend, therefore we do
not present them.
The results are summarized in Figure 1. Each plot shows the negative of the lower bound vs time in
seconds for increasing data sizes N = 200, 500, 1000 and 1500. For Opper and Cholesky, we show
markers for every iteration. For Decoupled, we show markers after the completion of each subproblem.
The result of the first subproblem is not visible here; the first visible marker corresponds to the
second subproblem onwards.
We see that as the data size increases, Decoupled converges faster than the other methods, showing
a clear advantage for large dimensionality.
7
Discussion and Future Work
In this paper, we proposed the decoupled VG inference method for approximate Bayesian inference.
We obtain an efficient reparameterization using a Lagrangian of the lower bound. We showed that such
a parameterization is unique, even for non log-concave likelihood functions, and the maximum of
the lower bound can be obtained by maximizing the Lagrangian. For concave likelihood function,
our method recovers the global maximum. We proposed a linearly constrained Lagrangian method
to maximize the Lagrangian. The algorithm has the desired property that it reduces each gradient computation to a linear model computation, while parallelizing non-linear computations over
data examples. Our proposed algorithm is capable of attaining convergence rates similar to convex
methods.
Unlike methods such as mean-field approximation, our method preserves all posterior correlations
and can be useful towards generalizing stochastic variational inference (SVI) methods [5] to nonconjugate models. Existing SVI methods rely on mean-field approximations and are widely applied
for conjugate models. Under our method, we can stochastically include only few constraints to
maximize the Lagrangian. This amounts to a low-rank approximation of the covariance matrix and
can be used to construct an unbiased estimate of the gradient.
We have focused only on latent Gaussian models for simplicity. It is easy to extend our approach
to other non-Gaussian latent models, e.g. sparse Bayesian linear model [21] and Bayesian nonnegative matrix factorization [20]. Similar decoupling method can also be applied to general latent
variable models. Note that a choice of proper posterior distribution is required to get an efficient
parameterization of the posterior.
It is also possible to obtain a sparse posterior covariance approximation using our decoupled formulation.
One possible idea is to use a hinge-type loss to approximate the likelihood terms. Using a
dualization similar to what we have shown here would then give us a sparse posterior covariance.
[Figure 1 appears here: four panels (N = 200, 500, 1000, 1500), each plotting the negative lower bound against time in seconds for Cholesky, Opper, and Decoupled.]
Figure 1: Convergence results for GP classification on the USPS 3-vs-5 data set. Each plot shows
the negative of the lower bound vs. time in seconds for data sizes N = 200, 500, 1000, and 1500.
For Opper and Cholesky, we show markers for every iteration. For Decoupled, we show markers
after the completion of each subproblem. The result of the first subproblem is not visible here; the
first visible marker corresponds to the second subproblem. As the data size increases, Decoupled
converges faster, showing a clear advantage over the other methods for large dimensionality.
A weakness of our paper is a lack of strong experiments showing that the decoupled method indeed
converge at a fast rate. The implementation of decoupled method requires a good engineering effort
for it to scale to big data. In future, we plan to have an efficient implementation of this method and
demonstrate that this enables variational inference to scale to large data.
Acknowledgments
This work was supported by School of Computer Science and Communication at EPFL. I would
specifically like to thank Matthias Grossglauser, Rudiger Urbanke, and James Larus for providing me
support and funding during this work. I would like to personally thank Volkan Cevher, Quoc Tran-Dinh, and Matthias Seeger from EPFL for early discussions of this work and Marc Desgroseilliers
from EPFL for checking some proofs.
I would also like to thank the reviewers for their valuable feedback. The experiments in this paper
are less extensive than what I promised them. Due to time and space constraints, I have not been
able to add all of them. More experiments will appear in an arXiv version of this paper.
References
[1] Dimitri P Bertsekas. Nonlinear programming. Athena Scientific, 1999.
[2] M. Braun and J. McAuliffe. Variational inference for large-scale models of discrete choice. Journal of
the American Statistical Association, 105(489):324?335, 2010.
[3] E. Challis and D. Barber. Concave Gaussian variational approximations for inference in large-scale
Bayesian linear models. In International conference on Artificial Intelligence and Statistics, 2011.
[4] Michael P Friedlander and Michael A Saunders. A globally convergent linearly constrained lagrangian
method for nonlinear optimization. SIAM Journal on Optimization, 15(3):863?897, 2005.
[5] M. Hoffman, D. Blei, C. Wang, and J. Paisley. Stochastic variational inference. Journal of Machine
Learning Research, 14:1303?1347, 2013.
[6] A. Honkela, T. Raiko, M. Kuusela, M. Tornio, and J. Karhunen. Approximate Riemannian conjugate
gradient learning for fixed-form variational Bayes. Journal of Machine Learning Research, 11:3235?
3268, 2011.
[7] T. Jaakkola and M. Jordan. A variational approach to Bayesian logistic regression problems and their
extensions. In International conference on Artificial Intelligence and Statistics, 1996.
[8] P. Jylänki, J. Vanhatalo, and A. Vehtari. Robust Gaussian process regression with a Student-t likelihood.
The Journal of Machine Learning Research, 999888:3227?3257, 2011.
[9] Mohammad Emtiyaz Khan. Variational Learning for Latent Gaussian Models of Discrete Data. PhD
thesis, University of British Columbia, 2012.
[10] Mohammad Emtiyaz Khan, Aleksandr Y. Aravkin, Michael P. Friedlander, and Matthias Seeger. Fast
dual variational inference for non-conjugate latent gaussian models. In ICML (3), volume 28 of JMLR
Proceedings, pages 951?959. JMLR.org, 2013.
[11] Mohammad Emtiyaz Khan, Shakir Mohamed, Benjamin Marlin, and Kevin Murphy. A stick breaking
likelihood for categorical data analysis with latent Gaussian models. In International conference on
Artificial Intelligence and Statistics, 2012.
[12] Mohammad Emtiyaz Khan, Shakir Mohamed, and Kevin Murphy. Fast Bayesian inference for nonconjugate Gaussian process regression. In Advances in Neural Information Processing Systems, 2012.
[13] M. Lázaro-Gredilla and M. Titsias. Variational heteroscedastic Gaussian process regression. In International Conference on Machine Learning, 2011.
[14] B. Marlin, M. Khan, and K. Murphy. Piecewise bounds for estimating Bernoulli-logistic latent Gaussian
models. In International Conference on Machine Learning, 2011.
[15] T. Minka. Expectation propagation for approximate Bayesian inference. In Proceedings of the Conference
on Uncertainty in Artificial Intelligence, 2001.
[16] H. Nickisch and C.E. Rasmussen. Approximations for binary Gaussian process classification. Journal of
Machine Learning Research, 9(10), 2008.
[17] M. Opper and C. Archambeau. The variational gaussian approximation revisited. Neural Computation,
21(3):786?792, 2009.
[18] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. MIT
Press, 2006.
[19] Stephen M Robinson. A quadratically-convergent algorithm for general nonlinear programming problems. Mathematical programming, 3(1):145?156, 1972.
[20] Mikkel N Schmidt, Ole Winther, and Lars Kai Hansen. Bayesian non-negative matrix factorization. In
Independent Component Analysis and Signal Separation, pages 540?547. Springer, 2009.
[21] M. Seeger. Bayesian Inference and Optimal Design in the Sparse Linear Model. Journal of Machine
Learning Research, 9:759?813, 2008.
[22] M. Seeger. Sparse linear models: Variational approximate inference and Bayesian experimental design.
Journal of Physics: Conference Series, 197(012001), 2009.
[23] M. Seeger and H. Nickisch. Large scale Bayesian inference and experimental design for sparse linear
models. SIAM Journal of Imaging Sciences, 4(1):166?199, 2011.
[24] SJ Wright and J Nocedal. Numerical optimization, volume 2. Springer New York, 1999.
Computer Recognition of Wave Location
in Graphical Data by a Neural Network
Donald T. Freeman
School of Medicine
University of Pittsburgh
Pittsburgh, PA 15261
Abstract
Five experiments were performed using several neural network architectures to
identify the location of a wave in the time-ordered graphical results from a
medical test. Baseline results from the first experiment found correct
identification of the target wave in 85% of cases (n=20). Other experiments
investigated the effects of different architectures and of preprocessing the raw data on
the results. The methods used seem most appropriate for time-oriented graphical
data which has a clear starting point, such as electrophoresis or spectrometry,
rather than continuous tests such as ECGs and EEGs.
1
INTRODUCTION
Complex wave form recognition is generally considered to be a difficult task for
machines. Analytical approaches to this problem have been described and they work with
reasonable accuracy (Gabriel et al. 1980; Valdes-Sosa et al. 1987). The use of these
techniques, however, requires substantial mathematical training, and the process is often
time consuming and labor intensive (Boston 1987). Mathematical modeling also requires
substantial knowledge of the particular details of the wave forms in order to determine
how to apply the models and to determine detection criteria. Rule-based expert systems
have also been used for the recognition of wave forms (Boston 1989). They require that a
knowledge engineer work closely with a domain expert to extract the rules that the expert
uses to perform the recognition. If the rules are ad hoc or if it is difficult for experts to
articulate the rules they use, then rule-based expert systems are cumbersome to
implement.
This paper describes the use of neural networks to recognize the location of peak V from
the wave-form recording of brain stem auditory evoked potential tests. General
discussions of connectionist networks can be found in (Rumelhart and McClelland 1986).
The main features of neural networks that are relevant for our purposes revolve around
their ease of use as compared to other modeling techniques. Neural networks provide
several advantages over modeling with differential equations or rule-based systems. First.
there is no knowledge engineering phase. The network is trained automatically using a
series of examples along with the "right answer" to each example. Second. the resulting
network typically has significant predictive power when novel examples are presented.
So, neural network technology allows expert performance to be mimicked without
requiring that expert knowledge be codified in a traditional fashion. In addition, neural
networks, when used to perform signal analysis, require vastly less restrictive
assumptions about the structure of the input signal than analytical techniques (Gorman
and Sejnowski 1988). Still, neural nets have not yet been widely applied to problems of
this sort (DeRoach 1989). Nevertheless, it seems that interest is growing in using
computers, especially neural networks, to solve advanced problems in medical decision
making (Stubbs 1988).
1.1
BRAIN STEM AUDITORY EVOKED POTENTIAL (BAEP)
Sensory evoked potentials are electric signals from the brain that occur in response to
transient auditory, somatosensory, or visual stimuli such as a click, pinprick, or flash of
light. The signals, recorded from electrodes placed on a subject's scalp, are a measure of
the electrical activity in the subject's brain both from response to the stimulus and from
the spontaneous electroencephalographic (EEG) activity of the brain. One way of
discerning the response to the stimulus from the background EEG noise is to average the
individual responses from many identical stimuli. When "cortical noise" has been
removed in this way, evoked potentials can be an important noninvasive measure of
central nervous system function. They are used in studies of physiology and psychology,
for the diagnosis of neurologic disorders (Greenberg et al. 1981). Recently attention has
focused on continuous automated monitoring of the BAEP intraoperatively as well as
post-operatively for evaluation of central nervous system function (Moulton et al. 1991).
Brain stem auditory evoked potentials (BAEP) are generated in the auditory pathways of
the brain stem. They can be used to assess hearing and brain stem function even in
unresponsive or uncooperative patients.
The BAEP test involves placing headphones on the patient, flooding one ear with white
noise, and delivering clicks into the other ear. Electrodes on the scalp both on the same
side (ipsilateral) and opposite side (contralateral) of the clicks record the electric potentials
of brain activity for 10 msec. following each click. In the protocol used at the University
of Pittsburgh Presbyterian University Hospital (PUH), a series of 2000 clicks is delivered
and the results from each click - a graph of electrode activity over the 10 msec. - are
averaged into a single graph. The results from the stimulation of one ear with clicks are
referred to as "one ear of data".
A graph of the wave form which results from the averaging of many stimuli appears as a
series of peaks following the stimulus (Figure 1). The resulting graph typically has 7
important peaks but often includes other peaks resulting from the noise which remains
after averaging. Each important peak represents the firing of a group of neurons in the
auditory neural pathway.1 The time of arrival of the peaks (the peak latencies) and the
amplitudes of the peaks are used to characterize the response. The latencies of peaks I, III,
and V are typically used to determine if there is evidence of slowed central nervous system
conduction which is of value in the diagnosis of multiple sclerosis and other disease
states.2 Conduction delay may be seen in the left, right, or both BAEP pathways. It is
of interest that the time of arrival of a wave on the ipsilateral and contralateral sides may
be slightly different. This effect becomes more exaggerated the more distant the correlated
peaks are from the origin (Durrant, Boston, and Martin 1990).
Typically there are several issues in the interpretation of the graphs. First, it must be
clear that some neural response to the auditory stimulus is represented in the wave form.
If a response is present, the peaks which correspond to normal and abnormal responses
must be distinguished from noise which remains in the signal even after averaging. Wave
IV and wave V occasionally fuse, forming a wave IV/V complex, confounding this
1 Putative generators are: I-Acoustic nerve; II-Cochlear nucleus; III-Superior olivary
nucleus; IV-Lateral lemniscus; V-Inferior colliculus; VI-Medial geniculate nucleus;
VII-Auditory radiations.
2 Other disorders include brain edema, acoustic neuroma, gliomas, and central pontine
myelinolysis.
process. In these cases we say that wave V is absent. Finally, the latencies and possibly
the amplitudes of the identified peaks are measured and a diagnostic explanation for
them is developed.
Figure 1. BAEP chart with the time of arrival for waves I to V identified.
2 METHODS AND PROCEDURES
2.1
DATA
Plots of BAEP tests were obtained from the evoked potential files from the last 4 years at
PUH. A preliminary group of training cases consisting of 13 patients or 26 ears was
selected by traversing the files alphabetically from the beginning of the alphabet. This
group was subsequently extended to 25 patients or 50 ears, 39 normals and 11 abnormals.
Most BAEP tests show no abnormalities: only 1 of the first 40 ears was abnormal. In
order to create a training set with an adequate number of abnormal cases we included only
patients with abnormal ears after these first 40 had been selected. Ten abnormal ears were
obtained from a search of 60 patient files. Test cases were selected from files starting at
the end of the alphabet, moving toward the beginning, the opposite of the process used
for the training cases. Unlike the training set - where some cases were selected over
others - all cases were included in the test set without bias. No cases were common to
both sets. A total of 10 patients or 20 ears were selected. Table 1 summarizes the input
data.
For one of the experiments, another data set was made using the ipsilateral data for 80
inputs and the derivative of the curve for the other 80 inputs. The derivative was
computed by subtracting the amplitude of the point's successor from the amplitude of the
point and dividing by 0.1.
The ipsilateral and contralateral wave recordings were transformed to machine-readable
format by manual tracing with a BitPad Plus digitizer. A formal protocol was followed
to ensure that a high-fidelity transcription had been effected. The approximately 400
points which resulted from the digitization of each ear were graphed and compared to the
original tracings. If the tracings did not match, then the transcription was performed
again. In addition, the originally recorded latency values for peak V were corrected for any
distortion in the digitizing process. The distortion was judged by a neurologist to be
minimal.
Table 1: Composition of Input Data

                               Abnormal Ears
  Cases      Normal Ears   Prolonged V   Absent V   Total   Total Ears
  Training        39            8            3        11        50
  Testing         18            0            2         2        20
A program was written to process the digitized wave forms, creating an output file readable
by the neural network simulator. The program discarded the first and last 1 msec. of the
recordings. The remaining points were sampled at 0.1 msec. intervals using linear
interpolation to estimate an amplitude if a point had not been recorded within 0.01 msec.
of the desired time. These points were then normalized to the range <-1,1>. The
resulting 80 points for the ipsilateral wave and 80 points for the contralateral wave (a
total of 160 points) were used as the initial activations for the input layer of processing
elements.
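A sketch of the described preprocessing, assuming simple linear interpolation everywhere (the original interpolated only when no recorded point fell within 0.01 msec. of the grid time); names are ours:

```python
import numpy as np

def preprocess_ear(times_ms, amps):
    """Resample a digitized BAEP tracing onto a 0.1 ms grid over 1.0-8.9 ms
    and normalize amplitudes to [-1, 1], yielding 80 values per waveform.
    times_ms must be increasing."""
    grid = np.arange(1.0, 9.0, 0.1)            # 80 sample times
    resampled = np.interp(grid, times_ms, amps)
    lo, hi = resampled.min(), resampled.max()
    return 2.0 * (resampled - lo) / (hi - lo) - 1.0

# Input layer = ipsilateral (80) followed by contralateral (80) values:
# inputs = np.concatenate([preprocess_ear(t_i, a_i), preprocess_ear(t_c, a_c)])
```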
2.2
ARCHITECTURES
Each of the four network architectures had 160 input nodes. Each node represented the
amplitude of the wave at each sample time (1.0 to 8.9 ms, every 0.1 ms). Each
architecture also had 80 output nodes with a similar temporal interpretation (Figure 2).
Architecture 1 (A1) had 30 hidden units connected only to the ipsilateral input units, 5
hidden units connected only to the contralateral input units, and 5 hidden units connected
to all the input units. The hidden units for all architectures were fully connected to the
output units. Architecture 2 (A2) reversed these proportions. Architecture 3 (A3) was
fully connected to the inputs. Architecture 4 (A4) preserved the proportions of A1 but
had 16 ipsilateral hidden units, 3 contralateral, and 3 connected to both. All architectures
used the sigmoid transfer function at both the hidden and output layers and all units were
attached to a bias unit.
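A sketch of Architecture 1's restricted connectivity, assuming plain sigmoid feedforward layers; the masking approach and all names here are ours (the original used the NeuralWorks simulator, not custom code):

```python
import numpy as np

def a1_mask():
    """Input-to-hidden connectivity for Architecture 1: inputs 0-79 are
    ipsilateral, 80-159 contralateral; 30 hidden units see only ipsilateral
    inputs, 5 only contralateral, and 5 see all 160 inputs."""
    mask = np.zeros((160, 40))
    mask[:80, :30] = 1.0      # 30 ipsilateral-only hidden units
    mask[80:, 30:35] = 1.0    # 5 contralateral-only hidden units
    mask[:, 35:] = 1.0        # 5 hidden units connected to all inputs
    return mask

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2, mask):
    h = sigmoid(x @ (W1 * mask) + b1)   # masked input-to-hidden weights
    return sigmoid(h @ W2 + b2)         # fully connected hidden-to-output

rng = np.random.default_rng(0)
W1 = rng.uniform(-0.2, 0.2, (160, 40)); b1 = np.zeros(40)
W2 = rng.uniform(-0.2, 0.2, (40, 80));  b2 = np.zeros(80)
y_hat = forward(rng.standard_normal(160), W1, b1, W2, b2, a1_mask())
```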
The distribution of the hidden units was chosen with the knowledge that human experts
usually use information from the ipsilateral side but refer to the contralateral side only
709
710
Freeman
when features in the ipsilateral side are too obscure to resolve. The selection of the
number of hidden units in neural network models remains an art. In order to determine
whether the size of the hidden unit layer could be changed, we repeated the experiments
using Architecture 2 where the number of hidden units was reduced to 16, with 10
connected to the ipsilateral inputs, 3 to the contralateral inputs, and 3 connected to all the
inputs.
2.3
TRAINING
For training, target values for the output layer were all 0.0 except for the output nodes
representing the time of arrival for wave V (reported on the BAEP chart) and one node on
each side of it. The peak node target was 0.95 and the two adjacent nodes had targets of
0.90. For cases in which wave V was absent, the target for all the output nodes was 0.0.
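The target construction can be written compactly; a sketch (the index convention assumed here matches the 1.0-8.9 ms output grid):

```python
import numpy as np

def make_target(latency_ms, wave_v_present=True):
    """Target vector for the 80 output nodes (1.0-8.9 ms at 0.1 ms steps):
    0.95 at the node nearest the reported wave V latency, 0.90 at its two
    neighbors, 0.0 elsewhere; all zeros when wave V is absent."""
    target = np.zeros(80)
    if wave_v_present:
        idx = int(round((latency_ms - 1.0) / 0.1))
        target[idx] = 0.95
        for j in (idx - 1, idx + 1):
            if 0 <= j < 80:
                target[j] = 0.90
    return target
```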
A neural network simulator (NeuralWorks Professional II~ version 3.5) was used to
construct the networks and run the simulations. The back-propagation learning algorithm
was used to train the networks. The random number generator was initialized with
random number seeds taken from a random number table. Then network weights were
initialized to random values between -0.2 and 0.2 and the training begun. Since our
random number generator is deterministic - given the random number seed - these trials
are replicable.
Figure 2. Diagram of Architecture 1 with representation of input and output data shown.
Each of the 50 ears of data in the training set was presented using a randomize, shuffle,
and deal technique. Network weights were saved at various stages of learning, usually
after every 1000 presentations (20 epochs) until the cumulative RMS error for an epoch
fell below 0.01. The contribution of each training example to the total error was
examined to determine whether a few examples were the source of most of the error. If
so, training was continued until these examples had been learned to an error level
comparable to the rest of the cases. After training, the 20 ears in the test set were
presented to each of the saved networks and the output nodes of the net were examined for
each test case.
2.4
ANALYSIS OF RESULTS
A threshold method was used to analyze the data. For each of the test cases the actual
location of the maximum valued output unit was compared to the expected location of the
maximum valued output unit. For a network result to be classified as a correct
identification in the wave V present (true positive), we require that the maximum valued
output unit have an activation which is over an activity-threshold (0.50) and that the unit
be within a distance-threshold (0.2 msec.) of the expected location of wave V. For a true
negative identification of wave V - a correct identification of wave V being absent - we
require that all the output activities be below the activity threshold and that the case have
no wave V to find. The network makes a false positive prediction of the location of wave
V if some activity is above the activity threshold for a case which has no wave V.
Finally, there are two ways for the network to make a false negative identification of
wave V. In both instances, wave V must be present in the case. In one instance, some
output node has activity above the activity threshold, but it is outside of the distance
threshold. This corresponds to the identification of a wave V but in the wrong place. In
the other instance, no node attains activity over the activity threshold, corresponding to a
failure to find a wave V when there exists a wave V in the case to find.
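A sketch of these threshold rules as code (our own encoding of the stated criteria):

```python
import numpy as np

def classify_output(outputs, true_latency_ms, act_thresh=0.50, dist_thresh=0.2):
    """Apply the threshold rules to one test case. true_latency_ms is None
    when wave V is absent. Returns 'TP', 'TN', 'FP', or 'FN'."""
    idx = int(np.argmax(outputs))
    peak_time = 1.0 + 0.1 * idx               # node index -> latency in ms
    fired = outputs[idx] > act_thresh
    if true_latency_ms is None:
        return "FP" if fired else "TN"
    if fired and abs(peak_time - true_latency_ms) <= dist_thresh:
        return "TP"
    return "FN"                               # fired in wrong place, or silent
```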
2.5
EXPERIMENTS
Five experiments were performed. The first four used different architectures on the same
data set and the last used architecture A1 on the derivatives data set. Each of the network
architectures was trained from different random starting positions. For each trial, a
network was randomized and trained as described above. The networks were sampled as
learning progressed.
Experiment 1 determined how well architecture A1 could identify wave V and provided
baseline results for the remaining experiments. Experiments 2 and 3 tested whether our
use of more hidden units attached to ipsilateral data made sense by reversing the
proportion of hidden units allotted to ipsilateral data processing (experiment 2) and by
trying a fully connected network (experiment 3). Experiment 4 determined whether fewer
hidden units could be used. Experiment 5 investigated whether preprocessing of the input
data to make derivative information available would facilitate network identification of
peak location.
3 RESULTS
Results from the best network found for each of five experiments are shown in Table 2.
Table 2: Results from presentation of 20 test cases to various network architectures.

  Experiment   Network   TP   TN   Total   FP   FN   Total
      1          A1      16    1     17     1    2     3
      2          A2      16    0     16     2    2     4
      3          A3      16    0     16     2    2     4
      4          A4      15    0     15     3    2     5
      5          A1      15    1     16     1    3     4

4
DISCUSSION
In Experiment 1, the three cases which were incorrectly identified were examined closely.
It is not evident from inspection why the net failed to identify the peaks or identified
peaks where there were none to identify. Where peaks are present, they are not unusually
located or surrounded by noise. The appearance of their shape seems similar to the cases
which were identified correctly. We believe that more training examples which are
"similar" to these 3 test cases, as well as examples with greater variety, will improve
recognition of these cases. This improvement comes not from better generalization but
rather from a reduced requirement for generalization. If the net is trained with cases which
are increasingly similar to the cases which will be used to test it, then recognition of the
test cases becomes easier at any given level of generalization.
The distribution of hidden units in A1 was chosen with the knowledge that human experts
use information primarily from the ipsilateral side, referring to the contralateral side only
when ipsilateral features are too obscure to resolve. Experiments 2 and 3 investigate
whether this reliance on ipsilateral data suggests that there should be more hidden units
for the ipsilateral side or for the contralateral side. The identical results from these
experiments are similar to those of Experiment 1. One interpretation is that it is possible
to make diagnoses of BAEPs using very few features from the ipsilateral side. Another
interpretation is that it is possible to use the contralateral data as the chief information
source, contrary to our expert's belief.
Experiment 4 investigates whether fewer features are needed by restricting the hidden layer
to 20 hidden units. The slight degradation of performance indicates that it is possible to
make BAEP diagnoses with fewer ipsilateral features. Experiment 5 utilized the
ipsilateral waveform and its derivative to determine whether this pre-processing would
improve the results. Surprisingly, the results did not improve, but it is possible that a
better estimator of the derivative will prove this method useful.
Finally, when the weights from all the networks above were examined, we found that
amplitudes from only the area where wave V falls were used. This suggests that it is not
necessary to know the location of wave III before determining the location of wave V, in
sharp contrast to experts' intuition. We believe the networks form a "local expert" for the
identification of wave V which does not need to interact with data from other parts of the
graph, and that other such local experts will be formed as we expand the project's scope.
5
CONCLUSIONS
Automated wave form recognition is considered to be a difficult task for machines and an
especially difficult task for neural networks. Our results offer some encouragement that
in some domains neural networks may be applied to perform wave form recognition and
that the technique will be extensible as problem complexity increases.
Still, the accuracy of the networks we have discussed is not high enough for clinical use.
Several extensions have been attempted and others considered including 1) increasing the
sampling rate to decrease the granularity of the input data, 2) increasing the training set
size, 3) using a different representation of the output for wave V absent cases, 4) using a
different representation of the input, such as the derivative of the amplitudes, and 5)
architectures which allow hybrids of these ideas.
Finally, since many other tests in medicine as well as other fields require the
interpretation of graphical data, it is tempting to consider extending this method to other
domains. One distinguishing feature of the BAEP is that there is no difficulty with
the time registration of the data; we always know where to start looking for the wave.
This is in contrast to an EKG, for example, which may require substantial effort just to
identify the beginning of a QRS complex. Our results indicate that the interpretation of
graphs where the time registration of data is not an issue is possible using neural
networks. Medical tests for which this technique would be appropriate include: other
evoked potentials, spectrometry, and gel electrophoresis.
Acknowledgements
The author wishes to thank Dr. Scott Shoemaker of the Department of Neurology for his
expertise, encouragement, constructive criticism, patience, and collaboration throughout
the progress of this work. This research has been supported by NLM Training Grant T15
LM-07059.
References
Boston, J.R. 1987. Detection criteria for sensory evoked potentials. Proceedings of 9th
Ann. IEEE/EMBS Conf., Boston, MA.
Boston, J.R. 1989. Automated interpretation of brainstem auditory evoked potentials: a
prototype system. IEEE Trans. Biomed. Eng. 36 (5) : 528-532.
DeRoach, J.N. 1989. Neural networks - an artificial intelligence approach to the analysis
of clinical data. Austral. Phys. & Eng. Sci. in Med. 12 (2): 100-106.
Durrant, J.D., J.R. Boston, and W.H. Martin. 1990. Correlation study of two-channel
recordings of the brain stem auditory evoked potential. Ear and Hearing 11 (3): 215-221.
Gabriel, S., J.D. Durrant, A.E. Dickter, and J.E. Kephart. 1980. Computer identification
of waves in the auditory brain stem evoked potentials. EEG and Clin. Neurophys. 49 :
421-423.
Gorman, R. Paul, and Terrence J. Sejnowski. 1988. Analysis of hidden units in a layered
network trained to classify sonar targets. Neural Networks 1 : 75-89.
Greenberg, R.P., P.G. Newlon, M.S. Hyatt, R.K. Narayan, and D.P. Becker. 1981.
Prognostic implications of early multimodality evoked potentials in severely head-injured
patients. J. Neurosurg 5: 227-236.
Moulton, Richard, Peter Kresta, Mario Ramirez, and William Tucker. 1991. Continuous
automated monitoring of somatosensory evoked potentials in posttraumatic coma. Journal
of Trauma 31 (5): 676-685.
Rumelhart, David E., and James L. McClelland. 1986. Parallel distributed processing.
Cambridge, Mass: MIT Press.
Stubbs, D.F. 1988. Neurocomputers. MD Comput 5 (3): 14-24.
Valdes-Sosa, M.J., M.A. Bobes, M.C. Perez-Abalo, M. Perra, J.A. Carballo, and P.
Valdes-Sosa. 1987. Comparison of auditory evoked potential detection methods using
signal detection theory. Audiology 26: 166-178.
Stochastic Variational Inference for Hidden Markov
Models
Nicholas J. Foti*, Jason Xu*, Dillon Laird, and Emily B. Fox
University of Washington
{nfoti@stat,jasonxu@stat,dillonl2@cs,ebfox@stat}.washington.edu
Abstract
Variational inference algorithms have proven successful for Bayesian analysis
in large data settings, with recent advances using stochastic variational inference (SVI). However, such methods have largely been studied in independent or
exchangeable data settings. We develop an SVI algorithm to learn the parameters
of hidden Markov models (HMMs) in a time-dependent data setting. The challenge in applying stochastic optimization in this setting arises from dependencies
in the chain, which must be broken to consider minibatches of observations. We
propose an algorithm that harnesses the memory decay of the chain to adaptively
bound errors arising from edge effects. We demonstrate the effectiveness of our
algorithm on synthetic experiments and a large genomics dataset where a batch
algorithm is computationally infeasible.
1
Introduction
Modern data analysis has seen an explosion in the size of the datasets available to analyze. Significant progress has been made scaling machine learning algorithms to these massive datasets based on
optimization procedures [1, 2, 3]. For example, stochastic gradient descent employs noisy estimates
of the gradient based on minibatches of data, avoiding a costly gradient computation using the full
dataset [4]. There is considerable interest in leveraging these methods for Bayesian inference since
traditional algorithms such as Markov chain Monte Carlo (MCMC) scale poorly to large datasets,
though subset-based MCMC methods have been recently proposed as well [5, 6, 7, 8].
Variational Bayes (VB) casts posterior inference as a tractable optimization problem by minimizing
the Kullback-Leibler divergence between the target posterior and a family of simpler variational
distributions. Thus, VB provides a natural framework to incorporate ideas from stochastic optimization to perform scalable Bayesian inference. Indeed, a scalable modification to VB harnessing
stochastic gradients?stochastic variational inference (SVI)?has recently been applied to a variety
of Bayesian latent variable models [9, 10]. Minibatch-based VB methods have also proven effective
in a streaming setting where data arrives sequentially [11].
However, these algorithms have been developed assuming independent or exchangeable data. One
exception is the SVI algorithm for the mixed-membership stochastic block model [12], but independence at the level of the generative model must be exploited. SVI for Bayesian time series including
HMMs was recently considered in settings where each minibatch is a set of independent series [13],
though in this setting again dependencies do not need to be broken.
In contrast, we are interested in applying SVI to very long time series. As a motivating example,
consider the application in Sec. 4 of a genomics dataset consisting of T = 250 million observations in 12 dimensions modeled via an HMM to learn human chromatin structure. An analysis of
the entire sequence is computationally prohibitive using standard Bayesian inference techniques for
* Co-first authors contributed equally to this work.
HMMs due to a per-iteration complexity linear in T . Unfortunately, despite the simple chain-based
dependence structure, applying a minibatch-based method is not obvious. In particular, there are two
potential issues immediately arising in sampling subchains as minibatches: (1) the subsequences are
not mutually independent, and (2) updating the latent variables in the subchain ignores the data
outside of the subchain, introducing error. We show that for (1), appropriately scaling the noisy subchain gradients preserves unbiased gradient estimates. To address (2), we propose an approximate
message-passing scheme that adaptively bounds error by accounting for memory decay of the chain.
We prove that our proposed SVIHMM algorithm converges to a local mode of the batch objective,
and empirically demonstrate similar performance to batch VB in significantly less time on synthetic datasets. We then consider our genomics application and show that SVIHMM allows efficient
Bayesian inference on this massive dataset where batch inference is computationally infeasible.
2 Background
2.1 Hidden Markov models
Hidden Markov models (HMMs) [14] are a class of discrete-time doubly stochastic processes consisting of observations $y_t$ and latent states $x_t \in \{1, \ldots, K\}$ generated by a discrete-valued Markov
chain. Specifically, for $y = (y_1, \ldots, y_T)$ and $x = (x_1, \ldots, x_T)$, the joint distribution factorizes as

$p(x, y) = \pi_0(x_1)\,p(y_1|x_1)\,\prod_{t=2}^{T} p(x_t|x_{t-1}, A)\,p(y_t|x_t, \phi)$   (1)
where $A = [A_{ij}]_{i,j=1}^{K}$ is the transition matrix with $A_{ij} = \Pr(x_t = j | x_{t-1} = i)$, $\phi = \{\phi_k\}_{k=1}^{K}$
the emission parameters, and $\pi_0$ the initial distribution. We denote the set of HMM parameters
as $\theta = (\pi_0, A, \phi)$. We assume that the underlying chain is irreducible and aperiodic so that a
stationary distribution $\pi$ exists and is unique. Furthermore, we assume that we observe the sequence
at stationarity so that $\pi_0 = \pi$, where $\pi$ is given by the leading left eigenvector of $A$. As such, we do
not seek to learn $\pi_0$ in the setting of observing a single realization of a long chain.
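A minimal sketch of estimating the stationary distribution as the leading left eigenvector (standard linear algebra, not specific to this paper):

```python
import numpy as np

def stationary_distribution(A):
    """Leading left eigenvector of a transition matrix, normalized to a
    probability vector (assumes an irreducible, aperiodic chain)."""
    evals, evecs = np.linalg.eig(A.T)          # left eigenvectors of A
    k = np.argmin(np.abs(evals - 1.0))         # eigenvalue closest to 1
    pi = np.abs(np.real(evecs[:, k]))
    return pi / pi.sum()

A = np.array([[0.9, 0.1], [0.2, 0.8]])
print(stationary_distribution(A))              # [2/3, 1/3]
```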
We specify conjugate Dirichlet priors on the rows of the transition matrix as

$p(A) = \prod_{j=1}^{K} \mathrm{Dir}(A_{j:} \,|\, \alpha_j^A).$   (2)

Here, $\mathrm{Dir}(\cdot \,|\, \alpha)$ denotes a $K$-dimensional Dirichlet distribution with concentration parameters $\alpha$.
Although our methods are more broadly applicable, we focus on HMMs with multivariate Gaussian
emissions where $\phi_k = \{\mu_k, \Sigma_k\}$, with conjugate normal-inverse-Wishart (NIW) prior

$y_t \,|\, x_t \sim \mathcal{N}(y_t \,|\, \mu_{x_t}, \Sigma_{x_t}), \qquad \phi_k = (\mu_k, \Sigma_k) \sim \mathrm{NIW}(\mu_0, \kappa_0, \nu_0, \Psi_0).$   (3)
For simplicity, we suppress dependence on $\theta$ and write $\pi(x_1)$, $p(x_t|x_{t-1})$, and $p(y_t|x_t)$ throughout.
2.2
Structured mean-field VB for HMMs
We are interested in the posterior distribution of the state sequence and parameters given an observation sequence, denoted $p(x, \theta | y)$. While evaluating marginal likelihoods, $p(y|\theta)$, and most probable state sequences, $\arg\max_x p(x | y, \theta)$, are tractable via the forward-backward (FB) algorithm
when parameter values $\theta$ are fixed [14], exact computation of the posterior is intractable for HMMs.
Markov chain Monte Carlo (MCMC) provides a widely used sampling-based approach to posterior
inference in HMMs [15, 16]. We instead focus on variational Bayes (VB), an optimization-based
approach that approximates $p(x, \theta|y)$ by a variational distribution $q(\theta, x)$ within a simpler family.
Typically, for HMMs a structured mean-field approximation is considered:

$q(\theta, x) = q(A)\,q(\phi)\,q(x),$   (4)

breaking dependencies only between the parameters $\theta = \{A, \phi\}$ and latent state sequence $x$ [17].
Note that making a full mean-field assumption in which $q(x) = \prod_{i=1}^{T} q(x_i)$ loses crucial information
about the latent chain needed for accurate inference.
Each factor in Eq. (4) is endowed with its own variational parameter and is set to be in the same
exponential family distribution as its respective complete conditional. The variational parameters
are optimized to maximize the evidence lower bound (ELBO) $\mathcal{L}$:

$\ln p(y) \geq \mathbb{E}_q[\ln p(\theta)] - \mathbb{E}_q[\ln q(\theta)] + \mathbb{E}_q[\ln p(y, x|\theta)] - \mathbb{E}_q[\ln q(x)] := \mathcal{L}(q(\theta), q(x)).$   (5)
Maximizing $\mathcal{L}$ is equivalent to minimizing the KL divergence $\mathrm{KL}(q(x, \theta)\,\|\,p(x, \theta|y))$ [18]. In
practice, we alternate between updating the global parameters $\theta$ (those coupled to the entire set of
observations) and the local variables $\{x_t\}$ (one variable corresponding to each observation $y_t$). Details on computing the terms in the equations and algorithms that follow are in the Supplement.
The global update is derived by differentiating $\mathcal{L}$ with respect to the global variational parameters
[17]. Assuming a conjugate exponential family leads to a simple coordinate ascent update [9]:

$w = u + \mathbb{E}_{q(x)}[t(x, y)].$   (6)

Here, $t(x, y)$ denotes the vector of sufficient statistics, and $w = (w^A, w^\phi)$ and $u = (u^A, u^\phi)$ the
variational parameters and model hyperparameters, respectively, in natural parameter form.
The local update is derived analogously, yielding the optimal variational distribution over the latent
sequence:

$q^*(x) \propto \exp\Big( \mathbb{E}_{q(A)}[\ln \pi(x_1)] + \sum_{t=2}^{T} \mathbb{E}_{q(A)}\big[\ln A_{x_{t-1},x_t}\big] + \sum_{t=1}^{T} \mathbb{E}_{q(\phi)}[\ln p(y_t|x_t)] \Big).$   (7)
Compare with Eq. (1). Here, we have replaced probabilities by exponentiated expected log probabilities under the current variational distribution. To determine the optimal $q^*(x)$ in Eq. (7), define:
$\tilde{A}_{j,k} := \exp\big(\mathbb{E}_{q(A)}[\ln A_{j,k}]\big), \qquad \tilde{p}(y_t|x_t = k) := \exp\big(\mathbb{E}_{q(\phi)}[\ln p(y_t|x_t = k)]\big).$   (8)
We estimate $\pi$ with $\tilde{\pi}$, the leading eigenvector of $\mathbb{E}_{q(A)}[A]$. We then use $\tilde{\pi}$, $\tilde{A} = (\tilde{A}_{j,k})$, and
$\tilde{p} = \{\tilde{p}(y_t|x_t = k),\ k = 1, \ldots, K,\ t = 1, \ldots, T\}$ to run a forward-backward algorithm, producing forward messages $\alpha$ and backward messages $\beta$ which allow us to compute $q^*(x_t = k)$ and
$q^*(x_{t-1} = j, x_t = k)$ [19, 17]. See the Supplement.
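The quantities in Eq. (8) have simple closed forms when $q(A)$ has Dirichlet rows, since $\mathbb{E}_q[\ln A_{jk}] = \psi(w_{jk}) - \psi(\sum_{k'} w_{jk'})$. Below is a minimal sketch of this computation together with a scaled forward-backward pass; the scaling scheme and names are our own implementation choices, and the paper's exact message-passing details are in its supplement.

```python
import numpy as np
from scipy.special import digamma

def expected_log_A(w_A):
    """E_q[ln A_jk] under row-wise Dirichlet q(A) with parameters w_A,
    so that A_tilde = np.exp(expected_log_A(w_A)) as in Eq. (8)."""
    return digamma(w_A) - digamma(w_A.sum(axis=1, keepdims=True))

def forward_backward(pi_t, A_t, p_t):
    """Forward-backward with the (unnormalized) quantities pi_tilde, A_tilde,
    p_tilde; returns marginals q(x_t = k) and pairwise marginals."""
    T, K = p_t.shape
    alpha = np.zeros((T, K)); beta = np.zeros((T, K))
    alpha[0] = pi_t * p_t[0]; alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A_t) * p_t[t]
        alpha[t] /= alpha[t].sum()             # scale for numerical stability
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A_t @ (p_t[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)  # q(x_t = k)
    xi = alpha[:-1, :, None] * A_t[None] * (p_t[1:] * beta[1:])[:, None, :]
    xi /= xi.sum(axis=(1, 2), keepdims=True)   # q(x_{t-1} = j, x_t = k)
    return gamma, xi
```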
2.3
Stochastic variational inference for non-sequential models
Even in non-sequential models, the batch VB algorithm requires an entire pass through the dataset
for each update of the global parameters. This can be costly in large datasets, and wasteful when
local-variable passes are based on uninformed initializations of the global parameters or when many
data points contain redundant information.
To cope with this computational challenge, stochastic variational inference (SVI) [9] leverages a
Robbins-Monro algorithm [1] to optimize the ELBO via stochastic gradient ascent. When the data
are independent, the ELBO in Eq. (5) can be expressed as
$\mathcal{L} = \mathbb{E}_{q(\theta)}[\ln p(\theta)] - \mathbb{E}_{q(\theta)}[\ln q(\theta)] + \sum_{i=1}^{T} \big( \mathbb{E}_{q(x_i)}[\ln p(y_i, x_i|\theta)] - \mathbb{E}_{q(x_i)}[\ln q(x_i)] \big).$   (9)
If a single observation index $s$ is sampled uniformly, $s \sim \mathrm{Unif}(1, \ldots, T)$, the ELBO corresponding
to $(x_s, y_s)$ as if it were replicated $T$ times is given by

$\mathcal{L}_s = \mathbb{E}_{q(\theta)}[\ln p(\theta)] - \mathbb{E}_{q(\theta)}[\ln q(\theta)] + T \cdot \mathbb{E}_{q(x_s)}[\ln p(y_s, x_s|\theta)] - \mathbb{E}_{q(x_s)}[\ln q(x_s)],$   (10)
and it is clear that E_s[L_s] = L. At each iteration n of the SVI algorithm, a data point y_s is sampled and its local q*(x_s) is computed given the current estimate of the global variational parameters w_n. Next, the global update is performed via a noisy, unbiased gradient step (E_s[∇_w L_s] = ∇_w L).
When all pairs of distributions in the model are conditionally conjugate, it is cheaper to compute the stochastic natural gradient ∇̃_w L_s, which additionally accounts for the information geometry of the distribution [9]. The resulting stochastic natural gradient step with step-size ρ_n is:

w_{n+1} = w_n + ρ_n ∇̃_w L_s(w_n).    (11)

We show the form of ∇̃_w L_s in Sec. 3.2, specifically in Eq. (13), with details in the Supplement.
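For conditionally conjugate models the natural gradient takes the simple form u + T · E_{q(x_s)}[t(x_s, y_s)] − w [9], so the step in Eq. (11) reduces to a convex combination of the old parameter and a noisy target. A minimal sketch (with our own variable names) follows.

def svi_step(w, u, expected_stats, T, rho):
    # Natural gradient for conjugate models: grad = u + T * expected_stats - w,
    # so one step of Eq. (11) is a convex combination of w and a noisy target.
    return (1.0 - rho) * w + rho * (u + T * expected_stats)

A schedule such as ρ_n = (1 + n)^{−κ} with κ ∈ (0.5, 1] satisfies the Robbins-Monro step-size conditions discussed in Sec. 3.2.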
3 Stochastic variational inference for HMMs
The batch VB algorithm of Sec. 2.2 becomes prohibitively expensive as the length of the chain T becomes large. In particular, the forward-backward algorithm in the local step takes O(K²T) time. Instead, we turn to a subsampling approach, but naively applying SVI from Sec. 2.3 fails in the HMM setting: decomposing the sum over local variables into a sum of independent terms as in Eq. (9) ignores crucial transition counts, equivalent to making a full mean-field approximation.
Extending SVI to HMMs requires additional considerations due to the dependencies between the observations. It is clear that subchains of consecutive observations rather than individual observations
are necessary to capture the transition structure (see Sec. 3.1). We show that if the local variables
of each subchain can be exactly optimized, then stochastic gradients computed on subchains can be
scaled to preserve unbiased estimates of the full gradient (see Sec. 3.2).
Unfortunately, as we show in Sec. 3.3, the local step becomes approximate due to edge effects: local variables are incognizant of nodes outside of the subchain during the forward-backward pass. Although an exact scheme requires message passing along the entire chain, we harness the memory decay of the latent Markov chain to guarantee that local state beliefs in each subchain form an ε-approximation q^ε(x) to the full-data beliefs q*(x). We achieve these approximations by adaptively buffering the subchains with extra observations based on current global parameter estimates. We then prove that for ε sufficiently small, the noisy gradient computed using q^ε(x) corresponds to an ascent direction in L, guaranteeing convergence of our algorithm to a local optimum. We refer to our algorithm, which is outlined in Alg. 1, as SVIHMM.
Algorithm 1 Stochastic Variational Inference for HMMs (SVIHMM)
1: Initialize variational parameters (w_0^A, w_0^φ) and choose stepsize schedule ρ_n, n = 1, 2, . . .
2: while (convergence criterion is not met) do
3:   Sample a subchain y^S ⊂ {y_1, . . . , y_T} with S ∼ p(S)
4:   Local step: Compute π̃, Ã, p̃^S and run q(x^S) = ForwardBackward(y^S, π̃, Ã, p̃^S)
5:   Global update: w_{n+1} = w_n(1 − ρ_n) + ρ_n (u + c^T E_{q(x^S)}[t(x^S, y^S)])
6: end while
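A minimal sketch of one SVIHMM iteration follows, reusing forward_backward from the earlier sketch and restricted to the transition block w^A for brevity. The expected_logs callable (mapping variational parameters to the expected-log parameters of Eq. (8)) and the variable names are our own simplifying assumptions; rng is a numpy Generator, e.g. np.random.default_rng().

import numpy as np

def svihmm_step(w_A, u_A, y, L, rho, expected_logs, rng):
    """One iteration of Algorithm 1, updating only the transition block w_A.
    expected_logs(w_A, y_S) -> (log_pi, log_A, log_lik) is assumed to return
    the expected-log parameters of Eq. (8) for the sampled subchain."""
    T = len(y)
    start = rng.integers(0, T - L + 1)          # uniform p(S) over windows
    post, pair = forward_backward(*expected_logs(w_A, y[start:start + L]))
    c_A = (T - L + 1.0) / (L - 1.0)             # batch factor c^A (Sec. 3.2)
    trans_stats = pair.sum(axis=0)              # expected transition counts
    # emission parameters are updated analogously with c_phi = (T - L + 1) / L
    return (1.0 - rho) * w_A + rho * (u_A + c_A * trans_stats)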
3.1 ELBO for subsets of data
Unlike the independent data case (Eq. (9)), the local term in the HMM setting decomposes as

ln p(y, x|θ) = ln π(x_1) + Σ_{t=2}^T ln A_{x_{t−1},x_t} + Σ_{t=1}^T ln p(y_t|x_t).    (12)
Because of the paired terms in the first sum, it is necessary to consider consecutive observations to learn transition structure. For the SVIHMM algorithm, we define our basic sampling unit as subchains y^S = (y_1^S, . . . , y_L^S), where S refers to the associated indices. We denote the ELBO restricted to y^S as L^S, and the associated natural gradient as ∇̃_w L^S.
3.2 Global update
We detail the global update assuming we have optimized q*(x) exactly (i.e., as in the batch setting), although this assumption will be relaxed as discussed in Sec. 3.3. Paralleling Sec. 2.3, the global SVIHMM step involves updating the global variational parameters w via stochastic (natural) gradient ascent based on q*(x^S), the beliefs corresponding to our current subchain S.
Recall from Eq. (10) that the original SVI algorithm maintains E_s[∇̃_w L_s] = ∇̃_w L by scaling the gradient based on an individual observation s by the total number of observations T. In the HMM case, we analogously derive a batch factor vector c = (c^A, c^φ) such that

E_S[∇̃_w L^S] = ∇̃_w L  with  ∇̃_w L^S = u + c^T E_{q*(x^S)}[t(x^S, y^S)] − w.    (13)
The specific form of Eq. (13) for Gaussian emissions is in the Supplement. Now, the Robbins-Monro average in Eq. (11) can be written as

w_{n+1} = w_n(1 − ρ_n) + ρ_n (u + c^T E_{q*(x^S)}[t(x^S, y^S)]).    (14)
When the noisy natural gradients ∇̃_w L^S are independent and unbiased estimates of the true natural gradient, the iterates in Eq. (14) converge to a local maximum of L under mild regularity conditions as long as the step-sizes ρ_n satisfy Σ_n ρ_n² < ∞ and Σ_n ρ_n = ∞ [2, 9]. In our case, the noisy gradients are necessarily correlated even for independently sampled subchains due to dependence between the observations (y_1, . . . , y_T). However, as detailed in [20], unbiasedness suffices for convergence of Eq. (14) to a local mode.
Batch factor. Recalling our assumption of being at stationarity, E_{q(π)}[ln π(x_1)] = E_{q(π)}[ln π(x_i)] for all i. If we sample subchains from the uniform distribution over subchains of length L, denoted p(S), then we can write

E_S[ E_q[ln p(y^S, x^S|θ)] ] ≈ p(S) E_q[ Σ_{t=1}^{T−L+1} ln π(x_t) + (L − 1) Σ_{t=2}^T ln A_{x_{t−1},x_t} + L Σ_{t=1}^T ln p(y_t|x_t) ],    (15)
where the expectation is with respect to (π, A, φ); this is detailed in the Supplement. The approximate equality in Eq. (15) arises because while most transitions appear in L − 1 subchains, those near the endpoints of the full chain do not; e.g., x_1 and x_T appear in only one subchain. This error becomes negligible as the length of the HMM increases. Since p(S) is uniform over all length-L subchains, by linearity of expectation the batch factor c = (c^A, c^φ) is given by c^A = (T − L + 1)/(L − 1) and c^φ = (T − L + 1)/L. Other choices of p(S) can be used by considering the appropriate version of Eq. (15) analogously to [12], generally with a batch factor c_S varying with each subset y^S.
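The two scalings are easy to sanity-check numerically; the worked example below (ours, not from the paper) evaluates the factors for the uniform-window case: each window holds L − 1 transitions and L emissions, and the factors rescale these counts to full-chain totals.

def batch_factors(T, L):
    # c = (c_A, c_phi) for uniform p(S) over length-L subchains (Sec. 3.2)
    return (T - L + 1) / (L - 1), (T - L + 1) / L

c_A, c_phi = batch_factors(T=10_000, L=20)   # -> (approx. 525.3, 499.05)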
3.3 Local update
The optimal SVIHMM local variational distribution arises just as in the batch case of Eq. (7), but with time indices restricted to the length-L subchain y^S:

q*(x^S) ∝ exp( E_{q(A)}[ln π(x_1^S)] + Σ_{ℓ=2}^L E_{q(A)}[ln A_{x_{ℓ−1}^S, x_ℓ^S}] + Σ_{ℓ=1}^L E_{q(φ)}[ln p(y_ℓ^S|x_ℓ^S)] ).    (16)
To compute these local beliefs, we use our current q(A) and q(φ), which have been informed by all previous subchains, to form π̃, Ã, and p̃^S = {p̃(y_ℓ^S|x_ℓ^S = k), ∀k, ℓ = 1, . . . , L}, with these parameters defined as in the batch case. We then use these parameters in a forward-backward algorithm detailed in the Supplement. However, this message passing produces only an approximate optimization due to loss of information incurred at the ends of the subchain. Specifically, for y^S = (y_t, . . . , y_{t+L}), the forward messages coming from y_1, . . . , y_{t−1} are not available to y_t, and similarly the backward messages from y_{t+L+1}, . . . , y_T are not available to y_{t+L}.
Recall our assumption in the global update step that q*(x^S) corresponds to a subchain of the full-data optimal beliefs q*(x). Here, we see that this assumption is assuredly false; instead, we analyze the implications of using approximate local subchain beliefs and aim to ameliorate the edge effects.
Buffering subchains. To cope with the subchain edge effects, we augment the subchain S with enough extra observations on each end so that the local state beliefs q(x_i), i ∈ S, are within an ε-ball of q*(x_i), those had we considered the entire chain. The practicality of this approach arises from the approximate finite memory of the process. In particular, consider performing a forward-backward pass on (x_{1−τ}^S, . . . , x_{L+τ}^S), leading to approximate beliefs q̃^τ(x_i). Given ε > 0, define τ* as the smallest buffer length τ such that

max_{i∈S} ||q̃^τ(x_i) − q*(x_i)||_1 ≤ ε.    (17)

The τ* that satisfies Eq. (17) determines the number of observations used to buffer the subchain. After improving the subchain beliefs, we discard q̃^τ(x_i), i ∈ buffer, prior to the global update. As will be seen in Sec. 4, in practice the necessary τ* is typically very small relative to the lengthy observation sequences of interest.
Buffering subchains is related to splash belief propagation (BP) for parallel inference in undirected graphical models, where the belief at any given node is monitored based on locally-aware message passing in order to maintain a good approximation to the true belief [21]. Unlike splash BP, we embed the buffering scheme inside an iterative procedure for updating both the local latent structure and the global parameters, which affects the ε-approximation in future iterations. Likewise, we wish to maintain the approximation on an entire subchain, not just at a single node.

Even in settings where parameters θ are known, as in splash BP, analytically choosing τ is generally infeasible. As such, we follow the approach of splash BP to select an approximate τ. We then go further by showing that SVIHMM still converges using approximate messages within an uncertain parameter setting where θ is learned simultaneously with the state sequence x.
Specifically, we approximate τ by monitoring the change in belief residuals with a sub-routine GrowBuf, outlined in Alg. 2, that iteratively expands a buffer q^old → q^new around a given subchain y^S. GrowBuf terminates when all belief residuals satisfy

max_{i∈S} ||q^new(x_i) − q^old(x_i)||_1 ≤ ε.    (18)
The GrowBuf sub-routine can be computed efficiently due to (1) monotonicity of the forward and backward messages, so that only residuals at the endpoints, q(x_1^S) and q(x_L^S), need be considered, and (2) the reuse of computations. Specifically, the forward-backward pass can be rooted at the midpoint of y^S so that messages to the endpoints can be efficiently propagated, and vice versa [22]. Furthermore, choosing ε sufficiently small guarantees that the noisy natural gradient lies in the same half-plane as the true natural gradient, a sufficient condition for maintaining convergence when using approximate gradients [23]; the proof is presented in the Supplement.
Algorithm 2 GrowBuf procedure
1: Input: subchain S, min buffer length u ∈ Z₊, error tolerance ε > 0
2: Initialize q^old(x^S) = ForwardBackward(y^S, π̃, Ã, p̃^S) and set S^old = S
3: while true do
4:   Grow buffer S^new by extending S^old by u observations in each direction
5:   q^new(x^{S^new}) = ForwardBackward(y^{S^new}, π̃, Ã, p̃^{S^new}), reusing messages from S^old
6:   if ||q^new(x^S) − q^old(x^S)|| < ε then
7:     return q*(x^S) = q^new(x^S)
8:   end if
9:   Set S^old = S^new and q^old = q^new
10: end while
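The following sketch mirrors Alg. 2 in Python. Here run_fb is a hypothetical callable wrapping the forward-backward pass on a set of indices and returning per-position marginals, and we add a guard for when the buffer reaches the ends of the chain (at which point the beliefs are exact).

import numpy as np

def grow_buffer(T, S, u, eps, run_fb):
    """Sketch of Alg. 2. S is a contiguous index array; run_fb(indices)
    returns marginals q(x_i) (len(indices) x K) under pi~, A~, p~."""
    lo, hi = S[0], S[-1]
    q_old = run_fb(np.arange(lo, hi + 1))            # beliefs on S, no buffer
    while True:
        lo, hi = max(0, lo - u), min(T - 1, hi + u)
        q_all = run_fb(np.arange(lo, hi + 1))
        q_new = q_all[S[0] - lo : S[0] - lo + len(S)]  # restrict to S
        if np.abs(q_new - q_old).sum(axis=1).max() <= eps:
            return q_new                              # residual test of Eq. (18)
        if lo == 0 and hi == T - 1:
            return q_new                              # whole chain: beliefs exact
        q_old = q_new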
3.4 Minibatches for variance mitigation and their effect on computational complexity
Stochastic gradient algorithms often benefit from sampling multiple observations in order to reduce the variance of the gradient estimates at each iteration. We use a similar idea in SVIHMM by sampling a minibatch B = (y^{S_1}, . . . , y^{S_M}) consisting of M subchains. If the latent Markov chain tends to dwell in one component for extended periods, sampling one subchain may only contain information about a select number of states observed in that component. Increasing the length of this subchain may only lead to redundant information from this component. In contrast, using a minibatch of many smaller subchains may discover disparate components of the chain at comparable computational cost, accelerating learning and leading to a better local optimum. However, subchains must be sufficiently long to be informative of transition dynamics. In this setting, the local step on each subchain is identical; summing over subchains in the minibatch yields the gradient update:

ŵ_B = Σ_{S∈B} c^T E_{q(x^S)}[t(x^S, y^S)],    w_{n+1} = w_n(1 − ρ_n) + ρ_n (u + ŵ_B / |B|).
We see that the computational complexity of SVIHMM is O(K²(L + 2τ)M), leading to significant efficiency gains compared to O(K²T) in batch inference when (L + 2τ)M ≪ T.
4 Experiments
We evaluate the performance of SVIHMM compared to batch VB on synthetic experiments designed to illustrate the trade-off between the choice of subchain length L and the number of subchains per minibatch M. We also demonstrate the utility of GrowBuf. We then apply our algorithm to gene segmentation in a large human chromatin data set.

Table 1: Runtime and predictive log-probability (without GrowBuf) on RC data.

⌊L/2⌋                    100              500              1000             batch
Runtime (sec.)           2.74 ± 0.001     11.79 ± 0.004    23.17 ± 0.006    1240.73 ± 0.370
Avg. iter. time (sec.)   0.03 ± 0.000     0.12 ± 0.000     0.23 ± 0.000     248.15 ± 0.074
log-predictive           −5.915 ± 0.004   −5.850 ± 0.000   −5.850 ± 0.000   −5.840 ± 0.000
Synthetic data. We create two synthetic datasets with T = 10,000 observations and K = 8 latent states. The first, called diagonally dominant (DD), illustrates the potential benefit of large M, the number of sampled subchains per minibatch. The Markov chain heavily self-transitions, so that most subchains contain redundant information with observations generated from the same latent state. Although transitions are rarely observed, the emission means are set to be distinct, so that this example is likelihood-dominated and highly identifiable. Thus, fixing a computational budget, we expect large M to be preferable to large L, covering more of the observation sequence and avoiding poor local modes arising from redundant information.

The second dataset we consider contains two reversed cycles (RC): the Markov chain strongly transitions through states 1 → 2 → 3 → 1 and 5 → 7 → 6 → 5, with a small probability of transitioning between the cycles via bridge states 4 and 8. The emission means for the two cycles are very similar but occur in reverse order with respect to the transitions. Transition information from observing long enough dynamics is thus crucial to disambiguate states 1, 2, 3 from states 5, 6, 7, and a large enough L is imperative. The Supplement contains details for generating both synthetic datasets.
We compare SVIHMM to batch VB on these two synthetic examples. For each parameter setting, we ran 20 random restarts of SVIHMM for 100 iterations and batch VB until convergence of the ELBO. A forgetting rate κ parametrizes the step sizes ρ_n = (1 + n)^{−κ}. We fix the total number of observations L × M used per iteration of SVIHMM, such that increasing M implies decreasing L (and vice versa).
In Fig. 1(a) we compare ||A − Â||_F, where A is the true transition matrix and Â its learned variational mean. We see trends one would expect: the small-L, large-M settings achieve better performance for the DD example, but the opposite holds for RC, with ⌊L/2⌋ = 1 significantly underperforming.
(Of course, allowing large L and M is always preferable, except computationally.) Under appropriate settings in both cases, we achieve comparable performance to batch VB. In Fig. 1(b), we see similar trends in terms of predictive log-probability, holding out 10% of the observations as a test set and using 5-fold cross validation. Here, we actually notice that SVIHMM often achieves higher predictive log-probability than batch VB, which is attributed to the fact that stochastic algorithms can find better local modes than their non-random counterparts.

A timing comparison of SVIHMM to batch VB with T = 3 million is presented in Table 1. All settings of SVIHMM run faster than even a single iteration of batch VB, with only a negligible change in predictive log-likelihood. Further discussion of these timing results is in the Supplement.
Motivated by the demonstrated importance of the choice of L, we now turn to examine the impact of the GrowBuf routine via predictive log-probability. In Fig. 1(b), we see a noticeable improvement for small-L settings when GrowBuf is incorporated (the dashed lines in Fig. 1(b)). In particular, the RC example is now learning the dynamics of the chain even with ⌊L/2⌋ = 1, which was not possible without buffering. GrowBuf thus provides robustness by guarding against a poor choice of L. We note that the buffer routine does not overextend subchains, on average growing by only ≈ 8 observations with ε = 1 × 10⁻⁶. Since the number of observations added is usually small, GrowBuf does not significantly add to the per-iteration computational cost (see the Supplement).
Human chromatin segmentation. We apply the SVIHMM algorithm to a massive human chromatin dataset provided by the ENCODE project [24]. This data was studied in [25] with the goal of unsupervised pattern discovery via segmentation of the genome. Regions sharing the same labels have certain common properties in the observed data, and because the labeling at each position is unknown but influenced by the label at the previous position, an HMM is a natural model [26].
[Figure 1: four panels over the diagonally dominant (Diag. Dom.) and reversed cycles (Rev. Cycles) examples. Panel (a): ||A − Â||_F versus ⌊L/2⌋ (log scale) for forgetting rates κ ∈ {0.1, 0.3, 0.5, 0.7}. Panel (b): held-out log-probability versus iteration for ⌊L/2⌋ ∈ {1, 3, 10}, with GrowBuffer off/on.]

Figure 1: (a) Transition matrix error varying L with L × M fixed. (b) Effect of incorporating GrowBuf. Batch results denoted by a horizontal red line in both figures.
We were provided with 250 million observations consisting of twelve assays carried out in the chronic myeloid leukemia cell line K562. We analyzed the data using SVIHMM on an HMM with 25 states and 12-dimensional Gaussian emissions. We compare our performance to the corresponding segmentation learned by an expectation maximization (EM) algorithm applied to a more flexible dynamic Bayesian network model (DBN) [27]. Due to the size of the dataset, the analysis of [27] requires breaking the chain into several blocks, severing long-range dependencies.

We assess performance by comparing the false discovery rate (FDR) of predicting active promoter elements in the sequence. The lowest (best) FDR achieved with SVIHMM over 20 random restart trials was .999026, using ⌊L/2⌋ = 2000, M = 50, and κ = 0.5¹, comparable to and slightly lower than the .999038 FDR obtained using DBN-EM on the severed data [27]. We emphasize that even when restricted to a simpler HMM model, learning on the full data via SVIHMM attains results similar to those of [27] with significant gains in efficiency. In particular, our SVIHMM runs require only under an hour for a fixed 100 iterations, the maximum iteration limit specified in the DBN-EM approach. In contrast, even with a parallelized implementation over the broken chain, the DBN-EM algorithm can take days. In conclusion, SVIHMM enables scaling to the entire dataset, allowing for a more principled approach by utilizing the data jointly.
5 Discussion
We have presented stochastic variational inference for HMMs, extending such algorithms from independent-data settings to handle time dependence. We elucidated the complications that arise when sub-sampling dependent observations and proposed a scheme to mitigate the error introduced by breaking dependencies. Our approach provides an adaptive technique with provable guarantees for convergence to a local mode. Further extensions of the algorithm in the HMM setting include adaptively selecting the length of meta-observations and parallelizing the local step when the number of meta-observations is large. Importantly, these ideas generalize to other settings and can be applied to Bayesian nonparametric time series models, general state space models, and other graph structures with spatial dependencies.
Acknowledgements
This work was supported in part by the TerraSwarm Research Center sponsored by MARCO and DARPA,
DARPA Grant FA9550-12-1-0406 negotiated by AFOSR, and NSF CAREER Award IIS-1350133. JX was
supported by an NDSEG fellowship. We also appreciate the data, discussions, and guidance on the ENCODE
project provided by Max Libbrecht and William Noble.
¹ Other parameter settings were explored.
References
[1] H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, 22(3):400–407, 1951.
[2] L. Bottou. Online algorithms and stochastic approximations. In Online Learning and Neural Networks. Cambridge University Press, 1998.
[3] L. Bottou. Large-scale machine learning with stochastic gradient descent. In International Conference on Computational Statistics, pages 177–187, August 2010.
[4] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM J. on Optimization, 19(4):1574–1609, January 2009.
[5] M. Welling and Y. W. Teh. Bayesian learning via stochastic gradient Langevin dynamics. In International Conference on Machine Learning, pages 681–688, 2011.
[6] D. Maclaurin and R. P. Adams. Firefly Monte Carlo: Exact MCMC with subsets of data. CoRR, abs/1403.5693, 2014.
[7] X. Wang and D. B. Dunson. Parallelizing MCMC via Weierstrass sampler. CoRR, abs/1312.4605, 2014.
[8] W. Neiswanger, C. Wang, and E. Xing. Asymptotically exact, embarrassingly parallel MCMC. CoRR, abs/1311.4780, 2014.
[9] M. D. Hoffman, D. M. Blei, C. Wang, and J. Paisley. Stochastic variational inference. Journal of Machine Learning Research, 14(1):1303–1347, May 2013.
[10] M. Bryant and E. B. Sudderth. Truly nonparametric online variational inference for hierarchical Dirichlet processes. In Advances in Neural Information Processing Systems, pages 2708–2716, 2012.
[11] T. Broderick, N. Boyd, A. Wibisono, A. C. Wilson, and M. I. Jordan. Streaming variational Bayes. In Advances in Neural Information Processing Systems, pages 1727–1735, 2013.
[12] P. Gopalan, D. M. Mimno, S. Gerrish, M. J. Freedman, and D. M. Blei. Scalable inference of overlapping communities. In Advances in Neural Information Processing Systems, pages 2258–2266, 2012.
[13] M. J. Johnson and A. S. Willsky. Stochastic variational inference for Bayesian time series models. In International Conference on Machine Learning, 2014.
[14] L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286, 1989.
[15] S. Frühwirth-Schnatter. Finite Mixture and Markov Switching Models. Springer Verlag, 2006.
[16] S. L. Scott. Bayesian methods for hidden Markov models: Recursive computing in the 21st century. Journal of the American Statistical Association, 97(457):337–351, March 2002.
[17] M. J. Beale. Variational Algorithms for Approximate Bayesian Inference. Ph.D. thesis, University College London, 2003.
[18] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, November 1999.
[19] C. M. Bishop. Pattern Recognition and Machine Learning. Springer Verlag, 2006.
[20] B. T. Polyak and Y. Tsypkin. Pseudo-gradient adaptation and learning algorithms. Automatics and Telemechanics, 3:45–68, 1973.
[21] J. Gonzalez, Y. Low, and C. Guestrin. Residual splash for optimally parallelizing belief propagation. In International Conference on Artificial Intelligence and Statistics, 2009.
[22] S. J. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Pearson Education, 2003.
[23] J. Nocedal and S. Wright. Numerical Optimization. Springer Series in Operations Research and Financial Engineering. Springer, 2006.
[24] ENCODE Project Consortium. An integrated encyclopedia of DNA elements in the human genome. Nature, 489(7414):57–74, September 2012.
[25] M. M. Hoffman, O. J. Buske, J. Wang, Z. Weng, J. A. Bilmes, and W. S. Noble. Unsupervised pattern discovery in human chromatin structure through genomic segmentation. Nature Methods, 9:473–476, 2012.
[26] N. Day, A. Hemmaplardh, R. E. Thurman, J. A. Stamatoyannopoulos, and W. S. Noble. Unsupervised segmentation of continuous genomic data. Bioinformatics, 23(11):1424–1426, 2007.
[27] M. M. Hoffman, J. Ernst, S. P. Wilder, A. Kundaje, R. S. Harris, M. Libbrecht, B. Giardine, P. M. Ellenbogen, J. A. Bilmes, E. Birney, R. C. Hardison, M. Dunham, I. Kellis, and W. S. Noble. Integrative annotation of chromatin elements from ENCODE data. Nucleic Acids Research, 41(2):827–841, 2013.
5,038 | 5,561 | Object Localization based on Structural SVM
using Privileged Information
Jan Feyereisl, Suha Kwak*, Jeany Son, Bohyung Han
Dept. of Computer Science and Engineering, POSTECH, Pohang, Korea
[email protected], {mercury3,jeany,bhhan}@postech.ac.kr
Abstract
We propose a structured prediction algorithm for object localization based on Support Vector Machines (SVMs) using privileged information. Privileged information provides useful high-level knowledge for image understanding and facilitates
learning a reliable model even with a small number of training examples. In our
setting, we assume that such information is available only at training time since it
may be difficult to obtain from visual data accurately without human supervision.
Our goal is to improve performance by incorporating privileged information into
ordinary learning framework and adjusting model parameters for better generalization. We tackle object localization problem based on a novel structural SVM
using privileged information, where an alternating loss-augmented inference procedure is employed to handle the term in the objective function corresponding to
privileged information. We apply the proposed algorithm to the Caltech-UCSD
Birds 200-2011 dataset, and obtain encouraging results suggesting further investigation into the benefit of privileged information in structured prediction.
1 Introduction
Object localization is often formulated as a binary classification problem, where a learned classifier
determines the existence or absence of a target object within a candidate window of every location,
size, and aspect ratio. Recently, a structured prediction technique using Support Vector Machine
(SVM) has been applied to this problem [1], where the optimal bounding box containing target object is obtained by a trained classifier. This approach provides a unified framework for detection and
post-processing (non-maximum suppression), and handles issues related to the object with variable
aspect ratios naturally. However, object localization is an inherently difficult task due to the large
amount of variations in objects and scenes, e.g., shape deformations, color variations, pose changes,
occlusion, view point changes, background clutter, etc. This issue is aggravated when the size of
training dataset is small.
More reliable model can be learned even with fewer training examples if additional high-level
knowledge about an object of interest is available during training. Such high-level knowledge is
called privileged information, which typically describes useful semantic properties of an object such
as parts, attributes, and segmentations. This idea corresponds to the Learning Using Privileged Information (LUPI) paradigm [3], which exploits the additional information to improve predictive
models in training but does not require the information for prediction. The LUPI framework has
been incorporated into SVM in the form of the SVM+ algorithm [4]. However, the applications of
SVM+ are often limited to binary classification problems [3, 4].
We propose a novel Structural SVM using privileged information (SSVM+) framework, shown in
Figure 1, and apply the algorithm to the problem of object localization. In this formulation, privileged information, e.g., parts, attributes and segmentations, are incorporated to learn a structured
* Current affiliation: INRIA-WILLOW Project, Paris, France; e-mail: [email protected]
[Figure 1 diagram: each training example pairs an image x_i in the visual space (SURF visual descriptors, keypoints, vocabulary, bag-of-words histogram) with privileged information x_i* (segmentation/parts/attributes) in the privileged space; SSVM+ learning alternates loss-augmented inference by ESS between the two spaces against the ground truth y_i; at test time, only the visual model is used to predict the output y.]

Figure 1: Overview of our object localization framework using privileged information. Unlike visual observations, privileged information is available only during training. We use attributes and segmentation masks of an object as privileged information to improve generalization of the trained model. To incorporate privileged information during training, we propose an extension of SSVM, called SSVM+, whose loss-augmented inference is performed by alternating Efficient Subwindow Search (ESS) [2].
prediction function for object localization. Note that high-level information is available only for training but not testing in this framework. Our algorithm employs an efficient branch-and-bound loss-augmented subwindow search procedure to perform the inference by a joint optimization in the original and privileged spaces during training. Since the additional information is not used in testing, the inference in the testing phase is the same as in the standard Structural SVM (SSVM) case. We evaluate our method by learning to localize birds in the Caltech-UCSD Birds 200-2011 (CUB-2011) dataset [5] and exploiting attributes and segmentation masks as privileged information in addition to standard visual features. The main contributions of our work are as follows:

- We introduce a novel framework for object localization exploiting privileged information that is not required or needed to be inferred at test time.
- We formulate an SSVM+ framework, where an alternating loss-augmented inference procedure for efficient subwindow search is incorporated to handle the privileged information together with the conventional visual features.
- Performance gains in localization and classification are achieved, especially with small training datasets.
Methods that exploit additional information have been discussed to improve models for image classification or search in the context of transfer learning [6, 7], learning with side information [8, 9, 10]
and domain adaptation [11], where underlying techniques rely on pair-wise constraints [8], multiple
kernels [9] or metric learning [9]. Zero-shot learning is an extreme framework, where the models
for unseen classes are constructed even without training data [12, 13]. Recent works often rely on
natural language processing techniques to handle pure textual description [14, 15].
Standard learning algorithms require many data to construct a robust model while zero-shot learning
does not need any training examples. LUPI framework is in the middle of traditional data-driven
learning and zero-shot learning since it aims to learn a good model with a small number of training
data by taking advantage of privileged information available at training time. Privileged information
has been considered in face recognition [16], facial feature detection [17], and event recognition
[18], but such works are still uncommon. Our work applies the LUPI framework to an object localization problem based on SSVM. The use of SSVMs for object localization is originally investigated
by [1]. More recently, [19, 20] employ SSVM as part of their localization procedure, however none
of them incorporate privileged information or similar idea. Recently, [21] presented the potential
benefit of SVM+ in object recognition task.
The rest of this paper is organized as follows. We first review the LUPI framework and SSVM
in Section 2, and our SSVM+ formulation for object localization is presented in Section 3. The
performance of our object localization algorithm is evaluated in Section 4.
2 Background

2.1 Learning Using Privileged Information
The LUPI paradigm [3, 4, 22, 23] is a framework for incorporating additional information during training that is not available at test time. The inclusion of such information is exploited to find a better model, which yields lower generalization error. Contrary to classical supervised learning, where pairs of data are provided, (x_1, y_1), . . . , (x_n, y_n), x_i ∈ X, y_i ∈ {−1, 1}, in the LUPI paradigm additional information x* ∈ X* is provided with each training example as well, i.e., (x_1, x_1*, y_1), . . . , (x_n, x_n*, y_n), x_i ∈ X, x_i* ∈ X*, y_i ∈ {−1, 1}. This information is, however, not required during testing. In both learning paradigms, the task is then to find among a collection of functions the one that best approximates the underlying decision function from the given data.
Specifically, we formulate object localization within a LUPI framework as learning a pair of functions h : X → Y and φ : X* → Y jointly, where only h is used for prediction. These functions, for example, map the space of images and attributes to the space of bounding box coordinates Y. The decision function h and the correcting function φ depend on each other by the following relation,

ℓ_X(h(x_i), y_i) ≤ ℓ_{X*}(φ(x_i*), y_i),    ∀ 1 ≤ i ≤ n,    (1)
where ℓ_X and ℓ_{X*} denote the empirical loss functions on the visual (X) and the privileged (X*) spaces, respectively. This inequality is inspired by the LUPI paradigm [3, 4, 22, 23], where for all training examples the model h is always corrected to have a smaller loss on data than the model φ has on privileged information. The constraint in Eq. (1) is meaningful when we assume that, for the same number of training examples, the combination of visual and privileged information provides a space to learn a better model than visual information alone.
To translate this general learning idea into practice, the SVM+ algorithm for binary classification has been developed [3, 4, 22]. The SVM+ algorithm replaces the slack variable ξ in the standard SVM formulation by a correcting function ξ = ⟨w*, x*⟩ + b*, which estimates its values from the privileged information. This results in the following formulation,

min_{w, w*, b, b*}  (1/2)||w||² + (γ/2)||w*||² + (C/n) Σ_{i=1}^n (⟨w*, x_i*⟩ + b*),    (2)

s.t.  y_i(⟨w, x_i⟩ + b) ≥ 1 − (⟨w*, x_i*⟩ + b*),    ⟨w*, x_i*⟩ + b* ≥ 0,    ∀ 1 ≤ i ≤ n,

in which each occurrence of ⟨w*, x_i*⟩ + b* plays the role of the slack ξ_i,
? 1 ? i ? n,
where the terms w? , x? and b? play the same role as w, x and b in the classical SVM, however
within the new correcting space X ? . Furthermore, ? denotes a regularization parameter for w? . It is
important to observe that the weight vector w depends not only on x but also on x? . For this reason
the function that replaces the slack ? is called the correcting function. As privileged information
is only used to estimate the values of the slacks, it is required only during training but not during
testing. Theoretical analysis [4] shows that the bound on the convergence rate of the above SVM+
algorithm could substantially improve upon standard SVM if suitable privileged information is used.
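For intuition, the quadratic program in Eq. (2) can be transcribed almost literally with a generic convex solver. The sketch below uses cvxpy purely as a didactic choice (the SVM+ literature typically uses specialized dual solvers) and assumes dense feature matrices X and X_star with labels y in {−1, +1}.

import cvxpy as cp
import numpy as np

def svm_plus(X, X_star, y, C=1.0, gamma=1.0):
    n, d = X.shape
    w, w_s = cp.Variable(d), cp.Variable(X_star.shape[1])
    b, b_s = cp.Variable(), cp.Variable()
    slack = X_star @ w_s + b_s        # correcting function replaces xi_i
    obj = cp.Minimize(0.5 * cp.sum_squares(w)
                      + 0.5 * gamma * cp.sum_squares(w_s)
                      + (C / n) * cp.sum(slack))
    cons = [cp.multiply(y, X @ w + b) >= 1 - slack, slack >= 0]
    cp.Problem(obj, cons).solve()
    return w.value, b.value           # only (w, b) is needed at test time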
2.2 Structural SVM (SSVM)
SSVMs discriminatively learn a weight vector w for a scoring function f : X × Y → R over the set of training input/output pairs. Once learned, the prediction function h is obtained by maximizing f over all possible y ∈ Y as follows:

ŷ = h(x) = arg max_{y∈Y} f(x, y) = arg max_{y∈Y} ⟨w, Ψ(x, y)⟩,    (3)
where Ψ : X × Y → R^d is the joint feature map that models the relationship between input x and structured output y. To learn the weight vector w, the following optimization problem (margin-rescaling) then needs to be solved:

min_{w,ξ}  (1/2)||w||² + (C/n) Σ_{i=1}^n ξ_i,    (4)

s.t.  ⟨w, δΨ_i(y)⟩ ≥ Δ(y_i, y) − ξ_i,    ∀ 1 ≤ i ≤ n, ∀y ∈ Y,
where δΨ_i(y) ≡ Ψ(x_i, y_i) − Ψ(x_i, y), and Δ(y_i, y) is a task-specific loss that measures the quality of the prediction y with respect to the ground-truth y_i. To obtain a prediction, we need to maximize Eq. (3) over the response variable y for a given input x. SSVMs are a general method for solving a variety of prediction tasks. For each application, the joint feature map Ψ, the loss function Δ, and an efficient loss-augmented inference technique need to be customized.
3 Object Localization with Privileged Information
We deal with object localization with privileged information: given a set of training images of
objects, their locations and their attribute and segmentation information, we want to learn a function
to localize objects of interest in yet unseen images. Unlike existing methods, our learned function
does not need explicit or even inferred attribute and segmentation information during prediction.
3.1 Structural SVM with Privileged Information (SSVM+)
We extend the above structured prediction problem to exploit privileged information. Recollecting Eq. (1), to learn the pair of interdependent functions h and φ, we learn to predict a structure y based on a training set of triplets, (x_1, x_1*, y_1), . . . , (x_n, x_n*, y_n), x_i ∈ X, x_i* ∈ X*, y_i ∈ Y, where X corresponds to various visual features, X* to attributes or segmentations, and Y is the space of all possible bounding boxes. Once learned, only the function h is used for prediction. It is obtained by maximizing the learned function over all possible joint features based on input x ∈ X and output y ∈ Y as in Eq. (3), identically to standard SSVMs.

On the other hand, to jointly learn h and φ, subject to the constraint in Eq. (1), we need to extend the SSVM framework substantially. The functions h and φ are characterized by the parameter vectors w and w*, respectively, as

h(x) = arg max_{y∈Y} ⟨w, Ψ(x, y)⟩    and    φ(x*) = arg max_{y*∈Y} ⟨w*, Ψ*(x*, y*)⟩.    (5)
To learn the weight vectors w and w* simultaneously, we propose a novel max-margin structured prediction framework called SSVM+ that incorporates the constraint in Eq. (1) and hence learns the two models jointly as follows:

min_{w, w*, ξ}  (1/2)||w||² + (γ/2)||w*||² + (C/n) Σ_{i=1}^n ξ_i,    (6)

s.t.  ⟨w, δΨ_i(y)⟩ + ⟨w*, δΨ*_i(y*)⟩ ≥ Δ̃(y_i, y, y*) − ξ_i,    ∀ 1 ≤ i ≤ n, ∀y, y* ∈ Y,
where δΨ*_i(y*) ≡ Ψ*(x_i*, y_i) − Ψ*(x_i*, y*), and the inequality in Eq. (1) is introduced via a surrogate task-specific loss Δ̃ derived from [23]. This surrogate loss is defined as

Δ̃(y_i, y, y*) = (1/ρ) Δ*(y_i, y*) + [Δ(y_i, y) − Δ*(y_i, y*)]_+,    (7)

where [t]_+ = max(t, 0) and ρ > 0 is a penalization parameter corresponding to the constraint in Eq. (1), and the task-specific loss functions Δ and Δ* are defined in Section 3.3. Through this surrogate loss, we can apply the inequality in Eq. (1) within the ordinary max-margin optimization framework.
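The surrogate itself is a two-line computation; a minimal sketch (with Δ and Δ* supplied by the losses of Section 3.3) is:

def surrogate_loss(delta, delta_star, rho):
    # Eq. (7): (1/rho) * Delta_star + [Delta - Delta_star]_+
    return delta_star / rho + max(delta - delta_star, 0.0)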
Our framework enforces that the model learned on attributes and segmentations (w*) always corrects the model trained on visual features (w). This results in a model with better generalization on visual features alone. Similar to SSVMs, we can tractably deal with the exponential number of possible constraints present in our problem via loss-augmented inference and optimization methods such as the cutting plane algorithm [24] or the more recent block-coordinate Frank-Wolfe method [25]. Pseudocode for solving Eq. (6) using the cutting plane method is presented in Algorithm 1.

Our formulation has a general form that follows the SSVM framework. This means that Eq. (6) is independent of the definitions of the joint feature map, task-specific loss, and loss-augmented inference. We can therefore apply our method to a variety of other problems in addition to object localization. All that is required is the definition of the three problem-specific components, which are also required in the standard SSVMs. As will be shown later, only the loss-augmented inference step becomes harder compared to SSVMs due to the inclusion of privileged information.
Algorithm 1 Cutting plane method for solving Eq. (6)
1: Input: (x_1, x_1*, y_1), . . . , (x_n, x_n*, y_n), C, γ, ρ, ε
2: S_i ← ∅ for all i = 1, . . . , n
3: repeat
4:   for i = 1, . . . , n do
5:     Set up surrogate task-specific loss (Eq. (7)):
6:       Δ̃(y_i, y, y*) = (1/ρ) Δ*(y_i, y*) + [Δ(y_i, y) − Δ*(y_i, y*)]_+
7:     Set up cost function (Eq. (12)):
8:       H(y, y*) = Δ̃(y_i, y, y*) − ⟨w, δΨ_i(y)⟩ − ⟨w*, δΨ*_i(y*)⟩
9:     Find cutting plane:
10:      (ŷ, ŷ*) = arg max_{y,y*∈Y} H(y, y*)
11:     Find value of current slack:
12:      ξ_i = max{0, max_{(y,y*)∈S_i} H(y, y*)}
13:     if H(ŷ, ŷ*) > ξ_i + ε then
14:       Add constraint to working set:
15:       S_i ← S_i ∪ {(ŷ, ŷ*)}
16:       (w, w*) ← optimize Eq. (6) over ∪_i S_i
17:     end if
18:   end for
19: until no S_i has changed during the iteration
3.2 Joint Feature Map
Our extended structured output regressor, SSVM+, estimates bounding box coordinates within target images by considering all possible bounding boxes. The structured output space is defined as Y ≡ {(ω, t, l, b, r) | ω ∈ {+1, −1}, (t, l, b, r) ∈ R⁴}, where ω denotes the presence/absence of an object and (t, l, b, r) correspond to the coordinates of the top, left, bottom, and right corners of a bounding box, respectively. To model the relationship between input and output variables, we define a joint feature map, encoding features in x to their bounding boxes defined by y. This is modeled as

Ψ(x_i, y) = x_i|_y,    (8)

where x|_y denotes the region of an image inside a bounding box with coordinates y. Identically, for the privileged space, we define another joint feature map, which, instead of on visual features, operates on the space of attributes aided by segmentation information as

Ψ*(x_i*, y*) = x_i*|_{y*}.    (9)

The definition of the joint feature map is problem specific, and we follow the method in [1] proposed for object localization. Implementation details about both joint feature maps are described in Section 4.2.
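A literal reading of Eq. (8) for the bag-of-words model is a histogram of the visual words whose keypoints fall inside the box. The sketch below follows that reading with our own variable names and omits any normalization that the feature pipeline of [1] may apply.

import numpy as np

def joint_feature(word_xy, word_id, box, n_words=3000):
    """Psi(x, y) of Eq. (8): codeword histogram restricted to the box
    y = (t, l, b, r); word_xy holds keypoint (row, col) coordinates and
    word_id the codebook index of each keypoint."""
    t, l, b, r = box
    inside = ((word_xy[:, 0] >= t) & (word_xy[:, 0] <= b) &
              (word_xy[:, 1] >= l) & (word_xy[:, 1] <= r))
    return np.bincount(word_id[inside], minlength=n_words).astype(float)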
3.3 Task-Specific Loss
To measure the level of discrepancy between the predicted output y and the true structured label y_i, we need to define a loss function that accurately measures such a level of disagreement. In our object localization problem, the following task-specific loss, based on the Pascal VOC overlap ratio [1], is employed in both spaces:

Δ(y_i, y) = 1 − area(y_i ∩ y)/area(y_i ∪ y)   if y_i^ω = y^ω = 1,
Δ(y_i, y) = 1 − (1/2)(y_i^ω · y^ω + 1)   otherwise,    (10)

where y_i^ω ∈ {+1, −1} denotes the presence (+1) or absence (−1) of an object in the i-th image. In the case y_i^ω = −1, Ψ(x|_y) = 0, where 0 is an all-zero vector. The loss is 0 when the bounding boxes defined by y_i and y are identical, and equal to 1 when they are disjoint or y_i^ω ≠ y^ω.
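A direct implementation of Eq. (10) is straightforward; the sketch below encodes each output as (omega, top, left, bottom, right) and treats degenerate boxes as zero-area (an assumption on our part).

def box_iou(a, b):
    # intersection-over-union of two (top, left, bottom, right) boxes
    t, l = max(a[0], b[0]), max(a[1], b[1])
    bo, r = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, bo - t) * max(0.0, r - l)
    area = lambda x: max(0.0, x[2] - x[0]) * max(0.0, x[3] - x[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def pascal_loss(y_i, y):
    # Eq. (10); y = (omega, top, left, bottom, right), omega in {+1, -1}
    if y_i[0] == 1 and y[0] == 1:
        return 1.0 - box_iou(y_i[1:], y[1:])
    return 1.0 - 0.5 * (y_i[0] * y[0] + 1)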
3.4 Loss-Augmented Inference
Due to the exponential number of constraints that arise during learning of Eq. (6) and the possibly very large search space Y dealt with during prediction, we require an efficient inference technique, which may differ between training and testing in the SSVM+ framework.
3.4.1 Prediction
The goal is to find the best bounding box given the learned weight vector w and the visual feature x. Privileged information is not available at testing time, and inference is performed on visual features only. Therefore, the same maximization problem as in standard SSVMs needs to be solved during prediction, which is given by

h(x) = arg max_{y∈Y} ⟨w, Ψ(x, y)⟩.    (11)

This maximization problem is over the space of bounding box coordinates. However, it involves a very large search space and therefore cannot be solved exhaustively. In the object localization task, the Efficient Subwindow Search (ESS) algorithm [2] is employed to solve the optimization problem efficiently.
3.4.2 Learning
Compared to the inference problem required during the prediction step shown in Eq. (11), the optimization of our main objective during training involves a more complex inference procedure. We need to perform the following maximization, with the surrogate loss and an additional term corresponding to the privileged space, during an iterative procedure:

(ŷ, ŷ*) = arg max_{y,y*∈Y} Δ̃(y_i, y, y*) − ⟨w, δΨ_i(y)⟩ − ⟨w*, δΨ*_i(y*)⟩
        = arg max_{y,y*∈Y} Δ̃(y_i, y, y*) + ⟨w, Ψ(x_i, y)⟩ + ⟨w*, Ψ*(x_i*, y*)⟩.    (12)

Note that ⟨w, Ψ(x_i, y_i)⟩ and ⟨w*, Ψ*(x_i*, y_i)⟩ are constants in Eq. (12) and do not affect the optimization. The problem in Eq. (12), called loss-augmented inference, is required during each iteration of the cutting plane method, which is used for learning the functions h and φ and hence the weight vectors w and w*.
We adopt an alternating approach for the inference, where we first solve for y* in the privileged space given the fixed solution y_c in the original space,

arg max_{y*∈Y} Δ̃(y_i, y_c, y*) + ⟨w*, Ψ*(x_i*, y*)⟩,    (13)

and subsequently perform the optimization in the original space while fixing y*_c,

arg max_{y∈Y} Δ̃(y_i, y, y*_c) + ⟨w, Ψ(x_i, y)⟩.    (14)
These two sub-procedures in Eq. (13) and (14) are repeated until convergence, and we obtain the final solutions w and w*. In the object localization task, both problems are solved by ESS [2], a branch-and-bound optimization technique, for which it is essential to derive upper bounds of the above objective functions over a set of rectangles from Y. Here we derive the upper bounds of only the surrogate loss terms in Eq. (7); the derivation for the other terms can be found in [2].
the surrogate loss terms in Eq. (7); the derivation for the other terms can be found in [2].
When the solution in the privileged space is fixed, we need to consider the upper bound of only
[? ? ?? ]+ to obtain the upper bound of the surrogate loss. Since [? ? ?? ]+ is a monotonically
increasing function of ?, its upper bound is derived directly from the upper bound of ?. Specifically,
the upper bound of ? is given by
?=1?
area(y i ? y)
miny?Y area(y i ? y)
?1?
,
area(y i ? y)
maxy?Y area(y i ? y)
and the upper bound of the surrogate loss with a fixed ?? is given by
miny?Y area(y i ? y)
? ?? .
[? ? ?? ]+ ? 1 ?
maxy?Y area(y i ? y)
+
(15)
(16)
When the original space is fixed, the problem is not straightforward, since the surrogate loss becomes a V-shaped function of Δ* when ρ > 1. In this case, we need to check the outputs of the function at both the upper and lower bounds of Δ*. The upper bound of Δ* is derived identically to that of Δ, and the lower bound of Δ* is given by

Δ* = 1 − area(y_i ∩ y*)/area(y_i ∪ y*) ≥ 1 − max_{y*∈Y} area(y_i ∩ y*) / min_{y*∈Y} area(y_i ∪ y*).    (17)
Let Δ*_u and Δ*_l be the upper and lower bounds of Δ*, respectively. Then the upper bound of the surrogate loss with a fixed Δ is given by

(1/ρ) Δ* + [Δ − Δ*]_+ ≤ max{ (1/ρ) Δ*_u + [Δ − Δ*_u]_+ ,  (1/ρ) Δ*_l + [Δ − Δ*_l]_+ }.    (18)
By identifying the bounds of the surrogate loss as in Eq. (17) and (18), we can optimize the objective
function in Eq. (12) through the alternating procedure based on the standard ESS algorithm.
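Putting the pieces together, the loss-augmented inference of Eq. (12) alternates two ESS calls until the pair of boxes stabilizes. The sketch below is schematic: ess_privileged and ess_visual stand in for branch-and-bound maximizers of Eqs. (13) and (14) equipped with the bounds in Eqs. (15)-(18), and the ground-truth initialization is our own assumption.

def alternating_inference(ess_visual, ess_privileged, y_init, max_iters=20):
    """Alternate Eq. (13) and Eq. (14). ess_privileged(y) returns the best
    y* with y fixed; ess_visual(y_star) returns the best y with y* fixed."""
    y = y_init
    y_star = ess_privileged(y)
    for _ in range(max_iters):
        y_new = ess_visual(y_star)
        y_star_new = ess_privileged(y_new)
        if y_new == y and y_star_new == y_star:
            break                      # fixed point reached
        y, y_star = y_new, y_star_new
    return y, y_star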
4 Experiments

4.1 Dataset
Empirical evaluation of our method is performed on the Caltech-UCSD Birds 2011 (CUB-2011)
[5] fine-grained categorization dataset. It contains 200 categories of different species of birds. The
location of each bird is specified using a bounding box. In addition, a large collection of privileged
information is provided in the form of 15 different part annotations, 312 attributes and segmentation
masks, manually labeled in each image by human annotators. Each category contains 30 training
images and around 30 testing images.
4.2 Visual and Privileged Feature Extraction
Our feature descriptor in the visual space adopts the bag-of-visual-words model based on Speeded Up Robust Features (SURF) [26], which is almost identical to [2]. The dimensionality of the visual feature descriptors is 3,000. We additionally employ attributes and segmentation masks as privileged information. The information about attributes is described by a 312-dimensional vector, whose elements correspond to individual attributes and take binary values depending on their visibility and relevance. We use segmentation information to inpaint segmentation masks into each image, which results in an image containing the original background pixels with uniform foreground pixels. Subsequently, we extract the 3,000-dimensional feature descriptor based on the same bag-of-visual-words model as in the visual space. The intuition behind this approach is to generate a set of features that provide a guaranteed strong response in the foreground region. This response is to be stronger than in the original space, hence allowing for easier localization in the privileged space. For each sub-window, we create a histogram based on the presence of attributes and the frequency of the privileged codewords corresponding to the augmented visual space.
4.3 Evaluation
To evaluate our SSVM+ algorithm, we compare it against the original SSVM localization method by Blaschko and Lampert [1] in several training scenarios. In all experiments we tune the hyperparameters $C$, $\gamma$ and $\alpha$ on a $4 \times 4 \times 4$ grid spanning the values $[2^{-8}, \ldots, 2^{5}]$. For SSVM, only the one-dimensional search space corresponding to the parameter $C$ is searched.
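For illustration, such a grid search can be sketched as follows; the four exponents per hyperparameter and the helper `train_and_score` are assumptions, not the authors' code:

```python
import itertools

grid = [2.0 ** e for e in (-8, -4, 1, 5)]  # assumed 4 values spanning [2^-8, 2^5]
best_params = max(itertools.product(grid, grid, grid),
                  key=lambda p: train_and_score(C=p[0], gamma=p[1], alpha=p[2]))
```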
We first investigate the influence of small training sample sizes on localization performance. For this setting, we loosely adopt the experimental setup of [27]. For training, we focus on 14 bird categories corresponding to 2 major bird groups. We train four different models, each trained on a distinct number of training images, namely $n_c = \{1, 5, 10, 20\}$ images per class, resulting in $n = \{14, 70, 140, 280\}$ training images, respectively. Additionally, we train a model on $n = 1000$ images, corresponding to 100 bird classes, each with 10 training images. As a validation set, we use 500 training images chosen at random from categories other than the ones used for training. For testing, we use all testing images of the entire CUB-2011 dataset. Table 1 presents the results of this experiment. In all cases, our method outperforms the SSVM method in both average overlap and average detection (PASCAL VOC overlap ratio > 50%). This implies that for
Table 1: Comparison between our SSVM+ and the standard SSVM [1] by varying the number of classes and training images.

(A) Overlap
# training images   14     70     140    280    1000
SSVM [1]            38.2   43.8   42.3   44.9   48.1
SSVM+               41.3   45.7   45.8   46.9   49.0
Diff.               +3.1   +1.9   +3.5   +2.0   +0.9

(B) Detection
# training images   14     70     140    280    1000
SSVM [1]            25.9   37.3   34.3   39.8   46.2
SSVM+               32.6   42.4   41.5   43.3   48.1
Diff.               +6.7   +5.1   +7.2   +3.5   +1.9
Figure 2: Comparison results of average overlap (A) and detection results (B) between our structured
learning with privileged information (SSVM+) and the standard structured learning (SSVM) on 100
classes of the CUB-2011 dataset. The bird classes aligned in x-axis are sorted by the differences of
two methods shown in black area in a non-increasing order.
the same number of training examples, our method consistently converges to a model with better generalization performance than SSVM. A previously observed trend [4, 23] of decreasing benefit of privileged information with increasing training set sizes is also apparent here.
To evaluate the benefit of SSVM+ in more depth, we illustrate average overlap and detection performance on all 100 classes in Figure 2, where 10 images per class are used for training with 14 classes (n = 140). In most bird classes, SSVM+ shows relatively better performance in both overlap ratio and detection rate. Note that each class typically has 30 testing images, but some classes have as few as 18 images. The average overlap ratio is 45.8% and the average detection is 12.1 (41.5%).
5 Discussion
We presented a structured prediction algorithm for object localization based on SSVM with privileged information. Our algorithm is the first method for incorporating privileged information within a structured prediction framework. Our method allows the use of various types of additional information during training to improve generalization performance at testing time. We applied our proposed method to an object localization problem, which is solved by a novel structural SVM formulation using privileged information. We employed an alternating loss-augmented inference procedure to handle the term in the objective function corresponding to privileged information. We applied the proposed algorithm to the Caltech-UCSD Birds 200-2011 dataset and obtained encouraging results, suggesting the potential benefit of exploiting additional information that is available during training only. Unfortunately, the benefit of privileged information tends to decrease as the number of training examples increases; our SSVM+ framework would therefore be particularly useful when only few training examples exist or when the annotation cost is very high.
Acknowledgement
This work was supported partly by the ICT R&D program of MSIP/IITP [14-824-09-006; 14-824-09-014] and the IT R&D Program of MKE/KEIT (10040246).
References
[1] Matthew B. Blaschko and Christoph H. Lampert. Learning to localize objects with structured output regression. In ECCV, pages 2–15, 2008.
[2] Christoph H. Lampert, Matthew B. Blaschko, and Thomas Hofmann. Efficient subwindow search: A branch and bound framework for object localization. TPAMI, 31(12):2129–2142, 2009.
[3] Vladimir Vapnik, Akshay Vashist, and Natalya Pavlovitch. Learning using hidden information: Master-class learning. In NATO Workshop on Mining Massive Data Sets for Security, pages 3–14, 2008.
[4] Vladimir Vapnik and Akshay Vashist. A new learning paradigm: Learning using privileged information. Neural Networks, 22(5-6):544–557, 2009.
[5] Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Technical report, California Institute of Technology, 2011.
[6] Lixin Duan, Dong Xu, Ivor W. Tsang, and Jiebo Luo. Visual event recognition in videos by learning from web data. TPAMI, 34(9):1667–1680, 2012.
[7] Lixin Duan, Ivor W. Tsang, and Dong Xu. Domain transfer multiple kernel learning. TPAMI, 34(3):465–479, 2012.
[8] Qiang Chen, Zheng Song, Yang Hua, Zhongyang Huang, and Shuicheng Yan. Hierarchical matching with side information for image classification. In CVPR, pages 3426–3433, 2012.
[9] Hao Xia, Steven C.H. Hoi, Rong Jin, and Peilin Zhao. Online multiple kernel similarity learning for visual search. TPAMI, 36(3):536–549, 2013.
[10] Gang Wang, David Forsyth, and Derek Hoiem. Improved object categorization and detection using comparative object similarity. TPAMI, 35(10):2442–2453, 2013.
[11] Wen Li, Lixin Duan, Dong Xu, and Ivor W. Tsang. Learning with augmented features for supervised and semi-supervised heterogeneous domain adaptation. TPAMI, 36(6):1134–1148, 2013.
[12] Christoph H. Lampert, Hannes Nickisch, and Stefan Harmeling. Learning to detect unseen object classes by between-class attribute transfer. In CVPR, 2009.
[13] Ali Farhadi, Ian Endres, Derek Hoiem, and David Forsyth. Describing objects by their attributes. In CVPR, 2009.
[14] Mohamed Elhoseiny, Babak Saleh, and Ahmed Elgammal. Write a classifier: Zero-shot learning using purely textual descriptions. In ICCV, 2013.
[15] Richard Socher, Milind Ganjoo, Christopher D. Manning, and Andrew Y. Ng. Zero-shot learning through cross-modal transfer. In NIPS, pages 935–943, 2013.
[16] Lior Wolf and Noga Levy. The SVM-minus similarity score for video face recognition. In CVPR, 2013.
[17] Heng Yang and Ioannis Patras. Privileged information-based conditional regression forest for facial feature detection. In IEEE FG, pages 1–6, 2013.
[18] Xiaoyang Wang and Qiang Ji. A novel probabilistic approach utilizing clip attribute as hidden knowledge for event recognition. In ICPR, pages 3382–3385, 2012.
[19] Catalin Ionescu, Liefeng Bo, and Cristian Sminchisescu. Structural SVM for visual localization and continuous state estimation. In ICCV, pages 1157–1164, 2009.
[20] Qieyun Dai and Derek Hoiem. Learning to localize detected objects. In CVPR, pages 3322–3329, 2012.
[21] Viktoriia Sharmanska, Novi Quadrianto, and Christoph H. Lampert. Learning to rank using privileged information. In ICCV, pages 825–832, 2013.
[22] Vladimir Vapnik. Estimation of Dependences Based on Empirical Data. Springer, 2006.
[23] Dmitry Pechyony and Vladimir Vapnik. On the theory of learning with privileged information. NIPS, pages 1894–1902, 2010.
[24] Ioannis Tsochantaridis, Thorsten Joachims, Thomas Hofmann, and Yasemin Altun. Large margin methods for structured and interdependent output variables. JMLR, 6:1453–1484, 2005.
[25] Simon Lacoste-Julien, Martin Jaggi, Mark Schmidt, and Patrick Pletscher. Block-coordinate Frank-Wolfe optimization for structural SVMs. In ICML, 2013.
[26] Herbert Bay, Andreas Ess, Tinne Tuytelaars, and Luc Van Gool. Speeded-up robust features (SURF). CVIU, 110(3):346–359, 2008.
[27] Ryan Farrell, Om Oza, Ning Zhang, Vlad I. Morariu, Trevor Darrell, and Larry S. Davis. Birdlets: Subordinate categorization using volumetric primitives and pose-normalized appearance. In ICCV, pages 161–168, 2011.
| 5561 |@word … |
5,039 | 5,562 | Efficient Inference of Continuous Markov Random
Fields with Polynomial Potentials
Shenlong Wang
University of Toronto
Alexander G. Schwing
University of Toronto
Raquel Urtasun
University of Toronto
[email protected]
[email protected]
[email protected]
Abstract
In this paper, we prove that every multivariate polynomial with even degree can
be decomposed into a sum of convex and concave polynomials. Motivated by
this property, we exploit the concave-convex procedure to perform inference on
continuous Markov random fields with polynomial potentials. In particular, we
show that the concave-convex decomposition of polynomials can be expressed as
a sum-of-squares optimization, which can be efficiently solved via semidefinite
programing. We demonstrate the effectiveness of our approach in the context
of 3D reconstruction, shape from shading and image denoising, and show that
our method significantly outperforms existing techniques in terms of efficiency as
well as quality of the retrieved solution.
1 Introduction
Graphical models are a convenient tool to illustrate the dependencies among a collection of random
variables with potentially complex interactions. Their widespread use across domains from computer vision and natural language processing to computational biology underlines their applicability.
Many algorithms have been proposed to retrieve the minimum energy configuration, i.e., maximum
a-posteriori (MAP) inference, when the graphical model describes energies or distributions defined
on a discrete domain. Although this task is NP-hard in general, message passing algorithms [16] and
graph-cuts [4] can be used to retrieve the global optimum when dealing with tree-structured models
or binary Markov random fields composed out of sub-modular energy functions.
In contrast, graphical models with continuous random variables are much less well understood. A
notable exception is Gaussian belief propagation [31], which retrieves the optimum when the potentials are Gaussian for arbitrary graphs under certain conditions of the underlying system. Inspired
by discrete graphical models, message-passing algorithms based on discrete approximations in the
form of particles [6, 17] or non-linear functions [27] have been developed for general potentials.
They are, however, computationally expensive and do not perform well when compared to dedicated algorithms [20]. Fusion moves [11] are a possible alternative, but they rely on the generation
of good proposals, a task that is often difficult in practice. Other related work focuses on representing
relations on pairwise graphical models [24], or marginalization rather than MAP [13].
In this paper we study the case where the potentials are polynomial functions. This is a very general
family of models as many applications such as collaborative filtering [8], surface reconstruction [5]
and non-rigid registration [30] can be formulated in this way. Previous approaches rely on either
polynomial equation system solvers [20], semi-definite programming relaxations [9, 15] or approximate message-passing algorithms [17, 27]. Unfortunately, existing methods either cannot cope with
large-scale graphical models, and/or do not have global convergence guarantees.
In particular, we exploit the concave-convex procedure (CCCP) [33] to perform inference on continuous Markov random fields (MRFs) with polynomial potentials. Towards this goal, we first show
that an arbitrary multivariate polynomial function can be decomposed into a sum of a convex and
a concave polynomial. Importantly, this decomposition can be expressed as a sum-of-squares optimization [10] over polynomial Hessians, which is efficiently solvable via semidefinite programming.
Given the decomposition, our inference algorithm proceeds iteratively as follows: at each iteration
we linearize the concave part and solve the resulting subproblem efficiently to optimality. Our algorithm inherits the global convergence property of CCCP [25].
We demonstrate the effectiveness of our approach in the context of 3D reconstruction, shape from
shading and image denoising. Our method proves superior in terms of both computational cost and
the energy of the solutions retrieved when compared to approaches such as dual decomposition [20],
fusion moves [11] and particle belief propagation [6].
2 Graphical Models with Continuous Variables and Polynomial Functions
In this section we first review inference algorithms for graphical models with continuous random
variables, as well as the concave-convex procedure. We then prove existence of a concave-convex
decomposition for polynomials and provide a construction. Based on this decomposition and construction, we propose a novel inference algorithm for continuous MRFs with polynomial potentials.
2.1 Graphical Models with Polynomial Potentials
The MRFs we consider represent distributions defined over a continuous domain $\mathcal{X} = \prod_i \mathcal{X}_i$, which is a product-space assembled from continuous sub-spaces $\mathcal{X}_i \subseteq \mathbb{R}$. Let $x \in \mathcal{X}$ be the output configuration of interest, e.g., a 3D mesh or a denoised image. Note that each output configuration tuple $x = (x_1, \ldots, x_n)$ subsumes a set of random variables. Graphical models describe the energy of the system as a sum of local scoring functions, i.e., $f(x) = \sum_{r \in \mathcal{R}} f_r(x_r)$. Each local function $f_r(x_r) : \mathcal{X}_r \to \mathbb{R}$ depends on a subset of variables $x_r = (x_i)_{i \in r}$ defined on a domain $\mathcal{X}_r \subseteq \mathcal{X}$, which is specified by the restriction often referred to as region $r \subseteq \{1, \ldots, n\}$, i.e., $\mathcal{X}_r = \prod_{i \in r} \mathcal{X}_i$. We refer to $\mathcal{R}$ as the set of all restrictions required to compute the energy of the system.
We tackle the problem of maximum a-posteriori (MAP) inference, i.e., we want to find the configuration $x^*$ having the minimum energy. This is formally expressed as
$$x^* = \arg\min_x \sum_{r \in \mathcal{R}} f_r(x_r). \qquad (1)$$
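As a minimal illustration (ours, not code from the paper) of this representation, an energy can be stored as a list of regions paired with local functions and evaluated by summation:

```python
import numpy as np

def energy(x, factors):
    """f(x) = sum_r f_r(x_r), with factors a list of (region, f_r) pairs
    and region a tuple of variable indices (a sketch)."""
    return sum(f_r(x[list(region)]) for region, f_r in factors)

# e.g. a single pairwise polynomial factor on variables (0, 1):
factors = [((0, 1), lambda z: (z[0] * z[1] - 1.0) ** 2)]
print(energy(np.array([2.0, 1.0]), factors))  # (2*1 - 1)^2 = 1.0
```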
Solving this program for general functions is hard. In this paper we focus on energies composed of
polynomial functions. This is a fairly general case, as the energies employed in many applications
obey this assumption. Furthermore, for well-behaved continuous non-polynomial functions (e.g.,
k-th order differentiable) polynomial approximations could be used (e.g., via a Taylor expansion).
Let us define polynomials more formally:
Definition 1. A d-degree multivariate polynomial $f(x) : \mathbb{R}^n \to \mathbb{R}$ is a finite linear combination of monomials, i.e.,
$$f(x) = \sum_{m \in M} c_m x_1^{m_1} x_2^{m_2} \cdots x_n^{m_n},$$
where we let the coefficient $c_m \in \mathbb{R}$ and the tuple $m = (m_1, \ldots, m_n) \in M \subseteq \mathbb{N}^n$ with $\sum_{i=1}^n m_i \le d$ $\forall m \in M$. The set $M$ subsumes all tuples relevant to define the function $f$.
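For concreteness, such a polynomial can be represented sparsely as a map from exponent tuples to coefficients; this is an illustrative sketch:

```python
import numpy as np

def eval_poly(coeffs, x):
    """Evaluate f(x) = sum_m c_m * prod_k x_k^{m_k}, where coeffs maps
    exponent tuples m = (m_1, ..., m_n) to the coefficients c_m."""
    x = np.asarray(x, dtype=float)
    return sum(c * np.prod(x ** np.asarray(m)) for m, c in coeffs.items())

f = {(2, 2): 1.0, (1, 1): -3.0, (0, 0): 1.0}  # f(x, y) = x^2 y^2 - 3xy + 1
print(eval_poly(f, [2.0, 1.0]))               # 4 - 6 + 1 = -1.0
```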
We are interested in minimizing Eq. (1) where the potential functions $f_r$ are polynomials with arbitrary degree. This is a difficult problem as polynomial functions are in general non-convex. Moreover, for many applications of interest we have to deal with a large number of variables, e.g., more than 60,000 when reconstructing shape from shading of a $256 \times 256$ image. Optimal solutions exist under certain conditions when the potentials are Gaussian [31], i.e., polynomials of degree 2.
Message passing algorithms have not been very successful for general polynomials due to the fact
that the messages are continuous functions. Discrete [6, 17] and non-parametric [27] approximations have been employed with limited success. Furthermore, polynomial system solvers [20], and
moment-based methods [9] cannot scale up to such a large number of variables. Dual-decomposition
provides a plausible approach for tackling large-scale problems by dividing the task into many small
sub-problems [20]. However, solving a large number of smaller systems is still a bottleneck, and
decoding the optimal solution from the sub-problems might be difficult. In contrast, we propose to
use the Concave-Convex Procedure (CCCP) [33], which we now briefly review.
2.2 Inference via CCCP
CCCP is a majorization-minimization framework for optimizing non-convex functions that can be written as the sum of a convex and a concave part, i.e., $f(x) = f_{\text{vex}}(x) + f_{\text{cave}}(x)$. This framework has recently been used to solve a wide variety of machine learning tasks, such as learning in structured models with latent variables [32, 22], kernel methods with missing entries [23] and sparse principal component analysis [26]. In CCCP, $f$ is optimized by iteratively computing a linearization of the concave part at the current iterate $x^{(i)}$ and solving the resulting convex problem
$$x^{(i+1)} = \arg\min_x\ f_{\text{vex}}(x) + x^T \nabla f_{\text{cave}}(x^{(i)}). \qquad (2)$$
This process is guaranteed to monotonically decrease the objective and it converges globally, i.e., for any starting point $x$ (see Theorem 2 of [33] and Theorem 8 of [25]). Moreover, Salakhutdinov et al. [19] showed that the convergence rate of CCCP, which is between super-linear and linear, depends on the curvature ratio between the convex and concave parts. In order to take advantage of CCCP to solve our problem, we need to decompose the energy function into a sum of convex and concave parts. In the next section we show that this decomposition always exists. Furthermore, we provide a procedure to perform this decomposition for general polynomials.
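A one-dimensional toy run (ours, not from the paper) makes the update in Eq. (2) tangible: for $f(x) = x^4 - 3x^2$ with $f_{\text{vex}}(x) = x^4$ and $f_{\text{cave}}(x) = -3x^2$, each convex subproblem $\min_x x^4 + gx$ with $g = \nabla f_{\text{cave}}(x^{(i)}) = -6x^{(i)}$ has the closed-form minimizer $x = (-g/4)^{1/3}$, and the iterates converge to the global minimizer $\sqrt{3/2}$ when started from a positive point:

```python
import numpy as np

x = 0.3
for _ in range(60):
    g = -6.0 * x                                     # gradient of the concave part
    x = np.sign(-g) * (abs(g) / 4.0) ** (1.0 / 3.0)  # solves 4x^3 + g = 0
print(x, np.sqrt(1.5))                               # both ~1.224744871
```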
2.3 Existence of a Concave-Convex Decomposition of Polynomials
Theorem 1 in [33] shows that for all arbitrary continuous functions with bounded Hessian a decomposition into convex and concave parts exists. However, Hessians of polynomial functions are not bounded in $\mathbb{R}^n$. Furthermore, [33] did not provide a construction for the decomposition. In this section we show that for polynomials this decomposition always exists and we provide a construction. Note that since odd degree polynomials are unbounded from below, i.e., not proper, we only focus on even degree polynomials in the following. Let us therefore consider the space spanned by polynomial functions with an even degree $d$.
Proposition 1. The set of polynomial functions $f(x) : \mathbb{R}^n \to \mathbb{R}$ with even degree $d$, denoted $\mathcal{P}_d^n$, is a topological vector space. Furthermore, its dimension is $\dim(\mathcal{P}_d^n) = \binom{n+d-1}{d}$.
Proof. (Sketch) According to the definition of vector spaces, we know that the set of polynomial functions forms a vector space over $\mathbb{R}$. We can then show that addition and multiplication over the polynomial ring $\mathcal{P}_d^n$ are continuous. Finally, $\dim(\mathcal{P}_d^n)$ is equivalent to computing a d-combination with repetition from n elements [3].
Next we investigate the geometric properties of convex even degree polynomials.
Lemma 1. Let the set of convex polynomial functions $c(x) : \mathbb{R}^n \to \mathbb{R}$ with even degree $d$ be $\mathcal{C}_d^n$. This subset of $\mathcal{P}_d^n$ is a convex cone.
Proof. Given two arbitrary convex polynomial functions $f, g \in \mathcal{C}_d^n$, let $h = af + bg$ with positive scalars $a, b \in \mathbb{R}_+$. $\forall x, y \in \mathbb{R}^n$, $\forall \lambda \in [0, 1]$, we have:
$$h(\lambda x + (1-\lambda)y) = af(\lambda x + (1-\lambda)y) + bg(\lambda x + (1-\lambda)y) \le a(\lambda f(x) + (1-\lambda)f(y)) + b(\lambda g(x) + (1-\lambda)g(y)) = \lambda h(x) + (1-\lambda)h(y).$$
Therefore, $\forall f, g \in \mathcal{C}_d^n$, $\forall a, b \in \mathbb{R}_+$, we have $af + bg \in \mathcal{C}_d^n$, i.e., $\mathcal{C}_d^n$ is a convex cone.
We now show that the eigenvalues of the Hessian of $f$ (hence the smallest one) continuously depend on $f \in \mathcal{P}_d^n$.
Proposition 2. For any polynomial function $f \in \mathcal{P}_d^n$ with $d \ge 2$, the eigenvalues of its Hessian $\mathrm{eig}(\nabla^2 f(x))$ are continuous w.r.t. $f$ in the polynomial space $\mathcal{P}_d^n$.
Proof. $\forall f \in \mathcal{P}_d^n$, given a basis $\{g_i\}$ of $\mathcal{P}_d^n$, we obtain the representation $f = \sum_i c_i g_i$, linear in the coefficients $c_i$. It is easy to see that $\forall f \in \mathcal{P}_d^n$, the Hessian $\nabla^2 f(x)$ is a polynomial matrix, linear in the $c_i$, i.e., $\nabla^2 f(x) = \sum_i c_i \nabla^2 g_i(x)$. Let $M(c_1, \ldots, c_n) = \nabla^2 f(x) = \sum_i c_i \nabla^2 g_i(x)$ define the Hessian as a function of the coefficients $(c_1, \ldots, c_n)$. The eigenvalues $\mathrm{eig}(M(c_1, \ldots, c_n))$ are equivalent to the roots of the characteristic polynomial of $M(c_1, \ldots, c_n)$, i.e., the set of solutions of $\det(M - \lambda I) = 0$. All the coefficients of the characteristic polynomial are polynomial expressions w.r.t. the entries of $M$, hence they are also polynomial w.r.t. $(c_1, \ldots, c_n)$ since each entry of $M$ is linear in $(c_1, \ldots, c_n)$. Therefore, the coefficients of the characteristic polynomial depend continuously on $(c_1, \ldots, c_n)$. Moreover, the roots of a polynomial depend continuously on the coefficients of the polynomial [28]. Based on these dependencies, $\mathrm{eig}(M(c_1, \ldots, c_n))$ depend continuously on $(c_1, \ldots, c_n)$, and hence $\mathrm{eig}(M(c_1, \ldots, c_n))$ are continuous w.r.t. $f$ in the polynomial space $\mathcal{P}_d^n$.
The following proposition illustrates that the relative interior of the convex cone of even degree polynomials is not empty.
Proposition 3. For an even degree function space $\mathcal{P}_d^n$, there exists a function $f(x) \in \mathcal{P}_d^n$ such that $\forall x \in \mathbb{R}^n$ the Hessian is strictly positive definite, i.e., $\nabla^2 f(x) \succ 0$. Hence the relative interior of $\mathcal{C}_d^n$ is not empty.
Proof. Let $f(x) = \sum_i x_i^d + \sum_i x_i^2 \in \mathcal{P}_d^n$. It follows trivially that
$$\nabla^2 f(x) = \mathrm{diag}\left( d(d-1)x_1^{d-2} + 2,\ d(d-1)x_2^{d-2} + 2,\ \ldots,\ d(d-1)x_n^{d-2} + 2 \right) \succ 0 \quad \forall x.$$
Given the above two propositions it follows that the dimensionality of $\mathcal{C}_d^n$ and $\mathcal{P}_d^n$ is identical.
Lemma 2. The dimension of the polynomial vector space is equal to the dimension of the convex even degree polynomial cone having the same degree $d$ and the same number of variables $n$, i.e., $\dim(\mathcal{C}_d^n) = \dim(\mathcal{P}_d^n)$.
Proof. According to Proposition 3, there exists a function $f \in \mathcal{P}_d^n$ with strictly positive definite Hessian, i.e., $\forall x \in \mathbb{R}^n$, $\mathrm{eig}(\nabla^2 f(x)) > 0$. Consider a polynomial basis $\{g_i\}$ of $\mathcal{P}_d^n$, and consider the vector of eigenvalues $E(\hat{c}_i) = \mathrm{eig}(\nabla^2 (f(x) + \hat{c}_i g_i))$. According to Proposition 2, $E(\hat{c}_i)$ is continuous w.r.t. $\hat{c}_i$, and $E(0)$ is an all-positive vector. According to the definition of continuity, there exists an $\epsilon > 0$ such that $E(\hat{c}_i) > 0$, $\forall \hat{c}_i \in \{c : |c| < \epsilon\}$. Hence, there exists a nonzero constant $\hat{c}_i$ such that the polynomial $f + \hat{c}_i g_i$ is also strictly convex. We can construct such a strictly convex polynomial for every $g_i$. The polynomial set $\{f + \hat{c}_i g_i\}$ is therefore linearly independent and hence a basis of $\mathcal{C}_d^n$. This concludes the proof.
Lemma 3. The linear span of the basis of $\mathcal{C}_d^n$ is $\mathcal{P}_d^n$.
Proof. Suppose $\mathcal{P}_d^n$ is $N$-dimensional. According to Lemma 2, $\mathcal{C}_d^n$ is also $N$-dimensional. Denote by $\{g_1, g_2, \ldots, g_N\}$ a basis of $\mathcal{C}_d^n$. Assume there exists $h \in \mathcal{P}_d^n$ such that $h$ cannot be linearly represented by $\{g_1, g_2, \ldots, g_N\}$. Then $\{g_1, g_2, \ldots, g_N, h\}$ are $N+1$ linearly independent vectors in $\mathcal{P}_d^n$, which contradicts $\mathcal{P}_d^n$ being $N$-dimensional.
Theorem 1. $\forall f \in \mathcal{P}_d^n$, there exist convex polynomials $h, g \in \mathcal{C}_d^n$ such that $f = h - g$.
Proof. Let the basis of $\mathcal{C}_d^n$ be $\{g_1, g_2, \ldots, g_N\}$. According to Lemma 3, there exist coefficients $c_1, \ldots, c_N$ such that $f = c_1 g_1 + c_2 g_2 + \cdots + c_N g_N$. We can partition the coefficients into two sets according to their sign, i.e., $f = \sum_{c_i \ge 0} c_i g_i + \sum_{c_j < 0} c_j g_j$. Let $h = \sum_{c_i \ge 0} c_i g_i$ and $g = -\sum_{c_j < 0} c_j g_j$. We have $f = h - g$, while both $h$ and $g$ are convex polynomials.
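In code, the split is immediate once coefficients in a convex basis are known; the following sketch (ours) assumes the basis functions are given as callables:

```python
def concave_convex_split(coeffs, basis):
    """Theorem 1: given f = sum_i c_i g_i with every g_i convex, return
    callables (h, g) with f = h - g and h, g both convex."""
    h = lambda x: sum(c * b(x) for c, b in zip(coeffs, basis) if c >= 0)
    g = lambda x: sum(-c * b(x) for c, b in zip(coeffs, basis) if c < 0)
    return h, g
```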
According to Theorem 1 there exists a concave-convex decomposition for any polynomial, where both the convex and concave parts are also polynomials with degree no greater than that of the original polynomial. As long as we can find $\binom{n+d-1}{d}$ linearly independent convex polynomial basis functions for an arbitrary polynomial function $f \in \mathcal{P}_d^n$, we obtain a valid decomposition by looking at the signs of the coefficients. It is however worth noting that the concave-convex decomposition is not unique. In fact, there is an infinite number of decompositions, trivially seen by adding and subtracting an arbitrary convex polynomial to an existing decomposition.
Finding a convex basis is however not an easy task, mainly due to the difficulty of checking convexity and the exponentially increasing dimension. Recently, Ahmadi et al. [1] proved that even deciding on the convexity of quartic polynomials is NP-hard.
Algorithm 1 CCCP Inference on Continuous MRFs with Polynomial Potentials
Input: Initial estimation $x^{(0)}$
$\forall r$: find $f_r(x_r) = f_{r,\text{vex}}(x_r) + f_{r,\text{cave}}(x_r)$ via Eq. (4) or via a polynomial basis (Theorem 1)
repeat
    solve $x^{(i+1)} = \arg\min_x \sum_{r \in \mathcal{R}} f_{r,\text{vex}}(x_r) + x^T \nabla_x \big( \sum_{r \in \mathcal{R}} f_{r,\text{cave}}(x_r^{(i)}) \big)$ with L-BFGS
until convergence
Output: $x^*$
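A hedged Python sketch of Algorithm 1 follows, using SciPy's L-BFGS for the convex subproblem; the decomposition is assumed to be given as callables, and the stopping rule is a simple iterate-change test:

```python
import numpy as np
from scipy.optimize import minimize

def cccp(f_vex, grad_vex, grad_cave, x0, max_iter=100, tol=1e-5):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_cave(x)                       # linearize the concave part at x
        res = minimize(lambda z: f_vex(z) + z.dot(g), x,
                       jac=lambda z: grad_vex(z) + g, method="L-BFGS-B")
        if np.linalg.norm(res.x - x) < tol:
            return res.x
        x = res.x
    return x

# toy usage: f(x) = sum(x^4 - 3x^2), split as (x^4) + (-3x^2)
x_star = cccp(lambda z: np.sum(z ** 4), lambda z: 4 * z ** 3,
              lambda z: -6 * z, x0=[0.3, -0.2])
```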
2.4 Constructing a Concave-Convex Decomposition of Polynomials
In this section we derive an algorithm to construct the concave-convex decomposition of arbitrary polynomials. Our algorithm first constructs the convex basis of the polynomial vector space $\mathcal{P}_d^n$ before extracting a convex polynomial containing the target polynomial via a sum-of-squares (SOS) program. More formally, given a non-convex polynomial $f(x)$ we are interested in constructing a convex function $h(x) = f(x) + \sum_i c_i g_i(x)$, with $g_i(x)$, $i \in \{1, \ldots, m\}$, the set of all convex monomials with degree no greater than $\deg(f(x))$. From this it follows that $f_{\text{vex}} = h(x)$ and $f_{\text{cave}} = -\sum_i c_i g_i(x)$. In particular, we want a convex function $h(x)$ with coefficients $c_i$ as small as possible:
$$\min_c\ w^T c \quad \text{s.t.} \quad \nabla^2 f(x) + \sum_i c_i \nabla^2 g_i(x) \succeq 0 \quad \forall x \in \mathbb{R}^n, \qquad (3)$$
with the objective function being a weighted sum of coefficients. The weight vector $w$ can encode preferences in the minimization, e.g., smaller coefficients for larger degrees. This minimization problem is NP-hard. If it were not, we could decide whether an arbitrary polynomial $f(x)$ is convex by solving such a program, which contradicts the NP-hardness result of [1]. Instead, we utilize a tighter set of constraints, i.e., sum-of-squares constraints, which are easier to solve [14].
Definition 2. For an even degree polynomial $f(x) \in \mathcal{P}_d^n$ with $d = 2m$, $f$ is an SOS polynomial if and only if there exist $g_1, \ldots, g_k \in \mathcal{P}_m^n$ such that $f(x) = \sum_{i=1}^k g_i(x)^2$.
Thus, instead of solving the NP-hard program stated in Eq. (3), we optimize:
$$\min_c\ w^T c \quad \text{s.t.} \quad \nabla^2 f(x) + \sum_i c_i \nabla^2 g_i(x) \in \text{SOS}. \qquad (4)$$
The set of SOS Hessians is a subset of the positive definite Hessians [9]. Hence, every solution of
this problem can be considered a valid construction. Furthermore, the sum-of-squares optimization
in Eq. (4) can be formulated as an efficiently solvable semi-definite program (SDP) [10, 9]. It is important to note that the gap between the SOS Hessians and the positive definite Hessians increases
as the degree of the polynomials grows. Hence using SOS constraints we might not find a solution,
even though there exists one for the original program given in Eq. (3). In practice, SOS optimization
works well for monomials and low-degree polynomials. For pairwise graphical models with arbitrary degree polynomials, as well as for graphical models of order up to four with maximum fourth
order degree polynomials, we are guaranteed to find a decomposition. This is due to the fact that
SOS convexity and polynomial convexity coincide (Theorem 5.2 in [2]). Most practical graphical
models are within this set. Known counter-examples [2] are typically found using specific tools.
We summarize our algorithm in Alg. 1. Given a graphical model with polynomial potentials with
degree at most d, we obtain a concave-convex decomposition by solving Eq. (4). This can be done
for the full polynomial or for each non-convex monomial. We then apply CCCP in order to perform
inference, where we solve a convex problem at each iteration. In particular, we employ L-BFGS,
mainly due to its super-linear convergence and its storage efficiency [12]. In each L-BFGS step, we
apply a line search scheme based on the Wolfe conditions [12].
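As a small sanity check of this construction (ours; not the SDP itself): for the non-convex monomial $x^2y^2$, adding $c(x^4 + y^4)$ with $c = 1/6$ makes the Hessian positive semidefinite everywhere, since its determinant becomes $4(x^2 - y^2)^2 \ge 0$; the sketch below verifies this numerically:

```python
import numpy as np

c = 1.0 / 6.0   # h(x, y) = x^2 y^2 + c (x^4 + y^4)
rng = np.random.default_rng(0)
for x, y in rng.uniform(-5.0, 5.0, size=(10000, 2)):
    H = np.array([[2 * y**2 + 12 * c * x**2, 4 * x * y],
                  [4 * x * y, 2 * x**2 + 12 * c * y**2]])
    assert np.linalg.eigvalsh(H).min() >= -1e-8
```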
2.5 Extensions
Dealing with very large graphs: Motivated by recent progress on accelerating graphical model
inference [7, 21, 20], we can handle large-scale problems by employing dual decomposition and
using our approach to solve the sub-problems.
Non-polynomial cases: We have described our method in the context of graphical models with
polynomial potentials. It can be extended to the non-polynomial case if the involved functions have
Table 1: 3D reconstruction on $3 \times 3$ meshes with noise variance $\sigma = 2$.

                 L-BFGS    PCBP     FusionMove   ADMM-Poly   Ours
Energy           10736.4   6082.7   4317.7       3221.1      3062.8
RMSE (mm)        4.98      4.50     2.95         3.82        3.07
Time (second)    0.11      56.60    0.12         18.32       8.70 (×2)
[Figure 1: four log-scale energy vs. time (seconds) plots: (a) Synthetic meshes, (b) Cardboard meshes, (c) Shape-from-Shading, (d) Denoising; curves compare ADMM-Poly, L-BFGS, GradDesc and Ours.]
Figure 1: Average energy evolution curve for different applications.
bounded Hessians, since we can still construct the concave-convex decomposition. For instance, for the Lorentzian regularizer $\rho(x) = \log(1 + \frac{x^2}{2})$, we note that $\rho(x) = \{\log(1 + \frac{x^2}{2}) + \frac{x^2}{8}\} - \frac{x^2}{8}$ is a valid concave-convex decomposition. We refer the reader to the supplementary material for a detailed proof. Alternatively, we can approximate any continuous function with polynomials by employing a Taylor expansion around the current iterate, and updating the solution via one CCCP step within a trust region.
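The convexity of the first bracket can be checked directly: in one dimension, the second derivative of $\log(1 + x^2/2) + x^2/8$ is $(1 - x^2/2)/(1 + x^2/2)^2 + 1/4$, whose minimum value is $1/8 > 0$. A quick numerical sketch:

```python
import numpy as np

x = np.linspace(-100.0, 100.0, 200001)
second_derivative = (1.0 - x**2 / 2.0) / (1.0 + x**2 / 2.0) ** 2 + 0.25
assert second_derivative.min() > 0.0   # minimum is 1/8, attained at x^2 = 6
```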
3 Experimental Evaluation
We demonstrate the effectiveness of our approach using three different applications: non-rigid 3D
reconstruction, shape from shading and image denoising. We refer the reader to the supplementary
material for more figures as well as an additional toy experiment on a densely connected graph with
box constraints.
3.1 Non-rigid 3D Reconstruction
We tackle the problem of deformable surface reconstruction from a single image. Following [30], we parameterize the 3D shape via the depth of keypoints. Let $x \in \mathbb{R}^N$ be the depth of $N$ points. We follow the locally isometric deformation assumption [20], i.e., the distance between neighboring keypoints remains constant as the non-rigid surface deforms. The 3D reconstruction problem is then formulated as
$$\min_x \sum_{(i,j) \in \mathcal{N}} \left( \| x_i q_i - x_j q_j \|_2^2 - d_{i,j}^2 \right)^2, \qquad (5)$$
where $d_{i,j}$ is the distance between keypoints (given as input), $\mathcal{N}$ is the set of all neighboring pixels, $x_i$ is the unknown depth of point $i$, and $q_i = A^{-1}(u_i, v_i, 1)^T$ is the line-of-sight of pixel $i$, with $A$ denoting the known internal camera parameters. We consider a six-neighborhood system, i.e., up, down, left, right, upper-left and lower-right. Note that each pairwise potential is a four-degree non-convex polynomial in two random variables. We can easily decompose it into 15 monomials, and perform a concave-convex decomposition given the corresponding convex polynomials (see the supplementary material for an example).
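For reference, a direct (unvectorized) evaluation of the energy in Eq. (5) can be sketched as follows; the array shapes are assumptions:

```python
import numpy as np

def recon_energy(x, Q, pairs, d2):
    """Eq. (5). x: (N,) depths; Q: (N, 3) lines of sight q_i;
    pairs: iterable of neighbor index pairs (i, j); d2: dict of squared
    reference distances d_{i,j}^2 (a sketch, not the authors' code)."""
    e = 0.0
    for i, j in pairs:
        diff = x[i] * Q[i] - x[j] * Q[j]
        e += (diff.dot(diff) - d2[(i, j)]) ** 2
    return e
```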
We first conduct reconstruction experiments on the 100 randomly generated $3 \times 3$ meshes of [20], where zero-mean Gaussian noise with standard deviation $\sigma = 2$ is added to each observed keypoint coordinate. We compare our approach to Fusion Moves [30], particle convex belief propagation (PCBP) [17], L-BFGS, as well as dual decomposition with the alternating direction method of multipliers using a polynomial solver (ADMM-Poly) [20]. We employ three different metrics: energy at convergence, running time, and root mean square error (RMSE). For L-BFGS and our method, we use a flat mesh as initialization with the two rotation angles $(0, 0, 0)$ and $(\pi/4, 0, 0)$. The convergence criterion is an energy decrease of less than $10^{-5}$ or a maximum of 500 iterations. As shown in Table 1, our algorithm achieves lower energy, lower RMSE, and faster running time than ADMM-Poly and PCBP. Furthermore, as shown in Fig. 1(a), the time for running our algorithm to convergence is similar to that of a single iteration of ADMM-Poly, while we achieve much lower energy.
Table 2: 3D reconstruction on Cardboard sequences.

                 L-BFGS   CLVM   ADMM-Poly   Ours
Energy           736.98   N/A    905.37      687.21
RMSE (mm)        4.16     7.23   5.68        3.29
Time (second)    0.3406   N/A    314.8       10.16

[Figure 2: convergent energy per sample (Ours vs. ADMM-Poly); log-energy evolution curve for the 4th sample; reconstructed meshes: ground truth, ADMM-Poly (error 4.9181 mm), Ours (error 2.1997 mm).]
Figure 2: 3D reconstruction results on Cardboard. Left to right: sample comparison, energy curve, groundtruth, ADMM-Poly and our reconstruction.
[Figure 3: log-energy evolution curve; inferred shape at iteration 98 (RMSE 0.012595, energy 81.564, time 28.549 s); rendered image; ground-truth image.]
Figure 3: Shape-from-Shading results on Penny. Left to right: energy curve, inferred shape, rendered image with inferred shape, groundtruth image.
We next reconstruct the real-world $9 \times 9$ Cardboard sequence [20]. We compare with both ADMM-Poly and L-BFGS in terms of energy, time and RMSE. We also compare with the constrained latent variable model of [29] in terms of RMSE; we cannot compare the energy value since the energy function is different. Again, we use a flat mesh as initialization. As shown in Table 2, our algorithm outperforms all baselines. Furthermore, it is more than 20 times faster than ADMM-Poly, which is the second best algorithm. Average energy as a function of time is shown in Fig. 1(b). We refer the reader to Fig. 2 and the video in the supplementary material for a visual comparison between ADMM-Poly and our method. From the first subfigure we observe that our method achieves lower energy for most samples. The second subfigure illustrates that our approach monotonically decreases the energy, and that our method is much faster than ADMM-Poly.
3.2 Shape-from-Shading
Following [5, 20], we formulate the shape from shading problem with 3rd-order, 4th-degree polynomial functions. Let $x_{i,j} = (u_{i,j}, v_{i,j}, w_{i,j})^T$ be the 3D coordinates of each triangle vertex. Under the Lambertian model assumption, the intensity of a triangle $r$ is represented as
$$I_r = \frac{l_1 p_r + l_2 q_r + l_3}{\sqrt{p_r^2 + q_r^2 + 1}},$$
where $l = (l_1, l_2, l_3)^T$ is the direction of the light, and $p_r$ and $q_r$ are the $x$ and $y$ coordinates of the normal vector $n_r = (p_r, q_r, 1)^T$, computed as
$$p_r = \frac{(v_{i,j+1} - v_{i,j})(w_{i+1,j} - w_{i,j}) - (v_{i+1,j} - v_{i,j})(w_{i,j+1} - w_{i,j})}{(u_{i,j+1} - u_{i,j})(v_{i+1,j} - v_{i,j}) - (u_{i+1,j} - u_{i,j})(v_{i,j+1} - v_{i,j})}$$
and
$$q_r = \frac{(u_{i,j+1} - u_{i,j})(w_{i+1,j} - w_{i,j}) - (u_{i+1,j} - u_{i,j})(w_{i,j+1} - w_{i,j})}{(u_{i,j+1} - u_{i,j})(v_{i+1,j} - v_{i,j}) - (u_{i+1,j} - u_{i,j})(v_{i,j+1} - v_{i,j})},$$
respectively. Each clique $r$ represents a triangle, which is constructed by three neighboring points on the grid, i.e., either $(x_{i,j}, x_{i,j+1}, x_{i+1,j})$ or $(x_{i,j}, x_{i,j-1}, x_{i+1,j})$. Given the rendered image and lighting direction, shape from shading is formulated as
$$\min_w \sum_{r \in \mathcal{R}} \left( (p_r^2 + q_r^2 + 1) I_r^2 - (l_1 p_r + l_2 q_r + l_3)^2 \right)^2. \qquad (6)$$
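The following sketch transcribes the normal computation and one summand of Eq. (6) into code (illustrative; the grid indexing convention is an assumption):

```python
import numpy as np

def normal_pq(u, v, w, i, j):
    """p_r, q_r of the triangle (x_{i,j}, x_{i,j+1}, x_{i+1,j})."""
    den = (u[i, j+1] - u[i, j]) * (v[i+1, j] - v[i, j]) \
        - (u[i+1, j] - u[i, j]) * (v[i, j+1] - v[i, j])
    p = ((v[i, j+1] - v[i, j]) * (w[i+1, j] - w[i, j])
         - (v[i+1, j] - v[i, j]) * (w[i, j+1] - w[i, j])) / den
    q = ((u[i, j+1] - u[i, j]) * (w[i+1, j] - w[i, j])
         - (u[i+1, j] - u[i, j]) * (w[i, j+1] - w[i, j])) / den
    return p, q

def sfs_term(p, q, I_r, l):
    # one summand of Eq. (6)
    return ((p**2 + q**2 + 1.0) * I_r**2 - (l[0]*p + l[1]*q + l[2])**2) ** 2
```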
We tested our algorithm on the Vase, Penny and Mozart datasets, where Vase and Penny are $128 \times 128$ images and Mozart is a $256 \times 256$ image, with light direction $l = (0, 0, 1)^T$. The energy evolution curve, the inferred shape, as well as the rendered and ground-truth images are illustrated in Fig. 3. See the supplementary material for more figures on Penny and Mozart. Our algorithm achieves very low energy, producing very accurate results in only 30 seconds. ADMM-Poly can hardly run on such large-scale data due to the computational cost of the polynomial system solver (more than 2 hours
Table 3: FoE energy minimization results.

             L-BFGS   GradDesc   Ours
Energy       29547    29598      29413
PSNR         30.96    31.56      31.43
Time (sec)   189.5    1122.5     384.5

[Figure 4: clean image; noisy image (PSNR 24.5952); GradDesc result (PSNR 31.0689); Ours (PSNR 30.9311); L-BFGS (PSNR 30.7695); log-scale energy evolution curve for FoE.]
Figure 4: FoE based image denoising results on Cameraman, $\sigma = 15$.
per iteration). In order to compare with ADMM-Poly, we also conduct the shape from shading experiment on a scaled $16 \times 16$ version of the Vase data. Both methods retrieve a shape that is very close to the global optimum (0.00027 for ADMM-Poly and 0.00032 for our approach); however, our algorithm is over 500 times faster than ADMM-Poly (2250 seconds for ADMM-Poly and 13.29 seconds for our proposed method). The energy evolution curve on the $16 \times 16$ re-scaled image is shown in Fig. 1(c).
3.3 Image Denoising
We formulate image denoising via minimizing the Fields-of-Experts (FoE) energy [18]. The data term encodes the fact that the recovered image should be close to the noisy input, where closeness is weighted by the noise level $\sigma$. Given a pre-learned linear filterbank of "experts" $\{J_i\}_{i=1,\ldots,K}$, the image prior term encodes the fact that natural images are Gibbs distributed via $p(x) = \frac{1}{Z} \prod_{r \in \mathcal{R}} \prod_{i=1}^{K} \left(1 + \frac{1}{2}(J_i^T x_r)^2\right)^{-\alpha_i}$. Thus we formulate denoising as
$$\min_x\ \frac{1}{2\sigma^2} \| x - y \|^2 + \sum_{r \in \mathcal{R}} \sum_{i=1}^{K} \alpha_i \log\left(1 + \frac{1}{2}(J_i^T x_r)^2\right), \qquad (7)$$
where $y$ is the noisy image input, $x$ is the clean image estimate, $r$ indexes $5 \times 5$ cliques, and $i$ is the index of each FoE filter. Note that this energy function is not a polynomial function. However, for each FoE expert, the Hessian of the energy term $\log(1 + \frac{1}{2}(J_i^T x_r)^2)$ is lower bounded by $-\frac{1}{8} J_i J_i^T$ (proof in the supplementary material). Therefore, we simply add an extra term $\alpha x_r^T x_r$ with $\alpha > \frac{J_i^T J_i}{8}$ to obtain the concave-convex decomposition $\log(1 + \frac{1}{2}(J_i^T x_r)^2) = \{\log(1 + \frac{1}{2}(J_i^T x_r)^2) + \alpha x_r^T x_r\} - \alpha x_r^T x_r$. We utilize a pre-trained $5 \times 5$ filterbank with 24 filters, and conduct experiments on the BM3D benchmark¹ with noise level $\sigma = 15$. In addition to the other baselines, we compare
to the original FoE inference algorithm, which essentially is a first-order gradient descent method
with fixed gradient step [18]. For L-BFGS, we set the maximum number of iterations to 10,000, to
make sure that the algorithm converges. As shown in Table 3 and Fig. 1(d), our algorithm achieves
lower energy than L-BFGS and first-order gradient descent. Furthermore, we see that lower energy
does not translate to higher PSNR, showing the limitation of FoE as an image prior.
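For completeness, here is an illustrative sketch (ours) of evaluating the FoE energy of Eq. (7), with the per-expert convexifying term discussed above; the data layout (flattened image, precomputed clique index arrays) is an assumption:

```python
import numpy as np

def foe_energy(x, y, sigma, J, alphas, cliques):
    """Eq. (7). x, y: flattened estimate and noisy input; J: (K, 25) filterbank
    of 5x5 experts; alphas: (K,); cliques: list of 25-dim index arrays."""
    e = np.sum((x - y) ** 2) / (2.0 * sigma ** 2)
    for r in cliques:
        resp = J.dot(x[r])                      # J_i^T x_r for all experts i
        e += np.sum(alphas * np.log1p(0.5 * resp ** 2))
    return e

# per-expert convex part of the split: log(1 + 0.5 (J_i^T x_r)^2) + a_i * x_r.dot(x_r)
# is convex whenever a_i > J_i.dot(J_i) / 8 (cf. the bound above).
```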
4 Conclusions
We investigated the properties of polynomials, and proved that every multivariate polynomial with
even degree can be decomposed into a sum of convex and concave polynomials with degree no
greater than the original one. Motivated by this property, we exploited the concave-convex procedure to perform inference on continuous Markov random fields with polynomial potentials. Our algorithm is especially well suited to solving inference problems on continuous graphical models with a large number of variables. Experiments on non-rigid reconstruction, shape-from-shading and image denoising validate the effectiveness of our approach. We plan to investigate continuous inference with
arbitrary differentiable functions, by making use of polynomial approximations as well as tighter
concave-convex decompositions.
¹ http://www.cs.tut.fi/~foi/GCF-BM3D/
References
[1] A. A. Ahmadi, A. Olshevsky, P. A. Parrilo, and J. N. Tsitsiklis. NP-hardness of deciding convexity of quartic polynomials and related problems. Mathematical Programming, 2013.
[2] A. A. Ahmadi and P. A. Parrilo. A complete characterization of the gap between convexity and sos-convexity. SIAM J. on Optimization, 2013.
[3] K. Batselier, P. Dreesen, and B. D. Moor. The geometry of multivariate polynomial division and elimination. SIAM Journal on Matrix Analysis and Applications, 2013.
[4] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. PAMI, 2001.
[5] A. Ecker and A. D. Jepson. Polynomial shape from shading. In CVPR, 2010.
[6] A. T. Ihler and D. A. McAllester. Particle belief propagation. In AISTATS, 2009.
[7] N. Komodakis, N. Paragios, and G. Tziritas. MRF energy minimization and beyond via dual decomposition. PAMI, 2011.
[8] Y. Koren, R. Bell, and C. Volinsky. Matrix factorization techniques for recommender systems. Computer, 2009.
[9] J. B. Lasserre. Global optimization with polynomials and the problem of moments. SIAM Journal on Optimization, 2001.
[10] J. B. Lasserre. Convergent SDP-relaxations in polynomial optimization with sparsity. SIAM Journal on Optimization, 2006.
[11] V. Lempitsky, C. Rother, S. Roth, and A. Blake. Fusion moves for Markov random field optimization. PAMI, 2010.
[12] J. Nocedal and S. J. Wright. Numerical Optimization, 2nd edition. Springer-Verlag, 2006.
[13] N. Noorshams and M. J. Wainwright. Belief propagation for continuous state spaces: Stochastic message-passing with quantitative guarantees. JMLR, 2013.
[14] A. Papachristodoulou, J. Anderson, G. Valmorbida, S. Prajna, P. Seiler, and P. Parrilo. SOSTOOLS version 3.00 sum of squares optimization toolbox for MATLAB. arXiv:1310.4716, 2013.
[15] P. A. Parrilo. Structured semidefinite programs and semialgebraic geometry methods in robustness and optimization. PhD thesis, Caltech, 2000.
[16] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.
[17] J. Peng, T. Hazan, D. McAllester, and R. Urtasun. Convex max-product algorithms for continuous MRFs with applications to protein folding. In ICML, 2011.
[18] S. Roth and M. J. Black. Fields of experts. IJCV, 2009.
[19] R. Salakhutdinov, S. Roweis, and Z. Ghahramani. On the convergence of bound optimization algorithms. In UAI, 2002.
[20] M. Salzmann. Continuous inference in graphical models with polynomial energies. In CVPR, 2013.
[21] A. G. Schwing, T. Hazan, M. Pollefeys, and R. Urtasun. Distributed message passing for large scale graphical models. In CVPR, 2011.
[22] A. G. Schwing, T. Hazan, M. Pollefeys, and R. Urtasun. Efficient structured prediction with latent variables for general graphical models. In ICML, 2012.
[23] A. Smola, S. Vishwanathan, and T. Hofmann. Kernel methods for missing variables. AISTATS, 2005.
[24] L. Song, A. Gretton, D. Bickson, Y. Low, and C. Guestrin. Kernel belief propagation. In AISTATS, 2011.
[25] B. Sriperumbudur and G. Lanckriet. On the convergence of the concave-convex procedure. In NIPS, 2009.
[26] B. Sriperumbudur, D. Torres, and G. Lanckriet. Sparse eigen methods by DC programming. In ICML, 2007.
[27] E. B. Sudderth, A. T. Ihler, M. Isard, W. T. Freeman, and A. S. Willsky. Nonparametric belief propagation. Communications of the ACM, 2010.
[28] D. J. Uherka and A. M. Sergott. On the continuous dependence of the roots of a polynomial on its coefficients. American Mathematical Monthly, 1977.
[29] A. Varol, M. Salzmann, P. Fua, and R. Urtasun. A constrained latent variable model. In CVPR, 2012.
[30] S. Vicente and L. Agapito. Soft inextensibility constraints for template-free non-rigid reconstruction. In ECCV, 2012.
[31] Y. Weiss and W. T. Freeman. Correctness of belief propagation in Gaussian graphical models of arbitrary topology. Neural Computation, 2001.
[32] C. N. Yu and T. Joachims. Learning structural SVMs with latent variables. In ICML, 2009.
[33] A. L. Yuille and A. Rangarajan. The concave-convex procedure. Neural Computation, 2003.
| 5562 |@word … |
5,040 | 5,563 | Structure Regularization for Structured Prediction
Xu Sun†‡
†MOE Key Laboratory of Computational Linguistics, Peking University
‡School of Electronics Engineering and Computer Science, Peking University
[email protected]
Abstract
While there are many studies on weight regularization, the study on structure regularization is rare. Many existing systems on structured prediction focus on increasing the level of structural dependencies within the model. However, this trend
could have been misdirected, because our study suggests that complex structures
are actually harmful to generalization ability in structured prediction. To control
structure-based overfitting, we propose a structure regularization framework via
structure decomposition, which decomposes training samples into mini-samples
with simpler structures, deriving a model with better generalization power. We
show both theoretically and empirically that structure regularization can effectively control overfitting risk and lead to better accuracy. As a by-product, the
proposed method can also substantially accelerate the training speed. The method
and the theoretical results can apply to general graphical models with arbitrary
structures. Experiments on well-known tasks demonstrate that our method can
easily beat the benchmark systems on those highly-competitive tasks, achieving
record-breaking accuracies yet with substantially faster training speed.
1 Introduction
Structured prediction models are popularly used to solve structure dependent problems in a wide
variety of application domains including natural language processing, bioinformatics, speech recognition, and computer vision. Recently, many existing systems on structured prediction focus on
increasing the level of structural dependencies within the model. We argue that this trend could
have been misdirected, because our study suggests that complex structures are actually harmful to
model accuracy. While it is obvious that intensive structural dependencies can effectively incorporate structural information, it is less obvious that intensive structural dependencies have a drawback
of increasing the generalization risk, because more complex structures are easier to suffer from
overfitting. Since this type of overfitting is caused by structure complexity, it can hardly be solved
by ordinary regularization methods such as L2 and L1 regularization schemes, which only control weight complexity.
To deal with this problem, we propose a simple structure regularization solution based on tag structure decomposition. The proposed method decomposes each training sample into multiple mini-samples with simpler structures, deriving a model with better generalization power. The proposed
method is easy to implement, and it has several interesting properties: (1) We show both theoretically and empirically that the proposed method can effectively reduce the overfitting risk on structured
prediction. (2) The proposed method does not change the convexity of the objective function, such
that a convex function penalized with a structure regularizer is still convex. (3) The proposed method
has no conflict with the weight regularization. Thus we can apply structure regularization together
with weight regularization. (4) The proposed method can accelerate the convergence rate in training.
The term structural regularization has been used in prior work for regularizing structures of features,
including spectral regularization [1], regularizing feature structures for classifiers [20], and many
recent studies on structured sparsity in structured prediction scenarios [11, 8], via adopting mixed
norm regularization [10], Group Lasso [22], and posterior regularization [5]. Compared with that prior work, we emphasize that our proposal on tag structure regularization is novel. This is because
the term structure in all of the aforementioned work refers to structures of feature space, which
is substantially different compared with our proposal on regularizing tag structures (interactions
among tags).
Also, there are some other related studies. [17] described an interesting heuristic piecewise training method. [19] described a "lookahead" learning method. Our work differs from [17] and [19]
mainly because our work is built on a regularization framework, with arguments and theoretical
justifications on reducing generalization risk and improving convergence rate. Also, our method
and the theoretical results can fit general graphical models with arbitrary structures, and the detailed algorithm is very different. On generalization risk analysis, related studies include [2, 12] on
non-structured classification and [18, 7] on structured classification.
To the best of our knowledge, this is the first theoretical result on quantifying the relation between
structure complexity and the generalization risk in structured prediction, and this is also the first
proposal on structure regularization via regularizing tag-interactions. The contributions of this work¹
are two-fold:
? On the methodology side, we propose a structure regularization framework for structured
prediction. We show both theoretically and empirically that the proposed method can effectively reduce the overfitting risk, and at the same time accelerate the convergence rate in
training. Our method and the theoretical analysis do not make assumptions based on specific structures. In other words, the method and the theoretical results can apply to graphical
models with arbitrary structures, including linear chains, trees, and general graphs.
• On the application side, for several important natural language processing tasks, our simple
method can easily beat the benchmark systems on those highly-competitive tasks, achieving
record-breaking accuracies as well as substantially faster training speed.
2 Structure Regularization
A graph of observations (even with arbitrary structures) can be indexed and be denoted by using
an indexed sequence of observations O = {o1 , . . . , on }. We use the term sample to denote O =
{o1 , . . . , on }. For example, in natural language processing, a sample may correspond to a sentence
of n words with dependencies of tree structures (e.g., in syntactic parsing). For simplicity in analysis,
we assume all samples have n observations (thus n tags). In a typical setting of structured prediction,
all the n tags have inter-dependencies via connecting each Markov dependency between neighboring
tags. Thus, we call n as tag structure complexity or simply structure complexity below.
A sample is converted to an indexed sequence of feature vectors x = {x^(1), ..., x^(n)}, where x^(k) ∈ X is of dimension d and corresponds to the local features extracted from the position/index k. We can use an n × d matrix to represent x ∈ X^n. Let Z = (X^n, Y^n) and let z = (x, y) ∈ Z denote a sample in the training data. Suppose a training set is S = {z_1 = (x_1, y_1), ..., z_m = (x_m, y_m)}, with size m, and the samples are drawn i.i.d. from a distribution D which is unknown. A learning algorithm is a function G : Z^m → F with the function space F ⊆ {X^n → Y^n}, i.e., G maps a training set S to a function G_S : X^n → Y^n. We suppose G is symmetric with respect to S, so that G is independent of the order of S.
Structural dependencies among tags are the major difference between structured prediction and non-structured classification. For the latter case, a local classification of g based on a position k can be expressed as g(x^(k−a), ..., x^(k+a)), where the term {x^(k−a), ..., x^(k+a)} represents a local window. However, for structured prediction, a local classification on a position depends on the whole input x = {x^(1), ..., x^(n)} rather than a local window, due to the nature of structural dependencies among tags (e.g., graphical models like CRFs). Thus, in structured prediction a local classification on k should be denoted as g(x^(1), ..., x^(n), k). To simplify the notation, we define

\[ g(\mathbf{x}, k) \triangleq g(\mathbf{x}^{(1)}, \dots, \mathbf{x}^{(n)}, k) \]
¹ See the code at http://klcl.pku.edu.cn/member/sunxu/code.htm
[Figure 1 graphic: a linear chain of six tags y^(1), ..., y^(6) over observations x^(1), ..., x^(6), shown decomposed into three mini-chains of two tags each.]
Figure 1: An illustration of structure regularization in the simple linear-chain case, which decomposes a training sample z with structure complexity 6 into three mini-samples with structure complexity 2. Structure regularization can apply to more general graphs with arbitrary dependencies.
We define the point-wise cost function c : Y × Y → R+ as c[G_S(x, k), y^(k)], which measures the cost on a position k by comparing G_S(x, k) and the gold-standard tag y^(k), and we introduce the point-wise loss as

\[ \ell(G_S, \mathbf{z}, k) \triangleq c[G_S(\mathbf{x}, k), y^{(k)}] \]

Then, we define the sample-wise cost function C : Y^n × Y^n → R+, which is the cost function with respect to a whole sample, and we introduce the sample-wise loss as

\[ L(G_S, \mathbf{z}) \triangleq C[G_S(\mathbf{x}), \mathbf{y}] = \sum_{k=1}^{n} \ell(G_S, \mathbf{z}, k) = \sum_{k=1}^{n} c[G_S(\mathbf{x}, k), y^{(k)}] \]
Given G and a training set S, what we are most interested in is the generalization risk in structured prediction (i.e., the expected average loss) [18, 7]:

\[ R(G_S) = \mathbb{E}_{\mathbf{z}}\left[ \frac{L(G_S, \mathbf{z})}{n} \right] \]

Since the distribution D is unknown, we have to estimate R(G_S) by using the empirical risk:

\[ R_e(G_S) = \frac{1}{mn} \sum_{i=1}^{m} L(G_S, \mathbf{z}_i) = \frac{1}{mn} \sum_{i=1}^{m} \sum_{k=1}^{n} \ell(G_S, \mathbf{z}_i, k) \]
To state our theoretical results, we must describe several quantities and assumptions following prior work [2, 12]. We assume a simple real-valued structured prediction scheme such that the class predicted on position k of x is the sign of G_S(x, k) ∈ D.² Also, we assume the point-wise cost function c is convex and τ-smooth, such that ∀y₁, y₂ ∈ D, ∀y* ∈ Y,

\[ |c(y_1, y^*) - c(y_2, y^*)| \le \tau\, |y_1 - y_2| \tag{1} \]

Also, we use a value ρ to quantify the bound of |G_S(x, k) − G_{S\i}(x, k)| while changing a single sample (with size n′ ≤ n) in the training set with respect to the structured input x. This ρ-admissibility assumption can be formulated as ∀k,

\[ |G_S(\mathbf{x}, k) - G_{S^{\backslash i}}(\mathbf{x}, k)| \le \rho\, \|G_S - G_{S^{\backslash i}}\|_2 \cdot \|\mathbf{x}\|_2 \tag{2} \]

where ρ ∈ R+ is a value related to the design of the algorithm G.
2.1 Structure Regularization
Most existing regularization techniques are for regularizing model weights/parameters (e.g., a representative regularizer is the Gaussian regularizer, also called the L2 regularizer); we call such techniques weight regularization.
Definition 1 (Weight regularization) Let N_λ : F → R+ be a weight regularization function on F with regularization strength λ; the structured classification based objective function with general weight regularization is as follows:

\[ R_\lambda(G_S) \triangleq R_e(G_S) + N_\lambda(G_S) \tag{3} \]
² In practice, many popular structured prediction models have a convex and real-valued cost function (e.g., CRFs).
Algorithm 1 Training with structure regularization
 1: Input: model weights w, training set S, structure regularization strength α
 2: repeat
 3:   S′ ← ∅
 4:   for i = 1 → m do
 5:     Randomly decompose z_i ∈ S into mini-samples N_α(z_i) = {z_(i,1), ..., z_(i,α)}
 6:     S′ ← S′ ∪ N_α(z_i)
 7:   end for
 8:   for i = 1 → |S′| do
 9:     Sample z′ uniformly at random from S′, with gradient ∇g_{z′}(w)
10:     w ← w − η ∇g_{z′}(w)
11:   end for
12: until convergence
13: return w
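As a concrete illustration of Algorithm 1, here is a minimal Python sketch for a generic structured model. The gradient function grad_fn and the sample representation are placeholders (assumptions, not part of the paper); only the random split into α contiguous mini-samples and the SGD loop follow the pseudocode above.

```python
import numpy as np

def decompose(sample, alpha, rng):
    """Randomly split one structured sample (a sequence of positions) into
    alpha contiguous mini-samples, whose sizes then have expected value
    n / alpha. Assumes alpha <= len(sample)."""
    n = len(sample)
    cuts = np.sort(rng.choice(np.arange(1, n), size=alpha - 1, replace=False))
    bounds = np.concatenate(([0], cuts, [n]))
    return [sample[a:b] for a, b in zip(bounds[:-1], bounds[1:])]

def train_structreg(samples, grad_fn, w, alpha, eta, epochs, seed=0):
    """SGD with structure regularization (Algorithm 1).
    grad_fn(w, mini_sample) must return the loss gradient on one mini-sample."""
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        # Step 1: decompose every training sample into mini-samples (S').
        mini = [m for z in samples for m in decompose(z, alpha, rng)]
        # Step 2: plain SGD over the mini-samples, drawn uniformly.
        for i in rng.permutation(len(mini)):
            w = w - eta * grad_fn(w, mini[i])
    return w
```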
While weight regularization is normalizing model weights, the proposed structure regularization
method is normalizing the structural complexity of the training samples. As illustrated in Figure 1,
our proposal is based on tag structure decomposition, which can be formally defined as follows:
Definition 2 (Structure regularization) Let N_α : F → F be a structure regularization function on F with regularization strength α, 1 ≤ α ≤ n; the structured classification based objective function with structure regularization is as follows³:

\[ R_\alpha(G_S) \triangleq R_e[G_{N_\alpha(S)}] = \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{\alpha} L[G_{S'}, \mathbf{z}_{(i,j)}] = \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{\alpha} \sum_{k=1}^{n/\alpha} \ell[G_{S'}, \mathbf{z}_{(i,j)}, k] \tag{4} \]

where N_α(z_i) randomly splits z_i into α mini-samples {z_(i,1), ..., z_(i,α)}, so that the mini-samples have a distribution on their sizes (structure complexities) with expected value n′ = n/α. Thus, we get

\[ S' = \{\underbrace{\mathbf{z}_{(1,1)}, \mathbf{z}_{(1,2)}, \dots, \mathbf{z}_{(1,\alpha)}}_{\alpha}, \;\dots,\; \underbrace{\mathbf{z}_{(m,1)}, \mathbf{z}_{(m,2)}, \dots, \mathbf{z}_{(m,\alpha)}}_{\alpha}\} \tag{5} \]

with mα mini-samples of expected structure complexity n/α. We can denote S′ more compactly as S′ = {z′_1, z′_2, ..., z′_{mα}}, and R_α(G_S) can be simplified as

\[ R_\alpha(G_S) \triangleq \frac{1}{mn} \sum_{i=1}^{m\alpha} L(G_{S'}, \mathbf{z}'_i) = \frac{1}{mn} \sum_{i=1}^{m\alpha} \sum_{k=1}^{n/\alpha} \ell[G_{S'}, \mathbf{z}'_i, k] \tag{6} \]
When the structure regularization strength α = 1, we have S′ = S and R_α = R_e. The structure regularization algorithm (in the stochastic gradient descent setting) is summarized in Algorithm 1. Recall that x = {x^(1), ..., x^(n)} represents feature vectors. Thus, it should be emphasized that the decomposition of x is the decomposition of the feature vectors, not of the original observations. Decomposing the feature vectors is more convenient and loses no information; decomposing the observations would require regenerating the features and may lose some of them.
Structure regularization does not conflict with weight regularization; the two can be applied together.
Definition 3 (Structure & weight regularization) By combining structure regularization in Definition 2 and weight regularization in Definition 1, the structured classification based objective function is as follows:

\[ R_{\alpha,\lambda}(G_S) \triangleq R_\alpha(G_S) + N_\lambda(G_S) \tag{7} \]

When α = 1, we have R_{α,λ} = R_e(G_S) + N_λ(G_S) = R_λ.
Like existing weight regularization methods, our structure regularization currently applies only to the training stage; we do not use it in the test stage.
³ The notation N is overloaded here. For clarity throughout, N with subscript λ refers to the weight regularization function, and N with subscript α refers to the structure regularization function.
2.2 Reduction of Generalization Risk
In contrast to the simplicity of the algorithm, the theoretical analysis is quite technical. In this paper
we only describe the major theoretical result. Detailed analysis and proofs are given in the full
version of this work [14].
Theorem 4 (Generalization vs. structure regularization) Let the structured prediction objective function of G be penalized by structure regularization with factor α ∈ [1, n] and L2 weight regularization with factor λ, and let the penalized function have a minimizer f:

\[ f = \operatorname*{argmin}_{g \in F} R_{\alpha,\lambda}(g) = \operatorname*{argmin}_{g \in F} \left( \frac{1}{mn} \sum_{j=1}^{m\alpha} L(g, \mathbf{z}'_j) + \frac{\lambda}{2} \|g\|_2^2 \right) \tag{8} \]

Assume the point-wise loss ℓ is convex and differentiable, and is bounded by ℓ(f, z, k) ≤ ξ. Assume f(x, k) is ρ-admissible. Let a local feature value be bounded by v such that x^(k,q) ≤ v for q ∈ {1, ..., d}. Then, for any δ ∈ (0, 1), with probability at least 1 − δ over the random draw of the training set S, the generalization risk R(f) is bounded by

\[ R(f) \le R_e(f) + \frac{2 d \tau^2 \rho^2 v^2 n^2}{m \lambda \alpha} + \left( \frac{(4m-2)\, d \tau^2 \rho^2 v^2 n^2}{m \lambda \alpha^2} + \xi \right) \sqrt{\frac{\ln \delta^{-1}}{2m}} \tag{9} \]
Since τ, ρ, and v are typically small compared with the other variables, especially m, (9) can be approximated as follows by ignoring small terms:

\[ R(f) \le R_e(f) + O\left( \frac{d\, n^2\, \tau \sqrt{\ln \delta^{-1}}}{\lambda\, \alpha^{1.5}\, m} \right) \tag{10} \]

The proof is given in the full version of this work [14]. We call the term O(d n² τ √(ln δ⁻¹) / (λ α^{1.5} m)) in (10) the "overfit-bound", and reducing the overfit-bound is crucial for reducing the generalization risk bound. First, (10) suggests that the structure complexity n can increase the overfit-bound by a magnitude of O(n²), and applying weight regularization can reduce the overfit-bound by O(λ). Importantly, applying structure regularization further (on top of weight regularization) can additionally reduce the overfit-bound by a magnitude of O(α^{1.5}). Since many applications in practice are based on sparse features, a sparse-feature assumption can further improve the generalization bound; the improved generalization bounds are given in the full version of this work [14].
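A quick numerical illustration of how the overfit-bound in (10) shrinks with α; the constants below are purely illustrative, and only the relative scaling n²/(λ α^{1.5} m) matters:

```python
n, m, lam = 50, 10_000, 1.0          # structure complexity, samples, L2 strength
for alpha in (1, 2, 5, 10, 25):
    print(f"alpha={alpha:>2}  n^2/(lam * alpha^1.5 * m) = "
          f"{n**2 / (lam * alpha**1.5 * m):.4f}")
# Going from alpha = 1 to alpha = 10 shrinks the factor by 10^1.5 (about 31.6x).
```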
2.3 Accelerating Convergence Rates in Training
We also analyze the impact of structure regularization on the convergence rate of online learning. Following prior work [9], our analysis is based on stochastic gradient descent (SGD) with a fixed learning rate. Let g(w) be the structured prediction objective function and w ∈ W the weight vector. Recall that the SGD update with fixed learning rate η has a form like this:

\[ \mathbf{w}_{t+1} \leftarrow \mathbf{w}_t - \eta\, \nabla g_{\mathbf{z}_t}(\mathbf{w}_t) \tag{11} \]

where g_z(w_t) is the stochastic estimate of the objective function based on z, which is randomly drawn from S. To state our convergence rate analysis results, we need several assumptions following (Nemirovski et al. 2009). We assume g is strongly convex with modulus c, that is, ∀w, w′ ∈ W,

\[ g(\mathbf{w}') \ge g(\mathbf{w}) + (\mathbf{w}' - \mathbf{w})^\top \nabla g(\mathbf{w}) + \frac{c}{2}\, \|\mathbf{w}' - \mathbf{w}\|^2 \tag{12} \]

When g is strongly convex, there is a global optimum/minimizer w*. We also assume Lipschitz continuous differentiability of g with the constant q, that is, ∀w, w′ ∈ W,

\[ \|\nabla g(\mathbf{w}') - \nabla g(\mathbf{w})\| \le q\, \|\mathbf{w}' - \mathbf{w}\| \tag{13} \]

It is also reasonable to assume that the norm of ∇g_z(w) has almost surely positive correlation with the structure complexity of z,⁴ which can be quantified by a bound κ ∈ R+:

\[ \|\nabla g_{\mathbf{z}}(\mathbf{w})\|^2 \le \kappa\, |\mathbf{z}| \quad \text{almost surely for all } \mathbf{w} \in \mathcal{W} \tag{14} \]

where |z| denotes the structure complexity of z.

⁴ Many structured prediction systems (e.g., CRFs) satisfy this assumption: the gradient based on a larger sample (i.e., large n) is expected to have a larger norm.

Moreover, it is reasonable to assume

\[ \eta c < 1 \tag{15} \]

because even ordinary gradient descent methods will diverge if ηc > 1. Then, we show that structure regularization can quadratically accelerate the SGD rates of convergence:
Proposition 5 (Convergence rates vs. structure regularization) With the aforementioned assumptions, let the SGD training have a learning rate defined as η = cβϵα²/(qκ²n²), where ϵ > 0 is a convergence tolerance value and β ∈ (0, 1]. Let t be an integer satisfying

\[ t \ge \frac{q \kappa^2 n^2 \log(q a_0 / \epsilon)}{\beta \epsilon c^2 \alpha^2} \tag{16} \]

where n and α ∈ [1, n] are as before, and a₀ is the initial distance, which depends on the initialization of the weights w₀ and the minimizer w*, i.e., a₀ = ||w₀ − w*||². Then, after t updates of w it converges to E[g(w_t) − g(w*)] ≤ ϵ.

The proof is given in the full version of this work [14]. As we can see, using structure regularization with strength α can quadratically accelerate the convergence rate, by a factor of α².
3 Experiments
Diversified Tasks. The natural language processing tasks include (1) part-of-speech tagging, (2)
biomedical named entity recognition, and (3) Chinese word segmentation. The signal processing
task is (4) sensor-based human activity recognition. The tasks (1) to (3) use boolean features and
the task (4) adopts real-valued features. From tasks (1) to (4), the averaged structure complexity
(number of observations) n is very different, with n = 23.9, 26.5, 46.6, 67.9, respectively. The
dimension of tags |Y| is also diversified among tasks, with |Y| ranging from 5 to 45.
Part-of-Speech Tagging (POS-Tagging). Part-of-Speech (POS) tagging is an important and highly
competitive task. We use the standard benchmark dataset in prior work [3], with 38,219 training
samples and 5,462 test samples. Following prior work [19], we use features based on words and
lexical patterns, with 393,741 raw features.⁵ The evaluation metric is per-word accuracy.
Biomedical Named Entity Recognition (Bio-NER). This task is from the BioNLP-2004 shared
task [19]. There are 17,484 training samples and 3,856 test samples. Following prior work [19],
we use word pattern features and POS features, with 403,192 raw features in total. The evaluation
metric is balanced F-score.
Word Segmentation (Word-Seg). We use the MSR data provided by SIGHAN-2004 contest [4].
There are 86,918 training samples and 3,985 test samples. The features are similar to [16], with
1,985,720 raw features in total. The evaluation metric is balanced F-score.
Sensor-based Human Activity Recognition (Act-Recog). This is a task based on real-valued sensor signals, with the data extracted from the Bao04 activity recognition dataset [15]. The features
are similar to [15], with 1,228 raw features in total. There are 16,000 training samples and 4,000
test samples. The evaluation metric is accuracy.
We choose the CRFs [6] and structured perceptrons (Perc) [3], which are arguably the most popular
probabilistic and non-probabilistic structured prediction models, respectively. The CRFs are trained
using the SGD algorithm,⁶ and the baseline method is the traditional weight regularization scheme (WeightReg), which adopts the most representative L2 weight regularization, i.e., a Gaussian prior.⁷ For the structured perceptrons, the baseline WeightAvg is the popular implicit regularization
technique based on parameter averaging, i.e., averaged perceptron [3].
⁵ Raw features are those observation features based only on x, i.e., with no combination with tag information.
⁶ In the theoretical analysis, following prior work we adopt SGD with a fixed learning rate, as described in Section 2.3. However, since SGD with a decaying learning rate is more commonly used in practice, in the experiments we use SGD with a decaying learning rate.
⁷ We also tested sparsity-emphasizing regularization methods, including L1 regularization and Group Lasso regularization [8]. However, we find that in most cases those methods have lower accuracy than L2 regularization.
[Figure 2 graphic: eight panels, one per task (POS-Tagging, Bio-NER, Word-Seg, Act-Recog) and model (CRF in row 1, Perc in row 2), plotting accuracy or F-score (%) against mini-sample size n/α, with curves for StructReg versus WeightReg (CRFs) or WeightAvg (perceptrons).]
Figure 2: On the four tasks, comparing the structure regularization method (StructReg) with existing
regularization methods in terms of accuracy/F-score. Row-1 shows the results on CRFs and Row-2
shows the results on structured perceptrons.
Table 1: Comparing our results with the benchmark systems on corresponding tasks.

                   POS-Tagging (Acc%)   Bio-NER (F1%)      Word-Seg (F1%)
Benchmark system   97.33 (see [13])     72.28 (see [19])   97.19 (see [4])
Our results        97.36                72.43              97.50
The rich edge features [16] are employed for all methods. All methods are based on the 1st-order
Markov dependency. For WeightReg, the L2 regularization strengths (i.e., λ/2 in Eq. (8)) are tuned
among values 0.1, 0.5, 1, 2, 5, and are determined on the development data (POS-Tagging) or simply
via 4-fold cross validation on the training set (Bio-NER, Word-Seg, and Act-Recog). With this
automatic tuning for WeightReg, we set 2, 5, 1 and 5 for POS-Tagging, Bio-NER, Word-Seg, and
Act-Recog tasks, respectively.
3.1 Experimental Results
The experimental results in terms of accuracy/F-score are shown in Figure 2. For the CRF model, the training is convergent, and the results at the convergence state (decided by relative objective change with a threshold value of 0.0001) are shown. For the structured perceptron model, the training is typically not convergent, and the results at the 10th iteration are shown. For stability of the curves, the results of the structured perceptrons are averaged over 10 repeated runs.

Since different samples have different size n in practice, we set α as a function of n, so that the generated mini-samples have fixed size n′ with n′ = n/α. Actually, n′ follows a probability distribution because we adopt randomized decomposition. For example, if n′ = 5.5, the mini-samples are a mixture of size-5 and size-6 ones, and the mean of the size distribution is 5.5. In the figure, the curves are based on n′ = 1.5, 2.5, 3.5, 5.5, 10.5, 15.5, 20.5.

As we can see, the results are quite consistent, demonstrating that structure regularization leads to higher accuracies/F-scores compared with the existing baselines. We also conduct significance tests based on the t-test. Since the t-test for F-score based tasks (Bio-NER and Word-Seg) may be unreliable⁸, we only perform t-tests for the accuracy-based tasks, i.e., POS-Tagging and Act-Recog. For POS-Tagging, the significance test suggests that the superiority of StructReg over WeightReg is very statistically significant, with p < 0.01. For Act-Recog, the significance tests suggest that both the StructReg vs. WeightReg difference and the StructReg vs. WeightAvg difference are extremely statistically significant, with p < 0.0001 in both cases.

⁸ Indeed we can convert F-scores to accuracy scores for the t-test, but in many cases this conversion is unreliable. For example, very different F-scores may correspond to similar accuracy scores.
[Figure 3 graphic: eight panels matching Figure 2, plotting wall-clock training time (sec) against mini-sample size n/α for StructReg versus WeightReg (CRFs, row 1) and WeightAvg (perceptrons, row 2).]
Figure 3: On the four tasks, comparing the structure regularization method (StructReg) with existing
regularization methods in terms of wall-clock training time.
The experimental results support our theoretical analysis that structure regularization can further reduce the generalization risk over existing weight regularization techniques.
Our method outperforms the benchmark systems on the three important natural language processing
tasks. The POS-Tagging task is a highly competitive task, with many methods proposed, and the best
report (without using extra resources) until now is achieved by using a bidirectional learning model
in [13],⁹ with the accuracy 97.33%. Our simple method achieves better accuracy compared with all
of those state-of-the-art systems. Furthermore, our method achieves as good scores as the benchmark
systems on the Bio-NER and Word-Seg tasks. On the Bio-NER task, [19] achieves 72.28% based
on lookahead learning and [21] achieves 72.65% based on reranking. On the Word-Seg task, [4]
achieves 97.19% based on maximum entropy classification and our recent work [16] achieves 97.5%
based on feature-frequency-adaptive online learning. The comparisons are summarized in Table 1.
Figure 3 shows experimental comparisons in terms of wall-clock training time. As we can see, the
proposed method can substantially improve the training speed. The speedup is not only from the
faster convergence rates, but also from the faster processing time on the structures, because it is
more efficient to process the decomposed samples with simple structures.
4 Conclusions
We proposed a structure regularization framework, which decomposes training samples into mini-samples with simpler structures, deriving a trained model with regularized structural complexity.
Our theoretical analysis showed that this method can effectively reduce the generalization risk, and
can also accelerate the convergence speed in training. The proposed method does not change the
convexity of the objective function, and can be used together with any existing weight regularization
methods. Note that the proposed method and the theoretical results fit general structures, including linear chains, trees, and general graphs. Experimental results demonstrated that our method achieved
better results than state-of-the-art systems on several highly-competitive tasks, and at the same time
with substantially faster training speed.
Acknowledgments. This work was supported in part by NSFC (No.61300063).
⁹ See a collection of the systems at http://aclweb.org/aclwiki/index.php?title=POS_Tagging_(State_of_the_art)
References
[1] A. Argyriou, C. A. Micchelli, M. Pontil, and Y. Ying. A spectral regularization framework for multi-task structure learning. In Proceedings of NIPS'07. MIT Press, 2007.
[2] O. Bousquet and A. Elisseeff. Stability and generalization. Journal of Machine Learning Research, 2:499–526, 2002.
[3] M. Collins. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proceedings of EMNLP'02, pages 1–8, 2002.
[4] J. Gao, G. Andrew, M. Johnson, and K. Toutanova. A comparative study of parameter estimation methods for statistical natural language processing. In Proceedings of ACL'07, pages 824–831, 2007.
[5] J. Graça, K. Ganchev, B. Taskar, and F. Pereira. Posterior vs parameter sparsity in latent variable models. In Proceedings of NIPS'09, pages 664–672, 2009.
[6] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML'01, pages 282–289, 2001.
[7] B. London, B. Huang, B. Taskar, and L. Getoor. PAC-Bayes generalization bounds for randomized structured prediction. In NIPS Workshop on Perturbation, Optimization and Statistics, 2007.
[8] A. F. T. Martins, N. A. Smith, M. A. T. Figueiredo, and P. M. Q. Aguiar. Structured sparsity in structured prediction. In Proceedings of EMNLP'11, pages 1500–1511, 2011.
[9] F. Niu, B. Recht, C. Re, and S. J. Wright. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In NIPS'11, pages 693–701, 2011.
[10] A. Quattoni, X. Carreras, M. Collins, and T. Darrell. An efficient projection for l1,infinity regularization. In Proceedings of ICML'09, page 108, 2009.
[11] M. W. Schmidt and K. P. Murphy. Convex structure learning in log-linear models: Beyond pairwise potentials. In Proceedings of AISTATS'10, volume 9 of JMLR Proceedings, pages 709–716, 2010.
[12] S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan. Learnability and stability in the general learning setting. In Proceedings of COLT'09, 2009.
[13] L. Shen, G. Satta, and A. K. Joshi. Guided learning for bidirectional sequence classification. In Proceedings of ACL'07, 2007.
[14] X. Sun. Structure regularization for structured prediction: Theories and experiments. Technical report, arXiv, 2014.
[15] X. Sun, H. Kashima, and N. Ueda. Large-scale personalized human activity recognition using online multitask learning. IEEE Trans. Knowl. Data Eng., 25(11):2551–2563, 2013.
[16] X. Sun, W. Li, H. Wang, and Q. Lu. Feature-frequency-adaptive on-line training for fast and accurate natural language processing. Computational Linguistics, 40(3):563–586, 2014.
[17] C. A. Sutton and A. McCallum. Piecewise pseudolikelihood for efficient training of conditional random fields. In ICML'07, pages 863–870. ACM, 2007.
[18] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In NIPS'03, 2003.
[19] Y. Tsuruoka, Y. Miyao, and J. Kazama. Learning with lookahead: Can history-based models rival globally optimized models? In Conference on Computational Natural Language Learning, 2011.
[20] H. Xue, S. Chen, and Q. Yang. Structural regularized support vector machine: A framework for structural large margin classifier. IEEE Transactions on Neural Networks, 22(4):573–587, 2011.
[21] K. Yoshida and J. Tsujii. Reranking for biomedical named-entity recognition. In ACL Workshop on BioNLP, pages 209–216, 2007.
[22] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society, Series B, 68:49–67, 2006.
5,041 | 5,564 | Expectation-Maximization for Learning Determinantal Point Processes
Jennifer Gillenwater
Computer and Information Science
University of Pennsylvania
[email protected]
Alex Kulesza
Computer Science and Engineering
University of Michigan
[email protected]
Emily Fox
Statistics
University of Washington
[email protected]
Ben Taskar
Computer Science and Engineering
University of Washington
[email protected]
Abstract
A determinantal point process (DPP) is a probabilistic model of set diversity compactly parameterized by a positive semi-definite kernel matrix. To fit a DPP to a
given task, we would like to learn the entries of its kernel matrix by maximizing
the log-likelihood of the available data. However, log-likelihood is non-convex
in the entries of the kernel matrix, and this learning problem is conjectured to be
NP-hard [1]. Thus, previous work has instead focused on more restricted convex
learning settings: learning only a single weight for each row of the kernel matrix
[2], or learning weights for a linear combination of DPPs with fixed kernel matrices [3]. In this work we propose a novel algorithm for learning the full kernel
matrix. By changing the kernel parameterization from matrix entries to eigenvalues and eigenvectors, and then lower-bounding the likelihood in the manner
of expectation-maximization algorithms, we obtain an effective optimization procedure. We test our method on a real-world product recommendation task, and
achieve relative gains of up to 16.5% in test log-likelihood compared to the naive
approach of maximizing likelihood by projected gradient ascent on the entries of
the kernel matrix.
1 Introduction
Subset selection is a core task in many real-world applications. For example, in product recommendation we typically want to choose a small set of products from a large collection; many other
examples of subset selection tasks turn up in domains like document summarization [4, 5], sensor
placement [6, 7], image search [3, 8], and auction revenue maximization [9], to name a few. In
these applications, a good subset is often one whose individual items are all high-quality, but also all
distinct. For instance, recommended products should be popular, but they should also be diverse to
increase the chance that a user finds at least one of them interesting. Determinantal point processes
(DPPs) offer one way to model this tradeoff; a DPP defines a distribution over all possible subsets
of a ground set, and the mass it assigns to any given set is a balanced measure of that set?s quality
and diversity.
Originally discovered as models of fermions [10], DPPs have recently been effectively adapted for a
variety of machine learning tasks [8, 11, 12, 13, 14, 15, 16, 17, 18, 19, 2, 3, 20]. They offer attractive
computational properties, including exact and efficient normalization, marginalization, conditioning,
and sampling [21]. These properties arise in part from the fact that a DPP can be compactly parameterized by an N × N positive semi-definite matrix L. Unfortunately, though, learning L from
example subsets by maximizing likelihood is conjectured to be NP-hard [1, Conjecture 4.1]. While
gradient ascent can be applied in an attempt to approximately optimize the likelihood objective, we
show later that it requires a projection step that often produces degenerate results.
For this reason, in most previous work only partial learning of L has been attempted. [2] showed that
the problem of learning a scalar weight for each row of L is a convex optimization problem. This
amounts to learning what makes an item high-quality, but does not address the issue of what makes
two items similar. [3] explored a different direction, learning weights for a linear combination of
DPPs with fixed Ls. This works well in a limited setting, but requires storing a potentially large set
of kernel matrices, and the final distribution is no longer a DPP, which means that many attractive
computational properties are lost. [8] proposed as an alternative that one first assume L takes on a
particular parametric form, and then sample from the posterior distribution over kernel parameters
using Bayesian methods. This overcomes some of the disadvantages of [3]?s L-ensemble method,
but does not allow for learning an unconstrained, non-parametric L.
The learning method we propose in this paper differs from those of prior work in that it does not
assume fixed values or restrictive parameterizations for L, and exploits the eigendecomposition of L.
Many properties of a DPP can be simply characterized in terms of the eigenvalues and eigenvectors
of L, and working with this decomposition allows us to develop an expectation-maximization (EM)
style optimization algorithm. This algorithm negates the need for the problematic projection step that
is required for naive gradient ascent to maintain positive semi-definiteness of L. As the experiments
show, a projection step can sometimes lead to learning a nearly diagonal L, which fails to model
the negative interactions between items. These interactions are vital, as they lead to the diversity-seeking nature of a DPP. The proposed EM algorithm overcomes this failing, making it more robust
to initialization and dataset changes. It is also asymptotically faster than gradient ascent.
2 Background
Formally, a DPP P on a ground set of items 𝒴 = {1, ..., N} is a probability measure on 2^𝒴, the set of all subsets of 𝒴. For every Y ⊆ 𝒴 we have P(Y) ∝ det(L_Y), where L is a positive semi-definite (PSD) matrix. The subscript L_Y ≡ [L_ij]_{i,j∈Y} denotes the restriction of L to the entries indexed by elements of Y, and we have det(L_∅) ≡ 1. Notice that the restriction to PSD matrices ensures that all principal minors of L are non-negative, so that det(L_Y) ≥ 0 as required for a proper probability distribution. The normalization constant for the distribution can be computed explicitly thanks to the fact that Σ_Y det(L_Y) = det(L + I), where I is the N × N identity matrix. Intuitively, we can think of a diagonal entry L_ii as capturing the quality of item i, while an off-diagonal entry L_ij measures the similarity between items i and j.
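To make the definition concrete, the following numpy sketch evaluates the normalized probability of a subset under an L-ensemble; the random kernel here is purely illustrative.

```python
import numpy as np

def dpp_log_prob(L, Y):
    """log P(Y) = log det(L_Y) - log det(L + I); det(L_emptyset) = 1."""
    _, logdet_norm = np.linalg.slogdet(L + np.eye(L.shape[0]))
    if len(Y) == 0:
        return -logdet_norm
    _, logdet_LY = np.linalg.slogdet(L[np.ix_(Y, Y)])
    return logdet_LY - logdet_norm

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 3))
L = B @ B.T                          # a random rank-3 PSD kernel on N = 5 items
print(np.exp(dpp_log_prob(L, [0, 2])))
```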
An alternative representation of a DPP is given by the marginal kernel: K = L(L + I)⁻¹. The L–K relationship can also be written in terms of their eigendecompositions. L and K share the same eigenvectors v, and an eigenvalue λ_i of K corresponds to an eigenvalue λ_i/(1 − λ_i) of L:

\[ K = \sum_{j=1}^{N} \lambda_j \mathbf{v}_j \mathbf{v}_j^\top \quad \Leftrightarrow \quad L = \sum_{j=1}^{N} \frac{\lambda_j}{1 - \lambda_j} \mathbf{v}_j \mathbf{v}_j^\top\,. \tag{1} \]
Clearly, if L is PSD then K is as well, and the above equations also imply that the eigenvalues of K are further restricted to be ≤ 1. K is called the marginal kernel because, for any set Y ~ P and for every A ⊆ 𝒴:

\[ P(A \subseteq Y) = \det(K_A)\,. \tag{2} \]

We can also write the exact (non-marginal, normalized) probability of a set Y ~ P in terms of K:

\[ P(Y) = \frac{\det(L_Y)}{\det(L + I)} = |\det(K - I_{\bar{Y}})|\,, \tag{3} \]

where I_{Ȳ} is the identity matrix with entry (i, i) zeroed for items i ∈ Y [1, Equation 3.69]. In what follows we use the K-based formula for P(Y) and learn the marginal kernel K. This is equivalent to learning L, as Equation (1) can be applied to convert from K to L.
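A sketch of the two conversions and of the K-based probability in Equation (3); the eigenvalues of K must be strictly below 1 for the map back to L to be finite.

```python
import numpy as np

def L_to_K(L):
    lam, V = np.linalg.eigh(L)
    return (V * (lam / (1.0 + lam))) @ V.T    # K = L (L + I)^{-1}

def K_to_L(K):
    lam, V = np.linalg.eigh(K)                # requires lam < 1, see Eq. (1)
    return (V * (lam / (1.0 - lam))) @ V.T

def dpp_prob_from_K(K, Y):
    """P(Y) = |det(K - I_Ybar)|, where I_Ybar zeroes diagonal entries in Y."""
    I_bar = np.eye(K.shape[0])
    I_bar[Y, Y] = 0.0
    return np.abs(np.linalg.det(K - I_bar))
```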
3 Learning algorithms
In our learning setting the input consists of n example subsets, {Y₁, ..., Y_n}, where Y_i ⊆ {1, ..., N} for all i. Our goal is to maximize the likelihood of these example sets. We first describe in Section 3.1 a naive optimization procedure: projected gradient ascent on the entries of the marginal matrix K, which will serve as a baseline in our experiments. We then develop an EM method: Section 3.2 changes variables from kernel entries to eigenvalues and eigenvectors (introducing a hidden variable in the process), Section 3.3 applies Jensen's inequality to lower-bound the objective, and Sections 3.4 and 3.5 outline a coordinate ascent procedure on this lower bound.
3.1 Projected gradient ascent
The log-likelihood maximization problem, based on Equation (3), is:

\[ \max_{K} \; \sum_{i=1}^{n} \log |\det(K - I_{\bar{Y}_i})| \quad \text{s.t.} \quad K \succeq 0, \; I - K \succeq 0 \tag{4} \]

where the first constraint ensures that K is PSD and the second puts an upper limit of 1 on its eigenvalues. Let 𝓛(K) represent this log-likelihood objective. Its partial derivative with respect to K is easy to compute by applying a standard matrix derivative rule [22, Equation 57]:

\[ \frac{\partial \mathcal{L}(K)}{\partial K} = \sum_{i=1}^{n} (K - I_{\bar{Y}_i})^{-1}\,. \tag{5} \]
Thus, projected gradient ascent [23] is a viable, simple optimization technique. Algorithm 1 outlines this method, which we refer to as K-Ascent (KA). The initial K supplied as input to the algorithm can be any PSD matrix with eigenvalues ≤ 1. The first part of the projection step, max(λ, 0), chooses the closest (in Frobenius norm) PSD matrix to Q [24, Equation 1]. The second part, min(λ, 1), caps the eigenvalues at 1. (Notice that only the eigenvalues have to be projected; K remains symmetric after the gradient step, so its eigenvectors are already guaranteed to be real.)

Unfortunately, the projection can take us to a poor local optimum. To see this, consider the case where the starting kernel K is a poor fit to the data. In this case, a large initial step size η will probably be accepted; even though such a step will likely result in the truncation of many eigenvalues at 0, the resulting matrix will still be an improvement over the poor initial K. However, with many zero eigenvalues, the new K will be near-diagonal, and, unfortunately, Equation (5) dictates that if the current K is diagonal, then its gradient is as well. Thus, the KA algorithm cannot easily move to any highly non-diagonal matrix. It is possible that employing more complex step-size selection mechanisms could alleviate this problem, but the EM algorithm we develop in the next section will negate the need for these entirely.
The EM algorithm we develop also has an advantage in terms of asymptotic runtime. The computational complexity of KA is dominated by the matrix inverses of the 𝓛 derivative, each of which requires O(N³) operations, and by the eigendecomposition needed for the projection, also O(N³). The overall runtime of KA, assuming T₁ iterations until convergence and an average of T₂ iterations to find a step size, is O(T₁nN³ + T₁T₂N³). As we will show in the following sections, the overall runtime of the EM algorithm is O(T₁nNk² + T₁T₂N³), which can be substantially better than KA's runtime for k ≪ N.
3.2 Eigendecomposing
Eigendecomposition is key to many core DPP algorithms such as sampling and marginalization.
This is because the eigendecomposition provides an alternative view of the DPP as a generative process, which often leads to more efficient algorithms. Specifically, sampling a set Y can
be broken down into a two-step process, the first of which involves generating a hidden variable
J ⊆ {1, ..., N} that codes for a particular set of K's eigenvectors. We review this process below,
then exploit it to develop an EM optimization scheme.
Suppose K = V ΛV^⊤ is an eigendecomposition of K. Let V^J denote the submatrix of V containing only the columns corresponding to the indices in a set J ⊆ {1, ..., N}.
Algorithm 1 K-Ascent (KA)
Input: K, {Y₁, ..., Y_n}, c
repeat
  G ← ∂𝓛(K)/∂K (Eq. 5)
  η ← 1
  repeat
    Q ← K + ηG
    Eigendecompose Q into V, λ
    λ ← min(max(λ, 0), 1)
    Q ← V diag(λ)V^⊤
    η ← η/2
  until 𝓛(Q) > 𝓛(K)
  δ ← 𝓛(Q) − 𝓛(K)
  K ← Q
until δ < c
Output: K

Algorithm 2 Expectation-Maximization (EM)
Input: K, {Y₁, ..., Y_n}, c
Eigendecompose K into V, λ
repeat
  for j = 1, ..., N do
    λ′_j ← (1/n) Σ_i p_K(j ∈ J | Y_i) (Eq. 19)
  end for
  G ← ∂F(V, λ′)/∂V (Eq. 20)
  η ← 1
  repeat
    V′ ← V exp[η(V^⊤G − G^⊤V)]
    η ← η/2
  until 𝓛(V′, λ′) > 𝓛(V, λ′)
  δ ← F(V′, λ′) − F(V, λ)
  λ ← λ′, V ← V′, η ← 2η
until δ < c
Output: K
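A direct numpy rendering of K-Ascent is below; log_lik and grad implement Equations (4) and (5). This is a sketch of the control flow only, with no safeguards against singular matrices or endless step-size halving.

```python
import numpy as np

def _I_bar(N, Y):
    I = np.eye(N)
    I[Y, Y] = 0.0          # zero the diagonal entries for items in Y
    return I

def log_lik(K, data):
    return sum(np.linalg.slogdet(K - _I_bar(K.shape[0], Y))[1] for Y in data)

def grad(K, data):
    return sum(np.linalg.inv(K - _I_bar(K.shape[0], Y)) for Y in data)

def k_ascent(K, data, tol=1e-4):
    while True:
        G, eta = grad(K, data), 1.0
        while True:
            lam, V = np.linalg.eigh(K + eta * G)
            Q = (V * np.clip(lam, 0.0, 1.0)) @ V.T   # project eigenvalues to [0, 1]
            if log_lik(Q, data) > log_lik(K, data):
                break
            eta /= 2.0
        delta, K = log_lik(Q, data) - log_lik(K, data), Q
        if delta < tol:
            return K
```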
Consider the corresponding marginal kernel, with all selected eigenvalues set to 1:

\[ K^{V^J} = \sum_{j \in J} \mathbf{v}_j \mathbf{v}_j^\top = V^J (V^J)^\top\,. \tag{6} \]
Any such kernel whose eigenvalues are all 1 is called an elementary DPP. According to [21, Theorem 7], a DPP with marginal kernel K is a mixture of all 2^N possible elementary DPPs:

\[ P(Y) = \sum_{J \subseteq \{1,\dots,N\}} P^{V^J}(Y) \prod_{j \in J} \lambda_j \prod_{j \notin J} (1 - \lambda_j)\,, \qquad P^{V^J}(Y) = \mathbf{1}(|Y| = |J|)\, \det(K^{V^J}_Y)\,. \tag{7} \]
This perspective leads to an efficient DPP sampling algorithm, where a set J is first chosen according to its mixture weight in Equation (7), and then a simple algorithm is used to sample from P^{V^J} [5, Algorithm 1]. In this sense, the index set J is an intermediate hidden variable in the process for generating a sample Y.
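The mixture view gives the standard two-step sampler directly. The sketch below condenses [5, Algorithm 1]: eigenvector selection by independent coin flips, then iterative item selection with an orthogonal-basis update (numerical niceties omitted).

```python
import numpy as np

def sample_dpp(K, rng):
    lam, V = np.linalg.eigh(K)
    W = V[:, rng.random(len(lam)) < lam]   # step 1: choose eigenvectors J
    Y = []
    for _ in range(W.shape[1]):            # step 2: |Y| = |J| items
        p = (W ** 2).sum(axis=1)
        i = rng.choice(len(p), p=p / p.sum())
        Y.append(i)
        j = np.argmax(np.abs(W[i, :]))     # use column j to zero out row i
        col = W[:, j].copy()
        W = np.delete(W, j, axis=1)
        W -= np.outer(col, W[i, :] / col[i])
        if W.shape[1]:
            W, _ = np.linalg.qr(W)         # re-orthonormalize the basis
    return sorted(Y)
```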
We can exploit this hidden variable J to develop an EM algorithm for learning K. Re-writing the data log-likelihood to make the hidden variable explicit:

\[ \mathcal{L}(K) = \mathcal{L}(\lambda, V) = \sum_{i=1}^{n} \log\Big( \sum_{J} p_K(J, Y_i) \Big) = \sum_{i=1}^{n} \log\Big( \sum_{J} p_K(Y_i \mid J)\, p_K(J) \Big)\,, \quad \text{where} \tag{8} \]

\[ p_K(J) = \prod_{j \in J} \lambda_j \prod_{j \notin J} (1 - \lambda_j)\,, \qquad p_K(Y_i \mid J) = \mathbf{1}(|Y_i| = |J|)\, \det\big([V^J (V^J)^\top]_{Y_i}\big)\,. \tag{9} \]
These equations follow directly from Equations (6) and (7).
3.3 Lower bounding the objective
We now introduce an auxiliary distribution, q(J | Y_i), and deploy it with Jensen's inequality to lower-bound the likelihood objective. This is a standard technique for developing EM schemes for dealing with hidden variables [25]. Proceeding in this direction:

\[ \mathcal{L}(V, \lambda) = \sum_{i=1}^{n} \log\Big( \sum_{J} q(J \mid Y_i)\, \frac{p_K(J, Y_i)}{q(J \mid Y_i)} \Big) \ge \sum_{i=1}^{n} \sum_{J} q(J \mid Y_i) \log \frac{p_K(J, Y_i)}{q(J \mid Y_i)} \triangleq F(q, V, \lambda)\,. \tag{10} \]
The function F(q, V, λ) can be expressed in either of the following two forms:

\[ F(q, V, \lambda) = \sum_{i=1}^{n} -\mathrm{KL}\big(q(J \mid Y_i) \,\|\, p_K(J \mid Y_i)\big) + \mathcal{L}(V, \lambda) \tag{11} \]

\[ \phantom{F(q, V, \lambda)} = \sum_{i=1}^{n} \mathbb{E}_q[\log p_K(J, Y_i)] + H(q) \tag{12} \]

where H is entropy. Consider optimizing this new objective by coordinate ascent. From Equation (11) it is clear that, holding V, λ constant, F is concave in q. This follows from the concavity of KL divergence. Holding q constant in Equation (12) yields the following function:

\[ F(V, \lambda) = \sum_{i=1}^{n} \sum_{J} q(J \mid Y_i)\, \big[\log p_K(J) + \log p_K(Y_i \mid J)\big]\,. \tag{13} \]

This expression is concave in λ_j, since log is concave. However, it is not concave in V due to the non-convex V^⊤V = I constraint. We describe in Section 3.5 one way to handle this.
To summarize, coordinate ascent on F(q, V, λ) alternates the following "expectation" and "maximization" steps; the first is concave in q, and the second is concave in the eigenvalues:

\[ \text{E-step:} \quad \min_{q} \sum_{i=1}^{n} \mathrm{KL}\big(q(J \mid Y_i) \,\|\, p_K(J \mid Y_i)\big) \tag{14} \]

\[ \text{M-step:} \quad \max_{V, \lambda} \sum_{i=1}^{n} \mathbb{E}_q[\log p_K(J, Y_i)] \quad \text{s.t.} \quad 0 \le \lambda \le 1, \; V^\top V = I \tag{15} \]

3.4 E-step
The E-step is easily solved by setting q(J | Y_i) = p_K(J | Y_i), which minimizes the KL divergence. Interestingly, we can show that this distribution is itself a conditional DPP, and hence can be compactly described by an N × N kernel matrix. Thus, to complete the E-step, we simply need to construct this kernel. Lemma 1 (see the supplement for a proof) gives an explicit formula. Note that q's probability mass is restricted to sets of a particular size k, and hence we call it a k-DPP. A k-DPP is a variant of DPP that can also be efficiently sampled from and marginalized, via modifications of the standard DPP algorithms. (See the supplement and [3] for more on k-DPPs.)

Lemma 1. At the completion of the E-step, q(J | Y_i) with |Y_i| = k is a k-DPP with (non-marginal) kernel Q^{Y_i}:

\[ Q^{Y_i} = R Z^{Y_i} R\,, \quad \text{and} \quad q(J \mid Y_i) \propto \mathbf{1}(|Y_i| = |J|)\, \det(Q^{Y_i}_J)\,, \quad \text{where} \tag{16} \]

\[ U = V^\top, \quad Z^{Y_i} = U^{Y_i} (U^{Y_i})^\top, \quad \text{and} \quad R = \mathrm{diag}\Big( \sqrt{\lambda / (1 - \lambda)} \Big)\,. \tag{17} \]
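Lemma 1 reduces the E-step to a small linear-algebra computation. A sketch of the kernel construction follows; note that Q^{Y_i} is indexed by eigenvector indices (J ranges over subsets of eigenvectors, not items), and the eigenvalues λ are assumed already clipped slightly below 1, as discussed below.

```python
import numpy as np

def estep_kernel(V, lam, Y):
    """Q^{Y} = R Z^{Y} R from Eqs. (16)-(17)."""
    U = V.T                            # rows of U are eigenvectors, columns are items
    UY = U[:, Y]                       # U^{Y}: restrict to the item-columns in Y
    Z = UY @ UY.T                      # Z^{Y} = U^{Y} (U^{Y})^T
    r = np.sqrt(lam / (1.0 - lam))     # diagonal of R
    return Z * np.outer(r, r)          # R Z R without forming diag(r)
```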
3.5 M-step
The M-step update for the eigenvalues is a closed-form expression with no need for projection. Taking the derivative of Equation (13) with respect to λ_j, setting it equal to zero, and solving for λ_j:

\[ \lambda_j = \frac{1}{n} \sum_{i=1}^{n} \sum_{J: j \in J} q(J \mid Y_i)\,. \tag{18} \]
The exponential-sized sum here is impractical, but we can eliminate it. Recall from Lemma 1 that q(J | Y_i) is a k-DPP with kernel Q^{Y_i}. Thus, we can use k-DPP marginalization algorithms to efficiently compute the sum over J. More concretely, let V̂ represent the eigenvectors of Q^{Y_i} with nonzero eigenvalue (there are at most k of them, since Z^{Y_i} has rank at most k), with v̂_r(j) the jth element of the rth such eigenvector. Then the marginals are:

\[ \sum_{J: j \in J} q(J \mid Y_i) = q(j \in J \mid Y_i) = \sum_{r} \hat{v}_r(j)^2\,, \tag{19} \]
which allows us to compute the eigenvalue updates in time O(nNk²), for k = max_i |Y_i|. (See the supplement for the derivation of Equation (19) and its computational complexity.) Note that this update is self-normalizing, so explicit enforcement of the 0 ≤ λ_j ≤ 1 constraint is unnecessary. There is one small caveat: the Q^{Y_i} matrix will be infinite if any λ_j is exactly equal to 1 (due to R in Equation (17)). In practice, we simply tighten the constraint on λ to keep it slightly below 1.
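Because Z^{Y_i} has rank at most k, the size-k DPP on Q^{Y_i} is effectively an elementary DPP, so the singleton marginals of Eq. (19) are summed squared components of the nonzero-eigenvalue eigenvectors. The sketch below (consistent with the estep_kernel sketch above) obtains those eigenvectors via the k × k Gram matrix to keep the cost at O(Nk²) per example, and then applies Eq. (18).

```python
import numpy as np

def estep_marginals(V, lam, Y):
    """q(j in J | Y) for the E-step k-DPP of Lemma 1, Eq. (19)."""
    M = (V.T)[:, Y] * np.sqrt(lam / (1.0 - lam))[:, None]   # M = R U^{Y}, N x k
    w, S = np.linalg.eigh(M.T @ M)                          # k x k Gram matrix
    V_hat = (M @ S) / np.sqrt(np.maximum(w, 1e-12))         # nonzero eigvecs of M M^T
    return (V_hat ** 2).sum(axis=1)                         # marginals over j = 1..N

def mstep_eigenvalues(V, lam, data):
    """Eq. (18): average the per-example singleton marginals."""
    return np.mean([estep_marginals(V, lam, Y) for Y in data], axis=0)
```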
Turning now to the M-step update for the eigenvectors, the derivative of Equation (13) with respect to V involves an exponential-size sum over J similar to that of the eigenvalue derivative. However, the terms of the sum in this case depend on V as well as on q(J | Y_i), making it hard to simplify. Yet, for the particular case of the initial gradient, where we have q = p, simplification is possible:

\[ \frac{\partial F(V, \lambda)}{\partial V} = \sum_{i=1}^{n} 2 B^{Y_i} (H^{Y_i})^{-1} V_{Y_i} R^2 \tag{20} \]

where H^{Y_i} is the |Y_i| × |Y_i| matrix V_{Y_i} R² V_{Y_i}^⊤ and V_{Y_i} = (U^{Y_i})^⊤. B^{Y_i} is an N × |Y_i| matrix containing the columns of the N × N identity corresponding to the items in Y_i; B^{Y_i} simply serves to map the gradients with respect to V_{Y_i} into the proper positions in V. This formula allows us to compute the eigenvector derivatives in time O(nNk²), where again k = max_i |Y_i|. (See the supplement for the derivation of Equation (20) and its computational complexity.)
Equation (20) is only valid for the first gradient step, so in practice we do not bother to fully optimize
V in each M-step; we simply take a single gradient step on V . Ideally we would repeatedly evaluate
the M-step objective, Equation (13), with various step sizes to find the optimal one. However,
the M-step objective is intractable to evaluate exactly, as it is an expectation with respect to an
exponential-size distribution. In practice, we solve this issue by performing an E-step for each trial
step size. That is, we update q's distribution to match the updated V and λ that define p_K, and then
determine if the current step size is good by checking for improvement in the likelihood L.
There is also the issue of enforcing the non-convex constraint V^T V = I. We could project V to ensure this constraint, but, as previously discussed for eigenvalues, projection steps often lead to poor local optima. Thankfully, for the particular constraint associated with V, more sophisticated update techniques exist: the constraint V^T V = I corresponds to optimization over a Stiefel manifold, so the algorithm from [26, Page 326] can be employed. In practice, we simplify this algorithm by neglecting second-order information (the Hessian) and using the fact that the V in our application is full-rank. With these simplifications, the following multiplicative update is all that is needed:
"
> !#
?L
> ?L
?
V
,
(21)
V ? V exp ? V
?V
?V
where exp denotes the matrix exponential and η is the step size. Algorithm 2 summarizes the overall EM method. As previously mentioned, assuming T_1 iterations until convergence and an average of T_2 iterations to find a step size, its overall runtime is O(T_1 nNk^2 + T_1 T_2 N^3). The first term in this complexity comes from the eigenvalue updates, Equation (19), and the eigenvector derivative computation, Equation (20). The second term comes from repeatedly computing the Stiefel manifold update of V, Equation (21), during the step size search.
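A sketch of the update in Equation (21); `eta` plays the role of η, and the sign convention depends on whether one ascends or descends the objective. Because the bracketed matrix is skew-symmetric, its matrix exponential is orthogonal, so V^T V = I is preserved exactly:

```python
import numpy as np
from scipy.linalg import expm

def stiefel_update(V, grad, eta):
    """Multiplicative Stiefel-manifold step of Eq. (21), neglecting
    second-order information; `grad` is dL/dV evaluated at V."""
    A = V.T @ grad - grad.T @ V      # skew-symmetric generator
    return V @ expm(eta * A)         # expm of skew-symmetric is orthogonal
```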
4 Experiments
We test the proposed EM learning method (Algorithm 2) by comparing it to K-Ascent (KA, Algorithm 1); code and data for all experiments can be downloaded from https://code.google.com/p/em-for-dpps. Both methods require a starting marginal kernel K̂. Note that neither EM nor KA can deal well with starting from a kernel with too many zeros. For example, starting from a diagonal kernel, both gradients, Equations (5) and (20), will be diagonal, resulting in no modeling of diversity. Thus, the two initialization options that we explore have non-trivial off-diagonals. The first of these options is relatively naive, while the other incorporates statistics from the data.
For the first initialization type, we use a Wishart distribution with N degrees of freedom and an identity covariance matrix to draw L̂ ∼ W_N(N, I). The Wishart distribution is relatively unassuming: in terms of eigenvectors, it spreads its mass uniformly over all unitary matrices [27].
[Figure 1: Relative test log-likelihood differences, 100 · (EM − KA)/|KA|, per product category, using: (a) Wishart initialization in the full-data setting, and (b) moments-matching initialization in the data-poor setting.]
We make just one simple modification to its output to make it a better fit for practical data: we re-scale the resulting matrix by 1/N so that the corresponding DPP will place a non-trivial amount of probability mass on small sets. (The Wishart's mean is N I, so it tends to over-emphasize larger sets unless we re-scale.) We then convert L̂ to K̂ via Equation (1).
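A minimal sketch of this initializer. The conversion step assumes the standard DPP identity K = L(L + I)^{-1} as a stand-in for the paper's Equation (1), which is not reproduced here:

```python
import numpy as np

def wishart_init(N, rng=None):
    """Draw L_hat ~ W_N(N, I), re-scale by 1/N, and convert to a
    marginal kernel via K = L (L + I)^{-1} (assumed form of Eq. 1)."""
    rng = rng or np.random.default_rng()
    G = rng.standard_normal((N, N))
    L = (G @ G.T) / N                # Wishart draw, re-scaled by 1/N
    return L @ np.linalg.inv(L + np.eye(N))
```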
For the second initialization type, we employ a form of moment matching. Let m_i and m_ij represent the normalized frequencies of single items and pairs of items in the training data:
    m_i = (1/n) Σ_{ℓ=1}^n 1(i ∈ Y_ℓ),   m_ij = (1/n) Σ_{ℓ=1}^n 1(i ∈ Y_ℓ ∧ j ∈ Y_ℓ).   (22)
Recalling Equation (2), we attempt to match the first and second order moments by choosing K̂ as:
    K̂_ii = m_i,   K̂_ij = max( √( K̂_ii K̂_jj − m_ij ), 0 ).   (23)
To ensure a valid starting kernel, we then project K̂ by clipping its eigenvalues at 0 and 1.
4.1 Baby registry tests
Consider a product recommendation task, where the ground set comprises N products that can be
added to a particular category (e.g., toys or safety) in a baby registry. A very simple recommendation
system might suggest products that are popular with other consumers; however, this does not account
for negative interactions: if a consumer has already chosen a carseat, they most likely will not choose
an additional carseat, no matter how popular it is with other consumers. DPPs are ideal for capturing
such negative interactions. A learned DPP could be used to populate an initial, basic registry, as well
as to provide live updates of product recommendations as a consumer builds their registry.
To test our DPP learning algorithms, we collected a dataset consisting of 29,632 baby registries
from Amazon.com, filtering out those listing fewer than 5 or more than 100 products. Amazon
characterizes each product in a baby registry as belonging to one of 18 categories, such as "toys" and "safety". For each registry, we created sub-registries by splitting it according to these categories.
(A registry with 5 toy items and 10 safety items produces two sub-registries.) For each category, we
then filtered down to its top 100 most frequent items, and removed any product that did not occur
in at least 100 sub-registries. We discarded categories with N < 25 or fewer than 2N remaining
(non-empty) sub-registries for training. The resulting 13 categories have an average inventory size
of N = 71 products and an average number of sub-registries n = 8,585. We used 70% of the
data for training and 30% for testing. Note that categories such as "carseats" contain more diverse items than just their namesake; for instance, "carseats" also contains items such as seat back kick protectors and rear-facing baby view mirrors. See the supplement for more dataset details and for
protectors and rear-facing baby view mirrors. See the supplement for more dataset details and for
quartile numbers for all of the experiments.
Figure 1a shows the relative test log-likelihood differences of EM and KA when starting from a Wishart initialization. These numbers are the medians from 25 trials (draws from the Wishart). EM gains an average of 3.7%, but has a much greater advantage for some categories than for others.
[Figure 2: (a) A high-probability set of size k = 10 selected using an EM model for the "safety" category. (b) Runtime ratios (KA runtime / EM runtime) per category.]
Speculating that EM has more of an advantage when the off-diagonal components of K are truly important (that is, when products exhibit strong negative interactions), we created a matrix M for each category with the true data marginals from Equation (22) as its entries. We then checked the value of d = (1/N) ||M||_F / ||diag(M)||_2. This value correlates well with the relative gains for EM: the 4 categories for which EM has the largest gains (safety, furniture, carseats, and strollers) all exhibit d > 0.025, while categories such as feeding and gear have d < 0.012. Investigating further, we found that, as foreshadowed in Section 3.1, KA performs particularly poorly in the high-d setting because of its projection step: projection can result in KA learning a near-diagonal matrix.
If instead of the Wishart initialization we use the moments-matching initializer, this alleviates KA's projection problem, as it provides a starting point closer to the true kernel. With this initializer, KA and EM have comparable test log-likelihoods (average EM gain of 0.4%). However, the moments-matching initializer is not a perfect fix for the KA algorithm in all settings. For instance, consider a data-poor setting, where for each category we have only n = 2N training examples. In this case, even with the moments-matching initializer EM has a significant edge over KA, as shown in Figure 1b: EM gains an average of 4.5%, with a maximum gain of 16.5% for the safety category.
To give a concrete example of the advantages of EM training, Figure 2a shows a greedy approximation [28, Section 4] to the most-likely ten-item registry in the category "safety", according to a Wishart-initialized EM model. The corresponding KA selection differs from Figure 2a in that it replaces the lens filters and the head support with two additional baby monitors: "Motorola MBP36 Remote Wireless Video Baby Monitor" and "Summer Infant Baby Touch Digital Color Video Monitor". It seems unlikely that many consumers would select three different brands of video monitor.
Having established that EM is more robust than KA, we conclude with an analysis of runtimes. Figure 2b shows the ratio of KA's runtime to EM's for each category. As discussed earlier, EM is asymptotically faster than KA, and we see this borne out in practice even for the moderate values of N and n that occur in our registries dataset: on average, EM is 2.1 times faster than KA.
5 Conclusion
We have explored learning DPPs in a setting where the kernel K is not assumed to have fixed values or a restrictive parametric form. By exploiting K's eigendecomposition, we were able to develop a
novel EM learning algorithm. On a product recommendation task, we have shown EM to be faster
and more robust than the naive approach of maximizing likelihood by projected gradient. In other
applications for which modeling negative interactions between items is important, we anticipate that
EM will similarly have a significant advantage.
Acknowledgments
This work was supported in part by ONR Grant N00014-10-1-0746.
References
[1] A. Kulesza. Learning with Determinantal Point Processes. PhD thesis, University of Pennsylvania, 2012.
[2] A. Kulesza and B. Taskar. Learning Determinantal Point Processes. In Conference on Uncertainty in
Artificial Intelligence (UAI), 2011.
[3] A. Kulesza and B. Taskar. k-DPPs: Fixed-Size Determinantal Point Processes. In International Conference on Machine Learning (ICML), 2011.
[4] H. Lin and J. Bilmes. Learning Mixtures of Submodular Shells with Application to Document Summarization. In Conference on Uncertainty in Artificial Intelligence (UAI), 2012.
[5] A. Kulesza and B. Taskar. Determinantal Point Processes for Machine Learning. Foundations and Trends
in Machine Learning, 5(2-3), 2012.
[6] A. Krause, A. Singh, and C. Guestrin. Near-Optimal Sensor Placements in Gaussian Processes: Theory,
Efficient Algorithms, and Empirical Studies. Journal of Machine Learning Research (JMLR), 9:235–284,
2008.
[7] A. Krause and C. Guestrin. Near-Optimal Non-Myopic Value of Information in Graphical Models. In
Conference on Uncertainty in Artificial Intelligence (UAI), 2005.
[8] R. Affandi, E. Fox, R. Adams, and B. Taskar. Learning the Parameters of Determinantal Point Process
Kernels. In International Conference on Machine Learning (ICML), 2014.
[9] S. Dughmi, T. Roughgarden, and M. Sundararajan. Revenue Submodularity. In Electronic Commerce,
2009.
[10] O. Macchi. The Coincidence Approach to Stochastic Point Processes. Advances in Applied Probability,
7(1), 1975.
[11] J. Snoek, R. Zemel, and R. Adams. A Determinantal Point Process Latent Variable Model for Inhibition
in Neural Spiking Data. In NIPS, 2013.
[12] B. Kang. Fast Determinantal Point Process Sampling with Application to Clustering. In NIPS, 2013.
[13] R. Affandi, E. Fox, and B. Taskar. Approximate Inference in Continuous Determinantal Point Processes.
In NIPS, 2013.
[14] A. Shah and Z. Ghahramani. Determinantal Clustering Process ? A Nonparametric Bayesian Approach
to Kernel Based Semi-Supervised Clustering. In Conference on Uncertainty in Artificial Intelligence
(UAI), 2013.
[15] R. Affandi, A. Kulesza, E. Fox, and B. Taskar. Nyström Approximation for Large-Scale Determinantal
Processes. In Conference on Artificial Intelligence and Statistics (AIStats), 2013.
[16] J. Gillenwater, A. Kulesza, and B. Taskar. Near-Optimal MAP Inference for Determinantal Point Processes. In NIPS, 2012.
[17] J. Zou and R. Adams. Priors for Diversity in Generative Latent Variable Models. In NIPS, 2013.
[18] R. Affandi, A. Kulesza, and E. Fox. Markov Determinantal Point Processes. In Conference on Uncertainty
in Artificial Intelligence (UAI), 2012.
[19] J. Gillenwater, A. Kulesza, and B. Taskar. Discovering Diverse and Salient Threads in Document Collections. In Empirical Methods in Natural Language Processing (EMNLP), 2012.
[20] A. Kulesza and B. Taskar. Structured Determinantal Point Processes. In NIPS, 2010.
[21] J. Hough, M. Krishnapur, Y. Peres, and B. Virág. Determinantal Processes and Independence. Probability
Surveys, 3, 2006.
[22] K. Petersen and M. Pedersen. The Matrix Cookbook. Technical report, University of Denmark, 2012.
[23] E. Levitin and B. Polyak. Constrained Minimization Methods. USSR Computational Mathematics and
Mathematical Physics, 6(5):1–50, 1966.
[24] D. Henrion and J. Malick. Projection Methods for Conic Feasibility Problems. Optimization Methods
and Software, 26(1):23–46, 2011.
[25] R. Neal and G. Hinton. A New View of the EM Algorithm that Justifies Incremental, Sparse and Other
Variants. Learning in Graphical Models, 1998.
[26] A. Edelman, T. Arias, and S. Smith. The Geometry of Algorithms with Orthogonality Constraints. SIAM
Journal on Matrix Analysis and Applications (SIMAX), 1998.
[27] A. James. Distributions of Matrix Variates and Latent Roots Derived from Normal Samples. Annals of
Mathematical Statistics, 35(2):475–501, 1964.
[28] G. Nemhauser, L. Wolsey, and M. Fisher. An Analysis of Approximations for Maximizing Submodular
Set Functions I. Mathematical Programming, 14(1), 1978.
5,042 | 5,565 | Submodular Attribute Selection for Action
Recognition in Video
Zhuolin Jiang
Noah?s Ark Lab
Huawei Technologies
[email protected]
Jinging Zheng
UMIACS, University of Maryland
College Park, MD, USA
[email protected]
Rama Chellappa
UMIACS, University of Maryland
College Park, MD, USA
[email protected]
P. Jonathon Phillips
National Institute of Standards and Technology
Gaithersburg, MD, USA
[email protected]
Abstract
In real-world action recognition problems, low-level features cannot adequately
characterize the rich spatial-temporal structures in action videos. In this work,
we encode actions based on attributes that describes actions as high-level concepts e.g., jump forward or motion in the air. We base our analysis on two types
of action attributes. One type of action attributes is generated by humans. The
second type is data-driven attributes, which are learned from data using dictionary learning methods. Attribute-based representation may exhibit high variance
due to noisy and redundant attributes. We propose a discriminative and compact
attribute-based representation by selecting a subset of discriminative attributes
from a large attribute set. Three attribute selection criteria are proposed and formulated as a submodular optimization problem. A greedy optimization algorithm
is presented and guaranteed to be at least (1-1/e)-approximation to the optimum.
Experimental results on the Olympic Sports and UCF101 datasets demonstrate
that the proposed attribute-based representation can significantly boost the performance of action recognition algorithms and outperform most recently proposed
recognition approaches.
1 Introduction
Action recognition in real-world videos has many potential applications in multimedia retrieval,
video surveillance and human computer interaction. In order to accurately recognize human actions from videos, most existing approaches developed various discriminative low-level features,
including spatio-temporal interest point (STIP) based features [8, 15], shape and optical flow-based
features [19, 5], and trajectory-based representations [28, 33]. Because of large variations in viewpoints, complicated backgrounds, and people performing the actions differently, videos of an action
vary greatly. A result of this variability is that conventional low-level features are not able to characterize the rich spatio-temporal structures in real-world action videos. Inspired by recent progress
on object recognition [6, 14], multiple high-level semantic concepts called action attributes were
introduced in [20, 17] to describe the spatio-temporal evolution of the action, object shapes and human poses, and contextual scenes. Since these action attributes are relatively robust to changes in
viewpoints and scenes, they bridge the gap between low-level features and class labels. In this work,
we focus on improving action recognition performance of attribute-based representations.
Even though attribute-based representations appear effective for action recognition, they require humans to generate a list of attributes that may adequately describe a set of actions. From this list, humans then need to assign the action attributes to each class. Previous approaches [20, 17] simply used all the given attributes and ignored the differences in discriminative capability among attributes. This caused two major problems. First, a set of human-labeled attributes may not be able to
[Figure 1: Key frames from the actions "ApplyEyeMakeup" (a) and "ApplyLipStick" (b), and (c) the attribute set that the two actions share: Indoor, One_hand_visible, Stick_like, Sharp_like, One_arm_bent, Facing_front (all = Yes).]
represent and distinguish a set of action classes. This is because humans subjectively annotate action videos with arbitrary attributes. For example, consider the two classes "ApplyEyeMakeup" and "ApplyLipStick" in the UCF101 action dataset [30] shown in Figure 1. They have the same human-labeled attribute set and cannot be distinguished from one another. Second, some manually labeled
attributes may be noisy or redundant which leads to degradation in action recognition performance.
In addition, their inclusion also increases the feature extraction time. Thus, it would be beneficial to
use a smaller subset of attributes while achieving comparable or even improved performance.
To overcome the first problem, we propose another type of attributes that we call data-driven attributes. We show that data-driven attributes are complementary to human-labeled attributes.
Instead of using clustering-based algorithms to discover data-driven attributes as in [20], we propose a dictionary-based sparse representation method to discover a large data-driven attribute set.
Our learned attributes are more suited to represent all the input data points because our method
avoids the problem of hard assignment of data points to clusters. To address the attribute selection problem, we propose to select a compact and discriminative set of attributes from a large set
of attributes. Three attribute selection criteria are proposed and then combined to form a submodular objective function. Our method encourages the selected attributes to have strong and similar
discrimination capability for all pairs of actions. Furthermore, our method maximizes the sum of
maximum coverage that each pairwise class can obtain from the selected attributes.
2 Related Work
Attribute-based representation for action recognition: Recently, several attribute-based representations have been proposed for improving action recognition performance. Liu et al. [20] modeled
attributes as latent variables and searched for the best configuration of attributes for each action
using latent SVMs. However, the performance may drop drastically when some attributes are too
noisy or redundant. This is because pretrained attribute classifiers from these noisy attributes perform poorly. Li et al. [17] decomposed a video sequence into short-term segments and characterized
segments by the dynamics of their attributes. However, since attributes are defined over the entire
action video instead of short-term segments, different decomposition of video segments may obtain
different attribute dynamics.
Another line of work similar to attribute-based methods is based on learning different types of midlevel representations. These mid-level representations usually identify the occurrence of semantic
concepts of interest, such as scene types, actions and objects. Fathi et al. [7] proposed to construct
mid-level motion features from low-level optical flow features using AdaBoost. Wang et al. [35]
modeled a human action as a global root template and a constellation of several parts. Raptis et
al. [27] used trajectory clusters as candidates for the parts of an action and assembled these clusters
into an action class by graphical modeling. Jain et al. [10] presented a new mid-level representation
for videos based on discriminative spatio-temporal patches, which are automatically mined from
videos using an exemplar-based clustering approach.
Submodularity: Submodular functions are a class of set functions that have the property of diminishing returns [24]. Given a set E, a set function F : 2^E → R is submodular if F(A ∪ {v}) − F(A) ≥ F(B ∪ {v}) − F(B) holds for all A ⊆ B ⊆ E and v ∈ E \ B. The diminishing returns mean that the marginal value of the element v decreases if used in a later stage.
Recently, submodular functions have been widely exploited in various applications, such as sensor
placements [13], superpixel segmentation [22], document summarization [18], and feature selection [3, 23]. Liu et al. [23] presented a submodular feature selection method for acoustic score
spaces based on existing facility location and saturated coverage functions. Krause et al. [12] developed a submodular method for selecting dictionary columns from multiple candidates for sparse
representation. Iyer et al. [9] designed a new framework for both unconstrained and constrained
submodular function optimization. Streeter et al. [31] proposed an online algorithm for maximizing
submodular functions. Different from these approaches, we define a novel submodular objective
function for attribute selection. Although we only evaluate our approach for action recognition, it
can be applied to other recognition tasks that use attribute descriptions.
3 Submodular Attribute Selection
In this section, we first propose three attribute selection criteria. In order to satisfy these criteria,
we define a submodular function based on entropy rate of a random walk and a weighted maximum
coverage function. Then we introduce algorithms for the detection of human-labeled attributes and
extraction of data-driven attributes.
3.1 Attribute Selection Criteria
Assume that we have C classes and a large attribute set P = {a_1, a_2, ..., a_M} which contains M attributes. The set that includes all combinations of pairwise classes is represented by U = {u_1(1, 2), ..., u_l(i, j), ..., u_L(C − 1, C)}, where u_l(i, j), i < j, denotes the pairwise combination of classes i and j, l is the index of this combination in U, and L = C(C − 1)/2 is the total number of all possible pairwise classes. Here we propose to use the Fisher score to construct an attribute contribution matrix A ∈ R^{M×L}, where an entry A_{d,l} represents the discrimination capability of attribute a_d for differentiating the class pair (i, j) indexed by u_l(i, j).
Specifically, given the attribute a_d and class pair (i, j), let μ_k^d and σ_k^d be the mean and standard deviation of the k-th class and μ^d be the mean of samples from both classes i and j corresponding to the d-th attribute. The Fisher score of attribute a_d for differentiating the class pair (i, j) is computed as follows:
    A_{d,l(i,j)} = Σ_{k=i,j} n_k (μ_k^d − μ^d)^2 / Σ_{k=i,j} n_k (σ_k^d)^2,
where l is the index of the pairwise classes (i, j) in U, and n_k is the number of points from class k. Note that different methods can be used to measure the discrimination capability of a_d, such as mutual information and the T-test.
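A direct sketch of the contribution-matrix construction; `scores` holds attribute responses per video, and the double loop is written for clarity rather than speed:

```python
import numpy as np
from itertools import combinations

def contribution_matrix(scores, labels, C):
    """Fisher-score contribution matrix A (M x L), L = C(C-1)/2.

    scores : (#videos, M) array of attribute responses
    labels : (#videos,) array of class labels in {0, ..., C-1}
    """
    M = scores.shape[1]
    pairs = list(combinations(range(C), 2))
    A = np.zeros((M, len(pairs)))
    for l, (i, j) in enumerate(pairs):
        for d in range(M):
            both = scores[np.isin(labels, [i, j]), d]
            mu = both.mean()                      # mean over both classes
            num = den = 0.0
            for k in (i, j):
                x = scores[labels == k, d]
                num += len(x) * (x.mean() - mu) ** 2
                den += len(x) * x.var()           # n_k * sigma_k^2
            A[d, l] = num / den
    return A, pairs
```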
Given A, we can obtain a row vector r by summing up its elements from each column that are in
rows corresponding to selected attributes S. An example of vector r is shown in Figure 2a. We
would like to have r satisfy two selection criteria: (1) each entry of r should be as large as possible;
and (2) the variance of all entries of r should be small. The first criterion encourages S to provide as
much discrimination capability as possible for each pairwise classes. The second criterion makes S
have similar discrimination capability for each pairwise classes. These two criteria can be satisfied
by maximizing the entropy rate of a random walk on the proposed graphs. Meanwhile, since some
attributes may well differentiate the same collection of pairwise classes, it would be redundant to
select all these attributes. In other words, one combination of pairwise classes may be repeatedly
"covered" (differentiated) by multiple attributes. It is better to select other attributes which can differentiate "uncovered" combinations of pairwise classes. Therefore, we propose the third criterion:
the sum of maximum discrimination capability that each pairwise classes can obtain from the selected attributes should be maximized. We will model it as a weighted maximum coverage problem
and encourage S to have a maximum coverage of all pairwise classes.
3.2 Entropy Rate-based Attribute Selection
In order to achieve the first two criteria, we need to construct an undirected graph and maximize the
entropy rate of a random walk on this graph. We aim to obtain a subset S so that the attribute-based
representation has good discrimination power.
Graph Construction: We use G = (V, E) to denote an undirected graph where V is the vertex set and E is the edge set. The vertex v_i represents class i, and the edge e_{i,j} connecting classes i and j represents that classes i and j can be differentiated by the selected attribute subset S to some extent. The edge weight for e_{i,j} is defined as w_{i,j} = Σ_{d∈S} A_{d,l}, which represents the discrimination capability of S for differentiating class i from class j. The edge weights are symmetric, i.e., w_{i,j} = w_{j,i}. In addition, we add a self-loop e_{i,i} for each vertex v_i of G, with weight w_{i,i} = Σ_{d∈P\S} A_{d,l}. The total incident weight for each vertex is kept constant so that it produces a stationary distribution for the later proposed random walk on this graph. Note that the addition of these self-loops does not affect the selection of attributes, and the graph will change with the selected subset S. Figure 2 gives an example to illustrate the benefits of the entropy rate. A code sketch of this construction follows.
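A sketch of the graph construction. The paper fixes the total incident weight of every vertex; one consistent reading, implemented below, lets the self-loop absorb whatever weight the unselected attributes leave over:

```python
import numpy as np

def graph_weights(A, S, pairs, C):
    """Weighted graph of Section 3.2: w_ij = sum_{d in S} A[d, l] for the
    pair (i, j) with index l; self-loops keep each vertex's total incident
    weight constant (an assumed, but consistent, reading of w_ii)."""
    S = list(S)
    W = np.zeros((C, C))
    cap = np.zeros(C)                          # total incident weight per vertex
    for l, (i, j) in enumerate(pairs):
        w = A[S, l].sum() if S else 0.0
        W[i, j] = W[j, i] = w
        cap[i] += A[:, l].sum()
        cap[j] += A[:, l].sum()
    for i in range(C):
        W[i, i] = cap[i] - (W[i].sum() - W[i, i])   # self-loop absorbs the rest
    return W
```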
Subset   c1/c2   c1/c3   c1/c4   c2/c3   c2/c4   c3/c4
S1       1       1       1       1       1       1
S2       2       2       2       2       2       2
S3       2       1       3       3       1       2
(a) Vector r corresponding to different subsets.   [Graphs (b) S1, (c) S2, (d) S3 omitted.]
Figure 2: The summations of different rows in the contribution matrix corresponding to three different
selected subsets are provided in the left table and the corresponding undirected graphs are in the right
figure. We show the role of the entropy rate in selecting attributes which have large and similar discrimination
capability for each pair of classes. The circles with numbers denote the corresponding class vertices and the
numbers next to the edge denote the edge weights, which is a measure of the discrimination capability of
selected attribute subset. The self-loops are not displayed. The entropy rate of the graph with large edge
weights in (c) has a higher objective value than that of a graph with smaller edge weights in (b). The entropy
rate of graph with equal edge weights in (c) has a higher objective value than that of the graph with different
edge weights in (d).
Entropy Rate: Let X = {X_t | t ∈ T, X_t ∈ V} be a random walk on the graph G = (V, E) with nonnegative discrimination measure w. We use the random walk model from [2] with a transition probability defined as below:
    p_{i,j}(S) = w_{i,j} / w_i = (Σ_{d∈S} A_{d,l}) / w_i                         if i ≠ j,
    p_{i,i}(S) = 1 − (Σ_{k: k≠i} w_{i,k}) / w_i = (Σ_{d∈P\S} A_{d,l}) / w_i      if i = j,   (1)
of the vertex vi including the self-loop. The stationary distribution for this random walk is given by
PC
w1 w2
? = (?1 , ?2 , ..., ?C )T = ( w
, , ..., wwC0 ) where w0 = i=1 wi is the sum of the total weights
0 w0
incident on all vertices. For a stationary 1st-order Markov chain, the entropy rate which measures the
uncertainty of the stochastic process X is given by: H(X) = limt?? H(Xt |Xt?1 , Xt?2 , ..., X1 ) =
limt?? H(Xt |Xt?1 ) = H(X2 |X1 ). More details can be found in [2]. Consequently, the entropy
rate of the random walk X on our proposed graph G = (V, E) can be written as a set function:
X
X X
H(S) =
ui H(X2 |X1 = vi ) = ?
ui
pi,j (S)log(pi,j (S))
(2)
i
i
j
Intuitively, the maximization of the entropy rate will have two properties. First, it encourages the
maximization of pi,j (S) where i = 1, ..., C and i 6= j. This can make edge weights wi,j , i 6= j as
large as possible, so class i can be easily differentiated from other classes j (i.e., satisfying the first
criteria). Second, it makes all class vertices have transition probabilities similar to other connected
class vertices, so the discrimination capabilities of class i from other classes are very similar (i.e.,
satisfying the second criteria). Maximizing the entropy rate of the random walk on the proposed
graph can select a subset of attributes that are compact and discriminative for differentiating all
pairwise classes.
Proposition 3.1. The entropy rate of the random walk H : 2M ? R is a submodular function under
the proposed graph construction.
The observation that adding an attribute in a later stage has a lower increase in the uncertainty
establishes the submodularity of the entropy rate. This is because at a later stage, the increased edge
weights from the added attribute will be shared with attributes which contribute to the differentiation
of the same pair of classes. A detailed proof based on [22] is given in the supplementary section.
3.3 Weighted Maximum Coverage-based Attribute Selection
We consider a weighted maximum coverage function to achieve the last criteria that the selected
subset S should maximize the coverage of all combinations of pairwise classes. For each attribute
ad , we define a coverage set Ud ? U which covers all the combinations of pairwise classes that
attribute ad can differentiate. Meanwhile, for each element (combination) ul ? U that is covered by
Ud , we define a coverage weight w(Ud , ul ) = Ad,l . Given the universe set U and these coverage sets
Ud , d = 1, ..., M , the weighted maximum coverage problem is to select at most K coverage sets,
such that the sum of maximum coverage weight each element can obtain from S is maximized. The
weighted maximum coverage function is defined as follows:
X
    Q(S) = Σ_{u_l ∈ U} max_{d ∈ S} w(U_d, u_l) = Σ_{u_l ∈ U} max_{d ∈ S} A_{d,l},   s.t. N_S ≤ K,   (3)
a1
2
2
0
1
1
0
a2
1
1
0
0
0
0
a3
0
0
1
0
0
2
a4
0
0
0
2
2
0
1/2
1/3
2
2 1
a1
(a) Attribute contribution matrix A.
1/4
1
1 1
a2
2/3
1
2
a3
2/4
2
3/4
2
a4
(b) Coverage graph
Figure 3: An example of attribute contribution matrix is given in the left table and the corresponding
coverage graph is in the right figure. We show the role of weighted maximum coverage term in selecting
attributes which have large coverage weights. Two numbers separated by a backslash in the top circles denote
a pair of classes, while the bottom circles denote different attributes. The number next to one edge is the
coverage weight associated with the class pair when covered by the corresponding attribute. The edge which
provides maximum coverage weight for each class pair is in red color. We consider three attribute subsets
S1 = {a1 , a2 }, S2 = {a1 , a3 }, S3 = {a1 , a4 }. S2 has a higher objective value than S1 and S3 because the
sum of maximum coverage weights for all class pairs obtained using attributes from subset S2 is largest.
where N_S is the number of attributes in S. Note that the weighted maximum coverage problem reduces to the well-studied set-cover problem when all the coverage weights are equal to one.
Proposition 3.2. The weighted maximum coverage function Q : 2^M → R is a monotonically
increasing submodular function under the proposed set representation.
For the weighted maximum coverage term, monotonicity is obvious because the addition of any
attribute will increase the number of covered elements in U. Submodularity results from the observation that the coverage weights of increased covered elements will be less from adding an attribute
in a later stage because some elements may be already covered by previously selected attributes.
The proof is given in the supplementary section.
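The coverage term of Equation (3) reduces to a column-wise maximum over the rows of A indexed by S; a one-liner in NumPy:

```python
import numpy as np

def coverage_value(A, S):
    """Weighted maximum coverage term of Eq. (3):
    Q(S) = sum over class pairs l of max_{d in S} A[d, l]."""
    S = list(S)
    return float(A[S, :].max(axis=0).sum()) if S else 0.0
```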
3.4 Objective Function and Optimization
Combining the entropy rate term and the weighted maximum coverage term, the overall objective function for attribute selection is formulated as follows:
    max_S F(S) = H(S) + λ Q(S)   s.t. N_S ≤ K,   (4)
where λ controls the relative contribution between the entropy rate and the weighted maximum coverage
term. The objective function is submodular because linear combination of two submodular functions
with nonnegative coefficients preserves submodularity [24]. Direct maximization of a submodular
Algorithm 1 Submodular Attribute Selection
1: Input: G = (V, E), A, and λ
2: Output: S
3: Initialization: S ← ∅
4: for N_S < K and F(S ∪ {a_m}) − F(S) ≥ 0 do
5:    a_m = argmax_{a_m ∈ P \ S} F(S ∪ {a_m}) − F(S)
6:    S ← S ∪ {a_m}
7: end for
function is an NP-hard problem. However, a greedy algorithm from [24] gives a near-optimal solution with a (1 − 1/e)-approximation bound. The greedy algorithm starts from an empty attribute set S = ∅ and iteratively adds the one attribute that provides the largest gain for F at each iteration. The iteration stops when the maximum number of selected attributes is reached or F(S) decreases. Algorithm 1 presents the pseudo-code of our algorithm. A naive implementation of this algorithm has complexity O(M^2), because it needs to loop O(M) times to add a new attribute and scan through the whole attribute list in each loop. By exploiting the submodularity of the objective function, we use the lazy greedy approach presented in [16] to speed up the optimization process.
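A sketch of the lazy greedy procedure from [16] referenced above; `F` is any monotone submodular set function such as Equation (4), and the helper name is ours:

```python
import heapq

def lazy_greedy(F, ground_set, K):
    """Lazy greedy maximization of a monotone submodular F (cf. [16]).
    Stale marginal gains sit in a max-heap (stored negated) and are only
    re-evaluated at the top; submodularity says they can only shrink."""
    S, fS = [], F(set())
    heap = [(-(F({a}) - fS), a) for a in ground_set]   # (-gain, element)
    heapq.heapify(heap)
    while heap and len(S) < K:
        _, a = heapq.heappop(heap)
        gain = F(set(S) | {a}) - fS                    # refresh the stale bound
        if gain < 0:
            break                                      # F(S) would decrease
        if not heap or gain >= -heap[0][0]:
            S.append(a)                                # a is (near-)best: take it
            fS += gain
        else:
            heapq.heappush(heap, (-gain, a))           # still stale: push back
    return S
```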
3.5 Human-labeled Attribute and Data-driven Attribute Extraction
Action videos can be characterized by a collection of human-labeled attributes [20]. For example, the action "long-jump" in the Olympic Sports dataset [25] is associated with either the motion attributes (jump forward, motion in the air) or with the scene attributes (e.g., outdoor, track). Given an action video x, an attribute classifier f_a : x → {0, 1} predicts the confidence score of the presence of attribute a in the video. This classifier f_a is learned using the training samples of all action classes which have this attribute as positive and the rest as negative. Given a set of attribute classifiers S = {f_{a_i}(x)}_{i=1}^m, an action video x ∈ R^d is mapped to the semantic space O: h : R^d → O = [0, 1]^m, where h(x) = (h_1(x), ..., h_m(x))^T is an m-dimensional attribute score vector.
Previous works [21, 20] on data-driven attribute discovery used k-means or information theoretic
clustering algorithms to obtain the clusters as the learned attributes. In this paper, we propose to
discover a large initial set of data-driven attributes using a dictionary learning method. Specifically,
assume that we have a set of N videos in an n-dimensional feature space X = [x_1, ..., x_N], x_i ∈ R^n; then a data-driven dictionary is learned by solving the following problem:
    argmin_{D,Z} ||X − DZ||_2^2   s.t. ∀i, ||z_i||_0 ≤ T,   (5)
where D = [d_1, ..., d_K], d_i ∈ R^n is the learned attribute dictionary of size K, Z = [z_1, ..., z_N], z_i ∈ R^K
are the sparse codes of X, and T specifies the sparsity that each video has fewer than T items in its
decomposition. Compared to k-means clustering, this dictionary-based learning scheme avoids the
hard assignment of cluster centers to data points. Meanwhile, it doesn't require the estimation of the
probability density function of clusters in information theoretic clustering. Note that our attribute
selection framework is very general and different initial attribute extraction methods can be used
here.
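A sketch of the data-driven attribute extraction. We use scikit-learn's dictionary learner with OMP-based sparse coding as a stand-in for the K-SVD solver [1] that the paper uses; K and T map to Equation (5):

```python
from sklearn.decomposition import MiniBatchDictionaryLearning

def data_driven_attributes(X, K, T):
    """Learn a size-K dictionary with sparsity T (Eq. 5) and return the
    sparse codes used as data-driven attribute descriptors.

    X : (n, N) matrix whose columns are video feature vectors.
    """
    learner = MiniBatchDictionaryLearning(
        n_components=K,
        transform_algorithm="omp",
        transform_n_nonzero_coefs=T,
        random_state=0,
    )
    Z = learner.fit_transform(X.T)     # rows = videos, columns = atoms
    return learner.components_, Z      # dictionary D (as rows) and codes Z
```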
4 Experiments
In this section, we validate our method for action recognition on two public datasets: the Olympic Sports dataset [25] and the UCF101 dataset [30]. Specifically, we consider three sets of attributes: the human-labeled attribute set (HLA set), the data-driven attribute set (DDA set), and the set mixing both types of attributes (Mixed set). To demonstrate the effectiveness of our selection framework, we compare the result using the selected subset with the result based on the initial set.
function (FL) and saturated coverageP
function (SC) respectively in [23].
P These objective functions
are defined as follows: Ff a (S) =
=
i?V maxj?S wi,j , Fsa (S)
i?V min{Ci (S), ?Ci (V)}
P
where wi,j is a similarity between attribute i and j, Ci (S) = j?S wi,j measures the degree that
attribute i is ?covered? by S and ? is a hyperparameter that determines a global saturation threshold.
For the two approaches compared against, we consider an undirected k-nearest neighbor graph and
use a Gaussian kernel to compute pairwise similarities wi,j = exp(??d2i,j ) where di,j is the distance
between attribute i and j, ? = (2hd2i,j i)?1 and h?i denotes expectation over all pairwise distances.
Finally, we compare the performance of attribute-based representation with several state-of-the-art
approaches on the two datasets.
4.1 Olympic Sports Dataset
The Olympic Sports dataset contains 783 YouTube video clips of 16 sports activities. We followed
the protocol in [20] to extract STIP features [4]. Each action video is finally represented by a 2000dimensional histogram. We use 40 human-labeled attributes provided by [20]. Three attribute-based
representations are constructed as follows: (1) HLA set: For each human-labeled attribute, we train
a binary SVM with a histogram intersection kernel. We concatenate confidence scores from all
these attribute classifiers into a 40-dimensional vector to represent this video. (2) DDA set: For
data-driven attributes, we learn a dictionary of size 457 from all video features using KSVD [1]
and each video is represented by a 457-dimensional sparse coefficient vector. (3) Mixed set: This
attribute set is obtained by combining HLA set and DDA set.
We compare the performance of features based on selected attributes with those based on the initial
attribute set. For all the different attribute-based features, we use an SVM with Gaussian kernel for
classification. Table 1 shows classification accuracies of different attribute-based representations.
Compared with the initial attribute set, the selected attributes have greatly improved the classification accuracy, which demonstrates the effectiveness of our method for selecting a subset of discriminative attributes. Moreover, features based on the Mixed set outperform features based on either
HLA set or DDA set. This shows that data-driven attributes are complementary to human-labeled
attributes and together they offer a better description of actions. Table 2 shows the per-category average precision (AP) and mean AP of different approaches. It can be seen that our method achieves
6
dataset    HLA All   HLA Subset   DDA All   DDA Subset   Mixed All   Mixed Subset
Olympic    61.8      64.1         49.0      53.8         63.1        66.7
UCF101     81.7      83.4         79.0      81.6         82.3        85.2
Table 1: Recognition results of different attribute-based representations. "All" denotes the original attribute sets and "Subset" denotes the selected subsets.
[Figure 4 plots accuracy vs. attribute subset size for our method, FL [23], and SC [23] on (a) the HLA set, (b) the DDA set, and (c) the Mixed set, plus (d) the effect of λ in the Mixed set (entropy rate only, maximum coverage only, and λ = 0.01, 0.1, 1).]
Figure 4: Recognition results by different submodular methods on the Olympic Sports dataset.
Activity         [15]   [25]   [32]   [20]   [17]   HLA    DDA    Mixed
high-jump        52.4   68.9   18.4   93.2   82.2   80.4   66.4   83.1
long-jump        66.8   74.8   81.8   82.6   92.5   88.8   85.3   93.9
triple-jump      36.1   52.3   16.1   48.3   52.1   61.4   60.7   73.6
pole-vault       47.8   82.0   84.9   74.4   79.4   55.1   45.5   56.8
gym. vault       88.6   86.1   85.7   86.7   83.4   98.2   84.2   98.4
short-put        56.2   62.1   43.3   76.2   70.3   63.7   39.5   72.2
snatch           41.8   69.2   88.6   71.6   72.7   74.5   34.2   79.8
clean-jerk       83.2   84.1   78.2   79.4   85.1   73.8   57.9   82.6
javelin throw    61.1   74.6   79.5   62.1   87.5   36.0   26.4   36.5
hammer throw     65.1   77.5   70.5   65.5   74.0   76.9   77.2   80.4
discuss throw    37.4   58.5   48.9   68.9   57.0   53.9   45.6   56.0
diving-plat.     91.5   87.2   93.7   77.5   86.0   94.8   55.3   99.2
diving-sp. bd.   80.7   77.2   79.3   65.2   78.3   79.7   59.7   90.4
bask. layup      75.8   77.9   85.5   66.7   78.1   88.7   89.7   90.7
bowling          66.7   72.7   64.3   72.0   52.5   43.0   55.3   55.4
tennis-serve     39.6   49.1   49.6   55.2   38.7   78.8   35.3   83.7
mean-AP          62.0   72.1   66.8   71.6   73.2   72.1   57.2   77.0
Table 2: Average precisions for activity recognition on the Olympic Sporst dataset.
the best performance. This illustrates the benefits of selecting discriminative attributes and removing
noisy and redundant attributes. Note that our method outperforms the method that is most similar to
ours [20] which uses complex latent SVMs to combine low-level features, human-labeled attributes
and data-driven attributes. Moreover, compared with other dynamic classifiers [25, 17] which account for the dynamics of bag-of-features or action attributes, our method still obtains comparable
results. This is because the provided human-labeled attributes are very noisy and they can greatly
affect the training of latent SVM and representation of the attribute dynamics.
Figures 4a–4c show the classification accuracies of attribute subsets selected by different submodular
selection methods. It can be seen that our method outperforms the other two submodular selection
methods for the three different attribute sets. This is because our method prefers attributes with large
and similar discrimination capability for differentiating pairwise classes, while the other two methods prefer attributes with large similarity to other attributes (i.e. representative), without explicitly
considering the discrimination capabilities of selected attributes. Figure 4d shows the performance
curves for a range of λ. We observe that the combination of the entropy rate term and the maximum coverage term obtains a higher classification accuracy than when only one of them is used. In addition, our approach is insensitive to the selection of λ. Hence we use λ = 0.1 throughout the experiments.
4.2 UCF101 Dataset
UCF101 dataset contains over 10,000 video clips from 101 different human action categories. We
compute the improved version of dense trajectories in [34] and extract three types of descriptors: histogram of oriented gradients (HOG), histogram of optical flow (HOF), and motion boundary histogram (MBH).
splits   [34]    [36]    [37]    [11]    [29]    HLA     DDA     Mixed
1        83.03   83.11   79.41   65.22   63.41   82.45   80.35   84.19
2        84.22   84.60   81.25   65.39   65.37   83.27   82.16   85.51
3        84.80   84.23   82.03   67.24   64.12   84.60   82.42   86.30
Avg      84.02   83.98   80.90   65.95   64.30   83.44   81.64   85.24
Table 3: Recognition results of different approaches on UCF101 dataset.
[Figure 5 plots accuracy vs. attribute subset size for our method, FL [23], and SC [23] on (a) the HLA set, (b) the DDA set, and (c) the Mixed set.]
Figure 5: Recognition results by different submodular methods on UCF101 dataset.
We use Fisher vector encoding [26] and obtain a 101,376-dimensional histogram to
represent each action video. Three different attribute sets and corresponding attribute-based representations are constructed as follows: (1) HLA set: Due to the high dimensionality of features
and large number of samples, the linear SVM is trained for the detection of each human-labeled attribute. We concatenate confidence scores from all these attribute classifiers into a 115-dimensional
vector to represent a video. (2) DDA set: For data-driven attributes, we first apply PCA to reduce
the dimension of histogram descriptors to be 3300 and then learn a dictionary of size 3030. The
features based on data-driven attributes are 3030-dimensional sparse coefficient vectors. (3) Mixed
set: HLA set plus DDA set.
Following the training and testing dataset partitions proposed in [30], we train a linear SVM and
report classification accuracies of different attribute-based representations in Table 1. The selected
attribute subset outperforms the initial attribute set again which demonstrates the effectiveness of
our proposed attribute selection method. Figure 5 shows the results of attribute subsets selected
by different submodular selection methods. Note that this dataset is highly challenging because
the training and test videos of the same action have different backgrounds and actors. We can see that our method still substantially outperforms the other two submodular methods. This is because some redundant attributes dominated the selection process, and the attributes selected by the competing approaches had very unbalanced discrimination capability across classes. However, the
attributes selected by our method have strong and similar discrimination capability for each class.
Table 3 presents the classification accuracies of several state-of-the-art approaches on this dataset.
Our method achieves comparable results to the best result 85.9% from [34] which uses complex
spatio-temporal pyramids to embed structure information in features. Note that our method also
outperforms other methods which make use of complicated and advanced feature extraction and
encoding techniques.
5 Conclusion
We exploited human-labeled attributes and data-driven attributes for improving the performance of
action recognition algorithms. We first presented three attribute selection criteria for the selection of
discriminative and compact attributes. Then we formulated the selection procedure as one of optimizing a submodular function based on the entropy rate of a random walk and weighted maximum
coverage function. Our selected attributes not only have strong and similar discrimination capability
for all pairwise classes, but also maximize the sum of largest discrimination capability that each
pairwise classes can obtain from the selected attributes. Experimental results on two challenging datasets show that the proposed method significantly outperforms many state-of-the-art approaches.
6 Acknowledgements
The identification of any commercial product or trade name does not imply endorsement or recommendation by NIST. This research was partially supported by a MURI from the Office of Naval
research under the Grant 1141221258513.
References
[1] M. Aharon, M. Elad, and A. Bruckstein. KSVD: An algorithm for designing overcomplete dictionaries
for sparse representation. In IEEE Transactions on Signal Processing, 2006.
[2] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley-Interscience, 2006.
[3] A. Das, A. Dasgupta, and R. Kumar. Selecting diverse features via spectral regularization. In NIPS, 2012.
[4] P. Dollar, V. Rabaud, G. Cottrell, and S. Belongie. Behavior recognition via sparse spatio-temporal
features. In VS-PETS, 2005.
[5] A. A. Efros, A. C. Berg, E. C. Berg, G. Mori, and J. Malik. Recognizing action at a distance. In ICCV,
2003.
[6] A. Farhadi, I. Endres, D. Hoiem, and D. Forsyth. Describing objects by their attributes. In CVPR, 2009.
[7] A. Fathi and G. Mori. Action recognition by learning mid-level motion features. In CVPR, 2008.
[8] L. Gorelick, M. Blank, E. Shechtman, M. Irani, and R. Basri. Actions as space-time shapes. In ICCV,
2005.
[9] R. Iyer, S. Jegelka, and J. Bilmes. Fast semidifferential-based submodular function optimization. In
ICML, 2013.
[10] A. Jain, A. Gupta, M. Rodriguez, and L. Davis. Representing videos using mid-level discriminative
patches. In CVPR, 2013.
[11] Z. Jiang, Z. Lin, and L. S. Davis. Label consistent K-SVD: Learning a discriminative dictionary for
recognition. In PAMI, 2013.
[12] A. Krause and V. Cevher. Submodular dictionary selection for sparse representation. In ICML, 2010.
[13] A. Krause, A. Singh, C. Guestrin, and C. Williams. Near-optimal sensor placements in gaussian processes.
In ICML, 2005.
[14] C. H. Lampert, H. Nickisch, and S. Harmeling. Learning to detect unseen object classes by between-class
attribute transfer. In CVPR, 2009.
[15] I. Laptev and T. Lindeberg. Space-time interest points. In ICCV, 2003.
[16] J. Leskovec, A. Krause, C. Guestrin, C. Faloutsos, J. VanBriesen, and N. Glance. Cost-effective outbreak
detection in networks. In KDD, 2007.
[17] W. Li and N. Vasconcelos. Recognizing activities by attribute dynamics. In NIPS, 2012.
[18] H. Lin and J. Bilmes. A class of submodular functions for document summarization. In Proceedings of
ACL, 2011.
[19] Z. Lin, Z. Jiang, and L. S. Davis. Recognizing actions by shape-motion prototype trees. In CVPR, 2009.
[20] J. Liu, B. Kuipers, and S. Savarese. Recognizing human actions by attributes. In CVPR, 2011.
[21] J. Liu, Y. Yang, and M. Shah. Learning semantic visual vocabularies using diffusion distance. In CVPR,
2009.
[22] M.-Y. Liu, O. Tuzel, S. Ramalingam, and R. Chellappa. Entropy-rate clustering: Cluster analysis via
maximizing a submodular function subject to a matroid constraint. In PAMI, 2014.
[23] Y. Liu, K. Wei, K. Kirchhoff, Y. Song, and J. Bilmes. Submodular feature selection for high-dimensional
acoustic score spaces. In ICASSP, 2013.
[24] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing submodular set functions - I. Mathematical Programming, 1978.
[25] J. C. Niebles, C.-W. Chen, and L. Fei-Fei. Modeling temporal structure of decomposable motion segments
for activity classification. In ECCV, 2010.
[26] F. Perronnin, J. Sánchez, and T. Mensink. Improving the Fisher kernel for large-scale image classification.
In ECCV, 2010.
[27] M. Raptis, I. Kokkinos, and S. Soatto. Discovering discriminative action parts from mid-level video
representations. In CVPR, 2012.
[28] M. Raptis and S. Soatto. Tracklet descriptors for action modeling and video analysis. In ECCV, 2010.
[29] S. Sadanand and J. Corso. Action bank: A high-level representation of activity in video. In CVPR, 2012.
[30] K. Soomro, A. R. Zamir, and M. Shah. UCF101: A dataset of 101 human action classes from videos in
the wild. In CRCV-TR-12-01, 2012.
[31] M. Streeter and D. Golovin. An online algorithm for maximizing submodular functions. In NIPS, 2008.
[32] K. D. Tang, F.-F. Li, and D. Koller. Learning latent temporal structure for complex event detection. In
CVPR, 2012.
[33] H. Wang, A. Kläser, C. Schmid, and C.-L. Liu. Dense trajectories and motion boundary descriptors for
action recognition. International Journal of Computer Vision, 2013.
[34] H. Wang and C. Schmid. Action recognition with improved trajectories. In ICCV, 2013.
[35] Y. Wang and G. Mori. Max-margin hidden conditional random fields for human action recognition. In
CVPR, 2009.
[36] J. Wu, Y. Zhang, and W. Lin. Towards good practices for action video encoding. In ICCV, 2013.
[37] J. Zhu, B. Wang, X. Yang, W. Zhang, and Z. Tu. Action recognition with actons. In ICCV, 2013.
Scale Adaptive Blind Deblurring
Haichao Zhang
Duke University, NC
hczhang1@duke.edu

Jianchao Yang
Adobe Research, CA
jyang@adobe.com
Abstract
The presence of noise and small scale structures usually leads, empirically, to large kernel estimation errors in blind image deblurring, if not a total failure. We
present a scale space perspective on blind deblurring algorithms, and introduce a
cascaded scale space formulation for blind deblurring. This new formulation suggests a natural approach robust to noise and small scale structures through tying
the estimation across multiple scales and balancing the contributions of different
scales automatically by learning from data. The proposed formulation also allows
to handle non-uniform blur with a straightforward extension. Experiments are
conducted on both a benchmark dataset and real-world images to validate the effectiveness of the proposed method. One surprising finding based on our approach is
that blur kernel estimation is not necessarily best at the finest scale.
1 Introduction
Blind deconvolution is an important inverse problem that has gained increasing attention from various
fields, such as neural signal analysis [3, 10] and computational imaging [6, 8]. Although some results obtained in this paper are applicable to more general bilinear estimation problems, we will use
blind image deblurring as an example. Image blur is an undesirable degradation that often accompanies the image formation process due to factors such as camera shake. Blind image deblurring
aims to recover a sharp image from only one blurry observed image. While significant progress has
been made recently [6, 16, 14, 2, 22, 11], most of the existing blind deblurring methods do not work
well in the presence of noise, leading to inaccurate blur kernel estimation, which is a problem that
has been observed in several recent works [17, 26]. Figure 1 shows an example where the kernel
recovery quality of previous methods degrades significantly even though only 5% of Gaussian noise
is added to the blurry input. Moreover, it has been empirically observed that even for noise-free images, image structures with scale smaller than that of the blur kernel are actually harmful for kernel
estimation [22]. Therefore, various structure selection techniques, such as hard/hysteresis gradient
thresholding [2, 16], selective edge map [22], and image decomposition [24] are incorporated into
kernel estimation.
In this paper, we propose a novel formulation for blind deblurring, which explains the conventional
empirical coarse-to-fine estimation scheme and reveals some novel perspectives. Our new formulation not only offers the ability to encompass the conventional multi-scale estimation scheme, but
also offers the ability to achieve robust blind deblurring in a simple but principled way. Our model
analysis leads to several interesting and perhaps surprising observations: (i) Blur kernel estimation
is not necessarily best at the finest image scale, and (ii) there is no universal single image scale that can be defined a priori to maximize the performance of blind deblurring.
The remainder of the paper is structured as follows. In Section 2, we conduct an analysis to motivate
our proposed scale-adaptive blind deblurring approach. Section 3 presents the proposed approach,
including a generalization to noise-robust kernel estimation as well as non-uniform blur estimation.
We discuss the relationship of the proposed method to several previous methods in Section 4.
[Figure 1 panels: (a) blurry & noisy input; deblurred results using kernels estimated by (b) Levin et al. [13], (c) Zhang et al. [25], (d) Zhong et al. [26], and (e) the proposed method]
Figure 1: Sensitivity of blind deblurring to image noise. Random Gaussian noise (5%) is added
to the observed blurry image before kernel estimation. The deblurred images are obtained with
the corresponding estimated blur kernels and the noise-free blurry image to capitalize the kernel
estimation accuracy.
Experiments are carried out in Section 5, and the results are compared with those of the state-of-the-art
methods in the literature. Finally, we conclude the paper in Section 6.
2 Motivational Analysis
For uniform blur, the blurry image can be modeled as follows:

$$y = k \ast x + n, \qquad (1)$$

where $\ast$ denotes 2D convolution,¹ x is the unknown sharp image, y is the observed blurry image,
k is the unknown blur kernel (a.k.a., point spread function), and n is a zero-mean Gaussian noise
term [6]. As mentioned above, most of the blind deblurring methods are sensitive to image noise
and small scale structures [17, 26, 22]. Although these effects have been empirically observed [2,
22, 24, 17], we provide a complementary analysis in the following, which motivates our proposed
approach later. Our analysis is based on the following result:
Theorem 1 (Point Source Recovery [1]) For a signal x containing point sources at different locations, if the minimum distance between sources is at least $2/f_c$, where $f_c$ denotes the cut-off frequency of the Gaussian kernel k, then x can be recovered exactly given k and the observed signal
y in the noiseless case.
Although Theorem 1 is stated in the noiseless and non-blind case with a parametric Gaussian kernel,
it is still enlightening for analyzing the general blind deblurring case we are interested in. As sparsity
of the image is typically exploited in the image derivative domain for blind deblurring, Theorem 1
implies that large image structures whose gradients are distributed far from each other are likely
to be recovered more accurately, which, in turn, benefits the kernel estimation. On the contrary,
small image structures with gradients distributed near each other are likely to have larger recovery
errors, and thus are harmful for kernel estimation. We refer to these small image structures as small scale structures in this paper.
Apart from the above recoverability analysis, Theorem 1 also suggests a straightforward approach
to deal with noise and small scale structures by performing blur kernel estimation after smoothing
the noisy (and blurry) image y with a low-pass filter $f_p$ with a proper cut-off frequency $f_c$:

$$y_p = f_p \ast y \;\Rightarrow\; y_p = f_p \ast k \ast x + f_p \ast n \;\Rightarrow\; y_p = k_p \ast x + n_p, \qquad (2)$$

where $k_p \triangleq f_p \ast k$ and $n_p \triangleq f_p \ast n$. As $f_p$ is a low-pass filter, the noise level of $y_p$ is reduced.
Also, as the small scale structures correspond to signed spikes with small separation distance in
the derivative domain, applying local averaging will mostly cancel them out [22]; therefore, noise and small scale structures can be effectively suppressed. However, applying the low-pass filter will also smooth the large image structures besides noise, and as a result, it will alter the
profile of the edges. As the salient large scale edge structures are the crucial information for blur
kernel estimation, the low-pass filtering may lead to inaccurate kernel estimation. This is the inherent
limitation of linear filtering for blind deblurring. To achieve noise reduction while retaining the latent
edge structures, one may resort to non-linear filtering schemes, such as anisotropic diffusion [20],
Bilateral filtering [19], sparse regression [5]. These approaches typically assume the absence of
motion blur, and thus can cause over-sharpening of the edge structures and over-smoothing of image
details when blur is present [17], resulting in a filtered image that is no longer linear with respect to
the latent sharp image, making accurate kernel estimation even more difficult.
¹ We also overload $\ast$ to denote the 2D convolution followed by lexicographic ordering based on the context.
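As a concrete illustration of (1) and (2), the sketch below blurs a sparse 1D signal, adds noise, and then applies a Gaussian low-pass filter f_p; since convolution commutes, the filtered observation equals k_p ∗ x plus the attenuated noise n_p. The signal, kernel, and noise level are arbitrary choices for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Sparse signal in the derivative domain: signed spikes (cf. Theorem 1).
x = np.zeros(256)
x[60], x[61], x[180] = 1.0, -1.0, 1.0
k = np.exp(-0.5 * (np.arange(-7, 8) / 2.0) ** 2)
k /= k.sum()  # blur kernel

y_clean = np.convolve(x, k, mode="same")
n = 0.05 * rng.standard_normal(x.size)
y = y_clean + n                          # observation model (1): y = k * x + n

# Pre-filtering (2): y_p = f_p * y = k_p * x + n_p, with k_p = f_p * k.
y_p = gaussian_filter(y, sigma=2.0)
n_p = gaussian_filter(n, sigma=2.0)

print("noise std before filtering:", n.std())
print("noise std after  filtering:", n_p.std())  # substantially reduced
```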
[Figure 2 panels: rows 'True' and 'Recovered' across four scale levels (Scale 4, coarsest, to Scale 1, finest), showing the signal x, the blur kernel k, and the scale filter f_p at each scale]
Figure 2: Multi-Scale Blind Sparse Recovery. The signal structures of different scales will be
recovered at different scales. Large scale structures are recovered first and small structures are
recovered later. Top: original signal and blur kernel. Bottom: the recovered signal and blur kernel progressively across different scales (scale-4 to scale-1 represent the coarsest to the finest (original) scale). The blur kernel at the i-th scale is initialized with the solution from the (i-1)-th scale.
3 The Proposed Approach
To facilitate subsequent analysis, we first introduce the definition of scale space [15, 4]:
Definition 1 For an image x, its scale-space representation corresponding to a Gaussian filter $G_s$ is defined by the convolution $G_s \ast x$, where the variance s is referred to as the scale parameter.
Without loss of clarity, we also refer the different scale levels as different scale spaces in the sequel.
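As a quick illustration of Definition 1, a scale-space family {G_s ∗ x} can be generated with standard Gaussian filtering; the test image and the set of scale parameters below are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

x = np.random.default_rng(1).random((64, 64))  # stand-in image
scales = [0.5, 1.0, 2.0, 4.0]                  # scale parameters s
# gaussian_filter expects a standard deviation, so sigma = sqrt(s) when s
# denotes the variance of G_s as in Definition 1.
scale_space = {s: gaussian_filter(x, sigma=np.sqrt(s)) for s in scales}
```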
Natural images have a multi-scale property, meaning that different scale levels reveal different scales
of image structures. According to Theorem 1, different scale spaces may play different roles for
kernel estimation, due to the different recoverability of the signal components in the corresponding
scale spaces. We propose a new framework for blind deblurring by introducing a variable scale filter,
which defines the scale space in which the blind estimation process operates. With the scale filter, it is straightforward to come up with a blur estimation procedure similar to the conventional coarse-to-fine estimation by constructing an image pyramid. However, we operate deblurring in a space with
the same spatial resolution as the original image rather than a downscaled space as conventionally
done. Therefore, it avoids the additional estimation error caused by interpolation between spatial
scales in the pyramid. To mitigate the problem of structure smoothing, we incorporate the knowledge
about the filter into the deblurring model, which is different from the way of using filtering simply as
a pre-processing step. More importantly, we can formulate the deblurring problem in multiple scale
spaces in this way, and learn the contribution of each scale space adaptively for each input image.
3.1 Scale-Space Blind Deblurring Model
Our task is to recover k and x from the filtered observation $y_p$, obtained via (2) with a known scale filter $f_p$. The model is derived in the derivative domain, and we use $x \in \mathbb{R}^m$ and $y_p \in \mathbb{R}^n$ to denote the lexicographically ordered sharp and (filtered) blurry image derivatives, respectively.² The final
deblurred image is recovered via a non-blind deblurring step with the estimated blur kernel [26].
From the modified observation model (2), we can obtain the following likelihood:

$$p(y_p \mid x, k, \lambda) \propto \exp\!\left(-\frac{\|y_p - k_p \ast x\|_2^2}{2\lambda}\right) = \exp\!\left(-\frac{\|f_p \ast y - f_p \ast k \ast x\|_2^2}{2\lambda}\right), \qquad (3)$$
where $\lambda$ is the variance of the Gaussian noise. Maximum likelihood estimation using (3) is ill-posed, and further regularization over the unknowns is required. We use a parametrized Gaussian prior for x, $p(x) = \prod_i p(x_i) \triangleq \prod_i \mathcal{N}(x_i; 0, \gamma_i)$, where the unknown scale variables $\gamma = [\gamma_1, \gamma_2, \cdots]$ are closely related to the sparsity of x and will be estimated jointly with the other variables. Rather than computing the Maximum A Posteriori (MAP) solution, which typically requires empirical tricks to achieve success [16, 2], we use type-II maximum likelihood estimation following [13, 21, 25], by marginalizing over the latent image and maximizing over the other unknowns:

$$\max_{\gamma, k, \lambda \geq 0} \int p(y_p \mid x, k, \lambda)\, p(x)\, dx \;\Longleftrightarrow\; \min_{\gamma, k, \lambda \geq 0} \; y_p^T \Sigma_p^{-1} y_p + \log|\Sigma_p|, \qquad (4)$$
² The derivative filters used in this work are $\{[-1, 1], [-1, 1]^T\}$.
where $\Sigma_p \triangleq \lambda I + H_p \Gamma H_p^T$, $H_p$ is the convolution matrix of $k_p$, and $\Gamma \triangleq \mathrm{diag}[\gamma]$. Using standard linear algebra techniques together with an upper bound over $\Sigma_p$,³ we can reform (4) as follows [21]:

$$\min_{\gamma, k \geq 0, x} \; \frac{1}{\lambda}\|f_p \ast y - f_p \ast k \ast x\|_2^2 + r_p(x, k, \lambda) + (n - m)\log\lambda, \qquad (5)$$

$$\text{with } \; r_p(x, k, \lambda) \triangleq \min_{\gamma \geq 0} \sum_i \left[\frac{x_i^2}{\gamma_i} + \log\!\left(\lambda + \gamma_i \|k_p\|_2^2\right)\right],$$
which now resembles a typical regularized-regression formulation for blind deblurring when eliminating $f_p$. The proposed objective function has one interesting property, as stated in the following.
Theorem 2 (Scale Space Blind Deblurring) Taking $f_p$ as a Gaussian filter, solving (5) essentially achieves estimation of x and k in the scale space defined by $f_p$, given y in the original space.
In essence, Theorem 2 reveals the equivalence between performing blind deblurring on y directly while constraining x and k to a certain scale space, and solving the proposed model (5) with the aid of the additional filter $f_p$. This places the proposed model (5) on a sound theoretical footing.
Cascaded Scale-Space Blind Deblurring. If the blur kernel k has a clear cut-off frequency and
the target signal contains structures at distinct scales, then we can suppress the structures with scale
smaller than k using a properly designed scale filter $f_p$ according to Theorem 1, and then solve (5) for kernel estimation. However, in practice, the blur kernels are typically non-parametric and of complex form, and therefore do not have a clear cut-off frequency. Moreover, natural images have a multi-scale property, meaning different scale spaces reveal different image structures. All these facts suggest that it is not easy to select a fixed scale filter $f_p$ a priori, and they call for a variable scale filter.
Nevertheless, based on the basic point that large scale structures are more advantageous than small
scale structures for kernel estimation, a natural idea is to perform (5) separately at different scales,
and pick the best estimation as the output. While this is an appealing idea, it is not applicable in
practice due to the non-availability of the ground-truth, which is required for evaluating the estimation quality. A more practical approach is to perform (5) in a cascaded way, starting the estimation
from a large scale and then reducing the scale for the next cascade. The kernel estimation from
the previous scale is used as the starting point for the next one. With this scheme, the blur kernel
is refined along with the resolution of the scale space, and may become accurate enough before
reaching the finest resolution level, as shown in Figure 2 for a 1D example. The latent sparse signal in this example contains 4 point sources, with the minimum separation distance of 2, which is
smaller than the support of the blur kernel. It is observed that some large elements of the blur kernel
are recovered first and then the smaller ones appear later at a smaller scale. It can also be noticed
that the kernel estimation is already fairly accurate before reaching the finest scale (i.e., the original
pixel-level representation). In this case, the final estimation at the last scale is fairly stable given the
initialization from the last scale. However, performing blind deblurring by solving (5) in the last
original scale directly (i.e., f p ? ?) cannot achieve successful kernel estimation (results not shown).
A similar strategy by constructing an image pyramid has been applied successfully in many of the
recent deblurring methods [6, 16, 2, 22, 8, 25]. It is important to emphasize that the main purpose of
our scale-space perspective is more to provide complementary analysis and understanding of the empirical coarse-to-fine approach in blind deblurring algorithms, than to replace it. More discussions
on this point are provided in Section 4. Nevertheless, the proposed alternative approach can achieve
performance on par with state-of-the-art methods, as shown in Figure 4. More importantly, this alternative formulation offers us a number of extra dimensions for generalization, such as extensions
to noise robust kernel estimation and scale-adaptive estimation, as shown in the next section.
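To make the cascade concrete, the sketch below runs blind estimation over a sequence of decreasing Gaussian radii at the full spatial resolution, warm-starting each stage from the previous kernel estimate. The inner solver here is a deliberately naive alternating quadratic surrogate standing in for the type-II solver of (5); the radii, regularization weights, and kernel support are illustrative assumptions.

```python
import numpy as np
from numpy.fft import fft2, ifft2
from scipy.ndimage import gaussian_filter

def solve_scale(y_p, k, iters=20, alpha=1e-2, beta=1e-2, ksize=15):
    """Naive single-scale alternating solver (quadratic surrogates in the
    Fourier domain). NOT the type-II solver of (5); it only stands in for it
    so that the cascade structure can run end to end."""
    Y = fft2(y_p)
    for _ in range(iters):
        K = fft2(k, s=y_p.shape)
        X = np.conj(K) * Y / (np.abs(K) ** 2 + alpha)      # x-update
        Kf = np.conj(X) * Y / (np.abs(X) ** 2 + beta)      # k-update
        k = np.real(ifft2(Kf))[:ksize, :ksize]             # restrict support
        k = np.maximum(k, 0.0)                             # enforce k >= 0
        k /= max(k.sum(), 1e-12)                           # normalize to sum one
    return k, np.real(ifft2(X))

def cascaded_deblur(y, radii=(3.0, 2.0, 1.0, 0.5), ksize=15):
    k = np.zeros((ksize, ksize))
    k[0, 0] = 1.0                                          # delta initialization
    for r in radii:                                        # coarse-to-fine cascade
        y_p = gaussian_filter(y, sigma=r)                  # same spatial resolution,
        k, x = solve_scale(y_p, k, ksize=ksize)            # warm-started kernel
    return k, x
```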
3.2 Scale-Adaptive Deblurring via Tied Scale-Space Estimation
In the above cascade procedure, a single filter $f_p$ is used at each step in a greedy way. Instead, we can define a set of scale filters $\mathcal{P} \triangleq \{f_p\}_{p=1}^P$, apply each of them to the observed image y to get a set of filtered observations $\{y_p\}_{p=1}^P$, and then tie the estimation across all scales with the shared latent sharp image x. By constructing $\mathcal{P}$ as a set of Gaussian filters with decreasing radius, this is equivalent to performing blind deblurring in different scale spaces. Large scale space is more robust to image noise, and thus is more effective in stabilizing the estimation.
³ $\log|\Sigma_p| \leq \sum_i \log\!\left(\lambda + \gamma_i \|k_p\|_2^2\right) + (n - m)\log\lambda$ [25].
[Figure 3 heat-maps: contribution weights over iterations (Iter. 1, 3, 15) as a function of filtering radius, without additive noise (left) and with 5% additive noise (right)]

SSD error    org.scale   opt.scale   uni.scale   adaptive
w/o noise    101.9       43.8        39.4        36.7
5% noise     316.3       63.2        77.6        46.4
Figure 3: Scale Adaptive Contribution Learning for a set of 25 Gaussian filters with radius $r \in (0, 5]$ on the first image of [14]. Left: without adding noise. Right: with 5% additive noise. The values in the heat-map represent the contribution weight ($\lambda_p^{-1}$) for each scale filter during the iterations. The table shows the performance (SSD error) of blind deblurring with different scales: original scale (org.scale), empirically optimal scale (opt.scale), multiple scales with uniform contribution weights (uni.scale), and multiple scales with adaptive weights (adaptive).
However, only large scale structures are "visible" (recoverable) in this space. Small scale space offers the potential to recover more fine details, but is less robust to image noise. By conducting deblurring in multiple scale spaces simultaneously, we can exploit the complementary properties of different scales for robust blind deblurring in a unified framework. Furthermore, as different scales may contribute differently to the kernel estimation, we use a distinct noise level parameter $\lambda_p$ for each scale, which reflects the relative contribution of that scale to the estimation. Concretely, the final cost function can be obtained by accumulating the cost function (5) over all the P filtered observations with adaptive noise parameters:⁴
$$\min_{\{\lambda_p\}, k \geq 0, x} \; \sum_{p=1}^{P} \frac{1}{\lambda_p}\|f_p \ast y - f_p \ast k \ast x\|_2^2 + R(x, k, \{\lambda_p\}) + (n - m)\sum_p \log\lambda_p, \qquad (6)$$

$$\text{where } \; R(x, k, \{\lambda_p\}) = \sum_p r_p(x, k, \lambda_p) = \min_{\gamma \geq 0} \sum_{p,i} \left[\frac{x_i^2}{\gamma_i} + \log\!\left(\lambda_p + \gamma_i \|k_p\|_2^2\right)\right].$$
The penalty function R here is in effect a penalty term that exploits the multi-scale regularity/consistency of the solution space. The effectiveness of the proposed approach compared to other methods is illustrated in Figure 1, and more results are provided in Section 5. Formulating the deblurring problem as (6), our joint estimation framework enjoys a number of features that are particularly appropriate for the purpose of blind deblurring in the presence of noise and small scale image structures: (i) it exploits both the regularization of sharing the latent sharp image x across all filtered observations and the knowledge about the set of filters $\{f_p\}$; in this way, k is recovered directly, without the post-processing of previous work [26]; (ii) the proposed approach can be extended to handle non-uniform blur, as discussed in Section 3.3; and (iii) there are no inherent limitations on the form of the filters we can use besides Gaussian filters, e.g., we can also use directional filters as in [26].
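The data part of (6) is straightforward to evaluate once the filter bank is fixed: all P filtered observations share a single latent image x and kernel k, and scale p contributes with weight 1/λ_p. The sketch below computes exactly this term; the filters, weights, and test arrays are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import fftconvolve

def tied_data_term(y, x, k, radii, lams):
    """sum_p (1 / lambda_p) * ||f_p * y - f_p * (k * x)||^2, the data part
    of (6), with f_p a Gaussian of radius radii[p]."""
    kx = fftconvolve(x, k, mode="same")
    total = 0.0
    for r, lam in zip(radii, lams):
        resid = gaussian_filter(y, sigma=r) - gaussian_filter(kx, sigma=r)
        total += (resid ** 2).sum() / lam
    return total

rng = np.random.default_rng(0)
y, x = rng.random((32, 32)), rng.random((32, 32))
k = np.ones((5, 5)) / 25.0
print(tied_data_term(y, x, k, radii=[0.5, 1.0, 2.0], lams=[1.0, 1.0, 1.0]))
```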
Scale Adaptiveness. With this cost function, the contribution of each filtered observation $y_p$ constructed by $f_p$ is reflected by the weight $\lambda_p^{-1}$. The parameters $\{\lambda_p\}$ are initialized uniformly across all filters and are then learned automatically during the kernel estimation process. In this scenario, a smaller noise level estimate indicates a larger contribution to the estimation. It is natural to expect that the distribution of the contribution weights for the same set of filters will change under different input noise levels, as shown in Figure 3. From the figure, we make a number of interesting observations:
- The proposed algorithm is adaptive to observations with different noise levels. As we can see, filters with smaller radius contribute more in the noise-free case, while in the noisy case, filters with larger radius contribute more.
- The distribution of the contribution weights evolves during the iterative estimation process. For example, in the noise-less case, starting with uniform weights, the middle-scale filters contribute the most at the beginning of the iterations, while smaller-scale filters contribute more to the estimation later on, a natural coarse-to-fine behavior. Similar trends can also be observed for the noisy case.
⁴ This can be achieved either in an online fashion or in one shot.
[Figure 4 panels: (a) image estimation error per image index for Fergus, Shan, Cho, Levin, Zhang, and the proposed method; (b) the 8 blur kernels; (c) the 4 images]
Figure 4: Blind Deblurring Results: Noise-free Case. (a) Performance comparison (image estimation error) on the benchmark dataset [14], which contains (b) 8 blur kernels and (c) 4 images.
- While it is expected that the original scale space is not the "optimal" scale for kernel estimation in the presence of noise, it is somewhat surprising to find that this is also the case in the noise-free case. This corroborates previous findings that small scale structures are harmful to kernel estimation [22], and our algorithm automatically learns the scale space to suppress the effects of small scale structures.
- The weight distribution is flatter in the noise-free case, while it is more peaky in the noisy case.
Figure 3 is obtained with the first kernel and image in Figure 4. Similar properties can be observed for different images/blurs, although the position of the empirical mode is unlikely to be the same. The table in Figure 3 shows the estimation error using different scale space configurations. Blind deblurring in the original space directly (org.scale) fails, as indicated by the large estimation error. However, when setting the filter to $f_o$, whose contribution $\lambda_o^{-1}$ is empirically the largest among all filters (opt.scale), the performance is much better than in the original scale directly, with the estimation error reduced significantly. The proposed method, by tying multiple scales together and learning adaptive contribution weights (adaptive), performs the best across all the configurations, especially in the noisy case.
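The learning rule for {λ_p} is left implicit above; one simple realization, consistent with λ_p playing the role of a per-scale noise variance, is to re-estimate each λ_p as the mean squared residual at that scale after every update of x and k. This specific update is our assumption for illustration, not a quote of the authors' derivation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import fftconvolve

def update_scale_weights(y, x, k, radii, eps=1e-8):
    """Assumed rule: lambda_p <- mean squared residual at scale p, so the
    contribution weight 1/lambda_p grows for scales that the current (x, k)
    explains well."""
    kx = fftconvolve(x, k, mode="same")
    lams = []
    for r in radii:
        resid = gaussian_filter(y, sigma=r) - gaussian_filter(kx, sigma=r)
        lams.append(max(float(np.mean(resid ** 2)), eps))
    return lams
```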
3.3 Non-Uniform Blur Extension
The extension of the uniform blind deblurring model proposed above to the non-uniform blur case is
achieved by using a generalized observation model [18, 9], representing the blurry image as the summation of differently transformed versions of the latent sharp image: $y = Hx + n = \sum_j w_j P_j x + n = Dw + n$. Here $P_j$ is the j-th projection or homography operator (a combination of rotations and translations) and $w_j$ is the corresponding combination weight, representing the proportion of time spent at that particular camera pose during exposure. $D = [P_1 x, P_2 x, \cdots, P_j x, \cdots]$ denotes the dictionary constructed by projectively transforming x using a set of transformation operators, and $w \triangleq [w_1, w_2, \cdots]^T$ denotes the combination weights of the blurry image over the dictionary. The uniform convolutional model (1) can be obtained by restricting $\{P_j\}$ to be translations only. With
derivations similar to those in Section 3.1, it can be shown that the cost function for the general
non-uniform blur case is
$$\min_{w \geq 0,\, x,\, \{\lambda_p\}} \; \sum_{p=1}^{P} \frac{1}{\lambda_p}\|y_p - H_p x\|_2^2 + \min_{\gamma \geq 0} \sum_{p,i} \left[\frac{x_i^2}{\gamma_i} + \log\!\left(\lambda_p + \gamma_i \|h_{ip}\|_2^2\right)\right] + (n - m)\sum_p \log\lambda_p, \qquad (7)$$
where $H_p \triangleq F_p \sum_j w_j P_j$ is the compound operator incorporating both the additional filter and the non-uniform blur, $F_p$ is the convolutional matrix form of $f_p$, and $h_{ip}$ denotes the effective compound local kernel at site i in the image plane, constructed with w and the set of transformation operators.
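To make the generalized observation model concrete, the sketch below materializes the dictionary D, whose columns are transformed copies P_j x over a small pose grid, and composes y = Dw + n. The pose grid, interpolation, and weights are illustrative, and scipy's rotate and shift stand in for general homography operators.

```python
import numpy as np
from scipy.ndimage import rotate, shift

def pose_dictionary(x, angles=(-1.0, 0.0, 1.0), shifts=(-1, 0, 1)):
    """Columns are transformed copies P_j x; in-plane rotations and integer
    translations are used here as a cheap stand-in for full homographies."""
    cols = []
    for a in angles:
        xa = rotate(x, angle=a, reshape=False, order=1)
        for dy in shifts:
            for dx in shifts:
                cols.append(shift(xa, (dy, dx), order=1).ravel())
    return np.stack(cols, axis=1)            # D: (num pixels, num poses)

rng = np.random.default_rng(0)
x = rng.random((32, 32))
D = pose_dictionary(x)
w = rng.dirichlet(np.ones(D.shape[1]))       # nonnegative weights, sum to one
y = (D @ w).reshape(x.shape) + 0.01 * rng.standard_normal(x.shape)  # y = Dw + n
```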
4 Discussions
We discuss the relationship of the proposed approach with several recent methods to help further clarify the properties of our approach.
Image Pyramid based Blur Kernel Estimation. Since the blind deblurring work of Fergus et al. [6], the image pyramid has been widely used as a standard architecture for blind deblurring [16, 2, 8, 22, 13, 25]. The image pyramid is constructed by repeatedly resizing the observed image by a fixed ratio until reaching a scale where the corresponding kernel is very small, e.g., 3 × 3. The blur kernel is first estimated from the smallest image and is upscaled to initialize the next level. This process is repeated until the last level is reached.
[Figure 5 panels: estimation error of Zhong and the proposed method, (a) per kernel index and (b) per image index; (c) estimation error vs. noise level (%) for Levin, Zhang, Zhong, and the proposed method]
Figure 5: Deblurring results in the presence of noise on the benchmark dataset [14]. Performance averaged over (a) different images and (b) different kernels, with 5% additive Gaussian noise.
(c) Comparison of the proposed method with Levin et al. [13], Zhang et al. [25], Zhong et al. [26]
on the first image with the first kernel, under different noise levels.
While it is effective for exploiting the solution space, this greedy pyramid construction does not provide an effective way to handle image
noise. Our formulation not only retains properties similar to the pyramid coarse-to-fine estimation,
but also offers the extra flexibility to achieve scale-adaptive estimation, which is robust to noise and
small scale structures.
Noise-Robust Blind Deblurring [17, 26]. Based on the observation that denoising as a pre-processing step can help with blur kernel estimation in the presence of noise, Tai et al. [17] proposed to perform denoising and kernel estimation alternately, by incorporating an additional image penalty
function designed specially taking the blur kernel into account [17]. This approach uses separate
penalty terms and introduces additional balancing parameters. Our proposed model, on the contrary,
has a coupled penalty function and learns the balancing parameters from the data. Moreover, the
proposed model can be generalized to non-uniform blur in a straightforward way. Another recent
method [26] performs blind kernel estimation on images filtered with different directional filters
separately and then reconstructs the final kernel in a second step via inverse Radon transform [26].
This approach is only applicable to uniform blur and directional auxiliary filters. Moreover, it treats
each filtered observation independently, and thus may introduce additional errors in the second kernel
reconstruction step, due to factors such as mis-alignment between the estimated compound kernels.
Small Scale Structures in Blur Kernel Estimation [22, 2]. Based on the observation that small
scale structures are harmful for kernel estimation, Xu and Jia [22] designed an empirical approach
for structure selection based on gradient magnitudes. Structure selection has also been incorporated
into blind deblurring in various forms before, such as gradient thresholding [2, 16]. However, it
is hard to determine a universal threshold for different images and kernels. Other techniques such
as image decomposition have also been incorporated [24], where the observed blurry image is decomposed into structure and texture layers. However, standard image decomposition techniques do
not consider image blur, thus might not work well in the presence of blur. Another issue for this
approach is again the selection of the parameter for separating texture from structure, which is image dependent in general. The proposed method achieves robustness to small scale structures by
optimizing the scale contribution weights jointly with blind deblurring, in an image adaptive way.
The optimization techniques used in this paper have been used before for image deblurring [13, 21, 25], in different contexts and with different motivations.
5 Experimental Results
We perform extensive experiments in this section to evaluate the performance of the proposed
method compared with several state-of-the-art blind deblurring methods, including the two recent noise-robust deblurring methods of Tai et al. [17] and Zhong et al. [26], as well as the non-uniform deblurring method of Xu et al. [23]. We construct $\{f_p\}$ as Gaussian filters, with the radius uniformly sampled over a specified range, which is typically set as [0.1, 3] in the experiments.⁵ The number of iterations is used as the stopping criterion and is fixed as 15 in practice.
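For completeness, constructing the filter bank used in these experiments amounts to sampling P = 7 Gaussian radii uniformly over [0.1, 3]; a minimal version follows, where the truncation width of the 2D filters is our own choice.

```python
import numpy as np

def gaussian_bank(P=7, r_min=0.1, r_max=3.0, width=11):
    """P Gaussian filters with radii sampled uniformly over [r_min, r_max]."""
    ax = np.arange(width) - width // 2
    xx, yy = np.meshgrid(ax, ax)
    filters = []
    for r in np.linspace(r_min, r_max, P):
        f = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * r ** 2))
        filters.append(f / f.sum())
    return filters
```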
Evaluation using the Benchmark Dataset of Levin et al. [14]. We first perform evaluation on
the benchmark dataset of Levin et al. [14], containing 4 images and 8 blur kernels, leading to 32
blurry images in total (see Figure 4). Performances for the noise-free case are reported in Figure 4,
where the proposed approach performs on par with the state of the art.
⁵ The number of filters P should be large enough to characterize the scale space. We typically set P = 7.
[Figure 6 panels: blurry inputs and deblurred results for the Kyoto, Elephant, and Building images, comparing Tai [17] or Xu [23], Zhong [26], and the proposed method]
Figure 6: Deblurring results on images with non-uniform blur, compared with Tai et al. [17], Zhong
et al. [26] and Xu et al. [23]. Full images are shown in the supplementary file.
To evaluate the performances of different methods in the presence of noise, we add i.i.d. Gaussian noise to the blurry images, and
then perform kernel estimation. The estimated kernels are used for non-blind deblurring [12] on
the noise-free blurry images. The bar plots in Figure 5 show the sum-of-squared-difference (SSD)
error of the deblurred images using the proposed method and the method of Zhong et al. [26] when
the noise level is 5%. As the same non-blind deblurring method is used, this SSD error reflects
the quality of the kernel estimation. It is clear that the proposed method performs better than the
method of Zhong et al. [26] overall. We also show the results of different methods with increasing
noise levels in Figure 5. It is observed that while the conventional methods (e.g., Levin et al. [13], Zhang et al. [25]) perform well when the noise level is low, their performances degrade rapidly
when the noise level increases. The method of Zhong et al. [26] performs more robustly across
different noise levels, but does not perform as well as the other methods when the noise level is
very low. This might be caused by the loss of information during its two-step process. The proposed
method outperforms the other methods for all the noise levels, proving its effectiveness.
Deblurring on Real-World Images. We further evaluate the performance of the proposed method
on real-world images from the literature [17, 7, 8]. The results are shown in Figure 6. For the Kyoto
image from [17], the deblurred image of Tai et al. [17] has some ringing artifacts while the result
of Zhong et al. [26] has ghosting effects due to the inaccurate kernel estimation. The deblurred
image from the proposed method has neither ghosting nor strong ringing artifacts. For the other two
test images, the non-uniform deblurring method [23] produces deblurred images that are still very
blurry, as it achieves kernel estimations close to a delta kernel for both images, due to the presence
of noise. The method of Zhong et al. [26] can only handle uniform blur and the deblurred images
have strong ringing artifacts. The proposed method can estimate the non-uniform blur accurately
and can produce high-quality deblurring results better than the other methods.
6 Conclusion
We present an analysis of blind deblurring approaches from a scale-space perspective. The novel
analysis not only helps in understanding several empirical techniques widely used in the blind deblurring literature, but also inspires new extensions. Extensive experiments on benchmark dataset
as well as real-world images verify the effectiveness of the proposed method. For future work, we
would like to investigate the extension of the proposed approach in several directions, such as blind
image denoising and multi-scale dictionary learning. The task of learning the auxiliary filters in a
blur- and image-adaptive fashion is another interesting future research direction.
Acknowledgement The research was supported in part by Adobe Systems.
References
[1] E. J. Candès and C. Fernandez-Granda. Towards a mathematical theory of super-resolution.
CoRR, abs/1203.5871, 2012.
[2] S. Cho and S. Lee. Fast motion deblurring. In SIGGRAPH ASIA, 2009.
[3] C. Ekanadham, D. Tranchina, and E. P. Simoncelli. A blind sparse deconvolution method for
neural spike identification. In NIPS, 2011.
[4] J. H. Elder and S. W. Zucker. Local scale control for edge detection and blur estimation. IEEE
Trans. Pattern Anal. Mach. Intell., 20(7):699-716, 1998.
[5] Z. Farbman, R. Fattal, D. Lischinski, and R. Szeliski. Edge-preserving decompositions for
multi-scale tone and detail manipulation. In SIGGRAPH, 2008.
[6] R. Fergus, B. Singh, A. Hertzmann, S. T. Roweis, and W. T. Freeman. Removing camera shake
from a single photograph. In SIGGRAPH, 2006.
[7] A. Gupta, N. Joshi, C. L. Zitnick, M. Cohen, and B. Curless. Single image deblurring using
motion density functions. In ECCV, 2010.
[8] S. Harmeling, M. Hirsch, and B. Schölkopf. Space-variant single-image blind deconvolution
for removing camera shake. In NIPS, 2010.
[9] M. Hirsch, C. J. Schuler, S. Harmeling, and B. Schölkopf. Fast removal of non-uniform camera
shake. In ICCV, 2011.
[10] Y. Karklin and E. P. Simoncelli. Efficient coding of natural images with a population of noisy
linear-nonlinear neurons. In NIPS, 2011.
[11] D. Krishnan, T. Tay, and R. Fergus. Blind deconvolution using a normalized sparsity measure.
In CVPR, 2011.
[12] A. Levin, R. Fergus, F. Durand, and W. T. Freeman. Deconvolution using natural image priors.
Technical report, MIT, 2007.
[13] A. Levin, Y. Weiss, F. Durand, and W. T. Freeman. Efficient marginal likelihood optimization
in blind deconvolution. In CVPR, 2011.
[14] A. Levin, Y. Weiss, F. Durand, and W. T. Freeman. Understanding blind deconvolution algorithms. IEEE Trans. Pattern Anal. Mach. Intell., 33(12):2354-2367, 2011.
[15] T. Lindeberg and B. M. H. Romeny. Linear scale-space: I. Basic theory, II. Early visual
operations. In Geometry-Driven Diffusion in Computer Vision, 1994.
[16] Q. Shan, J. Jia, and A. Agarwala. High-quality motion deblurring from a single image. In
SIGGRAPH, 2008.
[17] Y.-W. Tai and S. Lin. Motion-aware noise filtering for deblurring of noisy and blurry images.
In CVPR, pages 17-24, 2012.
[18] Y.-W. Tai, P. Tan, and M. S. Brown. Richardson-Lucy deblurring for scenes under a projective
motion path. IEEE Trans. Pattern Anal. Mach. Intell., 33(8):1603-1618, 2011.
[19] C. Tomasi and R. Manduchi. Bilateral filtering for gray and color images. In ICCV, 1998.
[20] D. Tschumperlé and R. Deriche. Vector-valued image regularization with PDEs: A common framework for different applications. IEEE Trans. Pattern Anal. Mach. Intell., 27(4):506-517,
2005.
[21] D. P. Wipf and H. Zhang. Revisiting Bayesian blind deconvolution. CoRR, abs/1305.2362,
2013.
[22] L. Xu and J. Jia. Two-phase kernel estimation for robust motion deblurring. In ECCV, 2010.
[23] L. Xu, S. Zheng, and J. Jia. Unnatural L0 sparse representation for natural image deblurring.
In CVPR, 2013.
[24] Y. Xu, X. Hu, L. Wang, and S. Peng. Single image blind deblurring with image decomposition.
In ICASSP, 2012.
[25] H. Zhang and D. Wipf. Non-uniform camera shake removal using a spatially adaptive sparse
penalty. In NIPS, 2013.
[26] L. Zhong, S. Cho, D. Metaxas, S. Paris, and J. Wang. Handling noise in single image deblurring
using directional filters. In CVPR, 2013.
Shape and Illumination from Shading using the
Generic Viewpoint Assumption
Dilip Krishnan*
CSAIL, MIT
[email protected]
Daniel Zoran*
CSAIL, MIT
[email protected]
William T. Freeman
CSAIL, MIT
[email protected]
Jose Bento
Boston College
[email protected]
Abstract
The Generic Viewpoint Assumption (GVA) states that the position of the viewer
or the light in a scene is not special. Thus, any estimated parameters from an
observation should be stable under small perturbations such as object, viewpoint
or light positions. The GVA has been analyzed and quantified in previous works,
but has not been put to practical use in actual vision tasks. In this paper, we show
how to utilize the GVA to estimate shape and illumination from a single shading
image, without the use of other priors. We propose a novel linearized Spherical
Harmonics (SH) shading model which enables us to obtain a computationally efficient form of the GVA term. Together with a data term, we build a model whose
unknowns are shape and SH illumination. The model parameters are estimated
using the Alternating Direction Method of Multipliers embedded in a multi-scale
estimation framework. In this prior-free framework, we obtain competitive shape
and illumination estimation results under a variety of models and lighting conditions, requiring fewer assumptions than competing methods.
1 Introduction
The generic viewpoint assumption (GVA) [5, 9, 21, 22] postulates that what we see in the world
is not seen from a special viewpoint, or lighting condition. Figure 1 demonstrates this idea with
the famous Necker cube example.¹ A three-dimensional cube may be observed with two vertices
or edges perfectly aligned, giving rise to a two dimensional interpretation. Another possibility is
a view that exposes only one of the faces of the cube, giving rise to a square. However, these 2D
views are unstable to slight perturbations in viewing position. Other examples in [9] and [22] show
situations where views are unstable to lighting rotations.
While there has been interest in the GVA in the psychophysics community [22, 12], to the best of
our knowledge, this principle seems to have been largely ignored in the computer vision community.
One notable exception is the paper by Freeman [9], which gives a detailed analytical account of how
to incorporate the GVA in a Bayesian framework. In that paper, it is shown that using the GVA
modifies the probability space of different explanations to a scene, preferring perceptually valid and
stable solutions to contrived and unstable ones, even though all of these fully explain the observed
image. No algorithm incorporating the GVA, beyond exhaustive search, was proposed.
* Equal contribution
¹ Taken from http://www.cogsci.uci.edu/~ddhoff/three-cubes.gif
Figure 1: Illustration of the GVA principle using the Necker cube example. The cube in the middle
can be viewed in multiple ways. However, the views on the left and right require a very specific
viewing angle. Slight rotations of the viewer around the exact viewing positions would dramatically
change the observed image. Thus, these views are unstable to perturbations. The middle view, on
the contrary, is stable to viewer rotations.
Shape from shading is a basic low-level vision task. Given an input shading image - an image of
a constant albedo object depicting only changes in illumination - we wish to infer the shape of the
objects in the image. In other words, we wish to recover the relative depth Zi at each pixel i in
the image. Given values of Z, local surface orientations are given by the gradients ∇x Z and ∇y Z
along the coordinate axes. A key component in estimating the shape is the illumination L. The
parameters of L may be given with the image, or may need to be estimated from the image along
with the shape. The latter is a much harder problem due to the ambiguous nature of the problem, as
many different surface orientations and light combinations may explain the same image. While the
notion of a shading image may seem unnatural, extracting them from natural images has been an
active field of research. There are effective ways of decomposing images into shading and albedo
images (so-called "intrinsic images" [20, 10, 1, 29]), and the output of those may be used as input to
shape from shading algorithms.
In this paper we show how to effectively utilize the GVA for shape and illumination estimation from
a single shading image. The only terms in our optimization are the data term which explains the
observation and the GVA term. We propose a novel shading model which is a linearization of the
spherical harmonics (SH) shading model [25]. The SH model has been gaining popularity in the vision and graphics communities in recent years [26, 17], as it is more expressive than the popular single-source Lambertian model. Linearizing this model allows us, as we show below, to get
simple expressions for our image and GVA terms, enabling us to use them effectively in an optimization framework. Given a shading image with an unknown light source, our optimization procedure
solves for the depth and illumination in the scene. We optimize using Alternating Direction Method
of Multipliers (ADMM) [4, 6]. We show that this method is competitive with current shape and
illumination from shading algorithms, without the use of other priors over illumination or geometry.
2 Related Work
Classical works on shape from shading include [13, 14, 15, 8, 23] and newer works include [3, 2,
19, 30]. It is out of scope of this paper to give a full survey of this well studied field, and we refer the
reader to [31] and [28] for good reviews. A large part of the research has been focused on estimating
the shape under known illumination conditions. While still a hard problem, it is more constrained
than estimating both the illumination and the shape.
In impressive recent work, Barron and Malik [3] propose a method for estimating not just the illumination and shape, but also the albedo of a given masked object from a single image. By using
a number of novel (and carefully balanced) priors over shape (such as smoothness and contour information), albedo and illumination, it is shown that reasonable estimates of shape and illumination
may be extracted. These priors and the data term are combined in a novel multi-scale framework
which weights coarser scale (lower frequency) estimates of shape more than finer scale estimates.
Furthermore, Barron and Malik use a spherical harmonics lighting model to provide for richer recovery of real world scenes and diffuse outdoor lighting conditions. Another contribution of their
work has been the observation that joint inference of multiple parameters may prove to be more
robust (although this is hard to prove rigorously). The expansion to the original MIT dataset [11]
provided in [3] is also a useful contribution.
Another recent notable example is that of Xiong et al. [30]. In this thorough work, the distribution
of possible shape/illumination combinations in a small image patch is derived, assuming a quadratic
depth model. It is shown that local patches may be quite informative, and that there are only a few possible
explanations of light/shape pairs for each patch. A framework for estimating full model geometry
with known lighting conditions is also proposed.
3 Using the Generic View Assumption for Shape from Shading
In [9], Freeman gave an analytical framework to use the GVA. However, the computational examples in the paper were restricted to linear shape from shading models. No inference algorithm was
presented; instead the emphasis was on analyzing how the GVA term modifies the posterior distribution of candidate shape and illumination estimates. The key idea in [9] is to marginalize the
posterior distribution over a set of "nuisance" parameters - these correspond to object or illumination perturbations. This integration step corresponds to finding a solution that is stable to these
perturbations.
3.1 A Short Introduction to the GVA
Here we give a short summary of the derivations in [9], which we use in our model. We start
with a generative model f for images, which depends on scene parameters Q and a set of generic
parameters w. The generative model we use is explained in Section 4. w are the parameters which
will eventually be marginalized. In our shape and illumination from shading case, f corresponds to
our shading model in Eq. 14 (defined below). Q includes both surface depth at each point Z and the
light coefficients vector L. Finally, the generic variable w corresponds to different object rotation
angles around different axes of rotations (though there could be other generic variables, we only use
this one). Assuming measurement noise ε, the result of the generative process would be:
$$I = f(Q, w) + \varepsilon \tag{1}$$
Now, given an image I we wish to infer scene parameters Q by marginalizing out the generic variables w. Using Bayes' theorem, this results in the following probability function:
$$P(Q|I) = \frac{P(Q)}{P(I)} \int_w P(w)\, P(I|Q, w)\, dw \tag{2}$$
Assuming a low Gaussian noise model for ε, the above integral can be approximated with a Laplace approximation, which involves expanding f using a Taylor expansion around w0. We get the following expression, aptly named in [9] the "scene probability equation":
$$P(Q|I) = \underbrace{C}_{\text{constant}}\;\underbrace{\exp\!\left(-\frac{\|I - f(Q, w_0)\|^2}{2\sigma^2}\right)}_{\text{fidelity}}\;\underbrace{P(Q)P(w_0)}_{\text{prior}}\;\underbrace{\frac{1}{\sqrt{\det A}}}_{\text{genericity}} \tag{3}$$
where A is a matrix whose (i, j)-th entry is:
$$A_{i,j} = \frac{\partial f(Q, w)}{\partial w_i}^{T}\,\frac{\partial f(Q, w)}{\partial w_j} \tag{4}$$
and the derivatives are estimated at w0. A is often called the Fisher information matrix.
Eq. 3 has three terms: the fidelity term (sometimes called the likelihood term, data term or image
term) tells us how close we are to the observed image. The prior tells us how likely our current parameter estimates are. The last term, genericity, tells us how much our observed image would change under perturbations of the different generic variables. This term penalizes results that are unstable w.r.t. the generic variables. From the form of A, it is clear why the genericity term
helps; the determinant of A is large when the rendered image f changes rapidly with respect to w.
This makes the genericity term small and the corresponding hypothesis Q less probable.
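To make this concrete, the following sketch (our own illustration, not code from [9]; NumPy assumed, and `render` is a hypothetical image-formation routine standing in for f) builds the Fisher information matrix of Eq. 4 by finite differences and scores a hypothesis by the genericity factor det(A)^(-1/2):

```python
import numpy as np

def genericity(render, Q, w0, eps=1e-4):
    """Genericity factor det(A)^(-1/2); w0 is a 1-D array of generic variables."""
    f0 = render(Q, w0).ravel()
    # Numerical Jacobian: one derivative image per generic variable w_k.
    J = np.stack(
        [(render(Q, w0 + eps * np.eye(len(w0))[k]).ravel() - f0) / eps
         for k in range(len(w0))],
        axis=1)                      # shape: (num_pixels, num_generic_vars)
    A = J.T @ J                      # Fisher information matrix of Eq. 4
    return 1.0 / np.sqrt(np.linalg.det(A))
```

A hypothesis whose rendering changes quickly under the generic perturbations gets a large det(A) and hence a small genericity score, exactly as described above.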
3.2 Using the GVA for Shape and Illumination Estimation
We now show how to derive the GVA term for general object rotations by using the result in [9] and
applying it to our linearized shading model. Due to lack of space, we provide the main results here;
please refer to the supplementary material for full details. Given an axis of rotation parametrized by
angles φ and ψ, the derivative of f w.r.t. a rotation θ about the axis is:
$$\frac{\partial f}{\partial \theta} = a R_x + b R_y + c R_z \tag{5}$$
$$a = \cos(\phi)\sin(\psi), \quad b = \sin(\phi)\sin(\psi), \quad c = \cos(\psi) \tag{6}$$
where Rx, Ry and Rz are three derivative images for rotations around the canonical axes, whose i-th pixel is:
$$R_x^i = I_x^i Z_i + \alpha_i \beta_i k_i^x + (1 + \beta_i^2)\, k_i^y \tag{7}$$
$$R_y^i = -I_y^i Z_i - \alpha_i \beta_i k_i^y - (1 + \alpha_i^2)\, k_i^x \tag{8}$$
$$R_z^i = I_x^i Y_i - I_y^i X_i + \alpha_i k_i^y - \beta_i k_i^x \tag{9}$$
We use these images to derive the GVA term for rotations around different axes, resulting in:
$$\mathrm{GVA}(Z, L) = \sum_{\phi \in \Phi}\sum_{\psi \in \Psi} \frac{1}{\sqrt{2\pi\sigma^2 \left\|\frac{\partial f}{\partial \theta}\right\|^2}} \tag{10}$$
where Φ and Ψ are discrete sets of angles in [0, π) and [0, 2π) respectively. Looking at the term in Eqs. 5-10 we see that had we used the full, non-linearized shading model in Eq. 11, it would result in a very complex expression, especially considering that α = ∇x Z and β = ∇y Z are functions
of the depth Z. Even after linearization, this expression may seem a bit daunting, but we show in
Section 5 how we can significantly simplify the optimization of this function.
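As a sketch of how Eq. 10 can be evaluated (our reading of the formulas, not the authors' implementation), one loops over a discrete grid of axis angles, forms ∂f/∂θ from the precomputed derivative images of Eqs. 7-9, and accumulates the per-axis genericity scores:

```python
import numpy as np

def gva_term(Rx, Ry, Rz, sigma2, n_phi=4, n_psi=4):
    """Accumulate Eq. 10 over a discrete grid of rotation-axis angles."""
    total = 0.0
    for phi in np.linspace(0.0, np.pi, n_phi, endpoint=False):
        for psi in np.linspace(0.0, 2.0 * np.pi, n_psi, endpoint=False):
            # Axis direction coefficients of Eq. 6.
            a = np.cos(phi) * np.sin(psi)
            b = np.sin(phi) * np.sin(psi)
            c = np.cos(psi)
            df_dtheta = a * Rx + b * Ry + c * Rz        # Eq. 5
            norm_sq = np.sum(df_dtheta ** 2)            # ||df/dtheta||^2
            total += 1.0 / np.sqrt(2.0 * np.pi * sigma2 * norm_sq)
    return total
```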
4 Linearized Spherical Harmonics Shading Model
The Spherical Harmonics (SH) lighting2 model allows for a rich yet concise description of a lighting environment [25]. By keeping just a few of the leading SH coefficients when describing the
illumination, it allows an accurate description for low frequency changes of lighting as a function
of direction, without needing to explicitly model the lighting environment in whole. This model
has been used successfully in the graphics and the vision communities. The popular setting for SH
lighting is to keep the first three orders of the SH functions, resulting in nine coefficients which we
will denote by the vector L. Let Z be a depth map, with the depth at pixel i given by Zi . The surface
slopes at pixel i are defined as αi = (∇x Z)i and βi = (∇y Z)i respectively. Given L and Z, the log
shading at pixel i for a diffuse, Lambertian surface under the SH model is given by:
$$\log S_i = n_i^T M n_i \tag{11}$$
where:
$$n_i = \left[\frac{-\alpha_i}{\sqrt{1+\alpha_i^2+\beta_i^2}},\; \frac{-\beta_i}{\sqrt{1+\alpha_i^2+\beta_i^2}},\; \frac{1}{\sqrt{1+\alpha_i^2+\beta_i^2}},\; 1\right]^T \tag{12}$$
and:
$$M = \begin{bmatrix} c_1 L_9 & c_1 L_5 & c_1 L_8 & c_2 L_4 \\ c_1 L_5 & -c_1 L_9 & c_1 L_6 & c_2 L_2 \\ c_1 L_8 & c_1 L_6 & c_3 L_7 & c_2 L_3 \\ c_2 L_4 & c_2 L_2 & c_2 L_3 & c_4 L_1 - c_5 L_7 \end{bmatrix} \tag{13}$$
with:
c1 = 0.429043, c2 = 0.511664, c3 = 0.743125, c4 = 0.886227, c5 = 0.247708
The formation model in Eq. 11 is non-linear and non-convex in the surface slopes α and β. In practice, this leads to optimization difficulties such as local minima, which have been noted by Barron and Malik in [3]. In order to overcome this, we linearize Eq. 11 around the local surface slope estimates αi0 and βi0, such that:
$$\log S_i \approx k^c(\alpha_i^0, \beta_i^0, L) + k^x(\alpha_i^0, \beta_i^0, L)\,\alpha_i + k^y(\alpha_i^0, \beta_i^0, L)\,\beta_i \tag{14}$$
2 We will use the terms lighting and shading interchangeably.
where the local surface slopes are estimated in a local patch around each pixel in our current estimated surface. The derivation of the linearization is given in the supplementary material. For the
sake of brevity, we will omit the dependence on the αi0, βi0 and L terms, and denote the coefficients at each location as k_i^c, k_i^x and k_i^y respectively for the remainder of the paper.
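For reference, the quadratic SH model of Eqs. 11-13 is short enough to write out directly; the sketch below (NumPy assumed; `L[0]`..`L[8]` stand for L1..L9, alpha and beta are flattened slope arrays, and the normal's sign convention reflects our reading of Eq. 12) evaluates the log shading per pixel:

```python
import numpy as np

C1, C2, C3, C4, C5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708

def sh_log_shading(alpha, beta, L):
    """Per-pixel log shading log S_i = n_i^T M n_i (Eqs. 11-13)."""
    M = np.array([
        [C1 * L[8],  C1 * L[4], C1 * L[7], C2 * L[3]],
        [C1 * L[4], -C1 * L[8], C1 * L[5], C2 * L[1]],
        [C1 * L[7],  C1 * L[5], C3 * L[6], C2 * L[2]],
        [C2 * L[3],  C2 * L[1], C2 * L[2], C4 * L[0] - C5 * L[6]],
    ])
    norm = np.sqrt(1.0 + alpha ** 2 + beta ** 2)
    # Homogeneous surface normal of Eq. 12: [-alpha, -beta, 1] / norm, then a 1.
    n = np.stack([-alpha / norm, -beta / norm, 1.0 / norm,
                  np.ones_like(alpha)])                 # (4, num_pixels)
    return np.einsum('ip,ij,jp->p', n, M, n)            # n^T M n per pixel
```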
A natural question is the accuracy of the linearized model Eq. 14. The linearization is accurate
in most situations where the depth Z changes gradually, such that the change in slope is linear or
small in magnitude. In [30], locally quadratic shapes are assumed; this leads to linear changes in
slopes, and in such situations, the linearization is highly accurate. We tested the accuracy of the
linearization by computing the difference between the estimates in Eq. 14 and Eq. 11, over ground
truth shape and illumination estimates. We found it to be highly accurate for the models in our
experiments. The linearization in Eq. 14 leads to a quadratic formation model for the image term
(described in Section 5.2.1), leading to more efficient updates for ? and ?. Furthermore, this allows
us to effectively incorporate the GVA even with the spherical harmonics framework.
5 Optimization using the Alternating Direction Method of Multipliers
5.1 The Cost Function
Following Eq. 3, we can now derive the cost function we will optimize w.r.t. the scene parameters
Z and L. To derive a MAP estimate, we take the negative log of Eq. 3 and use constant priors over
both the scene parameters and the generic variables; thus we have a prior-free cost function. This
results in the following cost:
$$g(Z, L) = \lambda_{\mathrm{img}} \|I - \log S(Z, L)\|^2 - \lambda_{\mathrm{GVA}} \log \mathrm{GVA}(Z, L) \tag{15}$$
where f (Z, L) = log S(Z, L) is our linearized shading model Eq. 14 and the GVA term is defined in
Eq. 10. λimg and λGVA are hyper-parameters which we set to 2 and 1 respectively for all experiments.
Because of the dependence of α and β on Z, directly optimizing this cost function is hard, as it results in a large, non-linear differential system for Z. In order to make this more tractable, we introduce α̂ and β̂, the surface spatial derivatives, as auxiliary variables, and solve for the following cost function, which constrains the resulting surface to be integrable:
$$\hat{g}(Z, \hat{\alpha}, \hat{\beta}, L \mid I) = \lambda_{\mathrm{img}} \|I - \log S(\hat{\alpha}, \hat{\beta}, L)\|^2 - \lambda_{\mathrm{GVA}} \log \mathrm{GVA}(Z, \hat{\alpha}, \hat{\beta}, L) \tag{16}$$
$$\text{s.t.} \quad \hat{\alpha} = \nabla_x Z, \quad \hat{\beta} = \nabla_y Z, \quad \nabla_y \nabla_x Z = \nabla_x \nabla_y Z$$
ADMM allows us to subdivide the cost into relatively simple subproblems, solve each one independently and then aggregate the results. We briefly review the message passing variant of ADMM [7]
in the supplementary material.
5.2 Subproblems
5.2.1 Image Term
This subproblem ties our solution to the input log shading image. The participating variables are the
slopes α̂ and β̂ and the illumination L. We minimize the following cost:
$$\arg\min_{\hat{\alpha}, \hat{\beta}, L}\; \lambda_{\mathrm{img}} \sum_i \left(I_i - k_i^c - k_i^x \hat{\alpha}_i - k_i^y \hat{\beta}_i\right)^2 + \frac{\rho}{2}\|\hat{\alpha} - n_{\hat{\alpha}}\|^2 + \frac{\rho}{2}\|\hat{\beta} - n_{\hat{\beta}}\|^2 + \frac{\rho}{2}\|L - n_L\|^2 \tag{17}$$
where n_α̂, n_β̂ and n_L are the incoming messages for the corresponding variables as described above. We solve this subproblem iteratively: for α̂ and β̂ we keep L constant (and as a result the k coefficients are
constant). A closed form solution exists since this is just a quadratic due to our relinearization model.
In order to solve for L we do a few (5 to 10) steps of L-BFGS [27].
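The closed form is easy to see: with L (and hence the k coefficients) fixed, Eq. 17 decouples into an independent 2×2 linear system per pixel. A sketch of the resulting update (our own derivation, with `rho` denoting the quadratic message weight):

```python
import numpy as np

def update_slopes(I, kc, kx, ky, n_alpha, n_beta, lam_img, rho):
    """Per-pixel minimizer of Eq. 17 over the slopes, with L held fixed."""
    r = I - kc                                 # residual with slopes removed
    # Normal equations of the per-pixel 2x2 quadratic.
    a11 = 2.0 * lam_img * kx ** 2 + rho
    a22 = 2.0 * lam_img * ky ** 2 + rho
    a12 = 2.0 * lam_img * kx * ky
    b1 = 2.0 * lam_img * kx * r + rho * n_alpha
    b2 = 2.0 * lam_img * ky * r + rho * n_beta
    det = a11 * a22 - a12 ** 2
    alpha = (a22 * b1 - a12 * b2) / det        # Cramer's rule, elementwise
    beta = (a11 * b2 - a12 * b1) / det
    return alpha, beta
```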
5.2.2 GVA Term
The participating variables here are the depth values Z, the slopes α̂ and β̂, and the light L. We look for the parameters which minimize:
$$\arg\min_{Z, \hat{\alpha}, \hat{\beta}, L}\; -\frac{\lambda_{\mathrm{GVA}}}{2} \log \mathrm{GVA}(Z, \hat{\alpha}, \hat{\beta}, L) + \frac{\rho}{2}\|\hat{\alpha} - n_{\hat{\alpha}}\|^2 + \frac{\rho}{2}\|\hat{\beta} - n_{\hat{\beta}}\|^2 + \frac{\rho}{2}\|L - n_L\|^2 \tag{18}$$
Here, though the expression for the GVA (Eq. 10) term is greatly simplified due to the shading model
linearization, we have to resort to numerical optimization. We solve for the parameters using a few
steps of L-BFGS [27].
5.2.3 Depth Integrability Constraint
Shading only depends on local slope (regardless of the choice of shading model, as long as there
are no shadows in the scene), hence the image term only gives us information about surface slopes.
Using this information we need to find an integrable surface Z [8]. Finding integrable surfaces from local slope measurements has been a long-standing research question and there are several ways of doing this [8, 14, 18]. By finding such a surface we will satisfy both constraints in Eq. 16
automatically. Enforcing integrability through message passing was performed in [24], where it was
shown to be helpful in recovering smooth surfaces. In that work, belief-propagation-based message passing was used. The cost for this subproblem is:
$$\arg\min_{Z, \hat{\alpha}, \hat{\beta}}\; \frac{\rho}{2}\|Z - n_Z\|^2 + \frac{\rho}{2}\|\hat{\alpha} - n_{\hat{\alpha}}\|^2 + \frac{\rho}{2}\|\hat{\beta} - n_{\hat{\beta}}\|^2 \quad \text{s.t.}\; \hat{\alpha} = \nabla_x Z,\; \hat{\beta} = \nabla_y Z,\; \nabla_y \nabla_x Z = \nabla_x \nabla_y Z \tag{19}$$
We solve for the surface Z given the messages for the slopes n_α̂ and n_β̂ by solving a least-squares system to get the integrable surface. Then, the solution for α̂ and β̂ is just the spatial derivative of the resulting surface, satisfying all the constraints and minimizing the cost simultaneously.
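One standard realization of this least-squares integration is the Fourier-domain projection of Frankot and Chellappa [8]; the sketch below (our choice among the cited options [8, 14, 18], assuming periodic boundaries) recovers Z from the slope messages and re-derives consistent slopes:

```python
import numpy as np

def integrate_slopes(n_alpha, n_beta):
    """Least-squares integrable surface from slope messages (Frankot-Chellappa)."""
    h, w = n_alpha.shape
    wx = 2.0 * np.pi * np.fft.fftfreq(w)       # angular frequencies along x
    wy = 2.0 * np.pi * np.fft.fftfreq(h)       # angular frequencies along y
    WX, WY = np.meshgrid(wx, wy)
    denom = WX ** 2 + WY ** 2
    denom[0, 0] = 1.0                          # avoid division by zero at DC
    Fz = (-1j * WX * np.fft.fft2(n_alpha)
          - 1j * WY * np.fft.fft2(n_beta)) / denom
    Fz[0, 0] = 0.0                             # absolute depth is unconstrained
    Z = np.real(np.fft.ifft2(Fz))
    # Finite-difference slopes of the recovered (integrable) surface.
    return Z, np.gradient(Z, axis=1), np.gradient(Z, axis=0)
```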
5.3 Relinearization
After each ADMM iteration, we perform re-linearization of the k^c, k^x and k^y coefficients. We take the current estimates for Z and L and use them as input to our linearization procedure (see the supplementary material for details). These coefficients are then used for the next ADMM iteration, and this process is repeated.
6 Experiments and Results
[Figure 2: grouped bar charts of N-MAE and L-MSE for SIFS, Ours - GVA, and Ours - No GVA, shown in three panels: (a) models from [30] using "lab" lights, (b) MIT models using "natural" lights, (c) average result over all models and lights.]
Figure 2: Summary of results: Our performance is quite similar to that of SIFS [3] although we do
not use contour normals, nor any shape or illumination priors unlike [3]. We outperform SIFS on
models from [30], while SIFS performs well on the MIT models. On average, we are comparable to
SIFS in N-MAE and slightly better at light estimation.
We use the GVA algorithm to estimate shape and illumination from synthetic, grayscale shading
images, rendered using 18 different models from the MIT/Berkeley intrinsic images dataset [3] and
7 models from the Harvard dataset in [30]. Each of these models is rendered using several different
light sources: the MIT models are lit with a "natural" light dataset which comes with each model, and we use 2 lights from the "lab" dataset in order to light the models from [30], resulting in 32
different images. We use the provided mask just in the image term, where we solve only for pixels
within the mask. We do not use any other contour information as in [3]. Models were downscaled
to a quarter of their original size. Running times for our algorithm are roughly 7 minutes per image
[Figure 3: qualitative comparison of Ground Truth, Ours - GVA, SIFS, and Ours - No GVA, shown from two viewpoints together with the estimated light and the rendered image.]
Figure 3: Example of our results - note that the vertical scale of the mesh plots is different between
the plots and has been rescaled for display (specifically, the SIFS results are 4 times deeper). Our
method preserves features such as the legs and belly while SIFS smoothes them out. The GVA light
estimate is also quite reasonable. Unlike SIFS, no contour normals, nor tuned shape or lighting
priors are needed for GVA.
with the GVA term and about 1 minute without the GVA term. This is with unoptimized MATLAB
code. We compare to the SIFS algorithm of [3] which is a subset of their algorithm that does not
estimate albedo. We use their publicly released code.
We initialize with an all-zeros depth (corresponding to a flat surface) and the light is initialized to
the mean light from the ?natural? dataset in [3]. We perform the estimation in multiple scales using
V-sweeps - solving at a coarse scale, upscaling, solving at a finer scale then downsampling the result,
repeating the process 3 times. The same parameter settings were used in all cases3 .
We use the same error measures as in [3]. The error for the normals is measured using Median
Angular Error (MAE) in radians. For the light, we take the resulting light coefficients and render
a sphere lit by this light. We look for a DC shift which minimizes the distance between this image
and the rendered ground truth light and shift the two images. Then the final error for the light is the
L2 distance of the two images, normalized by the number of pixels. The error measure for depth Z
used in [3] is quite sensitive to the absolute scaling of the results. We have decided to omit it from
the main paper (even though our performance under this measure is much better than [3]).
A summary of the results can be seen in Figure 2. The GVA term helps significantly in estimation
results. This is especially true for light estimation. On average, our performance is similar to that
of [3]. Our light estimation results are somewhat better, while our geometry estimation results are
slightly poorer. It seems that [3] is somewhat overfit to the models in the MIT dataset. When tested
on the models from [30], it gets poorer results.
Figure 3 shows an example of the results we get, compared to that of SIFS [3], our algorithm with
no GVA term, and the ground truth. As can be seen, the light we estimate is quite close to the
ground truth. The geometry we estimate certainly captures the main structures of the ground truth.
Even though we use no smoothness prior, the resulting mesh is acceptable - though a smoothness
prior, such as the one used [3] would help significantly. The result by [3] misses a lot of the large
3 We will make our code publicly available at http://dilipkay.wordpress.com/sfs/
[Figure 4: a second qualitative comparison of Ground Truth, Ours - GVA, SIFS, and Ours - No GVA, again from two viewpoints with the estimated light and the rendered image.]
Figure 4: Another example. Note how we manage to recover some of the dominant structure like
the neck and feet, while SIFS mostly smooths features (albeit resulting in a more pleasing surface).
scale structures such as the hippo's belly and feet, but it is certainly smooth and aesthetic. It is
seen that without the GVA term, the resulting light is highly directed and the recovered shape has
snake-like structures which precisely line up with the direction of the light. These are very specific
local minima which satisfy the observed image well, in agreement with the results in [9]. Figure 4
shows some more results on a different model where the general story is similar.
7 Discussion
In this paper, we have presented a shape and illumination from shading algorithm which makes use
of the Generic View Assumption. We have shown how to utilize the GVA within an optimization
framework. We achieve competitive results on shape and illumination estimation without the use of
shape or illumination priors. The central message of our work is that the GVA can be a powerful
regularizing term for the shape from shading problem. While priors for scene parameters can be very
useful, balancing the effect of different priors can be hard and inferred results may be biased towards
a wrong solution. One may ask: is the GVA just another prior? The GVA is a prior assumption,
but a very reasonable one: it merely states that all viewpoints and lighting directions are equally
likely. Nevertheless, there may exist multiple stable solutions and priors may be necessary to enable
choosing between these solutions [16]. A classical example of this is the convex/concave ambiguity
in shape and light.
Future directions for this work are applying the GVA to more vision tasks, utilizing better optimization techniques and investigating the coexistence of priors and GVA terms.
Acknowledgments
This work was supported by NSF CISE/IIS award 1212928 and by the Qatar Computing Research
Institute. We would like to thank Jonathan Yedidia for fruitful discussions.
References
[1] J. T. Barron and J. Malik. Color constancy, intrinsic images, and shape estimation. In Computer Vision - ECCV 2012, pages 57-70. Springer, 2012.
[2] J. T. Barron and J. Malik. Shape, albedo, and illumination from a single image of an unknown object. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 334-341. IEEE, 2012.
[3] J. T. Barron and J. Malik. Shape, illumination, and reflectance from shading. Technical Report UCB/EECS-2013-117, EECS, UC Berkeley, May 2013.
[4] J. Bento, N. Derbinsky, J. Alonso-Mora, and J. S. Yedidia. A message-passing algorithm for multi-agent trajectory planning. In Advances in Neural Information Processing Systems, pages 521-529, 2013.
[5] T. O. Binford. Inferring surfaces from images. Artificial Intelligence, 17(1):205-244, 1981.
[6] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1-122, 2011.
[7] N. Derbinsky, J. Bento, V. Elser, and J. S. Yedidia. An improved three-weight message-passing algorithm. arXiv preprint arXiv:1305.1961, 2013.
[8] R. T. Frankot and R. Chellappa. A method for enforcing integrability in shape from shading algorithms. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 10(4):439-451, 1988.
[9] W. T. Freeman. Exploiting the generic viewpoint assumption. International Journal of Computer Vision, 20(3):243-261, 1996.
[10] P. V. Gehler, C. Rother, M. Kiefel, L. Zhang, and B. Schölkopf. Recovering intrinsic images with a global sparsity prior on reflectance. In NIPS, volume 2, page 4, 2011.
[11] R. Grosse, M. K. Johnson, E. H. Adelson, and W. T. Freeman. Ground truth dataset and baseline evaluations for intrinsic image algorithms. In Computer Vision, 2009 IEEE 12th International Conference on, pages 2335-2342. IEEE, 2009.
[12] D. D. Hoffman. Genericity in spatial vision. Geometric Representations of Perceptual Phenomena: Papers in Honor of Tarow Indow on His 70th Birthday, page 95, 2013.
[13] B. K. Horn. Obtaining shape from shading information. MIT Press, 1989.
[14] B. K. Horn and M. J. Brooks. The variational approach to shape from shading. Computer Vision, Graphics, and Image Processing, 33(2):174-208, 1986.
[15] K. Ikeuchi and B. K. Horn. Numerical shape from shading and occluding boundaries. Artificial Intelligence, 17(1):141-184, 1981.
[16] A. D. Jepson. Comparing stories. Perception as Bayesian Inference, pages 478-488, 1995.
[17] J. Kautz, P.-P. Sloan, and J. Snyder. Fast, arbitrary BRDF shading for low-frequency lighting using spherical harmonics. In Proceedings of the 13th Eurographics Workshop on Rendering, pages 291-296. Eurographics Association, 2002.
[18] P. Kovesi. Shapelets correlated with surface normals produce surfaces. In Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on, volume 2, pages 994-1001. IEEE, 2005.
[19] B. Kunsberg and S. W. Zucker. The differential geometry of shape from shading: Biology reveals curvature structure. In Computer Vision and Pattern Recognition Workshops (CVPRW), 2012 IEEE Computer Society Conference on, pages 39-46. IEEE, 2012.
[20] Y. Li and M. S. Brown. Single image layer separation using relative smoothness. CVPR, 2014.
[21] J. Malik. Interpreting line drawings of curved objects. International Journal of Computer Vision, 1(1):73-103, 1987.
[22] K. Nakayama and S. Shimojo. Experiencing and perceiving visual surfaces. Science, 257(5075):1357-1363, 1992.
[23] A. P. Pentland. Linear shape from shading. International Journal of Computer Vision, 4(2):153-162, 1990.
[24] N. Petrovic, I. Cohen, B. J. Frey, R. Koetter, and T. S. Huang. Enforcing integrability for surface reconstruction algorithms using belief propagation in graphical models. In Computer Vision and Pattern Recognition, 2001. CVPR 2001. Proceedings of the 2001 IEEE Computer Society Conference on, volume 1, pages I-743. IEEE, 2001.
[25] R. Ramamoorthi and P. Hanrahan. An efficient representation for irradiance environment maps. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, pages 497-500. ACM, 2001.
[26] R. Ramamoorthi and P. Hanrahan. A signal-processing framework for inverse rendering. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, pages 117-128. ACM, 2001.
[27] M. Schmidt. minFunc, 2005.
[28] R. Szeliski. Computer Vision: Algorithms and Applications. Springer, 2010.
[29] Y. Weiss. Deriving intrinsic images from image sequences. In Computer Vision, 2001. ICCV 2001. Proceedings. Eighth IEEE International Conference on, volume 2, pages 68-75. IEEE, 2001.
[30] Y. Xiong, A. Chakrabarti, R. Basri, S. J. Gortler, D. W. Jacobs, and T. Zickler. From shading to local shape. http://arxiv.org/abs/1310.2916, 2014.
[31] R. Zhang, P.-S. Tsai, J. E. Cryer, and M. Shah. Shape-from-shading: a survey. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 21(8):690-706, 1999.
Self-Paced Learning with Diversity
Lu Jiang1, Deyu Meng1,2, Shoou-I Yu1, Zhenzhong Lan1, Shiguang Shan1,3, Alexander G. Hauptmann1
1 School of Computer Science, Carnegie Mellon University
2 School of Mathematics and Statistics, Xi'an Jiaotong University
3 Institute of Computing Technology, Chinese Academy of Sciences
[email protected], [email protected]
{iyu, lanzhzh}@cs.cmu.edu, [email protected], [email protected]
Abstract
Self-paced learning (SPL) is a recently proposed learning regime inspired by the
learning process of humans and animals that gradually incorporates easy to more
complex samples into training. Existing methods are limited in that they ignore
an important aspect in learning: diversity. To incorporate this information, we
propose an approach called self-paced learning with diversity (SPLD) which formalizes the preference for both easy and diverse samples into a general regularizer.
This regularization term is independent of the learning objective, and thus can be
easily generalized into various learning tasks. Albeit non-convex, the optimization
of the variables included in this SPLD regularization term for sample selection can
be globally solved in linearithmic time. We demonstrate that our method significantly outperforms the conventional SPL on three real-world datasets. Specifically, SPLD achieves the best MAP so far reported in literature on the Hollywood2
and Olympic Sports datasets.
1 Introduction
Since it was introduced in 2009, Curriculum Learning (CL) [1] has been attracting increasing attention
in the field of machine learning and computer vision [2]. The learning paradigm is inspired by the
learning principle underlying the cognitive process of humans and animals, which generally starts
with learning easier aspects of an aimed task, and then gradually takes more complex examples into
consideration. It has been empirically demonstrated to be beneficial in avoiding bad local minima
and in achieving a better generalization result [1].
A sequence of gradually added training samples [1] is called a curriculum. A straightforward way
to design a curriculum is to select samples based on certain heuristic "easiness" measurements [3,
4, 5]. This ad-hoc implementation, however, is problem-specific and lacks generalization capacity.
To alleviate this deficiency, Kumar et al. [6] proposed a method called Self-Paced Learning (SPL)
that embeds curriculum designing into model learning. SPL introduces a regularization term into
the learning objective so that the model is jointly learned with a curriculum consisting of easy to
complex samples. As its name suggests, the curriculum is gradually determined by the model itself
based on what it has already learned, as opposed to some predefined heuristic criteria. Since the
curriculum in the SPL is independent of model objectives in specific problems, SPL represents a
general implementation [7, 8] for curriculum learning.
In SPL, samples in a curriculum are selected solely in terms of "easiness". In this work, we reveal
that diversity, an important aspect in learning, should also be considered. Ideal self-paced learning
should utilize not only easy but also diverse examples that are sufficiently dissimilar from what has
already been learned. Theoretically, considering diversity in learning is consistent with the increasing entropy theory in CL that a curriculum should increase the diversity of training examples [1].
This can be intuitively explained in the context of human education. A rational curriculum for a
pupil not only needs to include examples of suitable easiness matching her learning pace, but also,
[Figure 1: keyframes of positive training samples for "Rock Climbing" from three sub-events (outdoor bouldering, artificial wall climbing, snow mountain climbing), each labeled with its loss; the curriculum for SPL runs from easy to hard within one group, while the curriculum for SPLD mixes easy and diverse samples across groups.]
Figure 1: Illustrative comparison of SPL and SPLD on the "Rock Climbing" event using real samples [15]. SPL tends to first select the easiest samples from a single group. SPLD inclines to select
easy and diverse samples from multiple groups.
importantly, should include some diverse examples on the subject in order for her to develop more
comprehensive knowledge. Likewise, learning from easy and diverse samples is expected to be
better than learning from either criterion alone.
We name the learning paradigm that considers both easiness and diversity Self-Paced Learning with
Diversity (SPLD). SPLD proves to be a general learning framework as its intuition is embedded as
a regularization term that is independent of specific model objectives. In addition, by considering
diversity in learning, SPLD is capable of obtaining better solutions. For example, Fig. 1 plots some
positive samples for the event "Rock Climbing" on a real dataset, named MED [15]. Three groups
of samples are depicted for illustration. The number under the keyframe indicates the loss, and a
smaller loss corresponds to an easier sample. Every group has easy and complex samples. Having
learned some samples from a group, the SPL model prefers to select more samples from the same group, as they appear easy given what the model has already learned. This may lead to overfitting to a data subset while ignoring easy samples in other groups. For example, in Fig. 1, the samples selected in the first iterations of SPL are all from the "Outdoor bouldering" sub-event because they all look like a1. This is significant, as the overfitting becomes more and more severe as samples from the same group keep being added into training. This phenomenon is more evident in real-world data where the
collected samples are usually biased towards some groups. In contrast, SPLD, considering both easiness and diversity, produces a curriculum that reasonably mixes easy samples from multiple groups.
The diverse curriculum is expected to help quickly grasp easy and comprehensive knowledge and to
obtain better solutions. This hypothesis is substantiated by our experiments.
The contribution of this paper is threefold: (1) We propose a novel idea of considering both easiness
and diversity in the self-paced learning, and formulate it into a concise regularization term that
can be generally applied to various problems (Section 4.1). (2) We introduce the algorithm that
globally optimizes a non-convex problem w.r.t. the variables included in this SPLD regularization
term for sample selection (Section 4.2). (3) We demonstrate that the proposed SPLD significantly
outperforms SPL on three real-word datasets. Notably, SPLD achieves the best MAP so far reported
in literature on two action datasets.
2 Related work
Bengio et al. [1] proposed a new learning paradigm called curriculum learning (CL), in which a model is learned by gradually including samples into training from easy to complex so as to increase the
entropy of training samples. Afterwards, Bengio and his colleagues [2] presented insightful explorations for the rationality underlying this learning paradigm, and discussed the relationship between
the CL and conventional optimization techniques, e.g., the continuation and annealing methods.
From a human behavioral perspective, Khan et al. [10] provided evidence that CL is consistent with the principles of teaching. The curriculum is often derived from predetermined heuristics in particular
problems. For example, Ruvolo and Eaton [3] took the negative distance to the boundary as the indicator for easiness in classification. Spitkovsky et al. [4] used the sentence length as an indicator in
studying grammar induction. Shorter sentences have fewer possible solutions and thus were learned
earlier. Lapedriza et al. [5] proposed a similar approach by first ranking examples based on certain
"training values" and then greedily training the model on these sorted examples.
The ad-hoc curriculum design in CL turns out to be onerous or conceptually difficult to implement in
different problems. To alleviate this issue, Kumar et al. [6] designed a new formulation, called
self-paced learning (SPL). SPL embeds curriculum design (from easy to more complex samples)
into model learning. By virtue of its generality, various applications based on the SPL have been
proposed very recently [7, 8, 11, 12, 13]. For example, Jiang et al. [7] discovered that pseudo
relevance feedback is a type of self-paced learning which explains the rationale of this iterative
algorithm, starting from the easy examples, i.e., the top-ranked documents/videos. Tang et al. [8]
formulated a self-paced domain adaptation approach by training target domain knowledge starting
with easy samples in the source domain. Kumar et al. [11] developed an SPL strategy for the
specific-class segmentation task. Supančič and Ramanan [12] designed an SPL method for long-term tracking by setting the smallest increase in the SVM objective as the loss function. To the best of our knowledge, there have been no studies that incorporate diversity in SPL.
3 Self-Paced Learning
Before introducing our approach, we first briefly review the SPL. Given the training dataset D =
{(x1, y1), …, (xn, yn)}, where xi ∈ R^m denotes the i-th observed sample, and yi represents its
label, let L(yi , f (xi , w)) denote the loss function which calculates the cost between the ground
truth label yi and the estimated label f (xi , w). Here w represents the model parameter inside the
decision function f . In SPL, the goal is to jointly learn the model parameter w and the latent weight
variable v = [v1, …, vn] by minimizing:
$$\min_{w,v} E(w, v; \lambda) = \sum_{i=1}^{n} v_i L(y_i, f(x_i, w)) - \lambda \sum_{i=1}^{n} v_i, \quad \text{s.t. } v \in [0, 1]^n, \tag{1}$$
where λ is a parameter for controlling the learning pace. Eq. (1) indicates the loss of a sample is discounted by a weight. The objective of SPL is to minimize the weighted training loss together with the negative l1-norm regularizer $-\lambda\|v\|_1 = -\lambda\sum_{i=1}^{n} v_i$ (since vi ≥ 0). This regularization term
is general and applicable to various learning tasks with different loss functions [7, 11, 12].
ACS (Alternative Convex Search) is generally used to solve Eq. (1) [6, 8]. It is an iterative method
for biconvex optimization, in which the variables are divided into two disjoint blocks. In each
iteration, a block of variables is optimized while keeping the other block fixed. When v is fixed, the existing off-the-shelf supervised learning methods can be employed to obtain the optimal w*. With the fixed w, the global optimum v* = [v1*, …, vn*] can be easily calculated by [6]:
$$v_i^* = \begin{cases} 1, & L(y_i, f(x_i, w)) < \lambda, \\ 0, & \text{otherwise.} \end{cases} \tag{2}$$
There exists an intuitive explanation behind this alternative search strategy: 1) when updating v with
a fixed w, a sample whose loss is smaller than a certain threshold λ is taken as an "easy" sample, and will be selected in training (vi* = 1), or otherwise unselected (vi* = 0); 2) when updating w with a fixed v, the classifier is trained only on the selected "easy" samples. The parameter λ controls the pace at which the model learns new samples, and physically λ corresponds to the "age" of the model. When λ is small, only "easy" samples with small losses will be considered. As λ grows, more samples with larger losses will be gradually appended to train a more "mature" model.
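The whole alternating scheme fits in a few lines; in the sketch below (our illustration), `train_model` and `sample_loss` are placeholder hooks rather than a particular library's API, and the growth factor plays the role of increasing the age parameter λ:

```python
import numpy as np

def self_paced_learning(X, y, train_model, sample_loss,
                        lam=0.1, growth=1.3, n_iters=20):
    """Alternative convex search for Eq. (1)."""
    v = np.random.rand(len(y)) < 0.5           # random initial selection
    model = None
    for _ in range(n_iters):
        model = train_model(X[v], y[v])        # update w with v fixed
        losses = sample_loss(model, X, y)      # per-sample training losses
        v = losses < lam                       # closed-form v update, Eq. (2)
        lam *= growth                          # let the model "age"
    return model
```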
4 Self-Paced Learning with Diversity
In this section we detail the proposed learning paradigm called SPLD. We first formally define its
objective in Section 4.1, and discuss an efficient algorithm to solve the problem in Section 4.2.
4.1 SPLD Model
Diversity implies that the selected samples should be less similar or clustered. An intuitive approach
for realizing this is by selecting samples of different groups scattered in the sample space. We
assume that the correlation of samples between groups is less than that within a group. This auxiliary group membership is either given, e.g. in object recognition frames from the same video can be regarded as belonging to the same group, or it can be obtained by clustering samples.
This aim of SPLD can be mathematically described as follows. Assume that the training samples
X = (x1, …, xn) ∈ R^{m×n} are partitioned into b groups: X^(1), …, X^(b), where columns of X^(j) ∈ R^{m×n_j} correspond to the samples in the j-th group, n_j is the sample number in the group, and $\sum_{j=1}^{b} n_j = n$. Accordingly denote the weight vector as v = [v^(1), …, v^(b)], where $v^{(j)} = (v_1^{(j)}, \ldots, v_{n_j}^{(j)})^T \in [0, 1]^{n_j}$. SPLD on one hand needs to assign nonzero weights of v to easy
samples as in the conventional SPL, and on the other hand requires dispersing nonzero elements across possibly more groups v^(i) to increase the diversity. Both requirements can be uniformly realized
through the following optimization model:
$$\min_{w,v} E(w, v; \lambda, \gamma) = \sum_{i=1}^{n} v_i L(y_i, f(x_i, w)) - \lambda \sum_{i=1}^{n} v_i - \gamma\|v\|_{2,1}, \quad \text{s.t. } v \in [0, 1]^n, \tag{3}$$
where λ, γ are the parameters imposed on the easiness term (the negative l1-norm: -‖v‖1) and the diversity term (the negative l2,1-norm: -‖v‖2,1), respectively. As for the diversity term, we have:
$$-\|v\|_{2,1} = -\sum_{j=1}^{b} \|v^{(j)}\|_2. \tag{4}$$
The SPLD introduces a new regularization term in Eq. (3) which consists of two components. One
is the negative l1 -norm inherited from the conventional SPL, which favors selecting easy over complex examples. The other is the proposed negative l2,1 -norm, which favors selecting diverse samples residing in more groups. It is well known that the l2,1 -norm leads to the group-wise sparse
representation of v [14], i.e. non-zero entries of v tend to be concentrated in a small number of
groups. Contrariwise, the negative l2,1 -norm should have a counter-effect to group-wise sparsity,
i.e. nonzero entries of v tend to be scattered across a large number of groups. In other words, this
anti-group-sparsity representation is expected to realize the desired diversity. Note that when each
group only contains a single sample, Eq. (3) degenerates to Eq. (1).
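A small numeric check (NumPy assumed; the toy weights and groups are ours) makes this anti-group-sparsity effect visible: with λ = γ = 1, spreading three selected samples over two groups gives a lower, i.e. preferred, regularizer value than packing them into one group:

```python
import numpy as np

def spld_regularizer(v, groups, lam, gamma):
    """Negative l1 plus negative l2,1 terms of Eqs. (3)-(4)."""
    l21 = sum(np.linalg.norm(v[groups == g]) for g in np.unique(groups))
    return -lam * np.sum(v) - gamma * l21

v = np.array([1.0, 1.0, 1.0, 0.0])             # three samples selected
print(spld_regularizer(v, np.array([0, 0, 0, 1]), 1.0, 1.0))  # one group: ~ -4.73
print(spld_regularizer(v, np.array([0, 0, 1, 1]), 1.0, 1.0))  # two groups: ~ -5.41
```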
Unlike the convex regularization term in Eq. (1) of SPL, the term in the SPLD is non-convex. Consequently, the traditional (sub)gradient-based methods cannot be directly applied to optimizing v.
We will discuss an algorithm to resolve this issue in the next subsection.
4.2 SPLD Algorithm
Similar to the SPL, the alternative search strategy can be employed for solving Eq. (3). However, a
challenge is that optimizing v with a fixed w becomes a non-convex problem. We propose a simple
yet effective algorithm for extracting the global optimum of this problem, as listed in Algorithm 1.
It takes as input the groups of samples, the up-to-date model parameter w, and two self-paced
parameters, and outputs the optimal v of min_v E(w, v; λ, γ). The global minimum is proved in the
following theorem (see the proof in supplementary materials):
Theorem 1. Algorithm 1 attains the global optimum of min_v E(w, v) for any given w in linearithmic time.
As shown, Algorithm 1 selects samples in terms of both the easiness and the diversity. Specifically:
• Samples with L(yi, f(xi, w)) < λ will be selected in training (vi = 1) in Step 5. These samples represent the "easy" examples with small losses.
• Samples with L(yi, f(xi, w)) > λ + γ will not be selected in training (vi = 0) in Step 6. These samples represent the "complex" examples with larger losses.
• Other samples will be selected by comparing their losses to a threshold $\lambda + \frac{\gamma}{\sqrt{i} + \sqrt{i-1}}$, where i is the sample's rank w.r.t. its loss value within its group. The sample with a smaller loss
than the threshold will be selected in training. Since the threshold decreases considerably
as the rank i grows, Step 5 penalizes samples monotonously selected from the same group.
We study a tractable example that allows for clearer diagnosis in Fig. 2, where each keyframe represents a video sample of the event "Rock Climbing" in the TRECVID MED data [15], and the
number below indicates its loss. The samples are clustered into four groups based on the visual
similarity. A colored block on the right shows a curriculum selected by Algorithm 1. When γ = 0,
Algorithm 1: Algorithm for Solving min_v E(w, v; λ, γ).
input : Input dataset D, groups X^(1), …, X^(b), w, λ, γ
output: The global solution v = (v^(1), …, v^(b)) of min_v E(w, v; λ, γ).
1 for j = 1 to b do // for each group
2   Sort the samples in X^(j) as (x_1^(j), …, x_{n_j}^(j)) in ascending order of their loss values L;
3   Accordingly, denote the labels and weights of X^(j) as (y_1^(j), …, y_{n_j}^(j)) and (v_1^(j), …, v_{n_j}^(j));
4   for i = 1 to n_j do // easy samples first
5     if L(y_i^(j), f(x_i^(j), w)) < λ + γ/(√i + √(i−1)) then v_i^(j) = 1; // select this sample
6     else v_i^(j) = 0; // do not select this sample
7   end
8 end
9 return v
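A direct NumPy transcription of Algorithm 1 (our sketch; `losses` and `groups` are per-sample arrays) makes the rank-dependent threshold explicit:

```python
import numpy as np

def select_easy_diverse(losses, groups, lam, gamma):
    """Global solution of min_v E(w, v; lam, gamma) for fixed per-sample losses."""
    v = np.zeros(len(losses))
    for g in np.unique(groups):                     # for each group
        idx = np.where(groups == g)[0]
        order = idx[np.argsort(losses[idx])]        # easy samples first
        for rank, sample in enumerate(order, start=1):
            threshold = lam + gamma / (np.sqrt(rank) + np.sqrt(rank - 1))
            if losses[sample] < threshold:
                v[sample] = 1.0                     # select this sample
    return v
```

The easiest sample in each group faces the loosest threshold λ + γ, and the threshold decays toward λ as more samples from the same group are taken, which is precisely the penalty described above.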
[Figure 2: keyframes a-n from four groups (outdoor bouldering, bear climbing a rock, artificial wall climbing, snow mountain climbing), each labeled with its loss; three panels show the curricula selected by Algorithm 1: (a) a, b, c, d; (b) a, j, g, b; (c) a, j, g, n.]
Figure 2: An example of samples selected by Algorithm 1. A colored block denotes a curriculum with given λ and γ, and the bold (red) box indicates the easy sample selected by Algorithm 1.
as shown in Fig. 2(a), SPLD, which is identical to SPL, selects only easy samples (with the smallest
losses) from a single cluster. Its curriculum thus includes duplicate samples like b, c, d with the same
loss value. When λ ≠ 0 and γ ≠ 0 in Fig. 2(b), SPLD balances the easiness and the diversity, and produces a reasonable and diverse curriculum: a, j, g, b. Note that even if there exist 3 duplicate samples b, c, d, SPLD only selects one of them due to the decreasing threshold in Step 5 of Algorithm 1. Likewise, samples e and j share the same loss, but only j is selected as it is better at increasing the diversity. In an extreme case where λ = 0 and γ ≠ 0, as illustrated in Fig. 2(c), SPLD selects only diverse samples, and thus may choose outliers, such as the sample n, which is a confusable video about a bear climbing a rock. Therefore, considering both easiness and diversity seems to be more reasonable than considering either one alone. Physically, the parameters λ and γ together correspond to the "age" of the model, where λ focuses on easiness whereas γ stresses diversity.
As Algorithm 1 finds the optimal v, the alternative search strategy can be readily applied to solving Eq. (3). The details are listed in Algorithm 2. As aforementioned, Step 4 can be implemented
using the existing off-the-shelf learning method. Following [6], we initialize v by setting vi = 1 to
randomly selected samples. Following SPL [6], the self-paced parameters are updated by absolute
values of μ1, μ2 (μ1, μ2 ≥ 1) in Step 6 at the end of every iteration. In practice, it seems more robust to first sort samples in ascending order of their losses, and then set λ, γ according
to the statistics collected from the ranked samples (see the discussion in supplementary materials). According to [6], the alternative search in Algorithm 1 converges as the objective function is
monotonically decreasing and is bounded from below.
5 Experiments
We present experimental results for the proposed SPLD on two tasks: event detection and action
recognition. We demonstrate that our approach significantly outperforms SPL on three real-world
challenging datasets. The code is at http://www.cs.cmu.edu/~lujiang/spld.
Algorithm 2: Algorithm of Self-Paced Learning with Diversity.
input : Input dataset D, self-pace parameters μ1, μ2
output: Model parameter w
1 if no prior clusters exist then cluster the training samples X into b groups X^(1), …, X^(b);
2 Initialize v*, λ, γ; // assign the starting values
3 while not converged do
4   Update w* = arg min_w E(w, v*; λ, γ); // train a classification model
5   Update v* = arg min_v E(w*, v; λ, γ) using Algorithm 1; // select easy & diverse samples
6   λ ← μ1 λ; γ ← μ2 γ; // update the learning pace
7 end
8 return w = w*
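Combining the pieces, the full SPLD loop of Algorithm 2 can be sketched as follows (our illustration, reusing `select_easy_diverse` from the snippet above; `train_model` and `sample_loss` remain placeholder hooks, and the default pace factors are arbitrary):

```python
import numpy as np

def spld_train(X, y, groups, train_model, sample_loss,
               lam=0.1, gamma=0.1, mu1=1.3, mu2=1.3, n_iters=20):
    v = (np.random.rand(len(y)) < 0.5).astype(float)   # random start
    model = None
    for _ in range(n_iters):
        sel = v > 0
        model = train_model(X[sel], y[sel])            # Step 4: update w
        losses = sample_loss(model, X, y)
        v = select_easy_diverse(losses, groups, lam, gamma)  # Step 5
        lam, gamma = mu1 * lam, mu2 * gamma            # Step 6: grow the pace
    return model
```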
SPLD is compared against four baseline methods: 1) RandomForest is a robust bootstrap method
that trains multiple decision trees using randomly selected samples and features [16]. 2) AdaBoost is
a classical ensemble approach that combines the sequentially trained ?base? classifiers in a weighted
fashion [18]. Samples that are misclassified by one base classifier are given greater weight when
used to train the next classifier in sequence. 3) BatchTrain represents a standard training approach
in which a model is trained simultaneously using all samples; 4) SPL is a state-of-the-art method
that trains models gradually from easy to more complex samples [6]. The baseline methods are a
mixture of the well-known and the state-of-the-art methods on training models using sampled data.
5.1 Multimedia Event Detection (MED)
Problem Formulation Given a collection of videos, the goal of MED is to detect events of interest,
e.g. "Birthday Party" and "Parade", solely based on the video content. The task is very challenging
due to complex scenes, camera motion, occlusions, etc. [17, 19, 8].
Dataset The experiments are conducted on the largest collection on event detection: TRECVID
MED13Test, which consists of about 32,000 Internet videos. There are a total of 3,490 videos from
20 complex events, and the rest are background videos. For each event 10 positive examples are
given to train a detector, which is tested on about 25,000 videos. The official test split released by
NIST (National Institute of Standards and Technology) is used [15].
Experimental setting A Deep Convolutional Neural Network is trained on 1.2 million ImageNet
challenge images from 1,000 classes [20] to represent each video as a 1,000-dimensional vector.
Algorithm 2 is used. By default, the group membership is generated by the spectral clustering, and
the number of groups is set to 64. Following [9, 8], LibLinear is used as the solver in Step 4 of
Algorithm 2 due to its robust performance on this task. The performance is evaluated using MAP as
recommended by NIST. The parameters of all methods are tuned on the same validation set.
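For reference, non-interpolated average precision (the per-event quantity whose mean over the 20 events gives MAP) can be computed as below; this is our minimal sketch, and NIST's official scorer may differ in interpolation and tie-breaking details.

```python
import numpy as np

def average_precision(scores, labels):
    """AP for one event: mean precision at the rank of each positive video."""
    order = np.argsort(-scores)                  # rank videos by detector confidence
    labels = np.asarray(labels)[order]
    hits = np.cumsum(labels)
    precision_at_k = hits / np.arange(1, len(labels) + 1)
    return precision_at_k[labels == 1].mean()

# MAP is then np.mean([average_precision(s, l) for s, l in per_event_results]).
```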
Table 1 lists the overall MAP comparison. To reduce the influence of initialization, we
repeated experiments of SPL and SPLD 10 times with random starting values, and report the best
run and the mean (with the 95% confidence interval) of the 10 runs. The proposed SPLD outperforms
all baseline methods with statistically significant differences at the p-value level of 0.05, according
to the paired t-test. It is worth emphasizing that MED is very challenging [15] and 26% relative
(2.5 absolute) improvement over SPL is a notable gain. SPLD outperforms other baselines on both
the best run and the 10 runs average. RandomForest and AdaBoost yield poorer performance. This
observation agrees with the study in literature [15, 9] that SVM is more robust on event detection.
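The significance test itself is standard; a sketch with SciPy, using synthetic placeholder numbers in place of the actual per-run MAP values (which we do not reproduce here):

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
spl_maps = rng.normal(loc=8.6, scale=0.4, size=10)     # placeholder per-run MAPs
spld_maps = spl_maps + rng.normal(loc=1.2, scale=0.3, size=10)

t_stat, p_value = ttest_rel(spld_maps, spl_maps)       # paired across matched restarts
print(p_value < 0.05)                                  # significance at the 0.05 level
```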
Table 1: MAP (x100) comparison with the baseline methods on MED.

Run Name         RandomForest   AdaBoost   BatchTrain   SPL        SPLD
Best Run         3.0            2.8        8.3          9.6        12.1
10 Runs Average  3.0            2.8        8.3          8.6±0.42   9.8±0.45
BatchTrain, SPL and SPLD are all performed using SVM. Regarding the best run, SPL boosts the
MAP of the BatchTrain by a relative 15.6% (absolute 1.3%). SPLD yields another 26% (absolute
2.5%) over SPL. The MAP gain suggests that optimizing the objective with the diversity term tends
to attain a better solution. Fig. 3 plots the validation and test AP on three representative events.
As illustrated, SPLD attains a better solution within fewer iterations than SPL; e.g., in Fig. 3(a)
SPLD obtains the best test AP (0.14) within 6 iterations, as opposed to AP (0.12) within 11 iterations in SPL.
[Figure 3: six AP-versus-iteration plots (50 training iterations each) for three representative events: (a) E006: Birthday party; (b) E008: Flash mob gathering; (c) E023: Dog show. The top row shows SPL and the bottom row SPLD; each panel traces Dev AP and Test AP, with the iteration-independent Test AP of BatchTrain as a reference line.]
Figure 3: The validation and test AP in different iterations. Top row plots the SPL result and bottom
shows the proposed SPLD result. The x-axis represents the iteration in training. The blue solid curve
(Dev AP) denotes the AP on the validation set, the red one marked by squares (Test AP) denotes the
AP on the test set, and the green dashed curve denotes the Test AP of BatchTrain which remains the
same across iterations.
[Figure 4: positive samples used at successive training iterations (Iter 1 through Iter 10) for two events, "E006: Birthday party" and "E007: Changing a vehicle tire". In (a) SPL, consecutive iterations select near-duplicate samples (e.g., repeated indoor birthday scenes, repeated car/truck shots); in (b) SPLD, each iteration mixes samples across groups (indoor and outdoor birthday parties; car/truck and bicycle/scooter shots).]
Figure 4: Comparison of positive samples used in each iteration by (a) SPL (b) SPLD.
Studies [1, 6] have shown that SPL converges fast, and this observation further suggests that
SPLD may lead to an even faster convergence. We hypothesize that this is because the diverse samples
learned in the early iterations of SPLD tend to be more informative. The best Test APs of both SPL
and SPLD are better than BatchTrain, which is consistent with the observation in [5] that removing
some samples may be beneficial for training a better detector. As shown, Dev AP and Test AP share
a similar pattern, which justifies tuning the parameters on the validation set.
Fig. 4 plots the curricula generated by SPL and SPLD in the first few iterations on two representative
events. As we can see, SPL tends to select easy samples similar to what it has already learned, whereas
SPLD selects samples that are both easy and diverse with respect to the model. For example, for the event
"E006: Birthday Party", SPL keeps selecting indoor scenes due to the samples learned in the first place,
whereas the samples learned by SPLD are a mixture of indoor and outdoor birthday parties. Both methods
leave the complex samples to the last iterations, e.g., the 10th video in "E007".
5.2 Action Recognition
Problem Formulation The goal is to recognize human actions in videos.
Datasets Two representative datasets are used: Hollywood2 was collected from 69 different Hollywood movies [21]. It contains 1,707 videos belonging to 12 actions, split into a training set (823
videos) and a test set (884 videos). Olympic Sports consists of athletes practicing different sports
collected from YouTube [22]. There are 16 sports actions from 783 clips. We use 649 for training
and 134 for testing as recommended in [22].
Experimental setting The improved dense trajectory feature is extracted and further represented by
the Fisher vector [23, 24]. A similar setting to the one discussed in Section 5.1 is applied, except that the groups
are generated by K-means (K=128).
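For concreteness, group generation might look like the following sketch (make_groups is a hypothetical helper; scikit-learn's KMeans is one concrete choice, and SpectralClustering would be the analogous choice for the setting of Section 5.1):

```python
from sklearn.cluster import KMeans

def make_groups(features, k=128, seed=0):
    """Cluster the per-video Fisher vectors and use cluster ids as the
    group labels b(i) consumed by Algorithm 1."""
    km = KMeans(n_clusters=k, random_state=seed).fit(features)
    return km.labels_  # one group id per training sample
```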
Table 2 lists the MAP comparison on the two datasets. A similar pattern can be observed: SPLD
outperforms SPL and the other baseline methods with statistically significant differences. We
then compare our MAP with the state-of-the-art MAP in Table 3.
Table 2: MAP (x100) comparison with the baseline methods on Hollywood2 and Olympic Sports.

Run Name        RandomForest   AdaBoost   BatchTrain   SPL     SPLD
Hollywood2      28.20          41.14      58.16        63.72   66.65
Olympic Sports  63.32          69.25      90.61        90.83   93.11
Indeed, this comparison may be less fair since the features differ across methods. Nevertheless, with the help of SPLD,
we are able to achieve the best MAP reported so far on both datasets. Note that the MAPs in Table 3
were obtained by recent and very competitive methods for action recognition. This improvement
supports the assumption that considering diversity in learning is instrumental.
Table 3: Comparison of SPLD to the state-of-the-art on Hollywood2 and Olympic Sports.

Hollywood2                       Olympic Sports
Vig et al. 2012 [25]     59.4%   Brendel et al. 2011 [28]   73.7%
Jiang et al. 2012 [26]   59.5%   Jiang et al. 2012 [26]     80.6%
Jain et al. 2013 [27]    62.5%   Gaidon et al. 2012 [29]    82.7%
Wang et al. 2013 [23]    64.3%   Wang et al. 2013 [23]      91.2%
SPLD                     66.7%   SPLD                       93.1%
5.3 Sensitivity Study
We conduct experiments using different numbers of groups generated by two clustering algorithms:
K-means and spectral clustering. Each experiment is fully tuned for the given number of groups and
clustering algorithm, and the best run is reported in Table 4. The results suggest that SPLD is
relatively insensitive to the clustering method and the given number of groups. We hypothesize that
SPLD may not improve SPL in cases where the assumption in Section 4.1 is violated and the
given groups, e.g. random clusters, cannot reflect the latent diversity in the data.
Table 4: MAP (x100) comparison of different clustering algorithms and #clusters.

Dataset      SPL        Clustering   #Groups=32   #Groups=64   #Groups=128   #Groups=256
MED          8.6±0.42   K-means      9.16±0.31    9.20±0.36    9.25±0.32     9.03±0.28
                        Spectral     9.29±0.42    9.79±0.45    9.22±0.41     9.38±0.43
Hollywood2   63.72      K-means      66.372       66.358       66.653        66.365
                        Spectral     66.639       66.504       66.264        66.709
Olympic      90.83      K-means      91.86        92.37        93.11         92.65
                        Spectral     91.08        92.51        93.25         92.54
6 Conclusion
We advanced the frontier of self-paced learning by proposing a novel idea that considers both
easiness and diversity in learning. We introduced a non-convex regularization term that favors selecting both easy and diverse samples. The proposed regularization term is general and can be
applied to various problems. We proposed a linearithmic algorithm that finds the global optimum of
this non-convex problem when updating the samples to be included. Using three real-world datasets,
we showed that the proposed SPLD outperforms the state-of-the-art approaches.
Possible directions for future work include studying diversity for samples in mixture
models, e.g. mixtures of Gaussians, in which a sample is assigned to a mixture of clusters. Another
possible direction is studying how to assign reliable starting values for SPL/SPLD.
Acknowledgments
This work was partially supported by Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior National Business Center contract number D11PC20068. Deyu Meng was partially supported
by 973 Program of China (3202013CB329404) and the NSFC project (61373114). The U.S. Government is
authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should
not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied,
of IARPA, DoI/NBC, or the U.S. Government.
References
[1] Y. Bengio, J. Louradour, R. Collobert, and J. Weston, Curriculum learning. In ICML, 2009.
[2] Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. IEEE
Trans. PAMI 35(8):1798-1828, 2013.
[3] S. Basu and J. Christensen. Teaching classification boundaries to humans. In AAAI, 2013.
[4] V. I. Spitkovsky, H. Alshawi, and D. Jurafsky. Baby steps: How "Less is More" in unsupervised dependency
parsing. In NIPS, 2009.
[5] A. Lapedriza, H. Pirsiavash, Z. Bylinskii, and A. Torralba. Are all training examples equally valuable?
CoRR abs/1311.6510, 2013.
[6] M. P. Kumar, B. Packer, and D. Koller. Self-paced learning for latent variable models. In NIPS, 2010.
[7] L. Jiang, D. Meng, T. Mitamura, and A. Hauptmann. Easy samples first: self-paced reranking for zero-example multimedia search. In MM, 2014.
[8] K. Tang, V. Ramanathan, L. Fei-Fei, and D. Koller. Shifting weights: Adapting object detectors from image
to video. In NIPS, 2012.
[9] Z. Lan, L. Jiang, S. I. Yu, et al. CMU-Informedia at TRECVID 2013 multimedia event detection. In
TRECVID, 2013.
[10] F. Khan, X. J. Zhu, and B. Mutlu. How do humans teach: On curriculum learning and teaching dimension.
In NIPS, 2011.
[11] M. P. Kumar, H. Turki, D. Preston, and D. Koller. Learning specific-class segmentation from diverse data.
In ICCV, 2011.
[12] J. S. Supancic and D. Ramanan. Self-paced learning for long-term tracking. In CVPR, 2013.
[13] Y. J. Lee and K. Grauman. Learning the easy things first: Self-paced visual category discovery. In CVPR,
2011.
[14] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the
Royal Statistical Society: Series B 68(1):49-67, 2006.
[15] P. Over, G. Awad, M. Michel, et al. TRECVID 2013 - an overview of the goals, tasks, data, evaluation
mechanisms and metrics. In TRECVID, 2013.
[16] L. Breiman. Random forests. Machine learning. 45(1):5-32, 2001.
[17] L. Jiang, A. Hauptmann, and G. Xiang. Leveraging high-level and low-level features for multimedia event
detection. In MM, 2012.
[18] J. Friedman. Stochastic gradient boosting. Computational Statistics and Data Analysis, 38(4):367-378,
2002.
[19] L. Jiang, T. Mitamura, S. Yu, and A. Hauptmann. Zero-Example Event Search using MultiModal Pseudo
Relevance Feedback. In ICMR, 2014.
[20] J. Deng, W. Dong, R. Socher, L. J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image
database. In CVPR, 2009.
[21] M. Marszalek, I. Laptev, and C. Schmid. Actions in context. In CVPR, 2009.
[22] J. C. Niebles, C. W. Chen, and L. Fei-Fei. Modeling temporal structure of decomposable motion segments
for activity classification. In ECCV, 2010.
[23] H. Wang and C. Schmid. Action recognition with improved trajectories. In ICCV, 2013.
[24] Z. Lan, X. Li, and A. Hauptmann. Temporal Extension of Scale Pyramid and Spatial Pyramid Matching
for Action Recognition. In arXiv preprint arXiv:1408.7071, 2014.
[25] E. Vig, M. Dorr, and D. Cox. Space-variant descriptor sampling for action recognition based on saliency
and eye movements. In ECCV, 2012.
[26] Y. G. Jiang, Q. Dai, X. Xue, W. Liu, and C. W. Ngo. Trajectory-based modeling of human actions with
motion reference points. In ECCV, 2012.
[27] M. Jain, H. Jégou, and P. Bouthemy. Better exploiting motion for better action recognition. In CVPR,
2013.
[28] W. Brendel and S. Todorovic. Learning spatiotemporal graphs of human activities. In ICCV, 2011.
[29] A. Gaidon, Z. Harchaoui, and C. Schmid. Recognizing activities with cluster-trees of tracklets. In BMVC,
2012.
5,046 | 5,569 | Hardness of parameter estimation
in graphical models
Guy Bresler1 David Gamarnik2 Devavrat Shah1
Laboratory for Information and Decision Systems
Department of EECS1 and Sloan School of Management2
Massachusetts Institute of Technology
{gbresler,gamarnik,devavrat}@mit.edu
Abstract
We consider the problem of learning the canonical parameters specifying an undirected graphical model (Markov random field) from the mean parameters. For
graphical models representing a minimal exponential family, the canonical parameters are uniquely determined by the mean parameters, so the problem is feasible
in principle. The goal of this paper is to investigate the computational feasibility of this statistical task. Our main result shows that parameter estimation is in
general intractable: no algorithm can learn the canonical parameters of a generic
pair-wise binary graphical model from the mean parameters in time bounded by a
polynomial in the number of variables (unless RP = NP). Indeed, such a result has
been believed to be true (see [1]) but no proof was known.
Our proof gives a polynomial time reduction from approximating the partition
function of the hard-core model, known to be hard, to learning approximate parameters. Our reduction entails showing that the marginal polytope boundary has
an inherent repulsive property, which validates an optimization procedure over
the polytope that does not use any knowledge of its structure (as required by the
ellipsoid method and others).
1 Introduction
Graphical models are a powerful framework for succinct representation of complex high-dimensional distributions. As such, they are at the core of machine learning and artificial intelligence, and are used in a variety of applied fields including finance, signal processing, communications, biology, as well as the modeling of social and other complex networks. In this paper we focus
on binary pairwise undirected graphical models, a rich class of models with wide applicability. This
is a parametric family of probability distributions, and for the models we consider, the canonical
parameters θ are uniquely determined by the vector μ of mean parameters, which consist of the
node-wise and pairwise marginals.
Two primary statistical tasks pertaining to graphical models are inference and parameter estimation.
A basic inference problem is the computation of marginals (or conditional probabilities) given the
model, that is, the forward mapping θ ↦ μ. Conversely, the backward mapping μ ↦ θ corresponds
to learning the canonical parameters from the mean parameters. The backward mapping is defined
only for μ in the marginal polytope M of realizable mean parameters, and this is important in what
follows. The backward mapping captures maximum likelihood estimation of parameters; the study
of the statistical properties of maximum likelihood estimation for exponential families is a classical
and important subject.
In this paper we are interested in the computational tractability of these statistical tasks. A basic
question is whether or not these maps can be computed efficiently (namely in time polynomial in
the problem size). As far as inference goes, it is well known that approximating the forward map
(inference) is computationally hard in general. This was shown by Luby and Vigoda [2] for the hard-core model, a simple pairwise binary graphical model (defined in (2.1)). More recently, remarkably
sharp results have been obtained, showing that computing the forward map for the hard-core model
is tractable if and only if the system exhibits the correlation decay property [3, 4]. In contrast, to the
best of our knowledge, no analogous hardness result exists for the backward mapping (parameter
estimation), despite its seeming intractability [1].
Tangentially related hardness results have been previously obtained for the problem of learning the
graph structure underlying an undirected graphical model. Bogdanov et al. [5] showed hardness
of determining graph structure when there are hidden nodes, and Karger and Srebro [6] showed
hardness of finding the maximum likelihood graph with a given treewidth. Computing the backward
mapping, in comparison, requires estimation of the parameters when the graph is known.
Our main result, stated precisely in the next section, establishes hardness of approximating the
backward mapping for the hard-core model. Thus, despite the problem being statistically feasible,
it is computationally intractable.
The proof is by reduction, showing that the backward map can be used as a black box to efficiently
estimate the partition function of the hard-core model. The reduction, described in Section 4, uses
the variational characterization of the log-partition function as a constrained convex optimization
over the marginal polytope of realizable mean parameters. The gradient of the function to be minimized is given by the backward mapping, and we use a projected gradient optimization method.
Since approximating the partition function of the hard-core model is known to be computationally
hard, the reduction implies hardness of approximating the backward map.
The main technical difficulty in carrying out the argument arises because the convex optimization
is constrained to the marginal polytope, an intrinsically complicated object. Indeed, even determining membership (or evaluating the projection) to within a crude approximation of the polytope
is NP-hard [7]. Nevertheless, we show that it is possible to do the optimization without using any
knowledge of the polytope structure, as is normally required by ellipsoid, barrier, or projection methods. To this end, we prove that the polytope boundary has an inherent repulsive property that keeps
the iterates inside the polytope without actually enforcing the constraint. The consequence of the
boundary repulsion property is stated in Proposition 4.6 of Section 4, which is proved in Section 5.
Our reduction has a close connection to the variational approach to approximate inference [1]. There,
the conjugate-dual representation of the log-partition function leads to a relaxed optimization problem defined over a tractable bound for the marginal polytope and with a simple surrogate to the
entropy function. What our proof shows is that accurate approximation of the gradient of the entropy obviates the need to relax the marginal polytope.
We mention a related work of Kearns and Roughgarden [8] showing a polynomial-time reduction
from inference to determining membership in the marginal polytope. Note that such a reduction
does not establish hardness of parameter estimation: the empirical marginals obtained from samples
are guaranteed to be in the marginal polytope, so an efficient algorithm could hypothetically exist
for parameter estimation without contradicting the hardness of marginal polytope membership.
After completion of our manuscript, we learned that Montanari [9] has independently and simultaneously obtained similar results showing hardness of parameter estimation in graphical models from
the mean parameters. His high-level approach is similar to ours, but the details differ substantially.
2
Main result
In order to establish hardness of learning parameters from marginals for pairwise binary graphical
models, we focus on a specific instance of this class of graphical models, the hard-core model.
Given a graph G = (V, E) (where V = {1, . . . , p}), the collection of independent set vectors
I(G) ⊆ {0, 1}^V consists of the vectors σ such that σ_i = 0 or σ_j = 0 (or both) for every edge {i, j} ∈ E.
Each vector σ ∈ I(G) is the indicator vector of an independent set. The hard-core model assigns
nonzero probability only to independent set vectors, with

    P_θ(σ) = exp( Σ_{i∈V} θ_i σ_i − Φ(θ) )   for each σ ∈ I(G).    (2.1)
This is an exponential family with vector of sufficient statistics φ(σ) = (σ_i)_{i∈V} ∈ {0, 1}^p and
vector of canonical parameters θ = (θ_i)_{i∈V} ∈ R^p. In the statistical physics literature the model
is usually parameterized in terms of the node-wise fugacity (or activity) λ_i = e^{θ_i}. The log-partition
function

    Φ(θ) = log Σ_{σ∈I(G)} exp( Σ_{i∈V} θ_i σ_i )

serves to normalize the distribution; note that Φ(θ) is finite for all θ ∈ R^p. Here and throughout, all
logarithms are to the natural base.
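To make the definitions concrete, here is a brute-force sketch (our illustration only; it enumerates I(G), so it is exponential in p and usable only on toy graphs) that evaluates Φ(θ) and the forward map θ ↦ μ:

```python
import itertools
import numpy as np

def hardcore_marginals(adj, theta):
    """Brute-force forward map theta -> mu for the hard-core model (2.1)."""
    p = len(theta)
    configs, weights = [], []
    for sigma in itertools.product([0, 1], repeat=p):
        if all(not (sigma[i] and sigma[j]) for i, j in adj):  # independence check
            configs.append(sigma)
            weights.append(np.exp(np.dot(theta, sigma)))
    weights = np.array(weights)
    Z = weights.sum()                          # exp(Phi(theta))
    mu = np.array(configs).T @ (weights / Z)   # E_theta[phi(sigma)]
    return mu, np.log(Z)

# Example: the 3-path with theta = 0 (fugacity 1) has Z = 5 and mu = (0.4, 0.2, 0.4).
mu, log_Z = hardcore_marginals(adj=[(0, 1), (1, 2)], theta=np.zeros(3))
```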
The set M of realizable mean parameters plays a major role in the paper, and is defined as

    M = { μ ∈ R^p : there exists a θ such that E_θ[φ(σ)] = μ } .

For the hard-core model (2.1), the set M is a polytope equal to the convex hull of the independent set
vectors I(G) and is called the marginal polytope. The marginal polytope's structure can be rather
complex, and one indication of this is that the number of half-space inequalities needed to represent
M can be very large, depending on the structure of the graph G underlying the model [10, 11].
The model (2.1) is a regular minimal exponential family, so for each μ in the interior M° of the
marginal polytope there corresponds a unique θ(μ) satisfying the dual matching condition

    E_{θ(μ)}[φ(σ)] = μ .
We are concerned with approximation of the backward mapping μ ↦ θ, and we use the following
notion of approximation.

Definition 2.1. We say that ŷ ∈ R is a δ-approximation to y ∈ R if y(1 − δ) ≤ ŷ ≤ y(1 + δ). A
vector v̂ ∈ R^p is a δ-approximation to v ∈ R^p if each entry v̂_i is a δ-approximation to v_i.
We next define the appropriate notion of an efficient approximation algorithm.

Definition 2.2. A fully polynomial randomized approximation scheme (FPRAS) for a mapping f_p :
X_p → R is a randomized algorithm that for each δ > 0 and input x ∈ X_p, with probability at
least 3/4 outputs a δ-approximation f̂_p(x) to f_p(x), and moreover the running time is bounded by a
polynomial Q(p, 1/δ).
Our result uses the complexity classes RP and NP, defined precisely in any complexity text (such
as [12]). The class RP consists of problems solvable by efficient (randomized polynomial-time) algorithms, and NP consists of many seemingly difficult problems with no known efficient algorithms.
It is widely believed that NP ≠ RP. Assuming this, our result says that there cannot be an efficient
approximation algorithm for the backward mapping in the hard-core model (and thus also for the
more general class of binary pairwise graphical models).
We recall that approximating the backward mapping entails taking a vector μ as input and producing
an approximation of the corresponding vector of canonical parameters θ as output. It should be noted
that even determining whether a given vector μ belongs to the marginal polytope M is known to be
an NP-hard problem [7]. However, our result shows that the problem is NP-hard even if the input
vector μ is known a priori to be an element of the marginal polytope M.
Theorem 2.3. Assuming NP ≠ RP, there does not exist an FPRAS for the backward mapping
μ ↦ θ.
As discussed in the introduction, Theorem 2.3 is proved by showing that the backward mapping
can be used as a black-box to efficiently estimate the partition function of the hard core model,
known to be hard. This uses the variational characterization of the log-partition function as well as a
projected gradient optimization method. Proving validity of the projected gradient method requires
overcoming a substantial technical challenge: we show that the iterates remain within the marginal
polytope without explicitly enforcing this (in particular, we do not project onto the polytope). The
bulk of the paper is devoted to establishing this fact, which may be of independent interest.
In the next section we give necessary background on conjugate-duality and the variational characterization as well as review the result we will use on hardness of computing the log-partition function.
The proof of Theorem 2.3 is then given in Section 4.
3 Background

3.1 Exponential families and conjugate duality
We now provide background on exponential families (as can be found in the monograph by Wainwright and Jordan [1]) specialized to the hard-core model (2.1) on a fixed graph G = (V, E).
General theory on conjugate duality justifying the statements of this subsection can be found in
Rockafellar's book [13].
The basic relationship between the canonical and mean parameters is expressed via conjugate (or
Fenchel) duality. The conjugate dual of the log-partition function Φ(θ) is

    Φ*(μ) := sup_{θ∈R^p} { ⟨θ, μ⟩ − Φ(θ) } .
Note that for our model Φ(θ) is finite for all θ ∈ R^p and furthermore the supremum is uniquely
attained. On the interior M° of the marginal polytope, −Φ* is the entropy function. The log-partition
function can then be expressed as

    Φ(θ) = sup_{μ∈M} { ⟨θ, μ⟩ − Φ*(μ) } ,    (3.1)

with

    μ(θ) = arg max_{μ∈M} { ⟨θ, μ⟩ − Φ*(μ) } .    (3.2)
The forward mapping θ ↦ μ is specified by the variational characterization (3.2) or alternatively by
the gradient map ∇Φ : R^p → M.
As mentioned earlier, for each μ in the interior M° there is a unique θ(μ) satisfying the dual matching condition E_{θ(μ)}[φ(σ)] = (∇Φ)(θ(μ)) = μ.
For mean parameters μ ∈ M°, the backward mapping μ ↦ θ(μ) to the canonical parameters is
given by

    θ(μ) = arg max_{θ∈R^p} { ⟨θ, μ⟩ − Φ(θ) }

or by the gradient

    ∇Φ*(μ) = θ(μ) .

The latter representation will be the more useful one for us.
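On a toy graph the backward map can be evaluated numerically, e.g., by gradient ascent on the strictly concave objective ⟨θ, μ⟩ − Φ(θ), whose gradient is μ − E_θ[φ(σ)]. This sketch reuses hardcore_marginals from above and is purely illustrative; the paper's point is precisely that no such efficient procedure exists in general.

```python
import numpy as np

def backward_map(adj, mu, lr=0.2, n_steps=20000):
    """Gradient ascent for theta(mu) = argmax <theta, mu> - Phi(theta)."""
    theta = np.zeros(len(mu))
    for _ in range(n_steps):
        mean_params, _ = hardcore_marginals(adj, theta)
        theta += lr * (mu - mean_params)  # ascent direction for a concave objective
    return theta

# Sanity check: backward_map([(0, 1), (1, 2)], np.array([0.4, 0.2, 0.4]))
# should return theta close to (0, 0, 0), matching the example above.
```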
3.2 Hardness of inference
We describe an existing result on the hardness of inference and state the corollary we will use. The
result says that, subject to widely believed conjectures in computational complexity, no efficient
algorithm exists for approximating the partition function of certain hard-core models. Recall that
the hard-core model with fugacity λ is given by (2.1) with θ_i = ln λ for each i ∈ V.
Theorem 3.1 ([3, 4]). Suppose d ≥ 3 and λ > λ_c(d) = (d−1)^{d−1}/(d−2)^d. Assuming NP ≠ RP, there exists
no FPRAS for computing the partition function of the hard-core model with fugacity λ on regular
graphs of degree d. In particular, no FPRAS exists when λ = 1 and d ≥ 6.
We remark that the source of hardness is the long-range dependence property of the hard-core model
for λ > λ_c(d). It was shown in [14] that for λ < λ_c(d) the model exhibits decay of correlations
and there is an FPRAS for the log-partition function (in fact there is a deterministic approximation
scheme as well). We note that a number of hardness results are known for the hard-core and Ising
models, including [15, 16, 3, 2, 4, 17, 18, 19]. The result stated in Theorem 3.1 suffices for our
purposes.
From this section we will need only the following corollary, proved in the Appendix. The proof,
standard in the literature, uses the self-reducibility of the hard-core model to express the partition
function in terms of marginals computed on subgraphs.

Corollary 3.2. Consider the hard-core model (2.1) on graphs of degree at most d with parameters
θ_i = 0 for all i ∈ V. Assuming NP ≠ RP, there exists no FPRAS μ̂(0) for the vector of marginal
probabilities μ(0), where error is measured entry-wise as per Definition 2.1.
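For context, the identity behind this self-reducibility argument can be written down directly; the following is our sketch for θ = 0 (fugacity 1), removing vertices v_1, . . . , v_p one at a time with G_0 = G and G_k = G_{k−1} − v_k:

```latex
\[
  P_{G_{k-1}}\!\left(\sigma_{v_k} = 0\right) \;=\; \frac{Z(G_k)}{Z(G_{k-1})},
  \qquad\text{hence}\qquad
  Z(G) \;=\; \prod_{k=1}^{p} \frac{1}{P_{G_{k-1}}\!\left(\sigma_{v_k} = 0\right)},
\]
```

since Z(G_p) = Z(∅) = 1; a multiplicative approximation of each marginal therefore yields a multiplicative approximation of Z(G).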
4 Reduction by optimizing over the marginal polytope
In this section we describe our reduction and prove Theorem 2.3. We define polynomial constants

    ε = p^{−8},   q = p^5,   and   s = ε²/(2p),    (4.1)

which we will leave as ε, q, and s to clarify the calculations. Also, given the asymptotic nature of the
results, we assume that p is larger than a universal constant so that certain inequalities are satisfied.
Proposition 4.1. Fix a graph G on p nodes. Let θ̂ : M → R^p be a black box giving a δ-approximation for the backward mapping μ ↦ θ for the hard-core model (2.1). Using 1/δ² calls
to θ̂, and computation bounded by a polynomial in p and 1/δ, it is possible to produce a 4δp^{7/2}/(qε²)-approximation μ̂(0) to the marginals μ(0) corresponding to the all-zero parameters.
We first observe that Theorem 2.3 follows almost immediately.
Proof of Theorem 2.3. A standard median amplification trick (see e.g. [20]) allows one to decrease the
probability 1/4 of erroneous output by an FPRAS to below 1/(pδ^{−2}) using O(log(pδ^{−2})) function calls.
Thus the assumed FPRAS for the backward mapping can be made to give a δ-approximation θ̂ to θ
on 1/δ² successive calls, with probability of no erroneous outputs equal to at least 3/4. By taking
δ = δ*qε²p^{−7/2}/4 in Proposition 4.1 we get a δ*-approximation to μ(0) with computation bounded
by a polynomial in p and 1/δ*. In other words, the existence of an FPRAS for the mapping μ ↦ θ gives
an FPRAS for the marginals μ(0), and by Corollary 3.2 this is not possible if NP ≠ RP.
We now work towards proving Proposition 4.1, the goal being to estimate the vector of marginals
μ(0) for some fixed graph G. The desired marginals are given by the solution to the optimization (3.2) with θ = 0:

    μ(0) = arg min_{μ∈M} Φ*(μ) .    (4.2)
We know from Section 3 that for x ∈ M° the gradient is ∇Φ*(x) = θ(x); that is, the backward
mapping amounts to a first-order (gradient) oracle. A natural approach to solving the
optimization problem (4.2) is to use a projected gradient method. For reasons that will become clear
later, instead of projecting onto the marginal polytope M, we project onto the shrunken marginal
polytope M1 ⊆ M defined as

    M1 = { μ ∈ M ∩ [qε, ∞)^p : μ + ε·e_i ∈ M for all i } ,    (4.3)

where e_i is the ith standard basis vector.
As mentioned before, projecting onto M1 is NP-hard, and this must therefore be avoided if we
are to obtain a polynomial-time reduction. Nevertheless, we temporarily assume that it is possible
to do the projection and address this difficulty later. With this in mind, we propose to solve the
optimization (4.2) by a projected gradient method with fixed step size s,

    x^{t+1} = P_{M1}(x^t − s∇Φ*(x^t)) = P_{M1}(x^t − sθ(x^t)) .    (4.4)
In order for the method (4.4) to succeed, a first requirement is that the optimum is inside M1. The
following lemma is proved in the Appendix.

Lemma 4.2. Consider the hard-core model (2.1) on a graph G with maximum degree d on p ≥ 2d+1
nodes and canonical parameters θ = 0. Then the corresponding vector of mean parameters μ(0) is
in M1.
One of the benefits of operating within M1 is that the gradient is bounded by a polynomial in p,
and this will allow the optimization procedure to converge in a polynomial number of steps. The
following lemma amounts to a rephrasing of Lemmas 5.3 and 5.4 in Section 5 and the proof is
omitted.

Lemma 4.3. We have the gradient bound ‖∇Φ*(x)‖_∞ = ‖θ(x)‖_∞ ≤ p/ε = p^9 for any x ∈ M1.
Next, we state general conditions under which an approximate projected gradient algorithm converges quickly. Better convergence rates are possible using the strong convexity of Φ* (shown in
Lemma 4.5 below), but this lemma suffices for our purposes. The proof is standard (see [21] or
Theorem 3.1 in [22] for a similar statement) and is given in the Appendix for completeness.
Lemma 4.4 (Projected gradient method). Let G : C → R be a convex function defined over a compact convex set C with minimizer x* ∈ arg min_{x∈C} G(x). Suppose we have access to an approximate gradient oracle ∇̂G(x) for x ∈ C with error bounded as sup_{x∈C} ‖∇̂G(x) − ∇G(x)‖_∞ ≤ γ/2.
Let L = sup_{x∈C} ‖∇̂G(x)‖. Consider the projected gradient method x^{t+1} = P_C(x^t − s∇̂G(x^t))
starting at x¹ ∈ C and with fixed step size s = γ/2L². After T = 4‖x¹ − x*‖²L²/γ² iterations the
average x̄^T = (1/T)·Σ_{t=1}^T x^t satisfies G(x̄^T) − G(x*) ≤ γ.
To translate accuracy in approximating the function Φ*(x*) into accuracy in approximating x*, we use the fact that
Φ* is strongly convex. The proof (in the Appendix) uses the equivalence between strong convexity
of Φ* and strong smoothness of the Fenchel dual Φ, the latter being easy to check. Since we
only require the implication of the lemma, we defer the definitions of strong convexity and strong
smoothness to the appendix where they are used.

Lemma 4.5. The function Φ* : M → R is p^{−3/2}-strongly convex. As a consequence, if
Φ*(x) − Φ*(x*) ≤ γ for x ∈ M and x* = arg min_{y∈M} Φ*(y), then ‖x − x*‖² ≤ 2γp^{3/2}.
At this point all the ingredients are in place to show that the updates (4.4) rapidly approach μ(0),
but a crucial difficulty remains to be overcome. The assumed black box θ̂ for approximating the
mapping μ ↦ θ is only defined for μ inside M, and thus it is not at all obvious how to evaluate
the projection onto the closely related polytope M1. Indeed, as shown in [7], even approximate
projection onto M is NP-hard, and no polynomial-time reduction can require projecting onto M1
(assuming P ≠ NP).

The goal of the subsequent Section 5 is to prove Proposition 4.6 below, which states that the optimization procedure can be carried out without any knowledge about M or M1. Specifically, we
show that thresholding coordinates suffices; that is, instead of projecting onto M1 we may project
onto the translated non-negative orthant [qε, ∞)^p. Writing P for this projection, we show that the
original projected gradient method (4.4) has the same iterates x^t as the much simpler update rule

    x^{t+1} = P(x^t − sθ(x^t)) .    (4.5)
Proposition 4.6. Choose constants as per (4.1). Suppose x¹ ∈ M1, and consider the iterates
x^{t+1} = P(x^t − sθ̂(x^t)) for t ≥ 1, where θ̂(x^t) is a δ-approximation of θ(x^t) for all t ≥ 1. Then
x^t ∈ M1 for all t ≥ 1, and thus the iterates are the same using either P or P_{M1}.
? t )) at the
Proof of Proposition 4.1. We start the gradient update procedure xt+1 = P (xt s?(x
1
1
1
1
point x = ( 2p , 2p , . . . , 2p ), which we claim is within M1 for any graph G for p = |V | large
enough. To see this, note that ( p1 , p1 , . . . , p1 ) is in M, because it is a convex combination (with
1
weight 1/p each) of the independent set vectors e1 , . . . , ep . Hence x1 + 2p
?ei 2 M, and additionally
1
1
xi = 2p q?, for all i.
We establish that xt 2 M1 for each t
1 by induction, having verified the base case t = 1 in
the preceding paragraph. Let xt 2 M1 for some t 1. At iteration t of the update rule we make
? t ) giving a -approximation to the backward mapping ?(xt ), compute
a call to the black box ?(x
t
t
?
x
s?(x ), and then project onto [q?, 1)p . Proposition 4.6 ensures that xt+1 2 M1 . Therefore,
? t )) is the same as xt+1 = PM (xt s?(x
? t )).
the update xt+1 = P (xt s?(x
1
Now we can now apply
with G =
p Lemma 4.4
d
supx2C krG(x)k
p(p/?)2 = p3/2 /?. After
2 ?
T = 4kx1
iterations the average x
?T =
1
T
?
, C = M1 ,
= 2 p2 /? and L =
x? k2 L2 / 2 ? 4p(p3 /?2 )/(4 2 p4 /?2 ) = 1/
PT
t
xT ) G(x? ) ? .
t=1 x satisfies G(?
3
2
Lemma 4.5 implies that k?
xT
x? k2 ? 2 p 2 , and since x?i
q?, we get the entry-wise bound
3
T
?
?
|?
xi
xi | ? 2 p 2 xi /q? for each i 2 V . Hence x
?T is a 4 p7/2 /q?2 -approximation for x? .
5 Proof of Proposition 4.6
In Subsection 5.1 we prove estimates on the parameters θ corresponding to μ close to the boundary
of M1, and then in Subsection 5.2 we use these estimates to show that the boundary of M1 has a
certain repulsive property that keeps the iterates inside.
5.1 Bounds on the gradient
We start by introducing some helpful notation. For a node i, let N(i) = {j ∈ [p] : (i, j) ∈ E}
denote its neighbors. We partition the collection of independent set vectors as

    I = S_i ∪ S̄_i ∪ S_i^⊥ ,

where

    S_i   = {σ ∈ I : σ_i = 1}                     = {ind. sets containing i},
    S̄_i   = {σ − e_i : σ ∈ S_i}                   = {ind. sets where i can be added},
    S_i^⊥ = {σ ∈ I : σ_j = 1 for some j ∈ N(i)}   = {ind. sets conflicting with i}.

For a collection of independent set vectors S ⊆ I we write P(S) as shorthand for P_θ(σ ∈ S) and

    f(S) = P(S) · e^{Φ(θ)} = Σ_{σ∈S} exp( Σ_{j∈V} θ_j σ_j ) .

We can then write the marginal at node i as μ_i = P(S_i), and since S_i, S̄_i, S_i^⊥ partition I, the space
of all independent sets of G, 1 = P(S_i) + P(S̄_i) + P(S_i^⊥). For each i let

    β_i = P(S_i^⊥) = P(a neighbor of i is in σ) .
The following lemma specifies a condition on μ_i and β_i that implies a lower bound on θ_i.

Lemma 5.1. If μ_i + β_i ≥ 1 − δ and β_i ≤ 1 − αδ for some α > 1, then θ_i ≥ ln(α − 1).
Proof. Let η = e^{θ_i}, and observe that f(S_i) = η·f(S̄_i). We want to show that η ≥ α − 1.

The first condition μ_i + β_i ≥ 1 − δ implies that

    f(S_i) + f(S_i^⊥) ≥ (1 − δ)(f(S_i) + f(S_i^⊥) + f(S̄_i)) = (1 − δ)(f(S_i) + f(S_i^⊥) + η^{−1}·f(S_i)) ,

and rearranging gives

    f(S_i^⊥) + f(S_i) ≥ ((1 − δ)/δ) · η^{−1}·f(S_i) .    (5.1)

The second condition β_i ≤ 1 − αδ reads f(S_i^⊥) ≤ (1 − αδ)(f(S_i) + f(S_i^⊥) + f(S̄_i)), or

    f(S_i^⊥) ≤ ((1 − αδ)/(αδ)) · f(S_i)(1 + η^{−1}) .    (5.2)

Combining (5.1) and (5.2) and simplifying results in η ≥ α − 1.
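The omitted simplification can be spelled out; writing A = f(S_i) and multiplying through by αδη/A (our working):

```latex
\[
  A + \frac{1-\alpha\delta}{\alpha\delta}\,A\,(1+\eta^{-1})
    \;\ge\; \frac{1-\delta}{\delta}\,\eta^{-1}A
  \;\Longrightarrow\;
  \alpha\delta\eta + (1-\alpha\delta)(\eta+1) \;\ge\; \alpha(1-\delta)
  \;\Longrightarrow\;
  \eta \;\ge\; \alpha - 1 .
\]
```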
We now use the preceding lemma to show that if a coordinate is close to the boundary of the shrunken
marginal polytope M1, then the corresponding parameter is large.

Lemma 5.2. Let r be a positive real number. If μ ∈ M1 and μ + rε·e_i ∉ M, then θ_i ≥ ln(q/r − 1).
Proof. We would like to apply Lemma 5.1 with α = q/r and δ = rε, which requires showing that
(a) β_i ≤ 1 − qε and (b) μ_i + β_i ≥ 1 − rε. To show (a), note that if μ ∈ M1, then μ_i ≥ qε by
definition of M1. It follows that β_i ≤ 1 − μ_i ≤ 1 − qε.

We now show (b). Since μ_i = P(S_i), β_i = P(S_i^⊥), and 1 = P(S_i) + P(S_i^⊥) + P(S̄_i), (b)
is equivalent to P(S̄_i) ≤ rε. We assume that μ + rε·e_i ∉ M and suppose for the sake of
contradiction that P(S̄_i) > rε. Writing ρ_σ = P(σ) for σ ∈ I, so that μ = Σ_{σ∈I} ρ_σ·σ, we define
a new probability measure

    ρ'_σ = ρ_σ + ρ_{σ−e_i}   if σ ∈ S_i ,
    ρ'_σ = 0                 if σ ∈ S̄_i ,
    ρ'_σ = ρ_σ               otherwise .

One can check that μ' = Σ_{σ∈I} ρ'_σ·σ has μ'_j = μ_j for each j ≠ i and μ'_i = μ_i + P(S̄_i) > μ_i + rε.
The point μ', being a convex combination of independent set vectors, must be in M, and hence so
must μ + rε·e_i. But this contradicts the hypothesis and completes the proof of the lemma.
The proofs of the next two lemmas are similar in spirit to Lemma 8 in [23] and are proved in the
Appendix. The first lemma gives an upper bound on the parameters (θ_i)_{i∈V} corresponding to an
arbitrary point in M1.

Lemma 5.3. If μ + ε·e_i ∈ M, then θ_i ≤ p/ε. Hence if μ ∈ M1, then θ_i ≤ p/ε for all i.
The next lemma shows that if a component μ_i is not too small, the corresponding parameter θ_i is
also not too negative. As before, this allows us to bound from below the parameters corresponding to
an arbitrary point in M1.

Lemma 5.4. If μ_i ≥ qε, then θ_i ≥ −p/qε. Hence if μ ∈ M1, then θ_i ≥ −p/qε for all i.

5.2 Finishing the proof of Proposition 4.6
We sketch the remainder of the proof here; full detail is given in Section D of the Supplement.
Starting with an arbitrary x^t in M1, our goal is to show that x^{t+1} = P(x^t − sθ̂(x^t)) remains
in M1. The proof will then follow by induction, because our initial point x¹ is in M1 by the
hypothesis.

The argument considers separately each hyperplane constraint for M of the form ⟨h, x⟩ ≤ 1. The
distance of x from the hyperplane is 1 − ⟨h, x⟩. Now, the definition of M1 implies that if x ∈ M1,
then x + ε·e_i ∈ M for all coordinates i, and thus 1 − ⟨h, x⟩ ≥ ε‖h‖_∞ for all constraints. We call a
constraint ⟨h, x⟩ ≤ 1 critical if 1 − ⟨h, x⟩ < ε‖h‖_∞, and active if ε‖h‖_∞ ≤ 1 − ⟨h, x⟩ < 2ε‖h‖_∞.
For x^t ∈ M1 there are no critical constraints, but there may be active constraints.

We first show that inactive constraints can at worst become active for the next iterate x^{t+1}, which
requires only that the step size is not too large relative to the magnitude of the gradient (Lemma 4.3
gives the desired bound). Then we show (using the gradient estimates from Lemmas 5.2, 5.3,
and 5.4) that the active constraints have a repulsive property and that x^{t+1} is no closer than x^t
to any active constraint; that is, ⟨h, x^{t+1}⟩ ≤ ⟨h, x^t⟩. The argument requires care, because the projection P may prevent coordinates i from decreasing despite x^t_i − sθ̂_i(x^t) being very negative if x^t_i
is already small. These arguments together show that x^{t+1} remains in M1, completing the proof.
6 Discussion
This paper addresses the computational tractability of parameter estimation for the hard-core model.
Our main result shows hardness of approximating the backward mapping μ ↦ θ to within a small
polynomial factor. This is a fairly stringent form of approximation, and it would be interesting
to strengthen the result to show hardness even for a weaker form of approximation. A possible
goal would be to show that there exists a universal constant c > 0 such that approximation of the
backward mapping to within a factor 1 + c in each coordinate is NP-hard.
Acknowledgments
GB thanks Sahand Negahban for helpful discussions. Also we thank Andrea Montanari for sharing
his unpublished manuscript [9]. This work was supported in part by NSF grants CMMI-1335155
and CNS-1161964, and by Army Research Office MURI Award W911NF-11-1-0036.
References
[1] M. Wainwright and M. Jordan, "Graphical models, exponential families, and variational inference," Foundations and Trends in Machine Learning, vol. 1, no. 1-2, pp. 1-305, 2008.
[2] M. Luby and E. Vigoda, "Fast convergence of the Glauber dynamics for sampling independent sets," Random Structures and Algorithms, vol. 15, no. 3-4, pp. 229-241, 1999.
[3] A. Sly and N. Sun, "The computational hardness of counting in two-spin models on d-regular graphs," in FOCS, pp. 361-369, IEEE, 2012.
[4] A. Galanis, D. Stefankovic, and E. Vigoda, "Inapproximability of the partition function for the antiferromagnetic Ising and hard-core models," arXiv preprint arXiv:1203.2226, 2012.
[5] A. Bogdanov, E. Mossel, and S. Vadhan, "The complexity of distinguishing Markov random fields," Approximation, Randomization and Combinatorial Optimization, pp. 331-342, 2008.
[6] D. Karger and N. Srebro, "Learning Markov networks: Maximum bounded tree-width graphs," in Symposium on Discrete Algorithms (SODA), pp. 392-401, 2001.
[7] D. Shah, D. N. Tse, and J. N. Tsitsiklis, "Hardness of low delay network scheduling," IEEE Transactions on Information Theory, vol. 57, no. 12, pp. 7810-7817, 2011.
[8] T. Roughgarden and M. Kearns, "Marginals-to-models reducibility," in Advances in Neural Information Processing Systems, pp. 1043-1051, 2013.
[9] A. Montanari, "Computational implications of reducing data to sufficient statistics." Unpublished, 2014.
[10] M. Deza and M. Laurent, Geometry of Cuts and Metrics. Springer, 1997.
[11] G. M. Ziegler, "Lectures on 0/1-polytopes," in Polytopes - Combinatorics and Computation, pp. 1-41, Springer, 2000.
[12] C. H. Papadimitriou, Computational Complexity. John Wiley and Sons Ltd., 2003.
[13] R. T. Rockafellar, Convex Analysis, vol. 28. Princeton University Press, 1997.
[14] D. Weitz, "Counting independent sets up to the tree threshold," in Proceedings of the thirty-eighth annual ACM Symposium on Theory of Computing, pp. 140-149, ACM, 2006.
[15] M. Dyer, A. Frieze, and M. Jerrum, "On counting independent sets in sparse graphs," SIAM Journal on Computing, vol. 31, no. 5, pp. 1527-1541, 2002.
[16] A. Sly, "Computational transition at the uniqueness threshold," in FOCS, pp. 287-296, 2010.
[17] F. Jaeger, D. Vertigan, and D. Welsh, "On the computational complexity of the Jones and Tutte polynomials," Math. Proc. Cambridge Philos. Soc., vol. 108, no. 1, pp. 35-53, 1990.
[18] M. Jerrum and A. Sinclair, "Polynomial-time approximation algorithms for the Ising model," SIAM Journal on Computing, vol. 22, no. 5, pp. 1087-1116, 1993.
[19] S. Istrail, "Statistical mechanics, three-dimensionality and NP-completeness: I. Universality of intractability for the partition function of the Ising model across non-planar surfaces," in STOC, pp. 87-96, ACM, 2000.
[20] M. R. Jerrum, L. G. Valiant, and V. V. Vazirani, "Random generation of combinatorial structures from a uniform distribution," Theoretical Computer Science, vol. 43, pp. 169-188, 1986.
[21] Y. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course, vol. 87. Springer, 2004.
[22] S. Bubeck, "Theory of convex optimization for machine learning." Available at http://www.princeton.edu/~sbubeck/pub.html.
[23] L. Jiang, D. Shah, J. Shin, and J. Walrand, "Distributed random access algorithm: scheduling and congestion control," IEEE Transactions on Information Theory, vol. 56, no. 12, pp. 6182-6207, 2010.
[24] D. P. Bertsekas, Nonlinear Programming. Athena Scientific, 1999.
[25] S. M. Kakade, S. Shalev-Shwartz, and A. Tewari, "Regularization techniques for learning with matrices," Journal of Machine Learning Research, vol. 13, pp. 1865-1890, June 2012.
[26] J. M. Borwein and J. D. Vanderwerff, Convex Functions: Constructions, Characterizations and Counterexamples. No. 109, Cambridge University Press, 2010.
Multi-Digit Recognition Using A Space Displacement Neural Network
Ofer Matan*, Christopher J.C. Burges,
Yann Le Cun and John S. Denker
AT&T Bell Laboratories, Holmdel, N. J. 07733
Abstract
We present a feed-forward network architecture for recognizing an unconstrained handwritten multi-digit string. This is an extension of previous
work on recognizing isolated digits. In this architecture a single digit recognizer is replicated over the input. The output layer of the network is
coupled to a Viterbi alignment module that chooses the best interpretation
of the input. Training errors are propagated through the Viterbi module.
The novelty in this procedure is that segmentation is done on the feature
maps developed in the Space Displacement Neural Network (SDNN) rather
than the input (pixel) space.
1 Introduction
In previous work (Le Cun et al., 1990) we have demonstrated a feed-forward backpropagation network that recognizes isolated handwritten digits at state-of-the-art
performance levels. The natural extension of this work is towards recognition of
unconstrained strings of handwritten digits. The most straightforward solution is
to divide the process into two: segmentation and recognition. The segmenter will
divide the original image into pieces (each containing an isolated digit) and pass
it to the recognizer for scoring. This approach assumes that segmentation and
recognition can be decoupled. Except for very simple cases this is not true.
Speech-recognition research (Rabiner, 1989; Franzini, Lee and Waibel, 1990) has
demonstrated the power of using the recognition engine to score each segment in
* Author's current address: Department of Computer Science, Stanford University,
Stanford, CA 94305.
a candidate segmentation. The segmentation that gives the best combined score
is chosen. "Recognition driven" segmentation is usually used in conjunction with
dynamic programming, which can find the optimal solution very efficiently.
Though dynamic programming algorithms save us from exploring an exponential
number of segment combinations, they are still linear in the number of possible
segments - requiring one call to the recognition unit per candidate segment. In
order to solve the problem in reasonable time it is necessary to: 1) limit the number
of possible segments, or 2) have a rapid recognition unit.
We have built a ZIP code reading system that "prunes" the number of candidate
segments (Matan et al., 1991). The candidate segments were generated by analyzing
the image's pixel projection onto the horizontal axis. The strength of this system
is that the number of calls to the recognizer is small (only slightly over twice the
number of real digits). The weakness is that by generating only a small number
of candidates one often misses the correct segmentation. In addition, generation
of this small set is based on multi-parametric heuristics, making tuning the system
difficult.
It would be attractive to discard heuristics and generate many more candidates,
but then the time spent in the recognition unit would have to be reduced considerably. Reducing the computation of the recognizer usually gives rise to a reduction
in recognition rates. However, it is possible to have our segments and eat them
too. We propose an architecture which can explore many more candidates without
compromising the richness of the recognition engine.
2 The Design
Let us describe a simplified and less efficient solution that will lead us to our final
design. Consider a de-skewed image such as the one shown in Figure 1. The system
will separate it into candidate segments using vertical cuts. A few examples of these
are shown beneath the original image in Figure 1. In the process of finding the
best overall segmentation each candidate segment will be passed to the recognizer
described in (Le Cun et al., 1990). The scores will be converted to probabilities
(Bridle, 1989) that are inserted into nodes of a directed acyclic graph. Each path on
this graph represents a candidate segmentation where the length of each path is the
product of the node values along it. The Viterbi algorithm is used to determine
the longest path (which corresponds to the segmentation with the highest combined
score).
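The dynamic program described here is compact enough to sketch in code. The fragment below is an illustration rather than the original implementation: the cut indexing, the segment_logprob interface and the fixed digit count are our own assumptions. Working in log-space turns the product of node values into a sum, and the best path is found by the usual Viterbi recursion:

```python
import math

def best_segmentation(num_cuts, num_digits, segment_logprob):
    """Find the highest-scoring split of the image into num_digits pieces.

    Candidate cuts are indexed 0..num_cuts, with 0 the left edge and
    num_cuts the right edge; segment_logprob(i, j) is the recognizer's
    log-probability for the piece between cuts i and j (assumed interface).
    Returns (best_log_score, chosen_cut_indices).
    """
    NEG = -math.inf
    # best[d][j]: best log-score covering cuts 0..j with exactly d segments
    best = [[NEG] * (num_cuts + 1) for _ in range(num_digits + 1)]
    back = [[0] * (num_cuts + 1) for _ in range(num_digits + 1)]
    best[0][0] = 0.0
    for d in range(1, num_digits + 1):
        for j in range(d, num_cuts + 1):
            for i in range(d - 1, j):
                if best[d - 1][i] == NEG:
                    continue
                s = best[d - 1][i] + segment_logprob(i, j)
                if s > best[d][j]:
                    best[d][j], back[d][j] = s, i
    cuts, j = [num_cuts], num_cuts
    for d in range(num_digits, 0, -1):     # trace the winning path back
        j = back[d][j]
        cuts.append(j)
    return best[num_digits][num_cuts], cuts[::-1]
```

With n candidate cuts and D digits the table has O(D·n) cells and O(D·n²) transitions, each of which needs one recognizer score, which is why either pruning the candidate set or speeding up the recognizer is essential.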
It seems somewhat redundant to process the same pixels numerous times (as part
of different, overlapping candidate segments). For this reason we propose to pass
a whole size-normalized image to the recognition unit and to segment a feature
map, after most of the neural network computation has been done. Since the first
four layers in our recognizer are convolutional, we can easily extend the single-digit
network by applying the convolution kernels to the multi-digit image.
Figure 2 shows the example image (Figure 1) processed by the extended network.
We now proceed to segment the top layer. Since the network is convolutional,
segmenting this feature-map layer is similar to segmenting the input layer. (Because
of overlapping receptive fields and reduced resolution, it is not exactly equivalent.)
This gives a speed-up of roughly an order of magnitude.
Figure 1: A sample ZIP code image and possible segmentations.
Figure 2: The example ZIP code processed by 4 layers of a convolutional feedforward network.
In the single digit network, we can view the output layer as a 10-unit column vector
that is connected to a zone of width 5 on the last feature layer. If we replicate the
single digit network over the input in the horizontal direction, the output layer will
be replicated. Each output vector will be connected to a different zone of width 5
on the feature layer. Since the width of a handwritten digit is highly variable, we
construct alternate output vectors that are connected to feature segment zones of
widths 4,3 and 2. The resulting output maps for the example ZIP code are shown
in Figure 3.
The network we have constructed is a shared weight network reminiscent of a TDNN
(Lang and Hinton, 1988). We have termed this architecture a Space Displacement
Neural Network (SDNN). We rely on the fact that most digit strings lie on more
or less one line; therefore, the network is replicated in the horizontal direction. For
other applications it is conceivable to replicate in the vertical direction as well.
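To make the replication concrete, the sketch below slides a single shared output layer across the feature map at every horizontal position and for every zone width. The classify callable stands in for the trained output layer, and the array layout is an assumption of ours, not taken from the paper:

```python
import numpy as np

def sdnn_output_maps(features, classify, zone_widths=(5, 4, 3, 2)):
    """Replicate a single-digit output layer across the feature map.

    features:    (width, depth) array, the last convolutional feature layer
    classify:    callable mapping a (w, depth) feature zone to 10 class
                 scores; stands in for the shared-weight output layer
    zone_widths: the zone sizes connected to the alternate output vectors
    Returns {zone width: (positions, 10) score map}, as in Figure 3.
    """
    maps = {}
    for w in zone_widths:
        cols = [classify(features[i:i + w])
                for i in range(features.shape[0] - w + 1)]
        maps[w] = np.stack(cols)
    return maps
```

Because the weights are shared across positions, computing these maps costs little more than one forward pass over the widened input, which is the source of the order-of-magnitude speed-up mentioned above.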
3 The Recognition Procedure
The output maps are processed by a Viterbi algorithm which chooses the set of
output vectors corresponding to the segmentation giving the highest combined score.
We currently assume that we know the number of digits in the image; however, this
procedure can be generalized to an unknown number of digits. In Figure 3 the five
output vectors that combined to give the best overall score are marked by thin lines
beneath them.
4 The Training Procedure
During training we follow the above procedure and repeat it under the constraint
that the winning combination corresponds to the ground truth. In Figure 4 the
constrained-winning output vectors are marked by small circles. We perform backpropagation through both the ground truth vectors (reinforcement) and highest
scoring vectors (negative reinforcement).
We have trained and tested this architecture on size normalized 5-digit ZIP codes
taken from U.S Mail. 6000 images were used for training and 3000 where used for
testing. The images were cleaned, deskewed and height normalized according to
the assumed largest digit height. The data was not "cleaned" after the automatic
preprocessing, leaving non centered images and non digits in both the training and
test set.
Training was done using stochastic back propagation with some sweeps using Newton's method for adjusting the learning rates. We tried various methods of initializing the gradient on the last layer:
- Reinforce only units picked by the constrained Viterbi (all other units have a
gradient of zero).
- Same as above, but set negative feedback through units chosen by the regular
Viterbi that are different from those chosen by the constrained version. (Push
down the incorrect segmentation if it is different from the correct answer.) This
speeds up the convergence.
- Reinforce units chosen by the constrained Viterbi. Set negative feedback
Figure 3: Recognition using the SDNN/Viterbi. The output maps of the SDNN
are shown. White indicates a positive activation. The output vectors chosen by
the Viterbi alignment are marked by a thin line beneath them. The input regions
corresponding to these vectors are shown. One can see that the system centers on
the individual digits. Each of the 4 output maps shown is connected to different size
zone in the last feature layer (5,4,3 and 2, top to bottom). In order to implement
weight sharing between output units connected to different zone sizes, the dangling
connections to the output vectors of narrower zones are connected to feature units
corresponding to background in the input.
Figure 4: Training using the SDNN /Viterbi. The output vectors chosen by the
Viterbi algorithm are marked by a thin line beneath them. The corresponding
input regions are shown in the left column. The output vectors chosen by the
constrained Viterbi algorithm are marked by small circles and their corresponding
input regions are shown to the right. Given the ground truth the system can learn
to center on the correct digit.
through all other units except those that are "similar" to ones in the correct
set. ("similar" is defined by corresponding to a close center of frame in the
input and responding with the correct class).
As one adds more units that have a non zero gradient, each training iteration is
more similar to batch-training and is more prone to oscillations. In this case more
Newton sweeps are required.
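A minimal sketch of the first two schemes, under the assumption that the output maps are plain score arrays and that the two alignment routines return the (position, class) pairs of their chosen output vectors; all names and shapes here are our own illustration, not the original code:

```python
import numpy as np

def output_layer_gradients(scores, constrained_path, viterbi_path):
    """Initialize the gradient on the SDNN output layer before backprop.

    scores:           (positions, classes) array of output activations
    constrained_path: (position, class) pairs chosen by the Viterbi run
                      constrained to the ground truth
    viterbi_path:     (position, class) pairs chosen by the free Viterbi run
    Units on the constrained path get positive feedback, units the free
    Viterbi picked instead get negative feedback, all other units are zero.
    """
    grad = np.zeros_like(scores)
    truth = set(constrained_path)
    for pos, cls in truth:
        grad[pos, cls] += 1.0          # reinforce the correct interpretation
    for pos, cls in viterbi_path:
        if (pos, cls) not in truth:
            grad[pos, cls] -= 1.0      # push down the incorrect winner
    return grad
```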
5 Results
The current raw recognition rates for the whole 5-digit string are 70% correct from
the training set and 66% correct from the test set. Additional interesting statistics
are the distribution of the number of correct digits across the whole ZIP code and the
recognition rates for each digit's position within the ZIP code. These are presented
in the tables shown below.
Table 1: Top: Distribution of test images according to the number of correct single
digit classifications out of 5. Bottom: Rates of single digit classification according
to position. Digits on the edges are classified more easily since one edge is predetermined.
Number of digits correct:   5     4     3    2    1    0
Percent of cases:          66.3  19.7  7.2  4.7  1.4  0.7

Digit position | Percent correct
1st            | 92
2nd            | 87
3rd            | 87
4th            | 86
5th            | 90

6 Conclusions and Future Work
The SDNN combined with the Viterbi algorithm learns to recognize strings of handwritten digits by "centering" on the individual digits in the string. This is similar
in concept to other work in speech (Haffner, Franzini and Waibel, 1991) but differs
from (Keeler, Rumelhart and Leow, 1991), where no alignment procedure is used.
The current recognition rates are still lower than our best system that uses pixel
projection information to guide a recognition based segmenter. The SDNN is much
faster and lends itself to parallel hardware. Possible improvements to the architecture may be:
- Modified constraints on the segmentation rules of the feature layer.
- Applying the Viterbi algorithm in the vertical direction as well might overcome
problems due to height variance.
- It might be too hard to segment using local information only; one might try
using global information, such as pixel projection or recognizing doublets or
triplets.
Though there is still considerable work to be done in order to reach state-of-the-art
recognition levels, we believe that this type of approach is the correct direction for
future image processing applications. Applying recognition based segmentation at
the line, word and character level on high feature maps is necessary in order to
achieve fast processing while exploring a large set of possible interpretations.
Acknowledgements
Support of this work by the Technology Resource Department of the U.S. Postal
Service under Task Order 104230-90-C-2456 is gratefully acknowledged.
References
Bridle, J. S. (1989). Probabilistic Interpretation of Feedforward Classification
Network Outputs with Relationships to Statistical Pattern Recognition. In
Fogelman-Soulie, F. and Herault, J., editors, Neuro-computing: algorithms,
architectures and applications. Springer-Verlag.
Franzini, M., Lee, K. F., and Waibel, A. (1990). Connectionist Viterbi Training:
A New Hybrid Method For Continuous Speech Recognition. In Proceedings
ICASSP 90, pages 425-428. IEEE.
Haffner, P., Franzini, M., and Waibel, A. (1991) . Integrating Time Alignment and
Neural Networks for High Performance Continuous Speech Recognition. In
Proceedings ICASSP 91. IEEE.
Keeler, J. D., Rumelhart, D. E., and Leow, W. (1991). Integrated Segmentation
and Recognition of Handwritten-Printed Numerals. In Lippman, Moody, and
Touretzky, editors, Advances in Neural Information Processing Systems, volume 3. Morgan Kaufmann.
Lang, K. J. and Hinton, G. E. (1988). A Time Delay Neural Network Architecture
for Speech Recognition. Technical Report CMU-cs-88-152, Carnegie-Mellon
University, Pittsburgh PA.
Le Cun, Y., Matan, O., Boser, B., Denker, J. S., Henderson, D., Howard, R. E.,
Hubbard, W., Jackel, L. D., and Baird, H. S. (1990). Handwritten Zip Code
Recognition with Multilayer Networks. In Proceedings of the 10th International
Conference on Pattern Recognition. IEEE Computer Society Press.
Matan, O., Bromley, J., Burges, C. J. C., Denker, J. S., Jackel, L. D., Le Cun,
Y., Pednault, E. P. D., Satterfield, W. D., Stenard, C. E., and Thompson,
T. J. (1991). Reading Handwritten Digits: A ZIP code Recognition System
(To appear in COMPUTER).
Rabiner, L. R. (1989). A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. Proceedings of the IEEE, 77:257-286.
5,048 | 5,570 | Sequential Monte Carlo for Graphical Models
Christian A. Naesseth
Div. of Automatic Control
Linköping University
Linköping, Sweden
[email protected]
Fredrik Lindsten
Dept. of Engineering
The University of Cambridge
Cambridge, UK
[email protected]
Thomas B. Schön
Dept. of Information Technology
Uppsala University
Uppsala, Sweden
[email protected]
Abstract
We propose a new framework for how to use sequential Monte Carlo (SMC) algorithms for inference in probabilistic graphical models (PGM). Via a sequential
decomposition of the PGM we find a sequence of auxiliary distributions defined
on a monotonically increasing sequence of probability spaces. By targeting these
auxiliary distributions using SMC we are able to approximate the full joint distribution defined by the PGM. One of the key merits of the SMC sampler is that it
provides an unbiased estimate of the partition function of the model. We also show
how it can be used within a particle Markov chain Monte Carlo framework in order
to construct high-dimensional block-sampling algorithms for general PGMs.
1 Introduction
Bayesian inference in statistical models involving a large number of latent random variables is in
general a difficult problem. This renders inference methods that are capable of efficiently utilizing
structure important tools. Probabilistic Graphical Models (PGMs) are an intuitive and useful way
to represent and make use of underlying structure in probability distributions with many interesting
areas of applications [1].
Our main contribution is a new framework for constructing non-standard (auxiliary) target distributions of PGMs, utilizing what we call a sequential decomposition of the underlying factor graph, to
be targeted by a sequential Monte Carlo (SMC) sampler. This construction enables us to make use
of SMC methods developed and studied over the last 20 years, to approximate the full joint distribution defined by the PGM. As a byproduct, the SMC algorithm provides an unbiased estimate of the
partition function (normalization constant). We show how the proposed method can be used as an
alternative to standard methods such as the Annealed Importance Sampling (AIS) proposed in [2],
when estimating the partition function. We also make use of the proposed SMC algorithm to design
efficient, high-dimensional MCMC kernels for the latent variables of the PGM in a particle MCMC
framework. This enables inference about the latent variables as well as learning of unknown model
parameters in an MCMC setting.
During the last decade there has been substantial work on how to leverage SMC algorithms [3] to
solve inference problems in PGMs. The first approaches were PAMPAS [4] and nonparametric belief
propagation by Sudderth et al. [5, 6]. Since then, several different variants and refinements have been
proposed by e.g. Briers et al. [7], Ihler and Mcallester [8], Frank et al. [9]. They all rely on various
particle approximations of messages sent in a loopy belief propagation algorithm. This means that
in general, even in the limit of Monte Carlo samples, they are approximate methods. Compared
to these approaches our proposed methods are consistent and provide an unbiased estimate of the
normalization constant as a by-product.
Another branch of SMC-based methods for graphical models has been suggested by Hamze and
de Freitas [10]. Their method builds on the SMC sampler by Del Moral et al. [11], where the
initial target is a spanning tree of the original graph and subsequent steps add edges according to an
annealing schedule. Everitt [12] extends these ideas to learn parameters using particle MCMC [13].
Yet another take is provided by Carbonetto and de Freitas [14], where an SMC sampler is combined
with mean field approximations. Compared to these methods we can handle both non-Gaussian
and/or non-discrete interactions between variables and there is no requirement to perform MCMC
steps within each SMC step.
The left-right methods described by Wallach et al. [15] and extended by Buntine [16] to estimate
the likelihood of held-out documents in topic models are somewhat related in that they are SMCinspired. However, these are not actual SMC algorithms and they do not produce an unbiased
estimate of the partition function for finite sample set. On the other hand, a particle learning based
approach was recently proposed by Scott and Baldridge [17] and it can be viewed as a special case
of our method for this specific type of model.
2 Graphical models
A graphical model is a probabilistic model which factorizes according to the structure of an underlying graph G = {V, E}, with vertex set V and edge set E. By this we mean that the joint probability
density function (PDF) of the set of random variables indexed by V, XV := {x1 , . . . , x|V| }, can be
represented as a product of factors over the cliques of the graph:
    p(X_V) = (1/Z) ∏_{C∈C} ψ_C(X_C),    (1)

where C is the set of cliques in G, ψ_C is the factor for clique C, and Z = ∫ ∏_{C∈C} ψ_C(X_C) dX_V is
the partition function.
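In code, a model of this form is conveniently represented as a list of factors. The minimal sketch below (a representation chosen for illustration, not prescribed by the text) evaluates the log of the product in (1), i.e. log p(X_V) + log Z:

```python
import math

def log_unnormalized_density(factors, x):
    """log of prod_C psi_C(X_C) for the model in (1), up to the constant log Z.

    factors: list of (clique, psi) pairs; clique is a tuple of variable
             indices and psi maps the corresponding values to a positive real
    x:       dict mapping each variable index to its value
    """
    return sum(math.log(psi(tuple(x[v] for v in clique)))
               for clique, psi in factors)
```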
We will frequently use the notation X_I = ⋃_{i∈I} {x_i} for some subset I ⊆ {1, . . . , |V|}, and we write 𝒳_I for the range of X_I (i.e., X_I ∈ 𝒳_I). To make the interactions between the random variables explicit we define a factor graph F = {V, Ψ, E'} corresponding to G. The factor graph consists of two types of vertices, the original set of random variables X_V and the factors Ψ = {ψ_C : C ∈ C}. The edge set E' consists only of edges from variables to factors. In Figure 1a we show a simple toy example of an undirected graphical model, and one possible corresponding factor graph, Figure 1b, making the dependencies explicit. Both directed and undirected graphs can be represented by factor graphs.

[Figure 1: Undirected PGM and a corresponding factor graph. (a) Undirected graph over x_1, . . . , x_5. (b) Factor graph with factors ψ_1, . . . , ψ_5.]

3 Sequential Monte Carlo
In this section we propose a way to sequentially decompose a graphical model which we then make
use of to design an SMC algorithm for the PGM.
3.1 Sequential decomposition of graphical models
SMC methods can be used to approximate a sequence of probability distributions on a sequence of
probability spaces of increasing dimension. This is done by recursively updating a set of samples,
or particles, with corresponding nonnegative importance weights. The typical scenario is that of
state inference in state-space models, where the probability distributions targeted by the SMC sampler are the joint smoothing distributions of a sequence of latent states conditionally on a sequence
of observations; see e.g., Doucet and Johansen [18] for applications of this type. However, SMC is
not limited to these cases and it is applicable to a much wider class of models.
To be able to use SMC for inference in PGMs we have to define a sequence of target distributions.
However, these target distributions do not have to be marginal distributions under p(XV ). Indeed, as
long as the sequence of target distributions is constructed in such a way that, at some final iteration,
we recover p(XV ), all the intermediate target distributions may be chosen quite arbitrarily.
[Figure 2: Examples of five- (top) and three-step (bottom) sequential decompositions of the factor graph in Figure 1; panels (a)-(e) show γ̃_1(X_{L_1}) through γ̃_5(X_{L_5}), and panels (f)-(h) show γ̃_1(X_{L_1}) through γ̃_3(X_{L_3}).]
This is key to our development, since it lets us use the structure of the PGM to define a sequence of
intermediate target distributions for the sampler. We do this by a so-called sequential decomposition
of the graphical model. This amounts to simply adding factors to the target distribution, from the
product of factors in (1), at each step of the algorithm, iterating until all the factors have been
added. Constructing an artificial sequence of intermediate target distributions for an SMC sampler
is a simple, albeit underutilized, idea as it opens up for using SMC samplers for inference in a wide
range of probabilistic models; see e.g., Bouchard-Côté et al. [19], Del Moral et al. [11] for a few
applications of this approach.
Given a graph G with cliques C, let {φ_k}_{k=1}^K be a sequence of factors defined as

    φ_k(X_{I_k}) = ∏_{C∈C_k} ψ_C(X_C),

where the C_k ⊆ C are chosen such that ⋃_{k=1}^K C_k = C and C_i ∩ C_j = ∅ for i ≠ j, and
where I_k ⊆ {1, . . . , |V|} is the index set of the variables in the domain of φ_k, I_k = ⋃_{C∈C_k} C.
We emphasize that the cliques in C need not be maximal. In fact even auxiliary factors may be
introduced to allow for e.g. annealing between distributions. It follows that the PDF in (1) can be
written as p(X_V) = (1/Z) ∏_{k=1}^K φ_k(X_{I_k}). In principle, the choice and the ordering of the C_k's are
arbitrary, but in practice they will affect the performance of the proposed sampler. However, in many
common PGMs an intuitive ordering can be deduced from the structure of the model, see Section 5.

The sequential decomposition of the PGM is then based on the auxiliary quantities

    γ̃_k(X_{L_k}) := ∏_{ℓ=1}^k φ_ℓ(X_{I_ℓ}),   with L_k := ⋃_{ℓ=1}^k I_ℓ,

for k ∈ {1, . . . , K}. By construction, L_K = V and the joint PDF p(X_{L_K}) will be proportional to γ̃_K(X_{L_K}). Consequently, by using γ̃_k(X_{L_k}) as
the basis for the target sequence for an SMC sampler, we will obtain the correct target distribution at iteration K. However, a further requirement for this to be possible is that all the functions in the sequence are normalizable. For many graphical models this is indeed the case, and
then we can use γ̃_k(X_{L_k}), k = 1 to K, directly as our sequence of intermediate target densities.
If, however, ∫ γ̃_k(X_{L_k}) dX_{L_k} = ∞ for some k < K, an easy remedy is to modify the target
density to ensure normalizability. This is done by setting γ_k(X_{L_k}) = γ̃_k(X_{L_k}) q_k(X_{L_k}), where
q_k(X_{L_k}) is chosen so that ∫ γ_k(X_{L_k}) dX_{L_k} < ∞. We set q_K(X_{L_K}) ≡ 1 to make sure that
γ_K(X_{L_K}) ∝ p(X_{L_K}). Note that the integral ∫ γ_k(X_{L_k}) dX_{L_k} need not be computed explicitly, as
long as it can be established that it is finite. With this modification we obtain a sequence of unnormalized intermediate target densities for the SMC sampler as γ_1(X_{L_1}) = q_1(X_{L_1}) φ_1(X_{L_1}) and

    γ_k(X_{L_k}) = γ_{k-1}(X_{L_{k-1}}) · [q_k(X_{L_k}) / q_{k-1}(X_{L_{k-1}})] · φ_k(X_{I_k}),   k = 2, . . . , K.

The corresponding normalized
PDFs are given by π̄_k(X_{L_k}) = γ_k(X_{L_k})/Z_k, where Z_k = ∫ γ_k(X_{L_k}) dX_{L_k}. Figure 2 shows two
examples of possible subgraphs when applying the decomposition, in two different ways, to the
factor graph example in Figure 1.
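A sequential decomposition is straightforward to represent programmatically: pick disjoint batches C_1, . . . , C_K of the factors and accumulate the index sets I_k and L_k. The sketch below assumes each factor is stored together with the tuple of variable indices in its domain (an assumed representation, with the choice and ordering of the batches left to the user):

```python
def sequential_decomposition(factor_domains, batches):
    """Build the index sets I_k and L_k for a chosen partition of the factors.

    factor_domains: dict factor id -> tuple of variable indices (its domain)
    batches:        disjoint lists of factor ids C_1, ..., C_K that together
                    cover all factors
    Returns [(C_k, I_k, L_k)]; the auxiliary quantity gamma_tilde_k is then
    the product of all factors in C_1, ..., C_k, defined on L_k.
    """
    out, L = [], set()
    for C_k in batches:
        I_k = set().union(*(set(factor_domains[f]) for f in C_k))
        L = L | I_k                  # L_k = L_{k-1} union I_k
        out.append((list(C_k), I_k, set(L)))
    return out
```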
3.2 Sequential Monte Carlo for PGMs
At iteration k, the SMC sampler approximates the target distribution π̄_k by a collection of weighted
particles {X^i_{L_k}, w^i_k}_{i=1}^N. These samples define an empirical point-mass approximation of the target
distribution. In what follows, we shall use the notation ξ_k := X_{I_k \ L_{k-1}} to refer to the collection of
random variables that are in the domain of φ_k, but not in the domain of φ_{k-1}. This corresponds to
the collection of random variables with which the particles are augmented at each iteration.

Initially, π̄_1 is approximated by importance sampling. We proceed inductively and assume that we
have at hand a weighted sample {X^i_{L_{k-1}}, w^i_{k-1}}_{i=1}^N approximating π̄_{k-1}(X_{L_{k-1}}). This sample is
propagated forward by simulating, conditionally independently given the particle generation up to
iteration k-1, an ancestor index a^i_k with P(a^i_k = j) ∝ ν^j_{k-1} w^j_{k-1}, j = 1, . . . , N. Here the
ν^i_{k-1} := ν_{k-1}(X^i_{L_{k-1}}), known as adjustment multiplier weights, are used in the auxiliary
SMC framework to adapt the resampling procedure to the current target density π̄_k [20]. Given the
ancestor indices, we simulate particle increments {ξ^i_k}_{i=1}^N from a proposal density ξ^i_k ∼ r_k(· | X^{a^i_k}_{L_{k-1}})
on 𝒳_{I_k \ L_{k-1}}, and augment the particles as X^i_{L_k} := X^{a^i_k}_{L_{k-1}} ∪ ξ^i_k.

After having performed this procedure for the N ancestor indices and particles, they are assigned
importance weights w^i_k = W_k(X^i_{L_k}). The weight function, for k ≥ 2, is given by

    W_k(X_{L_k}) = γ_k(X_{L_k}) / [ γ_{k-1}(X_{L_{k-1}}) ν_{k-1}(X_{L_{k-1}}) r_k(ξ_k | X_{L_{k-1}}) ],    (2)

where, again, we write ξ_k = X_{I_k \ L_{k-1}}. We give a summary of the SMC method in Algorithm 1.

Algorithm 1 Sequential Monte Carlo (SMC)
    Perform each step for i = 1, . . . , N.
    Sample X^i_{L_1} ∼ r_1(·).
    Set w^i_1 = γ_1(X^i_{L_1}) / r_1(X^i_{L_1}).
    for k = 2 to K do
        Sample a^i_k according to P(a^i_k = j) = ν^j_{k-1} w^j_{k-1} / Σ_l ν^l_{k-1} w^l_{k-1}.
        Sample ξ^i_k ∼ r_k(· | X^{a^i_k}_{L_{k-1}}) and set X^i_{L_k} = X^{a^i_k}_{L_{k-1}} ∪ ξ^i_k.
        Set w^i_k = W_k(X^i_{L_k}).
    end for

In the case that I_k \ L_{k-1} = ∅ for some k, resampling and propagation steps are superfluous. The
easiest way to handle this is to simply skip these steps and directly compute importance weights. An
alternative approach is to bridge the two target distributions π̄_{k-1} and π̄_k similarly to Del Moral et al. [11].

Since the proposed sampler for PGMs falls within a general SMC framework, standard convergence
analysis applies. See e.g., Del Moral [21] for a comprehensive collection of theoretical results
on consistency, central limit theorems, and non-asymptotic bounds for SMC samplers.

The choices of proposal density and adjustment multipliers can quite significantly affect the performance of the sampler. It follows from (2) that W_k(X_{L_k}) ≡ 1 if we choose

    ν_{k-1}(X_{L_{k-1}}) = ∫ [ γ_k(X_{L_k}) / γ_{k-1}(X_{L_{k-1}}) ] dξ_k   and
    r_k(ξ_k | X_{L_{k-1}}) = γ_k(X_{L_k}) / [ γ_{k-1}(X_{L_{k-1}}) ν_{k-1}(X_{L_{k-1}}) ].

In this case, the SMC sampler is said to be fully adapted.
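Algorithm 1 translates almost line by line into code. The skeleton below is a sketch under the assumption that the unnormalized targets, adjustment multipliers and proposals are supplied as callables (these interfaces are ours, not part of the paper). It also accumulates, in log-space for numerical stability, the running factors of the unbiased partition-function estimate discussed in the next subsection:

```python
import numpy as np

def smc_pgm(K, N, r1_sample, r1_logpdf, gamma_log, rk_sample, rk_logpdf,
            nu_log, rng):
    """Sketch of Algorithm 1; every callable is an assumed user interface.

    gamma_log(k, path)     -- log gamma_k, the unnormalized target
    nu_log(k, path)        -- log nu_k, the adjustment multiplier
    rk_sample(k, path)     -- draw the increment xi_k given an ancestor path
    rk_logpdf(k, xi, path) -- log r_k(xi | path)
    Returns the particle paths, their final log-weights and log Z-hat.
    """
    particles = [[r1_sample()] for _ in range(N)]
    logw = np.array([gamma_log(1, x) - r1_logpdf(x[0]) for x in particles])
    log_terms = []
    for k in range(2, K + 1):
        lognu = np.array([nu_log(k - 1, x) for x in particles])
        log_terms.append(np.logaddexp.reduce(logw + lognu) - np.log(N))
        # Resample ancestors: P(a = j) proportional to nu_{k-1}^j * w_{k-1}^j.
        p = np.exp(logw + lognu - np.logaddexp.reduce(logw + lognu))
        ancestors = rng.choice(N, size=N, p=p)
        new_particles, new_logw = [], np.empty(N)
        for i, a in enumerate(ancestors):
            xi = rk_sample(k, particles[a])
            path = particles[a] + [xi]     # X_{L_k} = X_{L_{k-1}} plus xi_k
            new_logw[i] = (gamma_log(k, path)
                           - gamma_log(k - 1, particles[a])
                           - lognu[a]
                           - rk_logpdf(k, xi, particles[a]))
            new_particles.append(path)
        particles, logw = new_particles, new_logw
    logZ = sum(log_terms) + np.logaddexp.reduce(logw) - np.log(N)
    return particles, logw, logZ
```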
3.3 Estimating the partition function
The partition function of a graphical model is a very interesting quantity in many applications.
Examples include likelihood-based learning of the parameters of the PGM, statistical mechanics
where it is related to the free energy of a system of objects, and information theory where it is
related to the capacity of a channel. However, as stated by Hamze and de Freitas [10], estimating
the partition function of a loopy graphical model is a "notoriously difficult" task. Indeed, even for
discrete problems simple and accurate estimators have proved to be elusive, and MCMC methods
do not provide any simple way of computing the partition function.
On the contrary, SMC provides a straightforward estimator of the normalizing constant (i.e. the
partition function), given as a byproduct of the sampler according to,
! (k?1
)
N
N
X
Y 1 X
1
bkN :=
wi
? i wi .
(3)
Z
N i=1 k
N i=1 ` `
`=1
It may not be obvious to see why (3) is a natural estimator of the normalizing constant Zk . However,
a by now well known result is that this SMC-based estimator is unbiased. This result is due to
Del Moral [21, Proposition 7.4.1] and, for the special case of inference in state-space models, it has
also been established by Pitt et al. [22]. For completeness we also offer a proof using the present
notation in the supplementary material. Since ZK = Z, we thus obtain an estimator of the partition
function of the PGM at iteration K of the sampler. Besides from being unbiased, this estimator is
also consistent and asymptotically normal; see Del Moral [21].
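In log-space the estimator (3) is a sum of log-mean-exp terms, one per iteration. A small sketch computing it from stored weight arrays (the array layout is our own assumption):

```python
import numpy as np

def log_partition_estimate(logw, lognu):
    """Log of the unbiased estimator (3) at the final iteration K.

    logw:  (K, N) array of log-weights log w_k^i for k = 1, ..., K
    lognu: (K-1, N) array of log adjustment multipliers log nu_k^i, k < K
    """
    K, N = logw.shape
    terms = [np.logaddexp.reduce(logw[k] + lognu[k]) - np.log(N)
             for k in range(K - 1)]
    return sum(terms) + np.logaddexp.reduce(logw[-1]) - np.log(N)
```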
In [23] we have studied a specific information theoretic application (computing the capacity of a
two-dimensional channel) and inspired by the algorithm proposed here we were able to design a
sampler with significantly improved performance compared to the previous state-of-the-art.
4 Particle MCMC and partial blocking
Two shortcomings of SMC are: (i) it does not solve the parameter learning problem, and (ii) the
quality of the estimates of marginal distributions p(X_{L_k}) = ∫ π̄_K(X_{L_K}) dX_{L_K \ L_k} deteriorates for
k ≪ K due to the fact that the particle trajectories degenerate as the particle system evolves (see
e.g., [18]). Many methods have been proposed in the literature to address these problems; see e.g.
[24] and the references therein. Among these, the recently proposed particle MCMC (PMCMC)
framework [13], plays a prominent role. PMCMC algorithms make use of SMC to construct (in
general) high-dimensional Markov kernels that can be used within MCMC. These methods were
shown by [13] to be exact, in the sense that the apparent particle approximation in the construction
of the kernel does not change its invariant distribution. This property holds for any number of
particles N ≥ 2, i.e., PMCMC does not rely on asymptotics in N for correctness.
The fact that the SMC sampler for PGMs presented in Algorithm 1 fits under a general SMC umbrella implies that we can also straightforwardly make use of this algorithm within PMCMC. This
allows us to construct a Markov kernel (indexed by the number of particles N ) on the space of latent
variables of the PGM, P_N(X'_{L_K}, dX_{L_K}), which leaves the full joint distribution p(X_V) invariant.
We do not dwell on the details of the implementation here, but refer instead to [13] for the general
setup and [25] for the specific method that we have used in the numerical illustration in Section 5.
PMCMC methods enable blocking of the latent variables of the PGM in an MCMC scheme. Simulating all the latent variables XLK jointly is useful since, in general, this will reduce the autocorrelation when compared to simulating the variables xj one at a time [26]. However, it is also possible to
employ PMCMC to construct an algorithm in between these two extremes, a strategy that we believe
will be particularly useful in the context of PGMs. Let {V^m, m ∈ {1, . . . , M}} be a partition of V.
Ideally, a Gibbs sampler for the joint distribution p(X_V) could then be constructed by simulating,
using a systematic or a random scan, from the conditional distributions

    p(X_{V^m} | X_{V \ V^m}) for m = 1, . . . , M.    (4)

We refer to this strategy as partial blocking, since it amounts to simulating a subset of the variables,
but not necessarily all of them, jointly. Note that, if we set M = |V| and V^m = {m} for m =
1, . . . , M, this scheme reduces to a standard Gibbs sampler. On the other extreme, with M = 1
and V^1 = V, we get a fully blocked sampler which targets directly the full joint distribution p(X_V).
From (1) it follows that the conditional distributions (4) can be expressed as

    p(X_{V^m} | X_{V \ V^m}) ∝ ∏_{C∈C^m} ψ_C(X_C),    (5)

where C^m = {C ∈ C : C ∩ V^m ≠ ∅}. While it is in general not possible to sample exactly from
these conditionals, we can make use of PMCMC to facilitate a partially blocked Gibbs sampler for
a PGM. By letting p(X_{V^m} | X_{V \ V^m}) be the target distribution for the SMC sampler of Algorithm 1,
we can construct a PMCMC kernel P^m_N that leaves the conditional distribution (5) invariant. This
suggests the following approach: with X'_V being the current state of the Markov chain, update block
m by sampling

    X_{V^m} ∼ P^m_N⟨X'_{V \ V^m}⟩(X'_{V^m}, ·).    (6)

Here we have indicated explicitly in the notation that the PMCMC kernel for the conditional distribution p(X_{V^m} | X_{V \ V^m}) depends on both X'_{V \ V^m} (which is considered to be fixed throughout the
sampling procedure) and on X'_{V^m} (which defines the current state of the PMCMC procedure).

As mentioned above, while being generally applicable, we believe that partial blocking of PMCMC
samplers will be particularly useful for PGMs. The reason is that we can choose the vertex sets V^m
for m = 1, . . . , M in order to facilitate simple sequential decompositions of the induced subgraphs.
For instance, it is always possible to choose the partition in such a way that all the induced subgraphs
are chains.
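Schematically, partial blocking is an outer Gibbs scan in which each block update is a single draw from the corresponding PMCMC kernel. In the sketch below, pmcmc_kernel is a hypothetical interface wrapping an SMC-based kernel for the conditional (5), for instance via conditional SMC as in [25]:

```python
def partially_blocked_gibbs(blocks, pmcmc_kernel, X, num_sweeps):
    """Systematic-scan Gibbs over a partition {V^1, ..., V^M} of the vertices.

    blocks:       list of vertex index sets V^m
    pmcmc_kernel: callable (block, X_rest, X_block) -> new X_block; an
                  SMC-based kernel leaving the conditional (5) invariant
                  (hypothetical interface)
    X:            dict mapping vertex index -> current value
    """
    for _ in range(num_sweeps):
        for V_m in blocks:
            X_rest = {v: X[v] for v in X if v not in V_m}
            X_block = {v: X[v] for v in V_m}
            X.update(pmcmc_kernel(V_m, X_rest, X_block))  # draw as in (6)
    return X
```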
5 Experiments
In this section we evaluate the proposed SMC sampler on three examples to illustrate the merits of
our approach. Additional details and results are available in the supplementary material and code to
reproduce results can be found in [27]. We first consider an example from statistical mechanics, the
classical XY model, to illustrate the impact of the sequential decomposition. Furthermore, we profile
our algorithm with the "gold standard" AIS [2] and Annealed Sequential Importance Resampling
(ASIR¹) [11]. In the second example we apply the proposed method to the problem of scoring of
topic models, and finally we consider a simple toy model, a Gaussian Markov random field (MRF),
which illustrates that our proposed method has the potential to significantly decrease correlations
between samples in an MCMC scheme. Furthermore, we provide an exact SMC-approximation of
the tree-sampler by Hamze and de Freitas [28] and thereby extend the scope of this powerful method.
5.1
Classical XY model
The classical XY model (see e.g. [29]) is a member in the family of n-vector models used in
statistical mechanics. It can be seen as a generalization of the well known Ising model with a
two-dimensional electromagnetic spin. The spin vector is described by its angle x ∈ (-π, π]. We
will consider square lattices with periodic boundary conditions. The joint PDF of the classical XY
model with equal interaction is given by

    p(X_V) ∝ e^{β Σ_{(i,j)∈E} cos(x_i - x_j)},    (7)

where β denotes the inverse temperature.

[Figure 3: Mean-squared-errors for sample size N in the estimates of log Z for AIS and four different orderings in the proposed SMC framework; MSE on a log scale against N between 10^4 and 10^5, for AIS, SMC RND-N, SMC SPIRAL, SMC DIAG and SMC L-R.]

To evaluate the effect of different sequence orderings on the accuracy of the estimates of the
log-normalizing-constant log Z we ran several experiments on a 16 × 16 XY model with β = 1.1
(approximately the critical inverse temperature
[30]). For simplicity we add one node at a time and all factors bridging this node with previously
added nodes. Full adaptation in this case is possible due to the optimal proposal being a von Mises
distribution. We show results for the following cases: Random neighbour (RND-N) First node selected randomly among all nodes, concurrent nodes selected randomly from the set of nodes with a
neighbour in X_{L_{k-1}}. Diagonal (DIAG) Nodes added by traversing diagonally (45° angle) from left
to right. Spiral (SPIRAL) Nodes added spiralling in towards the middle from the edges. Left-Right
(L-R) Nodes added by traversing the graph left to right, from top to bottom.
We also give results of AIS with single-site-Gibbs updates and 1 000 annealing distributions linearly
spaced from zero to one, starting from a uniform distribution (geometric spacing did not yield any
improvement over linear spacing for this case). The "true value" was estimated using AIS with
10 000 intermediate distributions and 5 000 importance samples. We can see from the results in Figure 3 that designing a good sequential decomposition for the SMC sampler is important. However,
the intuitive and fairly simple choice L-R does give very good results comparable to that of AIS.
Furthermore, we consider a larger size of 64 × 64 and evaluate the performance of the L-R ordering
compared to AIS and the ASIR method. Figure 4 displays box-plots of 10 independent runs. We
set N = 10^5 for the proposed SMC sampler and then match the computational costs of AIS and
ASIR with this computational budget. A fair amount of time was spent in tuning the AIS and ASIR
algorithms; 10 000 linear annealing distributions seemed to give best performance in these cases. We
can see that the L-R ordering gives results comparable to fairly well-tuned AIS and ASIR algorithms;
the ordering of the methods depending on the temperature of the model. One option that does make
the SMC algorithm interesting for these types of applications is that it can easily be parallelized
over the particles, whereas AIS/ASIR has limited possibilities of parallel implementation over the
(crucial) annealing steps.
¹ ASIR is a specific instance of the SMC sampler by [11], corresponding to AIS with the addition of resampling steps, but to avoid confusion with the proposed method we choose to refer to it as ASIR.
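For the XY model the ingredients needed by the sampler are explicit. When a single spin x_v is added, the locally optimal proposal is proportional to exp(β Σ_u cos(x_v - x_u)) over the already-included neighbours u; since Σ_u cos(x - x_u) = R cos(x - μ) with R e^{iμ} = Σ_u e^{i x_u}, this is a von Mises density with mean μ and concentration κ = βR. A sketch (the neighbour bookkeeping is left to the caller):

```python
import numpy as np

def von_mises_site_proposal(beta, neighbour_angles, rng):
    """Fully adapted proposal for adding one XY spin x_v.

    neighbour_angles: angles of the neighbours of v already included in the path
    Returns the sampled angle and its log-density (needed for the weights).
    """
    z = np.sum(np.exp(1j * np.asarray(neighbour_angles)))
    mu, kappa = float(np.angle(z)), beta * float(np.abs(z))
    x = rng.vonmises(mu, kappa)  # von Mises with mean mu, concentration kappa
    logpdf = kappa * np.cos(x - mu) - np.log(2 * np.pi * np.i0(kappa))
    return x, logpdf
```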
[Figure 4: The logarithm of the estimated partition function for the 64 × 64 XY model with inverse temperature 0.5 (left), 1.1 (middle) and 1.7 (right); box-plots over 10 runs comparing AIS, ASIR and SMC L-R.]
[Figure 6: Estimates of the log-likelihood of held-out documents for various datasets. (a) Small simulated example (LRS, SMC and the exact value); (b) PMC and (c) 20 newsgroups (LRS 1, LRS 2, SMC 1, SMC 2).]
5.2 Likelihood estimation in topic models
Topic models such as Latent Dirichlet Allocation (LDA) [31] are popular models for reasoning
about large text corpora. Model evaluation is often conducted by computing the likelihood of
held-out documents w.r.t. a learnt model. However, this is a challenging problem on its own, which
has received much recent interest [15, 16, 17], since it essentially corresponds to computing the
partition function of a graphical model; see Figure 5. The SMC procedure of Algorithm 1 can be
used to solve this problem by defining a sequential decomposition of the graphical model. In
particular, we consider the decomposition corresponding to first including the node θ and then,
subsequently, introducing the nodes z_1 to z_M in any order. Interestingly, if we then make use of a
Rao-Blackwellization over the variable θ, the SMC sampler of Algorithm 1 reduces exactly to a
method that has previously been proposed for this specific problem [17]. In [17], the method is
derived by reformulating the model in terms of its sufficient statistics and phrasing this as a particle
learning problem; here we obtain the same procedure as a special case of the general SMC
algorithm operating on the original model.

[Figure 5: LDA as graphical model.]
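To make the Rao-Blackwellization concrete: with θ marginalized under its Dirichlet(α) prior, a particle is a topic-assignment path z_{1:m} summarized by its topic counts, the resampling weights are the predictive probabilities p(w_m | z_{1:m-1}), and the running product of their particle averages estimates the held-out likelihood. The sketch below is a simplified, fully adapted variant with multinomial resampling at every step; φ, α and all interfaces are our own notation for the learnt model, not code from [17]:

```python
import numpy as np

def heldout_loglik(words, phi, alpha, N, rng):
    """SMC estimate of log p(w_{1:M}) for one held-out document.

    words: list of word ids w_1, ..., w_M
    phi:   (topics, vocab) topic-word probabilities from the learnt model
    alpha: (topics,) Dirichlet prior on the topic proportions theta
    Each particle keeps only its topic counts (theta is marginalized out).
    """
    T = phi.shape[0]
    counts = np.zeros((N, T))                          # topic counts per particle
    loglik = 0.0
    for m, w in enumerate(words):
        pred_z = (alpha + counts) / (alpha.sum() + m)  # p(z_m = k | z_{1:m-1})
        incr = pred_z @ phi[:, w]                      # p(w_m | z_{1:m-1})
        loglik += np.log(incr.mean())                  # running product, cf. (3)
        # Fully adapted step: resample on the predictive probabilities,
        # then draw z_m from its conditional given the observed word w_m.
        anc = rng.choice(N, size=N, p=incr / incr.sum())
        post_z = pred_z[anc] * phi[:, w]
        post_z /= post_z.sum(axis=1, keepdims=True)
        z = np.array([rng.choice(T, p=row) for row in post_z])
        counts = counts[anc]
        counts[np.arange(N), z] += 1
    return loglik
```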
We use the same data and learnt models as Wallach et al. [15], i.e. 20 newsgroups, and PubMed
Central abstracts (PMC). We compare with the Left-Right-Sequential (LRS) sampler [16], which is
an improvement over the method proposed by Wallach et al. [15]. Results on simulated and real data
experiments are provided in Figure 6. For the simulated example (Figure 6a), we use a small model
with 10 words and 4 topics to be able to compute the exact log-likelihood. We keep the number of
particles in the SMC algorithm equal to the number of Gibbs steps in LRS; this means LRS is about
an order-of-magnitude more computationally demanding than the SMC method. Despite the fact that
the SMC sampler uses only about a tenth of the computational time of the LRS sampler, it performs
significantly better in terms of estimator variance. The other two plots show results on real data with
10 held-out documents for each dataset. For a fixed number of Gibbs steps we choose the number of
particles for each document to make the computational cost approximately equal. Run #2 has twice
the number of particles/samples as in run #1. We show the mean of 10 runs and error-bars estimated
using bootstrapping with 10 000 samples. Computing the logarithm of Ẑ introduces a negative bias,
which means larger values of log Ẑ typically imply more accurate results. The results on real
data do not show the drastic improvement we see in the simulated example, which could be due to
degeneracy problems for long documents. An interesting approach that could improve results would
be to use an SMC algorithm tailored to discrete distributions, e.g. Fearnhead and Clifford [32].
5.3 Gaussian MRF
Finally, we consider a simple toy model to illustrate how the SMC sampler of Algorithm 1 can be
incorporated in PMCMC sampling. We simulate data from a zero mean Gaussian 10 × 10 lattice
MRF with observation and interaction standard deviations of σ_i = 1 and σ_ij = 0.1 respectively.
We use the proposed SMC algorithm together with the PMCMC method by Lindsten et al. [25]. We
compare this with standard Gibbs sampling and the tree sampler by Hamze and de Freitas [28].
We use a moderate number of N = 50 particles in the PMCMC sampler (recall that it admits the
correct invariant distribution for any N ≥ 2). In Figure 7 we can see the empirical autocorrelation
functions (ACF) centered around the true posterior mean for variable x_82 (selected randomly from
among X_V; similar results hold for all the variables of the model). Due to the strong interaction
between the latent variables, the samples generated by the standard Gibbs sampler are strongly
correlated. Tree-sampling and PMCMC with partial blocking show nearly identical gains compared
to Gibbs. This is interesting, since it suggests that simulating from the SMC-based PMCMC kernel
can be almost as efficient as exact simulation, even using a moderate number of particles. Indeed,
PMCMC with partial blocking can be viewed as an exact SMC-approximation of the tree sampler,
extending the scope of tree-sampling beyond discrete and Gaussian models. The fully blocked
PMCMC algorithm achieves the best ACF, dropping off to zero considerably faster than for the
other methods. This is not surprising since this sampler simulates all the latent variables jointly,
which reduces the autocorrelation, in particular when the latent variables are strongly dependent.
However, it should be noted that this method also has the highest computational cost per iteration.

[Figure 7: The empirical ACF, over lags 0-300, for Gibbs sampling, PMCMC, PMCMC with partial blocking, and tree sampling.]
6 Conclusion
We have proposed a new framework for inference in PGMs using SMC and illustrated it on three
examples. These examples show that it can be a viable alternative to standard methods used for inference and partition function estimation problems. An interesting avenue for future work is combining
our proposed methods with AIS, to see if we can improve on both.
Acknowledgments
We would like to thank Iain Murray for his kind and very prompt help in providing the data for
the LDA example. This work was supported by the projects: Learning of complex dynamical systems (Contract number: 637-2014-466) and Probabilistic modeling of dynamical systems (Contract
number: 621-2013-5524), both funded by the Swedish Research Council.
References
[1] M. I. Jordan. Graphical models. Statistical Science, 19(1):140-155, 2004.
[2] R. M. Neal. Annealed importance sampling. Statistics and Computing, 11(2):125-139, 2001.
[3] A. Doucet, N. De Freitas, N. Gordon, et al. Sequential Monte Carlo methods in practice. Springer New York, 2001.
[4] M. Isard. PAMPAS: Real-valued graphical models for computer vision. In Proceedings of the conference on Computer Vision and Pattern Recognition (CVPR), Madison, WI, USA, June 2003.
[5] E. B. Sudderth, A. T. Ihler, W. T. Freeman, and A. S. Willsky. Nonparametric belief propagation. In Proceedings of the conference on Computer Vision and Pattern Recognition (CVPR), Madison, WI, USA, 2003.
[6] E. B. Sudderth, A. T. Ihler, M. Isard, W. T. Freeman, and A. S. Willsky. Nonparametric belief propagation. Communications of the ACM, 53(10):95-103, 2010.
[7] M. Briers, A. Doucet, and S. S. Singh. Sequential auxiliary particle belief propagation. In Proceedings of the 8th International Conference on Information Fusion, Philadelphia, PA, USA, 2005.
[8] A. T. Ihler and D. A. McAllester. Particle belief propagation. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), Clearwater Beach, FL, USA, 2009.
[9] A. Frank, P. Smyth, and A. T. Ihler. Particle-based variational inference for continuous systems. In Advances in Neural Information Processing Systems (NIPS), pages 826-834, 2009.
[10] F. Hamze and N. de Freitas. Hot coupling: a particle approach to inference and normalization on pairwise undirected graphs of arbitrary topology. In Advances in Neural Information Processing Systems (NIPS), 2005.
[11] P. Del Moral, A. Doucet, and A. Jasra. Sequential Monte Carlo samplers. Journal of the Royal Statistical Society: Series B, 68(3):411-436, 2006.
[12] R. G. Everitt. Bayesian parameter estimation for latent Markov random fields and social networks. Journal of Computational and Graphical Statistics, 21(4):940-960, 2012.
[13] C. Andrieu, A. Doucet, and R. Holenstein. Particle Markov chain Monte Carlo methods. Journal of the Royal Statistical Society: Series B, 72(3):269-342, 2010.
[14] P. Carbonetto and N. de Freitas. Conditional mean field. In Advances in Neural Information Processing Systems (NIPS) 19. MIT Press, 2007.
[15] H. M. Wallach, I. Murray, R. Salakhutdinov, and D. Mimno. Evaluation methods for topic models. In Proceedings of the 26th International Conference on Machine Learning, pages 1105-1112, 2009.
[16] W. Buntine. Estimating likelihoods for topic models. In Advances in Machine Learning, pages 51-64. Springer, 2009.
[17] G. S. Scott and J. Baldridge. A recursive estimate for the predictive likelihood in a topic model. In Proceedings of the 16th International Conference on Artificial Intelligence and Statistics (AISTATS), pages 1105-1112, Clearwater Beach, FL, USA, 2009.
[18] A. Doucet and A. Johansen. A tutorial on particle filtering and smoothing: Fifteen years later. In D. Crisan and B. Rozovskii, editors, The Oxford Handbook of Nonlinear Filtering. Oxford University Press, 2011.
[19] A. Bouchard-Côté, S. Sankararaman, and M. I. Jordan. Phylogenetic inference via sequential Monte Carlo. Systematic Biology, 61(4):579-593, 2012.
[20] M. K. Pitt and N. Shephard. Filtering via simulation: Auxiliary particle filters. Journal of the American Statistical Association, 94(446):590-599, 1999.
[21] P. Del Moral. Feynman-Kac Formulae - Genealogical and Interacting Particle Systems with Applications. Probability and its Applications. Springer, 2004.
[22] M. K. Pitt, R. S. Silva, P. Giordani, and R. Kohn. On some properties of Markov chain Monte Carlo simulation methods based on the particle filter. Journal of Econometrics, 171:134-151, 2012.
[23] C. A. Naesseth, F. Lindsten, and T. B. Schön. Capacity estimation of two-dimensional channels using sequential Monte Carlo. In Proceedings of the IEEE Information Theory Workshop (ITW), Hobart, Tasmania, Australia, November 2014.
[24] F. Lindsten and T. B. Schön. Backward simulation methods for Monte Carlo statistical inference. Foundations and Trends in Machine Learning, 6(1):1-143, 2013.
[25] F. Lindsten, M. I. Jordan, and T. B. Schön. Particle Gibbs with ancestor sampling. Journal of Machine Learning Research, 15:2145-2184, June 2014.
[26] C. P. Robert and G. Casella. Monte Carlo statistical methods. Springer New York, 2004.
[27] C. A. Naesseth, F. Lindsten, and T. B. Schön. smc-pgm, 2014. URL http://dx.doi.org/10.5281/zenodo.11947.
[28] F. Hamze and N. de Freitas. From fields to trees. In Proceedings of the 20th conference on Uncertainty in Artificial Intelligence (UAI), Banff, Canada, July 2004.
[29] J. M. Kosterlitz and D. J. Thouless. Ordering, metastability and phase transitions in two-dimensional systems. Journal of Physics C: Solid State Physics, 6(7):1181, 1973.
[30] Y. Tomita and Y. Okabe. Probability-changing cluster algorithm for two-dimensional XY and clock models. Physical Review B: Condensed Matter and Materials Physics, 65:184405, 2002.
[31] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, March 2003.
[32] Paul Fearnhead and Peter Clifford. On-line inference for hidden Markov models via particle filters. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 65(4):887-899, 2003.
9
Concavity of reweighted Kikuchi approximation
Po-Ling Loh
Department of Statistics
The Wharton School
University of Pennsylvania
[email protected]
Andre Wibisono
Computer Science Division
University of California, Berkeley
[email protected]
Abstract
We analyze a reweighted version of the Kikuchi approximation for estimating the
log partition function of a product distribution defined over a region graph. We
establish sufficient conditions for the concavity of our reweighted objective function in terms of weight assignments in the Kikuchi expansion, and show that a
reweighted version of the sum product algorithm applied to the Kikuchi region
graph will produce global optima of the Kikuchi approximation whenever the algorithm converges. When the region graph has two layers, corresponding to a
Bethe approximation, we show that our sufficient conditions for concavity are
also necessary. Finally, we provide an explicit characterization of the polytope of
concavity in terms of the cycle structure of the region graph. We conclude with
simulations that demonstrate the advantages of the reweighted Kikuchi approach.
1 Introduction
Undirected graphical models are a familiar framework in diverse application domains such as computer vision, statistical physics, coding theory, social science, and epidemiology. In certain settings
of interest, one is provided with potential functions defined over nodes and (hyper)edges of the
graph. A crucial step in probabilistic inference is to compute the log partition function of the distribution based on these potential functions for a given graph structure. However, computing the log
partition function either exactly or approximately is NP-hard in general [2, 17]. An active area of research involves finding accurate approximations of the log partition function and characterizing the
graph structures for which such approximations may be computed efficiently [29, 22, 7, 19, 25, 18].
When the underlying graph is a tree, the log partition function may be computed exactly via the sum
product algorithm in time linear in the number of nodes [15]. However, when the graph contains
cycles, a generalized version of the sum product algorithm known as loopy belief propagation may
either fail to converge or terminate in local optima of a nonconvex objective function [26, 20, 8, 13].
In this paper, we analyze the Kikuchi approximation method, which is constructed from a variational
representation of the log partition function by replacing the entropy with an expression that decomposes with respect to a region graph. Kikuchi approximations were previously introduced in the
physics literature [9] and reformulated by Yedidia et al. [28, 29] and others [1, 14] in the language of
graphical models. The Bethe approximation, which is a special case of the Kikuchi approximation
when the region graph has only two layers, has been studied by various authors [3, 28, 5, 25]. In addition, a reweighted version of the Bethe approximation was proposed by Wainwright et al. [22, 16].
As described in Vontobel [21], computing the global optimum of the Bethe variational problem may
in turn be used to approximate the permanent of a nonnegative square matrix.
The particular objective function that we study generalizes the Kikuchi objective appearing in previous literature by assigning arbitrary weights to individual terms in the Kikuchi entropy expansion.
We establish necessary and sufficient conditions under which this class of objective functions is
concave, so a global optimum may be found efficiently. Our theoretical results synthesize known results on Kikuchi and Bethe approximations, and our main theorem concerning concavity conditions
for the reweighted Kikuchi entropy recovers existing results when specialized to the unweighted
1
Kikuchi [14] or reweighted Bethe [22] case. Furthermore, we provide a valuable converse result
in the reweighted Bethe case, showing that when our concavity conditions are violated, the entropy
function cannot be concave over the whole feasible region. As demonstrated by our experiments,
a message-passing algorithm designed to optimize the Kikuchi objective may terminate in local
optima for weights outside the concave region. Watanabe and Fukumizu [24, 25] provide a similar
converse in the unweighted Bethe case, but our proof is much simpler and our result is more general.
In the reweighted Bethe setting, we also present a useful characterization of the concave region of
the Bethe entropy function in terms of the geometry of the graph. Specifically, we show that if the
region graph consists of only singleton vertices and pairwise edges, then the region of concavity
coincides with the convex hull of incidence vectors of single-cycle forest subgraphs of the original
graph. When the region graph contains regions with cardinality greater than two, the latter region
may be strictly contained in the former; however, our result provides a useful way to generate weight
vectors within the region of concavity. Whereas Wainwright et al. [22] establish the concavity of
the reweighted Bethe objective on the spanning forest polytope, that region is contained within the
single-cycle forest polytope, and our simulations show that generating weight vectors in the latter
polytope may yield closer approximations to the log partition function.
The remainder of the paper is organized as follows: In Section 2, we review background information
about the Kikuchi and Bethe approximations. In Section 3, we provide our main results on concavity
conditions for the reweighted Kikuchi approximation, including a geometric characterization of the
region of concavity in the Bethe case. Section 4 outlines the reweighted sum product algorithm
and proves that fixed points correspond to global optima of the Kikuchi approximation. Section 5
presents experiments showing the improved accuracy of the reweighted Kikuchi approximation over
the region of concavity. Technical proofs and additional simulations are contained in the Appendix.
2 Background and problem setup
In this section, we review basic concepts of the Kikuchi approximation and establish some terminology to be used in the paper.
Let $G = (V, R)$ denote a region graph defined over the vertex set $V$, where each region $r \in R$ is a subset of $V$. Directed edges correspond to inclusion, so $r \to s$ is an edge of $G$ if $s \subseteq r$. We use the following notation, for $r \in R$:
$$A(r) := \{s \in R : r \subsetneq s\} \quad \text{(ancestors of } r\text{)}$$
$$F(r) := \{s \in R : r \subseteq s\} \quad \text{(forebears of } r\text{)}$$
$$N(r) := \{s \in R : r \subseteq s \text{ or } s \subseteq r\} \quad \text{(neighbors of } r\text{)}.$$
For $R' \subseteq R$, we define $A(R') = \bigcup_{r \in R'} A(r)$, and we define $F(R')$ and $N(R')$ similarly.
We consider joint distributions $x = (x_s)_{s \in V}$ that factorize over the region graph; i.e.,
$$p(x) = \frac{1}{Z(\alpha)} \prod_{r \in R} \alpha_r(x_r), \qquad (1)$$
for potential functions $\alpha_r > 0$. Here, $Z(\alpha)$ is the normalization factor, or partition function, which is a function of the potential functions $\alpha_r$, and each variable $x_s$ takes values in a finite discrete set $\mathcal{X}$. One special case of the factorization (1) is the pairwise Ising model, defined over a graph $G = (V, E)$, where the distribution is given by
$$p_\gamma(x) = \exp\Big(\sum_{s \in V} \gamma_s(x_s) + \sum_{(s,t) \in E} \gamma_{st}(x_s, x_t) - A(\gamma)\Big), \qquad (2)$$
and $\mathcal{X} = \{-1, +1\}$. Our goal is to analyze the log partition function
$$\log Z(\alpha) = \log\Big\{\sum_{x \in \mathcal{X}^{|V|}} \prod_{r \in R} \alpha_r(x_r)\Big\}. \qquad (3)$$
2.1 Variational representation
It is known from the theory of graphical models [14] that the log partition function (3) may be written in the variational form
$$\log Z(\alpha) = \sup_{\{\tau_r(x_r)\} \in \Delta_R} \Big\{\sum_{r \in R} \sum_{x_r} \tau_r(x_r) \log(\alpha_r(x_r)) + H(p_\tau)\Big\}, \qquad (4)$$
where $p_\tau$ is the maximum entropy distribution with marginals $\{\tau_r(x_r)\}$ and
$$H(p) := -\sum_x p(x) \log p(x)$$
is the usual entropy. Here, $\Delta_R$ denotes the $R$-marginal polytope; i.e., $\{\tau_r(x_r) : r \in R\} \in \Delta_R$ if and only if there exists a distribution $\tau(x)$ such that $\tau_r(x_r) = \sum_{x_{\setminus r}} \tau(x_r, x_{\setminus r})$ for all $r$. For ease of notation, we also write $\tau \equiv \{\tau_r(x_r) : r \in R\}$. Let $\theta \equiv \theta(x)$ denote the collection of log potential functions $\{\log(\alpha_r(x_r)) : r \in R\}$. Then equation (4) may be rewritten as
$$\log Z(\alpha) = \sup_{\tau \in \Delta_R} \{\langle \theta, \tau \rangle + H(p_\tau)\}. \qquad (5)$$
Specializing to the Ising model (2), equation (5) gives the variational representation
$$A(\gamma) = \sup_{\mu \in \mathbb{M}} \{\langle \gamma, \mu \rangle + H(p_\mu)\}, \qquad (6)$$
which appears in Wainwright and Jordan [23]. Here, $\mathbb{M} \equiv \mathbb{M}(G)$ denotes the marginal polytope, corresponding to the collection of mean parameter vectors of the sufficient statistics in the exponential family representation (2), ranging over different values of $\gamma$, and $p_\mu$ is the maximum entropy distribution with mean parameters $\mu$.
Reweighted Kikuchi approximation
Although the set R appearing in the variational representation (5) is a convex polytope, it may
have exponentially many facets [23]. Hence, we replace R with the set
n
o
X
X
K
?u (xt , xu\t ) = ?t (xt ) and 8u 2 R,
?u (xu ) = 1
R = ? : 8t, u 2 R s.t. t ? u,
xu\t
xu
of locally consistent R-pseudomarginals. Note that R ?
ally many facets, making optimization more tractable.
K
R
and the latter set has only polynomi-
In the case of the pairwise Ising model (2), we let $L \equiv L(G)$ denote the polytope $\Delta_R^K$. Then $L$ is the collection of nonnegative functions $\tau = (\tau_s, \tau_{st})$ satisfying the marginalization constraints
$$\sum_{x_s} \tau_s(x_s) = 1, \quad \forall s \in V,$$
$$\sum_{x_t} \tau_{st}(x_s, x_t) = \tau_s(x_s) \quad \text{and} \quad \sum_{x_s} \tau_{st}(x_s, x_t) = \tau_t(x_t), \quad \forall (s,t) \in E.$$
Recall that $\mathbb{M}(G) \subseteq L(G)$, with equality achieved if and only if the underlying graph $G$ is a tree. In the general case, we have $\Delta_R = \Delta_R^K$ when the Hasse diagram of the region graph admits a minimal representation that is loop-free (cf. Theorem 2 of Pakzad and Anantharam [14]).
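To make the constraints concrete, the following is a minimal sketch of a membership test for $L(G)$ with binary variables (the dict-based representation and the tolerance are our own choices, not from the paper):

```python
import numpy as np

def in_local_polytope(tau_s, tau_st, tol=1e-9):
    """Check the marginalization constraints defining L(G).

    tau_s: dict {s: length-2 array}; tau_st: dict {(s, t): 2x2 array}.
    """
    for p in tau_s.values():
        if np.any(p < -tol) or abs(p.sum() - 1.0) > tol:
            return False
    for (s, t), p in tau_st.items():
        if np.any(p < -tol):
            return False
        # Summing out x_t must recover tau_s; summing out x_s recovers tau_t.
        if not np.allclose(p.sum(axis=1), tau_s[s], atol=tol):
            return False
        if not np.allclose(p.sum(axis=0), tau_s[t], atol=tol):
            return False
    return True
```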
Given a collection of $R$-pseudomarginals $\tau$, we also replace the entropy term $H(p_\tau)$, which is difficult to compute in general, by the approximation
$$H(p_\tau) \approx \sum_{r \in R} \rho_r H_r(\tau_r) := H(\tau; \rho), \qquad (7)$$
where $H_r(\tau_r) := -\sum_{x_r} \tau_r(x_r) \log \tau_r(x_r)$ is the entropy computed over region $r$, and $\{\rho_r : r \in R\}$ are weights assigned to the regions. Note that in the pairwise Ising case (2), with $p := p_\gamma$, we have the equality
$$H(p) = \sum_{s \in V} H_s(p_s) - \sum_{(s,t) \in E} I_{st}(p_{st})$$
when $G$ is a tree, where $I_{st}(p_{st}) = H_s(p_s) + H_t(p_t) - H_{st}(p_{st})$ denotes the mutual information and $p_s$ and $p_{st}$ denote the node and edge marginals. Hence, the approximation (7) is exact with
$$\rho_{st} = 1, \ \forall (s,t) \in E, \qquad \text{and} \qquad \rho_s = 1 - \deg(s), \ \forall s \in V.$$
Using the approximation (7), we arrive at the following reweighted Kikuchi approximation:
$$B(\theta; \rho) := \sup_{\tau \in \Delta_R^K} \underbrace{\{\langle \theta, \tau \rangle + H(\tau; \rho)\}}_{B_{\theta,\rho}(\tau)}. \qquad (8)$$
B?,? (? )
Note that when {?r } are the overcounting numbers {cr }, defined recursively by
X
cr = 1
cs ,
(9)
s2A(r)
the expression (8) reduces to the usual (unweighted) Kikuchi approximation considered in Pakzad
and Anantharam [14].
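The recursion (9) is easy to implement directly; a small sketch (our own helper), which processes regions in order of increasing ancestor count so that every $c_s$ needed on the right-hand side is already available:

```python
def overcounting_numbers(ancestors):
    """Compute the overcounting numbers c_r = 1 - sum_{s in A(r)} c_s of (9).

    ancestors: dict mapping each region r to its ancestor set A(r).
    Since A(s) is strictly smaller than A(r) whenever s is an ancestor of r,
    processing regions by increasing |A(r)| makes the recursion well defined.
    """
    c = {}
    for r in sorted(ancestors, key=lambda reg: len(ancestors[reg])):
        c[r] = 1 - sum(c[s] for s in ancestors[r])
    return c
```

In the two-layer Bethe case this recovers $c_\alpha = 1$ for hyperedges and $c_s = 1 - |N(s)|$ for singleton vertices, matching the tree-exact weights above.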
3 Main results and consequences
In this section, we analyze the concavity of the Kikuchi variational problem (8). We derive a sufficient condition under which the function $B_{\theta,\rho}(\tau)$ is concave over the set $\Delta_R^K$, so global optima of the reweighted Kikuchi approximation may be found efficiently. In the Bethe case, we also show that the condition is necessary for $B_{\theta,\rho}(\tau)$ to be concave over the entire region $\Delta_R^K$, and we provide a geometric characterization of $\Delta_R^K$ in terms of the edge and cycle structure of the graph.
3.1 Sufficient conditions for concavity
We begin by establishing sufficient conditions for the concavity of $B_{\theta,\rho}(\tau)$. Clearly, this is equivalent to establishing conditions under which $H(\tau; \rho)$ is concave. Our main result is the following:
Theorem 1. If $\rho \in \mathbb{R}^{|R|}$ satisfies
$$\sum_{s \in F(S)} \rho_s \ge 0, \qquad \forall S \subseteq R, \qquad (10)$$
then the Kikuchi entropy $H(\tau; \rho)$ is strictly concave on $\Delta_R^K$.
The proof of Theorem 1 is contained in Appendix A.1, and makes use of a generalization of Hall's marriage lemma for weighted graphs (cf. Lemma 1 in Appendix A.2).
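Since condition (10) quantifies over all subsets $S \subseteq R$, it can be verified by brute force on small region graphs; a sketch intended purely for illustration (exponential in $|R|$):

```python
from itertools import combinations

def satisfies_condition_10(regions, rho):
    """Brute-force check of condition (10) for a small region graph.

    regions: list of frozensets of vertices (the set R);
    rho: dict mapping each region to its weight.
    For every nonempty S, F(S) = {t in R : r <= t for some r in S}.
    """
    for k in range(1, len(regions) + 1):
        for S in combinations(regions, k):
            forebears = {t for t in regions if any(r <= t for r in S)}
            if sum(rho[t] for t in forebears) < 0:
                return False
    return True
```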
The condition (10) depends heavily on the structure of the region graph. For the sake of interpretability, we now specialize to the case where the region graph has only two layers, with the first layer corresponding to vertices and the second layer corresponding to hyperedges. In other words, for $r, s \in R$, we have $r \subsetneq s$ only if $|r| = 1$, and $R = V \cup F$, where $F$ is the set of hyperedges and $V$ denotes the set of singleton vertices. This is the Bethe case, and the entropy
$$H(\tau; \rho) = \sum_{s \in V} \rho_s H_s(\tau_s) + \sum_{\alpha \in F} \rho_\alpha H_\alpha(\tau_\alpha) \qquad (11)$$
is consequently known as the Bethe entropy.
The following result is proved in Appendix A.3:
Corollary 1. Suppose $\rho_\alpha \ge 0$ for all $\alpha \in F$, and the following condition also holds:
$$\sum_{s \in U} \rho_s + \sum_{\alpha \in F : \alpha \cap U \ne \emptyset} \rho_\alpha \ge 0, \qquad \forall U \subseteq V. \qquad (12)$$
Then the Bethe entropy $H(\tau; \rho)$ is strictly concave over $\Delta_R^K$.
3.2 Necessary conditions for concavity
We now establish a converse to Corollary 1 in the Bethe case, showing that condition (12) is also necessary for the concavity of the Bethe entropy. When $\rho_\alpha = 1$ for $\alpha \in F$ and $\rho_s = 1 - |N(s)|$ for $s \in V$, we recover the result of Watanabe and Fukumizu [25] for the unweighted Bethe case. However, our proof technique is significantly simpler and avoids the complex machinery of graph zeta functions. Our approach proceeds by considering the Bethe entropy $H(\tau; \rho)$ on appropriate slices of the domain $\Delta_R^K$ so as to extract condition (12) for each $U \subseteq V$. The full proof is provided in Appendix B.1.
Theorem 2. If the Bethe entropy $H(\tau; \rho)$ is concave over $\Delta_R^K$, then $\rho_\alpha \ge 0$ for all $\alpha \in F$, and condition (12) holds.
Indeed, as demonstrated in the simulations of Section 5, the Bethe objective function $B_{\theta,\rho}(\tau)$ may have multiple local optima if $\rho$ does not satisfy condition (12).
3.3 Polytope of concavity
We now characterize the polytope defined by the inequalities (12). We show that in the pairwise Bethe case, the polytope may be expressed geometrically as the convex hull of single-cycle forests formed by the edges of the graph. In the more general (non-pairwise) Bethe case, however, the polytope of concavity may strictly contain the latter set.
Note that the Bethe entropy (11) may be written in the alternative form
$$H(\tau; \rho) = \sum_{s \in V} \rho'_s H_s(\tau_s) - \sum_{\alpha \in F} \rho_\alpha \widetilde{I}_\alpha(\tau_\alpha), \qquad (13)$$
where $\widetilde{I}_\alpha(\tau_\alpha) := \big\{\sum_{s \in \alpha} H_s(\tau_s)\big\} - H_\alpha(\tau_\alpha)$ is the KL divergence between the joint distribution $\tau_\alpha$ and the product distribution $\prod_{s \in \alpha} \tau_s$, and the weights $\rho'_s$ are defined appropriately.
We show that the polytope of concavity has a nice geometric characterization when $\rho'_s = 1$ for all $s \in V$, and $\rho_\alpha \in [0, 1]$ for all $\alpha \in F$. Note that this assignment produces the expression for the reweighted Bethe entropy analyzed in Wainwright et al. [22] (when all elements of $F$ have cardinality two). Equation (13) then becomes
$$H(\tau; \rho) = \sum_{s \in V} \Big(1 - \sum_{\alpha \in N(s)} \rho_\alpha\Big) H_s(\tau_s) + \sum_{\alpha \in F} \rho_\alpha H_\alpha(\tau_\alpha), \qquad (14)$$
and the inequalities (12) defining the polytope of concavity are
$$\sum_{\alpha \in F : \alpha \cap U \ne \emptyset} (|\alpha \cap U| - 1)\rho_\alpha \le |U|, \qquad \forall U \subseteq V. \qquad (15)$$
Consequently, we define
$$C := \Big\{\rho \in [0, 1]^{|F|} : \sum_{\alpha \in F : \alpha \cap U \ne \emptyset} (|\alpha \cap U| - 1)\rho_\alpha \le |U|, \ \forall U \subseteq V\Big\}.$$
By Theorem 2, the set $C$ is the region of concavity for the Bethe entropy (14) within $[0, 1]^{|F|}$.
We also define the set
$$\mathcal{F} := \{\mathbf{1}_{F'} : F' \subseteq F \text{ and } F' \cup N(F') \text{ is a single-cycle forest in } G\} \subseteq \{0, 1\}^{|F|},$$
where a single-cycle forest is defined to be a subset of edges of a graph such that each connected component contains at most one cycle. (We disregard the directions of edges in $G$.) The following theorem gives our main result. The proof is contained in Appendix C.1.
Theorem 3. In the Bethe case (i.e., the region graph $G$ has two layers), we have the containment $\mathrm{conv}(\mathcal{F}) \subseteq C$. If in addition $|\alpha| = 2$ for all $\alpha \in F$, then $\mathrm{conv}(\mathcal{F}) = C$.
The significance of Theorem 3 is that it provides us with a convenient graph-based method for constructing vectors $\rho \in C$. From the inequalities (15), it is not even clear how to efficiently verify whether a given $\rho \in [0, 1]^{|F|}$ lies in $C$, since it involves testing $2^{|V|}$ inequalities.
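For small graphs, however, the $2^{|V|}$ inequalities (15) can simply be enumerated; a brute-force sketch (our own code, not an efficient membership oracle):

```python
from itertools import combinations

def in_concavity_polytope(vertices, hyperedges, rho, tol=1e-12):
    """Brute-force test of rho in C via the 2^{|V|} inequalities (15).

    hyperedges: list of frozensets of vertices; rho: matching list of weights.
    """
    if any(r < -tol or r > 1 + tol for r in rho):
        return False                      # C requires rho in [0, 1]^{|F|}
    vertices = list(vertices)
    for k in range(1, len(vertices) + 1):
        for U in map(set, combinations(vertices, k)):
            lhs = sum((len(a & U) - 1) * r
                      for a, r in zip(hyperedges, rho) if a & U)
            if lhs > len(U) + tol:
                return False
    return True
```

For instance, under this test the point $(1, \frac{1}{2}, 1)$ of Example 1 below is accepted.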
Comparing Theorem 3 with known results, note that in the pairwise case ($|\alpha| = 2$ for all $\alpha \in F$), Theorem 1 of Wainwright et al. [22] states that the Bethe entropy is concave over $\mathrm{conv}(\mathbb{T})$, where $\mathbb{T} \subseteq \{0, 1\}^{|E|}$ is the set of edge indicator vectors for spanning forests of the graph. It is trivial to check that $\mathbb{T} \subseteq \mathcal{F}$, since every spanning forest is also a single-cycle forest. Hence, Theorems 2 and 3 together imply a stronger result than in Wainwright et al. [22], characterizing the precise region of concavity for the Bethe entropy as a superset of the polytope $\mathrm{conv}(\mathbb{T})$ analyzed there. In the unweighted Kikuchi case, it is also known [1, 14] that the Kikuchi entropy is concave for the assignment $\rho = \mathbf{1}_F$ when the region graph $G$ is connected and has at most one cycle. Clearly, $\mathbf{1}_F \in C$ in that case, so this result is a consequence of Theorems 2 and 3, as well. However, our theorems show that a much more general statement is true.
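Theorem 3 suggests a simple recipe for generating points of $C$: choose $F' \subseteq F$ such that $F' \cup N(F')$ is a single-cycle forest and take convex combinations of the indicator vectors $\mathbf{1}_{F'}$. A sketch of the forest check, using the fact that a component of a simple undirected graph has at most one cycle exactly when its edge count does not exceed its vertex count:

```python
from collections import Counter

def is_single_cycle_forest(vertices, edges):
    """Check that every connected component has at most one cycle,
    i.e. |E_c| <= |V_c| per component (simple undirected graph)."""
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    n_edges = {v: 0 for v in vertices}     # edge counts, kept at the roots
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            n_edges[ru] += 1
        else:
            parent[rv] = ru
            n_edges[ru] += n_edges[rv] + 1
    n_verts = Counter(find(v) for v in vertices)
    return all(n_edges[root] <= cnt for root, cnt in n_verts.items())
```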
It is tempting to posit that $\mathrm{conv}(\mathcal{F}) = C$ holds more generally in the Bethe case. However, as the following example shows, settings arise where $\mathrm{conv}(\mathcal{F}) \subsetneq C$. Details are contained in Appendix C.2.
Example 1. Consider a two-layer region graph with vertices $V = \{1, 2, 3, 4, 5\}$ and factors $\alpha_1 = \{1, 2, 3\}$, $\alpha_2 = \{2, 3, 4\}$, and $\alpha_3 = \{3, 4, 5\}$. Then $(1, \frac{1}{2}, 1) \in C \setminus \mathrm{conv}(\mathcal{F})$.
In fact, Example 1 is a special case of a more general statement, which we state in the following proposition. Here, $\mathbb{F} := \{F' \subseteq F : \mathbf{1}_{F'} \in \mathcal{F}\}$, and an element $F^* \in \mathbb{F}$ is maximal if it is not contained in another element of $\mathbb{F}$.
Proposition 1. Suppose (i) $G$ is not a single-cycle forest, and (ii) there exists a maximal element $F^* \in \mathbb{F}$ such that the induced subgraph $F^* \cup N(F^*)$ is a forest. Then $\mathrm{conv}(\mathcal{F}) \subsetneq C$.
The proof of Proposition 1 is contained in Appendix C.3. Note that if $|\alpha| = 2$ for all $\alpha \in F$, then condition (ii) is violated whenever condition (i) holds, so Proposition 1 provides a partial converse to Theorem 3.
4 Reweighted sum product algorithm
In this section, we provide an iterative message passing algorithm to optimize the Kikuchi variational problem (8). As in the case of the generalized belief propagation algorithm for the unweighted
Kikuchi approximation [28, 29, 11, 14, 12, 27] and the reweighted sum product algorithm for the
Bethe approximation [22], our message passing algorithm searches for stationary points of the Lagrangian version of the problem (8). When $\rho$ satisfies condition (10), Theorem 1 implies that the
problem (8) is strictly concave, so the unique fixed point of the message passing algorithm globally
maximizes the Kikuchi approximation.
Let $G = (V, R)$ be a region graph defining our Kikuchi approximation. Following Pakzad and Anantharam [14], for $r, s \in R$, we write $r \prec s$ if $r \subsetneq s$ and there does not exist $t \in R$ such that $r \subsetneq t \subsetneq s$. For $r \in R$, we define the parent set of $r$ to be $P(r) = \{s \in R : r \prec s\}$ and the child set of $r$ to be $C(r) = \{s \in R : s \prec r\}$. With this notation, $\tau = \{\tau_r(x_r) : r \in R\}$ belongs to the set $\Delta_R^K$ if and only if $\sum_{x_{s \setminus r}} \tau_s(x_r, x_{s \setminus r}) = \tau_r(x_r)$ for all $r \in R$, $s \in P(r)$.
The message passing algorithm we propose is as follows: For each $r \in R$ and $s \in P(r)$, let $M_{sr}(x_r)$ denote the message passed from $s$ to $r$ at assignment $x_r$. Starting with an arbitrary positive initialization of the messages, we repeatedly perform the following updates for all $r \in R$, $s \in P(r)$:
$$M_{sr}(x_r) \leftarrow C \left[\frac{\sum_{x_{s \setminus r}} \exp(\theta_s(x_s)/\rho_s) \prod_{v \in P(s)} M_{vs}(x_s)^{\rho_v/\rho_s} \prod_{w \in C(s) \setminus \{r\}} M_{sw}(x_w)^{-1}}{\exp(\theta_r(x_r)/\rho_r) \prod_{u \in P(r) \setminus \{s\}} M_{ur}(x_r)^{\rho_u/\rho_r} \prod_{t \in C(r)} M_{rt}(x_t)^{-1}}\right]^{\frac{\rho_r}{\rho_r + \rho_s}}. \qquad (16)$$
Here, $C > 0$ may be chosen to ensure a convenient normalization condition; e.g., $\sum_{x_r} M_{sr}(x_r) = 1$. Upon convergence of the updates (16), we compute the pseudomarginals according to
$$\tau_r(x_r) \propto \exp\!\left(\frac{\theta_r(x_r)}{\rho_r}\right) \prod_{s \in P(r)} M_{sr}(x_r)^{\rho_s/\rho_r} \prod_{t \in C(r)} M_{rt}(x_t)^{-1}, \qquad (17)$$
and we obtain the corresponding Kikuchi approximation by computing the objective function (8)
with these pseudomarginals. We have the following result, which is proved in Appendix D:
Theorem 4. The pseudomarginals $\tau$ specified by the fixed points of the messages $\{M_{sr}(x_r)\}$ via
the updates (16) and (17) correspond to the stationary points of the Lagrangian associated with the
Kikuchi approximation problem (8).
As with the standard belief propagation and reweighted sum product algorithms, we have several
options for implementing the above message passing algorithm in practice. For example, we may
perform the updates (16) using serial or parallel schedules. To improve the convergence of the
algorithm, we may damp the updates by taking a convex combination of new and previous messages
using an appropriately chosen step size. As noted by Pakzad and Anantharam [14], we may also use
a minimal graphical representation of the Hasse diagram to lower the complexity of the algorithm.
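As one concrete damping choice (standard practice, though not prescribed here), new and old messages can be mixed geometrically, which amounts to a convex combination of the log-messages:

```python
import numpy as np

def damped_message(m_old, m_new, step=0.5):
    """Damped update m <- m_old^(1-step) * m_new^step, renormalized.

    Equivalent to (1-step)*log(m_old) + step*log(m_new) in the log domain.
    """
    m = m_old ** (1.0 - step) * m_new ** step
    return m / m.sum()
```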
Finally, we remark that although our message passing algorithm proceeds in the same spirit as classical belief propagation algorithms by operating on the Lagrangian of the objective function, our
algorithm as presented above does not immediately reduce to the generalized belief propagation
algorithm for unweighted Kikuchi approximations or the reweighted sum product algorithm for
tree-reweighted pairwise Bethe approximations. Previous authors use algebraic relations between
the overcounting numbers (9) in the Kikuchi case [28, 29, 11, 14] and the two-layer structure of the
Hasse diagram in the Bethe case [22] to obtain a simplified form of the updates. Since the coefficients ? in our problem lack the same algebraic relations, following the message-passing protocol
used in previous work [11, 28] leads to more complicated updates, so we present a slightly different
algorithm that still optimizes the general reweighted Kikuchi objective.
5 Experiments
In this section, we present empirical results to demonstrate the advantages of the reweighted Kikuchi approximation that support our theoretical results. For simplicity, we focus on the binary pairwise Ising model given in equation (2). Without loss of generality, we may take the potentials to be $\gamma_s(x_s) = \gamma_s x_s$ and $\gamma_{st}(x_s, x_t) = \gamma_{st} x_s x_t$ for some $\gamma = (\gamma_s, \gamma_{st}) \in \mathbb{R}^{|V|+|E|}$. We run our experiments on two types of graphs: (1) $K_n$, the complete graph on $n$ vertices, and (2) $T_n$, the $\sqrt{n} \times \sqrt{n}$ toroidal grid graph where every vertex has degree four.
Bethe approximation. We consider the pairwise Bethe approximation of the log partition function $A(\gamma)$ with weights $\rho_{st} \ge 0$ and $\rho_s = 1 - \sum_{t \in N(s)} \rho_{st}$. Because of the regularity structure of $K_n$ and $T_n$, we take $\rho_{st} = \rho \ge 0$ for all $(s,t) \in E$ and study the behavior of the Bethe approximation as $\rho$ varies. For this particular choice of weight vector $\vec{\rho} = \rho \mathbf{1}_E$, we define
$$\rho_{\mathrm{tree}} = \max\{\rho \ge 0 : \vec{\rho} \in \mathrm{conv}(\mathbb{T})\}, \qquad \text{and} \qquad \rho_{\mathrm{cycle}} = \max\{\rho \ge 0 : \vec{\rho} \in \mathrm{conv}(\mathcal{F})\}.$$
It is easily verified that for $K_n$, we have $\rho_{\mathrm{tree}} = \frac{2}{n}$ and $\rho_{\mathrm{cycle}} = \frac{2}{n-1}$; while for $T_n$, we have $\rho_{\mathrm{tree}} = \frac{n-1}{2n}$ and $\rho_{\mathrm{cycle}} = \frac{1}{2}$.
Our results in Section 3 imply that the Bethe objective function $B_{\gamma,\rho}(\tau)$ in equation (8) is concave if and only if $\rho \le \rho_{\mathrm{cycle}}$, and Wainwright et al. [22] show that we have the bound $A(\gamma) \le B(\gamma; \rho)$ for $\rho \le \rho_{\mathrm{tree}}$. Moreover, since the Bethe entropy may be written in terms of the edge mutual information (13), the function $B(\gamma; \rho)$ is decreasing in $\rho$. In our results below, we observe that we may obtain a tighter approximation to $A(\gamma)$ by moving from the upper bound region $\rho \le \rho_{\mathrm{tree}}$ to the concavity region $\rho \le \rho_{\mathrm{cycle}}$. In addition, for $\rho > \rho_{\mathrm{cycle}}$, we observe multiple local optima of $B_{\gamma,\rho}(\tau)$.
Procedure. We generate a random potential $\gamma = (\gamma_s, \gamma_{st}) \in \mathbb{R}^{|V|+|E|}$ for the Ising model (2) by sampling each potential $\{\gamma_s\}_{s \in V}$ and $\{\gamma_{st}\}_{(s,t) \in E}$ independently. We consider two types of models:
$$\text{Attractive:} \quad \gamma_{st} \sim \mathrm{Uniform}[0, \omega_{st}], \qquad \text{and} \qquad \text{Mixed:} \quad \gamma_{st} \sim \mathrm{Uniform}[-\omega_{st}, \omega_{st}].$$
In each case, $\gamma_s \sim \mathrm{Uniform}[0, \omega_s]$. We set $\omega_s = 0.1$ and $\omega_{st} = 2$. Intuitively, the attractive model encourages variables in adjacent nodes to assume the same value, and it has been shown [18, 19] that the ordinary Bethe approximation ($\rho_{st} = 1$) in an attractive model lower-bounds the log partition function. For $\rho \in [0, 2]$, we compute stationary points of $B_{\gamma,\rho}(\tau)$ by running the reweighted sum product algorithm of Wainwright et al. [22]. We use a damping factor of $0.5$, convergence threshold of $10^{-10}$ for the average change of messages, and at most 2500 iterations. We repeat this process with at least 8 random initializations for each value of $\rho$. Figure 1 shows the scatter plots of $\rho$ and the Bethe approximation $B_{\gamma,\rho}(\tau)$. In each plot, the two vertical lines are the boundaries $\rho = \rho_{\mathrm{tree}}$ and $\rho = \rho_{\mathrm{cycle}}$, and the horizontal line is the value of the true log partition function $A(\gamma)$.
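A minimal sketch of the potential-sampling step of this procedure (function and parameter names are our own):

```python
import numpy as np

def sample_potentials(n_vertices, edges, attractive, w_s=0.1, w_st=2.0, seed=0):
    """Draw gamma_s ~ Uniform[0, w_s] and gamma_st ~ Uniform[0, w_st]
    (attractive) or Uniform[-w_st, w_st] (mixed), as in the text."""
    rng = np.random.default_rng(seed)
    gamma_s = rng.uniform(0.0, w_s, size=n_vertices)
    low = 0.0 if attractive else -w_st
    gamma_st = rng.uniform(low, w_st, size=len(edges))
    return gamma_s, gamma_st
```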
Results. Figures 1(a)–1(d) show the results of our experiments on small graphs ($K_5$ and $T_9$) for both attractive and mixed models. We see that the Bethe approximation with $\rho \le \rho_{\mathrm{cycle}}$ generally provides a better approximation to $A(\gamma)$ than the Bethe approximation computed over $\rho \le \rho_{\mathrm{tree}}$. However, in general we cannot guarantee whether $B(\gamma; \rho)$ will give an upper or lower bound for $A(\gamma)$ when $\rho \le \rho_{\mathrm{cycle}}$. As noted above, we have $B(\gamma; 1) \le A(\gamma)$ for attractive models.
We also observe from Figures 1(a)–1(d) that shortly after $\rho$ leaves the concavity region $\{\rho \le \rho_{\mathrm{cycle}}\}$, multiple local optima emerge for the Bethe objective function. The presence of the point clouds near $\rho = 1$ in Figures 1(a) and 1(c) arises because the sum product algorithm has not converged after 2500 iterations. Indeed, the same phenomenon is true for all our results: in the region where multiple local optima begin to appear, it is more difficult for the algorithm to converge. See Figure 2 and the accompanying text in Appendix E for a plot of the points $(\rho, \log_{10}(\delta))$, where $\delta$ is the final average change in the messages at termination of the algorithm. From Figure 2, we see that the values of $\delta$ are significantly higher for the values of $\rho$ near where multiple local optima emerge. We suspect that for these values of $\rho$, the sum product algorithm fails to converge since distinct local optima are close together, so messages oscillate between the optima. For larger values of $\rho$, the local optima become sufficiently separated and the algorithm converges to one of them. However, it is interesting to note that this point cloud phenomenon does not appear for attractive models, despite the presence of distinct local optima.
Simulations for larger graphs are shown in Figures 1(e)–1(h). If we zoom into the region near $\rho \approx \rho_{\mathrm{cycle}}$, we still observe the same behavior that $\rho \le \rho_{\mathrm{cycle}}$ generally provides a better Bethe approximation than $\rho \le \rho_{\mathrm{tree}}$.
[Figure 1: Values of the reweighted Bethe approximation as a function of $\rho$. Panels: (a) $K_5$ mixed, (b) $K_5$ attractive, (c) $T_9$ mixed, (d) $T_9$ attractive, (e) $K_{15}$ mixed, (f) $K_{15}$ attractive, (g) $T_{25}$ mixed, (h) $T_{25}$ attractive. In each panel, vertical lines mark $\rho_{\mathrm{tree}}$ and $\rho_{\mathrm{cycle}}$, and the horizontal line marks $A(\gamma)$. See text for details.]
Moreover, the presence of the point clouds and multiple local optima are more pronounced, and we see from Figures 1(c), 1(g), and 1(h) that new local optima with even worse Bethe values arise for larger values of $\rho$. Finally, we note that the same qualitative behavior also occurs in all the other graphs that we have tried ($K_n$ for $n \in \{5, 10, 15, 20, 25\}$ and $T_n$ for $n \in \{9, 16, 25, 36, 49, 64\}$), with multiple random instances of the Ising model $p_\gamma$.
6 Discussion
In this paper, we have analyzed the reweighted Kikuchi approximation method for estimating the log
partition function of a distribution that factorizes over a region graph. We have characterized necessary and sufficient conditions for the concavity of the variational objective function, generalizing
existing results in literature. Our simulations demonstrate the advantages of using the reweighted
Kikuchi approximation and show that multiple local optima may appear outside the region of concavity.
An interesting future research direction is to obtain a better understanding of the approximation
guarantees of the reweighted Bethe and Kikuchi methods. In the Bethe case with attractive potentials
$\theta$, several recent results [22, 19, 18] establish that the Bethe approximation $B(\theta; \rho)$ is an upper bound to the log partition function $A(\theta)$ when $\rho$ lies in the spanning tree polytope, whereas $B(\theta; \rho) \le A(\theta)$ when $\rho = \mathbf{1}_F$. By continuity, we must have $B(\theta; \rho^*) = A(\theta)$ for some values of $\rho^*$, and it would be interesting to characterize such values where the reweighted Bethe approximation is exact.
Another interesting direction is to extend our theoretical results on properties of the reweighted Kikuchi approximation, which currently depend solely on the structure of the region graph and the weights $\rho$, to incorporate the effect of the model potentials $\theta$. For example, several authors [20, 6]
present conditions under which loopy belief propagation applied to the unweighted Bethe approximation has a unique fixed point. The conditions for uniqueness of fixed points slightly generalize the
conditions for convexity, and they involve both the graph structure and the strength of the potentials.
We suspect that similar results would hold for the reweighted Kikuchi approximation.
Acknowledgments. The authors thank Martin Wainwright for introducing the problem to them
and providing helpful guidance. The authors also thank Varun Jog for discussions regarding the
generalization of Hall's lemma. The authors thank the anonymous reviewers for feedback that improved the clarity of the paper. PL was partly supported by a Hertz Foundation Fellowship and an
NSF Graduate Research Fellowship while at Berkeley.
References
[1] S. M. Aji and R. J. McEliece. The generalized distributive law and free energy minimization. In Proceedings of the 39th Allerton Conference, 2001.
[2] F. Barahona. On the computational complexity of Ising spin glass models. Journal of Physics A: Mathematical and General, 15(10):3241, 1982.
[3] H. A. Bethe. Statistical theory of superlattices. Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences, 150(871):552–575, 1935.
[4] P. Hall. On representatives of subsets. Journal of the London Mathematical Society, 10:26–30, 1935.
[5] T. Heskes. Stable fixed points of loopy belief propagation are minima of the Bethe free energy. In Advances in Neural Information Processing Systems 15, 2002.
[6] T. Heskes. On the uniqueness of loopy belief propagation fixed points. Neural Computation, 16(11):2379–2413, 2004.
[7] T. Heskes. Convexity arguments for efficient minimization of the Bethe and Kikuchi free energies. Journal of Artificial Intelligence Research, 26:153–190, 2006.
[8] A. T. Ihler, J. W. Fischer III, and A. S. Willsky. Loopy belief propagation: Convergence and effects of message errors. Journal of Machine Learning Research, 6:905–936, December 2005.
[9] R. Kikuchi. A theory of cooperative phenomena. Phys. Rev., 81:988–1003, March 1951.
[10] B. Korte and J. Vygen. Combinatorial Optimization: Theory and Algorithms. Springer, 4th edition, 2007.
[11] R. J. McEliece and M. Yildirim. Belief propagation on partially ordered sets. In Mathematical Systems Theory in Biology, Communications, Computation, and Finance, pages 275–300, 2002.
[12] T. Meltzer, A. Globerson, and Y. Weiss. Convergent message passing algorithms: a unifying view. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, UAI '09, 2009.
[13] J. M. Mooij and H. J. Kappen. Sufficient conditions for convergence of the sum-product algorithm. IEEE Transactions on Information Theory, 53(12):4422–4437, December 2007.
[14] P. Pakzad and V. Anantharam. Estimation and marginalization using Kikuchi approximation methods. Neural Computation, 17:1836–1873, 2003.
[15] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1988.
[16] T. Roosta, M. J. Wainwright, and S. S. Sastry. Convergence analysis of reweighted sum-product algorithms. IEEE Transactions on Signal Processing, 56(9):4293–4305, 2008.
[17] D. Roth. On the hardness of approximate reasoning. Artificial Intelligence, 82(1–2):273–302, 1996.
[18] N. Ruozzi. The Bethe partition function of log-supermodular graphical models. In Advances in Neural Information Processing Systems 25, 2012.
[19] E. B. Sudderth, M. J. Wainwright, and A. S. Willsky. Loop series and Bethe variational bounds in attractive graphical models. In Advances in Neural Information Processing Systems 20, 2007.
[20] S. C. Tatikonda and M. I. Jordan. Loopy belief propagation and Gibbs measures. In Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence, UAI '02, 2002.
[21] P. O. Vontobel. The Bethe permanent of a nonnegative matrix. IEEE Transactions on Information Theory, 59(3):1866–1901, 2013.
[22] M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. A new class of upper bounds on the log partition function. IEEE Transactions on Information Theory, 51(7):2313–2335, 2005.
[23] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1–2):1–305, January 2008.
[24] Y. Watanabe and K. Fukumizu. Graph zeta function in the Bethe free energy and loopy belief propagation. In Advances in Neural Information Processing Systems 22, 2009.
[25] Y. Watanabe and K. Fukumizu. Loopy belief propagation, Bethe free energy and graph zeta function. arXiv preprint arXiv:1103.0605, 2011.
[26] Y. Weiss. Correctness of local probability propagation in graphical models with loops. Neural Computation, 12(1):1–41, 2000.
[27] T. Werner. Primal view on belief propagation. In UAI 2010: Proceedings of the Conference on Uncertainty in Artificial Intelligence, pages 651–657, Corvallis, Oregon, July 2010. AUAI Press.
[28] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Generalized belief propagation. In Advances in Neural Information Processing Systems 13, 2000.
[29] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Constructing free energy approximations and generalized belief propagation algorithms. IEEE Transactions on Information Theory, 51:2282–2312, 2005.
A Complete Variational Tracker
Ryan Turner
Northrop Grumman Corp.
Steven Bottone
Northrop Grumman Corp.
Bhargav Avasarala
Northrop Grumman Corp.
[email protected]
[email protected]
[email protected]
Abstract
We introduce a novel probabilistic tracking algorithm that incorporates combinatorial data association constraints and model-based track management using
variational Bayes. We use a Bethe entropy approximation to incorporate data
association constraints that are often ignored in previous probabilistic tracking algorithms. Noteworthy aspects of our method include a model-based mechanism
to replace heuristic logic typically used to initiate and destroy tracks, and an assignment posterior with linear computation cost in window length as opposed to
the exponential scaling of previous MAP-based approaches. We demonstrate the
applicability of our method on radar tracking and computer vision problems.
The field of tracking is broad and possesses many applications, particularly in radar/sonar [1],
robotics [14], and computer vision [3]. Consider the following problem: A radar is tracking a flying
object, referred to as a target, using measurements of range, bearing, and elevation; it may also have
Doppler measurements of radial velocity. We would like to construct a track which estimates the trajectory of the object over time. The Kalman filter [16], or a more general state space model, is used
to filter out measurement errors. The key difference between tracking and filtering is the presence of
clutter (noise measurements) and missed detections of true objects. We must determine which measurement to "plug in" to the filter before applying it; this is known as data association. Additionally
complicating the situation is that we may be in a multi-target tracking scenario in which there are
multiple objects to track and we do not know which measurement originated from which object.
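For context, the filtering step mentioned above can be summarized by a generic Kalman predict/update cycle applied to whichever measurement data association selects; the following is a textbook sketch with assumed model matrices $F$, $Q$, $H$, $R$, not the specific filter used in this paper:

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle for the measurement z that data association
    assigned to this track (generic textbook form)."""
    # Predict the state forward one frame.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the associated measurement.
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```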
There is a large body of work on tracking algorithms given its standing as a long-posed and important
problem. Algorithms vary primarily on their approach to data association. The dominant approach
uses a sliding window MAP estimate of the measurement-to-track assignment, in particular the
multiple hypothesis tracker (MHT) [1]. In the standard MHT, at every frame the algorithm finds
the most likely matching of measurements to tracks, in the form of an assignment matrix, under a
one-to-one constraint (see Figure 1). One track can only result in one measurement, and vice versa,
which we refer to as framing constraints. As is typical in MAP estimation, once an assignment
is determined, the filters are updated and the tracker proceeds as if these assignments were known
to be correct. The one-to-one constraint makes MAP estimation a bipartite matching task, which can be solved exactly in polynomial time in the number of tracks $N_T$ [15]. However,
the multi-frame MHT finds the joint MAP assignment over multiple frames, in which case the
assignment problem is known to be NP-hard, although good approximate solvers exist [20].
[Figure 1 graphic: left, a track-swap scenario with track 1 (747), track 2 (777), track 3 (Cessna), and clutter (birds); right, the corresponding graphical model with meta-states $S_k$, assignment matrices $A_k$, (all) track states $X_k$, and measurements $Z_k$ for frames $k = 1, 2, 3, \ldots$]
Figure 1: Simple scenario with a track swap: filtered state estimates, associated measurements (+), and clutter; and corresponding graphical model. Note that $X_k$ is a matrix since it contains state vectors for all three tracks.
Despite the complexity of the MHT, it only finds a sliding window MAP estimate of measurement-to-track assignments. If a clutter measurement is by chance associated with a track for the duration
of a window then the tracker will assume with certainty that the measurement originated from that
track, and never reconsider despite all future evidence to the contrary. If multiple clutter (or otherwise incorrect) measurements are associated with a track, then it may veer "off into space" and result
in spurious tracks. Likewise, an endemic problem in tracking is the issue of track swaps, where two
trajectories can cross and get mixed up as shown in Figure 1. Alternatives to the MAP approach
include the probabilistic MHT (PMHT) [9, Ch. 4] and probabilistic data association (PDA). However, the PMHT drops the one-to-one constraint in data association and the PDA only allows for a
single target. This led to the development of the joint PDA (JPDA) algorithm for multiple targets,
which utilizes heuristic calculations of the assignment weights and does not scale to multiple frame
assignment. Particle filter implementations of the JPDA have tried to alleviate these issues, but they
have not been adopted into real-time systems due to their inefficiency and lack of robustness. The
probability hypothesis density (PHD) filter [19] addresses many of these issues, but only estimates
the intensity of objects and does not model full trajectories; this is undesirable since the identity of
an object is required for many applications including the examples in this paper.
Lázaro-Gredilla et al. [18] made the first attempt at a variational Bayes (VB) tracker. In their approach every trajectory follows a Gaussian process (GP); measurements are thus modeled by a mixture of GPs. We develop additional VB machinery to retain the framing constraints, which are dropped in Lázaro-Gredilla et al. [18] despite being viewed as important in many systems. Secondly, our algorithm utilizes a state space approach (e.g. Kalman filters) to model tracks, providing linear rather than cubic time complexity in track length. Hartikainen and Särkkä [11] showed by an
equivalence that there is little loss of modeling flexibility by taking a state space approach over GPs.
Most novel tracking algorithms neglect the critical issue of track management. Many tracking algorithms unrealistically assume that the number of tracks $N_T$ is known a priori and fixed. Additional "wrapper logic" is placed around the trackers to initiate and destroy tracks. This logic involves many
heuristics such as M -of-N logic [1, Ch. 3]. Our method replaces these heuristics in a model-based
manner to make significant performance gains. We call our method a complete variational tracker
as it simultaneously does inference for track management, data association, and state estimation.
The outline of the paper is as follows: We first describe the full joint probability distribution of the tracking problem in Section 1. This includes how to solve the track management problem by augmenting tracks with an active/dormant state to address the issue of an unknown number of tracks. By studying the full joint we develop a new conjugate prior on assignment matrices in Section 2. Using this new formulation we develop a variational algorithm for estimating the measurement-to-track assignments and track states in Section 3. To retain the framing constraints and efficiently scale in tracks and measurements, we modify the variational lower bound in Section 4 using a Bethe entropy approximation. This results in a loopy belief propagation (BP) algorithm being used as a subroutine in our method. In Sections 5-6 we show the improvements our method makes on a difficult radar tracking example and a real-data computer vision problem in sports.
Our paper presents the following novel contributions: First, we develop the first efficient deterministic approximate inference algorithm for solving the full tracking problem, which includes the framing constraints and track management. The most important observation is that the VB assignment posterior has an induced factorization over time with regard to assignment matrices. Therefore, the computational cost of our variational approach is linear in window length, as opposed to the exponential cost of the MAP approach. Remarkably, by introducing a weaker approximation (VB factorization vs. MAP) we lower the computational cost from exponential to linear; such cases are rare and noteworthy. Second, in the process, we develop new approximate inference methods on assignment matrices and a new conjugate assignment prior (CAP). We believe these methods have much broader applicability beyond our current tracking algorithm. Third, we develop a process to handle the track management problem in a model-based way.
1 Model Setup for the Tracking Problem
In this section we describe the full model used in the tracking problem and develop an unambiguous notation. At each time step k, known as a frame, we observe N_Z(k) measurements, collected in a matrix Z_k = \{z_{j,k}\}_{j=1}^{N_Z(k)}, from both real targets and clutter (spurious measurements). In the radar example z_{j,k} is a vector of position measurements in R^3. In data association we estimate the assignment matrices A, where A_{ij} = 1 if and only if track i is associated with measurement j. Recall that each track is associated with at most one measurement, and vice versa, implying:
\[ \sum_{j=0}^{N_Z} A_{ij} = 1, \quad i \in 1{:}N_T, \qquad \sum_{i=0}^{N_T} A_{ij} = 1, \quad j \in 1{:}N_Z, \qquad A_{00} = 0. \tag{1} \]
The zero indices of A \in \{0,1\}^{(N_T+1) \times (N_Z+1)} are the "dummy row" and "dummy column" that represent the assignment of a measurement to clutter and the assignment of a track to a missed detection.
Distribution on Assignments  Although not explicitly stated in the literature, a careful examination of the cost functions used in the MAP optimization in MHT yields a particular and intuitive prior on the assignment matrices. The number of tracks N_T is assumed known a priori and N_Z is random. The corresponding generative process on assignment matrices is as follows: 1) Start with a one-to-one mapping from measurements to tracks: A <- I_{N_T x N_T}. 2) Each track is observed with probability P_D \in [0,1]^{N_T}. Only keep the columns of detected tracks: A <- A(:, d), d_i ~ Bernoulli(P_D(i)). 3) Sample a Poisson number of clutter measurements (columns): A <- [A, 0_{N_T x N_c}], N_c ~ Poisson(\lambda). 4) Use a random permutation vector \sigma to make the measurement order arbitrary: A <- A(:, \sigma). 5) Append a dummy row and column to A to satisfy the summation constraints (1). This process gives the following normalized prior on assignments:
\[ P(A \mid P_D) = \frac{\lambda^{N_c} e^{-\lambda}}{N_Z!} \prod_{i=1}^{N_T} P_D(i)^{d_i} \big(1 - P_D(i)\big)^{1 - d_i}. \tag{2} \]
Note that the detections d, NZ , and clutter measurement count Nc are deterministic functions of A.
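To make the generative process concrete, the following NumPy sketch (our own; the paper does not provide code) samples an assignment matrix satisfying the framing constraints (1); the function name and dense {0,1}-matrix representation are illustrative choices:

```python
import numpy as np

def sample_assignment_matrix(P_D, lam, rng=None):
    """Sample A from the prior of Eq. (2): steps 1-5 of the generative process."""
    rng = rng or np.random.default_rng()
    N_T = len(P_D)
    A = np.eye(N_T, dtype=int)                       # 1) one-to-one tracks <-> measurements
    d = rng.random(N_T) < np.asarray(P_D)            # 2) detections d_i ~ Bernoulli(P_D(i))
    A = A[:, d]                                      #    keep only columns of detected tracks
    N_c = rng.poisson(lam)                           # 3) N_c ~ Poisson(lam) clutter columns
    A = np.hstack([A, np.zeros((N_T, N_c), int)])
    A = A[:, rng.permutation(A.shape[1])]            # 4) random measurement order
    N_Z = A.shape[1]
    full = np.zeros((N_T + 1, N_Z + 1), dtype=int)   # 5) append dummy row and column
    full[1:, 1:] = A
    full[0, 1:] = 1 - A.sum(axis=0)                  # unassigned measurements -> clutter row
    full[1:, 0] = 1 - A.sum(axis=1)                  # undetected tracks -> missed-detection col
    return full                                      # satisfies (1), with A_00 = 0

A = sample_assignment_matrix(P_D=[0.9, 0.5, 0.7], lam=2.0)
```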
Track Model  We utilize a state space formulation over K time steps. The latent states x_{1:K} follow a Markov process, while the measurements z_{1:K} are iid conditional on the track state:
\[ p(z_{1:K}, x_{1:K}) = p(x_1) \prod_{k=2}^{K} p(x_k \mid x_{k-1}) \prod_{k=1}^{K} p(z_k \mid x_k), \tag{3} \]
where we have dropped the track and measurement indices i and j. Although more general models are possible, within this paper each track independently follows a linear system (i.e. a Kalman filter):
\[ p(x_k \mid x_{k-1}) = \mathcal{N}(x_k \mid F x_{k-1}, Q), \qquad p(z_k \mid x_k) = \mathcal{N}(z_k \mid H x_k, R). \tag{4} \]
Track Meta-states  We address the track management problem by augmenting track states with a two-state Markov model with an active/dormant meta-state s_k in a 1-of-N encoding:
\[ P(s_{1:K}) = P(s_1) \prod_{k=2}^{K} P(s_k \mid s_{k-1}), \qquad s_k \in \{0,1\}^{N_S}. \tag{5} \]
This effectively allows us to handle an unknown number of tracks by making N_T arbitrarily large; P_D is now a function of s, with a very small P_D in the dormant state and a larger P_D in the active state. Extensions with a larger number of states N_S are easily implementable. We refer to the collection of track meta-states over all tracks at frame k as S_k := \{s_{i,k}\}_{i=1}^{N_T}; likewise, X_k := \{x_{i,k}\}_{i=1}^{N_T}.
Full Model  We combine the assignment process and track models to obtain the full model joint:
\[ p(Z_{1:K}, X_{1:K}, A_{1:K}, S_{1:K}) = \prod_{k=1}^{K} p(Z_k \mid X_k, A_k)\, p(X_k \mid X_{k-1})\, P(S_k \mid S_{k-1})\, P(A_k \mid S_k) \tag{6} \]
\[ = \prod_{k=1}^{K} P(A_k \mid S_k) \cdot \prod_{i=1}^{N_T} p(x_{i,k} \mid x_{i,k-1})\, P(s_{i,k} \mid s_{i,k-1}) \cdot \prod_{j=1}^{N_Z(k)} p_0(z_{j,k})^{A^k_{0j}} \prod_{i=1}^{N_T} p(z_{j,k} \mid x_{i,k}, A^k_{ij} = 1)^{A^k_{ij}}, \]
where p_0 is the clutter distribution, which is often a uniform distribution. The traditional goal in tracking is to compute p(X_k \mid Z_{1:k}), the exact computation of which is intractable due to the "combinatorial explosion" in summing out the assignments A_{1:k}. The MHT MAP-based approach tackles this with $P(A_{k_1:k_2} \mid Z_{1:k}) \approx \mathbb{I}\{A_{k_1:k_2} = \hat{A}_{k_1:k_2}\}$ for a sliding window w = k_2 - k_1 + 1. Clearly an approximation is needed, but we show how to do much better than the MAP approach of the MHT. This motivates the next section, where we derive a conjugate prior on the assignments A_{1:k}, which is useful for improving upon MAP; we also cast (2) as a special case of this distribution.
2 The Conjugate Assignment Prior
Given that we must compute the posterior P(A|Z),¹ it is natural to ask what conjugate priors on A are possible. Deriving approximate inference procedures is often greatly simplified if the prior on the parameters is conjugate to the complete data likelihood p(Z, X|A) [2]. We follow the standard procedure for deriving the conjugate prior for an exponential family (EF) complete likelihood:
\[ p(Z, X \mid A) = \prod_{i=1}^{N_T} p(x_i) \prod_{j=1}^{N_Z} p_0(z_j)^{A_{0j}} \prod_{i=1}^{N_T} p(z_j \mid x_i, A_{ij} = 1)^{A_{ij}} = \prod_{i=1}^{N_T} p(x_i)\, \exp\!\big(\mathbf{1}^{\!\top} (A \odot L)\, \mathbf{1}\big), \tag{7} \]
\[ L_{ij} := \log p(z_j \mid x_i, A_{ij} = 1), \qquad L_{i0} := 0, \qquad L_{0j} := \log p_0(z_j), \]
where we have introduced the matrix L \in R^{(N_T+1) \times (N_Z+1)} to represent log likelihood contributions from the various assignments. Therefore, we have the following EF quantities [4, Ch. 2.4]: base measure h(Z, X) = \prod_{i=1}^{N_T} p(x_i), partition function g(A) = 1, natural parameters \eta(A) = vec A, and sufficient statistics T(Z, X) = vec L. This implies the conjugate assignment prior (CAP) for P(A|\theta):
\[ \mathrm{CAP}(A \mid \theta) := Z(\theta)^{-1}\, \mathbb{I}\{A \in \mathcal{A}\}\, \exp\!\big(\mathbf{1}^{\!\top} (\theta \odot A)\, \mathbf{1}\big), \qquad Z(\theta) := \sum_{A \in \mathcal{A}} \exp\!\big(\mathbf{1}^{\!\top} (\theta \odot A)\, \mathbf{1}\big), \tag{8} \]
where \mathcal{A} is the set of all assignment matrices that obey the one-to-one constraints (1). Note that \theta is a function of the track meta-states S. We recover the assignment prior of (2) in the form of the CAP distribution (8) via the following parameter settings, with \sigma(\cdot) denoting the logistic function:
\[ \theta_{ij} = \log \frac{P_D(i)}{(1 - P_D(i))\,\lambda} = \sigma^{-1}(P_D(i)) - \log \lambda, \quad i \in 1{:}N_T,\ j \in 1{:}N_Z, \qquad \theta_{0j} = \theta_{i0} = 0. \tag{9} \]
Due to the symmetries in the prior of (9) we can analytically normalize (8) in this special case:
\[ Z(\theta)^{-1} = P(A_{1:N_T,\,1:N_Z} = 0) = \mathrm{Poisson}(N_Z \mid \lambda) \prod_{i=1}^{N_T} \big(1 - P_D(i)\big). \tag{10} \]
Given that the dummy row and columns of \theta are zero in (9), equation (10) is clearly the only way to get (8) to match (2) for the zero-assignment case.
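A small sketch of these parameter settings (our own helper, assuming SciPy for the Poisson term; the function name is illustrative):

```python
import numpy as np
from scipy.stats import poisson

def cap_params(P_D, lam, N_Z):
    """CAP parameters theta (Eq. 9) and log-normalizer log Z(theta) (Eq. 10)."""
    P_D = np.asarray(P_D)
    N_T = len(P_D)
    theta = np.zeros((N_T + 1, N_Z + 1))
    # logit(P_D) - log(lam) on the real track/measurement block; dummy row/col stay 0
    theta[1:, 1:] = (np.log(P_D / (1.0 - P_D)) - np.log(lam))[:, None]
    log_Z = -(poisson.logpmf(N_Z, lam) + np.log(1.0 - P_D).sum())
    return theta, log_Z
```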
Although the conjugate prior (8) allows us to "compute" the posterior, \theta_posterior = \theta_prior + L, computing E[A] or Z(\theta) remains difficult in general. This will cause problems in Section 3, but will be ameliorated in Section 4 by a slight modification of the variational objective.
One insight into the partition function Z(\theta) is that if we slightly change the constraints in \mathcal{A} so that all rows and columns must sum to one, i.e. we do not use a dummy row or column and \mathcal{A} becomes the set of permutation matrices, then Z(\theta) equals the matrix permanent of exp(\theta), which is #P-complete to compute [24]. Although the matrix permanent is #P-complete, accurate and computationally efficient approximations exist, some based on belief propagation [25; 17].
3 Variational Formulation
As explained in Section 1, exact inference on the full model in (6) is intractable, and as promised we show how to perform better inference than the existing solution of sliding-window MAP. Our variational tracker enforces the factorization constraint that the posterior factorizes across assignment matrices and latent track states:
\[ p(A_{1:K}, X_{1:K}, S_{1:K} \mid Z_{1:K}) \approx q(A_{1:K}, X_{1:K}, S_{1:K}) = q(A_{1:K})\, q(X_{1:K}, S_{1:K}). \tag{11} \]
In some sense we can think of A as the "parameters", with X and S as the "latent variables", and use the common variational practice of factorizing these two groups of variables. This gives the variational lower bound L(q):
\[ \mathcal{L}(q) = \mathbb{E}_q[\log p(Z_{1:K}, X_{1:K}, A_{1:K}, S_{1:K})] + H[q(X_{1:K}, S_{1:K})] + H[q(A_{1:K})], \tag{12} \]
¹In this section we drop the frame index k and implicitly condition on the meta-states S_k for brevity.
where H[\cdot] denotes the Shannon entropy. From inspecting the VB lower bound (12) and (6) we arrive at the following induced factorizations, without forcing further factorization upon (11):
\[ q(A_{1:K}) = \prod_{k=1}^{K} q(A_k), \qquad q(X_{1:K}, S_{1:K}) = \prod_{i=1}^{N_T} q(x_{i,\cdot})\, q(s_{i,\cdot}). \tag{13} \]
In other words, the approximate posterior on assignment matrices factorizes across time; and the
approximate posterior on latent states factorizes across tracks.
State Posterior Update  Based on the induced factorizations in (13) we derive the updates for the track states x_{i,\cdot} and meta-states s_{i,\cdot} separately. Additionally, we derive the updates for each track separately. We begin with the variational updates for q(x_{i,\cdot}) using the standard VB update rules [4, Ch. 10] and (6), denoting equality up to an additive constant by $\overset{c}{=}$:
\[ \log q(x_{i,\cdot}) \overset{c}{=} \log p(x_{i,\cdot}) + \sum_{k=1}^{K} \sum_{j=1}^{N_Z(k)} \mathbb{E}[A^k_{ij}] \log \mathcal{N}(z_{j,k} \mid H x_{i,k}, R) \tag{14} \]
\[ \implies q(x_{i,\cdot}) \propto p(x_{i,\cdot}) \prod_{k=1}^{K} \prod_{j=1}^{N_Z(k)} \mathcal{N}\big(z_{j,k} \mid H x_{i,k}, R / \mathbb{E}[A^k_{ij}]\big). \tag{15} \]
Using the standard product of Gaussians formula [6] this is proportional to
\[ q(x_{i,\cdot}) \propto p(x_{i,\cdot}) \prod_{k=1}^{K} \mathcal{N}\big(\bar{z}_{i,k} \mid H x_{i,k}, R / \mathbb{E}[d_{i,k}]\big), \qquad \bar{z}_{i,k} := \frac{1}{\mathbb{E}[d_{i,k}]} \sum_{j=1}^{N_Z} \mathbb{E}[A^k_{ij}]\, z_{j,k}, \tag{16} \]
and recall that $\mathbb{E}[d_{i,k}] = 1 - \mathbb{E}[A^k_{i0}] = \sum_{j=1}^{N_Z} \mathbb{E}[A^k_{ij}]$. The form of the posterior q(x_{i,\cdot}) is equivalent to a linear dynamical system with pseudo-measurements $\bar{z}_{i,k}$ and non-stationary measurement covariance $R/\mathbb{E}[d_{i,k}]$. Therefore, q(x_{i,\cdot}) is simply implemented using a Kalman smoother [22].
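In code, forming the pseudo-measurements of (16) is a one-liner per frame; the sketch below (ours) assumes dense arrays of the expected assignments:

```python
import numpy as np

def pseudo_measurements(EA, Z):
    """Pseudo-measurements z_bar and detection weights E[d] for one track (Eq. 16).

    EA: (K, N_Z) array of E[A^k_{ij}] for track i over frames k.
    Z:  (K, N_Z, dz) array of measurements z_{j,k}.
    Returns z_bar (K, dz) and Ed (K,); the smoother then runs with covariance R / Ed[k].
    """
    Ed = EA.sum(axis=1)                               # E[d_{i,k}] = sum_j E[A^k_{ij}]
    z_bar = np.einsum('kj,kjd->kd', EA, Z) / np.maximum(Ed, 1e-12)[:, None]
    return z_bar, Ed
```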
Meta-state Posterior Update  We next consider the posterior on the track meta-states:
\[ \log q(s_{i,\cdot}) \overset{c}{=} \log P(s_{i,\cdot}) + \sum_{k=1}^{K} \mathbb{E}_{q(A_k)}[\log P(A_k \mid S_k)] \overset{c}{=} \log P(s_{i,\cdot}) + \sum_{k=1}^{K} s_{i,k}^{\top} \ell_{i,k}, \tag{17} \]
\[ \ell_{i,k}(s) := \mathbb{E}[d_{i,k}] \log P_D(s) + \big(1 - \mathbb{E}[d_{i,k}]\big) \log\big(1 - P_D(s)\big), \qquad s \in 1{:}N_S, \tag{18} \]
\[ \implies q(s_{i,\cdot}) \propto P(s_{i,\cdot}) \prod_{k=1}^{K} \exp\big(s_{i,k}^{\top} \ell_{i,k}\big), \tag{19} \]
where (18) follows from (2). If P(s_{i,\cdot}) follows a Markov chain then the form of q(s_{i,\cdot}) is the same as a hidden Markov model (HMM) with emission log likelihoods $\ell_{i,k} \in [\mathbb{R}_-]^{N_S}$. Therefore, the meta-state posterior q(s_{i,\cdot}) update is implemented using the forward-backward algorithm [21].
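A minimal scaled forward-backward pass for this meta-chain might look as follows (our own sketch; the variable names are illustrative):

```python
import numpy as np

def meta_state_posterior(ell, T, pi):
    """HMM forward-backward for q(s_{i,.}) with emission log-likelihoods ell (Eqs. 17-19).

    ell: (K, N_S) emission log-likelihoods ell_{i,k}(s); T: (N_S, N_S) transitions with
    T[a, b] = P(s_k = b | s_{k-1} = a); pi: (N_S,) initial distribution.
    Returns gamma (K, N_S), the posterior marginals of the meta-state.
    """
    K, N_S = ell.shape
    B = np.exp(ell - ell.max(axis=1, keepdims=True))   # rescaled emissions for stability
    alpha = np.zeros((K, N_S)); beta = np.ones((K, N_S))
    alpha[0] = pi * B[0]; alpha[0] /= alpha[0].sum()
    for k in range(1, K):                              # forward pass
        alpha[k] = (alpha[k - 1] @ T) * B[k]
        alpha[k] /= alpha[k].sum()
    for k in range(K - 2, -1, -1):                     # backward pass
        beta[k] = T @ (B[k + 1] * beta[k + 1])
        beta[k] /= beta[k].sum()
    gamma = alpha * beta
    return gamma / gamma.sum(axis=1, keepdims=True)
```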
Like the MHT, our algorithm also works in an online fashion using a (much larger) sliding window.
Assignment Matrix Update  The reader can verify using (7)-(9) that the exact updates under the lower bound L(q) (12) yield a product of CAP distributions:
\[ q(A_{1:K}) = \prod_{k=1}^{K} \mathrm{CAP}\big(A_k \mid \mathbb{E}_{q(X_k)}[L_k] + \mathbb{E}_{q(S_k)}[\theta_k]\big). \tag{20} \]
This poses a challenging problem, as the state posterior updates of (16) and (19) require $\mathbb{E}_{q(A_k)}[A_k]$; since q(A_k) is a CAP distribution, we know from Section 2 that its expectation is difficult to compute.
4 The Assignment Matrix Update Equations
In this section we modify the variational lower bound (12) to obtain a tractable algorithm. The resulting algorithm uses loopy belief propagation to compute $\mathbb{E}_{q(A_k)}[A_k]$ for use in (16) and (19).
We first note that the CAP distribution (8) is naturally represented as a factor graph:
\[ \mathrm{CAP}(A \mid \theta) \propto \prod_{i=1}^{N_T} f^R_i(A_{i\cdot}) \prod_{j=1}^{N_Z} f^C_j(A_{\cdot j}) \prod_{i=0}^{N_T} \prod_{j=0}^{N_Z} f^S_{ij}(A_{ij}), \tag{21} \]
with $f^R_i(v) := \mathbb{I}\{\sum_{j=0}^{N_Z} v_j = 1\}$ (R for row factors), $f^C_j(v) := \mathbb{I}\{\sum_{i=0}^{N_T} v_i = 1\}$ (C for column factors), and $f^S_{ij}(v) := \exp(\theta_{ij} v)$. We use reparametrization methods (see [10]) to convert (21) to a pairwise factor graph, where derivation of the Bethe free energy is easier. The Bethe entropy is:
\[ \tilde{H}[q(A)] := \sum_{i=1}^{N_T} \sum_{j=0}^{N_Z} H[q(r_i, A_{ij})] + \sum_{j=1}^{N_Z} \sum_{i=0}^{N_T} H[q(c_j, A_{ij})] - \sum_{i=1}^{N_T} N_Z\, H[q(r_i)] - \sum_{j=1}^{N_Z} N_T\, H[q(c_j)] - \sum_{i=1}^{N_T} \sum_{j=1}^{N_Z} H[q(A_{ij})] \tag{22} \]
\[ = \sum_{i=1}^{N_T} H[q(A_{i\cdot})] + \sum_{j=1}^{N_Z} H[q(A_{\cdot j})] - \sum_{i=1}^{N_T} \sum_{j=1}^{N_Z} H[q(A_{ij})], \tag{23} \]
where the pairwise conversion used constrained auxiliary variables $r_i := A_{i\cdot}$ and $c_j := A_{\cdot j}$, and used the implied relations $H[q(r_i, A_{ij})] = H[q(r_i)] + H[q(A_{ij} \mid r_i)] = H[q(r_i)] = H[q(A_{i\cdot})]$.
We define an altered variational lower bound $\tilde{\mathcal{L}}(q)$, which merely replaces the entropy $H[q(A_k)]$ with $\tilde{H}[q(A_k)]$.² Note that $\tilde{\mathcal{L}}(q) \overset{c}{=} \mathcal{L}(q)$ with respect to $q(X_{1:K}, S_{1:K})$, which implies that the state posterior updates under the old bound $\mathcal{L}(q)$ in (16) and (19) remain unchanged under the new bound $\tilde{\mathcal{L}}(q)$. To get the new update equations for $q(A_k)$ we examine $\tilde{\mathcal{L}}(q)$ in terms of $q(A_{1:K})$:
\[ \tilde{\mathcal{L}}(q) \overset{c}{=} \mathbb{E}_q[\log p(Z_{1:K} \mid X_{1:K}, A_{1:K})] + \mathbb{E}_q[\log P(A_{1:K} \mid S_{1:K})] + \sum_{k=1}^{K} \tilde{H}[q(A_k)] \tag{24} \]
\[ \overset{c}{=} \sum_{k=1}^{K} \mathbb{E}_{q(A_k)}\big[\mathbf{1}^{\!\top} \big(A_k \odot (\mathbb{E}_{q(X_k)}[L_k] + \mathbb{E}_{q(S_k)}[\theta_k])\big)\, \mathbf{1}\big] + \sum_{k=1}^{K} \tilde{H}[q(A_k)] \tag{25} \]
\[ \overset{c}{=} \sum_{k=1}^{K} \mathbb{E}_{q(A_k)}\big[\log \mathrm{CAP}\big(A_k \mid \mathbb{E}_{q(X_k)}[L_k] + \mathbb{E}_{q(S_k)}[\theta_k]\big)\big] + \tilde{H}[q(A_k)]. \tag{26} \]
This corresponds to the Bethe free energy of the factor graph described in (21), with $\mathbb{E}[L_k] + \mathbb{E}[\theta_k]$ as the CAP parameter [26; 12]. Therefore, we can compute $\mathbb{E}[A_k]$ using loopy belief propagation.
Loopy BP Derivation  We define the key (row/column) quantities for the belief propagation:
\[ \mu^R_{ij} := \mathrm{msg}_{f^R_i \to A_{ij}}, \qquad \mu^C_{ij} := \mathrm{msg}_{f^C_j \to A_{ij}}, \qquad \nu^R_{ij} := \mathrm{msg}_{A_{ij} \to f^R_i}, \qquad \nu^C_{ij} := \mathrm{msg}_{A_{ij} \to f^C_j}, \]
where all messages are functions $\{0,1\} \to \mathbb{R}_+$. Using the standard rules of BP we derive:
\[ \nu^R_{ij}(x) = \mu^C_{ij}(x)\, f^S_{ij}(x), \qquad \mu^R_{ij}(1) = \prod_{k \neq j} \nu^R_{ik}(0), \qquad \mu^R_{ij}(0) = \sum_{l \neq j} \nu^R_{il}(1) \prod_{k \neq j,\, l} \nu^R_{ik}(0), \tag{27} \]
where we have exploited that there is only one nonzero value in the row $A_{i\cdot}$. Notice that
\[ \mu^R_{ij}(1) = \frac{\prod_{k=0}^{N_Z} \nu^R_{ik}(0)}{\nu^R_{ij}(0)} \;\implies\; \bar{\mu}^R_{ij} := \frac{\mu^R_{ij}(0)}{\mu^R_{ij}(1)} = \sum_{l=0}^{N_Z} \frac{\nu^R_{il}(1)}{\nu^R_{il}(0)} - \frac{\nu^R_{ij}(1)}{\nu^R_{ij}(0)} \in \mathbb{R}_+, \tag{28} \]
where we have pulled $\nu^R_{ij}(1)$ out of (27). We write the ratio of messages to row factors $\bar{\nu}^R$ as
\[ \bar{\nu}^R_{ij} := \nu^R_{ij}(1)/\nu^R_{ij}(0) = \big(\mu^C_{ij}(1)/\mu^C_{ij}(0)\big) \exp(\theta_{ij}) \in \mathbb{R}_+. \tag{29} \]
We symmetrically apply (27)-(29) to the column (i.e. C) messages $\bar{\mu}^C_{ij}$ and $\bar{\nu}^C_{ij}$. As is common in binary graphs, we summarize the entire message passing update scheme in terms of message ratios:
\[ \bar{\mu}^R_{ij} = \sum_{l=0}^{N_Z} \bar{\nu}^R_{il} - \bar{\nu}^R_{ij}, \qquad \bar{\nu}^R_{ij} = \frac{\exp(\theta_{ij})}{\bar{\mu}^C_{ij}}, \qquad \bar{\mu}^C_{ij} = \sum_{l=0}^{N_T} \bar{\nu}^C_{lj} - \bar{\nu}^C_{ij}, \qquad \bar{\nu}^C_{ij} = \frac{\exp(\theta_{ij})}{\bar{\mu}^R_{ij}}. \tag{30} \]
Finally, we compute the marginal distributions $\mathbb{E}[A_{ij}]$ by normalizing the product of the incoming messages to each variable: $\mathbb{E}[A_{ij}] = P(A_{ij} = 1) = \sigma(\theta_{ij} - \log \bar{\mu}^R_{ij} - \log \bar{\mu}^C_{ij})$.
²In most models $\tilde{H}[\cdot] \neq H[\cdot]$; in practice we always observe (without proof) $\tilde{H}[\cdot] \leq H[\cdot]$, so that $\tilde{\mathcal{L}}$ remains a lower bound.
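The fixed-point iteration of (30) is straightforward to implement; the following NumPy sketch (our own, handling the dummy row and column explicitly since they have no one-to-one factors) computes the marginals E[A_ij]:

```python
import numpy as np

def cap_marginals(theta, n_iter=200):
    """Loopy-BP marginals E[A_ij] of a CAP distribution via the ratio updates (30).

    theta: (N_T+1, N_Z+1) CAP parameters, dummy row/column at index 0.
    Row factors exist only for i >= 1 and column factors only for j >= 1.
    """
    w = np.exp(theta)
    mu_R = np.ones_like(w)   # ratios mu^R_ij(0)/mu^R_ij(1); stays 1 where no row factor
    mu_C = np.ones_like(w)   # ratios mu^C_ij(0)/mu^C_ij(1); stays 1 where no column factor
    nu_R = np.ones_like(w)   # ratios nu^R_ij(1)/nu^R_ij(0)
    nu_C = np.ones_like(w)
    for _ in range(n_iter):
        mu_R[1:, :] = nu_R[1:, :].sum(axis=1, keepdims=True) - nu_R[1:, :]
        nu_C[1:, :] = w[1:, :] / np.maximum(mu_R[1:, :], 1e-30)  # nu^C = exp(theta)/mu^R
        nu_C[0, :] = w[0, :]                 # dummy row: these variables see no row factor
        mu_C[:, 1:] = nu_C[:, 1:].sum(axis=0, keepdims=True) - nu_C[:, 1:]
        nu_R[:, 1:] = w[:, 1:] / np.maximum(mu_C[:, 1:], 1e-30)  # nu^R = exp(theta)/mu^C
        nu_R[:, 0] = w[:, 0]                 # dummy column: no column factor
    EA = 1.0 / (1.0 + np.exp(-(theta - np.log(mu_R) - np.log(mu_C))))
    EA[0, 0] = 0.0                           # A_00 is fixed to zero
    return EA
```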
[Figure 2 appears here, with panels (a) Radar Example, (b) SIAP Metrics, and (c) Assignment Accuracy.]
Figure 2: Left: The output of the trackers on the radar example. We show the true trajectories (red), 2D MHT (solid magenta), 3D MHT (solid green), and OMGP (cyan). The state estimates for the VB tracker when active and dormant (both black, distinguished by marker) are shown, where a >= 90% threshold on the meta-state s is used to deem a track active for plotting. Center: SIAP metrics for N = 100 realizations of the scenario on the left, with 95% error bars. We show positional accuracy (i.e. RMSE) (PA, lower is better), spurious tracks (S, lower is better), and track completeness (C, higher is better). The bars are in order: VB tracker (blue), 3D MHT (cyan), 2D MHT (yellow), and OMGP (red). The PA has been rescaled relative to OMGP so that all metrics are in %. Right: Same as center, but showing assignment accuracy via the ARI (higher is better), the no-clutter (NC) ARI (higher is better), and the 0-1 loss (lower is better) for classifying measurements as clutter.
5 Radar Tracking Example
We borrow the radar tracking example of the OMGP paper [18]. We have made the example more realistic by adding clutter (\lambda = 8) and missed detections (P_D = 0.5), which were omitted in [18], and we use N = 100 realizations to get confidence intervals on the results. We also compare with the 2D and 3D (i.e. multi-frame) MHT trackers as a baseline, as they are the most widely used methods in practice. The OMGP requires the number of tracks N_T to be specified in advance, so we provided it with the true number of tracks, which should have given it an extra advantage. The trackers were evaluated using the SIAP metrics, which are the standard evaluation metrics in the field [7]. We also use the adjusted Rand index (ARI) [13] to compare the accuracy of the assignments made by the algorithms; the "no clutter" ARI (which ignores clutter) and the 0-1 loss for classifying measurements as clutter also serve as assignment metrics.
In Figure 2(a) both OMGP and 2D MHT miss the real tracks and create spurious tracks from clutter measurements. The 3D MHT does better, but misses the western portion of track 3 and makes a swap between tracks 1 and 3 at their intersection. By contrast, the VB tracker gets the scenario almost perfect, except for a small bit of the southern portion of track 2. In that area, VB designates the track as dormant, acknowledging that the associated measurements are likely clutter. This replaces the notion of a "confirmed" track in the standard tracking literature with a model-based method, and demonstrates the advantages of using a principled and model-based paradigm for the track management problem. This is shown quantitatively over repeated trials in Figure 2(b) in terms of positional error; even more striking are the near absence of spurious tracks for VB and its much higher completeness relative to the competing methods. We also show that the assignments are much more accurate in Figure 2(c). To check the statistical significance of our results we used a paired t-test to compare the difference between VB and the second-best method, the 3D MHT. Both the SIAP and assignment metrics have p <= 10^-4.
6 Real Data: Video Tracking in Sports
We use the VS-PETS 2003 soccer player data set as a real-data example to validate our method. The data set is a 2500-frame video of players moving around a soccer field, with annotated ground truth; the variety of player interactions makes it a challenging test case for multi-object tracking algorithms. To demonstrate the robustness of our tracker in correcting a detector given minimal training examples, we used multi-scale histogram of oriented gradients (HOG) features from 50 positive and 50 negative examples of soccer players to train a sliding-window support vector machine (SVM) [23]. HOG features have been shown to work particularly well for pedestrian detection on the Caltech and INRIA data sets, and are thus used for this example [8]. For each frame, the center of each bounding box is provided as the only input to our tracker. Despite modest detection rates from the HOG-SVM, our tracker is still capable of separating clutter and dealing with missed detections.
[Figure 3 appears here, with panels (a) Soccer Tracking Problem and (b) Soccer Assignment Metrics.]
Figure 3: Left: Example from soccer player tracking. We show the filtered state estimates of the MHT (magenta) and VB tracker (cyan) for the last 25 frames, as well as the true positions (black). The green boxes show the detections of the HOG-SVM for the current frame. Right: Same as Figure 2(c) but for the soccer data. Methods in order: VB-DP (dark blue), VB (light blue), 3D MHT (green), 2D MHT (orange), and OMGP (red). Soccer data source: http://www.cvg.rdg.ac.uk/slides/pets.html.
We modeled player motion using (4) with F and Q derived from an NCV model [1, Ch. 1.5]. The parameters for the NCV, R, P_D, \lambda, and the track meta-state parameters were trained by optimizing the variational lower bound $\tilde{\mathcal{L}}$ on the first 1000 frames, although the algorithm did not appear sensitive to these parameters. We additionally show an extension of the VB tracker with nonparametric clutter-map learning: we learned the clutter map by passing the training measurements into a VB Dirichlet process (DP) mixture [5], weighted by their probability of being clutter under q(A). The resulting posterior predictive distribution served as p_0 in the test phase; we refer to this method as the VB-DP tracker. We split the remainder of the data into 70 sequences of K = 20 frames for a test set. Due to the nature of this example, we evaluate the batch accuracy of assigning boxes to the correct players. This demonstrates the utility of our algorithm for building a database of player images for later processing and other applications. In Figure 3(b) we show the ARI and related assignment metrics for VB-DP, VB, 2D MHT, 3D MHT, and OMGP. Note that the ARI only evaluates the accuracy of the MAP assignment estimate of VB; VB additionally provides uncertainty estimates on the assignments, unlike the MHT. VB increases the no-clutter ARI to 0.95 ± 0.01 from 0.86 ± 0.01 for the 3D MHT, and decreases the 0-1 clutter loss to 0.18 ± 0.01 from 0.21 ± 0.01 for OMGP. Using the nonparametric clutter map lowered the 0-1 loss to 0.016 ± 0.005 and increased the ARI to 0.94 ± 0.01 (vs. 0.76 ± 0.01 for the 2D and 3D MHT), as the VB-DP tracker knew certain areas, such as the post in the lower right, were more prone to clutter. As in the radar example, the VB vs. MHT and VB vs. OMGP improvements are significant at p <= 10^-4. The poor NC-ARI of OMGP is likely due to its lack of framing constraints, ignoring prior information on the assignments.
Furthermore, in Figure 3(a) we plot filtered state estimates for the (non-DP) VB tracker; we again use the >= 90% meta-state threshold for a "confirmed track." We see that the MHT is tricked by the various false detections from the HOG-SVM and has spurious tracks across the field; the VB tracker "introspectively" knows when a track is unlikely to be real. While both the MHT and VB detect the referee in the upper right of the frame, the VB tracker quickly sets this track to dormant when he leaves the frame. The MHT temporarily extrapolates the track into the field before destroying it.
7 Conclusions
The model-based manner of handling the track management problem shows clear advantages and
may be the path forward for the field, which can clearly benefit from algorithms that eliminate
arbitrary tuning parameters. Our method may be desirable even in tracking scenarios under which
a full posterior does not confer advantages over a point estimate. We improve accuracy and reduce
the exponential cost of the MAP approach to linear, which is a result of the induced factorizations
of (13). We have also incorporated the often neglected framing constraints into our variational
algorithm, which fits nicely with loopy belief propagation methods. Other areas, such as more sophisticated meta-state models, provide opportunities to extend this work into further applications of tracking and to establish it as a general alternative to dominant approaches such as the MHT.
References
[1] Bar-Shalom, Y., Willett, P., and Tian, X. (2011). Tracking and Data Fusion: A Handbook of Algorithms. YBS Publishing.
[2] Beal, M. and Ghahramani, Z. (2003). The variational Bayesian EM algorithm for incomplete data: with application to scoring graphical model structures. In Bayesian Statistics, volume 7, pages 453–464.
[3] Benfold, B. and Reid, I. (2011). Stable multi-target tracking in real-time surveillance video. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 3457–3464. IEEE.
[4] Bishop, C. M. (2007). Pattern Recognition and Machine Learning. Springer.
[5] Blei, D. M., Jordan, M. I., et al. (2006). Variational inference for Dirichlet process mixtures. Bayesian Analysis, 1(1):121–143.
[6] Bromiley, P. (2013). Products and convolutions of Gaussian probability density functions. Tina-Vision Memo 2003-003, University of Manchester.
[7] Byrd, E. (2003). Single integrated air picture (SIAP) attributes version 2.0. Technical Report 2003-029, DTIC.
[8] Dalal, N. and Triggs, B. (2005). Histograms of oriented gradients for human detection. In Computer Vision and Pattern Recognition (CVPR), 2005 IEEE Conference on, pages 886–893.
[9] Davey, S. J. (2003). Extensions to the probabilistic multi-hypothesis tracker for improved data association. PhD thesis, The University of Adelaide.
[10] Eaton, F. and Ghahramani, Z. (2013). Model reductions for inference: Generality of pairwise, binary, and planar factor graphs. Neural Computation, 25(5):1213–1260.
[11] Hartikainen, J. and Särkkä, S. (2010). Kalman filtering and smoothing solutions to temporal Gaussian process regression models. In Machine Learning for Signal Processing (MLSP), pages 379–384. IEEE.
[12] Heskes, T. (2003). Stable fixed points of loopy belief propagation are minima of the Bethe free energy. In Advances in Neural Information Processing Systems 15, pages 359–366. MIT Press.
[13] Hubert, L. and Arabie, P. (1985). Comparing partitions. Journal of Classification, 2(1):193–218.
[14] Jensfelt, P. and Kristensen, S. (2001). Active global localization for a mobile robot using multiple hypothesis tracking. Robotics and Automation, IEEE Transactions on, 17(5):748–760.
[15] Jonker, R. and Volgenant, A. (1987). A shortest augmenting path algorithm for dense and sparse linear assignment problems. Computing, 38(4):325–340.
[16] Kalman, R. E. (1960). A new approach to linear filtering and prediction problems. Transactions of the ASME – Journal of Basic Engineering, 82(Series D):35–45.
[17] Lau, R. A. and Williams, J. L. (2011). Multidimensional assignment by dual decomposition. In Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2011 Seventh International Conference on, pages 437–442. IEEE.
[18] Lázaro-Gredilla, M., Van Vaerenbergh, S., and Lawrence, N. D. (2012). Overlapping mixtures of Gaussian processes for the data association problem. Pattern Recognition, 45(4):1386–1395.
[19] Mahler, R. (2003). Multitarget Bayes filtering via first-order multitarget moments. Aerospace and Electronic Systems, IEEE Transactions on, 39(4):1152–1178.
[20] Poore, A. P., Rijavec, N., Barker, T. N., and Munger, M. L. (1993). Data association problems posed as multidimensional assignment problems: algorithm development. In Optical Engineering and Photonics in Aerospace Sensing, pages 172–182. International Society for Optics and Photonics.
[21] Rabiner, L. (1989). A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286.
[22] Rauch, H. E., Tung, F., and Striebel, C. T. (1965). Maximum likelihood estimates of linear dynamical systems. AIAA Journal, 3(8):1445–1450.
[23] Schölkopf, B. and Smola, A. J. (2001). Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. The MIT Press, Cambridge, MA, USA.
[24] Valiant, L. G. (1979). The complexity of computing the permanent. Theoretical Computer Science, 8(2):189–201.
[25] Watanabe, Y. and Chertkov, M. (2010). Belief propagation and loop calculus for the permanent of a non-negative matrix. Journal of Physics A: Mathematical and Theoretical, 43(24):242002.
[26] Yedidia, J. S., Freeman, W. T., and Weiss, Y. (2001). Bethe free energy, Kikuchi approximations, and belief propagation algorithms. In Advances in Neural Information Processing Systems 13.
Joint Training of a Convolutional Network and a Graphical Model for Human Pose Estimation
Jonathan Tompson, Arjun Jain, Yann LeCun, Christoph Bregler
New York University
{tompson, ajain, yann, bregler}@cs.nyu.edu
Abstract
This paper proposes a new hybrid architecture that consists of a deep Convolutional Network and a Markov Random Field. We show how this architecture is
successfully applied to the challenging problem of articulated human pose estimation in monocular images. The architecture can exploit structural domain constraints such as geometric relationships between body joint locations. We show
that joint training of these two model paradigms improves performance and allows
us to significantly outperform existing state-of-the-art techniques.
1 Introduction
Despite a long history of prior work, human body pose estimation, or specifically the localization
of human joints in monocular RGB images, remains a very challenging task in computer vision.
Complex joint inter-dependencies, partial or full joint occlusions, variations in body shape, clothing
or lighting, and unrestricted viewing angles result in a very high dimensional input space, making
naive search methods intractable.
Recent approaches to this problem fall into two broad categories: 1) more traditional deformable
part models [27] and 2) deep-learning based discriminative models [15, 30]. Bottom-up part-based
models are a common choice for this problem since the human body naturally segments into articulated parts. Traditionally these approaches have relied on the aggregation of hand-crafted low-level
features such as SIFT [18] or HoG [7], which are then input to a standard classifier or a higher level
generative model. Care is taken to ensure that these engineered features are sensitive to the part that
they are trying to detect and are invariant to numerous deformations in the input space (such as variations in lighting). On the other hand, discriminative deep-learning approaches learn an empirical
set of low and high-level features which are typically more tolerant to variations in the training set
and have recently outperformed part-based models [27]. However, incorporating priors about the
structure of the human body (such as our prior knowledge about joint inter-connectivity) into such
networks is difficult since the low-level mechanics of these networks is often hard to interpret.
In this work we attempt to combine a Convolutional Network (ConvNet) Part-Detector, which alone outperforms all other existing methods, with a part-based Spatial-Model into a unified learning framework. Our translation-invariant ConvNet architecture utilizes a multi-resolution feature
representation with overlapping receptive fields. Additionally, our Spatial-Model is able to approximate MRF loopy belief propagation, which is subsequently back-propagated through, and learned
using the same learning framework as the Part-Detector. We show that the combination and joint
training of these two models improves performance, and allows us to significantly outperform existing state-of-the-art models on the task of human body pose recognition.
2 Related Work
For unconstrained image domains, many architectures have been proposed, including "shape-context" edge-based histograms from the human body [20] or just silhouette features [13]. Many techniques have been proposed that extract, learn, or reason over entire body features. Some use a combination of local detectors and structural reasoning ([25] for coarse tracking and [5] for person-dependent tracking). In a similar spirit, more general techniques using "Pictorial Structures", such as the work by Felzenszwalb et al. [10], made this approach tractable with so-called "Deformable Part Models (DPM)". Subsequently a large number of related models were developed [1, 9, 31, 8].
Algorithms which model more complex joint relationships, such as Yang and Ramanan [31], use
a flexible mixture of templates modeled by linear SVMs. Johnson and Everingham [16] employ
a cascade of body part detectors to obtain more discriminative templates. Most recent approaches
aim to model higher-order part relationships. Pishchulin [23, 24] proposes a model that augments
the DPM model with Poselet [3] priors. Sapp and Taskar [27] propose a multi-modal model which
includes both holistic and local cues for mode selection and pose estimation. Following the Poselets approach, the Armlets approach by Gkioxari et al. [12] employs a semi-global classifier for part configuration and shows good performance on real-world data; however, it is tested only on arms. Furthermore, all these approaches suffer from the fact that they use hand-crafted features such as HoG features, edges, contours, and color histograms.
The best performing algorithms today for many vision tasks, and for human pose estimation in particular [30, 15, 29], are based on deep convolutional networks. Toshev et al. [30] show state-of-the-art performance on the FLIC [27] and LSP [17] datasets. However, their method suffers from inaccuracy in the high-precision region, which we attribute to the inefficient direct regression of pose vectors from images, which is a highly non-linear and difficult-to-learn mapping.
Joint training of neural-networks and graphical models has been previously reported by Ning et
al. [22] for image segmentation, and by various groups in speech and language modeling [4, 21].
To our knowledge no such model has been successfully used for the problem of detecting and localizing body part positions of humans in images. Recently, Ross et al. [26] use a message-passing
inspired procedure for structured prediction on computer vision tasks, such as 3D point cloud classification and 3D surface estimation from single images. In contrast to this work, we formulate our
message-parsing inspired network in a way that is more amenable to back-propagation and so can be
implemented in existing neural networks. Heitz et al. [14] train a cascade of off-the-shelf classifiers
for simultaneously performing object detection, region labeling, and geometric reasoning. However,
because of the forward nature of the cascade, a later classifier is unable to encourage earlier ones
to focus its effort on fixing certain error modes, or allow the earlier classifiers to ignore mistakes
that can be undone by classifiers further in the cascade. Bergtholdt et al. [2] propose an approach for object class detection using a parts-based model where they are able to create a fully connected graph on parts and perform MAP inference using A* search, but they rely on SIFT and color features to create the unary and pairwise potentials.
3 Model
3.1 Convolutional Network Part-Detector
[Figure 1 appears here: two resolution banks process image patches. The upper bank takes a 64x64x3 patch and the lower bank a 128x128 context down-sampled to 64x64x3; each is LCN-normalized and passed through three stages of 5x5 convolution + ReLU + pooling (feature maps of 30x30x128, 13x13x128, and 9x9x128), followed by fully-connected layers (including stages of size 9x9x256, 512, and 256).]
Figure 1: Multi-Resolution Sliding-Window With Overlapping Receptive Fields
The first stage of our detection pipeline is a deep ConvNet architecture for body part localization.
The input is an RGB image containing one or more people and the output is a heat-map, which
produces a per-pixel likelihood for key joint locations on the human skeleton.
A sliding-window ConvNet architecture is shown in Fig 1. The network is slid over the input image
to produce a dense heat-map output for each body-joint. Our model incorporates a multi-resolution
input with overlapping receptive fields. The upper convolution bank in Fig 1 sees a standard 64x64
resolution input window, while the lower bank sees a larger 128x128 input context down-sampled
to 64x64. The input images are then Local Contrast Normalized (LCN [6]) (after down-sampling
with anti-aliasing in the lower resolution bank) to produce an approximate Laplacian pyramid. The
advantage of using overlapping contexts is that it allows the network to see a larger portion of the
input image with only a moderate increase in the number of weights. The role of the Laplacian
Pyramid is to provide each bank with non-overlapping spectral content which minimizes network
redundancy.
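As an illustration of this input preparation, the following sketch builds the two banks with anti-aliased down-sampling and a simple Gaussian-window LCN; the kernel widths are our own guesses, not the paper's values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def prepare_banks(img):
    """Build the two LCN-normalized input banks (a rough sketch under assumed parameters).

    img: float32 array (H, W, 3) in [0, 1]; returns the full-resolution bank and a
    half-resolution bank covering twice the spatial context.
    """
    def lcn(x, sigma=4.0):
        # local contrast normalization: subtract a local mean, divide by a local std
        mean = gaussian_filter(x, sigma=(sigma, sigma, 0))
        centered = x - mean
        std = np.sqrt(gaussian_filter(centered ** 2, sigma=(sigma, sigma, 0))) + 1e-4
        return centered / std

    low = gaussian_filter(img, sigma=(1.0, 1.0, 0))   # anti-alias before decimation
    low = zoom(low, (0.5, 0.5, 1.0), order=1)         # 2x down-sample
    return lcn(img), lcn(low)
```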
[Figure 2 appears here: the full 320x240 image passes through three stages of Conv + ReLU + Pool to a 98x68x128 feature map, followed by a 9x9 convolution (90x60x512) and two 1x1 convolutions (90x60x256, then 90x60x4) that replicate the fully-connected layers at every pixel.]
Figure 2: Efficient Sliding Window Model with Single Receptive Field
An advantage of the Sliding-Window model (Fig 1) is that the detector is translation invariant. However, a major drawback is that evaluation is expensive due to redundant convolutions. Recent
work [11, 28] has addressed this problem by performing the convolution stages on the full input
image to efficiently create dense feature maps. These dense feature maps are then processed through
convolution stages to replicate the fully-connected network at each pixel. An equivalent but efficient
version of the sliding window model for a single resolution bank is shown in Fig 2. Note that due
to pooling in the convolution stages, the output heat-map will be a lower resolution than the input
image.
For our Part-Detector, we combine an efficient sliding window-based architecture with multiresolution and overlapping receptive fields; the subsequent model is shown in Fig 3. Since the
large context (low resolution) convolution bank requires a stride of 1/2 pixels in the lower resolution
image to produce the same dense output as the sliding window model, the bank must process four
down-sampled images, each with a 1/2 pixel offset, using shared weight convolutions. These four
outputs, along with the high resolution convolutional features, are processed through a 9x9 convolution stage (with 512 output features) using the same weights as the first fully connected stage (Fig 1)
and then the outputs of the low-resolution bank are added and interleaved with the output of the high-resolution bank.
To improve training time we simplify the above architecture by replacing the lower-resolution stage
with a single convolution bank as shown in Fig 4 and then upscale the resulting feature map. In our
practical implementation we use 3 resolution banks. Note that the simplified architecture is no longer
equivalent to the original sliding-window network of Fig 1 since the lower resolution convolution
features are effectively decimated and replicated leading into the fully-connected stage; however, we have found empirically that the performance loss is minimal.
Supervised training of the network is performed using batched Stochastic Gradient Descent (SGD)
with Nesterov Momentum. We use a Mean Squared Error (MSE) criterion to minimize the distance
between the predicted output and a target heat-map. The target is a 2D Gaussian with a small
variance and mean centered at the ground-truth joint locations. At training time we also perform
random perturbations of the input images (randomly flipping and scaling the images) to increase
generalization performance.
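A target heat-map of this form can be generated as below (our sketch; the Gaussian standard deviation is a placeholder value):

```python
import numpy as np

def target_heatmap(joint_uv, shape=(60, 90), sigma=1.5):
    """2D Gaussian target centered at the ground-truth joint location (Sec. 3.1).

    joint_uv: (u, v) joint location in heat-map coordinates; shape: (H, W) of the
    heat-map; sigma: std-dev in pixels (an assumed value, not from the paper).
    """
    H, W = shape
    u, v = joint_uv
    ys, xs = np.mgrid[0:H, 0:W]
    return np.exp(-((xs - u) ** 2 + (ys - v) ** 2) / (2.0 * sigma ** 2))
```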
[Figure 3 appears here: the full 320x240 image feeds a high-resolution bank (three Conv + ReLU + Pool stages to 98x68x128, then a 9x9 convolution), while four 160x120 images, offset by (1,1), (1,2), (2,1), and (2,2) pixels, feed a shared-weight low-resolution bank (three Conv + ReLU + Pool stages to 53x38x128 each, then 9x9 convolutions). The low-resolution outputs are replicated, added, and interleaved with the high-resolution output to form a 90x60x512 map, followed by 9x9 convolutions down to the 90x60x4 output of the fully-connected equivalent model.]
Figure 3: Efficient Sliding Window Model with Overlapping Receptive Fields
[Figure 4 appears here: the full 320x240 image feeds the high-resolution bank (three Conv + ReLU + Pool stages to 98x68x128), while a single half-resolution 160x120 image feeds the low-resolution bank (to 53x38x128, of which 45x30x128 is used). After 9x9 convolutions, the low-resolution features are point-wise upscaled and added to the high-resolution 90x60x512 features, followed by 9x9 convolutions to the 90x60x4 output of the fully-connected equivalent model.]
Figure 4: Approximation of Fig 3
3.2 Higher-Level Spatial-Model
When evaluated on our validation set, the Part-Detector (Section 3.1) predicts heat-maps that contain many false positives and poses that are anatomically incorrect: for instance, a peak for a face detection may be unusually far from the peak of the corresponding shoulder detection. Therefore, in spite of the improved Part-Detector context, the feed-forward network still has difficulty learning an implicit model of the constraints of the body parts for the full range of body poses. We use a higher-level Spatial-Model to constrain joint inter-connectivity and enforce global pose consistency. The expectation for this stage is not to increase the performance of detections that are already close to the ground-truth pose, but to remove false-positive outliers that are anatomically incorrect.
Similar to Jain et al. [15], we formulate the Spatial-Model as an MRF-like model over the distribution of spatial locations for each body part. However, the biggest drawback of their model is that the body-part priors and the graph structure are explicitly hand-crafted. In contrast, we learn the prior model and, implicitly, the structure of the spatial model. Unlike [15], we start by connecting every body part to itself and to every other body part in a pair-wise fashion in the spatial model to create a fully connected graph. The Part-Detector (Section 3.1) provides the unary potentials for each body-part location. The pair-wise potentials in the graph are computed using convolutional priors, which model the conditional distribution of the location of one body part given another. For instance, given that body part B is located at the center pixel, the convolutional prior P_{A|B}(i, j) is the likelihood of body part A occurring in pixel location (i, j). For a body part A, we calculate the
final marginal likelihood $\bar{p}_A$ as:
\[ \bar{p}_A = \frac{1}{Z} \prod_{v \in V} \big( p_{A|v} * p_v + b_{v \to A} \big), \tag{1} \]
where $*$ denotes convolution, v is the joint location, $p_{A|v}$ is the conditional prior described above, $b_{v \to A}$ is a bias term used to describe the background probability for the message from joint v to A, and Z is the partition
function. Evaluation of Eq 1 is analogous to a single round of sum-product belief propagation. Convergence to a global optimum is not guaranteed, given that our spatial model is not tree structured. However, as can be seen in our results (Fig 8b), the inferred solution is sufficiently accurate for all poses in our datasets. The learned pair-wise distributions are purely uniform whenever a pairwise edge should be removed from the graph structure. Fig 5 shows a practical example of how the Spatial-Model is able to remove an anatomically incorrect strong outlier from the face heat-map by incorporating the presence of a strong shoulder detection. For simplicity, only the shoulder and face joints are shown; however, this example can be extended to incorporate all body-part pairs. If the shoulder heat-map shown in Fig 5 had an incorrect false-negative (i.e. no detection at the correct shoulder location), the addition of the background bias $b_{v \to A}$ would prevent the output heat-map from having no maxima in the detected face region.
[Figure 5 appears here: the face and shoulder unary heat-maps are each convolved with the learned conditional priors (f|f and f|s for the face output; s|s and s|f for the shoulder output) and the resulting messages are combined; the spurious face activation is suppressed by the message from the shoulder.]
Figure 5: Didactic Example of Message Passing Between the Face and Shoulder Joints
Fig 5 contains the conditional distributions for face and shoulder parts learned on the FLIC [27]
dataset. For any part A the distribution PA|A is the identity map, and so the message passed from
any joint to itself is its unary distribution. Since the FLIC dataset is biased towards front-facing poses
where the right shoulder is directly to the lower right of the face, the model learns the correct spatial
distribution between these body parts and has high probability in the spatial locations describing
the likely displacement between the shoulder and face. For datasets that cover a larger range of the
possible poses (for instance the LSP [17] dataset), we would expect these distributions to be less
tightly constrained, and therefore this simple Spatial-Model will be less effective.
For our practical implementation we treat the distributions above as energies to avoid the evaluation of Z. There are 3 reasons why we do not include the partition function. Firstly, we are only
concerned with the maximum output value of our network, and so we only need the output energy
to be proportional to the normalized distribution. Secondly, since both the part detector and spatial model parameters contain only shared weight (convolutional) parameters that are equal across
pixel positions, evaluation of the partition function during back-propagation will only add a scalar
constant to the gradient weight, which would be equivalent to applying a per-batch learning-rate
modifier. Lastly, since the number of parts is not known a priori (since there can be unlabeled people in the image), and since the distributions pv describe the part location of a single person, we
cannot normalize the Part-Model output. Our final model is a modification to Eq 1:
\[ \bar{e}_A = \exp\left( \sum_{v \in V} \log\Big[ \mathrm{SoftPlus}\big( e_{A|v} * \mathrm{ReLU}(e_v) \big) + \mathrm{SoftPlus}(b_{v \to A}) \Big] \right) \tag{2} \]
where SoftPlus(x) = (1/\beta) log(1 + exp(\beta x)) with 1/2 <= \beta <= 2, and ReLU(x) = max(x, \epsilon) with 0 < \epsilon <= 0.01.
Note that the above formulation is no longer exactly equivalent to an MRF, but still satisfactorily
encodes the spatial constraints of Eq 1. The network-based implementation of Eq 2 is shown in
Fig 6. Eq 2 replaces the outer multiplication of Eq 1 with a log space addition to improve numerical
stability and to prevent coupling of the convolution output gradients (the addition in log space means
that the partial derivative of the loss function with respect to the convolution output is not dependent
on the output of any other stages). The inclusion of the SoftPlus and ReLU stages on the weights,
biases and input heat-map maintains a strictly greater than zero convolution output, which prevents
numerical issues for the values leading into the Log stage. Finally, a SoftPlus stage is used to maintain continuous and non-zero weight and bias gradients during training. With this modified
formulation, Eq 2 is trained using back-propagation and SGD.
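A minimal NumPy sketch of one round of Eq 2 for a single target part A is given below; the dictionary-based interface and all names are our own illustrative choices, not the paper's implementation:

```python
import numpy as np
from scipy.signal import fftconvolve

def softplus(x, beta=1.0):
    return np.log1p(np.exp(beta * x)) / beta

def spatial_model_update(e_unaries, priors, biases, eps=0.01):
    """One round of the spatial-model update of Eq. (2) for a single target part A.

    e_unaries: dict v -> (H, W) unary heat-map e_v from the part detector;
    priors:    dict v -> 2D kernel e_{A|v} (the learned convolutional prior);
    biases:    dict v -> scalar background bias b_{v->A}.
    """
    log_sum = 0.0
    for v, e_v in e_unaries.items():
        msg = fftconvolve(np.maximum(e_v, eps), priors[v], mode='same')  # e_{A|v} * ReLU(e_v)
        log_sum = log_sum + np.log(softplus(msg) + softplus(biases[v]))
    return np.exp(log_sum)
```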
[Figure 6 appears here: each input heat-map passes through a ReLU stage, is convolved with SoftPlus-transformed weights W and combined with SoftPlus-transformed biases b, the per-edge results pass through a log stage and are summed, and a final exp stage produces each output heat-map.]
Figure 6: Single Round Message Passing Network
The convolution sizes are adjusted so that the largest joint displacement is covered within the convolution window. For our 90x60 pixel heat-map output, this results in large 128x128 convolution
kernels to account for a joint displacement radius of 64 pixels (note that padding is added on the
heat-map input to prevent pixel loss). Therefore for such large kernels we use FFT convolutions
based on the GPU implementation by Mathieu et al. [19].
The convolution weights are initialized using the empirical histogram of joint displacements created
from the training examples. This initialization improves learned performance, decreases training
time and improves optimization stability. During training we randomly flip and scale the heat-map
inputs to improve generalization performance.
3.3 Unified Model
Since our Spatial-Model (Section 3.2) is trained using back-propagation, we can combine our Part-Detector and Spatial-Model stages into a single Unified Model. To do so, we first train the Part-Detector separately and store the heat-map outputs. We then use these heat-maps to train a Spatial-Model. Finally, we combine the trained Part-Detector and Spatial-Model and back-propagate through the entire network.
This unified fine-tuning further improves performance. We hypothesize that because the Spatial-Model is able to effectively reduce the output dimension of possible heat-map activations, the Part-Detector can use its available learning capacity to better localize the precise target activation.
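The staged procedure can be summarized in a short PyTorch-style sketch (our paraphrase; the paper used Torch7, and part_detector, spatial_model, and the data loaders below are hypothetical placeholders):

```python
import torch

def train(model, loader, criterion=torch.nn.MSELoss(), epochs=10, lr=1e-3):
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9, nesterov=True)
    for _ in range(epochs):
        for x, target in loader:
            opt.zero_grad()
            loss = criterion(model(x), target)
            loss.backward()
            opt.step()

# Stage 1: train the part detector alone against Gaussian target heat-maps.
train(part_detector, image_loader)
# Stage 2: train the spatial model on the stored part-detector heat-maps.
train(spatial_model, heatmap_loader)
# Stage 3: joint fine-tuning of the unified model end-to-end.
unified = torch.nn.Sequential(part_detector, spatial_model)
train(unified, image_loader)
```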
4 Results
The models from Sections 3.1 and 3.2 were implemented within the Torch7 [6] framework (with
custom GPU implementations for the non-standard stages above). Training the Part-Detector takes
approximately 48 hours, the Spatial-Model 12 hours, and forward-propagation for a single image
through both networks takes 51 ms¹.
We evaluated our architecture on the FLIC [27] and extended-LSP [17] datasets. These datasets
consist of still RGB images with 2D ground-truth joint information generated using Amazon Mechanical Turk. The FLIC dataset is comprised of 5003 images from Hollywood movies with actors
in predominantly front-facing standing up poses (with 1016 images used for testing), while the
extended-LSP dataset contains a wider variety of poses of athletes playing sport (10442 training and
1000 test images). The FLIC dataset contains many frames with more than a single person, while
the joint locations from only one person in the scene are labeled. Therefore an approximate torso
bounding box is provided for the single labeled person in the scene. We incorporate this data by
including an extra "torso-joint heat-map" to the input of the Spatial-Model so that it can learn to
select the correct feature activations in a cluttered scene.
¹ We use a 12-CPU workstation with an NVIDIA Titan GPU.
The FLIC-full dataset contains 20928 training images; however, many of these training set images contain samples from the 1016 test set scenes and so would allow unfair overtraining on the FLIC test set. Therefore, we propose a new dataset - called FLIC-plus (http://cims.nyu.edu/~tompson/flic_plus.htm) - which is a 17380 image subset of the FLIC-full dataset. To create this dataset, we produced unique scene labels for both the FLIC test set and FLIC-plus training sets using Amazon Mechanical Turk. We then removed all images from the FLIC-plus training set that shared a scene with the test set. Since 253 of the sample images from the original 3987 FLIC training set came from the same scene as a test set sample (and were therefore removed by the above procedure), we added these images back so that the FLIC-plus training set is a superset of the original FLIC training set. Using this procedure we can guarantee that the additional samples in FLIC-plus are sufficiently independent of the FLIC test set samples.
For evaluation of the test-set performance we use the measure suggested by Sapp et al. [27]. For a
given normalized pixel radius (normalized by the torso height of each sample) we count the number
of images in the test-set for which the distance of the predicted UV joint location to the ground-truth
location falls within the given radius.
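A small reference implementation of that measure (our own sketch, with assumed array shapes):

```python
import numpy as np

def detection_rate(pred_uv, gt_uv, torso_heights, radius):
    """Fraction of test images whose predicted joint lies within `radius`
    torso-normalized pixels of the ground-truth location.

    pred_uv, gt_uv: (N, 2) joint locations; torso_heights: (N,) per sample.
    """
    dist = np.linalg.norm(pred_uv - gt_uv, axis=1) / torso_heights
    return float(np.mean(dist <= radius))

# Sweeping radius over 0..20 normalized pixels reproduces curves like Fig 7.
```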
Fig 7a and 7b show our model's performance on the FLIC test-set for the elbow and wrist joints
respectively and trained using both the FLIC and FLIC-plus training sets. Performance on the LSP
dataset is shown in Fig 7c and 8a. For LSP evaluation we use person-centric (or non-observer-centric) coordinates for fair comparison with prior work [30, 8]. Our model outperforms existing state-of-the-art techniques on both of these challenging datasets by a considerable margin.
[Figure 7: Model Performance. Detection rate vs. normalized distance error (pixels) for (a) FLIC: Elbow, (b) FLIC: Wrist, and (c) LSP: Wrist and Elbow. Curves compare Ours (FLIC) and Ours (FLIC-plus) against Toshev et al., Jain et al., MODEC, Eichner et al., Yang et al. and Sapp et al. on FLIC, and against Toshev et al., Dantone et al. and Pishchulin et al. on LSP.]
Fig 8b illustrates the performance improvement from our simple Spatial-Model. As expected, the Spatial-Model has little impact on accuracy at low radius thresholds; however, for large radii it increases performance by 8 to 12%. Unified training of both models (after independent pre-training) adds a further 4-5% detection rate at large radius thresholds.
[Figure 8: three panels of detection rate vs. normalized distance error (pixels); legends: (a) Ours ankle/knee vs. Toshev et al., Dantone et al. and Pishchulin et al.; (b) Part-Model, Part and Spatial-Model, Joint Training; (c) 1, 2 and 3 resolution banks.]
Figure 8: (a) Model Performance (b) With and Without Spatial-Model (c) Part-Detector Performance Vs Number of Resolution Banks (FLIC subset)
The impact of the number of resolution banks is shown in Fig 8c. As expected, we see a big
improvement when multiple resolution banks are added. Also note that the size of the receptive
fields as well as the number and size of the pooling stages in the network also have a large impact on
the performance. We tune the network hyper-parameters using coarse meta-optimization to obtain
maximal validation set performance within our computational budget (less than 100ms per forwardpropagation).
Fig 9 shows the predicted joint locations for a variety of inputs in the FLIC and LSP test-sets. Our
network produces convincing results on the FLIC dataset (with low joint position error), however,
because our simple Spatial-Model is less effective for a number of the highly articulated poses in
the LSP dataset, our detector results in incorrect joint predictions for some images. We believe that
increasing the size of the training set will improve performance for these difficult cases.
Figure 9: Predicted Joint Positions, Top Row: FLIC Test-Set, Bottom Row: LSP Test-Set
5 Conclusion
We have shown that the unification of a novel ConvNet Part-Detector and an MRF inspired SpatialModel into a single learning framework significantly outperforms existing architectures on the task
of human body pose recognition. Training and inference of our architecture uses commodity level
hardware and runs at close to real-time frame rates, making this technique tractable for a wide variety
of application areas.
For future work we expect to further improve upon these results by increasing the complexity and
expressiveness of our simple spatial model (especially for unconstrained datasets like LSP).
6 Acknowledgments
The authors would like to thank Mykhaylo Andriluka for his support. This research was funded in
part by the Office of Naval Research ONR Award N000141210327.
References
[1] M. Andriluka, S. Roth, and B. Schiele. Pictorial structures revisited: People detection and articulated
pose estimation. In CVPR, 2009.
[2] M. Bergtholdt, J. Kappes, S. Schmidt, and C. Schn?orr. A study of parts-based object class detection using
complete graphs. IJCV, 2010.
[3] L. Bourdev and J. Malik. Poselets: Body part detectors trained using 3d human pose annotations. In
ICCV, 2009.
[4] H. Bourlard, Y. Konig, and N. Morgan. Remap: recursive estimation and maximization of a posteriori
probabilities in connectionist speech recognition. In EUROSPEECH, 1995.
[5] P. Buehler, A. Zisserman, and M. Everingham. Learning sign language by watching TV (using weakly
aligned subtitles). CVPR, 2009.
[6] R. Collobert, K. Kavukcuoglu, and C. Farabet. Torch7: A matlab-like environment for machine learning.
In BigLearn, NIPS Workshop, 2011.
[7] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[8] M. Dantone, J. Gall, C. Leistner, and L. Van Gool. Human pose estimation using body parts dependent
joint regressors. In CVPR, 2013.
[9] M. Eichner and V. Ferrari. Better appearance models for pictorial structures. In BMVC, 2009.
[10] P. Felzenszwalb, D. McAllester, and D. Ramanan. A discriminatively trained, multiscale, deformable part
model. In CVPR, 2008.
[11] A. Giusti, D. Ciresan, J. Masci, L. Gambardella, and J. Schmidhuber. Fast image scanning with deep
max-pooling convolutional neural networks. In CoRR, 2013.
[12] G. Gkioxari, P. Arbelaez, L. Bourdev, and J. Malik. Articulated pose estimation using discriminative
armlet classifiers. In CVPR, 2013.
[13] K. Grauman, G. Shakhnarovich, and T. Darrell. Inferring 3d structure with a statistical image-based shape
model. In ICCV, 2003.
[14] G. Heitz, S. Gould, A. Saxena, and D. Koller. Cascaded classification models: Combining models for
holistic scene understanding. 2008.
[15] A. Jain, J. Tompson, M. Andriluka, G. Taylor, and C. Bregler. Learning human pose estimation features
with convolutional networks. In ICLR, 2014.
[16] S. Johnson and M. Everingham. Learning Effective Human Pose Estimation from Inaccurate Annotation.
In CVPR, 2011.
[17] S. Johnson and M. Everingham. Clustered pose and nonlinear appearance models for human pose estimation. In BMVC, 2010.
[18] D. Lowe. Object recognition from local scale-invariant features. In ICCV, 1999.
[19] M. Mathieu, M. Henaff, and Y. LeCun. Fast training of convolutional networks through ffts. In CoRR,
2013.
[20] G. Mori and J. Malik. Estimating human body configurations using shape context matching. ECCV, 2002.
[21] F. Morin and Y. Bengio. Hierarchical probabilistic neural network language model. In Proceedings of the
Tenth International Workshop on Artificial Intelligence and Statistics, 2005.
[22] F. Ning, D. Delhomme, Y. LeCun, F. Piano, L. Bottou, and P. Barbano. Toward automatic phenotyping of
developing embryos from videos. IEEE TIP, 2005.
[23] L. Pishchulin, M. Andriluka, P. Gehler, and B. Schiele. Poselet conditioned pictorial structures. In
CVPR, 2013.
[24] L. Pishchulin, M. Andriluka, P. Gehler, and B. Schiele. Strong appearance and expressive spatial models
for human pose estimation. In ICCV, 2013.
[25] D. Ramanan, D. Forsyth, and A. Zisserman. Strike a pose: Tracking people by finding stylized poses. In
CVPR, 2005.
[26] S. Ross, D. Munoz, M. Hebert, and J.A Bagnell. Learning message-passing inference machines for
structured prediction. In CVPR, 2011.
[27] B. Sapp and B. Taskar. Modec: Multimodal decomposable models for human pose estimation. In CVPR,
2013.
[28] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition,
localization and detection using convolutional networks. In ICLR, 2014.
[29] J. Tompson, M. Stein, Y. LeCun, and K. Perlin. Real-time continuous pose recovery of human hands
using convolutional networks. In TOG, 2014.
[30] A. Toshev and C. Szegedy. Deeppose: Human pose estimation via deep neural networks. In CVPR, 2014.
[31] Yi Yang and Deva Ramanan. Articulated pose estimation with flexible mixtures-of-parts. In CVPR, 2011.
Divide-and-Conquer Learning by Anchoring a Conical Hull
Tianyi Zhou†, Jeff Bilmes‡, Carlos Guestrin†
†Computer Science & Engineering, ‡Electrical Engineering, University of Washington, Seattle
{tianyizh, bilmes, guestrin}@u.washington.edu
Abstract
We reduce a broad class of fundamental machine learning problems, usually
addressed by EM or sampling, to the problem of finding the k extreme rays
spanning the conical hull of a data point set. These k "anchors" lead to a global solution and a more interpretable model that can even outperform EM and sampling on generalization error. To find the k anchors, we propose a novel divide-and-conquer learning scheme "DCA" that distributes the problem to O(k log k) same-type sub-problems on different low-D random hyperplanes, each of which can be solved
independently by any existing solver. For the 2D sub-problem, we instead present
a non-iterative solver that only needs to compute an array of cosine values and its
max/min entries. DCA also provides a faster subroutine inside other algorithms
to check whether a point is covered in a conical hull, and thus improves these
algorithms by providing significant speedups. We apply our method to GMM,
HMM, LDA, NMF and subspace clustering, then show its competitive performance
and scalability over other methods on large datasets.
1 Introduction
Expectation-maximization (EM) [10], sampling methods [13], and matrix factorization [20, 25] are
three algorithms commonly used to produce maximum likelihood (or maximum a posteriori (MAP))
estimates of models with latent variables/factors, and thus are used in a wide range of applications
such as clustering, topic modeling, collaborative filtering, structured prediction, feature engineering,
and time series analysis. However, their learning procedures rely on alternating optimization/updates
between parameters and latent variables, a process that suffers from local optima. Hence, their quality
greatly depends on initialization and on using a large number of iterations for proper convergence [24].
The method of moments [22, 6, 17], by contrast, solves m equations by relating the first m moments
of an observation x ∈ R^p to the m model parameters, and thus yields a consistent estimator with a
global solution. In practice, however, sample moments usually suffer from unbearably large variance,
which easily leads to the failure of final estimation, especially when m or p is large. Although recent
spectral methods [8, 18, 15, 1] reduce m to 2 or 3 when estimating O(p) model parameters [2] by
relating the eigenspace of lower-order moments to parameters in a matrix form up to column scale,
the variance of sample moments is still sensitive to large p or data noise, which may result in poor
estimation. Moreover, although spectral methods using SVDs or tensor decomposition evidently
simplify learning, the computation can still be expensive for big data. In addition, recovering a
parameter matrix with uncertain column scale might not be feasible for some applications.
In this paper, we reduce the learning in a rich class of models (e.g., matrix factorization and latent
variable model) to finding the extreme rays of a conical hull from a finite set of real data points.
This is obtained by applying a general separability assumption to either the data matrix in matrix
factorization or the 2nd /3rd order moments in latent variable models. Separability posits that a ground
set of n points, as rows of matrix X, can be represented by X = F XA , where the rows (bases) in
XA are a subset A ? V = [n] of rows in X, which are called ?anchors? and are interesting to various
1
models when |A| = k n. This property was introduced in [11] to establish the uniqueness of
non-negative matrix factorization (NMF) under simplex constraints, and was later [19, 14] extended to
non-negative constraints. We generalize it further to the model X = F YA for two (possibly distinct)
finite sets of points X and Y , and build a new theory for the identifiability of A. This generalization
enables us to apply it to more general models (ref. Table 1) besides NMF. More interestingly, it leads
to a learning method with much higher tolerance to the variance of sample moments or data noise, a
unique global solution, and a more interpretable model.
Another primary contribution of this paper is a distributed learning scheme "divide-and-conquer anchoring" (DCA), for finding an anchor set A such that X = F Y_A by solving same-type sub-problems on only O(k log k) randomly drawn low-dimensional (low-D) hyperplanes. Each sub-problem is of the form (XΦ) = F'(YΦ)_A with random projection matrix Φ, and can easily be handled by most solvers due to the low dimension. This is based on the observation that the geometry of the original conical hull is partially preserved after a random projection. We analyze the probability of success for each sub-problem to recover part of A, and then study the number of sub-problems for recovering the whole A with high probability (w.h.p.). In particular, we propose a very fast non-iterative solver for sub-problems on the 2D plane, which requires computing an array of cosines and its max/min values, and thus results in learning algorithms with speedups of tens to hundreds of times. DCA improves multiple aspects of algorithm design since: 1) its idea of divide-and-conquer randomization gives rise to distributed learning that can reduce the original problem to multiple extremely low-D sub-problems that are much easier and faster to solve, and 2) it provides a fast subroutine, checking if a point is covered in a conical hull, which can be embedded into other solvers.
[Figure 1: Geometry of the general minimum conical hull problem and the basic idea of divide-and-conquer anchoring (DCA); the point sets X and Y are shown inside cone(Y_A), together with their projections XΦ and YΦ onto a random hyperplane H.]
We apply both the conical hull anchoring model and DCA to five learning models: Gaussian mixture
models (GMM) [27], hidden Markov models (HMM) [5], latent Dirichlet allocation (LDA) [7],
NMF [20], and subspace clustering (SC) [12]. The resulting models and algorithms show significant
improvement in efficiency. On generalization performance, they consistently outperform spectral
methods and matrix factorization, and are comparable to or even better than EM and sampling.
In the following, we will first generalize the separability assumption and the minimum conical hull problem arising from NMF in §2, and then show how to reduce more general learning models to a (general) minimum conical hull problem in §3. §4 presents a divide-and-conquer learning scheme that can quickly locate the anchors of the conical hull by solving the same problem in multiple extremely low-D spaces. Comprehensive experiments and comparisons can be found in §5.
2 General Separability Assumption and Minimum Conical Hull Problem
The original separability property [11] is defined on the convex hull of a set of data points, namely
that each point can be represented as a convex combination of certain subsets of vertices that define
the convex hull. Later works on separable NMF [19, 14] extend it to the conical hull case, which
replaced convex with conical combinations. Given the definition of (convex) cone and conical hull,
the separability assumption can be defined both geometrically and algebraically.
Definition 1 (Cone & conical hull). A (convex) cone is a non-empty convex set that is closed with
respect to conical combinations of its elements. In particular, cone(R) can be defined by its k
generators (or rays) R = {r_i}_{i=1}^{k} such that
cone(R) = { Σ_{i=1}^{k} α_i r_i | r_i ∈ R, α_i ∈ R_+ ∀i }.    (1)
See [29] for the original separability assumption, the equivalence between separable NMF and the
minimum conical hull problem, which is defined as a submodular set cover problem.
2.1 General Separability Assumption and General Minimum Conical Hull Problem
By generalizing the separability assumption, we obtain a general minimum conical hull problem
that can reduce more general learning models besides NMF, e.g., latent variable models and matrix
factorization, to finding a set of "anchors" on the extreme rays of a conical hull.
Definition 2 (General separability assumption). All the n data points (rows) in X are covered in a finitely generated and pointed cone (i.e., if x ∈ cone(Y_A) then −x ∉ cone(Y_A)) whose generators form a subset A ⊆ [m] of data points in Y such that ∄ i ≠ j, Y_{A_i} = a · Y_{A_j}. Geometrically, it says
∀i ∈ [n], X_i ∈ cone(Y_A), Y_A = {y_i}_{i∈A}.    (2)
An equivalent algebraic form is X = F Y_A, where |A| = k and F ≥ 0 ∈ S ⊆ R_+^{(n−k)×k}.
When X = Y and S = R_+^{(n−k)×k}, it degenerates to the original separability assumption given in [29]. We generalize the minimum conical hull problem from [29]. Under the general separability assumption, it aims to find the anchor set A from the points in Y rather than X.
Definition 3 (General Minimum Conical Hull Problem). Given a finite set of points X and a set
Y having an index set V = [m] of its rows, the general minimum conical hull problem finds the
subset of rows in Y that define a super-cone for all the rows in X. That is, find A ∈ 2^V that solves
min_{A⊆V} |A|  s.t.  cone(Y_A) ⊇ cone(X),    (3)
where cone(Y_A) is the cone induced by the rows A of Y.
When X = Y , this also degenerates to the original minimum conical hull problem defined in [29].
A critical question is whether/when the solution A is unique. When X = Y and X = F XA , by
following the analysis of the separability assumption in [29], we can prove that A is unique and identifiable given X. However, when X ≠ Y and X = F Y_A, it is clear that there could be multiple
legal choices of A (e.g., there could be multiple layers of conical hulls containing a conical hull
covering all points in X). Fortunately, when the rows of Y are rank-one matrices after vectorization
(concatenating all columns to a long vector), which is the common case in most latent variable models
in §3.2, A can be uniquely determined if the number of rows in X exceeds 2.
Lemma 1 (Identifiability). If X = F Y_A with the additional structure Y_s = vec(O_i^s ⊗ O_j^s), where O_i is a p_i × k matrix and O_i^s is its s-th column, then under the general separability assumption in Definition 2, two (non-identical) rows in X are sufficient to exactly recover the unique A, O_i and O_j.
See [29] for proof and additional uniqueness conditions when applied to latent variable models.
3 Minimum Conical Hull Problem for General Learning Models
Table 1: Summary of reducing NMF, SC, GMM, HMM and LDA to a conical hull anchoring model X = F Y_A in §3, and their learning algorithms achieved by A = DCA(X, Y, k, M) in Algorithm 1. Minimal conical hull A = MCH(X, Y) is defined in Definition 4. vec(·) denotes the vectorization of a matrix. For GMM and HMM, X_i ∈ R^{n×p_i} is the data matrix for view i (i.e., a subset of features) and the i-th observation of all triples of sequential observations, respectively. X_{t,i} is the t-th row of X_i and associates with point/triple t. φ_t is a vector uniformly drawn from the unit sphere. More details are given in [29].

Model | X in conical hull problem | Y in conical hull problem | k in conical hull problem
NMF | data matrix X ∈ R_+^{n×p} | Y := X | # of factors
SC | data matrix X ∈ R^{n×p} | Y := X | # of basis from all clusters
GMM | [vec[X_1^T X_2]; vec[X_1^T Diag(X_3 φ_t) X_2]_{t∈[q]}]/n | [vec(X_{t,1} ⊗ X_{t,2})]_{t∈[n]} | # of components/clusters
HMM | [vec[X_2^T X_3]; vec[X_2^T Diag(X_1 φ_t) X_3]_{t∈[q]}]/n | [vec(X_{t,2} ⊗ X_{t,3})]_{t∈[n]} | # of hidden states
LDA | word-word co-occurrence matrix X ∈ R_+^{p×p} | Y := X | # of topics

Algo | Each sub-problem in DCA | Post-processing after A := ∪_i Â^i | Interpretation of anchors indexed by A
NMF | Â = MCH(XΦ, XΦ), can be solved by (10) | solving F in X = F X_A | basis X_A are real data points
SC | Â = anchors of clusters achieved by meanshift((XΦ)θ) | clustering anchors X_A | cluster i is a cone cone(X_{A_i})
GMM | Â = MCH(XΦ, YΦ), can be solved by (10) | N/A | centers [X_{A,i}]_{i∈[3]} from real data
HMM | Â = MCH(XΦ, YΦ), can be solved by (10) | solving T in OT = X_{A,3} | emission matrix O = X_{A,2}
LDA | Â = MCH(XΦ, XΦ), can be solved by (10) | col-normalize {F : X = F X_A} | anchor word for topic i (topic prob. F_i)
In this section, we discuss how to reduce the learning of general models such as matrix factorization
and latent variable models to the (general) minimum conical hull problem. Five examples are given
in Table 1 to show how this general technique can be applied to specific models.
3.1 Matrix Factorization
Besides NMF, we consider more general matrix factorization (MF) models that can operate on
negative features and specify a complicated structure of F . The MF X = F W is a deterministic
latent variable model where F and W are deterministic latent factors. By assigning a likelihood
p(Xi,j |Fi , (W T )j ) and priors p(F ) and p(W ), its optimization model can be derived from maximum
likelihood or MAP estimate. The resulting objective is usually a loss function ℓ(·) of X − F W plus regularization terms for F and W, i.e., min ℓ(X, F W) + R_F(F) + R_W(W).
Similar to separable NMF, minimizing the objective of general MF can be reduced to a minimum conical hull problem that selects the subset A with X = F X_A. In this setting, R_W(W) = Σ_{i=1}^{k} g(W_i), where g(w) = 0 if w = X_i for some i and g(w) = ∞ otherwise. This is equivalent to applying a prior p(W_i) with a finite support set on the rows of X to each row of W. In addition, the regularization of F can be transformed to geometric constraints between points in X and in X_A. Since F_{i,j} is the conical combination weight of X_{A_j} in recovering X_i, a large F_{i,j} intuitively indicates a small angle between X_{A_j} and X_i, and vice versa. For example, the sparse and graph Laplacian priors for rows of F in subspace clustering can be reduced to "cone clustering" for finding A. See [29] for an example of reducing subspace clustering to the general minimum conical hull problem.
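For concreteness, the post-processing step "solving F in X = F X_A" from Table 1 can be done with one small non-negative least-squares problem per row; the following is our own minimal sketch, not the authors' code:

```python
import numpy as np
from scipy.optimize import nnls

def recover_weights(X, anchors):
    """Recover F >= 0 in X ~= F X_A once the anchor rows A are known."""
    XA = X[anchors]                          # (k, p) anchor basis
    F = np.zeros((X.shape[0], len(anchors)))
    for i, x in enumerate(X):
        F[i], _ = nnls(XA.T, x)              # min ||X_A^T f - x||_2 s.t. f >= 0
    return F
```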
3.2 Latent Variable Model
Different from deterministic MF, we build a system of equations from the moments of probabilistic
latent variable models, and then formulate it as a general minimum conical hull problem, rather
than directly solve it. Let the generalization model be h ? p(h; ?) and x ? p(x|h; ?), where h is a
latent variable, x stands for observation, and {?, ?} are parameters. In a variety of graphical models
such as GMMs and HMMs, we need to model conditional independence between groups of features.
This is also known as the multi-view assumption. W.l.o.g., we assume that x is composed of three groups (views) of features {x_i}_{i∈[3]} such that ∀i ≠ j, x_i ⊥ x_j | h. We further assume the dimension k of h is smaller than p_i, the dimension of x_i. Since the goal is learning {α, θ}, decomposing the moments of x rather than the data matrix X lets us get rid of the latent variable h and thus avoid alternating minimization between {α, θ} and h. When E(x_i|h) = h^T O_i^T (linearity assumption),
the second and third order moments can be written in the form of matrix operator.
E(x_i ⊗ x_j) = E[E(x_i|h) ⊗ E(x_j|h)] = O_i E(h ⊗ h) O_j^T,
E(x_i ⊗ x_j ⊗ ⟨φ, x_l⟩) = O_i [E(h ⊗ h ⊗ h) ×_3 (O_l φ)] O_j^T,    (4)
where A ×_n U denotes the n-mode product of a tensor A by a matrix U, ⊗ is the outer product, and the operator parameter φ can be any vector. We will mainly focus on the models in which {α, θ} can be exactly recovered from the conditional mean vectors {O_i}_{i∈[3]} and E(h ⊗ h)¹, because they cover most popular models such as GMMs and HMMs in real applications.
The left hand sides (LHS) of both equations in (4) can be directly estimated from training data, while their right hand sides (RHS) can be written in a unified matrix form O_i D O_j^T with O_i ∈ R^{p_i×k} and D ∈ R^{k×k}. By using different φ, we can obtain 2 ≤ q ≤ p_l + 1 independent equations, which compose a system of equations for O_i and O_j. Given the LHS, we can obtain the column spaces of O_i and O_j, which respectively equal the column and row space of O_i D O_j^T, a low-rank matrix when p_i > k. In order to further determine O_i and O_j, our discussion falls into two cases depending on the type of D.
When D is a diagonal matrix. This happens when ∀i ≠ j, E(h_i h_j) = 0. A common example is that h is a label/state indicator such that h = e_i for class/state i, e.g., h in GMM and HMM. In this case, the two D matrices in the RHS of (4) are
E(h ⊗ h) = Diag(E(h_i^2)),    E(h ⊗ h ⊗ h) ×_3 (O_l φ) = Diag(E(h_i^3) · O_l φ),    (5)
where E(h_i^t) = [E(h_1^t), . . . , E(h_k^t)]. So either matrix in the LHS of (4) can be written as a sum of k rank-one matrices, i.e., Σ_{s=1}^{k} λ^{(s)} O_i^s ⊗ O_j^s, where O_i^s is the s-th column of O_i.
The general separability assumption posits that the set of k rank-one basis matrices constructing the
RHS of (4) is a unique subset A ⊆ [n] of the n samples of x_i ⊗ x_j constructing the left hand sides, i.e., O_i^s ⊗ O_j^s = [x_i ⊗ x_j]_{A_s} = X_{A_s,i} ⊗ X_{A_s,j}, the outer product of x_i and x_j in the (A_s)-th data point.
¹ Note our method can also handle more complex models that violate the linearity assumption and need higher order moments for parameter estimation. By replacing x_i in (4) with vec(x_i^{⊗n}), the vectorization of the n-th tensor power of x_i, O_i can contain n-th order moments for p(x_i|h; θ). However, since higher order moments are either not necessary or difficult to estimate due to high sample complexity, we will not study them in this paper.
Therefore, by applying q − 1 different φ to (4), we obtain a system of q equations of the following form, where Y^(t) is the estimate of the LHS of the t-th equation from training data:
∀t ∈ [q], Y^(t) = Σ_{s=1}^{k} Λ_{t,s} [x_i ⊗ x_j]_{A_s}  ⇔  [vec(Y^(t))]_{t∈[q]} = Λ [vec(X_{t,i} ⊗ X_{t,j})]_{t∈A}.    (6)
The right equation in (6) is an equivalent matrix representation of the left one. Its LHS is a q × p_i p_j matrix, and its RHS is the product of a q × k matrix Λ and a k × p_i p_j matrix. By letting X ← [vec(Y^(t))]_{t∈[q]}, F ← Λ and Y ← [vec(X_{t,i} ⊗ X_{t,j})]_{t∈[n]}, we can fit (6) to X = F Y_A in Definition 2. Therefore, learning {O_i}_{i∈[3]} is reduced to selecting k rank-one matrices from {X_{t,i} ⊗ X_{t,j}}_{t∈[n]} indexed by A whose conical hull covers the q matrices {Y^(t)}_{t∈[q]}. Given the anchor set A, we have Ô_i = X_{A,i} and Ô_j = X_{A,j} by assigning real data points indexed by A to the columns of O_i and O_j. Given O_i and O_j, Λ can be estimated by solving (6). In many models, a few rows of Λ are sufficient to recover α. See [29] for a practical acceleration trick based on matrix completion.
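A rough NumPy sketch of how the left-hand sides of (6) can be estimated from three views follows (our own illustration; the shapes and names are assumptions, not the authors' code):

```python
import numpy as np

def moment_lhs(X1, X2, X3, q, rng):
    """Estimate the q moment matrices Y^(t) on the LHS of Eq (6).

    X1, X2, X3: (n, p1), (n, p2), (n, p3) conditionally independent views.
    Row 0 is the plain cross-moment; rows 1..q-1 are phi-weighted slices.
    """
    n = X1.shape[0]
    Ys = [X1.T @ X2 / n]                           # estimates E(x1 (x) x2)
    for _ in range(q - 1):
        phi = rng.normal(size=X3.shape[1])
        phi /= np.linalg.norm(phi)                 # phi ~ uniform on the sphere
        w = X3 @ phi                               # <phi, x3> per sample
        Ys.append((X1 * w[:, None]).T @ X2 / n)    # E(x1 (x) x2 <phi, x3>)
    return np.stack([Y.ravel() for Y in Ys])       # rows are vec(Y^(t))
```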
When D is a symmetric matrix with nonzero off-diagonal entries. This happens in "admixture" models, e.g., h can be a general binary vector h ∈ {0, 1}^k or a vector on the probability simplex, and the conditional mean E(x_i|h) is a mixture of columns in O_i. The most well known example is LDA, in which each document is generated by multiple topics.
We apply the general separability assumption by only using the first equation in (4), and treating the matrix in its LHS as X in X = F X_A. When the data are extremely sparse, which is common in text data, selecting the rows of the denser second order moment as bases is a more reasonable and effective assumption than selecting sparse data points. In this case, the p rows of F contain k unit vectors {e_i}_{i∈[k]}. This leads to the natural "anchor word" assumption for LDA [3].
See [29] for the example of reducing multi-view mixture model, HMM, and LDA to general minimum
conical hull problem. It is also worth noting that we can show our method, when applied to LDA,
yields equal results but is faster than a Bayesian inference method [3], see Theorem 4 in [29].
4 Algorithms for Minimum Conical Hull Problem
4.1 Divide-and-Conquer Anchoring (DCA) for General Minimum Conical Hull Problems
The key insights of DCA come from two observations on the geometry of the convex cone. First,
projecting a conical hull to a lower-D hyperplane partially preserves its geometry. This enables us
to distribute the original problem to a few much smaller sub-problems, each handled by a solver
to the minimum conical hull problem. Secondly, there exists a very fast anchoring algorithm for a
sub-problem on 2D plane, which only picks two anchor points based on their angles to an axis without
iterative optimization or greedy pursuit. This results in a significantly efficient DCA algorithm that
can be solely used, or embedded as a subroutine, checking if a point is covered in a conical hull.
4.2 Distributing the Conical Hull Problem to Sub-problems in Low Dimensions
Due to the convexity of cones, a low-D projection of a conical hull is still a conical hull that covers the
projections of the same points covered in the original conical hull, and generated by the projections
of a subset of anchors on the extreme rays of the original conical hull.
Lemma 2. For an arbitrary point x ∈ cone(Y_A) ⊆ R^p, where A is the index set of the k anchors (generators) selected from Y, for any Φ ∈ R^{p×d} with d ≤ p, we have
∃ Ã ⊆ A : xΦ ∈ cone(Y_Ã Φ).    (7)
Since only a subset of A remains as anchors after projection, solving a minimum conical hull problem
on a single low-D hyperplane rarely returns all the anchors in A. However, the whole set A can be
recovered from the anchors detected on multiple low-D hyperplanes. By sampling the projection
matrix Φ from a random ensemble M, it can be proved that w.h.p. solving only s = O(ck log k) sub-problems is sufficient to find all anchors in A. Note c/k is the lower bound of the angle β − 2α in Theorem 1, so large c indicates a less flat conical hull. See [29] for our method's robustness to the failure in identifying "flat" anchors.
For the special case of NMF when X = F XA , the above result is proven in [28]. However, the
analysis cannot be trivially extended to the general conical hull problem when X = F YA (see Figure
1). A critical reason is that the converse of Lemma 2 does not hold: the uniqueness of the anchor set A?
Algorithm 1 DCA(X, Y, k, M)
Input: Two sets of points (rows) X ∈ R^{n×p} and Y ∈ R^{m×p} in matrix forms (ref. Table 1 to see X and Y for different models), number of latent factors/variables k, random matrix ensemble M;
Output: Anchor set A ⊆ [m] such that ∀i ∈ [n], X_i ∈ cone(Y_A);
Divide Step (in parallel):
for t = 1 → s := O(k log k) do
    Randomly draw a matrix Φ ∈ R^{p×d} from M;
    Solve the sub-problem Â_t = MCH(XΦ, YΦ) by any solver, e.g., (10);
end for
Conquer Step:
∀i ∈ [m], compute ĝ(Y_i) = (1/s) Σ_{t=1}^{s} 1_{Â_t}(Y_i);
Return A as the index set of the k points with the largest ĝ(Y_i).
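Below is a compact end-to-end sketch of Algorithm 1 with the 2D solver of Eq (10) plugged in (our own NumPy rendering; the Gaussian ensemble and the handling of degenerate planes are assumed simplifications):

```python
import numpy as np

def mch_2d(X2, Y2):
    """Eq (10) on a 2D plane: the two rows of Y2 whose angles most tightly
    bracket all rows of X2 (assumes the projected cone does not wrap +/-pi)."""
    ax = np.arctan2(X2[:, 1], X2[:, 0])
    ay = np.arctan2(Y2[:, 1], Y2[:, 0])
    above = np.where(ay >= ax.max(), ay - ax.max(), np.inf)  # the (.)_+ of Eq (10)
    below = np.where(ay <= ax.min(), ax.min() - ay, np.inf)
    return {int(np.argmin(above)), int(np.argmin(below))}

def dca(X, Y, k, n_sub, rng):
    """Algorithm 1: vote over n_sub random 2D projections, return top-k."""
    votes = np.zeros(Y.shape[0])
    for _ in range(n_sub):                       # divide step (parallelizable)
        Phi = rng.normal(size=(X.shape[1], 2))   # random 2D hyperplane
        for a in mch_2d(X @ Phi, Y @ Phi):
            votes[a] += 1.0                      # conquer step: count hits
    return np.argsort(-votes)[:k]                # anchor index set A

# usage: A = dca(X, Y, k=5, n_sub=200, rng=np.random.default_rng(0))
```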
on low-D hyperplane could be violated, because non-anchors in Y may have non-zero probability to
be projected as low-D anchors. Fortunately, we can achieve a unique Â by defining a "minimal conical hull" on a low-D hyperplane. Then Proposition 1 reveals when, w.h.p., such an Â is a subset of A.
Definition 4 (Minimal conical hull). Given two sets of points (rows) X and Y, the conical hull spanned by anchors (generators) Y_A is the minimal conical hull covering all points in X iff for all
{i, j, s | i ∈ A^C = [m] \ A, j ∈ A, s ∈ [n], X_s ∈ cone(Y_A) ∩ cone(Y_{i∪(A\j)})}    (8)
we have ∠(X_s, Y_i) > ∠(X_s, Y_j), where ∠(x, y) denotes the angle between two vectors x and y. The solution of the minimal conical hull problem is denoted by A = MCH(X, Y).
It is easy to verify that the minimal conical hull is unique, and the general minimum conical hull problem X = F Y_A under the general separability assumption (which leads to the identifiability of A) is a special case of A = MCH(X, Y). In DCA, on each low-D hyperplane H_i, the associated sub-problem aims to find the anchor set Â_i = MCH(XΦ_i, YΦ_i). The following proposition gives the probability of Â_i ⊆ A in a sub-problem solution.
Proposition 1 (Probability of success in sub-problem). As defined in Figure 2, A_i ∈ A signifies an anchor point in Y_A, C_i ∈ X signifies a point in X ∈ R^{n×p}, B_i ∈ A^C signifies a non-anchor point in Y ∈ R^{m×p}, the green ellipse marks the intersection hyperplane between cone(Y_A) and the unit sphere S^{p−1}, and the superscript ′ denotes the projection of a point onto the intersection hyperplane. Define d-dim (d ≤ p) hyperplanes {H_i}_{i∈[4]} such that A′_3 A′_2 ∈ H_1, A′_1 A′_2 ∈ H_2, B′_1 A′_2 ∈ H_3, B′_1 C′_1 ∈ H_4; let β = ∠(H_1, H_2) be the angle between hyperplanes H_1 and H_2, and α = ∠(H_3, H_4) the angle between H_3 and H_4. If H with associated projection matrix Φ ∈ R^{p×d} is a d-dim hyperplane uniformly drawn from the Grassmannian manifold Gr(d, p), and Â = MCH(XΦ, YΦ) is the solution to the minimal conical hull problem, we have
Pr(B_1 ∈ Â) = α/(2π),    Pr(A_2 ∈ Â) = (β − 2α)/(2π).    (9)
[Figure 2: the geometric construction used in Proposition 1.]
See [29] for proof, discussion and analysis of robustness to unimportant ?flat? anchors and data noise.
Theorem 1 (Probability bound). Following the same notation as Proposition 1, suppose p̄ = min_{A_1,A_2,A_3,B_1,C_1} (β − 2α) ≥ c/k > 0. Then it holds with probability at least 1 − k·exp(−cs/(3k)) that DCA successfully identifies all the k anchors in A, where s is the number of sub-problems solved.
See [29] for proof. Given Theorem 1, we can immediately achieve the following corollary about the
number of sub-problems that guarantee success of DCA in finding A.
Corollary 1 (Number of sub-problems). With probability 1 − δ, DCA can correctly recover the anchor set A by solving Θ((3k/c) log(k/δ)) sub-problems.
See [29] for the idea of divide-and-conquer randomization in DCA, and its advantage over Johnson-Lindenstrauss (JL) Lemma based methods.
4.3 Anchoring on the 2D Plane
Although DCA can invoke any solver for the sub-problem on any low-D hyperplane, a very fast
solver for the 2D sub-problem always shows high accuracy in locating anchors when embedded into
DCA. Its motivation comes from the geometry of conical hull on a 2D plane, which is a special
case of a d-dim hyperplane H in the sub-problem of DCA. It leads to a non-iterative algorithm for
A = MCH(X, Y ) on the 2D plane. It only requires computing n + m cosine values, finding the
min/max of the n values, and comparing the remaining m ones with the min/max value.
According to Figure 1, the two anchors Y_Â Φ on a 2D plane have the min/max (among points in YΦ) angle (to either axis) that is larger/smaller than all angles of points in XΦ, respectively. This leads to the following closed form of Â:
Â = { argmin_{i∈[m]} ( ∠(Y_iΦ, θ) − max_{j∈[n]} ∠(X_jΦ, θ) )_+ ,  argmin_{i∈[m]} ( min_{j∈[n]} ∠(X_jΦ, θ) − ∠(Y_iΦ, θ) )_+ },    (10)
where (x)_+ = x if x ≥ 0 and ∞ otherwise, and θ can be either the vertical or horizontal axis on the 2D plane. By plugging (10) into DCA as the solver for the s sub-problems on random 2D planes, we obtain an extremely fast learning algorithm.
Note that for the special case when X = Y, (10) degenerates to finding the two points in XΦ with the smallest and largest angles to an axis θ, i.e., Â = {argmin_{i∈[n]} ∠(X_iΦ, θ), argmax_{i∈[n]} ∠(X_iΦ, θ)}.
This is used in matrix factorization and the latent variable model with nonzero off-diagonal D.
See [29] for embedding DCA as a fast subroutine into other methods, and detailed off-the-shelf DCA
algorithms of NMF, SC, GMM, HMM and LDA. A brief summary is in Table 1.
5 Experiments
See [29] for a complete experimental section with results of DCA for NMF, SC, GMM, HMM, and
LDA, and comparison to other methods on more synthetic and real datasets.
[Figure 3 plots: anchor index recovery rate, negative relative ℓ2 anchor recovery error, and CPU seconds against noise level, for SPA, XRAY, DCA (s = 50, 92, 133, 175), SFO and LP-test.]
Figure 3: Separable NMF on a randomly generated 300 × 500 matrix; each point on each curve is the average of 10 independent random trials. SFO is a greedy algorithm for the submodular set cover problem; LP-test is the backward removal algorithm from [4]. LEFT: accuracy of anchor detection (higher is better). MIDDLE: negative relative ℓ2 recovery error of anchors (higher is better). RIGHT: CPU seconds.
[Figure 4 plots: clustering accuracy and CPU seconds vs. number of clusters/mixture components on CMU-PIE and YALE, comparing DCA GMM (s = 171, 341, 682, 853, 1023), k-means, EM for GMM and Spectral GMM.]
Figure 4: Clustering accuracy (higher is better) and CPU seconds vs. Number of clusters for Gaussian mixture model on CMU-PIE (left) and
YALE (right) human face datasets. We randomly split the raw pixel features into 3 groups, each associated with one view in our multi-view model.
[Figure 5 plots: log-likelihood and CPU seconds vs. number of states for the JP-Morgan and Barclays stock series, comparing DCA HMM (s = 32, 64, 96, 160, 256), Baum-Welch (EM) and the spectral method.]
Figure 5: Likelihood (higher is better) and CPU seconds vs. Number of states for using an HMM to model the stock price of 2 companies from
01/01/1995 to 05/18/2014, collected from Yahoo Finance. Since no ground-truth labels are given, we measure likelihood on the training data.
DCA for Non-negative Matrix Factorization on Synthetic Data. The experimental comparison
results are shown in Figure 3. The greedy algorithms SPA [14], XRAY [19] and SFO achieve the best accuracy and smallest recovery error when the noise level is above 0.2, but XRAY and SFO are the
slowest two. SPA is slightly faster but still much slower than DCA. DCA with different numbers of sub-problems shows slightly lower accuracy than the greedy algorithms, but the difference is acceptable.
Considering its significant acceleration, DCA offers an advantageous trade-off. LP-test [4] has the
exact solution guarantee, but it is not robust to noise, and too slow. Therefore, DCA provides a much
faster and more practical NMF algorithm with comparable performance to the best ones.
DCA for Gaussian Mixture Model on CMU-PIE and YALE Face Dataset. The experimental
comparison results are shown in Figure 4. DCA consistently outperforms other methods (k-means,
EM, spectral method [1]) on accuracy, and shows speedups in the range 20-2000. By increasing the
number of sub-problems, the accuracy of DCA improves. Note that the number of pixels in the face images always exceeds 1000, which results in slow computation of the pairwise distances required by other clustering methods. DCA exhibits the fastest speed because the number of sub-problems s = O(k log k) does not depend on the feature dimension, and thus merely 171 2D random projections are sufficient for obtaining a promising clustering result. The spectral method performs worse than DCA due
to the large variance of sample moments. DCA uses the separability assumption in estimating the
eigenspace of the moment, and thus effectively reduces the variance.
Table 2: Motion prediction accuracy (higher is better) on the test set for 6 motion capture sequences from the CMU-mocap dataset. The motion for each frame is manually labeled by the authors of [16]. In the table, s13s29(39/63) means that we split sequence 29 of subject 13 into sub-sequences of 63 frames each, of which the first 39 are for training and the rest are for test. Time is measured in ms.
Sequence         | s13s29(39/63) | s13s30(25/51) | s13s31(25/50) | s14s14(29/43) | s14s20(29/43) | s14s06(24/40)
Measure          | Acc    Time   | Acc    Time   | Acc    Time   | Acc    Time   | Acc    Time   | Acc    Time
Baum-Welch (EM)  | 0.50   383    | 0.50   140    | 0.46   148    | 0.62   529    | 0.77   345    | 0.34   368
Spectral Method  | 0.20   80     | 0.25   43     | 0.13   58     | 0.63   134    | 0.59   70     | 0.29   66
DCA-HMM (s=9)    | 0.33   3.3    | 0.92   1      | 0.19   1.5    | 0.79   3      | 0.28   3      | 0.29   4.8
DCA-HMM (s=26)   | 0.50   3.3    | 1.00   1      | 0.65   1.6    | 0.45   3      | 0.89   3      | 0.60   4.8
DCA-HMM (s=52)   | 0.50   3.4    | 0.50   1.1    | 0.43   1.6    | 0.80   3.2    | 0.78   3.1    | 0.48   4.9
DCA-HMM (s=78)   | 0.66   3.4    | 0.93   1.1    | 0.41   1.6    | 0.80   6.7    | 0.83   3.2    | 0.51   4.9
[Figure 6 plots: LEFT pair, perplexity and CPU seconds vs. number of topics for DCA LDA (s = 801, 2001, 3336, 5070), EM variational, Gibbs sampling and the spectral method; RIGHT pair, mutual information and CPU seconds vs. number of clusters for DCA SC (s = 307, 819, 1229, 1843), SSC, SCC, LRR and RSC.]
Figure 6: LEFT: Perplexity (smaller is better) on the test set and CPU seconds vs. number of topics for LDA on the NIPS1-17 dataset; we randomly selected 70% of documents for training and use the remaining 30% for test. RIGHT: Mutual information (higher is better) and CPU seconds vs. number of clusters for subspace clustering on the COIL-100 dataset.
DCA for Hidden Markov Model on Stock Price and Motion Capture Data. The experimental
comparison results for stock price modeling and motion segmentation are shown in Figure 5 and Table
2, respectively. In the former, DCA always achieves slightly lower but comparable likelihood relative to the Baum-Welch (EM) method [5], while the spectral method [2] performs worse and less stably. DCA shows a significant speed advantage compared to the others, and is thus preferable in practice. In the latter, we evaluate prediction accuracy on the test set, where the regularization induced by the separability assumption gives DCA the highest accuracy and the fastest speed.
DCA for Latent Dirichlet Allocation on NIPS1-17 Dataset. The experimental comparison results
for topic modeling are shown in Figure 6. Compared to both traditional EM and Gibbs sampling [23], DCA achieves not only the smallest perplexity (highest likelihood) on the test set and the highest speed, but also the most stable performance as the number of topics increases. In addition, the "anchor words" found by DCA provide more interpretable topics than the other methods.
DCA for Subspace Clustering on COIL-100 Dataset. The experimental comparison results for
subspace clustering are shown in Figure 6. DCA provides a much more practical algorithm that can
achieve comparable mutual information at a more than 1000× speedup over the state-of-the-art
SC algorithms such as SCC [9], SSC [12], LRR [21], and RSC [26].
Acknowledgments: We would like to thank MELODI lab members for proof-reading and the
anonymous reviewers for their helpful comments. This work is supported by TerraSwarm research
center administered by the STARnet phase of the Focus Center Research Program (FCRP) sponsored
by MARCO and DARPA, by the National Science Foundation under Grant No. (IIS-1162606), and
by Google, Microsoft, and Intel research awards, and by the Intel Science and Technology Center for
Pervasive Computing.
References
[1] A. Anandkumar, D. P. Foster, D. Hsu, S. Kakade, and Y. Liu. A spectral algorithm for latent dirichlet
allocation. In NIPS, 2012.
[2] A. Anandkumar, D. Hsu, and S. M. Kakade. A method of moments for mixture models and hidden markov
models. In COLT, 2012.
[3] S. Arora, R. Ge, Y. Halpern, D. M. Mimno, A. Moitra, D. Sontag, Y. Wu, and M. Zhu. A practical algorithm
for topic modeling with provable guarantees. In ICML, 2013.
[4] S. Arora, R. Ge, R. Kannan, and A. Moitra. Computing a nonnegative matrix factorization - provably. In
STOC, 2012.
[5] L. E. Baum and T. Petrie. Statistical inference for probabilistic functions of finite state Markov chains.
Annals of Mathematical Statistics, 37:1554?1563, 1966.
[6] M. Belkin and K. Sinha. Polynomial learning of distribution families. In FOCS, 2010.
[7] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent dirichlet allocation. Journal of Maching Learning Research
(JMLR), 3:993?1022, 2003.
[8] J. T. Chang. Full reconstruction of markov models on evolutionary trees: Identifiability and consistency.
Mathematical Biosciences, 137(1):51?73, 1996.
[9] G. Chen and G. Lerman. Spectral curvature clustering (scc). International Journal of Computer Vision
(IJCV), 81(3):317?330, 2009.
[10] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the em
algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1?38, 1977.
[11] D. Donoho and V. Stodden. When does non-negative matrix factorization give a correct decomposition
into parts? In NIPS, 2003.
[12] E. Elhamifar and R. Vidal. Sparse subspace clustering. In CVPR, 2009.
[13] S. Geman and D. Geman. Stochastic relaxation, gibbs distributions, and the bayesian restoration of images.
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 6(6):721?741, 1984.
[14] N. Gillis and S. A. Vavasis. Fast and robust recursive algorithms for separable nonnegative matrix factorization. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 36(4):698-714,
2014.
[15] D. Hsu, S. M. Kakade, and T. Zhang. A spectral algorithm for learning hidden markov models. In COLT,
2009.
[16] M. C. Hughes, E. B. Fox, and E. B. Sudderth. Effective split-merge monte carlo methods for nonparametric
models of sequential data. In NIPS, 2012.
[17] A. T. Kalai, A. Moitra, and G. Valiant. Efficiently learning mixtures of two gaussians. In STOC, 2010.
[18] R. Kannan, H. Salmasian, and S. Vempala. The spectral method for general mixture models. In COLT,
2005.
[19] A. Kumar, V. Sindhwani, and P. Kambadur. Fast conical hull algorithms for near-separable nonnegative
matrix factorization. In ICML, 2013.
[20] D. D. Lee and H. S. Seung. Learning the parts of objects by non-negative matrix factorization. Nature,
401:788?791, 1999.
[21] G. Liu, Z. Lin, and Y. Yu. Robust subspace segmentation by low-rank representation. In ICML, 2010.
[22] K. Pearson. Contributions to the mathematical theory of evolution. Philosophical Transactions of the
Royal Society of London. A, 185:71?110, 1894.
[23] I. Porteous, D. Newman, A. Ihler, A. Asuncion, P. Smyth, and M. Welling. Fast collapsed gibbs sampling
for latent dirichlet allocation. In SIGKDD, pages 569?577, 2008.
[24] R. A. Redner and H. F. Walker. Mixture Densities, Maximum Likelihood and the Em Algorithm. SIAM
Review, 26(2):195?239, 1984.
[25] R. Salakhutdinov and A. Mnih. Probabilistic matrix factorization. In NIPS, 2008.
[26] M. Soltanolkotabi, E. Elhamifar, and E. J. Cand`es. Robust subspace clustering. arXiv:1301.2603, 2013.
[27] D. Titterington, A. Smith, and U. Makov. Statistical Analysis of Finite Mixture Distributions. Wiley, New
York, 1985.
[28] T. Zhou, W. Bian, and D. Tao. Divide-and-conquer anchoring for near-separable nonnegative matrix
factorization and completion in high dimensions. In ICDM, 2013.
[29] T. Zhou, J. Bilmes, and C. Guestrin. Extended version of "Divide-and-conquer learning by anchoring a conical hull". Extended version of an accepted NIPS-2014 paper, 2014.
Graphical Models for Recovering Probabilistic
and Causal Queries from Missing Data
Karthika Mohan and Judea Pearl
Cognitive Systems Laboratory
Computer Science Department
University of California, Los Angeles, CA 90024
{karthika,judea}@cs.ucla.edu
Abstract
We address the problem of deciding whether a causal or probabilistic query
is estimable from data corrupted by missing entries, given a model of the missingness process. We extend the results of Mohan et al. [2013] by presenting more general conditions for recovering probabilistic queries of the form
P (y|x) and P (y, x) as well as causal queries of the form P (y|do(x)). We
show that causal queries may be recoverable even when the factors in their
identifying estimands are not recoverable. Specifically, we derive graphical
conditions for recovering causal effects of the form P (y|do(x)) when Y and
its missingness mechanism are not d-separable. Finally, we apply our results to problems of attrition and characterize the recovery of causal effects
from data corrupted by attrition.
1
Introduction
All branches of experimental science are plagued by missing data. Improper handling of
missing data can bias outcomes and potentially distort the conclusions drawn from a study.
Therefore, accurate diagnosis of the causes of missingness is crucial for the success of any research. We employ a formal representation called "Missingness Graphs" (m-graphs, for short)
to explicitly portray the missingness process as well as the dependencies among variables in
the available dataset (Mohan et al. [2013]). Apart from determining whether recoverability is feasible namely, whether there exists any theoretical impediment to estimability of
queries of interest, m-graphs can also provide a means for communication and refinement
of assumptions about the missingness process. Furthermore, m-graphs permit us to detect
violations in modeling assumptions even when the dataset is contaminated with missing
entries (Mohan and Pearl [2014]).
In this paper, we extend the results of Mohan et al. [2013] by presenting general conditions
under which probabilistic queries such as joint and conditional distributions can be recovered. We show that causal queries of the type P (y|do(x)) can be recovered even when the
associated probabilistic relations such as P (y, x) and P (y|x) are not recoverable. In particular, causal effects may be recoverable even when Y is not separable from its missingness
mechanism. Finally, we apply our results to recover causal effects when the available dataset
is tainted by attrition.
This paper is organized as follows. Section 2 provides an overview of missingness graphs
and reviews the notion of recoverability i.e. obtaining consistent estimates of a query,
given a dataset and an m-graph. Section 3 refines the sequential factorization theorem
presented in Mohan et al. [2013] and extends its applicability to a wider range of problems
in which missingness mechanisms may influence each other. In section 4, we present general
1
[Figure 1 diagram omitted: an m-graph over Sex (S), Qualification (Q), Experience (X), Income (I), proxy variables Q*, I*, missingness mechanisms RQ, RI, and a latent variable U.]
Figure 1: Typical m-graph where Vo = {S, X}, Vm = {I, Q}, V* = {I*, Q*}, R = {RI, RQ} and U is the latent common cause. Members of Vo and Vm are represented by full and hollow circles respectively. The associated missingness process and assumptions are elaborated in appendix 10.1.
algorithms to recover joint distributions from the class of problems for which sequential
factorization theorem fails. In section 5, we introduce new graphical criteria that preclude
recoverability of joint and conditional distributions. In section 6, we discuss recoverability
of causal queries and show that unlike probabilistic queries, P (y|do(x)) may be recovered
even when Y and its missingness mechanism (Ry ) are not d-separable. In section 7, we
demonstrate how we can apply our results to problems of attrition in which missingness is a
severe obstacle to sound inferences. Related works are discussed in section 8 and conclusions
are drawn in section 9. Proofs of all theoretical results in this paper are provided in the
appendix.
2
Missingness Graph and Recoverability
Missingness graphs, as discussed below, were first defined in Mohan et al. [2013] and we adopt the same notation. Let G(V, E) be the causal DAG where $V = V \cup U \cup V^* \cup R$. V is the set of observable nodes. Nodes in the graph correspond to variables in the data set. U is the set of unobserved nodes (also called latent variables). E is the set of edges in the DAG. We use bi-directed edges as a shorthand notation to denote the existence of a U variable as a common parent of two variables in $V \cup R$. V is partitioned into $V_o$ and $V_m$ such that $V_o \subseteq V$ is the set of variables that are observed in all records in the population and $V_m \subseteq V$ is the set of variables that are missing in at least one record. Variable X is termed fully observed if $X \in V_o$, partially observed if $X \in V_m$ and substantive if $X \in V_o \cup V_m$. Associated with every partially observed variable $V_i \in V_m$ are two other variables $R_{V_i}$ and $V_i^*$, where $V_i^*$ is a proxy variable that is actually observed, and $R_{V_i}$ represents the status of the causal
mechanism responsible for the missingness of $V_i^*$; formally,
$$V_i^* = f(R_{V_i}, V_i) = \begin{cases} V_i & \text{if } R_{V_i} = 0 \\ m & \text{if } R_{V_i} = 1 \end{cases} \tag{1}$$
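As a concrete illustration of equation (1), the sketch below builds the proxy variables and missingness indicators from a raw data matrix. It is a minimal sketch of ours, assuming missing entries are encoded as NaN in a numpy array; the function name and encoding are illustrative, not part of the paper.

import numpy as np

def proxies_and_mechanisms(data):
    # data: n x p array with np.nan marking missing entries
    R = np.isnan(data).astype(int)   # R_vi = 1 exactly when Vi is missing (equation 1)
    V_star = data.copy()             # V_i* = V_i when R_vi = 0
    V_star[R == 1] = np.nan          # V_i* = m (here NaN) when R_vi = 1
    return V_star, R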
$V^*$ is the set of all proxy variables and R is the set of all causal mechanisms that are responsible for missingness. R variables may not be parents of variables in $V \cup U$. We call this graphical representation a Missingness Graph (or m-graph). An example of an m-graph is given in Figure 1. We use the following shorthand. For any variable X, let $X^*$ be a shorthand for $X = 0$. For any set $W \subseteq V_m \cup V_o \cup R$, let $W_r$, $W_o$ and $W_m$ be shorthand for $W \cap R$, $W \cap V_o$ and $W \cap V_m$ respectively. Let $R_W$ be a shorthand for $R_{V_m \cap W}$, i.e. $R_W$ is the set containing missingness mechanisms of all partially observed variables in W. Note that $R_W$ and $W_r$ are not the same. $G_{\underline{X}}$ and $G_{\overline{X}}$ represent graphs formed by removing from G all edges leaving and entering X, respectively.
A manifest distribution $P(V_o, V^*, R)$ is the distribution that governs the available dataset. An underlying distribution $P(V_o, V_m, R)$ is said to be compatible with a given manifest distribution $P(V_o, V^*, R)$ if the latter can be obtained from the former using equation 1. A manifest distribution $P_m$ is compatible with a given underlying distribution $P_u$ if $\forall X$, $X \subseteq$
[Figure 2 diagrams (a), (b), (c) omitted: three m-graphs over variables X, Y, Z, W and mechanisms RX, RY, RW, RZ; see caption below.]
Figure 2: (a) m-graph in which P (V ) is recoverable by the sequential factorization (b) &
(c): m-graphs for which no admissible sequence exists.
$V_m$ and $Y = V_m \setminus X$, the following equality holds true:
$$P_m(R_x^*, \overline{R}_y, X^*, Y^*, V_o) = P_u(R_x^*, \overline{R}_y, X, V_o)$$
where $R_x^*$ denotes $R_x = 0$ and $\overline{R}_y$ denotes $R_y = 1$. Refer to Appendix 10.2 for an example.
2.1
Recoverability
Given a manifest distribution $P(V^*, V_o, R)$ and an m-graph G that depicts the missingness
process, query Q is recoverable if we can compute a consistent estimate of Q as if no data
were missing. Formally,
Definition 1 (Recoverability (Mohan et al. [2013])). Given a m-graph G, and a target
relation Q defined on the variables in V , Q is said to be recoverable in G if there exists an
algorithm that produces a consistent estimate of Q for every dataset D such that P (D) is (1)
compatible with G and (2) strictly positive1 over complete cases i.e. P (Vo , Vm , R = 0) > 0.
For an introduction to the notion of recoverability see, Pearl and Mohan [2013] and Mohan
et al. [2013].
3
Recovering Probabilistic Queries by Sequential Factorization
Mohan et al. [2013] (theorem-4) presented a sufficient condition for recovering probabilistic
queries such as joint and conditional distributions by using ordered factorizations. However,
the theorem is not applicable to certain classes of problems such as those in longitudinal
studies in which edges exist between R variables. General ordered factorization defined
below broadens the concept of ordered factorization (Mohan et al. [2013]) to include the set of
R variables. Subsequently, the modified theorem (stated below as theorem 1) will permit us
to handle cases in which R variables are contained in separating sets that d-separate partially
observed variables from their respective missingness mechanisms (example: $X \perp\!\!\!\perp R_x \mid R_y$ in figure 2 (a)).
Definition 2 (General Ordered factorization). Given a graph G and a set O of ordered $V \cup R$ variables $Y_1 < Y_2 < \ldots < Y_k$, a general ordered factorization relative to G, denoted by f(O), is a product of conditional probabilities $f(O) = \prod_i P(Y_i|X_i)$ where $X_i \subseteq \{Y_{i+1}, \ldots, Y_n\}$ is a minimal set such that $Y_i \perp\!\!\!\perp (\{Y_{i+1}, \ldots, Y_n\} \setminus X_i) \mid X_i$ holds in G.
Theorem 1 (Sequential Factorization). A sufficient condition for recoverability of a relation Q defined over substantive variables is that Q be decomposable into a general ordered factorization, or a sum of such factorizations, such that every factor $Q_i = P(Y_i|X_i)$ satisfies (1) $Y_i \perp\!\!\!\perp (R_{Y_i}, R_{X_i}) \mid X_i \setminus \{R_{Y_i}, R_{X_i}\}$, if $Y_i \in V_o \cup V_m$, and (2) $Z \notin X_i$, $X_r \cap R_{X_m} = \emptyset$ and $R_Z \perp\!\!\!\perp R_{X_i} \mid X_i$, if $Y_i = R_Z$ for any $Z \in V_m$.
An ordered factorization that satisfies the condition in Theorem 1 is called an admissible
sequence.
The following example illustrates the use of theorem 1 for recovering the joint distribution.
Additionally, it sheds light on the need for the notion of minimality in definition 2.
1
An extension to datasets that are not strictly positive over complete cases is sometimes feasible(Mohan et al. [2013]).
3
Example 1. We are interested in recovering P(X, Y, Z) given the m-graph in Figure 2(a). We discern from the graph that definition 2 is satisfied because: (1) $P(Y|X, Z, R_y) = P(Y|X, Z)$ and (X, Z) is a minimal set such that $Y \perp\!\!\!\perp (\{X, Z, R_y\} \setminus (X, Z)) \mid (X, Z)$, (2) $P(X|R_y, Z) = P(X|R_y)$ and $R_y$ is the minimal set such that $X \perp\!\!\!\perp (\{R_y, Z\} \setminus R_y) \mid R_y$, and (3) $P(Z|R_y) = P(Z)$ and $\emptyset$ is the minimal set such that $Z \perp\!\!\!\perp R_y \mid \emptyset$. Therefore, the order $Y < X < Z < R_y$ induces a general ordered factorization $P(X, Y, Z, R_y) = P(Y|X, Z) P(X|R_y) P(Z) P(R_y)$. We now rewrite P(X, Y, Z) as follows:
$$P(X, Y, Z) = \sum_{R_y} P(Y, X, Z, R_y) = P(Y|X, Z)\, P(Z) \sum_{R_y} P(X|R_y)\, P(R_y)$$
Since $Y \perp\!\!\!\perp R_y \mid X, Z$, $Z \perp\!\!\!\perp R_z$, and $X \perp\!\!\!\perp R_x \mid R_y$, by theorem 1 we have
$$P(X, Y, Z) = P(Y|X, Z, R_x^*, R_y^*, R_z^*)\, P(Z|R_z^*) \sum_{R_y} P(X|R_x^*, R_y)\, P(R_y)$$
Indeed, equation 1 permits us to rewrite it as:
$$P(X, Y, Z) = P(Y^*|X^*, Z^*, R_x^*, R_y^*, R_z^*)\, P(Z^*|R_z^*) \sum_{R_y} P(X^*|R_x^*, R_y)\, P(R_y)$$
from the available dataset.
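To make the plug-in nature of this estimand concrete, the sketch below evaluates the final expression from a dataset of discrete variables. The empirical-frequency helper, the variable encoding, and the assumption that every conditioning event occurs in the sample are our own illustrative choices, not part of the paper.

import numpy as np

def cond_freq(event, given):
    # empirical P(event | given) from boolean row masks
    return (event & given).sum() / given.sum()

def recover_pxyz(x_star, y_star, z_star, rx, ry, rz, x, y, z):
    # plug-in estimate of P(X=x, Y=y, Z=z) for the m-graph of Figure 2(a)
    obs_all = (rx == 0) & (ry == 0) & (rz == 0)
    p_y = cond_freq(y_star == y, obs_all & (x_star == x) & (z_star == z))
    p_z = cond_freq(z_star == z, rz == 0)
    p_x = sum(cond_freq(x_star == x, (rx == 0) & (ry == r)) * np.mean(ry == r)
              for r in (0, 1))
    return p_y * p_z * p_x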
Had we ignored the minimality requirement in definition 2 and chosen to factorize
Y < X < Z < Ry using the chain rule, we would have obtained: P (X, Y, Z, Ry ) =
$P(Y|X, Z, R_y) P(X|Z, R_y) P(Z|R_y) P(R_y)$, which is not admissible since $X \perp\!\!\!\perp (R_z, R_x) \mid Z$ does not hold in the graph. In other words, existence of one admissible sequence based on an order
O of variables does not guarantee that every factorization based on O is admissible; it is for
this reason that we need to impose the condition of minimality in definition 2.
The recovery procedure presented in example 1 requires that we introduce Ry into the order.
Indeed, there is no ordered factorization over the substantive variables {X, Y, Z} that will
permit recoverability of P (X, Y, Z) in figure 2 (a). This extension of Mohan et al. [2013]
thus permits the recovery of probabilistic queries from problems in which the missingness
mechanisms interact with one another.
4
Recoverability in the Absence of an Admissible Sequence
Mohan et al. [2013] presented a theorem (refer appendix 10.4) that stated the necessary and
sufficient condition for recovering the joint distribution for the class of problems in which the
parent set of every R variable is a subset of Vo ?Vm . In contrast to Theorem 1, their theorem
can handle problems for which no admissible sequence exists. The following theorem gives a
generalization and is applicable to any given semi-markovian model (for example, m-graphs
in figure 2 (b) & (c)). It relies on the notion of collider path and two new subsets, R(part) :
the partitions of R variables and M b(R(i) ): substantive variables related to R(i) , which we
will define after stating the theorem.
Theorem 2. Given an m-graph G in which no element in Vm is either a neighbor of its
missingness mechanism or connected to its missingness mechanism by a collider path, P (V )
is recoverable if no $Mb(R^{(i)})$ contains a partially observed variable X such that $R_X \in R^{(i)}$, i.e. $\forall i,\; R^{(i)} \cap R_{Mb(R^{(i)})} = \emptyset$. Moreover, if recoverable, P(V) is given by
$$P(V) = \frac{P(V, R = 0)}{\prod_i P(R^{(i)} = 0 \mid Mb(R^{(i)}),\, R_{Mb(R^{(i)})} = 0)}$$
In theorem 2:
(i) A collider path p between any two nodes X and Y is a path in which every intermediate node is a collider. Example: $X \rightarrow Z \leftrightarrow Y$.
(ii) Rpart = {R(1) , R(2) , ...R(N ) } are partitions of R variables such that for every element
Rx and Ry belonging to distinct partitions, the following conditions hold true: (i) Rx and
4
Ry are not neighbors and (ii) Rx and Ry are not connected by a collider path. In figure 2
(b): Rpart = {R(1) , R(2) } where R(1) = {Rw , Rz }, R(2) = {Rx , Ry }
(iii) M b(R(i) ) is the markov blanket of R(i) comprising of all substantive variables that are
either neighbors or connected to variables in R(i) by a collider path (Richardson [2003]). In
figure 2 (b): M b(R(1) ) = {X, Y } and M b(R(2) ) = {Z, W }.
Appendix 10.6 demonstrates how theorem 2 leads to the recoverability of P (V ) in figure 2,
to which theorems in Mohan et al. [2013] do not apply.
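As a concrete instance of the estimand in Theorem 2, specializing the formula to the m-graph of Figure 2(b) with the partitions and Markov blankets listed above gives:
$$P(V) = \frac{P(X, Y, Z, W, R = 0)}{P(R_W = 0, R_Z = 0 \mid X, Y, R_X = 0, R_Y = 0)\; P(R_X = 0, R_Y = 0 \mid Z, W, R_Z = 0, R_W = 0)}$$
Every factor on the right hand side conditions on the relevant missingness mechanisms being 0, and so is estimable from the observed rows.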
The following corollary yields a sufficient condition for recovering the joint distribution from
the class of problems in which no bi-directed edge exists between variables in sets R and
$V_o \cup V_m$ (for example, the m-graph described in Figure 2(c)). These problems form a subset of the class of problems covered in theorem 2. The subset $Pa_{sub}(R^{(i)})$ used in the corollary is the set of all substantive variables that are parents of variables in $R^{(i)}$. In figure 2(b): $Pa_{sub}(R^{(1)}) = \emptyset$ and $Pa_{sub}(R^{(2)}) = \{Z, W\}$.
Corollary 1. Let G be an m-graph such that (i) $\forall X \in V_m \cup V_o$, no latent variable is a common parent of X and any member of R, and (ii) $\forall Y \in V_m$, Y is not a parent of $R_Y$. If $\forall i$, $Pa_{sub}(R^{(i)})$ does not contain a partially observed variable whose missingness mechanism is in $R^{(i)}$, i.e. $R^{(i)} \cap R_{Pa_{sub}(R^{(i)})} = \emptyset$, then P(V) is recoverable and is given by
$$P(v) = \frac{P(R = 0, V)}{\prod_i P(R^{(i)} = 0 \mid Pa_{sub}(R^{(i)}),\, R_{Pa_{sub}(R^{(i)})} = 0)}$$
5
Non-recoverability Criteria for Joint and Conditional Distributions
Up until now, we dealt with sufficient conditions for recoverability. It is important however
to supplement these results with criteria for non-recoverability in order to alert the user to
the fact that the available assumptions are insufficient to produce a consistent estimate of
the target query. Such criteria have not been treated formally in the literature thus far. In
the following theorem we introduce two graphical conditions that preclude recoverability.
Theorem 3 (Non-recoverability of P (V )). Given a semi-markovian model G, the following
conditions are necessary for recoverability of the joint distribution:
(i) $\forall X \in V_m$, X and $R_X$ are not neighbors, and
(ii) $\forall X \in V_m$, there does not exist a path from X to $R_X$ in which every intermediate node is both a collider and a substantive variable.
In the following corollary, we leverage theorem 3 to yield necessary conditions for recovering
conditional distributions.
Corollary 2. [Non-recoverability of P (Y |X)] Let X and Y be disjoint subsets of substantive
variables. P (Y |X) is non-recoverable in m-graph G if one of the following conditions is true:
(1) Y and Ry are neighbors
(2) G contains a collider path p connecting Y and Ry such that all intermediate nodes in p
are in X.
6
Recovering Causal Queries
Given a causal query and a causal Bayesian network, a complete algorithm exists for deciding
whether the query is identifiable or not (Shpitser and Pearl [2006]). Obviously, a query that
is not identifiable in the substantive model is not recoverable from missing data. Therefore,
a necessary condition for recoverability of a causal query is its identifiability which we will
assume in the rest of our discussion.
Definition 3 (Trivially Recoverable Query). A causal query Q is said to be trivially recoverable given an m-graph G if it has an estimand (in terms of substantive variables) in which
every factor is recoverable.
5
[Figure 3 diagram omitted: an m-graph over Y, Z, W and Ry; see caption below.]
Figure 3: m-graph in which Y and Ry are not separable but still P(Y|do(Z)) is recoverable.
Classes of problems that fall into the MCAR (Missing Completely At Random) and MAR
(Missing At Random) categories are much discussed in the literature (Rubin [1976]) because in such categories probabilistic queries are recoverable by graph-blind algorithms. An
immediate but important implication of trivial recoverability is that if data are MAR or
MCAR and the query is identifiable, then it is also recoverable by model-blind algorithms.
Example 2. In the gender wage-gap study example in Figure 1 (a), the effect of sex on
income, P (I|do(S)), is identifiable and is given by P (I|S). By theorem 2, P (S, X, Q, I) is
recoverable. Hence P (I|do(S)) is recoverable.
6.1
Recovering P (y|do(z)) when Y and Ry are inseparable
The recoverability of P (V ) hinges on the separability of a partially observed variable from its
missingness mechanism (a condition established in theorem 3). Remarkably, causal queries
may circumvent this requirement. The following example demonstrates that P (y|do(z)) is
recoverable even when Y and Ry are not separable.
Example 3. Examine Figure 3. By the backdoor criterion, $P(y|do(z)) = \sum_w P(y|z, w) P(w)$. One might be tempted to conclude that the causal relation is non-recoverable because P(w, z, y) is non-recoverable (by theorem 2) and P(y|z, w) is not recoverable (by corollary 2). However, P(y|do(z)) is recoverable as demonstrated below:
$$P(y|do(z)) = P(y|do(z), R_y^*) = \sum_w P(y|do(z), w, R_y^*)\, P(w|do(z), R_y^*) \tag{2}$$
$$P(y|do(z), w, R_y^*) = P(y|z, w, R_y^*) \quad \text{(by Rule-2 of do-calculus (Pearl [2009]))} \tag{3}$$
$$P(w|do(z), R_y^*) = P(w|R_y^*) \quad \text{(by Rule-3 of do-calculus)} \tag{4}$$
Substituting (3) and (4) in (2) we get:
$$P(y|do(z)) = \sum_w P(y|z, w, R_y^*)\, P(w|R_y^*) = \sum_w P(y^*|z, w, R_y^*)\, P(w|R_y^*)$$
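Since every term in the final expression conditions on $R_y = 0$, it is directly estimable from rows where Y is observed. A minimal sketch for discrete data (our illustration, reusing the empirical-frequency convention from the earlier sketch):

import numpy as np

def cond_freq(event, given):
    return (event & given).sum() / given.sum()   # empirical P(event | given)

def recover_p_y_do_z(y_star, z_col, w_col, ry, y, z):
    # P(y|do(z)) = sum_w P(y*|z, w, Ry=0) P(w|Ry=0) for the m-graph of Figure 3
    obs = (ry == 0)
    total = 0.0
    for w in np.unique(w_col):
        total += (cond_freq(y_star == y, obs & (z_col == z) & (w_col == w))
                  * cond_freq(w_col == w, obs))
    return total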
The recoverability of P(y|do(z)) in the previous example follows from the notion of d*-separability and dormant independence [Shpitser and Pearl, 2008].
Definition 4 (d*-separation (Shpitser and Pearl [2008])). Let G be a causal diagram. Variable sets X, Y are d*-separated in G given Z, W (written $X \perp_w Y \mid Z$), if we can find sets Z, W, such that $X \perp Y \mid Z$ in $G_{\overline{W}}$, and $P(y, x \mid z, do(w))$ is identifiable.
Definition 5 (Inducing path (Verma and Pearl [1991])). A path p between X and Y is called an inducing path if every node on the path is a collider and an ancestor of either X or Y.
Theorem 4. Given an m-graph in which $|V_m| = 1$ and Y and $R_y$ are connected by an inducing path, P(y|do(x)) is recoverable if there exist Z, W such that $Y \perp_w R_y \mid Z$ and, for $W_1 = W \setminus X$, the following conditions hold:
(1) $Y \perp\!\!\!\perp W_1 \mid X, Z$ in $G_{\overline{X}, \overline{W_1}}$ and
(2) $P(W_1, Z \mid do(X))$ and $P(Y \mid do(W_1), do(X), Z, R_y^*)$ are identifiable.
Moreover, if recoverable then
$$P(y|do(x)) = \sum_{W_1, Z} P(Y \mid do(W_1), do(X), Z, R_y^*)\, P(Z, W_1 \mid do(X))$$
We can quickly conclude that P (y|do(z)) is recoverable in the m-graph in figure 3 by verifying
that the conditions in theorem 4 hold in the m-graph.
6
[Figure 4 diagrams (a) and (b) omitted: two m-graphs over X, Y, Z and RY; see caption below.]
Figure 4: (a) m-graph in which P(y|do(x)) is not recoverable; (b) m-graph in which P(y|do(x)) is recoverable.
7
Attrition
Attrition (i.e. participants dropping out from a study/experiment) is a ubiquitous phenomenon, especially in longitudinal studies. In this section, we shall discuss a special case
of attrition called "Simple Attrition" (Garcia [2013]). In this problem, a researcher conducts
a randomized trial, measures a set of variables (X,Y,Z) and obtains a dataset where outcome
(Y) is corrupted by missing values (due to attrition). Clearly, due to randomization, the
effect of treatment (X) on outcome (Y), P (y|do(x)), is identifiable and is given by P (Y |X).
We shall now demonstrate the usefulness of our previous discussion in recovering P (y|do(x)).
Typical attrition problems are depicted in figure 4. In Figure 4(b) we can apply theorem 1 to recover P(y|do(x)) as given below: $P(Y|X) = \sum_Z P(Y^*|X, Z, R_y^*)\, P(Z|X)$. In Figure 4
(a), we observe that Y and Ry are connected by a collider path. Therefore by corollary 2,
P (Y |X) is not recoverable; hence P (y|do(x)) is also not recoverable.
7.1
Recovering Joint Distributions under simple attrition
The following theorem yields the necessary and sufficient condition for recovering joint distributions from semi-markovian models with a single partially observed variable i.e. |Vm | = 1
which includes models afflicted by simple attrition.
Theorem 5. Let Y ? Vm and |Vm | = 1. P (V ) is recoverable in m-graph G if and only
if Y and Ry are not neighbors and Y and Ry are not connected by a path in which all
intermediate nodes are colliders. If both conditions are satisfied, then P (V ) is given by,
$$P(V) = P(Y \mid V_o, R_y = 0)\, P(V_o)$$
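In code, this estimand is a complete-case conditional for Y combined with the marginal of the fully observed variables. A minimal sketch for a single discrete $V_o$ column, assuming numpy boolean arrays (the names are illustrative):

def recover_pv(y_star, vo_col, ry, y, vo):
    # P(V) = P(Y | Vo, Ry = 0) * P(Vo): complete cases for Y, all rows for Vo
    cases = (ry == 0) & (vo_col == vo)
    p_y_given = (cases & (y_star == y)).sum() / cases.sum()
    return p_y_given * (vo_col == vo).mean()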
7.2
Recovering Causal Effects under Simple Attrition
Theorem 6. P (y|do(x)) is recoverable in the simple attrition case (with one partially observed variable) if and only if Y and Ry are neither neighbors nor connected by an inducing
path. Moreover, if recoverable,
$$P(Y|X) = \sum_{z} P(Y^* \mid X, z, R_y^*)\, P(z \mid X) \tag{5}$$
where Z is the separating set that d-separates Y from Ry .
These results rectify prevailing opinion in the available literature. For example, according
to Garcia [2013] (Theorem-3), a necessary condition for non-recoverability of causal effect
under simple attrition is that X be an ancestor of Ry . In Figure 4 (a), X is not an ancestor
of Ry and still P (Y |X) is non-recoverable ( due to the collider path between Y and Ry ).
8
Related Work
Deletion based methods such as listwise deletion that are easy to understand as well as
implement, guarantee consistent estimates only for certain categories of missingness such as
MCAR (Rubin [1976]). Maximum Likelihood method is known to yield consistent estimates
under MAR assumption; expectation maximization algorithm and gradient based algorithms
are widely used for searching for ML estimates under incomplete data (Lauritzen [1995],
Dempster et al. [1977], Darwiche [2009], Koller and Friedman [2009]). Most work in machine
learning assumes MAR and proceeds with ML or Bayesian inference. However, there are
exceptions such as recent work on collaborative filtering and recommender systems which
7
develop probabilistic models that explicitly incorporate missing data mechanism (Marlin
et al. [2011], Marlin and Zemel [2009], Marlin et al. [2007]).
Other methods for handling missing data can be classified into two: (a) Inverse Probability
Weighted Methods and (b) Imputation based methods (Rothman et al. [2008]). Inverse
Probability Weighing methods analyze and assign weights to complete records based on
estimated probabilities of completeness (Van der Laan and Robins [2003], Robins et al.
[1994]). Imputation based methods substitute a reasonable guess in the place of a missing
value (Allison [2002]) and Multiple Imputation (Little and Rubin [2002]) is a widely used
imputation method.
Missing data is a special case of coarsened data and data are said to be coarsened at
random (CAR) if the coarsening mechanism is only a function of the observed data (Heitjan
and Rubin [1991]). Robins and Rotnitzky [1992] introduced a methodology for parameter
estimation from data structures for which full data has a non-zero probability of being fully
observed and their methodology was later extended to deal with censored data in which
complete data on subjects are never observed (Van Der Laan and Robins [1998]).
The use of graphical models for handling missing data is a relatively new development.
Daniel et al. [2012] used graphical models for analyzing missing information in the form of
missing cases (due to sample selection bias). Attrition is a common occurrence in longitudinal studies and arises when subjects drop out of the study (Twisk and de Vente [2002],
Shadish [2002]) and Garcia [2013] analysed the problem of attrition using causal graphs.
Thoemmes and Rose [2013] cautioned the practitioner that contrary to popular belief, not
all auxiliary variables reduce bias. Both Garcia [2013] and Thoemmes and Rose [2013]
associate missingness with a single variable and interactions among several missingness
mechanisms are unexplored.
Mohan et al. [2013] employed a formal representation called Missingness Graphs to depict
the missingness process, defined the notion of recoverability and derived conditions under
which queries would be recoverable when datasets are categorized as Missing Not At Random
(MNAR). Tests to detect misspecifications in the m-graph are discussed in Mohan and Pearl
[2014].
9
Conclusion
Graphical models play a critical role in portraying the missingness process, encoding and
communicating assumptions about missingness and deciding recoverability given a dataset
afflicted with missingness. We presented graphical conditions for recovering joint and conditional distributions and sufficient conditions for recovering causal queries. We exemplified
the recoverability of causal queries of the form P (y|do(x)) despite the existence of an inseparable path between Y and Ry , which is an insurmountable obstacle to the recovery of
P(Y). We applied our results to problems of attrition and presented necessary and sufficient
graphical conditions for recovering causal effects in such problems.
Acknowledgement
This paper has benefited from discussions with Ilya Shpitser. This research was supported
in parts by grants from NSF #IIS1249822 and #IIS1302448, and ONR #N00014-13-1-0153
and #N00014-10-1-0933.
References
P.D. Allison. Missing data series: Quantitative applications in the social sciences, 2002.
R.M. Daniel, M.G. Kenward, S.N. Cousens, and B.L. De Stavola. Using causal diagrams to guide
analysis in missing data problems. Statistical Methods in Medical Research, 21(3):243?256, 2012.
A Darwiche. Modeling and reasoning with Bayesian networks. Cambridge University Press, 2009.
8
A.P. Dempster, N.M. Laird, and D.B. Rubin. Maximum likelihood from incomplete data via the
em algorithm. Journal of the Royal Statistical Society. Series B (Methodological), pages 1?38,
1977.
F. M. Garcia. Definition and diagnosis of problematic attrition in randomized controlled experiments. Working paper, April 2013. Available at SSRN: http://ssrn.com/abstract=2267120.
D.F. Heitjan and D.B. Rubin. Ignorability and coarse data. The Annals of Statistics, pages 2244?
2253, 1991.
D Koller and N Friedman. Probabilistic graphical models: principles and techniques. 2009.
S L Lauritzen. The em algorithm for graphical association models with missing data. Computational
Statistics & Data Analysis, 19(2):191?201, 1995.
R.J.A. Little and D.B. Rubin. Statistical analysis with missing data. Wiley, 2002.
B.M. Marlin and R.S. Zemel. Collaborative prediction and ranking with non-random missing data.
In Proceedings of the third ACM conference on Recommender systems, pages 5?12. ACM, 2009.
B.M. Marlin, R.S. Zemel, S. Roweis, and M. Slaney. Collaborative filtering and the missing at
random assumption. In UAI, 2007.
B.M. Marlin, R.S. Zemel, S.T. Roweis, and M. Slaney. Recommender systems: missing data and
statistical model estimation. In IJCAI, 2011.
K Mohan and J Pearl. On the testability of models with missing data. Proceedings of AISTAT,
2014.
K Mohan, J Pearl, and J Tian. Graphical models for inference with missing data. In Advances in
Neural Information Processing Systems 26, pages 1277?1285. 2013.
J. Pearl. Causality: models, reasoning and inference. Cambridge Univ Press, New York, 2009.
J Pearl and K Mohan. Recoverability and testability of missing data: Introduction and summary of results. Technical Report R-417, UCLA, 2013. Available at http://ftp.cs.ucla.edu/pub/stat_ser/r417.pdf.
Thomas Richardson. Markov properties for acyclic directed mixed graphs. Scandinavian Journal
of Statistics, 30(1):145?157, 2003.
J M Robins and A Rotnitzky. Recovery of information and adjustment for dependent censoring
using surrogate markers. In AIDS Epidemiology, pages 297?331. Springer, 1992.
J M Robins, A Rotnitzky, and L P Zhao. Estimation of regression coefficients when some regressors
are not always observed. Journal of the American Statistical Association, 89(427):846?866, 1994.
K J Rothman, S Greenland, and T L Lash. Modern epidemiology. Lippincott Williams & Wilkins,
2008.
D.B. Rubin. Inference and missing data. Biometrika, 63:581?592, 1976.
W R Shadish. Revisiting field experimentation: field notes for the future. Psychological methods, 7
(1):3, 2002.
I Shpitser and J Pearl. Identification of conditional interventional distributions. In Proceedings of
the Twenty-Second Conference on Uncertainty in Artificial Intelligence, pages 437?444. 2006.
I Shpitser and J Pearl. Dormant independence. In AAAI, pages 1081?1087, 2008.
F. Thoemmes and N. Rose. Selection of auxiliary variables in missing data problems: Not all
auxiliary variables are created equal. Technical Report R-002, Cornell University, 2013.
J Twisk and W de Vente. Attrition in longitudinal studies: how to deal with missing data. Journal
of clinical epidemiology, 55(4):329?337, 2002.
M J Van Der Laan and J M Robins. Locally efficient estimation with current status data and
time-dependent covariates. Journal of the American Statistical Association, 93(442):693?701,
1998.
M.J. Van der Laan and J.M. Robins. Unified methods for censored longitudinal data and causality.
Springer Verlag, 2003.
T.S Verma and J Pearl. Equivalence and synthesis of causal models. In Proceedings of the Sixth
Conference in Artificial Intelligence, pages 220?227. Association for Uncertainty in AI, 1991.
Optimization Methods for Sparse Pseudo-Likelihood
Graphical Model Selection
Onkar Dalal
Stanford University
[email protected]
Sang-Yun Oh
Computational Research Division
Lawrence Berkeley National Lab
[email protected]
Kshitij Khare
Department of Statistics
University of Florida
[email protected]
Bala Rajaratnam
Department of Statistics
Stanford University
[email protected]
Abstract
Sparse high dimensional graphical model selection is a popular topic in contemporary machine learning. To this end, various useful approaches have been proposed
in the context of `1 -penalized estimation in the Gaussian framework. Though
many of these inverse covariance estimation approaches are demonstrably scalable and have leveraged recent advances in convex optimization, they still depend
on the Gaussian functional form. To address this gap, a convex pseudo-likelihood
based partial correlation graph estimation method (CONCORD) has been recently
proposed. This method uses coordinate-wise minimization of a regression based
pseudo-likelihood, and has been shown to have robust model selection properties in comparison with the Gaussian approach. In direct contrast to the parallel
work in the Gaussian setting however, this new convex pseudo-likelihood framework has not leveraged the extensive array of methods that have been proposed
in the machine learning literature for convex optimization. In this paper, we address this crucial gap by proposing two proximal gradient methods (CONCORD-ISTA and CONCORD-FISTA) for performing $\ell_1$-regularized inverse covariance
matrix estimation in the pseudo-likelihood framework. We present timing comparisons with coordinate-wise minimization and demonstrate that our approach
yields tremendous payoffs for `1 -penalized partial correlation graph estimation
outside the Gaussian setting, thus yielding the fastest and most scalable approach
for such problems. We undertake a theoretical analysis of our approach and rigorously demonstrate convergence, and also derive rates thereof.
1
Introduction
Sparse inverse covariance estimation has received tremendous attention in the machine learning,
statistics and optimization communities. These sparse models, popularly known as graphical models, have widespread use in various applications, especially in high dimensional settings. The most
popular inverse covariance estimation framework is arguably the `1 -penalized Gaussian likelihood
optimization framework as given by
$$\underset{\Omega \in \mathcal{S}^p_{++}}{\text{minimize}}\;\; -\log\det\Omega + \mathrm{tr}(S\Omega) + \lambda\|\Omega\|_1$$
where $\mathcal{S}^p_{++}$ denotes the space of p-dimensional positive definite matrices, and the $\ell_1$-penalty is imposed on the elements of $\Omega = (\omega_{ij})_{1 \le i \le j \le p}$ by the term $\|\Omega\|_1 = \sum_{i,j} |\omega_{ij}|$ along with the scaling factor $\lambda > 0$. The matrix S denotes the sample covariance matrix of the data $Y \in \mathbb{R}^{n \times p}$. As the $\ell_1$-penalized log likelihood is convex, the problem becomes more tractable and has benefited from
advances in convex optimization. Recent efforts in the literature on Gaussian graphical models
therefore have focused on developing principled methods which are increasingly more and more
scalable. The literature on this topic is simply enormous and for the sake of brevity, space constraints
and the topic of this paper, we avoid an extensive literature review by referring to the references in
the seminal work of [1] and the very recent work of [2]. These two papers contain references to
recent work, including past NIPS conference proceedings.
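For orientation, the Gaussian formulation above is what standard graphical lasso solvers target. A hedged usage example with scikit-learn, on synthetic stand-in data (the penalty value is arbitrary, chosen only for illustration):

import numpy as np
from sklearn.covariance import GraphicalLasso

Y = np.random.randn(200, 10)               # n x p data matrix (synthetic stand-in)
model = GraphicalLasso(alpha=0.1).fit(Y)   # alpha plays the role of the penalty above
Omega_hat = model.precision_               # sparse inverse covariance estimate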
1.1
The CONCORD method
Despite their tremendous contributions, one shortcoming of the traditional approaches to `1 penalized likelihood maximization is the restriction to the Gaussian assumption. To address this
gap, a number of $\ell_1$-penalized pseudo-likelihood approaches have been proposed: SPACE [3], SPLICE [4], and SYMLASSO [5]. These approaches are either not convex, and/or convergence of the
corresponding maximization algorithms are not established. In this sense, non-Gaussian partial
correlation graph estimation methods have lagged severely behind, despite the tremendous need to
move beyond the Gaussian framework for obvious practical reasons. In very recent work, a convex pseudo-likelihood approach with good model selection properties called CONCORD [6] was
proposed. The CONCORD algorithm minimizes
$$Q_{con}(\Omega) = -\sum_{i=1}^{p} n \log \omega_{ii} + \frac{1}{2}\sum_{i=1}^{p} \Big\| \omega_{ii} Y_i + \sum_{j \ne i} \omega_{ij} Y_j \Big\|_2^2 + n\lambda \sum_{1 \le i < j \le p} |\omega_{ij}| \tag{1}$$
via cyclic coordinate-wise descent that alternates between updating off-diagonal elements and diagonal elements. It is straightforward to show that the operators $T_{ij}$ for updating $(\omega_{ij})_{1 \le i < j \le p}$ (holding $(\omega_{ii})_{1 \le i \le p}$ constant) and $T_{ii}$ for updating $(\omega_{ii})_{1 \le i \le p}$ (holding $(\omega_{ij})_{1 \le i < j \le p}$ constant) are given by
$$(T_{ij}(\Omega))_{ij} = \frac{S_\lambda\!\Big(-\big(\sum_{j' \ne j} \omega_{ij'} s_{j'j} + \sum_{i' \ne i} \omega_{i'j} s_{i'i}\big)\Big)}{s_{ii} + s_{jj}} \tag{2}$$
$$(T_{ii}(\Omega))_{ii} = \frac{-\sum_{j \ne i} \omega_{ij} s_{ij} + \sqrt{\big(\sum_{j \ne i} \omega_{ij} s_{ij}\big)^2 + 4 s_{ii}}}{2 s_{ii}} \tag{3}$$
This coordinate-wise algorithm is shown to converge to a global minimum, though no rate is given
[6]. Note that the equivalent problem assuming a Gaussian likelihood has seen much development
in the last ten years, but a parallel development for the recently introduced CONCORD framework
is lacking for obvious reasons. We address this important gap by proposing state-of-the-art proximal gradient techniques to minimize Qcon . A rigorous theoretical analysis of the pseudo-likelihood
framework and the associated proximal gradient methods which are proposed is undertaken. We
establish rates of convergence and also demonstrate that our approach can lead to massive computational speed-ups, thus yielding extremely fast and principled solvers for the sparse inverse covariance
estimation problem outside the Gaussian setting.
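For later comparison with the proximal methods, updates (2)-(3) take only a few lines. The following is a hedged sketch of one full coordinate sweep, written by us for illustration rather than taken from the reference implementation of [6]:

import numpy as np

def concord_coordinate_sweep(Omega, S, lam):
    # one cyclic pass of updates (2) and (3); Omega and S are symmetric p x p arrays
    p = S.shape[0]
    soft = lambda x, t: np.sign(x) * max(abs(x) - t, 0.0)
    for i in range(p):
        for j in range(i + 1, p):
            a = Omega[i, :] @ S[:, j] - Omega[i, j] * S[j, j]   # sum_{j' != j} w_ij' s_j'j
            b = Omega[:, j] @ S[:, i] - Omega[i, j] * S[i, i]   # sum_{i' != i} w_i'j s_i'i
            Omega[i, j] = Omega[j, i] = soft(-(a + b), lam) / (S[i, i] + S[j, j])
    for i in range(p):
        c = Omega[i, :] @ S[:, i] - Omega[i, i] * S[i, i]       # sum_{j != i} w_ij s_ij
        Omega[i, i] = (-c + np.sqrt(c ** 2 + 4 * S[i, i])) / (2 * S[i, i])
    return Omega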
2
CONCORD using proximal gradient methods
The penalized matrix version of the CONCORD objective function in (1) is given by
$$Q_{con}(\Omega) = \frac{n}{2}\Big( -\log|\Omega_D^2| + \mathrm{tr}(S\Omega^2) + \lambda\|\Omega_X\|_1 \Big) \tag{4}$$
where $\Omega_D$ and $\Omega_X$ denote the diagonal and off-diagonal elements of $\Omega$. We will use the notation $A = A_D + A_X$ to split any matrix A into its diagonal and off-diagonal terms.
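Under the reading of (4) reconstructed above (the placement of the penalty inside the n/2 factor is our interpretation of the garbled source), the objective is a few lines of numpy; a sketch useful for checking iterates:

import numpy as np

def q_con_matrix(Omega, S, n, lam):
    # matrix-form CONCORD objective (4); Omega symmetric with positive diagonal
    d = np.diag(Omega)
    smooth = -np.sum(np.log(d ** 2)) + np.trace(S @ Omega @ Omega)
    penalty = lam * np.sum(np.abs(Omega - np.diag(d)))
    return 0.5 * n * (smooth + penalty)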
This section proposes a scalable and thorough approach to solving the CONCORD objective function using recent advances in convex optimization and derives rates of convergence for such algorithms. In particular, we use proximal gradient-based methods to achieve this goal and demonstrate
the efficacy of such methods for the non-Gaussian graphical modeling problem. First, we propose
CONCORD-ISTA and CONCORD-FISTA in section 2.1: methods which are inspired by the iterative soft-thresholding algorithms in [7]. We undertake a comprehensive treatment of the CONCORD
optimization problem by also investigating the dual of the CONCORD problem. Other popular
methods in the literature, including the potential use of alternating minimization algorithm and the
second order proximal Newtons method are considered in Supplemental section A.8.
2.1
Iterative Soft Thresholding Algorithms: CONCORD-ISTA, CONCORD-FISTA
The iterative soft-thresholding algorithms (ISTA) have recently gained popularity after the seminal
paper by Beck and Teboulle [7]. The ISTA methods are based on the Forward-Backward Splitting
method from [8] and Nesterov?s accelerated gradient methods [9] using soft-thresholding as the
proximal operator for the `1 -norm. The essence of the proximal gradient algorithms is to divide
the objective function into a smooth part and a non-smooth part, then take a proximal step (w.r.t.
the non-smooth part) in the negative gradient direction of the smooth part. Nesterov?s accelerated
gradient extension [9] uses a combination of gradient and momentum steps to achieve accelerated
rates of convergence. In this section, we apply these methods in the context of CONCORD which
also has a composite objective function.
The matrix CONCORD objective function (4) can be split into a smooth part $h_1(\Omega)$ and a non-smooth part $h_2(\Omega)$:
$$h_1(\Omega) = -\log\det\Omega_D + \frac{1}{2}\mathrm{tr}(\Omega S \Omega), \qquad h_2(\Omega) = \lambda\|\Omega_X\|_1.$$
The gradient and hessian of the smooth function $h_1$ are given by
$$\nabla h_1(\Omega) = -\Omega_D^{-1} + \frac{1}{2}\big(S\Omega^T + \Omega S\big), \tag{5}$$
$$\nabla^2 h_1(\Omega) = \sum_{i=1}^{p} \omega_{ii}^{-2}\, e_i e_i^T \otimes e_i e_i^T + \frac{1}{2}\big(S \otimes I + I \otimes S\big), \tag{6}$$
where ei is a column vector of zeros except for a one in the i-th position.
The proximal operator for $h_2$ is given by the element-wise soft-thresholding operator $S_\lambda$ as
$$\mathrm{prox}_{h_2}(\Lambda) = \arg\min_{\Theta}\; h_2(\Theta) + \frac{1}{2}\|\Theta - \Lambda\|_F^2 = S_{\bar\lambda}(\Lambda) = \mathrm{sign}(\Lambda)\max\{|\Lambda| - \bar\lambda, 0\}, \tag{7}$$
where $\bar\lambda$ is a matrix with 0 on the diagonal and $\lambda$ for each off-diagonal entry. The details of the proximal
gradient algorithm CONCORD-ISTA are given in Algorithm 1, and the details of the accelerated
proximal gradient algorithm CONCORD-FISTA are given in Algorithm 2.
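The prox step (7) is entrywise soft-thresholding that leaves the diagonal untouched; a short sketch of ours:

import numpy as np

def prox_h2(Lam, lam):
    # soft-threshold off-diagonal entries at level lam; diagonal passes through (eq. 7)
    out = np.sign(Lam) * np.maximum(np.abs(Lam) - lam, 0.0)
    np.fill_diagonal(out, np.diag(Lam))
    return out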
2.2
Choice of step size
In the absence of a good estimate of the Lipschitz constant L, the step size for each iteration of
CONCORD-ISTA and CONCORD-FISTA is chosen using backtracking line search. The line search
for iteration k starts with an initial step size ?(k,0) and reduces the step with a constant factor c until
the new iterate satisfies the sufficient descent condition:
$$h_1(\Omega^{(k+1)}) \le Q(\Omega^{(k+1)}, \Omega^{(k)}) \tag{8}$$
where,
$$Q(\Theta, \Omega) = h_1(\Omega) + \mathrm{tr}\big((\Theta - \Omega)^T \nabla h_1(\Omega)\big) + \frac{1}{2\tau}\|\Theta - \Omega\|_F^2.$$
In section 4, we have implemented algorithms choosing the initial step size in three different ways:
(a) a constant starting step size (=1), (b) the feasible step size from the previous iteration ?k?1 , (c)
the step size heuristic of Barzilai-Borwein. The Barzilai-Borwein heuristic step size is given by
$$\tau_{k+1,0} = \frac{\mathrm{tr}\big((\Omega^{(k+1)} - \Omega^{(k)})^T (\Omega^{(k+1)} - \Omega^{(k)})\big)}{\mathrm{tr}\big((\Omega^{(k+1)} - \Omega^{(k)})^T (G^{(k+1)} - G^{(k)})\big)} \tag{9}$$
This is an approximation of the secant equation which works as a proxy for second order information
using successive gradients (see [10] for details).
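Formula (9) needs only the last two iterates and gradients; a minimal sketch, using the identity tr(A^T B) = sum(A * B) for numpy arrays:

import numpy as np

def bb_step(Omega_new, Omega_old, G_new, G_old):
    # Barzilai-Borwein initial step size (9)
    dO, dG = Omega_new - Omega_old, G_new - G_old
    return np.sum(dO * dO) / np.sum(dO * dG)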
Algorithm 1 CONCORD-ISTA
Algorithm 2 CONCORD-FISTA
Input: sample covariance matrix S, penalty ?
Set: ?(0) ?
Sp+ , ?(0,0)
Input: sample covariance matrix S, penalty ?
Set: (?(1) =)?(0) ? Sp+ , ?1 = 1, ?(0,0) ? 1,
? 1, c < 1, ?subg = 1
while ?subg > subg do
?1
(k)
G(k) = ? ?D
+ 12 S ?(k) + ?(k) S
j
Take largest ?k ? {c ?(k,0) }j=0,1,... s.t.
?(k+1) = S?k ? ?(k) ? ?k G(k) ` (8).
while ?subg > subg do
?1
(k)
G(k) = ? ?D
+
1
2
S?(k) + ?(k) S
Take largest ?k ? {cj ?(k,0) }j=0,1,... s.t.
?(k) = S?k ? ?(k) ? ?k G(k) ` (8)
p
?k+1 = (1 + 1 + 4?k 2 )/2
?1
?(k+1) = ?(k) + ??kk+1
?(k) ? ?(k?1)
Compute: ?(k+1,0)
Compute: ?subg 1
end while
1: ?subg
c < 1, ?subg = 1.
Compute: ?(k+1,0)
k?h1 (?(k) ) + ?h2 (?(k) )k
=
k?(k) k
Compute: ?subg 1
end while
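Putting the pieces together, an unoptimized C++ sketch of the Algorithm 1 loop might look as follows. It assumes the softThreshold and backtrack helpers sketched above, a positive diagonal, and uses a crude relative-change test as a stand-in for the exact Δsubg criterion.

#include <Eigen/Dense>
#include <algorithm>
#include <cmath>

// Sketch of the CONCORD-ISTA outer loop (Algorithm 1); not the authors' code.
Eigen::MatrixXd concordIsta(const Eigen::MatrixXd& S, double lambda,
                            double eps, double c = 0.5) {
    const int p = static_cast<int>(S.rows());
    Eigen::MatrixXd Omega = Eigen::MatrixXd::Identity(p, p);
    auto h1 = [&](const Eigen::MatrixXd& X) {   // smooth part of Eq. (4)
        double v = 0.5 * (X * S * X).trace();
        for (int i = 0; i < p; ++i) v -= std::log(std::abs(X(i, i)));
        return v;
    };
    double tau = 1.0;                            // step-size variant (b)
    for (;;) {
        Eigen::MatrixXd G = 0.5 * (S * Omega + Omega * S);   // Eq. (5)
        for (int i = 0; i < p; ++i) G(i, i) -= 1.0 / Omega(i, i);
        Eigen::MatrixXd next;
        tau = backtrack(Omega, h1, G, tau, c, lambda, next);
        double change = (next - Omega).norm() / std::max(Omega.norm(), 1.0);
        Omega = next;
        if (change < eps) return Omega;          // stand-in for Delta_subg test
    }
}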
2.3
Computational complexity
After the one-time calculation of S, the most significant computation in each iteration of the
CONCORD-ISTA and CONCORD-FISTA algorithms is the matrix-matrix multiplication W = SΩ
in the gradient term. If s is the number of non-zeros in Ω, then W can be computed using O(sp²) operations if we exploit the extreme sparsity in Ω. The second matrix-matrix multiplication, for the term
tr(Ω(SΩ)), can be computed efficiently using tr(ΩW) = Σ_{ij} Ω_ij w_ij over the set of non-zero Ω_ij's.
This computation only requires O(s) operations. The remaining computations are all at the element
level and can be completed in O(p²) operations. Therefore, the overall computational complexity for each iteration reduces to O(sp²). On the other hand, the proximal gradient algorithms for
the Gaussian framework require inversion of a full p × p matrix, which is non-parallelizable and
requires O(p³) operations. The coordinate-wise method for optimizing CONCORD in [6] also requires cycling through the p² entries of Ω in a specified order and thus does not allow parallelization.
In contrast, CONCORD-ISTA and CONCORD-FISTA can use 'perfectly parallel' implementations
to distribute the above matrix-matrix multiplications. At no step do we need to keep all of the dense
matrices S, SΩ, ∇h1 on a single machine. Therefore, CONCORD-ISTA and CONCORD-FISTA
are scalable to high dimensions, restricted only by the number of machines.
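As an illustration of the claimed O(sp²) cost, the following C++/Eigen sketch (ours) computes W = SΩ with a sparse Ω and accumulates tr(ΩW) only over the non-zeros of Ω.

#include <Eigen/Dense>
#include <Eigen/Sparse>

// W = S * Omega costs O(s p^2) when Omega holds s non-zeros, and
// tr(Omega W) = sum_ij Omega_ij W_ji then costs only O(s).
double traceTerm(const Eigen::MatrixXd& S,
                 const Eigen::SparseMatrix<double>& Omega,
                 Eigen::MatrixXd& W) {
    W = S * Omega;                                   // O(s p^2)
    double tr = 0.0;
    for (int k = 0; k < Omega.outerSize(); ++k)
        for (Eigen::SparseMatrix<double>::InnerIterator it(Omega, k); it; ++it)
            tr += it.value() * W(it.col(), it.row()); // O(s) in total
    return tr;
}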
3
Convergence Analysis
In this section, we prove convergence of the CONCORD-ISTA and CONCORD-FISTA methods along
with their respective convergence rates of O(1/k) and O(1/k²). We would like to point out that,
although the authors in [6] provide a proof of convergence for their coordinate-wise minimization
algorithm for CONCORD, they do not provide any rates of convergence. The arguments for convergence leverage the results in [7] but require some essential ingredients. We begin with proving
lower and upper bounds on the diagonal entries Ω_kk for Ω belonging to a level set of Q_con(Ω). The
lower bound on the diagonal entries of Ω establishes Lipschitz continuity of the gradient ∇h1(Ω)
based on the Hessian of the smooth function as stated in (6). The proof for the lower bound uses the
existence of an upper bound on the diagonal entries. Hence, we prove both bounds on the diagonal
entries. We begin by defining a level set C0 of the objective function, starting with an arbitrary initial
point Ω^(0) with a finite function value, as

  C0 = { Ω | Q_con(Ω) ≤ Q_con(Ω^(0)) = M }.    (10)

For the positive semidefinite matrix S, let U denote 1/√2 times the upper triangular matrix from the
LU decomposition of S, such that S = 2U^T U (the factor 2 simplifies further arithmetic). Assuming
the diagonal entries of S to be strictly nonzero (if s_kk = 0, then the k-th component can be ignored
upfront, since it has zero variance and is equal to a constant for every data point), we have at least
one k such that u_ki ≠ 0 for every i. Using this, we prove the following theorem.
Theorem 3.1. For any symmetric matrix Ω satisfying Ω ∈ C0, the diagonal elements of Ω are
bounded above and below by constants which depend only on M, λ and S. In other words,

  0 < a ≤ |Ω_kk| ≤ b,  for all k = 1, 2, ..., p,

for some constants a and b.
Proof. (a) Upper bound: Suppose |Ω_ii| = max{|Ω_kk|, for k = 1, 2, ..., p}. Then, we have

  M = Q_con(Ω^(0)) ≥ Q_con(Ω) = h1(Ω) + h2(Ω)
    ≥ −log det Ω_D + tr((UΩ)^T (UΩ)) + λ ‖Ω_X‖_1
    = −log det Ω_D + ‖UΩ‖²_F + λ ‖Ω_X‖_1.    (11)

Considering the ki-th entry in the Frobenius norm and the i-th column in the third term we get

  M ≥ −p log|Ω_ii| + ( Σ_{j=k}^{p} u_kj Ω_ji )² + λ Σ_{j=k, j≠i}^{p} |Ω_ji|.    (12)

Now, suppose |u_ki Ω_ii| = z and Σ_{j=k, j≠i}^{p} u_kj Ω_ji = x. Then

  |x| ≤ Σ_{j=k, j≠i}^{p} |u_kj| |Ω_ji| ≤ ū Σ_{j=k, j≠i}^{p} |Ω_ji|,

where ū = max{|u_kj|}, for j = k, ..., p, j ≠ i. Substituting in (12), for λ̄ = λ/(2ū), we have

  M̄ = M + λ̄² − p log|u_ki| ≥ −p log z + (z + x)² + 2λ̄|x| + λ̄²    (13)
     = −p log z + (z + x + λ̄ sign(x))² − 2λ̄ z sign(x).    (14)

Here, if x ≥ 0, then M̄ ≥ −p log z + z² using the first inequality (13), and if x < 0, then M̄ ≥
−p log z + 2λ̄z using the second inequality (14). In either case, the functions −p log z + z² and
−p log z + 2λ̄z are unbounded as z → ∞. Hence, the upper bound of M̄ on these functions
guarantees an upper bound b such that |Ω_ii| ≤ b. Therefore, |Ω_kk| ≤ b for all k = 1, 2, ..., p.
(b) Lower bound: By positivity of the trace term and the ℓ1 term (for off-diagonals), we have

  M ≥ −log det Ω_D = Σ_{i=1}^{p} −log|Ω_ii|.    (15)

The negative log function g(z) = −log(z) is a convex, decreasing function; on the interval (0, b] it
attains its minimum at z* = b with g(z*) = −log b. Therefore, for any k = 1, 2, ..., p, we have

  M ≥ Σ_{i=1}^{p} −log|Ω_ii| ≥ −(p − 1) log b − log|Ω_kk|.    (16)

Simplifying the above equation, we get

  log|Ω_kk| ≥ −M − (p − 1) log b.

Therefore, |Ω_kk| ≥ a = e^{−M − (p−1) log b} > 0 serves as a lower bound for all k = 1, 2, ..., p.
Given that the function values are non-increasing along the iterates of Algorithms 1, 2 and 3, the
sequence of Ω^(k) satisfies Ω^(k) ∈ C0 for k = 1, 2, .... The lower bounds on the diagonal elements of
Ω^(k) provide the Lipschitz continuity using

  ∇²h1(Ω^(k)) ≼ (a^{−2} + ‖S‖_2)(I ⊗ I).    (17)

Therefore, using the mean-value theorem, the gradient ∇h1 satisfies

  ‖∇h1(Ω) − ∇h1(Θ)‖_F ≤ L ‖Ω − Θ‖_F,    (18)

with the Lipschitz continuity constant L = a^{−2} + ‖S‖_2. The remaining argument for convergence
follows from the theorems in [7].
Theorem 3.2 ([7, Theorem 3.1]). Let {Ω^(k)} be the sequence generated by either Algorithm 1 with
constant step size or with backtracking line-search. Then, for the solution Ω*, for any k ≥ 1,

  Q_con(Ω^(k)) − Q_con(Ω*) ≤ α L ‖Ω^(0) − Ω*‖²_F / (2k),    (19)

where α = 1 for the constant step size setting and α = c for the backtracking step size setting.
Theorem 3.3 ([7, Theorem 4.4]). Let {Ω^(k)}, {Θ^(k)} be the sequences generated by Algorithm 2
with either constant step size or backtracking line-search. Then, for the solution Ω*, for any k ≥ 1,

  Q_con(Ω^(k)) − Q_con(Ω*) ≤ 2 α L ‖Ω^(0) − Ω*‖²_F / (k + 1)²,    (20)

where α = 1 for the constant step size setting and α = c for the backtracking step size setting.
Hence, CONCORD-ISTA and CONCORD-FISTA converge at the rates of O(1/k) and O(1/k²) for
the k-th iteration.
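As a small illustration of the O(1/k) rate, the bound (19) can be inverted to estimate how many ISTA iterations suffice for a target accuracy ε; the helper below (our sketch, not from the paper) just evaluates k ≥ αL d0 / (2ε) with d0 = ‖Ω^(0) − Ω*‖²_F.

#include <cmath>

// Iterations sufficient for Q_con(Omega^(k)) - Q_con(Omega*) <= eps by (19).
long itersForAccuracy(double alpha, double L, double d0, double eps) {
    return static_cast<long>(std::ceil(alpha * L * d0 / (2.0 * eps)));
}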
4
Implementation & Numerical Experiments
In this section, we outline algorithm implementation details and present results of our comprehensive numerical evaluation. Section 4.1 gives performance comparisons from using synthetic multivariate Gaussian datasets. These datasets are generated from a wide range of sample sizes (n) and
dimensionality (p). Additionally, convergence of CONCORD-ISTA and CONCORD-FISTA will be
illustrated. Section 4.2 has timing results from analyzing a real breast cancer dataset with outliers.
Comparisons are made to the coordinate-wise CONCORD implementation in gconcord package
for R available at http://cran.r-project.org/web/packages/gconcord/.
For implementing the proposed algorithms, we can take advantage of existing linear algebra libraries. Most of the numerical computations in Algorithms 1 and 2 are linear algebra operations, and, unlike the sequential coordinate-wise CONCORD algorithm, the CONCORD-ISTA and
CONCORD-FISTA implementations can solve increasingly larger problems as more scalable and efficient linear algebra libraries become available. For this work, we opted to use the Eigen
library [11] for its sparse linear algebra routines written in C++. Algorithms 1 and 2 were also written in C++ and then interfaced to R for testing. Table 1 gives names for the various CONCORD-ISTA and
CONCORD-FISTA versions using different initial step size choices.
4.1
Synthetic Datasets
Synthetic datasets were generated from true sparse positive random Ω matrices of three sizes:
p ∈ {1000, 3000, 5000}. Instances of random matrices used here consist of 4995, 14985 and
24975 non-zeros, corresponding to 1%, 0.33% and 0.20% edge densities, respectively. For each
p, Gaussian and t-distributed datasets of sizes n ∈ {0.25p, 0.75p, 1.25p} were used as inputs.
The initial guess, Ω^(0), and the convergence criteria were matched to those of the coordinate-wise CONCORD implementation. Highlights of the results are summarized below, and the complete set of
comparisons is given in Supplementary Materials Section A.
For normally distributed synthetic datasets, our experiments indicate that the two variations of the
CONCORD-ISTA method show little performance difference. However, ccista_0 was marginally
faster in our tests. On the other hand, the ccfista_1 variation of CONCORD-FISTA, which uses
τ_(k+1,0) = τ_k as the initial step size, was significantly faster than ccfista_0. Table 2 gives actual
running times for the two best performing algorithms, ccista_0 and ccfista_1, against the
coordinate-wise concord. As p and n increase, ccista_0 performs very well. For smaller n
and λ, coordinate-wise concord performs well (more in Supplemental Section A). This can be
attributed to the min(O(np²), O(p³)) computational complexity of coordinate-wise CONCORD [6],
and to the sparse linear algebra routines used in the CONCORD-ISTA and CONCORD-FISTA implementations slowing down as the number of non-zero elements in Ω increases. On the other hand, for
a large n fraction (n = 1.25p), the proposed methods ccista_0 and ccfista_1 are significantly
faster than coordinate-wise concord. In particular, when p = 5000 and n = 6250, the speed-up
of ccista_0 can be as much as 150 times over coordinate-wise concord. Also, for t-distributed
synthetic datasets, ccista_0 is generally fastest, especially when n and p are both large.
[Figure 1 appears here: panels for ccista_0 and ccfista_1 plotting the subgradient Δsubg (log scale) against the iteration count, with one curve per λ ∈ {0.05, 0.1, 0.2, 0.4, 0.5}.]
Figure 1: Convergence of CONCORD-ISTA and CONCORD-FISTA for threshold Δsubg < 10^{−5}.

Convergence behavior of the CONCORD-ISTA and CONCORD-FISTA methods is shown in Figure 1.
The best performing algorithms, ccista_0 and ccfista_1, are shown. The vertical axis is
the subgradient Δsubg (see Algorithms 1, 2). The plots show that ccista_0 seems to converge at a
constant rate, much faster than ccfista_1, which appears to slow down after a few initial iterations.
While the theoretical convergence results from Section 3 prove convergence rates of O(1/k) and
O(1/k²) for CONCORD-ISTA and CONCORD-FISTA, in practice ccista_0 with constant step
size performed the fastest in the tests in this section.

When a good initial guess Ω^(0) is available, warm-starting the ccista_0 and ccfista_0 algorithms
substantially shortens the running times. Simulations with Gaussian datasets indicate the running
times can be shortened by, on average, as much as 60%. Complete simulation results are given in
the Supplemental Section A.6.

4.2
Real Data

Real datasets arising from various physical and biological sciences often are not multivariate Gaussian and can have outliers. Hence, convergence characteristics may be different on such datasets. In
this section, the performance of the proposed methods is assessed on a breast cancer dataset [12]. This
dataset contains expression levels of 24481 genes on 266 patients with breast cancer. Following the
approach in Khare et al. [6], the number of genes is reduced by utilizing clinical information that is
provided together with the microarray expression dataset. In particular, survival analysis via univariate Cox regression with patient survival times is used to select a subset of genes closely associated
with breast cancer. A choice of p-value < 0.03 yields a reduced dataset with p = 4433 genes.

Oftentimes, graphical model selection algorithms are applied in a non-Gaussian and n ≪ p setting
such as the case here. In this n ≪ p setting, the coordinate-wise CONCORD algorithm is especially
fast due to its computational complexity of O(np²). However, even in this setting, the newly proposed
methods ccista_0, ccista_1, and ccfista_1 perform competitively to, or often better than,
concord, as illustrated in Table 3. On this real dataset, ccista_1 performed the fastest, whereas
ccista_0 was the fastest on synthetic datasets.

5
Conclusion

The Gaussian graphical model estimation or inverse covariance estimation problem has seen tremendous advances in the past few years. In this paper we propose using proximal gradient methods to solve
the general non-Gaussian sparse inverse covariance estimation problem. Rates of convergence
were established for the CONCORD-ISTA and CONCORD-FISTA algorithms. Coordinate-wise
minimization has been the standard approach to this problem thus far, and we provide numerical results comparing CONCORD-ISTA/FISTA and coordinate-wise minimization. We demonstrate that CONCORD-ISTA outperforms the coordinate-wise approach in general, and in high dimensional settings CONCORD-ISTA can outperform coordinate-wise optimization by orders of magnitude. The
methodology is also tested on real data sets. We undertake a comprehensive treatment of the problem by also examining the dual formulation and consider methods to maximize the dual objective.
We note that efforts similar to ours for the Gaussian case have appeared in not one but several NIPS
and other publications. Our approach, on the other hand, gives a complete and thorough treatment of
the non-Gaussian partial correlation graph estimation problem, all in this one self-contained paper.
Table 1: Naming convention for step size variations

Variation    Method            Initial step
concord      Coordinate-wise   -
ccista_0     ISTA              Constant
ccista_1     ISTA              Barzilai-Borwein
ccfista_0    FISTA             Constant
ccfista_1    FISTA             τ_k
Table 2: Timing comparison of concord and the proposed methods ccista_0 and ccfista_1.

p      n      λ      NZ%     concord           ccista_0         ccfista_1
                             iter  seconds     iter  seconds    iter  seconds
1000   250    0.150  1.52      9      3.2       13      1.8      20      3.3
1000   250    0.163  0.99      9      2.6       18      2.0      26      3.3
1000   250    0.300  0.05      9      2.6       15      1.2      23      2.7
1000   750    0.090  1.50      9      8.9       11      1.4      17      2.5
1000   750    0.103  0.76      9      8.4       15      1.6      24      3.3
1000   750    0.163  0.23      9      8.0       15      1.6      24      2.8
1000   1250   0.071  1.41      9     41.3       10      1.4      17      2.9
1000   1250   0.077  0.97      9     40.5       15      1.7      24      3.3
1000   1250   0.163  0.23      9     43.8       13      1.2      23      2.8
3000   750    0.090  1.10     17    147.4       20     32.4      25     53.2
3000   750    0.103  0.47     17    182.4       28     36.0      35     60.1
3000   750    0.163  0.08     16    160.1       28     28.3      26     39.9
3000   2250   0.053  1.07     16    388.3       17     28.5      17     39.6
3000   2250   0.059  0.56     16    435.0       28     38.5      26     61.9
3000   2250   0.090  0.16     16    379.4       16     19.9      15     23.6
3000   3750   0.040  1.28     16   2854.2       17     33.0      17     47.3
3000   3750   0.053  0.28     16   2921.5       15     23.5      16     31.4
3000   3750   0.163  0.07     15   2780.5       25     35.1      32     56.1
5000   1250   0.066  1.42     17    832.7       32    193.9      37    379.2
5000   1250   0.077  0.53     17    674.7       30    121.4      35    265.8
5000   1250   0.103  0.10     17    667.6       27     81.2      33    163.0
5000   3750   0.039  1.36     17   2102.8       18    113.0      17    176.3
5000   3750   0.049  0.31     17   1826.6       16     73.4      17    107.4
5000   3750   0.077  0.10     17   2094.7       29     95.8      33    178.1
5000   6250   0.039  0.27     17  15629.3       17     93.9      17    130.0
5000   6250   0.077  0.10     17  15671.1       27    101.0      25    123.9
5000   6250   0.163  0.04     16  14787.8       26     97.3      34    173.7
Table 3: Running time comparison on the breast cancer dataset

λ      NZ%     concord       ccista_0      ccista_1      ccfista_0        ccfista_1
               iter   sec    iter   sec    iter   sec    iter     sec     iter   sec
0.450  0.110    80   724.5    132  686.7    123  504.0    250   10870.3    201  672.6
0.451  0.109    80   664.2    129  669.2    112  457.0    216    7867.2    199  662.9
0.454  0.106    80   690.3    130  686.2     81  352.9    213    7704.2    198  677.8
0.462  0.101    79   671.6    125  640.4    109  447.1    214    7978.4    196  646.3
0.478  0.088    77   663.3    117  558.6     87  337.9    202    6913.1    197  609.0
0.515  0.063    63   600.6    104  466.0     75  282.4    276    9706.9    184  542.0
0.602  0.027    46   383.5     80  308.0     66  229.7    172    4685.2    152  409.1
0.800  0.002    24   193.6     45  133.8     32   92.2     74    1077.2     70  169.8
Acknowledgments: S.O., O.D. and B.R. were supported in part by the National Science Foundation under grants DMS-0906392, DMS-CMG 1025465, AGS-1003823, DMS-1106642, DMS
CAREER-1352656 and grants DARPA-YFAN66001-111-4131 and SMC-DBNKY. K.K. was partially supported by NSF grant DMS-1106084. S.O. was also supported in part by the Laboratory
Directed Research and Development Program of Lawrence Berkeley National Laboratory under
U.S. Department of Energy Contract No. DE-AC02-05CH11231.
References
[1] Onureena Banerjee, Laurent El Ghaoui, and Alexandre d'Aspremont. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. JMLR, 9:485–516, 2008.
[2] Onkar Anant Dalal and Bala Rajaratnam. G-AMA: Sparse Gaussian graphical model estimation via alternating minimization. arXiv preprint arXiv:1405.3034, 2014.
[3] Jie Peng, Pei Wang, Nengfeng Zhou, and Ji Zhu. Partial correlation estimation by joint sparse regression models. Journal of the American Statistical Association, 104(486):735–746, June 2009.
[4] Guilherme V. Rocha, Peng Zhao, and Bin Yu. A path following algorithm for Sparse Pseudo-Likelihood Inverse Covariance Estimation (SPLICE). Technical Report 60628102, 2008.
[5] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Applications of the lasso and grouped lasso to the estimation of sparse graphical models. Technical report, 2010.
[6] Kshitij Khare, Sang-Yun Oh, and Bala Rajaratnam. A convex pseudo-likelihood framework for high dimensional partial correlation estimation with convergence guarantees. Journal of the Royal Statistical Society: Series B (to appear), 2014.
[7] Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[8] R. T. Rockafellar. Monotone operators and the proximal point algorithm. SIAM Journal on Control and Optimization, 14(5):877–898, 1976.
[9] Yurii Nesterov. A method of solving a convex programming problem with convergence rate O(1/k²). In Soviet Mathematics Doklady, volume 27, pages 372–376, 1983.
[10] J. Barzilai and J. M. Borwein. Two-point step size gradient methods. IMA Journal of Numerical Analysis, 8(1):141–148, 1988.
[11] Gaël Guennebaud, Benoît Jacob, et al. Eigen v3. http://eigen.tuxfamily.org, 2010.
[12] Howard Y. Chang, Dimitry S. A. Nuyten, Julie B. Sneddon, Trevor Hastie, Robert Tibshirani, Therese Sørlie, Hongyue Dai, Yudong D. He, Harry Bartelink, Matt van de Rijn, Patrick O. Brown, and Marc J. van de Vijver. Robustness, scalability, and integration of a wound-response gene expression signature in predicting breast cancer survival. Proceedings of the National Academy of Sciences of the United States of America, 102(10):3738–3743, March 2005.
by Dynamic Programming
Kustaa Kangas Teppo Niinim?aki Mikko Koivisto
Helsinki Institute for Information Technology HIIT
Department of Computer Science, University of Helsinki
{jwkangas,tzniinim,mkhkoivi}@cs.helsinki.fi
Abstract
We present an algorithm for finding a chordal Markov network that maximizes
any given decomposable scoring function. The algorithm is based on a recursive
characterization of clique trees, and it runs in O(4n ) time for n vertices. On
an eight-vertex benchmark instance, our implementation turns out to be about
ten million times faster than a recently proposed, constraint satisfaction based
algorithm (Corander et al., NIPS 2013). Within a few hours, it is able to solve
instances up to 18 vertices, and beyond if we restrict the maximum clique size.
We also study the performance of a recent integer linear programming algorithm
(Bartlett and Cussens, UAI 2013). Our results suggest that, unless we bound the
clique sizes, currently only the dynamic programming algorithm is guaranteed to
solve instances with around 15 or more vertices.
1
Introduction
Structure learning in Markov networks, also known as undirected graphical models or Markov
random fields, has attracted considerable interest in computational statistics, machine learning, and
artificial intelligence. Natural score-and-search formulations of the task have, however, proved to be
computationally very challenging. For example, Srebro [1] showed that finding a maximum-likelihood
chordal (or triangulated or decomposable) Markov network is NP-hard even for networks of treewidth
at most 2, in sharp contrast to the treewidth-1 case [2]. Consequently, various approximative
approaches and local search heuristics have been proposed [3, 1, 4, 5, 6, 7, 8, 9, 10, 11].
Only very recently, Corander et al. [12] published the first non-trivial algorithm that is guaranteed to
find a globally optimal chordal Markov network. It is based on expressing the search space in terms of
logical constraints and employing the state-of-the-art solver technology equipped with optimization
capabilities. To this end, they adopt the usual clique tree, or junction tree, representation of chordal
graphs, and work with a particular characterization of clique trees, namely, that for any vertex of the
graph the cliques containing that vertex induce a connected subtree in the clique tree. The key idea
is to rephrase this property as what they call a balancing condition: for any vertex, the number of
cliques that contain it is one larger than the number of edges (the intersection of the adjacent cliques)
that contain it. They show that with appropriate, efficient encodings of the constraints, an eight-vertex
instance can be solved to the optimum in a few days of computing, which could have been impossible
by a brute-force search. However, while the constraint satisfaction approach enables exploiting the
powerful technology, it is currently not clear, whether it scales to larger instances.
Here, we investigate an alternative approach to find an optimal chordal Markov network. Like the
work of Corander at al. [12], our algorithm stems from a particular characterization of clique trees of
chordal graphs. However, our characterization is quite different, being recursive in nature. It concords
the structure of common scoring functions and so yields a natural dynamic programming algorithm
that grows an optimal clique tree by selecting its cliques one by one. In its basic form, the algorithm
is very inefficient. Fortunately, the fine structure of the scoring function enables us to further factorize
the main dynamic programming step and so bring the time requirement down to O(4^n) for instances
with n vertices. We also show that by setting the maximum clique size, equivalently the treewidth
(plus one), to w ≤ n/4, the time requirement can be improved to O(3^{n−w} w \binom{n}{w}).
While our recursive characterization of clique trees and the resulting dynamic programming algorithm
are new, they are similar in spirit to a recent work by Korhonen and Parviainen [13]. Their algorithm
finds a bounded-treewidth Bayesian network structure that maximizes a decomposable score, running
in 3^n n^{w+O(1)} time, where w is the treewidth bound. For large w it thus is superexponentially slower
than our algorithm. The problems solved by the two algorithms are, of course, different: the class of
treewidth-w Bayesian networks properly extends the class of treewidth-w chordal Markov networks.
There is also more recent work for finding bounded-treewidth Bayesian networks by employing
constraint solvers: Berg et al. [14] solve the problem by casting into maximum satisfiability, while
Parviainen et al. [15] cast into integer linear programming. For unbounded-treewidth Bayesian
networks, O(2^n n²)-time algorithms based on dynamic programming are available [16, 17, 18].
However, none of these dynamic programming algorithms, nor their A* search based variant [19],
enables adding the constraints of chordality or bounded width.
But the integer linear programming approach to finding optimal Bayesian networks, especially the
recent implementation by Bartlett and Cussens [20], also enables adding the further constraints.1
We are not aware of any reasonable worst-case bounds for the algorithm's time complexity, nor any
previous applications of the algorithm to the problem of learning chordal Markov networks. As a
second contribution of this paper, we report on an experimental study of the algorithm's performance,
using both synthetic data and some frequently used machine learning benchmark datasets.
The remainder of this article begins by formulating the learning task as an optimization problem. Next
we present our recursive characterization of clique trees and a derivation of the dynamic programming
algorithm, with a rigorous complexity analysis. The experimental setting and results are reported in a
dedicated section. We end with a brief discussion.
2
The problem of learning chordal Markov networks
We adopt the hypergraph treatment of chordal Markov networks. For a gentler presentation and
proofs, see Lauritzen and Spiegelhalter [21, Sections 6 and 7], Lauritzen [22], and references therein.
Let p be a positive probability function over a product of n state spaces. Let G be an undirected
graph on the vertex set V = {1, ..., n}, and call any maximal set of pairwise adjacent vertices of G a
clique. Together, G and p form a Markov network if p(x_1, ..., x_n) = ∏_C ψ_C(x_C), where C runs
through the cliques of G and each ψ_C is a mapping to positive reals. Here x_C denotes (x_v : v ∈ C).

The factors ψ_C take a particularly simple form when the graph G is chordal, that is, when every cycle
of G of length greater than three has a chord, which is an edge of G joining two nonconsecutive
vertices of the cycle. The chordality requirement can be expressed in terms of hypergraphs. Consider
first an arbitrary hypergraph on V, identified with a collection C of subsets of V such that each
element of V belongs to some set in C. We call C reduced if no set in C is a proper subset of another
set in C, and acyclic if, in addition, the sets in C admit an ordering C_1, ..., C_m that has the running
intersection property: for each 2 ≤ j ≤ m, the intersection S_j = C_j ∩ (C_1 ∪ ... ∪ C_{j−1}) is a subset
of some C_i with i < j. We call the sets S_j the separators. The multiset of separators, denoted by
S, does not depend on the ordering and is thus unique for an acyclic hypergraph. Now, letting C be
the set of cliques of the chordal graph G, it is known that the hypergraph C is acyclic and that each
factor ψ_{C_j}(x_{C_j}) can be specified as the ratio p(x_{C_j})/p(x_{S_j}) of marginal probabilities (where we
define p(x_{S_1}) = 1). Also the converse holds: by connecting all pairs of vertices within each set of an
acyclic hypergraph we obtain a chordal graph.
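To illustrate the definition, here is a small C++ sketch (ours, not from the paper) that checks the running intersection property for a given ordering of the sets.

#include <algorithm>
#include <iterator>
#include <set>
#include <vector>

// Tests the running intersection property of an ordered list C1,...,Cm:
// every separator Sj (the intersection of Cj with the union of the earlier
// sets) must be contained in some single earlier Ci.
bool hasRunningIntersection(const std::vector<std::set<int>>& C) {
    std::set<int> seen;                           // union of C1,...,C_{j-1}
    for (std::size_t j = 0; j < C.size(); ++j) {
        std::set<int> Sj;
        std::set_intersection(C[j].begin(), C[j].end(),
                              seen.begin(), seen.end(),
                              std::inserter(Sj, Sj.begin()));
        bool contained = (j == 0);                // S1 is empty by convention
        for (std::size_t i = 0; i < j && !contained; ++i)
            contained = std::includes(C[i].begin(), C[i].end(),
                                      Sj.begin(), Sj.end());
        if (!contained) return false;
        seen.insert(C[j].begin(), C[j].end());
    }
    return true;
}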
Given multiple observations over the product state space, the data, we associate with each hypergraph C on V a score

  s(C) = ∏_{C ∈ C} p(C) / ∏_{S ∈ S} p(S),

where the local score p(A) measures the probability (density) of the data projected on A ⊆ V, possibly extended by some structure prior
or penalization term. The structure learning problem is to find an acyclic hypergraph C on V that
¹ We thank an anonymous reviewer of an earlier version of this work for noticing this fact, which apparently
was not well known in the community, including the authors and reviewers of the work of Corander et al. [12].
maximizes the score s(C). This formulation covers a Bayesian approach, in which each p(A) is the
marginal likelihood for the data on A under a Dirichlet–multinomial model [23, 7, 12], but also the
maximum-likelihood formulation, in which each p(A) is the empirical probability of the data on
A [23, 1]. Motivated by these instantiations, we will assume that for any given A the value p(A) can
be efficiently computed, and we treat the values as the problem input.
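As a concrete reading of the score, the following C++ sketch (ours; the container layout and the name junctionTreeLogScore are assumptions) evaluates log s(C) for a junction tree, given precomputed local scores log p(A).

#include <algorithm>
#include <iterator>
#include <map>
#include <set>
#include <utility>
#include <vector>

// In log-space the decomposable score is the sum of log p(C) over the tree's
// nodes minus the sum of log p(S) over its edge labels (the separators).
double junctionTreeLogScore(
        const std::vector<std::set<int>>& cliques,
        const std::vector<std::pair<int, int>>& edges,  // indices into cliques
        const std::map<std::set<int>, double>& logScore) {
    double total = 0.0;
    for (const auto& C : cliques) total += logScore.at(C);
    for (const auto& e : edges) {
        std::set<int> S;                                // separator = intersection
        std::set_intersection(cliques[e.first].begin(), cliques[e.first].end(),
                              cliques[e.second].begin(), cliques[e.second].end(),
                              std::inserter(S, S.begin()));
        total -= logScore.at(S);
    }
    return total;
}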
Our approach to the problem exploits the fact [22, Prop. 2.27] that a reduced hypergraph C is acyclic
if and only if there is a junction tree T for C, that is, an undirected tree on the node set C that has the
junction property (JP): for any two nodes A and B in C and any C on the unique path in T between
A and B we have A ∩ B ⊆ C. Furthermore, by labeling each edge of T by the intersection of its
endpoints, the edge labels amount to the multiset of separators of the hypergraph C. Thus a junction
tree gives the separators explicitly, which motivates us to write s(T ) for the respective score s(C)
and solve the structure learning problem by finding a junction tree T over V that maximizes s(T ).
Here and henceforth, we say that a tree is over a set if the union of the tree's nodes equals the set.
As our problem formulation does not explicitly refer to the underlying chordal graph and cliques, we
will speak of junction trees instead of equivalent but semantically more loaded clique trees. From
here on, a junction tree refers specifically to a junction tree whose node set is a reduced hypergraph.
3
Recursive characterization and dynamic programming
The score of a junction tree obeys a recursive factorization along subtrees (by rooting the tree at any
node), given in Section 3.2 below. While this is the essential structural property of the score for our
dynamic programming algorithm, it does not readily yield the needed recurrence for the optimal
score. Indeed, we need a characterization of, not a fixed junction tree, but the entire search space
of junction trees that concords the factorization of the score. We next give such a characterization
before we proceed to the derivation and analysis of the dynamic programming algorithm.
3.1
Recursive partition trees
We characterize the set of junction trees by expressing the ways in which they can partition V . The
idea is that when any tree of interest is rooted at some node, the subtrees amount to a partition of not
only the remaining nodes in the tree (which holds trivially) but also the remaining vertices (contained
in the nodes); and the subtrees also satisfy this property. See Figure 1 for an illustration.
If T is a tree over a set S, we write C(T ) for its node set and V (T ) for the union of its nodes, S. For
a family R of subsets of a set S, we say that R is a partition of S and denote R ⊏ S if the members
of R are non-empty and pairwise disjoint, and their union is S.
Definition 1 (Recursive partition tree, RPT). Let T be a tree over a finite set V, rooted at C ∈
C(T). Denote by C_1, ..., C_k the children of C, by T_i the subtree rooted at C_i, and let R_i = V(T_i) \ C.
We say that T is a recursive partition tree (RPT) if it satisfies the following three conditions: (R1)
each T_i is a RPT over C_i ∪ R_i, (R2) {R_1, ..., R_k} ⊏ V \ C, and (R3) C ∩ C_i is a proper subset of
both C and C_i. We denote by RPT(V, C) the set of all RPTs over V rooted at C.
We now present the following theorems to establish that, when edge directions are ignored, the
definitions of junction trees and recursive partition trees are equivalent.
Theorem 1. A junction tree T is a RPT when rooted at any C ∈ C(T).
Theorem 2. A RPT is a junction tree (when considered undirected).
Our proofs of these results will use the following two observations:
Observation 3. A subtree of a junction tree is also a junction tree.
Observation 4. If T is a RPT, so is its every subtree rooted at any C ∈ C(T).
Proof of Theorem 1. Let T be a junction tree over V and consider an arbitrary C ∈ C(T). We show
by induction over the number of nodes that T is a RPT when rooted at C. Let C_i, T_i, and R_i be
defined as in Definition 1 and consider the three RPT conditions. If C is the only node in T, the
conditions hold trivially. Assume they hold up to n − 1 nodes and consider the case |C(T)| = n. We
show that each condition holds.
[Figure 1 appears here: a ten-vertex chordal graph on vertices 0–9.]
Figure 1: An example of a chordal graph and a corresponding recursive partition. The root node
C = {3, 4, 5} (dark grey) partitions the remaining vertices into three disjoint sets R_1 = {0, 1, 2},
R_2 = {6}, and R_3 = {7, 8, 9} (light grey), which are connected to the root node by its child nodes
C_1 = {1, 2, 3}, C_2 = {4, 5, 6}, and C_3 = {5, 7} respectively (medium grey).
(R1) By Observation 3 each T_i is a junction tree and thus, by the induction assumption, a RPT. It
remains to show that V(T_i) = C_i ∪ R_i. By definition both C_i ⊆ V(T_i) and R_i ⊆ V(T_i). Thus
C_i ∪ R_i ⊆ V(T_i). Assume then that x ∈ V(T_i), i.e. x ∈ C′ for some C′ ∈ C(T_i). If x ∉ R_i,
then by definition x ∈ C. Since C_i is on the path between C and C′, by JP x ∈ C_i. Therefore
V(T_i) ⊆ C_i ∪ R_i.
(R2) We show that the sets R_i partition V \ C. First, each R_i is non-empty since by the definition of
a reduced hypergraph C_i is non-empty and not contained in C. Second, ⋃_i R_i = ⋃_i (V(T_i) \ C) =
(C ∪ ⋃_i V(T_i)) \ C = (⋃ C(T)) \ C = V \ C. Finally, to see that the R_i are pairwise disjoint, assume to
the contrary that x ∈ R_i ∩ R_j for distinct R_i and R_j. This implies x ∈ A ∩ B for some A ∈ C(T_i)
and B ∈ C(T_j). Now, by JP x ∈ C, which contradicts the definition of R_i.
(R3) Follows by the definition of reduced hypergraph.
Proof of Theorem 2. Assume now that T is a RPT over V. We show that T is a junction tree. To see
that T has JP, consider arbitrary A, B ∈ C(T). We show that A ∩ B is a subset of every C ∈ C(T)
on the path between A and B.

Consider first the case that A is an ancestor of B and let B = C_1, ..., C_m = A be the path that
connects them. We show by induction over m that C_1 ∩ C_m ⊆ C_i for every i = 1, ..., m. The base
case m = 1 is trivial. Assume m > 1 and the claim holds up to m − 1. If i = m, the claim is trivial.
Let i < m. Denote by T_{m−1} the subtree rooted at C_{m−1} and let R_{m−1} = V(T_{m−1}) \ C_m. Since
C_1 ⊆ V(T_{m−1}) we have that C_1 ∩ C_m = (C_1 ∩ V(T_{m−1})) ∩ C_m = C_1 ∩ (C_m ∩ V(T_{m−1})). By
Observation 4 T_{m−1} is a RPT. Therefore, from (R1) it follows that V(T_{m−1}) = C_{m−1} ∪ R_{m−1} and
thus C_m ∩ V(T_{m−1}) = (C_m ∩ C_{m−1}) ∪ (C_m ∩ R_{m−1}) = C_m ∩ C_{m−1}. Plugging this above and
using the induction assumption we get C_1 ∩ C_m = C_1 ∩ (C_m ∩ C_{m−1}) ⊆ C_1 ∩ C_{m−1} ⊆ C_i.

Consider now the case that A and B have a least common ancestor C. By Observation 4, the subtree
rooted at C is a RPT. Thus, by (R1) and (R2) there are disjoint R and R′ such that A ⊆ C ∪ R and
B ⊆ C ∪ R′. Thus, A ∩ B ⊆ C, and consequently A ∩ B ⊆ A ∩ C. As we proved above, A ∩ C is
a subset of every node on the path between A and C, and therefore A ∩ B is also a subset of every
such node. Similarly, A ∩ B is a subset of every node on the path between B and C. Combining
these results, we have that A ∩ B is a subset of every node on the path between A and B.

Finally, to see that C(T) is reduced, assume the opposite, that A ⊆ B for distinct A, B ∈ C(T). Let
C be the node next to A on the path from A to B. By the initial assumption and JP, A ⊆ A ∩ B ⊆ C.
As either A or C is a child of the other, this contradicts (R3) in the subtree rooted at the parent.
3.2
The main recurrence
We want to find a junction tree T over V that maximizes the score s(T ). By Theorems 1 and 2 this
is equivalent to finding a RPT T that maximizes s(T ). Let T be a RPT rooted at C and denote by
C1 , . . . , Ck the children of C and by Ti the subtree rooted at Ci . Then, the score factorizes as follows
  s(T) = p(C) ∏_{i=1}^{k} s(T_i) / p(C ∩ C_i).    (1)
To see this, observe that each term of s(T ) is associated with a particular node or edge (separator) of
T . Thus the product of the s(Ti ) consists of exactly the terms of s(T ), except for the ones associated
with the root C of T and the edges between C and each Ci .
To make use of the above factorization, we introduce suitable constraints under which an optimal
tree can be constructed from subtrees that are, in turn, optimal with respect to analogous constraints
(cf. Bellman?s principle of optimality). Specifically, we define a function f that gives the score of an
optimal subtree over any subset of nodes as follows:
Definition 2. For S ⊆ V and ∅ ≠ R ⊆ V \ S, let f(S, R) be the score of an optimal RPT over
S ∪ R rooted at a proper superset of S. That is,

  f(S, R) = max { s(T) : S ⊊ C ⊆ S ∪ R, T ∈ RPT(S ∪ R, C) }.

Corollary 5. The score of an optimal RPT over V is given by f(∅, V).
We now show that f admits the following recurrence, which shall be used as the basis of our dynamic
programming algorithm.
Lemma 6. Let S ⊆ V and ∅ ≠ R ⊆ V \ S. Then

  f(S, R) = max { p(C) ∏_{i=1}^{k} f(S_i, R_i) / p(S_i) :
                  S ⊊ C ⊆ S ∪ R, {R_1, ..., R_k} ⊏ R \ C, S_1, ..., S_k ⊊ C }.
Proof. We first show inductively that the recurrence is well defined. Assume that the conditions
S ⊆ V and ∅ ≠ R ⊆ V \ S hold. Observe that R is non-empty, every set has a partition, and C
is selected to be non-empty. Therefore, all three maximizations are over non-empty ranges and it
remains to show that the product over i = 1, ..., k is well defined. If |R| = 1, then R \ C = ∅ and
the product equals 1 by convention. Assume now that f(S, R) is defined when |R| < m and consider
the case |R| = m. By construction S_i ⊆ V, ∅ ≠ R_i ⊆ V \ S_i and |R_i| < |R| for every i = 1, ..., k.
Thus, by the induction assumption each f(S_i, R_i) is defined and therefore the product is defined.

We now show that the recurrence indeed holds. Let the root C in Definition 2 be fixed and consider the
maximization over the trees T. By Definition 1, choosing a tree T ∈ RPT(S ∪ R, C) is equivalent
to choosing sets R_1, ..., R_k, sets C_1, ..., C_k, and trees T_1, ..., T_k such that (R0) R_i = V(T_i) \ C,
(R1) T_i is a RPT over C_i ∪ R_i rooted at C_i, (R2) {R_1, ..., R_k} ⊏ (S ∪ R) \ C, and (R3) C ∩ C_i is
a proper subset of C and C_i.

Observe first that (S ∪ R) \ C = R \ C and therefore (R2) is equivalent to choosing sets R_i such
that {R_1, ..., R_k} ⊏ R \ C.

Denote by S_i the intersection C ∩ C_i. We show that together (R0) and (R1) are equivalent to
saying that T_i is a RPT over S_i ∪ R_i rooted at C_i. Assume first that the conditions are true. By
(R1) it is sufficient to show that C_i ∪ R_i = S_i ∪ R_i. From (R1) it follows that C_i ⊆ V(T_i)
and therefore C_i \ C ⊆ V(T_i) \ C, which by (R0) implies C_i \ C ⊆ R_i. This in turn implies
C_i ∪ R_i = (C_i ∩ C) ∪ (C_i \ C) ∪ R_i = S_i ∪ R_i. Assume then that T_i is a RPT over S_i ∪ R_i rooted at
C_i. Condition (R0) holds since V(T_i) \ C = (S_i ∪ R_i) \ C = (S_i \ C) ∪ (R_i \ C) = ∅ ∪ R_i = R_i.
Condition (R1) holds since S_i ⊆ C_i ⊆ V(T_i) = S_i ∪ R_i and thus S_i ∪ R_i = C_i ∪ R_i.

Finally observe that (R3) is equivalent to first choosing S_i ⊊ C and then C_i ⊋ S_i. By (R1) it must
also be that C_i ⊆ V(T_i) = S_i ∪ R_i. Based on these observations, we can now write
  f(S, R) = max { s(T) : S ⊊ C ⊆ S ∪ R, {R_1, ..., R_k} ⊏ R \ C, S_1, ..., S_k ⊊ C,
                  ∀i: S_i ⊊ C_i ⊆ R_i ∪ S_i, ∀i: T_i is a RPT over S_i ∪ R_i rooted at C_i }.

Next we factorize s(T) using the factorization (1) of the score. In addition, once a root C, a partition
{R_1, ..., R_k}, and separators {S_1, ..., S_k} have been fixed, each pair (C_i, T_i) can be chosen
independently for different i. Thus, the above maximization can be written as

  max_{S ⊊ C ⊆ S ∪ R, {R_1,...,R_k} ⊏ R\C, S_1,...,S_k ⊊ C}
      p(C) ∏_{i=1}^{k} [ (1/p(S_i)) · max_{S_i ⊊ C_i ⊆ R_i ∪ S_i, T_i ∈ RPT(S_i ∪ R_i, C_i)} s(T_i) ].

By applying Definition 2 to the inner maximization the claim follows.
3.3
Fast evaluation
The direct evaluation of the recurrence in Lemma 6 would be very inefficient, especially since it
involves maximization over all partitions of the vertex set. In order to evaluate it more efficiently, we
decompose it into multiple recurrences, each of which can take advantage of dynamic programming.
Observe first that we can rewrite the recurrence as

  f(S, R) = max_{S ⊊ C ⊆ S ∪ R, {R_1,...,R_k} ⊏ R\C} p(C) ∏_{i=1}^{k} h(C, R_i),    (2)

where

  h(C, R) = max_{S ⊊ C} f(S, R) / p(S).    (3)

We have simply moved the maximization over S_i ⊊ C inside the product and written each factor
using a new function h. Due to how the sets C and R_i are selected, the arguments to h are always
non-empty and disjoint subsets of V. In a similar fashion, we can further rewrite recurrence (2) as

  f(S, R) = max_{S ⊊ C ⊆ S ∪ R} p(C) g(C, R \ C),    (4)

where we define

  g(C, U) = max_{{R_1,...,R_k} ⊏ U} ∏_{i=1}^{k} h(C, R_i).

Again, note that C and U are disjoint and C is non-empty. If U = ∅, then g(C, U) = 1. Otherwise

  g(C, U) = max_{∅ ≠ R ⊆ U} h(C, R) · max_{{R_2,...,R_k} ⊏ U\R} ∏_{i=2}^{k} h(C, R_i)
          = max_{∅ ≠ R ⊆ U} h(C, R) g(C, U \ R).    (5)

Thus, we have split the original recurrence into three simpler recurrences (4, 5, 3). We now obtain a
straightforward dynamic programming algorithm that evaluates f, g and h using these recurrences
with memoization, and then outputs the score f(∅, V) of an optimal RPT.
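For concreteness, the following self-contained C++ sketch implements the three recurrences with memoization over bitmask-encoded vertex subsets. It is our illustration rather than the authors' Junctor code: the struct name, the callback p (with p(∅) = 1, as in Section 2), the restriction to n < 32, and the multiplicative scores are assumptions of the sketch; a practical implementation would work in log-space and use the indexing of Remark 2 below.

#include <algorithm>
#include <cstdint>
#include <functional>
#include <unordered_map>

struct JunctionTreeDP {
    int n;                                    // number of vertices, n < 32
    std::function<double(uint32_t)> p;        // local score p(A), p(0) = 1
    std::unordered_map<uint64_t, double> memF, memG, memH;

    static uint64_t key(uint32_t a, uint32_t b) {
        return (uint64_t(a) << 32) | b;       // a and b are disjoint sets
    }
    // f(S, R): optimal RPT over S u R rooted at a proper superset of S, Eq. (4).
    double f(uint32_t S, uint32_t R) {
        auto it = memF.find(key(S, R));
        if (it != memF.end()) return it->second;
        double best = 0.0;                    // C = S u x for non-empty x in R
        for (uint32_t x = R; x != 0; x = (x - 1) & R)
            best = std::max(best, p(S | x) * g(S | x, R & ~x));
        return memF[key(S, R)] = best;
    }
    // g(C, U): optimal split of U into branches below C, Eq. (5).
    double g(uint32_t C, uint32_t U) {
        if (U == 0) return 1.0;
        auto it = memG.find(key(C, U));
        if (it != memG.end()) return it->second;
        uint32_t lo = U & (~U + 1);           // forcing the lowest element of U
        double best = 0.0;                    // into R avoids repeated splits
        for (uint32_t R = U; R != 0; R = (R - 1) & U)
            if (R & lo) best = std::max(best, h(C, R) * g(C, U & ~R));
        return memG[key(C, U)] = best;
    }
    // h(C, R): best f(S, R)/p(S) over proper subsets S of C, Eq. (3).
    double h(uint32_t C, uint32_t R) {
        auto it = memH.find(key(C, R));
        if (it != memH.end()) return it->second;
        double best = 0.0;
        uint32_t S = (C - 1) & C;             // largest proper subset of C
        for (;;) {
            best = std::max(best, f(S, R) / p(S));
            if (S == 0) break;
            S = (S - 1) & C;
        }
        return memH[key(C, R)] = best;
    }
    double solve() { return f(0, (1u << n) - 1); }   // f(empty set, V)
};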
3.4
Time and space requirements
We measure the time requirement by the number of basic operations, namely comparisons and
arithmetic operations, executed for pairs of real numbers. Likewise, we measure the space requirement
by the maximum number of real values stored at any point during the execution of the algorithm.
We consider both time and space in the more general setting where the width w ≤ n of the optimal
network is restricted by selecting every node (clique) C in recurrence (4) with the constraint |C| ≤ w.
We prove the following bounds by counting, for each of the three functions, the associated subset
triplets that meet the applicable disjointness, inclusion, and cardinality constraints:
Theorem 7. Let V be a set of size n and w ? n. Given the local scores of the subsets of V of size
atP
most w as input, a maximum-score junction tree over V ofPwidth at
most w can be found using
w
w
6 i=0 ni 3n?i basic operations and having a storage for 3 i=0 ni 2n?i real numbers.
Proof. To bound the number of basic operations needed, we consider the evaluation of each the
functions f , g, and h using the recurrences (4,5,3). Consider first f . Due to memoization, the
algorithm executes at most two basic operations (one comparison and one multiplication) per triplet
(S, R, C), with S and R disjoint,
S ? C ? S ? R, and |C| ? w. Subject to these constraints, a set C
of size i can be chosen in ni ways, the set S ? C in at most 2i ways, andthe set R \ C in 2n?i ways.
Pw
Pw
Thus, the number of basic operations needed is at most Nf = 2 i=0 ni 2n?i 2i = 2n+1 i=0 ni .
Similarly, for h the algorithm executes at most two basic operations per triplet (C, R, S), with now C
and R disjoint, |C| ? w, and S ? C. A calculation gives the same bound as for f . Finally consider g.
Now the algorithm executes at most two basic operations per triplet (C, U, R), with C and U disjoint,
|C| ? w, and ? 6= R ? U . A set C of size i can be chosen in ni ways, and the remaining n ? i
elements can be assigned into U and its subset R in 3n?i ways. Thus, the number of basic operations
6
w=3
w=4
Junctor, any
GOBNILP, large
GOBNILP, medium
GOBNILP, small
1h
60s
10
12
14
16
w=?
w=6
1h
1h
1h
1h
60s
60s
60s
60s
1s
1s
1s
1s
1s
8
w=5
18
8
10
12
14
16
18
8
10
12
14
16
18
8
10
12
14
16
18
1h
1h
1h
1h
1h
60s
60s
60s
60s
60s
1s
1s
1s
1s
1s
8
10
12
14
16
18
8
10
12
14
16
18
8
10
12
14
16
18
8
10
12
14
16
18
8
10
12
14
16
18
8
10
12
14
16
18
Figure 2: The running time of Junctor and GOBNILP as a function of the number of vertices for
varying widths w, on sparse (top) and dense (bottom) synthetic instances with 100 (?small?), 1000
(?medium?), and 10,000 (?large?) data samples. The dashed red line indicates the 4-hour timeout or
memout. For GOBNILP shown is the median of the running times on 15 random instances.
Pw
needed
is at most Ng = 2 i=0 ni 3n?i . Finally, it is sufficient to observe that there is a j such that
n n?i
is larger than ni 2n when i ? j, and smaller when i > j. Now because both terms sum up
i 3
to the same value 4n when i = 0, . . . , n, the bound Ng is always greater or equal to Nf .
We bound the storage requirement in a similar manner. For each function, the size of the first argument
is at most w and the second argument is disjoint from the first, yielding the claimed bound.
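For a quick sense of scale, the following small C++ helper (ours, not from the paper) evaluates the operation bound of Theorem 7 in floating point; for example, with n = 18 and w = n it is about 6 · 4^18 ≈ 4.1 · 10^11.

// Evaluates 6 * sum_{i=0..w} binom(n, i) * 3^(n-i), the bound of Theorem 7.
long double operationBound(int n, int w) {
    long double sum = 0.0L, binom = 1.0L;           // binom = C(n, i), i = 0
    long double pow3 = 1.0L;
    for (int k = 0; k < n; ++k) pow3 *= 3.0L;       // pow3 = 3^n
    for (int i = 0; i <= w; ++i) {
        sum += binom * pow3;                        // C(n, i) * 3^(n - i)
        binom *= (long double)(n - i) / (i + 1);    // -> C(n, i + 1)
        pow3 /= 3.0L;                               // -> 3^(n - i - 1)
    }
    return 6.0L * sum;
}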
Remark 1. For w = n, the bounds for the number of basic operations and storage requirement in
Theorem 7 become 6 · 4^n and 3 · 3^n, respectively. When w ≤ n/4, the former bound can be replaced
by 6w \binom{n}{w} 3^{n−w}, since \binom{n}{i} 3^{n−i} ≤ \binom{n}{i+1} 3^{n−i−1} if and only if i ≤ (n − 3)/4.
Remark 2. Memoization requires indexing with pairs of disjoint sets. Representing sets as integers
allows efficient lookups in a two-dimensional array, using O(4^n) space. We can achieve O(3^n)
space by mapping a pair of sets (A, B) to Σ_{a=1}^{n} 3^{a−1} I_a(A, B), where I_a(A, B) is 1 if a ∈ A, 2 if
a ∈ B, and 0 otherwise. Each pair gets a unique index from 0 to 3^n − 1 into a compact array. A naïve
evaluation of the index adds an O(n) factor to the running time. This can be improved to constant
amortized time by updating the index incrementally while iterating over sets.
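A direct C++ rendering of this index (our sketch; the name pairIndex is not from the paper) looks as follows; an incremental variant would instead add or subtract 3^{a−1} or 2 · 3^{a−1} as elements enter or leave A and B.

#include <cstdint>

// Maps a disjoint pair (A, B) of subsets of an n-element ground set to
// sum_a 3^(a-1) * I_a(A, B); indices cover 0 .. 3^n - 1 exactly once.
uint64_t pairIndex(uint32_t A, uint32_t B, int n) {
    uint64_t idx = 0, pow3 = 1;
    for (int a = 0; a < n; ++a, pow3 *= 3) {
        if ((A >> a) & 1u) idx += pow3;              // I_a = 1, element in A
        else if ((B >> a) & 1u) idx += 2 * pow3;     // I_a = 2, element in B
    }
    return idx;
}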
4
Experimental results
We have implemented the presented algorithm in a C++ program Junctor (Junction Trees Optimally
Recursively).2 In the experiments reported below, we compared the performance of Junctor and the
integer linear programming based solver GOBNILP by Bartlett and Cussens [20]. While GOBNILP
has been tailored for finding an optimal Bayesian network, it enables forbidding the so-called
v-structures in the network and, thereby, finding an optimal chordal Markov network, provided that
we use the BDeu score, as we have done, or some other special scoring function [23, 24]. We note
that when forbidding v-structures, the standard score pruning rules [20, 25] are no longer valid.
We first investigated the performance on synthetic data generated from Bayesian networks of varying
size and density. We generated 15 datasets for each combination of the number of vertices n from 8 to
18, maximum indegree k = 4 (sparse) or k = 8 (dense), and the number of samples m equaling 100,
1000, or 10,000, as follows: Along a random vertex ordering, we first drew for each vertex the number
of its parents from the uniform distribution between 0 and k and then the actual parents uniformly
at random from its predecessors in the vertex ordering. Next, we assigned each vertex two possible
states and drew the parameters of the conditional distributions from the uniform distribution. Finally,
from the obtained joint distribution, we drew m independent samples. The input for Junctor and
² Junctor is publicly available at www.cs.helsinki.fi/u/jwkangas/junctor/.
Table 1: Benchmark instances with different numbers of attributes (n) and samples (m).
Dataset       Abbr.   n      m        Dataset       Abbr.   n      m
Tic-tac-toe   X       10     958      Voting        V       17     435
Poker         P       11     10000    Tumor         T       18     339
Bridges       B       12     108      Lymph         L       19     148
Flare         F       13     1066     Hypothyroid   -       22     3772
Zoo           Z       17     101      Mushroom      -       22     8124

[Figure 3 appears here: per-width scatter plots (w = 3, 4, 5, 6, ∞) of GOBNILP running time against Junctor running time on a log scale (1s/60s/1h), with points labeled by the dataset abbreviations of Table 1.]
Figure 3: The running time of Junctor against GOBNILP on the benchmark instances with at most
19 attributes, given in Table 1. The dashed red line indicates the 4-hour timeout or memout.
GOBNILP was produced using the BDeu score with equivalent sample size 1. For both programs, we
varied the maximum width parameter w from 3 to 6 and, in addition, examined the case of unbounded
width (w = ∞). Because the performance of Junctor only depends on n and w, we ran it only
once for each combination of the two. In contrast, the performance of GOBNILP is very sensitive to
various characteristics of the data, and therefore we ran it for all the combinations. All runs were
allowed 4 CPU hours and 32 GB of memory. The results (Figure 2) show that for large widths
Junctor scales better than GOBNILP (with respect to n), and even for low widths Junctor is
superior to GOBNILP for smaller n. We found GOBNILP to exhibit moderate variance: 93% of all
running times (excluding timeouts) were within a factor of 5 of the respective medians shown in
Figure 2, while 73% were within a factor of 2. We observe that the running time of GOBNILP may
behave 'discontinuously' (e.g., small datasets around 15 vertices with width 4).
We also evaluated both programs on several benchmark instances taken from the UCI repository [26].
The datasets are summarized in Table 1. Figure 3 shows the results on the instances with at most 19
attributes, for which the runs were, again, allowed 4 CPU hours and 32 GB of memory. The results
are qualitatively in well agreement with the results obtained with synthetic data. For example, solving
the Bridges dataset on 12 attributes with width 5, takes less than one second by Junctor but around
7 minutes by GOBNILP. For the two 22-attribute datasets we allowed both programs one week of
CPU time and 128 GB of memory. Junctor was able to solve each within 33 hours for w = 3 and
within 74 hours for w = 4. GOBNILP was able to solve Hypothyroid up to w = 6 (in 24 hours, or
less for small widths), but Mushroom only up to w = 3. For higher widths GOBNILP ran out of time.
5
Concluding remarks
We have investigated the structure learning problem in chordal Markov networks. We showed that the
commonly used scoring functions factorize in a way that enables a relatively efficient dynamic programming treatment. Our algorithm is the first that is guaranteed to solve moderate-size instances to
the optimum within reasonable time. For example, whereas Corander et al. [12] report their algorithm
took more than 3 days on an eight-variable instance, our Junctor program solves any eight-variable
instance within 20 milliseconds. We also reported on the first evaluation of GOBNILP [20] for solving
the problem, which highlighted the advantages of the dynamic programming approach.
Acknowledgments
This work was supported by the Academy of Finland, grant 276864. The authors thank Matti Järvisalo
for useful discussions on constraint programming approaches to learning Markov networks.
References
[1] N. Srebro. Maximum likelihood bounded tree-width Markov networks. Artificial Intelligence, 143(1):123–138, 2003.
[2] C. K. Chow and C. N. Liu. Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory, 14:462–467, 1968.
[3] S. Della Pietra, V. J. Della Pietra, and J. D. Lafferty. Inducing features of random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4):380–393, 1997.
[4] M. Narasimhan and J. A. Bilmes. PAC-learning bounded tree-width graphical models. In D. M. Chickering and J. Y. Halpern, editors, UAI, pages 410–417. AUAI Press, 2004.
[5] P. Abbeel, D. Koller, and A. Y. Ng. Learning factor graphs in polynomial time and sample complexity. Journal of Machine Learning Research, 7:1743–1788, 2006.
[6] A. Chechetka and C. Guestrin. Efficient principled learning of thin junction trees. In J. C. Platt, D. Koller, Y. Singer, and S. T. Roweis, editors, NIPS. Curran Associates, Inc., 2007.
[7] J. Corander, M. Ekdahl, and T. Koski. Parallell interacting MCMC for learning of topologies of graphical models. Data Mining and Knowledge Discovery, 17(3):431–456, 2008.
[8] G. Elidan and S. Gould. Learning bounded treewidth Bayesian networks. Journal of Machine Learning Research, 9:2699–2731, 2008.
[9] F. Bromberg, D. Margaritis, and V. Honavar. Efficient Markov network structure discovery using independence tests. Journal of Artificial Intelligence Research, 35:449–484, 2009.
[10] J. Davis and P. Domingos. Bottom-up learning of Markov network structure. In J. Fürnkranz and T. Joachims, editors, ICML, pages 271–278. Omnipress, 2010.
[11] J. Van Haaren and J. Davis. Markov network structure learning: A randomized feature generation approach. In J. Hoffmann and B. Selman, editors, AAAI, pages 1148–1154. AAAI Press, 2012.
[12] J. Corander, T. Janhunen, J. Rintanen, H. J. Nyman, and J. Pensar. Learning chordal Markov networks by constraint satisfaction. In C. J. C. Burges, L. Bottou, Z. Ghahramani, and K. Q. Weinberger, editors, NIPS, pages 1349–1357, 2013.
[13] J. Korhonen and P. Parviainen. Exact learning of bounded tree-width Bayesian networks. In C. M. Carvalho and P. Ravikumar, editors, AISTATS, volume 31 of JMLR Proceedings, pages 370–378. JMLR.org, 2013.
[14] J. Berg, M. Järvisalo, and B. Malone. Learning optimal bounded treewidth Bayesian networks via maximum satisfiability. In S. Kaski and J. Corander, editors, AISTATS, pages 86–95. JMLR.org, 2014.
[15] P. Parviainen, H. S. Farahani, and J. Lagergren. Learning bounded tree-width Bayesian networks using integer linear programming. In S. Kaski and J. Corander, editors, AISTATS, pages 751–759. JMLR.org, 2014.
[16] S. Ott, S. Imoto, and S. Miyano. Finding optimal models for small gene networks. In R. B. Altman, A. K. Dunker, L. Hunter, and T. E. Klein, editors, PSB, pages 557–567. World Scientific, 2004.
[17] M. Koivisto and K. Sood. Exact Bayesian structure discovery in Bayesian networks. Journal of Machine Learning Research, pages 549–573, 2004.
[18] T. Silander and P. Myllymäki. A simple approach for finding the globally optimal Bayesian network structure. In R. Dechter and T. S. Richardson, editors, UAI, pages 445–452. AUAI Press, 2006.
[19] C. Yuan and B. Malone. Learning optimal Bayesian networks: A shortest path perspective. Journal of Artificial Intelligence Research, 48:23–65, 2013.
[20] M. Bartlett and J. Cussens. Advances in Bayesian network learning using integer programming. In UAI, pages 182–191. AUAI Press, 2013.
[21] S. L. Lauritzen and D. J. Spiegelhalter. Local computations with probabilities on graphical structures and their application to expert systems. Journal of the Royal Statistical Society. Series B (Methodological), 50(2):157–224, 1988.
[22] S. L. Lauritzen. Graphical Models. Oxford University Press, 1996.
[23] A. P. Dawid and S. L. Lauritzen. Hyper Markov laws in the statistical analysis of decomposable graphical models. The Annals of Statistics, 21(3):1272–1317, 1993.
[24] D. Heckerman, D. Geiger, and D. M. Chickering. Learning Bayesian networks: The combination of
knowledge and statistical data. Machine Learning, 20:197?243, 1995.
[25] C. P. de Campos and Q. Ji. Efficient structure learning of Bayesian networks using constraints. Journal of
Machine Learning Research, 12:663?689, 2011.
[26] K. Bache and M. Lichman. UCI machine learning repository, 2013.
QUIC & DIRTY: A Quadratic Approximation Approach for Dirty Statistical Models
Cho-Jui Hsieh, Inderjit S. Dhillon, Pradeep Ravikumar
University of Texas at Austin
Austin, TX 78712 USA
{cjhsieh,inderjit,pradeepr}@cs.utexas.edu
Peder A. Olsen
IBM T.J. Watson Research Center
Yorktown Heights, NY 10598 USA
[email protected]
Stephen Becker
University of Colorado at Boulder
Boulder, CO 80309 USA
[email protected]
Abstract
In this paper, we develop a family of algorithms for optimizing "superposition-structured" or "dirty" statistical estimators for high-dimensional problems involving the minimization of the sum of a smooth loss function with a hybrid regularization. Most of the current approaches are first-order methods, including
proximal gradient or Alternating Direction Method of Multipliers (ADMM). We
propose a new family of second-order methods where we approximate the loss
function using quadratic approximation. The superposition-structured regularizer
then leads to a subproblem that can be efficiently solved by alternating minimization. We propose a general active subspace selection approach to speed up the
solver by utilizing the low-dimensional structure given by the regularizers, and
provide convergence guarantees for our algorithm. Empirically, we show that our
approach is more than 10 times faster than state-of-the-art first-order approaches
for the latent variable graphical model selection problems and multi-task learning
problems when there is more than one regularizer. For these problems, our approach appears to be the first algorithm that can extend active subspace ideas to
multiple regularizers.
1 Introduction
From the considerable amount of recent research on high-dimensional statistical estimation, it has
now become well understood that it is vital to impose structural constraints upon the statistical model
parameters for their statistically consistent estimation. These structural constraints take the form of
sparsity, group-sparsity, and low-rank structure, among others; see [18] for unified statistical views
of such structural constraints. In recent years, such "clean" structural constraints are frequently
proving insufficient, and accordingly there has been a line of work on "superposition-structured" or
"dirty model" constraints, where the model parameter is expressed as the sum of a number of parameter components, each of which has its own structure. For instance, [4, 6] consider the estimation
of a matrix that is neither low-rank nor sparse, but which can be decomposed into the sum of a low-rank matrix and a sparse outlier matrix (this corresponds to robust PCA when the matrix-structured
parameter corresponds to a covariance matrix). [5] use such matrix decomposition to estimate the
structure of latent-variable Gaussian graphical models. [15] in turn use a superposition of sparse
and group-sparse structure for multi-task learning. For other recent work on such superposition-structured
models, and the resulting classes of M -estimators, please see [27].
Consider a general superposition-structured parameter θ̄ := Σ_{r=1}^k θ^(r), where {θ^(r)}_{r=1}^k are the
parameter-components, each with their own structure. Let {R^(r)(·)}_{r=1}^k be regularization functions
suited to the respective parameter components, and let L(·) be a (typically non-linear) loss function
that measures the goodness of fit of the superposition-structured parameter θ̄ to the data. We now
have the notation to consider a popular class of M-estimators studied in the papers above for these
superposition-structured models:

    min_{{θ^(r)}_{r=1}^k}  L(Σ_r θ^(r)) + Σ_r λ_r R^(r)(θ^(r)) := F(θ),        (1)

where {λ_r}_{r=1}^k are regularization penalties. In (1), the overall regularization contribution is separable in the individual parameter components, but the loss function term itself is not, and depends
on the sum θ̄ := Σ_{r=1}^k θ^(r). Throughout the paper, we use θ̄ to denote the overall superposition-structured parameter, and θ = [θ^(1), . . . , θ^(k)] to denote the concatenation of all the parameters.
Due to the wide applicability of this class of M-estimators in (1), there has been a line of work on
developing efficient optimization methods for solving special instances of this class of M-estimators
[14, 26], in addition to the papers listed above. In particular, due to the superposition-structure in
(1) and the high-dimensionality of the problem, this class seems naturally amenable to a proximal
gradient descent approach or the ADMM method [2, 17]; note that these are first-order methods and
are thus very scalable.
In this paper, we consider instead a proximal Newton framework to minimize the M-estimation objective in (1). Specifically, we use iterative quadratic approximations, and for each of the quadratic
subproblems, we use an alternating minimization approach to individually update each of the parameter components comprising the superposition-structure. Note that the Hessian of the loss might
be structured, as for instance with the logdet loss for inverse covariance estimation and the logistic
loss, which allows us to develop very efficient second-order methods. Even given this structure,
solving the regularized quadratic problem in order to obtain the proximal Newton direction is too
expensive due to the high dimensional setting. The key algorithmic contribution of this paper is in
developing a general active subspace selection framework for general decomposable norms, which
allows us to solve the proximal Newton steps over a significantly reduced search space. We are
able to do so by leveraging the structural properties of decomposable regularization functions in the
M-estimator in (1).
Our other key contribution is theoretical. While recent works [16, 21] have analyzed the convergence of proximal Newton methods, the superposition-structure here poses a key caveat: since the
loss function term only depends on the sum of the individual parameter components, the Hessian is
not positive-definite, as is required in previous analyses of proximal Newton methods. The theoretical analysis [9] relaxes this assumption by instead assuming the loss is self-concordant but again
allows at most one regularizer. Another key theoretical difficulty is our use of active subspace selection, where we do not solve for the vanilla proximal Newton direction, but solve the proximal
Newton step subproblem only over a restricted subspace, which moreover varies with each step. We
deal with these issues and show super-linear convergence of the algorithm when the sub-problems
are solved exactly. We apply our algorithm to two real world applications: latent Gaussian Markov
random field (GMRF) structure learning (with low-rank + sparse structure), and multitask learning
(with sparse + group sparse structure), and demonstrate that our algorithm is more than ten times
faster than state-of-the-art methods.
Overall, our algorithmic and theoretical developments open up the state of the art but forbidding
class of M -estimators in (1) to very large-scale problems.
Outline of the paper. We begin by introducing some background in Section 2. In Section 3,
we propose our quadratic approximation framework with active subspace selection for general dirty
statistical models. We derive the convergence guarantees of our algorithm in Section 4. Finally, in
Section 5, we apply our model to solve two real applications, and show experimental comparisons
with other state-of-the-art methods.
2 Background and Applications
Decomposable norms. We consider the case where all the regularizers {R^(r)}_{r=1}^k are decomposable norms ‖·‖_{A_r}. A norm ‖·‖ is decomposable at x if there is a subspace T and a vector e ∈ T
such that the subdifferential at x has the following form:

    ∂‖x‖ = {ρ ∈ R^n | Π_T(ρ) = e and ‖Π_{T⊥}(ρ)‖*_A ≤ 1},        (2)

where Π_T(·) is the orthogonal projection onto T, and ‖x‖* := sup_{‖a‖≤1} ⟨x, a⟩ is the dual norm of
‖·‖. The decomposable norm was defined in [3, 18], and many interesting regularizers belong to
this category, including:
• Sparse vectors: for the ℓ1 regularizer, T is the span of all points with the same support as x.
• Group sparse vectors: suppose that the index set can be partitioned into a set of N_G disjoint
groups, say G = {G_1, . . . , G_{N_G}}, and define the (1, α)-group norm by ‖x‖_{1,α} := Σ_{t=1}^{N_G} ‖x_{G_t}‖_α. If
S_G denotes the subset of groups where x_{G_t} ≠ 0, then the subgradient has the following form:

    ∂‖x‖_{1,α} := {ρ | ρ = Σ_{t∈S_G} x_{G_t}/‖x_{G_t}‖*_α + Σ_{t∉S_G} m_t},

where ‖m_t‖*_α ≤ 1 for all t ∉ S_G. Therefore, the group sparse norm is also decomposable with

    T := {x | x_{G_t} = 0 for all t ∉ S_G}.        (3)
• Low-rank matrices: for the nuclear norm regularizer ‖·‖_*, which is defined to be the sum of
singular values, the subgradient can be written as

    ∂‖X‖_* = {UV^T + W | U^T W = 0, WV = 0, ‖W‖_2 ≤ 1},

where ‖·‖_2 is the matrix 2-norm and U, V are the left/right singular vectors of X corresponding to
non-zero singular values. The above subgradient can also be written in the decomposable form (2),
where T is defined to be span({u_i v_j^T}_{i,j=1}^k), where {u_i}_{i=1}^k, {v_i}_{i=1}^k are the columns of U and V.
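To make these model subspaces concrete, the following numpy sketch (our illustration, not code from the paper) computes the projection Π_T for each of the three examples; the group index sets and the rank tolerance are hypothetical inputs.

```python
import numpy as np

def project_T_l1(rho, x):
    """Project rho onto T for the l1 norm: keep coordinates in supp(x)."""
    out = np.zeros_like(rho)
    supp = x != 0
    out[supp] = rho[supp]
    return out

def project_T_group(rho, x, groups):
    """Project onto T for the group norm: keep the nonzero groups of x."""
    out = np.zeros_like(rho)
    for g in groups:                      # each g is an index array
        if np.linalg.norm(x[g]) > 0:
            out[g] = rho[g]
    return out

def project_T_nuclear(R, X, tol=1e-10):
    """Project onto T = span({u_i v_j^T}) for the nuclear norm at X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = int(np.sum(s > tol))              # numerical rank of X
    U, V = U[:, :k], Vt[:k, :].T
    # Pi_T(R) = P_U R + R P_V - P_U R P_V, with P_U = U U^T, P_V = V V^T
    return U @ (U.T @ R) + (R @ V) @ V.T - U @ (U.T @ R @ V) @ V.T
```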
Applications. Next we discuss some widely used applications of superposition-structured models,
and the corresponding instances of the class of M -estimators in (1).
• Gaussian graphical model with latent variables: let Θ denote the precision matrix with corresponding covariance matrix Σ = Θ^{-1}. [5] showed that the precision matrix will have a low-rank
+ sparse structure when some random variables are hidden, thus Θ = S − L can be estimated by
solving the following regularized MLE problem:

    min_{S,L : L⪰0, S−L≻0}  −log det(S − L) + ⟨S − L, Σ̂⟩ + λ_S ‖S‖_1 + λ_L trace(L).        (4)
While proximal Newton methods have recently become a dominant technique for solving ℓ1-regularized log-determinant problems [12, 10, 13, 19], our development is the first to apply proximal
Newton methods to solve log-determinant problems with sparse and low-rank regularizers.
• Multi-task learning: given k tasks, each with sample matrix X^(r) ∈ R^{n_r×d} (n_r samples in the
r-th task) and labels y^(r), [15] proposes minimizing the following objective:

    Σ_{r=1}^k ℓ(y^(r), X^(r)(S^(r) + B^(r))) + λ_S ‖S‖_1 + λ_B ‖B‖_{1,∞},        (5)

where ℓ(·) is the loss function and S^(r) is the r-th column of S.
• Noisy PCA: to recover a covariance matrix corrupted with sparse noise, a widely used technique
is to solve the matrix decomposition problem [6]. In contrast to the squared loss above, an exponential PCA problem [8] would use a Bregman divergence for the loss function.
3 Our proposed framework
To perform a Newton-like step, we iteratively form quadratic approximations of the smooth loss
function. Generally the quadratic subproblem will have a large number of variables and will be hard
to solve. Therefore we propose a general active subspace selection technique to reduce the problem
size by exploiting the structure of the regularizers R^(1), . . . , R^(k).
3.1 Quadratic Approximation
Given k sets of variables θ = [θ^(1), . . . , θ^(k)], with each θ^(r) ∈ R^n, let Δ^(r) denote a perturbation of
θ^(r), and Δ = [Δ^(1), . . . , Δ^(k)]. We define g(θ) := L(Σ_{r=1}^k θ^(r)) = L(θ̄) to be the loss function,
and h(θ) := Σ_{r=1}^k R^(r)(θ^(r)) to be the regularization. Given the current estimate θ, we form the
quadratic approximation of the smooth loss function:

    ḡ(θ + Δ) = g(θ) + Σ_{r=1}^k ⟨Δ^(r), G⟩ + ½ Δ^T H Δ,        (6)
where G = ∇L(θ̄) is the gradient of L and H is the Hessian matrix of g(θ). Note that
∇_{θ^(r)} L(θ̄) = ∇L(θ̄) for all r, so we simply write ∇ and refer to the gradient at θ̄ as G (and similarly for ∇²).
By the chain rule, we can show that
Lemma 1. The Hessian matrix of g(θ) is

    H := ∇²g(θ) = [ H̄ ⋯ H̄ ; ⋮ ⋱ ⋮ ; H̄ ⋯ H̄ ],   where H̄ := ∇²L(θ̄),        (7)

that is, the k × k block matrix in which every block equals H̄.
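As a quick numerical sanity check on Lemma 1 (our sketch, with a random positive definite matrix standing in for ∇²L), every block of HΔ equals H̄ applied to the sum of the components:

```python
import numpy as np

n, k = 5, 3
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
H_bar = A @ A.T + np.eye(n)              # positive definite stand-in for the Hessian of L
deltas = [rng.standard_normal(n) for _ in range(k)]

# H is the k x k block matrix with every block equal to H_bar, so each
# block of H @ Delta equals H_bar @ (sum of the Delta^(r)).
block = H_bar @ sum(deltas)
H = np.kron(np.ones((k, k)), H_bar)      # explicit (nk) x (nk) Hessian
full = H @ np.concatenate(deltas)
assert np.allclose(full, np.tile(block, k))
```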
In this paper we focus on the case where H̄ is positive definite. When it is not, we add a small
constant to the diagonal of H̄ to ensure that each block is positive definite.
Note that the full Hessian, H, will in general not be positive definite (in fact rank(H) = rank(H̄)).
However, based on its special structure, we can still give convergence guarantees (along with rate of
convergence) for our algorithm. The Newton direction d is defined to be:

    [d^(1), . . . , d^(k)] = argmin_{Δ^(1),...,Δ^(k)}  ḡ(θ + Δ) + Σ_{r=1}^k λ_r ‖θ^(r) + Δ^(r)‖_{A_r} := Q_H(Δ; θ).        (8)
The quadratic subproblem (8) cannot be directly separated into k parts because the Hessian matrix
(7) is not a block-diagonal matrix. Also, each set of parameters has its own regularizer, so it is hard
to solve them all together. Therefore, to solve (8), we propose a block coordinate descent method.
At each iteration, we pick a variable set Δ^(r), where r ∈ {1, 2, . . . , k}, by a cyclic (or random) order,
and update the parameter set Δ^(r) while keeping the other parameters fixed. Assume Δ is the current
solution (for all the variable sets); then the subproblem with respect to Δ^(r) can be written as

    Δ^(r) ← argmin_{d∈R^n}  ½ d^T H̄ d + ⟨d, G + Σ_{t:t≠r} H̄ Δ^(t)⟩ + λ_r ‖θ^(r) + d‖_{A_r}.        (9)
The subproblem (9) is just a typical quadratic problem with a specific regularizer, so there already
exist efficient algorithms for solving it for different choices of ‖·‖_A. For the ℓ1 norm regularizer,
coordinate descent methods can be applied to solve (9) efficiently, as used in [12, 21]; (accelerated)
proximal gradient descent or projected Newton's method can also be used, as shown in [19]. For a
general atomic norm where there might be infinitely many atoms (coordinates), a greedy coordinate
descent approach can be applied, as shown in [22].
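As one concrete instance, here is a minimal coordinate descent solver for (9) with the ℓ1 norm, in the spirit of the QUIC updates [12]; this is our own sketch with a dense Hessian and none of the caching a practical implementation would use (b stands for the linear term G + Σ_{t≠r} H̄Δ^(t)).

```python
import numpy as np

def soft_threshold(z, r):
    return np.sign(z) * max(abs(z) - r, 0.0)

def cd_l1_subproblem(H, b, theta, lam, n_sweeps=20):
    """min_d 0.5 d^T H d + <d, b> + lam * ||theta + d||_1, H positive definite."""
    n = len(b)
    d = np.zeros(n)
    for _ in range(n_sweeps):
        for i in range(n):
            # One-dimensional problem in d_i with the other coordinates fixed:
            # 0.5 H_ii d_i^2 + (b_i + sum_{j != i} H_ij d_j) d_i + lam |theta_i + d_i|
            grad_i = b[i] + H[i] @ d - H[i, i] * d[i]
            z = theta[i] - grad_i / H[i, i]
            d[i] = soft_threshold(z, lam / H[i, i]) - theta[i]
    return d
```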
To iterate between different groups of parameters, we have to maintain the term Σ_{r=1}^k H̄ Δ^(r) during
the Newton iteration. Directly computing H̄ Δ^(r) requires O(n²) flops; however, the Hessian matrix
often has a special structure so that H̄ Δ^(r) can be computed efficiently. For example, in the inverse
covariance estimation problem H̄ = Θ^{-1} ⊗ Θ^{-1}, where Θ^{-1} is the current estimate of the covariance,
and in the empirical risk minimization problem H̄ = X D X^T, where X is the data matrix and D is
diagonal.
After solving the subproblem (8), we have to search for a suitable stepsize. We apply an Armijo
rule for line search [24], where we test the step sizes α = 2^0, 2^{-1}, . . . until the following sufficient
decrease condition is satisfied for a pre-specified σ ∈ (0, 1) (typically σ = 10^{-4}):

    F(θ + αΔ) ≤ F(θ) + ασδ,   δ = ⟨G, Δ⟩ + Σ_{r=1}^k λ_r ‖θ^(r) + Δ^(r)‖_{A_r} − Σ_{r=1}^k λ_r ‖θ^(r)‖_{A_r}.        (10)
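In code, the rule (10) is a standard backtracking loop; a sketch under our notation, where `theta` and `delta` are lists of the k parameter components and `reg` evaluates Σ_r λ_r‖·‖_{A_r}:

```python
def armijo_step(F, reg, theta, delta, G_dot_delta, sigma=1e-4, max_halvings=30):
    """Backtracking search for the sufficient decrease condition (10)."""
    decrease = (G_dot_delta
                + reg([t + d for t, d in zip(theta, delta)])
                - reg(theta))
    f0 = F(theta)
    alpha = 1.0
    for _ in range(max_halvings):
        trial = [t + alpha * d for t, d in zip(theta, delta)]
        if F(trial) <= f0 + alpha * sigma * decrease:
            break
        alpha *= 0.5
    return alpha, trial
```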
3.2 Active Subspace Selection
Since the quadratic subproblem (8) contains a large number of variables, directly applying the above
quadratic approximation framework is not efficient. In this subsection, we provide a general active
subspace selection technique, which dramatically reduces the size of variables by exploiting the
structure of the regularizers. A similar method has been discussed in [12] for the ℓ1 norm and in [11]
for the nuclear norm, but it has not been generalized to all decomposable norms. Furthermore, a key
point to note is that in this paper our active subspace selection is not only a heuristic, but comes with
strong convergence guarantees that we derive in Section 4.
Given the current θ, our subspace selection approach partitions each θ^(r) into S^(r)_fixed and S^(r)_free =
(S^(r)_fixed)^⊥, and then restricts the search space of the Newton direction in (8) to S^(r)_free, which yields
the following quadratic approximation problem:

    [d^(1), . . . , d^(k)] = argmin_{Δ^(1)∈S^(1)_free, ..., Δ^(k)∈S^(k)_free}  ḡ(θ + Δ) + Σ_{r=1}^k λ_r ‖θ^(r) + Δ^(r)‖_{A_r}.        (11)
Each group of parameters has its own fixed/free subspace, so we now focus on a single parameter
component θ^(r). An ideal subspace selection procedure would satisfy:
Property (I). Given the current iterate θ, any update along a direction in the fixed set, for instance
θ^(r) ← θ^(r) + a with a ∈ S^(r)_fixed, does not improve the objective function value.
Property (II). The subspace S_free converges to the support of the final solution in a finite number of
iterations.
Suppose given the current iterate, we first do updates along directions in the fixed set, and then do
updates along directions in the free set. Property (I) ensures that this is equivalent to ignoring updates
along directions in the fixed set in this current iteration, and focusing on updates along the free set.
As we will show in the next section, this property would suffice to ensure global convergence of our
procedure. Property (II) will be used to derive the asymptotic quadratic convergence rate.
We will now discuss our active subspace selection strategy which will satisfy both properties above.
Consider the parameter component θ^(r) and its corresponding regularizer ‖·‖_{A_r}. Based on the
definition of decomposable norm in (2), there exists a subspace T_r where Π_{T_r}(ρ) is a fixed vector
for any subgradient ρ of ‖·‖_{A_r}. The following proposition explores some properties of the subdifferential of the overall objective F(θ) in (1).
Proposition 1. Consider any unit-norm vector a, with ‖a‖_{A_r} = 1, such that a ∈ T_r^⊥.
(a) The inner product of the subdifferential ∂_{θ^(r)} F(θ) with a satisfies:

    ⟨a, ∂_{θ^(r)} F(θ)⟩ ⊆ [⟨a, G⟩ − λ_r, ⟨a, G⟩ + λ_r].        (12)

(b) Suppose |⟨a, G⟩| ≤ λ_r. Then 0 ∈ argmin_σ F(θ + σa).
See Appendix 7.8 for the proof. Note that G = ∇L(θ̄) denotes the gradient of L. The proposition
thus implies that if |⟨a, G⟩| ≤ λ_r and S^(r)_fixed ⊆ T_r^⊥, then Property (I) immediately follows. The
difficulty is that the set {a | |⟨a, G⟩| ≤ λ_r} is possibly hard to characterize, and even if we could
characterize this set, it may not be amenable enough for the optimization solvers to leverage in order
to provide a speedup. Therefore, we propose an alternative characterization of the fixed subspace:
Definition 1. Let θ^(r) be the current iterate and prox^(r)_λ be the proximal operator defined by

    prox^(r)_λ(x) = argmin_y  ½ ‖y − x‖² + λ ‖y‖_{A_r},

and let T_r(x) be the subspace for the decomposable norm (2) ‖·‖_{A_r} at the point x. We can define the
fixed/free subsets at θ^(r) as:

    S^(r)_fixed := [T(θ^(r))]^⊥ ∩ [T(prox^(r)_{λ_r}(G))]^⊥,   S^(r)_free = (S^(r)_fixed)^⊥.        (13)
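The proximal operators entering Definition 1 have simple closed forms for the three norms used here; a numpy sketch (ours):

```python
import numpy as np

def prox_l1(x, lam):
    """prox of lam*||.||_1: elementwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_group(x, lam, groups):
    """prox of lam*||.||_{1,2}: groupwise shrinkage."""
    out = np.zeros_like(x)
    for g in groups:
        nrm = np.linalg.norm(x[g])
        if nrm > lam:
            out[g] = (1.0 - lam / nrm) * x[g]
    return out

def prox_nuclear(X, lam):
    """prox of lam*||.||_*: singular-value soft-thresholding."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt
```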
It can be shown from the definition of the proximal operator and Definition 1 that |⟨a, G⟩| < λ_r,
so that we would have local optimality in the direction a as before. We have the following proposition:
Proposition 2. Let S^(r)_fixed be the fixed subspace defined in Definition 1. We then have:

    0 = argmin_{Δ^(r) ∈ S^(r)_fixed}  Q_H([0, . . . , 0, Δ^(r), 0, . . . , 0]; θ).
We will prove that Sfree as defined above converges to the final support in Section 4, as required in
Property (II) above. We will now detail some examples of the fixed/free subsets defined above.
• For ℓ1 regularization: S_fixed = span{e_i | θ_i = 0 and |∇_i L(θ̄)| ≤ λ}, where e_i is the i-th canonical
vector.
• For nuclear norm regularization: the selection scheme can be written as

    S_free = {U_A M V_A^T | M ∈ R^{k×k}},        (14)

where U_A = span(U, U_g) and V_A = span(V, V_g), with θ = UΣV^T the thin SVD of θ and U_g, V_g
the left and right singular vectors of prox_λ(θ − ∇L(θ)). The proximal operator prox_λ(·) in this
case corresponds to singular-value soft-thresholding, and can be computed by a randomized SVD or
the Lanczos algorithm.
• For group sparse regularization: in the (1, 2)-group norm case, let S_G be the set of nonzero groups;
then the fixed groups F_G can be defined by F_G := {i | i ∉ S_G and ‖∇L_{G_i}(θ̄)‖ ≤ λ}, and the free
subspace will be

    S_free = {θ | θ_i = 0 ∀ i ∈ F_G}.        (15)
Figure 3 (in the appendix) shows that the active subspace selection can significantly improve the speed
of the block coordinate descent algorithm [20].
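For the ℓ1 and group cases, Definition 1 reduces to the cheap screening rules just listed; a sketch in our notation, with G = ∇L(θ̄):

```python
import numpy as np

def free_mask_l1(theta, G, lam):
    """Free coordinates for l1: nonzero entries, plus entries with large gradient."""
    return (theta != 0) | (np.abs(G) > lam)

def free_mask_group(theta, G, lam, groups):
    """Free groups for the group norm, per (15)."""
    mask = np.zeros_like(theta, dtype=bool)
    for g in groups:
        if np.linalg.norm(theta[g]) > 0 or np.linalg.norm(G[g]) > lam:
            mask[g] = True
    return mask
```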
Algorithm 1: QUIC & DIRTY: Quadratic Approximation Framework for Dirty Statistical Models
Input: Loss function L(·), regularizers λ_r ‖·‖_{A_r} for r = 1, . . . , k, and initial iterate θ_0.
Output: Sequence {θ_t} such that {θ̄_t} converges to θ̄*.
1:  for t = 0, 1, . . . do
2:      Compute θ̄_t ← Σ_{r=1}^k θ_t^(r).
3:      Compute ∇L(θ̄_t).
4:      Compute S_free by (13).
5:      for sweep = 1, . . . , T_outer do
6:          for r = 1, . . . , k do
7:              Solve the subproblem (9) within S^(r)_free.
8:              Update Σ_{r=1}^k ∇²L(θ̄_t) Δ^(r).
9:      Find the step size α by (10).
10:     Update θ^(r) ← θ^(r) + αΔ^(r) for all r = 1, . . . , k.
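A schematic rendering of the outer loop in Python (our sketch; the four callables stand for a gradient oracle, the block solver for (9), the selector (13), and the Armijo search (10), none of which are the authors' implementation):

```python
import numpy as np

def quic_and_dirty(grad, solve_block, select_free, line_search,
                   theta, lams, T_outer=3, n_iters=50):
    """Outer loop of Algorithm 1 (schematic)."""
    k = len(theta)
    for _ in range(n_iters):
        theta_bar = sum(theta)                 # overall parameter (line 2)
        G = grad(theta_bar)                    # gradient (line 3)
        free = [select_free(theta[r], G, lams[r]) for r in range(k)]  # line 4
        delta = [np.zeros_like(t) for t in theta]
        for _ in range(T_outer):               # block coordinate sweeps (lines 5-8)
            for r in range(k):
                delta[r] = solve_block(theta, delta, G, r, lams[r], free[r])
        alpha = line_search(theta, delta, G)   # step size by (10) (line 9)
        theta = [t + alpha * d for t, d in zip(theta, delta)]  # line 10
    return theta
```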
4 Convergence
The recently developed theoretical analysis of proximal Newton methods [16, 21] cannot be directly
applied because (1) we have the active subspace selection step, and (2) the Hessian matrix for each
quadratic subproblem is not positive definite. We first prove the global convergence of our algorithm
when the quadratic approximation subproblem (11) is solved exactly. Interestingly, in our proof
we show that the active subspace selection can be modeled within the framework of the Block
Coordinate Gradient Descent algorithm [24] with a carefully designed Hessian approximation, and
by making this connection we are able to prove global convergence.
Theorem 1. Suppose L(θ̄) is convex (possibly not strongly convex) and the quadratic subproblem
(8) at each iteration is solved exactly; then Algorithm 1 converges to the optimal solution.
The proof is in Appendix 7.1. Next we consider the case that L(θ̄) is strongly convex. Note that
even when L(θ̄) is strongly convex with respect to θ̄, L(Σ_{r=1}^k θ^(r)) will not be strongly convex in θ
(if k > 1), and there may exist more than one optimal solution. However, we show that all solutions
give the same θ̄ := Σ_{r=1}^k θ^(r).
Lemma 2. Assume L(θ̄) is strongly convex, and {x^(r)}_{r=1}^k, {y^(r)}_{r=1}^k are two optimal solutions of
(1); then Σ_{r=1}^k x^(r) = Σ_{r=1}^k y^(r).
The proof is in Appendix 7.2. Next, we show that S^(r)_free (from Definition 1) will converge to the final
support T̄^(r) for each parameter set r = 1, . . . , k. Let θ̄* be the global minimizer (which is unique
as shown in Lemma 2), and assume that we have

    ‖Π_{(T̄^(r))^⊥}(∇L(θ̄*))‖*_{A_r} < λ_r   ∀ r = 1, . . . , k.        (16)

This is the generalization of the assumption used in the earlier literature [12], where only ℓ1 regularization was considered. The condition is similar to strict complementarity in linear programming.
Theorem 2. If L(θ̄) is strongly convex and assumption (16) holds, then there exists a finite T > 0
such that S^(r)_free = T̄^(r) ∀ r = 1, . . . , k after t > T iterations.
The proof is in Appendix 7.3. Next we show that our algorithm has an asymptotic quadratic convergence rate (the proof is in Appendix 7.4).
Theorem 3. Assume that ∇²L(θ̄) is Lipschitz continuous and assumption (16) holds. If at each iteration the quadratic subproblem (8) is solved exactly, and L(θ̄) is strongly convex, then our algorithm
converges with an asymptotic quadratic convergence rate.
5 Applications
We demonstrate that our algorithm is extremely efficient for two applications: Gaussian Markov
Random Fields (GMRF) with latent variables (with sparse + low rank structure) and multi-task
learning problems (with sparse + group sparse structure).
5.1 GMRF with Latent Variables
We first apply our algorithm to solve the latent feature GMRF structure learning problem in eq. (4),
where S ∈ R^{p×p} is the sparse part, L ∈ R^{p×p} is the low-rank part, and we require L = L^T ⪰ 0,
S = S^T, and Y = S − L ≻ 0 (i.e., θ^(2) = −L). In this case, L(Y) = −log det(Y) + ⟨Σ̂, Y⟩, hence

    ∇²L(Y) = Y^{-1} ⊗ Y^{-1},   and   ∇L(Y) = Σ̂ − Y^{-1}.        (17)
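Because of the Kronecker form in (17), the gradient and any Hessian-vector product reduce to matrix products with Y^{-1}; a numpy sketch (ours; a practical code would reuse a Cholesky factor of Y rather than forming the inverse):

```python
import numpy as np

def logdet_grad_hvp(Y, Sigma_hat, Delta):
    """Gradient of L(Y) = -log det Y + <Sigma_hat, Y>, and the product
    (Y^{-1} kron Y^{-1}) vec(Delta), which equals Y^{-1} Delta Y^{-1}."""
    Y_inv = np.linalg.inv(Y)
    grad = Sigma_hat - Y_inv
    hvp = Y_inv @ Delta @ Y_inv
    return grad, hvp
```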
Active Subspace. For the sparse part, the free subspace is a subset of indices {(i, j) | S_ij ≠ 0 or
|∇_ij L(Y)| ≥ λ_S}. For the low-rank part, the free subspace can be represented as {U_A M V_A^T |
M ∈ R^{k×k}}, where U_A and V_A are defined in (14).
Updating Δ_L. To solve the quadratic subproblem (11), we first discuss how to update Δ_L using
subspace selection. The subproblem is

    min_{Δ_L = U_Δ D U_Δ^T : L+Δ_L ⪰ 0}  ½ trace(Δ_L Y^{-1} Δ_L Y^{-1}) + trace((Y^{-1} − Σ̂ − Y^{-1} Δ_S Y^{-1}) Δ_L) + λ_L ‖L + Δ_L‖_*,

and since Δ_L is constrained to be a perturbation of L = U_A M U_A^T, we can write Δ_L =
U_A Δ_M U_A^T, and the subproblem becomes

    min_{Δ_M : M+Δ_M ⪰ 0}  ½ trace(Ȳ Δ_M Ȳ Δ_M) + trace(Σ̄ Δ_M) + λ_L trace(M + Δ_M) := q(Δ_M),        (18)

where Ȳ := U_A^T Y^{-1} U_A and Σ̄ := U_A^T (Y^{-1} − Σ̂ − Y^{-1} Δ_S Y^{-1}) U_A. Therefore the subproblem
(18) becomes a k × k dimensional problem with k ≪ p.
To solve (18), we first check whether the closed-form solution exists. Note that ∇q(Δ_M) = Ȳ Δ_M Ȳ +
Σ̄ + λ_L I, thus the minimizer is Δ_M = −Ȳ^{-1}(Σ̄ + λ_L I)Ȳ^{-1} if M + Δ_M ⪰ 0. If not, we solve the
subproblem by the projected gradient descent method, where each step only requires O(k²) time.
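A sketch of this k × k update (ours): try the closed form first, and otherwise run projected gradient steps; the projection clips the eigenvalues of M + Δ_M at zero, and the step size 1/‖Ȳ‖²₂ is a standard Lipschitz choice, not specified in the paper.

```python
import numpy as np

def proj_psd_shift(Delta_M, M):
    """Project M + Delta_M onto the PSD cone, return the corresponding Delta_M."""
    w, V = np.linalg.eigh(M + Delta_M)
    return V @ np.diag(np.maximum(w, 0.0)) @ V.T - M

def update_Delta_M(Y_bar, Sigma_bar, M, lam_L, steps=100):
    I = np.eye(len(M))
    Y_inv = np.linalg.inv(Y_bar)
    closed = -Y_inv @ (Sigma_bar + lam_L * I) @ Y_inv
    if np.all(np.linalg.eigvalsh(M + closed) >= -1e-10):
        return closed                          # unconstrained minimizer is feasible
    Delta = np.zeros_like(M)
    lr = 1.0 / np.linalg.norm(Y_bar, 2) ** 2   # 1/L with L = ||Y_bar||_2^2
    for _ in range(steps):                     # projected gradient on q(Delta_M)
        grad = Y_bar @ Delta @ Y_bar + Sigma_bar + lam_L * I
        Delta = proj_psd_shift(Delta - lr * grad, M)
    return Delta
```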
Updating Δ_S. The subproblem with respect to Δ_S can be written as

    min_{Δ_S}  ½ vec(Δ_S)^T (Y^{-1} ⊗ Y^{-1}) vec(Δ_S) + trace((Σ̂ − Y^{-1} − Y^{-1} Δ_L Y^{-1}) Δ_S) + λ_S ‖S + Δ_S‖_1.
In our implementation we apply the same coordinate descent procedure proposed in QUIC [12] to
solve this subproblem.
Results. We compare our algorithm with two state-of-the-art software packages. The LogdetPPA
algorithm was proposed in [26] and used in [5] to solve (4). The PGALM algorithm was proposed
in [17]. We run our algorithm on three gene expression datasets: the ER dataset (p = 692), the
Leukemia dataset (p = 1255), and a subset of the Rosetta dataset (p = 2000).¹ For the parameters, we
use λ_S = 0.5, λ_L = 50 for the ER and Leukemia datasets, which give us low-rank and sparse results.
For the Rosetta dataset, we use the parameters suggested in LogdetPPA, with λ_S = 0.0313, λ_L =
0.1565. The results in Figure 1 show that our algorithm is more than 10 times faster than the other
algorithms. Note that in the beginning PGALM tends to produce infeasible solutions (L or S − L is
not positive definite), which are not plotted in the figures.
Our proximal Newton framework has two algorithmic components: the quadratic approximation,
and our active subspace selection. From Figure 1 we can observe that although our algorithm is
a Newton-like method, the per-iteration time cost is similar to or even lower than that of other first-order
methods. The reason is that (1) we take advantage of active subspace selection, and (2) the Hessian has
the special structure (17), so computing it is no more expensive than computing the gradient.
To delineate the contribution of the quadratic approximation to the gain in speed of convergence, we
further compare our algorithm to an alternating minimization approach for solving (4), together with
our active subspace selection. Such an alternating minimization approach would iteratively fix one
of S, L, and update the other; we defer detailed algorithmic and implementation details to Appendix
7.6 for reasons of space. The results show that by using the quadratic approximation, we get a much
faster convergence rate (see Figure 2 in Appendix 7.6).
¹ The full dataset has p = 6316 but the other methods cannot solve this size problem.
[Figure 1 plots: objective value versus time (sec) for Quic & Dirty, PGALM, and LogdetPPM on (a) the ER dataset, (b) the Leukemia dataset, and (c) the Rosetta dataset.]
Figure 1: Comparison of algorithms on the latent feature GMRF problem using gene expression
datasets. Our algorithm is much faster than PGALM and LogdetPPA.
Table 1: The comparisons on multi-task problems.

dataset | training data | rel. error | QUIC & DIRTY | proximal gradient | ADMM | Lasso | Group Lasso
USPS | 100 | 10^-1 | 8.3% / 0.42s | 8.5% / 1.8s | 8.3% / 1.3s | 10.27% | 8.36%
USPS | 100 | 10^-4 | 7.47% / 0.75s | 7.49% / 10.8s | 7.47% / 4.5s | | 
USPS | 400 | 10^-1 | 2.92% / 1.01s | 2.9% / 9.4s | 3.0% / 3.6s | 4.87% | 2.93%
USPS | 400 | 10^-4 | 2.5% / 1.55s | 2.5% / 35.8s | 2.5% / 11.0s | |
RCV1 | 1000 | 10^-1 | 18.91% / 10.5s | 18.5% / 47s | 18.9% / 23.8s | 22.67% | 20.8%
RCV1 | 1000 | 10^-4 | 18.45% / 23.1s | 18.49% / 430.8s | 18.5% / 259s | |
RCV1 | 5000 | 10^-1 | 10.54% / 42s | 10.8% / 541s | 10.6% / 281s | 13.67% | 12.25%
RCV1 | 5000 | 10^-4 | 10.27% / 87s | 10.27% / 2254s | 10.27% / 1191s | |
(The three "Dirty Models" columns report classification error / time to reach the given relative error; the Lasso and Group Lasso baselines give one error rate per training set size.)

5.2 Multiple-task learning with superposition-structured regularizers
Next we solve the multi-task learning problem (5), where the parameter is a sparse matrix S ∈ R^{d×k}
and a group sparse matrix B ∈ R^{d×k}. Instead of using the squared loss (as in [15]), we consider the
logistic loss ℓ_logistic(y, a) = log(1 + e^{−ya}), which gives better performance, as seen by comparing
Table 1 to the results in [15]. Here the Hessian matrix again has a special structure: H̄ = X D X^T, where
X is the data matrix and D is a diagonal matrix; Appendix 7.7 gives a detailed description
of how to apply our algorithm to this problem.
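For reference, a sketch (ours) of the gradient and a Hessian-vector product for the logistic loss; here X is the n × d sample matrix, so the Hessian is XᵀDX with D = diag(p(1 − p)), the same structure as above up to the transpose convention.

```python
import numpy as np

def logistic_grad_hvp(X, y, w, delta):
    """Gradient and Hessian-vector product for sum_i log(1 + exp(-y_i x_i^T w)).

    X: (n, d) sample matrix, y in {-1, +1}^n. The Hessian is X^T D X with
    D = diag(p * (1 - p)), so H @ delta costs two matrix-vector products.
    """
    margins = y * (X @ w)
    p = 1.0 / (1.0 + np.exp(-margins))        # probability of the correct label
    grad = X.T @ ((p - 1.0) * y)
    D = p * (1.0 - p)
    hvp = X.T @ (D * (X @ delta))
    return grad, hvp
```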
Results. We follow [15] and transform multi-class problems into multi-task problems. For a
multiclass dataset with k classes and n samples, for each r = 1, . . . , k we generate y^(r) ∈ {0, 1}^n
to be the vector such that y_i^(r) = 1 if and only if the i-th sample is in class r. Our first dataset is the
USPS dataset which was first collected in [25] and subsequently widely used in multi-task papers.
On this dataset, the use of several regularizers is crucial for good performance. For example, [15]
demonstrates that on USPS, using lasso and group lasso regularizations together outperforms models
with a single regularizer. However, they only consider the squared loss in their paper, whereas we
consider a logistic loss which leads to better performance. For example, we get 7.47% error rate
using 100 samples in USPS dataset, while using the squared loss the error rate is 10.8% [15]. Our
second dataset is a larger document dataset RCV1 downloaded from LIBSVM Data, which has 53
classes and 47,236 features. We show that our algorithm is much faster than other algorithms on both
datasets, especially on RCV1 where we are more than 20 times faster than proximal gradient descent.
Here our subspace selection techniques works well because we expect that the active subspace at the
true solution is small.
6 Acknowledgements
This research was supported by NSF grants CCF-1320746 and CCF-1117055. C.-J.H also acknowledges support from an IBM PhD fellowship. P.R. acknowledges the support of ARO via W911NF12-1-0390 and NSF via IIS-1149803, IIS-1447574, and DMS-1264033. S.R.B. was supported by
an IBM Research Goldstine Postdoctoral Fellowship while the work was performed.
References
[1] A. Agarwal, S. Negahban, and M. J. Wainwright. Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions. Annals of Statistics, 40(2):1171–1197, 2012.
[2] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
[3] E. Candes and B. Recht. Simple bounds for recovering low-complexity models. Mathematical Programming, 2012.
[4] E. J. Candes, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? J. Assoc. Comput. Mach., 58(3):1–37, 2011.
[5] V. Chandrasekaran, P. A. Parrilo, and A. S. Willsky. Latent variable graphical model selection via convex optimization. The Annals of Statistics, 2012.
[6] V. Chandrasekaran, S. Sanghavi, P. A. Parrilo, and A. S. Willsky. Rank-sparsity incoherence for matrix decomposition. SIAM J. Optim., 21(2):572–596, 2011.
[7] Y. Chen, A. Jalali, S. Sanghavi, and C. Caramanis. Low-rank matrix recovery from errors and erasures. IEEE Transactions on Information Theory, 59(7):4324–4337, 2013.
[8] M. Collins, S. Dasgupta, and R. E. Schapire. A generalization of principal component analysis to the exponential family. In NIPS, 2012.
[9] Q. T. Dinh, A. Kyrillidis, and V. Cevher. An inexact proximal path-following algorithm for constrained convex minimization. arXiv:1311.1756, 2013.
[10] C.-J. Hsieh, I. S. Dhillon, P. Ravikumar, and A. Banerjee. A divide-and-conquer method for sparse inverse covariance estimation. In NIPS, 2012.
[11] C.-J. Hsieh and P. A. Olsen. Nuclear norm minimization via active subspace selection. In ICML, 2014.
[12] C.-J. Hsieh, M. A. Sustik, I. S. Dhillon, and P. Ravikumar. Sparse inverse covariance matrix estimation using quadratic approximation. In NIPS, 2011.
[13] C.-J. Hsieh, M. A. Sustik, I. S. Dhillon, P. Ravikumar, and R. A. Poldrack. BIG & QUIC: Sparse inverse covariance estimation for a million variables. In NIPS, 2013.
[14] D. Hsu, S. M. Kakade, and T. Zhang. Robust matrix decomposition with sparse corruptions. IEEE Trans. Inform. Theory, 57:7221–7234, 2011.
[15] A. Jalali, P. Ravikumar, S. Sanghavi, and C. Ruan. A dirty model for multi-task learning. In NIPS, 2010.
[16] J. D. Lee, Y. Sun, and M. A. Saunders. Proximal Newton-type methods for convex optimization. In NIPS, 2012.
[17] S. Ma, L. Xue, and H. Zou. Alternating direction methods for latent variable Gaussian graphical model selection. Neural Computation, 25(8):2172–2198, 2013.
[18] S. N. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Statistical Science, 27(4):538–557, 2012.
[19] P. Olsen, F. Oztoprak, J. Nocedal, and S. Rennie. Newton-like methods for sparse inverse covariance estimation. In NIPS, 2012.
[20] Z. Qin, K. Scheinberg, and D. Goldfarb. Efficient block-coordinate descent algorithm for the group lasso. Mathematical Programming Computation, 2013.
[21] K. Scheinberg and X. Tang. Practical inexact proximal quasi-Newton method with global complexity analysis. arXiv:1311.6547, 2014.
[22] A. Tewari, P. Ravikumar, and I. Dhillon. Greedy algorithms for structurally constrained high dimensional problems. In NIPS, 2011.
[23] K.-C. Toh, P. Tseng, and S. Yun. A block coordinate gradient descent method for regularized convex separable optimization and covariance selection. Mathematical Programming, 129:331–355, 2011.
[24] P. Tseng and S. Yun. A coordinate gradient descent method for nonsmooth separable minimization. Mathematical Programming, 117:387–423, 2007.
[25] M. van Breukelen, R. P. W. Duin, D. M. J. Tax, and J. E. den Hartog. Handwritten digit recognition by combined classifiers. Kybernetika, 34(4):381–386, 1998.
[26] C. Wang, D. Sun, and K.-C. Toh. Solving log-determinant optimization problems by a Newton-CG primal proximal point algorithm. SIAM J. Optimization, 20:2994–3013, 2010.
[27] E. Yang and P. Ravikumar. Dirty statistical models. In NIPS, 2013.
[28] E.-H. Yen, C.-J. Hsieh, P. Ravikumar, and I. S. Dhillon. Constant nullspace strong convexity and fast convergence of proximal methods under high-dimensional settings. In NIPS, 2014.
[29] G.-X. Yuan, C.-H. Ho, and C.-J. Lin. An improved GLMNET for L1-regularized logistic regression. JMLR, 13:1999–2030, 2012.
Recursive Inversion Models for Permutations
Marina Meilă
University of Washington
Seattle, Washington 98195
[email protected]
Christopher Meek
Microsoft Research
Redmond, Washington 98052
[email protected]
Abstract
We develop a new exponential family probabilistic model for permutations that
can capture hierarchical structure and that has the Mallows and generalized Mallows models as subclasses. We describe how to do parameter estimation and propose an approach to structure search for this class of models. We provide experimental evidence that this added flexibility both improves predictive performance
and enables a deeper understanding of collections of permutations.
1 Introduction
Among the many probabilistic models over permutations, models based on penalizing inversions
with respect to a reference permutation have proved particularly elegant, intuitive, and useful. Typically these generative models "construct" a permutation in stages by inserting one item at each stage.
Examples of such models are the Generalized Mallows Models (GMMs) of Fligner and Verducci
(1986). In this paper, we propose a superclass of the GMM, which we call the recursive inversion
model (RIM), which allows more flexibility than the original GMM, while preserving its elegant and
useful properties of compact parametrization, tractable normalization constant, and interpretability
of parameters. Essentially, while the GMM constructs a permutation sequentially by a stochastic
insertion sort process, the RIM constructs one by a stochastic merge sort. In this sense, the RIM is a
compactly parametrized Riffle Independence (RI) model (Huang & Guestrin, 2012) defined in terms
of inversions rather than independence.
2 Recursive Inversion Models
We are interested in probabilistic models of permutations of a set of elements E = {e_1, . . . , e_n}. We
use π ∈ S_E to denote a permutation (a total ordering) of the elements in E, and use e_i <_π e_j to
denote that two elements are ordered. We define an n × n (lower diagonal) discrepancy matrix D_ij
that captures the discrepancies between two permutations:

    D_ij(π, π_0) = 1 if i <_π j ∧ j <_{π_0} i, and 0 otherwise.        (1)

We call the first argument of D_ij(·, ·) the test permutation (typically π) and the second argument the
reference permutation (typically π_0).
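A direct (quadratic-time) implementation of (1) in Python (our sketch; permutations are encoded as orderings of the elements 0, . . . , n − 1):

```python
import numpy as np

def discrepancy_matrix(pi, pi0):
    """D[i, j] = 1 iff i precedes j in pi but j precedes i in pi0 (eq. 1)."""
    n = len(pi)
    pos = np.empty(n, dtype=int)
    pos[np.asarray(pi)] = np.arange(n)     # pos[e] = rank of element e in pi
    pos0 = np.empty(n, dtype=int)
    pos0[np.asarray(pi0)] = np.arange(n)
    D = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            if pos[i] < pos[j] and pos0[j] < pos0[i]:
                D[i, j] = 1
    return D

# The inversion distance d(pi, pi0) used below is then simply D.sum().
```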
Two classic models for permutations are the Mallows and the generalized Mallows models. The
Mallows model is defined in terms of the inversion distance d(π, π_0) = Σ_ij D_ij(π, π_0), which
is the total number of inversions between π and π_0 (Mallows, 1957). The Mallows model is
then P(π | π_0, θ) = (1/Z(θ)) exp(−θ d(π, π_0)), θ ∈ R. Note that the normalization constant does
not depend on π_0 but only on the concentration parameter θ. The Generalized Mallows model
(GMM) of Fligner and Verducci (1986) extends the Mallows model by introducing a parameter for each of the elements in E and decomposes the inversion distance into a per-element distance.¹ In particular, we define v_j(π, π_0) to be the number of inversions for element j in π
with respect to π_0: v_j(π, π_0) = Σ_{i >_{π_0} j} D_ij(π, π_0). In this case, the GMM is defined as
P(π | π_0, θ) = (1/Z(θ)) exp(−Σ_{e∈E} θ_e v_e), θ ∈ R^n. The GMM can be thought of as a stagewise
model in which each of the elements in E is inserted according to the reference permutation π_0
into a list, where the parameter θ_e controls how likely the insertion of element e will yield an inversion with respect to the reference permutation. For both of these models the normalization constant
can be computed in closed form.
Our RIMs generalize the GMM by replacing the sequence of single element insertions with a sequence of recursive merges of subsequences where the relative order within the subsequences is preserved. For example, the sequence [a, b, c, d, e] can be obtained by merging the two subsequences
[a, b, c] with [d, e] with zero inversions and the sequence [a, d, b, e, c] can be obtained from these
subsequences with 3 inversions. The RIM generates a permutation recursively by merging subsequences defined by a binary recursive decomposition of the elements in E and where the number of
inversions is controlled by a separate parameter associated with each merge operation.
More formally, a RIM τ(θ) for a set of elements E = {e_1, . . . , e_n} has a structure τ that represents a recursive decomposition of the set E and a set of parameters θ ∈ R^{n−1}. We represent a
RIM as a binary tree with n = |E| leaves, each associated with a distinct element of E. We denote
the set of internal vertices of the binary tree by I, and each internal vertex is represented as a triple
i = (θ_i, i_L, i_R), where i_L (i_R) is the left (right) subtree and θ_i controls the number of inversions
when merging the subsequences generated from each of the subtrees. Traversing the tree τ in preorder, with the left child preceding the right child, induces a permutation on E called the reference
permutation of the RIM, which we denote as π_τ.
The RIM is defined in terms of the vertex discrepancy: the number of inversions at (internal) vertex i = (θi, iL, iR) of τ(θ) for test permutation π is vi(π, πτ) = Σ_{l∈Li} Σ_{r∈Ri} Dlr(π, πτ), where Li (Ri) is the subset of the elements of E that appear as leaves of iL (iR), the left (right) subtree of internal vertex i. Note that the sum of the vertex discrepancies over the internal vertices is the inversion distance between π and the reference permutation πτ. Finally, the likelihood of a permutation π with respect to RIM τ(θ) is as follows:
$$P(\pi \mid \tau) \propto \prod_{i \in I} \exp\big(-\theta_i\, v_i(\pi, \pi_\tau)\big) \qquad (2)$$
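To make (2) concrete, the following sketch (ours; the nested-tuple tree encoding (theta, left, right) with element labels at the leaves is our convention, not the paper's) evaluates the unnormalized likelihood of a test permutation:

import math

def leaves(node):
    return [node] if not isinstance(node, tuple) else leaves(node[1]) + leaves(node[2])

def unnormalized_likelihood(node, pi):
    """Product over internal vertices of exp(-theta_i * v_i(pi, pi_tau))."""
    if not isinstance(node, tuple):
        return 1.0
    theta, left, right = node
    pos = {e: k for k, e in enumerate(pi)}
    # vertex discrepancy: pairs (l in left leaves, r in right leaves) that pi
    # orders opposite to the reference, i.e. r appears before l
    v = sum(1 for l in leaves(left) for r in leaves(right) if pos[r] < pos[l])
    return (math.exp(-theta * v)
            * unnormalized_likelihood(left, pi)
            * unnormalized_likelihood(right, pi))

# For the fruit-preference tree of Figure 1 below,
# tau = (-0.1, (0.8, 'a', 'b'), (1.6, 'c', 'd')) and pi = ('d', 'a', 'b', 'c'),
# the vertex discrepancies are v_root = 2, v_left = 0, v_right = 1.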
Example: For elements E = {a, b, c, d}, Figure 1 shows a RIM τ for preferences over four types of fruit. The reference permutation for this model is πτ = (a, b, c, d) and the modal permutation is (c, d, a, b) due to the sign of the root vertex. For test permutation π = (d, a, b, c), we have that vroot(π, πτ) = 2, vleft = 0, and vright = 1. Note that the model captures strong preferences between the pairs (a, b) and (c, d) and weak preferences between (c, a), (d, a), (c, b) and (d, b). This is an example of a set of preferences that cannot be captured in a GMM, as choosing a strong preference between the pairs (a, b) and (c, d) induces a strong preference between either (a, d) or (c, b), which differs in both strength and order from the example.

[Figure 1: a binary tree with root parameter −0.1, a left subtree with parameter 0.8 over apple and banana, and a right subtree with parameter 1.6 over cherry and durian.]
Figure 1: An example of a RIM for fruit preferences among (a)pple, (b)anana, (c)herry, and (d)urian. The parameter for internal vertices indicates the preference between items in the left and right subtrees, with 0 indicating no preference and a negative number indicating that the right items are more preferable than the left items.
Naive computation of the partition function Z(τ(θ)) for a recursive inversion model would require a sum with n! summands (all permutations). We can, however, use the recursive structure of τ(θ) to compute it as follows:
Proposition 1
$$Z(\tau(\theta)) = \prod_{i \in I} G(|L_i|, |R_i|; \exp(-\theta_i)) \qquad (3)$$
$$G(n, m; q) = \frac{(q)_{n+m}}{(q)_n\,(q)_m} \stackrel{\mathrm{def}}{=} Z_{n,m}(q) \qquad (4)$$
In the above, G(n, m; q) is the Gaussian polynomial (Andrews, 1985) and $(q)_n = \prod_{i=1}^{n}(1 - q^i)$. The Gaussian polynomial is not defined for q = 1, so we extend the definition so that $G(n, m, 1) = \binom{n+m}{m}$, which corresponds to the limit of the Gaussian polynomial as q approaches 1 (and θ approaches 0).
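A direct transcription of (3)–(4) in Python (ours; assumes the nested-tuple tree encoding introduced earlier):

import math

def q_pochhammer(q, n):
    """(q)_n = prod_{i=1}^{n} (1 - q**i)."""
    out = 1.0
    for i in range(1, n + 1):
        out *= 1.0 - q ** i
    return out

def gaussian_poly(n, m, q):
    """G(n, m; q) of (4); the binomial coefficient at the q = 1 limit."""
    if q == 1.0:
        return math.comb(n + m, m)
    return q_pochhammer(q, n + m) / (q_pochhammer(q, n) * q_pochhammer(q, m))

def n_leaves(node):
    return 1 if not isinstance(node, tuple) else n_leaves(node[1]) + n_leaves(node[2])

def partition_function(node):
    """Z(tau(theta)): product of G(|L_i|, |R_i|; exp(-theta_i)) over internal vertices."""
    if not isinstance(node, tuple):
        return 1.0
    theta, left, right = node
    return (gaussian_poly(n_leaves(left), n_leaves(right), math.exp(-theta))
            * partition_function(left) * partition_function(right))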
Note that when all θi ≥ 0, the reference permutation πτ is also a modal permutation, and this modal permutation is unique when all θi > 0. Also note that a GMM can be represented by using a chain-like tree structure in which each element in the reference permutation is split from the remaining elements one at a time.
3 Estimating Recursive Inversion Models
In this section, we present a Maximum Likelihood (ML) approach to parameter and structure estimation from observed data D = {π1, π2, ..., πN} of permutations over E.
Parameter estimation is straightforward. Given a structure τ, we see from (2) that the likelihood factors according to the structure. In particular, a RIM is a product of exponential family models, one for each internal node i ∈ I. Consequently, the (negative) log-likelihood given D decomposes into a sum
$$-\ln P(D \mid \tau(\theta)) = \sum_{i \in I} \big[\,\theta_i \bar{V}_i + \ln Z_{|L_i|,|R_i|}(e^{-\theta_i})\,\big] \qquad (5)$$
where we denote the bracketed summand by score(i, θi), and $\bar{V}_i = \frac{1}{|D|}\sum_{\pi \in D} v_i(\pi, \pi_\tau)$ represents the sufficient statistic for node i from the data. This is a convex function of the parameter θi, and hence the ML estimate can be obtained numerically by solving a set of univariate minimization problems. In the remainder of the paper we use D to denote the sum of the discrepancy matrices of all of the observed data D with respect to the identity permutation. Note that this matrix provides a basis for efficiently computing the sufficient statistics of any RIM.
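Since each score(i, θi) in (5) is convex in θi, any univariate minimizer will do; a sketch using SciPy (ours, not the paper's code; the finite bounds are only a guard against floating-point overflow):

import math
from scipy.optimize import minimize_scalar

def log_gaussian_poly(n, m, q):
    """ln G(n, m; q), via G = prod_{i=1}^{m} (1 - q**(n+i)) / (1 - q**i).

    Each factor is positive for q > 0, q != 1, so the logs are well defined
    even when q = exp(-theta) > 1 (i.e. theta < 0).
    """
    if q == 1.0:
        return math.log(math.comb(n + m, m))
    return sum(math.log((1.0 - q ** (n + i)) / (1.0 - q ** i)) for i in range(1, m + 1))

def fit_theta(n_left, n_right, v_bar, bound=10.0):
    """ML estimate of theta_i: minimize score = theta * v_bar + ln Z(exp(-theta))."""
    def score(theta):
        return theta * v_bar + log_gaussian_poly(n_left, n_right, math.exp(-theta))
    return minimize_scalar(score, bounds=(-bound, bound), method="bounded").x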
In the remainder of this section, we consider the problem of estimating the structure of a RIM from observed data, beginning with a brief exploration of the degree to which the structure of a RIM can be identified.
3.1 Identifiability
First, we consider whether the structure of a RIM can be identified from data. From the previous section, we know that the parameters are identifiable given the structure. However, the structure of a RIM can only be identified under suitable assumptions.
The first type of non-identifiability occurs when some θi parameters are zero. In this case, the permutation πτ is not identifiable, because switching the left and right child of node i with θi = 0 will not change the distribution represented by the RIM. In fact, as shown by the next proposition, the left and right children can be swapped without changing the distribution if the sign of the parameter is changed.
Proposition 2 Let τ(θ) be a RIM over E, D a matrix of sufficient statistics, and i any internal node of τ, with parameter θi and iL, iR its left and right children. Denote by τ'(θ') the RIM obtained from τ(θ) by switching iL, iR and setting θi' = −θi. Then P(π | τ(θ)) = P(π | τ'(θ')) for all permutations π of E.
This proposition demonstrates that the structure of a RIM cannot be identified in general and that there is an equivalence class of alternative structures among which we cannot distinguish. We eliminate this particular type of non-identifiability by considering RIMs that are in canonical form. Proposition 2 provides a way to put any τ(θ) in canonical form.
Algorithm 1 CANONICALPERMUTATION
Input: any τ(θ)
for each internal node i with parameter θi do
    if θi < 0 then
        θi ← −θi; switch left child with right child
    end if
end for
Proposition 3 For any matrix of sufficient statistics D and any RIM τ(θ), Algorithm CANONICALPERMUTATION does not change the log-likelihood.
The proof of correctness follows from repeated application of Proposition 2. Moreover, if θi ≠ 0 before applying CANONICALPERMUTATION, then the output of the algorithm will have all θi > 0.
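A runnable version of Algorithm 1 under the nested-tuple encoding used in the sketches above (ours):

def canonical_permutation(node):
    """Flip every negative-parameter vertex, swapping its children (Proposition 2)."""
    if not isinstance(node, tuple):
        return node
    theta, left, right = node
    left, right = canonical_permutation(left), canonical_permutation(right)
    if theta < 0:
        theta, left, right = -theta, right, left
    return (theta, left, right)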
A further non-identifiability arises when parameters of the generating model are equal. It is easy to see that if all the parameters θi are equal to the same value θ, then the likelihood of a permutation π would be P(π | τ, (θ, ..., θ)) ∝ exp(−θ d(π, πτ)), which is the likelihood corresponding to the Mallows model. In this case πτ is identifiable, but the internal structure is not. Similarly, if all the parameters θi are equal in a subtree of τ, then the structure in that subtree is not identifiable.
We say that a RIM τ(θ) is locally identifiable iff θi ≠ 0 for all i ∈ I and |θi| ≠ |θi'| whenever i is a child of i'. We say that a RIM τ(θ) is identifiable if there is a unique canonical RIM that represents the same distribution. The following proposition captures the degree to which one can identify the structure of a RIM.
Proposition 4 A RIM τ(θ) is identifiable iff it is locally identifiable.
3.2 ML estimation of τ for fixed πτ is tractable
We first consider ML estimation when we fix πτ, the reference permutation over the leaves in E. For the remainder of this section we assume that the optimal value of θ̂i for any internal node i is available (e.g., via the convex optimization problem described in the previous section). Hence, what remains to be estimated is the internal tree structure.
Proposition 5 For any set E, permutation πτ over E, and observed data D, the Maximum Likelihood RIM structure inducing this πτ can be computed in polynomial time by the dynamic programming algorithm STRUCTBYDP.
Proof sketch Note that there is a one-to-one correspondence between tree structures representing alternative binary recursive partitionings over a fixed permutation of E and alternative ways in which one can parenthesize the permutation of E. The negative log-likelihood decomposes according to the structure of the model, with the cost of a subtree rooted at i depending only on the structure of this subtree. Furthermore, this cost can be decomposed recursively into a sum of score(i, θ̂i) and the costs of iL, iR, the subtrees of i. The recursion is identical to the recursion of the "optimal matrix chain multiplication" problem, or to the "inside" part of the Inside-Outside algorithm in string parsing by SCFGs (Earley, 1970).
Without loss of generality, we consider that πτ is the identity, πτ = (e1, ..., en). For any subsequence ej, ..., em of length l = m − j + 1, we define the variables cost(j, m), θ(j, m), Z(j, m) that store, respectively, the negative log-likelihood, the parameter at the root, and the Z for the root node of the optimal tree over the subsequence ej, ..., em. If all the values of cost(j, m) are known for m − j + 1 < l, then the values of cost(j, j + l − 1), θ(j, j + l − 1), Z(j, j + l − 1) are obtained recursively from the existing values. We also maintain pointers back(j, m) that indicate which subtrees were used in obtaining cost(j, m). When cost(1, n) and the corresponding θ and Z are obtained, the optimal structure and its parameters have been found, and they can be read recursively by following the pointers back(j, m). Note that in the innermost loop, the quantities score(j, m), θ(j, m), V̄ are recalculated for each k.
We call the algorithm implementing this optimization STRUCTBYDP.
Algorithm 2 STRUCTBYDP
Input: sample discrepancy matrix D computed from the observed data
for m = 1 : n do
    cost(m, m) ← 0
end for
for l ← 2 ... n do
    for j ← 1 : n − l + 1 do
        m ← j + l − 1
        cost(j, m) ← ∞
        for k ← j : m − 1 do
            calculate V̄ = Σ_{j'=j}^{k} Σ_{m'=k+1}^{m} D_{m'j'}
            L = k − j + 1, R = m − k
            estimate θ_{jm} from L, R, V̄
            calculate score(j, m) by (5)
            s ← cost(j, k) + cost(k + 1, m) + score(j, m)
            if s < cost(j, m) then
                cost(j, m) ← s, back(j, m) ← k
                store θ(j, m), Z_{LR}(j, m)
            end if
        end for
    end for
end for
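A compact Python transcription of STRUCTBYDP (ours, not the paper's code; it reuses fit_theta and log_gaussian_poly from the sketch above, indexes elements 0..n−1 in the fixed reference order, and takes Dmat[r, l] = number of observations in which element r preceded element l although l comes first in the reference, with N the number of observed permutations):

import math
import numpy as np

def struct_by_dp(Dmat, N):
    """Optimal RIM structure over a fixed (identity) reference permutation.

    Returns (cost, theta, back); back[j, m] is the split k that puts elements
    j..k in the left subtree and k+1..m in the right subtree.
    """
    n = Dmat.shape[0]
    cost = np.zeros((n, n))
    theta = np.zeros((n, n))
    back = np.full((n, n), -1, dtype=int)
    for l in range(2, n + 1):                  # subsequence length
        for j in range(0, n - l + 1):
            m = j + l - 1
            cost[j, m] = math.inf
            for k in range(j, m):
                # sufficient statistic of the split: left block j..k, right block k+1..m
                v_bar = Dmat[k + 1 : m + 1, j : k + 1].sum() / N
                L, R = k - j + 1, m - k
                th = fit_theta(L, R, v_bar)
                sc = th * v_bar + log_gaussian_poly(L, R, math.exp(-th))
                s = cost[j, k] + cost[k + 1, m] + sc
                if s < cost[j, m]:
                    cost[j, m], theta[j, m], back[j, m] = s, th, k
    return cost, theta, back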
Algorithm 3 SASEARCH
Input: set E, discrepancy matrix D computed from observed data, inverse temperature β
Initialize: estimate GMM τ0 by BRANCH&BOUND, τbest ← τ0
for t = 1, 2, ..., tmax do
    accept ← FALSE
    while accept = FALSE do
        sample π ∼ P(π | τ_{t−1})
        τ' ← STRUCTBYDP(π, D)
        τ' ← CANONICALPERMUTATION(τ')
        π' ← reference order of τ'
        τ' ← STRUCTBYDP(π', D)
        accept ← TRUE, u ∼ uniform[0, 1)
        if e^{−β(ln P(D|τ_{t−1}) − ln P(D|τ'))} < u then
            accept ← FALSE
        end if
    end while
    τt ← τ' (store accepted new model)
    if P(D|τt) > P(D|τbest) then
        τbest ← τt
    end if
end for
Output: τbest
To evaluate the running time of the STRUCTBYDP algorithm, we consider the inner loop over k for a given l. This loop computes V̄, θ̂, Z for each L, R split of l, with L + R = l. Apparently, this would take time cubic in l, since V̄ is a summation over LR terms. However, one can notice that in the calculations of all V̄ values over this submatrix of size l × l, for L = 1, 2, ..., l − 1, each of the D_{rl} elements is added once to the sum, is kept in the sum for a number of steps, and then is removed. Therefore, the total number of additions and subtractions is no more than twice l(l − 1)/2, the number of submatrix elements. Estimating θ and the score involves computing Z by (3) (for the score) and its gradient (for the θ estimation). These take min(L, R) < l operations per iteration. If we consider the number of iterations to convergence a constant, then the inner loop over k will take O(l²) operations. Since there are n − l subsequences of length l, it is easy now to see that the running time of the whole STRUCTBYDP algorithm is of order n⁴.
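Concretely, this amortization replaces the double sum in the sketch above with a constant number of slice updates per split (ours):

def split_sums(Dmat, j, m):
    """Yield (k, V) for every split of elements j..m, where V is the summed
    discrepancy between the left block j..k and the right block k+1..m.

    Each matrix entry is added and later removed at most once across the
    sweep, so the whole loop costs O((m - j)^2) rather than O((m - j)^3).
    """
    v = 0.0
    for k in range(j, m):
        v += Dmat[k + 1 : m + 1, k].sum()   # element k joins the left block
        v -= Dmat[k, j:k].sum()             # ... and stops counting as a right element
        yield k, v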
3.3 A local search algorithm
Next we develop a local search algorithm for the structure when a reference permutation is not provided. In part, this approach is motivated by previous work on structure estimation for the Mallows model, where the structure is a permutation. For these problems, researchers have found that an approach in which one greedily improves the log-likelihood by transposing adjacent elements, coupled with a good initialization, is a very effective approximate optimization method (Schalekamp & van Zuylen, 2009; Ali & Meila, 2011).
We take a similar approach and treat the problem as a search for good reference permutations, leveraging the STRUCTBYDP algorithm to find the structure given a reference permutation. At a high level, we initialize πτ = π0 by estimating a GMM from the data D and then improve πτ by "local changes" starting from π0.
We rely on estimation of a GMM for initialization but, unfortunately, the ML estimation of a Mallows model, as well as that of a GMM, is NP-hard (Bartholdi et al., 1989). For the initialization, we can use any of the fast heuristic methods for estimating a Mallows model, or a more computationally expensive search algorithm. The latter approach, if the search space is small enough, can find a provably optimal permutation but, in most cases, it will return a suboptimal result.
For the local search, we make two variations with respect to the previous works, and we add a local optimization step specific to the class of recursive inversion models. First, we replace the greedy search with a simulated annealing search; thus, we generate proposal permutations π' near the current π. Second, the proposal permutations π' are not restricted to pairwise transpositions. Instead, we sample a permutation π' from the current RIM τt. The reason is that if some of the pairs e ↔ e' are only weakly ordered by τt (which would happen if the ordering of e, e' is not well supported by the data), then the sampling process will be likely to create inversions between these pairs. Conversely, if τt puts a very high confidence on e ≺ e', then it is probable that this ordering is well supported by the data, and reversing it will be improbable in the proposed π.
For each accepted proposal permutation π, we estimate the optimal structure τ given this π and the optimal parameters θ̂ given the structure τ. Rather than sampling a permutation from the RIM τ(θ̂) directly, we then apply CANONICALPERMUTATION, which does not change the log-likelihood, to convert τ(θ̂) into a canonical model and perform another structure optimization step STRUCTBYDP. This has the chance of once again increasing the log-likelihood, and experimentally we find that it often does increase the log-likelihood significantly. We then use the estimated structure and associated parameters to sample a new permutation. These steps are implemented by algorithm SASEARCH.
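Sampling a proposal π' from the current RIM reduces to sampling each merge independently: interleavings of the left and right blocks carry weight q^{#inversions} with q = exp(−θi), and the Gaussian-polynomial recurrence G(n, m; q) = G(n−1, m; q) + qⁿ G(n, m−1; q) yields exact per-step choice probabilities. A sketch (ours, not the paper's code):

import math
import random

def _G(n, m, q):
    """Gaussian polynomial G(n, m; q) of (4); binomial coefficient at q = 1."""
    if q == 1.0:
        return math.comb(n + m, m)
    poch = lambda k: math.prod(1.0 - q ** i for i in range(1, k + 1))
    return poch(n + m) / (poch(n) * poch(m))

def sample_merge(left_seq, right_seq, q, rng=random):
    """Interleave two sequences, keeping each one's internal order, with
    probability proportional to q**(#pairs where a right item precedes a left item)."""
    out, i, j = [], 0, 0
    n, m = len(left_seq), len(right_seq)
    while i < n or j < m:
        nl, nr = n - i, m - j               # items remaining on each side
        if nl and nr:
            take_left = rng.random() < _G(nl - 1, nr, q) / _G(nl, nr, q)
        else:
            take_left = nl > 0
        if take_left:
            out.append(left_seq[i]); i += 1
        else:
            out.append(right_seq[j]); j += 1
    return out

def sample_rim(node, rng=random):
    """Draw a permutation from a RIM given as nested tuples (theta, left, right)."""
    if not isinstance(node, tuple):
        return [node]
    theta, left, right = node
    return sample_merge(sample_rim(left, rng), sample_rim(right, rng),
                        math.exp(-theta), rng)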
4 Related work
In addition to the Mallows and GMM models, our RIM model is related to the work of Mannila & Meek (2000). To understand the connection between that work and our RIM model, consider a restricted RIM model in which parameter values can either be 0 or ∞. Such a model provides a uniform distribution over permutations consistent with a series-parallel partial order defined in terms of the binary recursive partition, where a parameter whose value is 0 corresponds to a parallel combination and a parameter value of ∞ corresponds to a series combination. The work of Mannila & Meek (2000) considers the problem of learning the structures and estimating the parameters of mixtures of these series-parallel RIM models using a local greedy search over recursive partitions of elements.
Another close connection exists between RIM models and the riffle independence (RI) models proposed by Huang et al. (2009); Huang & Guestrin (2012); Huang et al. (2012). Both approaches use a recursive partitioning of the set of elements to define a distribution over permutations. Unlike the RIM model, the RI model is not defined in terms of inversions but rather in terms of independence between the merging processes. The RI model requires exponentially more parameters than the RIM model due to the fact that the model defines a general distribution over mergings, which grows exponentially in the cardinality of the left and right sets of elements. In addition, the RI models do not have the same ease of interpretation as the RIM model. For instance, one cannot easily extract a reference permutation or modal permutation from a given RI model, and the comparison of alternative RI models, even when the two RI models have the same structure, is limited to the comparison of rank marginals and Fourier coefficients.

[Figure 2 panels: Irish Meath elections (left); Sushi (middle); learned sushi structure (right).]
Figure 2: Log-likelihood scores for the models alph, HG, and GMM as differences from the log-likelihood of the SASEARCH output, on held-out sets from Meath elections data (left) and Sushi data (middle). Train/test set splits were 90/2400 and 300/4700 respectively, with 50 random replications. Negative scores indicate that a model has lower likelihood than the model obtained by SASEARCH. The far outlier(s) in Meath represent one run where SA scored poorly on the test set. Right: most common structure and typical parameters learned for the sushi data. Interior nodes contain the associated parameter value, with higher values and darker green indicating a stronger ordering between the items in the left and right subtrees. The leaves are the different types of sushi.
It is worth noting that there has been a wide range of approaches that use multiple reference permutations. One benefit of such approaches is that they enable the model to capture multi-modal distributions over permutations. Examples include the mixture modeling approach of Mannila & Meek (2000) discussed above and the work of Lebanon & Lafferty (2002) and Klementiev et al. (2008), where the model is a weighted product of a set of Mallows models, each with its own reference order. It is natural to consider both mixtures and products of RIM models.
5 Experiments
We performed experiments on synthetic data and real-world data sets. In our synthetic experiments we found that our approach was typically able to identify both the structure and parameters of the generative model. More specifically, we ran extensive experiments with n = 16 and n = 33, choosing the model structures to have varying degrees of balance and the parameters randomly with exp(−θi) between 0.4 and 0.9. We then used these RIMs to generate datasets containing varying numbers of permutations to investigate whether the true model could be recovered. We found that all models were recoverable with high probability when using between 200–1000 SASEARCH iterations. We did find that the identification of the correct tree structure in its entirety typically required a large sample size. We note that failures to identify the correct structure were typically due to the fact that alternative structures had higher likelihood than the generating structure in a particular sample, rather than a failure of the search algorithm. While our experiments had at most n = 33, this was not due to the running time of the algorithms. For instance, STRUCTBYDP ran in a few seconds for domains with 33 items. For the smaller domains and for the real-world data below, the whole search with hundreds of accepted proposals typically ran in less than three minutes. In particular, this search was faster than the BRANCH&BOUND search for GMM models.
In our experiments on real-world data we examine two datasets. The first is an Irish House of Parliament election dataset from the Meath constituency in Ireland. The parliament uses the single transferable vote election system, in which voters rank candidates. There were 14 candidates in the 2002 election, running for five seats. Candidates are associated with the two major rival political parties, as well as a number of smaller parties. We use the roughly 2500 fully ranked ballots from the election. See Gormley & Murphy (2007) for more details about the dataset. The second dataset consists of 5,000 permutations of 10 different types of sushi, where each permutation captures preferences about sushi (Kamishima, 2003). The different types of sushi considered are: anago (sea eel), ebi (shrimp), ika (squid), kappa-maki (cucumber roll), maguro (tuna), sake (salmon), tamago (egg), tekka-maki (tuna roll), toro (fatty tuna), and uni (sea urchin).
We compared a set of alternative recursive inversion models and approaches for identifying their structure. Our baseline approach, denoted alph, is one where the reference permutation is alphabetical and fixed, and we estimate the optimal structure given that order by STRUCTBYDP. Our second approach, GMM, is to use the BRANCH&BOUND algorithm of Mandhani & Meila (2009)² to estimate a Generalized Mallows Model. A third approach, HG, is to fit the optimal RIM parametrization to the hierarchical tree structure identified by Huang & Guestrin (2012) on the same data.³ Finally, we search over both structures and orderings with SASEARCH, with 150 (100) iterations for Meath (sushi) at temperature 0.02.
The quantitative results are shown in Figure 2. We plot the difference in test log-likelihood for each model as compared with SASEARCH. We see that on the Meath data SASEARCH outperforms alph in 94% of the runs, HG in 75%, and GMM in 98%; on the Sushi data, SASEARCH is always superior to alph and GMM, and has higher likelihood than HG in 75% of runs. On the training sets, SASEARCH always had the best fit (not shown).
We also investigated the structure and parameters of the learned models. For the Meath data we found that there was significant variation in the learned structure across runs. Despite the variation, there were a number of substructures common to the learned models. Similar to the findings in Huang & Guestrin (2012) on the structure of a learned riffle independence model, we found that candidates from the same party were typically separated from candidates of other parties as a group. In addition, within these political clusters we found systematic preference orderings among the candidates. Thus, many substructures in our trees were also found in the HG tree. In addition, again as found by Huang & Guestrin (2012), we found that a single candidate in an extreme political party is typically split near the top of the hierarchy, with a θ ≈ 0, indicating that this candidate can be inserted anywhere in a ranking. We suspect that the inability of a GMM to capture such dependencies leads to its poor empirical performance relative to HG and the full search, which can capture such dependencies. We note that alph is allowed to have θi < 0, and therefore the alphabetic reference permutation does not represent a major handicap.
For the sushi data, roughly 90% of the runs had the structure shown in Figure 2, with the other variants being quite similar. The structure found is interesting in a number of different ways. First, the model captures a strong preference ordering among different varieties of tuna (toro, maguro and tekka), which corresponds with the typical price of these varieties. Second, the model captures a preference against tamago and kappa-maki as compared with several other types of sushi; both of these varieties are distinct in that they are not fish but rather egg and cucumber, respectively. Finally, uni (sea urchin), which many people describe as being quite distinct in flavor, is ranked independently of preferences between other sushi and, additionally, there is no consensus on its rank.
² www.stat.washington.edu/mmp/intransitive.html
³ We would have liked to make a direct comparison with the algorithm of Huang & Guestrin (2012), but the code was not available. Due to this, we aim only at comparing the quality of the HG structure, a structure found to model these data well albeit with a different estimation algorithm, with the structures found by SASEARCH.
References
Ali, Alnur and Meila, Marina. Experiments with Kemeny ranking: What works when? Mathematical Social Sciences, Special Issue on Computational Social Choice, 2011 (in press).
Andrews, G. E. The Theory of Partitions. Cambridge University Press, 1985.
Bartholdi, J., Tovey, C. A., and Trick, M. Voting schemes for which it can be difficult to tell who won. Social Choice and Welfare, 6(2):157–165, 1989.
Earley, Jay. An efficient context-free parsing algorithm. Communications of the ACM, 13(2):94–102, 1970.
Fligner, M. A. and Verducci, J. S. Distance based ranking models. Journal of the Royal Statistical Society B, 48:359–369, 1986.
Gormley, I. C. and Murphy, T. B. A latent space model for rank data. In Proceedings of the 24th Annual International Conference on Machine Learning, pp. 90–102, New York, 2007. ACM.
Huang, Jonathan and Guestrin, Carlos. Uncovering the riffled independence structure of ranked data. Electronic Journal of Statistics, 6:199–230, 2012.
Huang, Jonathan, Guestrin, Carlos, and Guibas, Leonidas. Fourier theoretic probabilistic inference over permutations. Journal of Machine Learning Research, 10:997–1070, May 2009.
Huang, Jonathan, Kapoor, Ashish, and Guestrin, Carlos. Riffled independence for efficient inference with partial rankings. Journal of Artificial Intelligence Research, 44:491–532, 2012.
Kamishima, T. Nantonac collaborative filtering: recommendation based on order responses. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 583–588, New York, 2003. ACM.
Klementiev, Alexandre, Roth, Dan, and Small, Kevin. Unsupervised rank aggregation with distance-based models. In Proceedings of the 25th International Conference on Machine Learning, pp. 472–479, New York, NY, USA, 2008. ACM.
Lebanon, Guy and Lafferty, John. Cranking: Combining rankings using conditional probability models on permutations. In Proceedings of the 19th International Conference on Machine Learning, pp. 363–370, 2002.
Mallows, C. L. Non-null ranking models. Biometrika, 44:114–130, 1957.
Mandhani, Bhushan and Meila, Marina. Better search for learning exponential models of rankings. In van Dyk, David and Welling, Max (eds.), Artificial Intelligence and Statistics (AISTATS), number 12, 2009.
Mannila, Heikki and Meek, Christopher. Global partial orders from sequential data. In Proceedings of the Sixth Annual Conference on Knowledge Discovery and Data Mining (KDD), pp. 161–168, 2000.
Schalekamp, Frans and van Zuylen, Anke. Rank aggregation: Together we're strong. In Finocchi, Irene and Hershberger, John (eds.), Proceedings of the Workshop on Algorithm Engineering and Experiments, ALENEX 2009, pp. 38–51. SIAM, 2009.
Through Correlation Explanation
Greg Ver Steeg
Information Sciences Institute
University of Southern California
Marina del Rey, CA 90292
[email protected]
Aram Galstyan
Information Sciences Institute
University of Southern California
Marina del Rey, CA 90292
[email protected]
Abstract
We introduce a method to learn a hierarchy of successively more abstract representations of complex data based on optimizing an information-theoretic objective. Intuitively, the optimization searches for a set of latent factors that best explain the correlations in the data as measured by multivariate mutual information.
The method is unsupervised, requires no model assumptions, and scales linearly
with the number of variables which makes it an attractive approach for very high
dimensional systems. We demonstrate that Correlation Explanation (CorEx) automatically discovers meaningful structure for data from diverse sources including
personality tests, DNA, and human language.
1 Introduction
Without any prior knowledge, what can be automatically learned from high-dimensional data? If
the variables are uncorrelated then the system is not really high-dimensional but should be viewed
as a collection of unrelated univariate systems. If correlations exist, however, then some common
cause or causes must be responsible for generating them. Without assuming any particular model for
these hidden common causes, is it still possible to reconstruct them? We propose an information-theoretic principle, which we refer to as "correlation explanation", that codifies this problem in a
model-free, mathematically principled way. Essentially, we are searching for latent factors so that,
conditioned on these factors, the correlations in the data are minimized (as measured by multivariate
mutual information). In other words, we look for the simplest explanation that accounts for the most
correlations in the data. As a bonus, building on this information-based foundation leads naturally to
an innovative paradigm for learning hierarchical representations that is more tractable than Bayesian
structure learning and provides richer insights than neural network inspired approaches [1].
After introducing the principle of "Correlation Explanation" (CorEx) in Sec. 2, we show that it can
be efficiently implemented in Sec. 3. To demonstrate the power of this approach, we begin Sec. 4
with a simple synthetic example and show that standard learning techniques all fail to detect high-dimensional structure while CorEx succeeds. In Sec. 4.2.1, we show that CorEx perfectly reverse
engineers the "big five" personality types from survey data while other approaches fail to do so. In
Sec. 4.2.2, CorEx automatically discovers in DNA nearly perfect predictors of independent signals
relating to gender, geography, and ethnicity. In Sec. 4.2.3, we apply CorEx to text and recover
both stylistic features and hierarchical topic representations. After briefly considering intriguing
theoretical connections in Sec. 5, we conclude with future directions in Sec. 6.
2 Correlation Explanation
Using standard notation [2], capital X denotes a discrete random variable whose instances are written in lowercase. A probability distribution over a random variable X, pX (X = x), is shortened
to p(x) unless ambiguity arises. The cardinality of the set of values that a random variable can
take will always be finite and denoted by |X|. If we have n random variables, then G is a subset
of indices G ⊆ Nn = {1, . . . , n} and XG is the corresponding subset of the random variables
(XNn is shortened to X). Entropy is defined in the usual way as H(X) ≡ EX [− log p(x)]. Higher-order entropies can be constructed in various ways from this standard definition. For instance, the
mutual information between two random variables, X1 and X2 can be written I(X1 : X2 ) =
H(X1 ) + H(X2 ) − H(X1 , X2 ).
The following measure of mutual information among many variables was first introduced as "total correlation" [3] and is also called multi-information [4] or multivariate mutual information [5].
TC(XG ) = ∑_{i∈G} H(Xi ) − H(XG )        (1)
For G = {i1 , i2 }, this corresponds to the mutual information, I(Xi1 : Xi2 ). T C(XG ) is nonnegative and zero if and only if the probability distribution factorizes.
In fact, total correlation can also be written as a KL divergence, TC(XG ) = DKL ( p(xG ) || ∏_{i∈G} p(xi ) ).
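To make Eq. 1 concrete, the sketch below (our own illustration, not part of CorEx) estimates TC from samples by plugging empirical frequencies into each entropy term; this direct estimator is only practical for small n, since H(X) is computed over joint configurations.

import numpy as np
from collections import Counter

def entropy(counts):
    # Shannon entropy (in nats) of the empirical distribution given by counts
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log(p)).sum())

def total_correlation(X):
    # X: (N, n) integer array of N samples of n discrete variables
    N, n = X.shape
    h_marginals = sum(entropy(Counter(X[:, i].tolist())) for i in range(n))
    h_joint = entropy(Counter(map(tuple, X.tolist())))
    return h_marginals - h_joint            # TC(X) = sum_i H(X_i) - H(X)

rng = np.random.default_rng(0)
z = rng.integers(0, 2, size=10000)
X = np.column_stack([z, z, rng.integers(0, 2, size=10000)])   # two copies of z plus noise
print(total_correlation(X))                 # close to log 2, the information shared by the copies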
The total correlation among a group of variables, X, after conditioning on some other variable, Y , is simply TC(X|Y ) = ∑_i H(Xi |Y ) − H(X|Y ). We can measure the extent to which Y explains
the correlations in X by looking at how much the total correlation is reduced.
TC(X; Y ) ≡ TC(X) − TC(X|Y ) = ∑_{i∈Nn} I(Xi : Y ) − I(X : Y )        (2)
We use semicolons as a reminder that T C(X; Y ) is not symmetric in the arguments, unlike mutual
information. TC(X|Y ) is zero (and TC(X; Y ) maximized) if and only if the distribution of Xi's conditioned on Y factorizes. This would be the case if Y were the common cause of all the Xi's
in which case Y explains all the correlation in X. T C(XG |Y ) = 0 can also be seen as encoding
local Markov properties among a group of variables and, therefore, specifying a DAG [6]. This
quantity has appeared as a measure of the redundant information that the Xi's carry about Y [7].
More connections are discussed in Sec. 5.
Optimizing over Eq. 2 can now be seen as a search for a latent factor, Y , that explains the correlations
in X. We can make this concrete by letting Y be a discrete random variable that can take one of k
possible values and searching over all probabilistic functions of X, p(y|x).
max_{p(y|x)} TC(X; Y )   s.t. |Y | = k        (3)
The solution to this optimization is given as a special case in Sec. A. Total correlation is a functional
over the joint distribution, p(x, y) = p(y|x)p(x), so the optimization implicitly depends on the data
through p(x). Typically, we have only a small number of samples drawn from p(x) (compared to
the size of the state space). To make matters worse, if x ∈ {0, 1}^n then optimizing over all p(y|x)
involves at least 2n variables. Surprisingly, despite these difficulties we show in the next section
that this optimization can be carried out efficiently. The maximum achievable value of this objective
occurs for some finite k when T C(X|Y ) = 0. This implies that the data are perfectly described by
a naive Bayes model with Y as the parent and Xi as the children.
Generally, we expect that correlations in data may result from several different factors. Therefore,
we extend the optimization above to include m different factors, Y1 , . . . , Ym .1
max_{Gj , p(yj |xGj )} ∑_{j=1}^{m} TC(XGj ; Yj )   s.t. |Yj | = k,  Gj ∩ Gj′ = ∅ for all j′ ≠ j        (4)
Here we simultaneously search subsets of variables Gj and over variables Yj that explain the correlations in each group. While it is not necessary to make the optimization tractable, we impose
an additional condition on Gj so that each variable Xi is in a single group, Gj , associated with a
single "parent", Yj . The reason for this restriction is that it has been shown that the value of the
objective can then be interpreted as a lower bound on T C(X) [8]. Note that this objective is valid
1 Note that in principle we could have just replaced Y in Eq. 3 with (Y1 , . . . , Ym ), but the state space would have been exponential in m, leading to an intractable optimization.
and meaningful regardless of details about the data-generating process. We only assume that we are
given p(x) or iid samples from it.
The output of this procedure gives us Yj's, which are probabilistic functions of X. If we iteratively apply this optimization to the resulting probability distribution over Y by searching for some Z1 , . . . , Zm′ that explain the correlations in the Y's, we will end up with a hierarchy of variables
that forms a tree. We now show that the optimization in Eq. 4 can be carried out efficiently even for
high-dimensional spaces and small numbers of samples.
3 CorEx: Efficient Implementation of Correlation Explanation
We begin by re-writing the optimization in Eq. 4 in terms of mutual informations using Eq. 2.
max_{G, p(yj |x)} ∑_{j=1}^{m} ∑_{i∈Gj} I(Yj : Xi ) − ∑_{j=1}^{m} I(Yj : XGj )        (5)
Next, we replace G with a set indicator variable, αi,j = I[Xi ∈ Gj ] ∈ {0, 1}.
max_{α, p(yj |x)} ∑_{j=1}^{m} ∑_{i=1}^{n} αi,j I(Yj : Xi ) − ∑_{j=1}^{m} I(Yj : X)        (6)
The non-overlapping group constraint is enforced by demanding that ∑_{j′} αi,j′ = 1 for each i. Note also that
we dropped the subscript Gj in the second term of Eq. 6 but this has no effect because solutions
must satisfy I(Yj : X) = I(Yj : XGj ), as we now show.
For fixed ?, it is straightforward to find the solution of the Lagrangian optimization problem as the
solution to a set of self-consistent equations. Details of the derivation can be found in Sec. A.
p(yj |x) = (1 / Zj (x)) · p(yj ) · ∏_{i=1}^{n} ( p(yj |xi ) / p(yj ) )^{αi,j}        (7)
p(yj |xi ) = ∑_{x̄} p(yj |x̄) p(x̄) δ_{x̄i , xi} / p(xi )   and   p(yj ) = ∑_{x̄} p(yj |x̄) p(x̄)        (8)
Note that δ is the Kronecker delta and that Yj depends only on the Xi for which αi,j is non-zero.
Remarkably, Yj's dependence on X can be written in terms of a linear (in n, the number of variables)
number of parameters which are just the marginals, p(yj ), p(yj |xi ). We approximate p(x) with the
empirical distribution, p̂(x̄) = ∑_{l=1}^{N} δ_{x̄, x(l)} / N . This approximation allows us to estimate marginals
with fixed accuracy using only a constant number of iid samples from the true distribution. In Sec. A
we show that Eq. 7, which defines the soft labeling of any x, can be seen as a linear function followed
by a non-linear threshold, reminiscent of neural networks. Also note that the normalization constant
for any x, Zj (x), can be calculated easily by summing over just |Yj | = k values.
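As an illustration, Eq. 7 can be evaluated in the log domain for numerical stability. In the sketch below the marginals, α values, and dimensions are random stand-ins (in the actual algorithm they come from Eq. 8 and Eq. 9):

import numpy as np

def latent_posterior(x, alpha_j, log_p_y, log_p_y_given_xi):
    # x: length-n vector of discrete observations (ints in 0..n_states-1)
    # alpha_j: length-n connection strengths for latent factor j
    # log_p_y: length-k vector of log p(y_j)
    # log_p_y_given_xi: (n, n_states, k) array of log p(y_j | x_i = v)
    log_post = log_p_y.copy()
    for i, v in enumerate(x):
        log_post = log_post + alpha_j[i] * (log_p_y_given_xi[i, v] - log_p_y)
    log_Z = np.logaddexp.reduce(log_post)    # normalization over the k states of y_j
    return np.exp(log_post - log_Z), log_Z

rng = np.random.default_rng(0)
n, n_states, k = 5, 2, 3
p_y = rng.dirichlet(np.ones(k))
p_y_given_xi = rng.dirichlet(np.ones(k), size=(n, n_states))
alpha_j = rng.uniform(0.5, 1.0, size=n)
x = rng.integers(0, n_states, size=n)
posterior, log_Z = latent_posterior(x, alpha_j, np.log(p_y), np.log(p_y_given_xi))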
For fixed values of the parameters p(yj |xi ), we have an integer linear program for α made easy by the constraint ∑_{j′} αi,j′ = 1. The solution is α∗i,j = I[ j = arg max_{j′} I(Xi : Yj′ ) ]. However, this leads
to a rough optimization space. The solution in Eq. 7 is valid (and meaningful, see Sec. 5 and [8]) for arbitrary values of α, so we relax our optimization accordingly. At step t = 0 in the optimization, we pick α_{i,j}^{t=0} ∼ U(1/2, 1) uniformly at random (violating the constraints). At step t + 1, we make
a small update on α in the direction of the solution.
α_{i,j}^{t+1} = (1 − λ) α_{i,j}^{t} + λ α_{i,j}^{∗∗}        (9)
The second term, α_{i,j}^{∗∗} = exp( γ ( I(Xi : Yj ) − max_{j′} I(Xi : Yj′ ) ) ), implements a soft-max which converges to the true solution for α∗ in the limit γ → ∞. This leads to a smooth optimization, and good choices for λ, γ can be set through intuitive arguments described in Sec. B.
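A minimal sketch of the update in Eq. 9, with lam and gamma standing for the λ and γ above; the default values below are arbitrary choices for the sketch, not the tuned constants of Sec. B:

import numpy as np

def update_alpha(alpha, MI, lam=0.3, gamma=10.0):
    # alpha, MI: (n, m) arrays, with MI[i, j] = I(X_i : Y_j) estimated from the marginals
    best = MI.max(axis=1, keepdims=True)
    target = np.exp(gamma * (MI - best))   # soft-max; approaches a hard assignment as gamma grows
    return (1.0 - lam) * alpha + lam * target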
Now that we have rules to update both ? and p(yj |xi ) to increase the value of the objective, we
simply iterate between them until we achieve convergence. While there is no guarantee to find the
global optimum, the objective is upper bounded by T C(X) (or equivalently, T C(X|Y ) is lower
bounded by 0). Pseudo-code for this approach is described in Algorithm 1 with additional details
provided in Sec. B and source code available online2 . The overall complexity is linear in the number
2 Open source code is available at http://github.com/gregversteeg/CorEx.
input : A matrix of size ns × n representing ns samples of n discrete random variables
set   : Set m, the number of latent variables, Yj , and k, so that |Yj | = k
output: Parameters αi,j , p(yj |xi ), p(yj ), p(y|x(l) )
        for i ∈ Nn , j ∈ Nm , l ∈ Nns , y ∈ Nk , xi ∈ Xi
Randomly initialize αi,j , p(y|x(l) );
repeat
    Estimate marginals, p(yj ), p(yj |xi ) using Eq. 8;
    Calculate I(Xi : Yj ) from marginals;
    Update α using Eq. 9;
    Calculate p(y|x(l) ), l = 1, . . . , ns using Eq. 7;
until convergence;
Algorithm 1: Pseudo-code implementing Correlation Explanation (CorEx)
of variables. To bound the complexity in terms of the number of samples, we can always use minibatches of fixed size to estimate the marginals in Eq. 8.
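For concreteness, the marginal-estimation step of the loop (Eq. 8) can be sketched as follows, assuming the soft labels p(yj | x(l)) from the previous iteration are stored row-wise and the empirical distribution is taken over the current (mini)batch:

import numpy as np

def estimate_marginals(X, soft_labels, n_states, eps=1e-12):
    # X: (N, n) int array of samples; soft_labels: (N, k) array with rows p(y_j | x^(l))
    N, n = X.shape
    p_y = soft_labels.mean(axis=0)                         # p(y_j) in Eq. 8
    p_y_given_xi = np.zeros((n, n_states, soft_labels.shape[1]))
    for i in range(n):
        for v in range(n_states):
            mask = X[:, i] == v
            # p(y_j | x_i = v): average soft label over the samples with x_i = v
            p_y_given_xi[i, v] = soft_labels[mask].sum(axis=0) / (mask.sum() + eps)
    return p_y, p_y_given_xi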
A common problem in representation learning is how to pick m, the number of latent variables
to describe the data. Consider the limit in which we set m = n. To use all Y1 , . . . , Ym in our
representation, we would need exactly one variable, Xi , in each group, Gj . Then ∀j, TC(XGj ) = 0
and, therefore, the whole objective will be 0. This suggests that the maximum value of the objective
must be achieved for some value of m < n. In practice, this means that if we set m too high,
only some subset of latent variables will be used in the solution, as we will demonstrate in Fig. 2.
In other words, if m is set high enough, the optimization will result in some number of clusters
m′ < m that is optimal with respect to the objective. Representations with different numbers of
layers, different m, and different k can be compared according to how tight of a lower bound they
provide on T C(X) [8].
4 Experiments
4.1 Synthetic data
[Figure 1 graphic: the left panel plots accuracy (ARI) against the number of observed variables n (from 2^4 to 2^11) for CorEx, Spectral, K-means, ICA, NMF, N.Net:RBM, PCA, Spectral Bi-clustering, Isomap, LLE, and hierarchical clustering; the right panel depicts the synthetic model, a tree with root Z, a hidden layer Y1 , . . . , Yb , and observed leaves X1 , . . . , Xn .]
Figure 1: (Left) We compare methods to recover the clusters of variables generated according to the
model. (Right) Synthetic data is generated according to a tree of latent variables.
To test CorEx's ability to recover latent structure from data we begin by generating synthetic data
according to the latent tree model depicted in Fig. 1 in which all the variables are hidden except
for the leaf nodes. The most difficult part of reconstructing this tree is clustering of the leaf nodes.
If a clustering method can do that then the latent variables can be reconstructed for each cluster
easily using EM. We consider many different clustering methods, typically with several variations
of each technique, details of which are described in Sec. C. We use the adjusted Rand index (ARI)
to measure the accuracy with which inferred clusters recover the ground truth. 3
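The ARI itself is available off the shelf, e.g. in scikit-learn [34]; a two-line sanity check with made-up labels:

from sklearn.metrics import adjusted_rand_score
print(adjusted_rand_score([0, 0, 1, 1], [1, 1, 0, 0]))   # 1.0: ARI is invariant to label permutation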
We generated samples from the model in Fig. 1 with b = 8 and varied c, the number of leaves per
branch. The Xi's depend on Yj's through a binary erasure channel (BEC) with erasure probability δ. The capacity of the BEC is 1 − δ, so we let δ = 1 − 2/c to reflect the intuition that the signal from each parent node is weakly distributed across all its children (but cannot be inferred from a single child). We generated max(200, 2n) samples. In this example, all the Yj's are weakly correlated
with the root node, Z, through a binary symmetric channel with flip probability of 1/3.
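A sketch of this generative process (encoding erasures as a third symbol, 2, is our own convention for the illustration):

import numpy as np

def sample_tree(N, b=8, c=16, seed=0):
    rng = np.random.default_rng(seed)
    delta = 1.0 - 2.0 / c                      # BEC erasure probability
    Z = rng.integers(0, 2, size=N)             # root node
    flips = rng.random((N, b)) < 1.0 / 3.0     # BSC(1/3) between Z and each Y_j
    Y = (Z[:, None] ^ flips).astype(int)
    X = np.repeat(Y, c, axis=1)                # c leaves per parent Y_j
    X[rng.random(X.shape) < delta] = 2         # erased observations marked with symbol 2
    return X, Y, Z

X, Y, Z = sample_tree(N=200, b=8, c=16)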
Fig. 1 shows that for a small to medium number of variables, all the techniques recover the structure
fairly well, but as the dimensionality increases only CorEx continues to do so. ICA and hierarchical
clustering compete for second place. CorEx also perfectly recovers the values of the latent factors
in this example. For latent tree models, recovery of the latent factors gives a global optimum of the
objective in Eq. 4. Even though CorEx is only guaranteed to find local optima, in this example it
correctly converges to the global optimum over a range of problem sizes.
Note that a growing literature on latent tree learning attempts to reconstruct latent trees with theoretical guarantees [9, 10]. In principle, we should compare to these techniques, but they scale as O(n^2) to O(n^5) (see [31], Table 1) while our method is O(n). In a recent survey on latent tree learning methods, only one out of 15 techniques was able to run on the largest dataset considered (see [31], Table 3), while most of the datasets in this paper are orders of magnitude larger than that one.
[Figure 2 graphic: snapshots of the connectivity matrix αi,j (rows i = 1, . . . , n; columns j = 1, . . . , m, shown on top) and of the mutual information I(Yj : Xi ) (bottom, on a 0 to 1 color scale) at iterations t = 0, 10, and 50; the block of uncorrelated variables remains disconnected.]
Figure 2: (Color online) A visualization of structure learning in CorEx, see text for details.
Fig. 2 visualizes the structure learning process.4 This example is similar to that above but includes
some uncorrelated random variables to show how they are treated by CorEx. We set b = 5 clusters
of variables but we used m = 10 hidden variables. At each iteration, t, we show which hidden variables, Yj , are connected to input variables, Xi , through the connectivity matrix, ? (shown on top).
The mutual information is shown on the bottom. At the beginning, we started with full connectivity,
but with nothing learned we have I(Yj : Xi ) = 0. Over time, the hidden units "compete" to find
a group of Xi ?s for which they can explain all the correlations. After only ten iterations the overall
structure appears and by 50 iterations it is exactly described. At the end, the uncorrelated random
variables (Xi's) and the hidden variables (Yj's) which have not explained any correlations can be
easily distinguished and discarded (visually and mathematically, see Sec. B).
4.2 Discovering Structure in Diverse Real-World Datasets
4.2.1 Personality Surveys and the "Big Five" Personality Traits
One psychological theory suggests that there are five traits that largely reflect the differences in
personality types [11]: extraversion, neuroticism, agreeableness, conscientiousness and openness to
experience. Psychologists have designed various instruments intended to measure whether individuals exhibit these traits. We consider a survey in which subjects rate fifty statements, such as, "I am the life of the party", on a five point scale: (1) disagree, (2) slightly disagree, (3) neutral, (4)
slightly agree, and (5) agree.5 The data consist of answers to these questions from about ten thousand test-takers. The test was designed with the intention that each question should belong to a
3 Rand index counts the percentage of pairs whose relative classification matches in both clusterings. ARI adds a correction so that a random clustering will give a score of zero, while an ARI of 1 corresponds to a perfect match.
4 A video is available online at http://isi.edu/~gregv/corex_structure.mpg.
5 Data and full list of questions are available at http://personality-testing.info/_rawdata/.
cluster according to which personality trait the question gauges. Is it true that there are five factors
that strongly predict the answers to these questions?
CorEx learned a two-level hierarchical representation when applied to this data (full model shown
in Fig. C.2). On the first level, CorEx automatically determined that the questions should cluster
into five groups. Surprisingly, the five clusters exactly correspond to the big five personality traits
as labeled by the test designers. It is unusual to recover the ground truth with perfect accuracy on
an unsupervised learning problem so we tried a number of other standard clustering methods to see
if they could reproduce this result. We display the results using confusion matrices in Fig. 3. The
details of the techniques used are described in Sec. C but all of them had an advantage over CorEx
since they required that we specify the correct number of clusters. None of the other techniques are
able to recover the five personality types exactly.
Interestingly, Independent Component Analysis (ICA) [12] is the only other method that comes close. The intuition behind ICA is that it finds a linear transformation on the input that minimizes the multi-information among the outputs (Yj ). In contrast, CorEx searches for Yj's so that multi-information among the Xi's is minimized after conditioning on Y . ICA assumes that the signals that give rise to the data are independent while CorEx does not. In this case, personality traits like "extraversion" and "agreeableness" are correlated, violating the independence assumption.
[Figure 3 graphic: cluster labels annotated with ARI scores, including Subsah. Africa (ARI:0.92), America (ARI:0.99), Oceania (ARI:1.00), East (ARI:0.87), EurAsia, and gender; the remaining labels in the extracted figure are too garbled to recover.]
Figure 3: (Left) Confusion matrix comparing predicted clusters to true clusters for the questions on
the Big-5 personality test. (Right) Hierarchical model constructed from samples of DNA by CorEx.
4.2.2 DNA from the Human Genome Diversity Project
Next, we consider DNA data taken from 952 individuals of diverse geographic and ethnic backgrounds [13]. The data consist of 4170 variables describing different SNPs (single nucleotide polymorphisms).6 We use CorEx to learn a hierarchical representation which is depicted in Fig. 3. To
evaluate the quality of the representation, we use the adjusted Rand index (ARI) to compare clusters
induced by each latent variable in the hierarchical representation to different demographic variables
in the data. Latent variables which substantially match demographic variables are labeled in Fig. 3.
The representation learned (unsupervised) on the first layer contains a perfect match for Oceania (the
Pacific Islands) and nearly perfect matches for America (Native Americans), Subsaharan Africa, and
gender. The second layer has three variables which correspond very closely to broad geographic
regions: Subsaharan Africa, the "East" (including China, Japan, Oceania, America), and EurAsia.
4.2.3 Text from the Twenty Newsgroups Dataset
The twenty newsgroups dataset consists of documents taken from twenty different topical message
boards with about a thousand posts each [14]. For analyzing unstructured text, typical feature engineering approaches heuristically separate signals like style, sentiment, or topics. In principle, all
6 Data, descriptions of SNPs, and detailed demographics of subjects are available at ftp://ftp.cephb.fr/hgdp_v3/.
three of these signals manifest themselves in terms of subtle correlations in word usage. Recent attempts at learning large-scale unsupervised hierarchical representations of text have produced interesting results [15], though validation is difficult because quantitative measures of representation quality often do not correlate well with human judgment [16].
To focus on linguistic signals, we removed meta-data like headers, footers, and replies even though
these give strong signals for supervised newsgroup classification. We considered the top ten thousand most frequent tokens and constructed a bag of words representation. Then we used CorEx to
learn a five level representation of the data with 326 latent variables in the first layer. Details are
described in Sec. C.1. Portions of the first three levels of the tree keeping only nodes with the highest
normalized mutual information with their parents are shown in Fig. 4 and in Fig. C.1.7
[Figure 4 graphic: tree of latent variables over word clusters; the legend on the right lists the twenty newsgroup names (alt.atheism, comp.graphics, comp.os.ms-windows.misc, comp.sys.ibm.pc.hardware, comp.sys.mac.hardware, comp.windows.x, misc.forsale, rec.autos, rec.motorcycles, rec.sport.baseball, rec.sport.hockey, sci.crypt, sci.electronics, sci.med, sci.space, soc.religion.christian, talk.politics.guns, talk.politics.mideast, talk.politics.misc, talk.religion.misc), their abbreviations (aa, cg, cms, cpc, cmac, cwx, mf, ra, rm, rsb, rsh, sc, se, sm, ss, src, tpg, tmid, tmisc, trm), and broad groupings (rel, comp, misc, vehic, sport, sci, talk).]
Figure 4: Portions of the hierarchical representation learned for the twenty newsgroups dataset. We
label latent variables that overlap significantly with known structure. Newsgroup names, abbreviations, and broad groupings are shown on the right.
To provide a more quantitative benchmark of the results, we again test to what extent learned representations are related to known structure in the data. Each post can be labeled by the newsgroup
it belongs to, according to broad categories (e.g. groups that include "comp"), or by author. Most
learned binary variables were active in around 1% of the posts, so we report the fraction of activations that coincide with a known label (precision) in Fig. 4. Most variables clearly represent
sub-topics of the newsgroup topics, so we do not expect high recall. The small portion of the tree
shown in Fig. 4 reflects intuitive relationships that contain hierarchies of related sub-topics as well
as clusters of function words (e.g. pronouns like "he/his/him" or tense with "have/be").
Once again, several learned variables perfectly captured known structure in the data. Some users
sent images in text using an encoded format. One feature matched all the image posts (with perfect precision and recall) due to the correlated presence of unusual short tokens. There were also
perfect matches for three frequent authors: G. Banks, D. Medin, and B. Beauchaine. Note that the
learned variables did not trigger if just their names appeared in the text, but only for posts they
authored. These authors had elaborate signatures with long, identifiable quotes that evaded preprocessing but created a strongly correlated signal. Another variable with perfect precision for the
?forsale? newsgroup labeled comic book sales (but did not activate for discussion of comics in other
newsgroups). Other nearly perfect predictors described extensive discussions of Armenia/Turkey in
talk.politics.mideast (a fifth of all discussion in that group), specialized unix jargon, and a match for
sci.crypt which had 90% precision and 55% recall. When we ranked all the latent factors according
to a normalized version of Eq. 2, these examples all showed up in the top 20.
5 Connections and Related Work
While the basic measures used in Eq. 1 and Eq. 2 have appeared in several contexts [7, 17, 4, 3, 18], the interpretation of these quantities is an active area of research [19, 20]. The optimizations we define
have some interesting but less obvious connections. For instance, the optimization in Eq. 3 is similar
7 An interactive tool for exploring the full hierarchy is available at http://bit.ly/corexvis.
to one recently introduced as a measure of "common information" [21]. The objective in Eq. 6 (for a single Yj ) appears exactly as a bound on "ancestral" information [22]. For instance, if all the αi = 1/γ then Steudel and Ay [22] show that the objective is positive only if at least 1 + γ variables share a common ancestor in any DAG describing them. This provides extra rationale for relaxing our original optimization to include non-binary values of αi,j .
The most similar learning approach to the one presented here is the information bottleneck [23] and its extension the multivariate information bottleneck [24, 25]. The motivation behind information
bottleneck is to compress the data (X) into a smaller representation (Y ) so that information about
some relevance term (typically labels in a supervised learning setting) is maintained. The second
term in Eq. 6 is analogous to the compression term. Instead of maximizing a relevance term, we
are maximizing information about all the individual sub-systems of X, the Xi . The most redundant
information in the data is preferentially stored while uncorrelated random variables are completely
ignored.
The broad problem of transforming complex data into simpler, more meaningful forms goes under the rubric of representation learning [26], which shares many goals with dimensionality reduction and subspace clustering. Insofar as our approach learns a hierarchy of representations it superficially resembles "deep" approaches like neural nets and autoencoders [27, 28, 29, 30]. While those approaches are scalable, a common critique is that they involve many heuristics discovered through trial-and-error that are difficult to justify. On the other hand, a rich literature on learning latent tree models [31, 32, 9, 10] has excellent theoretical properties but does not scale well. By basing our method on an
information-theoretic optimization that can nevertheless be performed quite efficiently, we hope to
preserve the best of both worlds.
6 Conclusion
The most challenging open problems today involve high-dimensional data from diverse sources
including human behavior, language, and biology.8 The complexity of the underlying systems makes
modeling difficult. We have demonstrated a model-free approach to learn successfully more coarsegrained representations of complex data by efficiently optimizing an information-theoretic objective.
The principle of explaining as much correlation in the data as possible provides an intuitive and fully
data-driven way to discover previously inaccessible structure in high-dimensional systems.
It may seem surprising that CorEx should perfectly recover structure in diverse domains without
using labeled data or prior knowledge. On the other hand, the patterns discovered are "low-hanging fruit" from the right point of view. Intelligent systems should be able to learn robust and general patterns in the face of rich inputs even in the absence of labels to define what is important. Information
that is very redundant in high-dimensional data provides a good starting point.
Several fruitful directions stand out. First, the promising preliminary results invite in-depth investigations on these and related problems. From a computational point of view, the main work of the
algorithm involves a matrix multiplication followed by an element-wise non-linear transform. The
same is true for neural networks and they have been scaled to very large data using, e.g., GPUs.
On the theoretical side, generalizing this approach to allow non-tree representations appears both
feasible and desirable [8].
Acknowledgments
We thank Virgil Griffith, Shuyang Gao, Hsuan-Yi Chu, Shirley Pepke, Bilal Shaw, Jose-Luis Ambite,
and Nathan Hodas for helpful conversations. This research was supported in part by AFOSR grant
FA9550-12-1-0417 and DARPA grant W911NF-12-1-0034.
References
[1] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing
properties of neural networks. In ICLR, 2014.
8 In principle, computer vision should be added to this list. However, the success of unsupervised feature learning with neural nets for vision appears to rely on encoding generic priors about vision through heuristics like convolutional coding and max pooling [33]. Since CorEx is a knowledge-free method it will perform
relatively poorly unless we find a way to also encode these assumptions.
[2] Thomas M Cover and Joy A Thomas. Elements of information theory. Wiley-Interscience, 2006.
[3] Satosi Watanabe. Information theoretical analysis of multivariate correlation. IBM Journal of research
and development, 4(1):66?82, 1960.
[4] M. Studený and J. Vejnarová. The multiinformation function as a tool for measuring stochastic dependence.
In Learning in graphical models, pages 261?297. Springer, 1998.
[5] Alexander Kraskov, Harald Stögbauer, Ralph G Andrzejak, and Peter Grassberger. Hierarchical clustering
using mutual information. EPL (Europhysics Letters), 70(2):278, 2005.
[6] J. Pearl. Causality: Models, Reasoning and Inference. Cambridge University Press, NY, NY, USA, 2009.
[7] Elad Schneidman, William Bialek, and Michael J Berry. Synergy, redundancy, and independence in
population codes. the Journal of Neuroscience, 23(37):11539?11553, 2003.
[8] Greg Ver Steeg and Aram Galstyan. Maximally informative hierarchical representations of highdimensional data. arXiv:1410.7404, 2014.
[9] Animashree Anandkumar, Kamalika Chaudhuri, Daniel Hsu, Sham M Kakade, Le Song, and Tong Zhang.
Spectral methods for learning multivariate latent tree structure. In NIPS, pages 2025?2033, 2011.
[10] Myung Jin Choi, Vincent YF Tan, Animashree Anandkumar, and Alan S Willsky. Learning latent tree
graphical models. The Journal of Machine Learning Research, 12:1771?1812, 2011.
[11] Lewis R Goldberg. The development of markers for the big-five factor structure. Psychological assessment, 4(1):26, 1992.
[12] Aapo Hyvärinen and Erkki Oja. Independent component analysis: algorithms and applications. Neural
networks, 13(4):411?430, 2000.
[13] N.A. Rosenberg, J.k. Pritchard, J.L. Weber, H.M. Cann, K.K. Kidd, L.A. Zhivotovsky, and M.W. Feldman.
Genetic structure of human populations. Science, 298(5602):2381?2385, 2002.
[14] K. Bache and M. Lichman. UCI machine learning repository, 2013.
[15] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations
in vector space. arXiv:1301.3781, 2013.
[16] Jonathan Chang, Jordan L Boyd-Graber, Sean Gerrish, Chong Wang, and David M Blei. Reading tea
leaves: How humans interpret topic models. In NIPS, volume 22, pages 288?296, 2009.
[17] Elad Schneidman, Susanne Still, Michael J Berry, William Bialek, et al. Network information and connected correlations. Physical Review Letters, 91(23):238701, 2003.
[18] Nihat Ay, Eckehard Olbrich, Nils Bertschinger, and Jürgen Jost. A unifying framework for complexity
measures of finite systems. Proceedings of European Complex Systems Society, 2006.
[19] P.L. Williams and R.D. Beer. Nonnegative decomposition of multivariate information. arXiv:1004.2515,
2010.
[20] Virgil Griffith and Christof Koch. Quantifying synergistic mutual information. arXiv:1205.4265, 2012.
[21] Gowtham Ramani Kumar, Cheuk Ting Li, and Abbas El Gamal. Exact common information.
arXiv:1402.0062, 2014.
[22] B. Steudel and N. Ay. Information-theoretic inference of common ancestors. arXiv:1010.5720, 2010.
[23] Naftali Tishby, Fernando C Pereira, and William Bialek. The information bottleneck method.
arXiv:physics/0004057, 2000.
[24] Noam Slonim, Nir Friedman, and Naftali Tishby. Multivariate information bottleneck. Neural Computation, 18(8):1739?1789, 2006.
[25] Noam Slonim. The information bottleneck: Theory and applications. PhD thesis, Citeseer, 2002.
[26] Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 35(8):1798?1828, 2013.
[27] Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504?507, 2006.
[28] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to
document recognition. Proceedings of the IEEE, 86(11):2278?2324, 1998.
[29] Yann LeCun and Yoshua Bengio. Convolutional networks for images, speech, and time series. The
handbook of brain theory and neural networks, 3361, 1995.
[30] Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. Greedy layer-wise training of deep
networks. Advances in neural information processing systems, 19:153, 2007.
[31] Raphaël Mourad, Christine Sinoquet, Nevin L Zhang, Tengfei Liu, Philippe Leray, et al. A survey on
latent tree models and applications. J. Artif. Intell. Res.(JAIR), 47:157?203, 2013.
[32] Ryan Prescott Adams, Hanna M Wallach, and Zoubin Ghahramani. Learning the structure of deep sparse
graphical models. arXiv:1001.0160, 2009.
[33] H. Lee, R. Grosse, R. Ranganath, and A. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In ICML, 2009.
[34] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer,
R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay.
Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825?2830, 2011.
| 5580 |@word nihat:1 repository:1 version:1 trial:1 compression:1 briefly:1 achievable:1 open:2 heuristically:1 hyv:1 tried:1 decomposition:1 citeseer:1 pick:2 carry:1 reduction:1 electronics:1 liu:1 sah:1 score:1 series:1 lichman:1 daniel:1 contains:1 genetic:1 document:2 dubourg:1 interestingly:1 africa:4 bilal:1 comparing:1 com:1 surprising:1 activation:1 intriguing:2 reminiscent:1 luis:1 must:3 chu:1 written:4 grassberger:1 informative:1 christian:1 designed:2 update:3 joy:1 intelligence:1 discovering:2 leaf:4 greedy:1 accordingly:1 beginning:1 sys:2 short:1 fa9550:1 blei:1 provides:4 node:5 simpler:1 zhang:2 five:11 constructed:3 consists:1 dan:1 interscience:1 introduce:1 blondel:1 ra:2 ica:6 behavior:1 mpg:1 themselves:1 isi:3 growing:1 multi:2 brain:1 salakhutdinov:1 inspired:1 automatically:4 window:2 cardinality:1 considering:1 provided:1 gamal:1 notation:1 matched:1 bonus:1 medium:1 underlying:1 bounded:2 what:3 unrelated:1 cm:1 interpreted:1 substantially:1 minimizes:1 transformation:1 guarantee:2 pseudo:2 quantitative:2 interactive:1 zaremba:1 exactly:5 rm:1 scaled:1 sale:1 unit:1 grant:2 ly:1 christof:1 positive:1 engineering:1 local:2 dropped:1 slonim:2 limit:2 despite:1 shortened:2 encoding:2 analyzing:1 critique:1 subscript:1 resembles:1 china:1 wallach:1 specifying:1 relaxing:1 suggests:2 multiinformation:2 challenging:1 bi:1 range:1 medin:1 acknowledgment:1 responsible:1 lecun:2 yj:46 testing:1 practice:1 implement:1 procedure:1 cmac:1 area:1 erasure:2 empirical:1 significantly:1 boyd:1 intention:1 word:6 griffith:2 prescott:1 zoubin:1 cannot:1 close:1 synergistic:1 context:1 writing:1 restriction:1 fruitful:1 dean:1 demonstrated:1 lagrangian:1 maximizing:2 straightforward:1 regardless:1 starting:1 williams:1 go:1 survey:5 tomas:1 hsuan:1 recovery:1 unstructured:1 rule:1 insight:1 lamblin:1 his:1 population:2 searching:3 variation:1 analogous:1 hierarchy:5 trigger:1 tan:1 user:1 today:1 exact:1 goldberg:1 goodfellow:1 element:2 recognition:1 rec:4 continues:1 bache:1 native:1 bec:2 labeled:5 observed:1 bottom:1 wang:1 calculate:2 thousand:3 region:1 connected:2 nevin:1 eu:2 removed:1 highest:1 src:1 principled:1 intuition:2 transforming:1 inaccessible:1 complexity:4 signature:1 aram:2 weakly:2 depend:1 tight:1 passos:1 baseball:1 eric:1 completely:1 easily:3 darpa:1 joint:1 various:2 america:2 talk:8 derivation:1 describe:1 activate:1 sc:1 labeling:1 header:1 whose:2 richer:1 elad:2 kai:1 larger:1 heuristic:2 s:1 reconstruct:2 quite:1 relax:1 ability:1 taker:1 transform:1 online:2 advantage:1 net:3 propose:1 galstyan:3 fr:1 frequent:2 uci:1 motorcycle:1 pronoun:1 poorly:1 chaudhuri:1 achieve:1 intuitive:3 description:1 anderror:1 sutskever:1 parent:4 convergence:2 optimum:4 cluster:13 generating:3 adam:1 converges:2 perfect:9 ftp:2 measured:2 ex:1 eq:19 strong:1 soc:1 sa:1 implemented:1 predicted:2 implies:1 larochelle:1 come:1 involves:2 direction:3 i2g:2 closely:1 correct:1 stochastic:1 human:6 cpc:1 implementing:1 explains:3 arinen:1 polymorphism:1 olbrich:1 geography:1 preliminary:1 really:1 investigation:1 varoquaux:1 ryan:1 mathematically:2 adjusted:2 exploring:1 extension:1 correction:1 around:1 considered:2 ground:2 koch:1 visually:1 exp:1 predict:1 m0:1 forsale:2 ruslan:1 estimation:1 bag:1 label:4 prettenhofer:1 quote:1 him:1 largest:1 basing:1 gauge:1 successfully:1 tool:2 reflects:1 hope:1 rough:1 clearly:1 always:2 openness:1 pn:1 factorizes:2 rosenberg:1 linguistic:1 encode:1 focus:1 grisel:1 contrast:1 cg:1 detect:1 am:3 helpful:1 kidd:1 inference:2 el:2 
lowercase:1 nn:2 typically:3 hidden:6 ancestor:2 reproduce:1 i1:1 ralph:1 arg:1 overall:2 classification:2 among:5 pascal:2 denoted:1 development:2 gramfort:1 fairly:1 urgen:1 initialize:1 mutual:12 once:1 special:1 ng:1 pof:1 biology:1 broad:4 look:1 icml:1 unsupervised:6 nearly:3 future:1 minimized:2 yoshua:4 report:1 intelligent:1 randomly:1 oja:1 preserve:1 simultaneously:1 intell:1 divergence:1 individual:3 replaced:1 intended:1 jeffrey:1 william:3 friedman:1 attempt:2 message:1 cournapeau:1 chong:1 pc:1 behind:2 necessary:1 experience:1 nucleotide:1 unless:2 tree:15 re:2 theoretical:5 epl:1 psychological:2 instance:4 modeling:1 soft:2 ar:8 w911nf:1 cover:1 measuring:1 mac:1 introducing:1 subset:4 neutral:1 predictor:2 too:1 graphic:1 tishby:2 stored:1 answer:2 synthetic:5 nns:1 st:2 rapha:1 ancestral:1 lee:1 physic:1 probabilistic:2 xi1:1 michael:2 ym:3 tpg:1 concrete:1 connectivity:2 again:2 thesis:1 ambiguity:1 successively:1 reflect:2 nm:1 worse:1 book:1 american:1 leading:1 style:1 michel:1 li:1 szegedy:1 japan:1 account:1 diversity:1 sec:19 coding:1 includes:1 matter:1 satisfy:1 depends:2 performed:1 view:2 root:1 portion:3 bayes:1 recover:8 xgj:5 greg:3 accuracy:4 convolutional:3 largely:1 efficiently:5 maximized:1 judgment:1 correspond:2 bayesian:1 vincent:2 produced:1 iid:2 none:1 comp:11 visualizes:1 ah:2 explain:4 begin:3 definition:1 rsh:1 crypt:2 obvious:1 naturally:1 associated:1 rbm:1 recovers:1 hsu:1 dataset:4 animashree:2 recall:3 manifest:1 reminder:1 dimensionality:3 color:1 conversation:1 subtle:1 knowledge:3 ramani:1 sean:1 ea:4 appears:4 jair:1 violating:2 supervised:2 leray:1 specify:1 maximally:1 rand:3 wei:1 yb:1 trm:1 strongly:2 though:3 just:4 video:1 reply:1 until:2 autoencoders:1 correlation:28 hand:2 invite:1 su:4 o:1 scikit:1 assessment:1 evaded:1 marker:1 overlapping:1 del:2 defines:1 yf:1 quality:2 artif:1 usage:1 effect:1 building:1 contain:1 armenia:1 true:6 isomap:1 name:2 tense:1 geographic:2 normalized:2 symmetric:2 jargon:1 iteratively:1 i2:1 misc:5 attractive:1 self:1 maintained:1 naftali:2 m:1 ay:3 theoretic:4 demonstrate:3 confusion:2 christine:1 snp:2 reasoning:1 weber:1 wise:2 image:3 discovers:2 recently:1 ari:10 encoded:1 common:9 specialized:1 functional:1 physical:1 hugo:1 conditioning:2 volume:1 discussed:1 interpretation:1 belong:1 relating:1 trait:6 interpret:1 extend:1 refer:1 he:1 marginals:5 cambridge:1 feldman:1 dag:2 language:2 had:3 bruna:1 gj:9 add:1 patrick:1 multivariate:8 recent:2 showed:1 perspective:1 optimizing:4 belongs:1 driven:1 reverse:1 meta:1 binary:4 success:1 life:1 yi:1 captured:1 seen:3 additional:2 impose:1 fri:1 fernando:1 redundant:3 ogbauer:1 corrado:1 fric:1 branch:1 full:4 desirable:1 sham:1 turkey:1 schneidman:2 signal:8 alan:1 smooth:1 match:7 af:2 long:1 post:5 marina:2 europhysics:1 dkl:1 jost:1 scalable:2 basic:1 aapo:1 n5:1 essentially:1 vision:3 arxiv:8 iteration:3 represent:1 normalization:1 abbas:1 achieved:1 harald:1 background:1 remarkably:1 source:4 extra:1 fifty:1 unlike:1 nv:1 subject:2 med:1 pooling:1 sent:1 induced:1 seem:1 jordan:1 integer:1 anandkumar:2 kraskov:1 presence:1 bengio:4 insofar:1 ethnicity:1 easy:1 iterate:1 enough:1 newsgroups:4 independence:2 perfectly:5 haffner:1 politics:4 bottleneck:6 whether:1 pca:1 sentiment:1 song:1 peter:1 speech:1 cause:4 rey:2 deep:4 ignored:1 generally:1 detailed:1 se:1 involve:2 authored:1 ten:3 hardware:2 category:1 dna:5 simplest:1 vejnarova:1 http:4 reduced:1 exist:1 percentage:1 zj:2 designer:1 delta:1 neuroscience:1 correctly:1 per:1 
diverse:5 discrete:3 brucher:1 tea:1 group:10 redundancy:1 threshold:1 nevertheless:1 drawn:1 capital:1 shirley:1 hierarch:1 fraction:1 enforced:1 compete:2 run:1 letter:2 unix:1 jose:1 place:1 stylistic:1 yann:2 discover:1 ric:2 steudel:2 bit:1 bound:4 layer:6 guaranteed:1 followed:2 courville:1 display:1 nonnegative:2 identifiable:1 constraint:3 kronecker:1 gregv:2 x2:4 ri:6 erkki:1 semicolon:1 nathan:1 argument:2 innovative:1 kumar:1 mikolov:1 px:1 format:1 relatively:1 gpus:1 pacific:1 according:7 hanging:1 project:1 across:1 slightly:2 reconstructing:1 em:1 smaller:1 island:1 kakade:1 b:2 psychologist:1 intuitively:1 explained:1 taken:2 equation:1 visualization:1 previously:1 agree:2 describing:2 count:1 fail:2 xi2:1 thirion:1 letting:1 flip:1 tractable:2 instrument:1 demographic:3 unusual:2 end:2 rubric:1 available:6 coarsegrained:1 apply:2 hierarchical:12 spectral:3 generic:1 online2:1 distinguished:1 shaw:1 original:1 compress:1 personality:11 denotes:1 clustering:9 include:3 thomas:2 graphical:3 top:3 assumes:1 unifying:1 xc:1 eon:1 ting:1 ghahramani:1 society:1 objective:13 perrot:1 added:1 quantity:2 codifies:1 occurs:1 question:7 dependence:2 usual:1 bialek:3 southern:2 exhibit:1 iclr:1 subspace:1 gradient:1 separate:1 higherorder:1 thank:1 capacity:1 sci:9 gun:1 topic:6 extent:2 reason:1 willsky:1 assuming:1 code:5 index:4 relationship:1 cann:1 preferentially:1 equivalently:1 difficult:4 statement:1 info:1 noam:2 rise:1 susanne:1 implementation:1 twenty:4 perform:1 upper:1 disagree:2 datasets:2 sm:1 benchmark:1 finite:3 markov:1 jin:1 discarded:1 gowtham:1 philippe:1 hinton:1 looking:1 y1:4 topical:1 varied:1 discovered:2 pritchard:1 arbitrary:1 nmf:1 inferred:2 david:1 introduced:2 pair:1 required:1 vanderplas:1 extensive:1 kl:1 connection:4 z1:1 california:2 learned:9 pearl:1 nip:2 able:3 pattern:3 appeared:3 reading:1 program:1 max:9 including:3 explanation:8 rsb:1 belief:1 power:1 overlap:1 demanding:1 difficulty:1 rely:1 treated:1 ranked:1 ia:1 indicator:1 representing:1 github:1 started:1 xg:7 carried:2 created:1 naive:1 auto:1 paradigm:1 nir:1 text:7 prior:3 literature:2 berry:2 popovici:1 review:2 multiplication:1 python:1 relative:1 afosr:1 fully:1 expect:2 comic:2 rationale:1 interesting:2 geoffrey:1 validation:1 foundation:1 usa:1 consistent:1 fruit:1 beer:1 principle:7 myung:1 bank:1 uncorrelated:5 share:2 ibm:2 token:2 surprisingly:2 repeat:1 keeping:1 free:3 supported:1 side:1 lle:1 allow:1 institute:2 explaining:1 face:1 fifth:1 andrzejak:1 sparse:1 distributed:1 calculated:1 depth:1 valid:2 world:2 genome:1 superficially:1 rich:2 xn:1 made:1 stand:1 coincide:1 preprocessing:1 collection:1 author:3 erhan:1 party:1 transaction:1 ranganath:1 correlate:1 approximate:1 shuyang:1 informationtheoretic:1 implicitly:1 reconstructed:1 synergy:1 global:3 active:2 ver:2 handbook:1 summing:1 conclude:1 xi:36 fergus:1 search:4 latent:25 table:2 hockey:1 promising:1 channel:2 learn:6 robust:1 ca:3 hanna:1 excellent:1 complex:4 bottou:1 european:1 domain:1 did:2 main:1 linearly:1 whole:1 steeg:2 big:5 motivation:1 n2:1 nothing:1 child:3 atheism:1 graber:1 x1:5 ethnic:1 fig:13 causality:1 board:1 elaborate:1 grosse:1 ny:2 tong:1 n:3 precision:4 wiley:1 sub:4 watanabe:1 duchesnay:1 pereira:1 exponential:1 mideast:2 learns:1 choi:1 er:1 list:2 alt:1 grouping:1 intractable:1 consist:2 rel:3 kamalika:1 phd:1 magnitude:1 bertschinger:1 conditioned:2 nk:1 chen:1 mf:1 entropy:2 depicted:2 generalizing:1 simply:2 univariate:1 gao:1 religion:2 sport:4 xnn:1 chang:1 springer:1 aa:1 
gender:3 truth:2 gerrish:1 lewis:1 corresponds:2 minibatches:1 abbreviation:1 goal:1 viewed:1 quantifying:1 replace:1 absence:1 oceania:3 feasible:1 typical:1 except:1 reducing:1 determined:1 justify:1 uniformly:1 engineer:1 total:5 nil:1 called:1 succeeds:1 meaningful:4 east:1 newsgroup:5 pedregosa:1 aaron:1 highdimensional:2 arises:1 jonathan:1 alexander:1 relevance:2 evaluate:1 correlated:4 |
5,060 | 5,581 | Coresets for k-Segmentation of Streaming Data
Guy Rosman ∗ †
CSAIL, MIT
32 Vassar St., 02139,
Cambridge, MA USA
[email protected]
Mikhail Volkov †
CSAIL, MIT
32 Vassar St., 02139,
Cambridge, MA USA
[email protected]
Danny Feldman †
CSAIL, MIT
32 Vassar St., 02139,
Cambridge, MA USA
[email protected]
Daniela Rus †
CSAIL, MIT
32 Vassar St., 02139,
Cambridge, MA USA
[email protected]
John W. Fisher III
CSAIL, MIT
32 Vassar St., 02139,
Cambridge, MA USA
[email protected]
Abstract
Life-logging video streams, financial time series, and Twitter tweets are a few examples of high-dimensional signals over practically unbounded time. We consider
the problem of computing optimal segmentation of such signals by a k-piecewise
linear function, using only one pass over the data by maintaining a coreset for the
signal. The coreset enables fast further analysis such as automatic summarization
and analysis of such signals.
A coreset (core-set) is a compact representation of the data seen so far, which
approximates the data well for a specific task: in our case, segmentation of the
stream. We show that, perhaps surprisingly, the segmentation problem admits
coresets of cardinality only linear in the number of segments k, independently
of both the dimension d of the signal, and its number n of points. More precisely, we construct a representation of size O(k log n/ε^2) that provides a (1 + ε)-approximation for the sum of squared distances to any given k-piecewise linear
function. Moreover, such coresets can be constructed in a parallel streaming approach. Our results rely on a novel reduction of statistical estimations to problems
in computational geometry. We empirically evaluate our algorithms on very large
synthetic and real data sets from GPS, video and financial domains, using 255
machines in Amazon cloud.
1 Introduction
There is an increasing demand for systems that learn long-term, high-dimensional data streams.
Examples include video streams from wearable cameras, mobile sensors, GPS, financial data and
biological signals. In each, a time instance is represented as a high-dimensional feature, for example
location vectors, stock prices, or image content feature histograms.
We develop real-time algorithms for summarization and segmentation of large streams, by compressing the signals into a compact meaningful representation. This representation can then be used
to enable fast analyses such as summarization, state estimation and prediction. The proposed algorithms support data streams that are too large to store in memory, afford easy parallelization, and
are generic in that they apply to different data types and analyses. For example, the summarization
of wearable video data can be used to efficiently detect different scenes and important events, while
collecting GPS data for citywide drivers can be used to learn weekly transportation patterns and
characterize driver behavior.
∗ Guy Rosman was partially supported by the MIT-Technion fellowship.
† Support for this research has been provided by Hon Hai/Foxconn Technology Group and MIT Lincoln Laboratory. The authors are grateful for this support.
In this paper we use a data reduction technique called coresets [1, 9] to enable rapid content-based segmentation of data streams. Informally, a coreset D is a problem-dependent compression of the original data P , such that running algorithm A on the coreset D yields a result A(D) that
provably approximates the result A(P ) of running the algorithm on the original data. If the coreset
D is small and its construction is fast, then computing A(D) is fast even if computing the result
A(P ) on the original data is intractable. See definition 2 for the specific coreset which we develop
in this paper.
1.1 Main Contribution
The main contributions of the paper are: (i) A new coreset for the k-segmentation problem (as given
in Subsection 1.2) that can be computed at one pass over streaming data (with O(log n) insertion
time/space) and supports distributed computation. Unlike previous results, the insertion time per
new observation and required memory is only linear in both the dimension of the data, and the
number k of segments. This result is summarized in Theorem 4, and proven in the supplementary
material. Our algorithm is scalable, parallelizable, and provides a provable approximation of the
cost function. (ii) Using this novel coreset we demonstrate a new system for segmentation and
compression of streaming data. Our approach allows real-time summarization of large-scale video
streams in a way that preserves the semantic content of the aggregated video sequences, and is
easily extendable. (iii) Experiments to demonstrate our approach on various data types: video,
GPS, and financial data. We evaluate performance with respect to output size, running time and
quality and compare our coresets to uniform and random sample compression. We demonstrate the
scalability of our algorithm by running our system on an Amazon cluster with 255 machines with
near-perfect parallelism as demonstrated on 256, 000 frames. We also demonstrate the effectiveness
of our algorithm by running several analysis algorithms on the computed coreset instead of the
full data. Our implementation summarizes the video in less than 20 minutes, and allows real-time
segmentation of video streams at 30 frames per second on a single machine.
Streaming and Parallel computations. Maybe the most important property of coresets is that
even an efficient off-line construction implies a fast construction that can be computed (a) Embarrassingly in parallel (e.g. cloud and GPUs), (b) in the streaming model where the algorithm passes
only once over the (possibly unbounded) streaming data. Only a small amount of memory and update time (∼ log n) per new point insertion is allowed, where n is the number of observations so far.
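Concretely, the streaming construction follows the standard merge-and-reduce pattern for coresets: buffer a block of points, compress it, and repeatedly merge and re-compress equal-level summaries so that only O(log n) summaries are kept in memory. In the sketch below, compress stands in for the (k, ε)-coreset construction developed in this paper; the truncation used in the demo line is a placeholder, not a real coreset:

def merge_reduce_stream(points, leaf_size, compress):
    levels = {}                                   # level -> pending summary awaiting a sibling
    def push(summary, lvl):
        while lvl in levels:                      # merge equal-level siblings, then re-compress
            summary = compress(levels.pop(lvl) + summary)
            lvl += 1
        levels[lvl] = summary
    buf = []
    for p in points:
        buf.append(p)
        if len(buf) == leaf_size:
            push(compress(buf), 0)
            buf = []
    if buf:
        push(compress(buf), 0)
    out = []
    for lvl in sorted(levels):                    # final merge of the O(log n) summaries
        out = compress(out + levels[lvl]) if out else levels[lvl]
    return out

summary = merge_reduce_stream(range(10**5), leaf_size=512, compress=lambda s: list(s)[:256])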
1.2 Problem Statement
The k-segment mean problem optimally fits a given discrete time signal of n points by a set of k
linear segments over time, where k ≥ 1 is a given integer. That is, we wish to partition the signal
into k consecutive time intervals such that the points in each time interval are lying on a single line;
see Fig. 1(left) and the following formal definition.
We make the following assumptions with respect to the data: (a) We assume the data is represented by a feature space that suitably represents its underlying structure; (b) The content of the data
includes at most k segments that we wish to detect automatically; examples of this are scenes in a video, phases in the market as seen by stock behavior, etc.; and (c) The dimensionality of the
feature space is often quite large (from tens to thousands of features), with the specific choice of the
features being application dependent ? several examples are given in Section 3. This motivates the
following problem definition.
Definition 1 (k-segment mean). A set P in R^{d+1} is a signal if P = {(1, p1 ), (2, p2 ), · · · , (n, pn )} where pi ∈ R^d is the point at time index i for every i ∈ [n] = {1, · · · , n}. For an integer k ≥ 1, a k-segment is a k-piecewise linear function f : R → R^d that maps every time i ∈ R to a point f (i)
in Rd . The fitting error at time t is the squared distance between pi and its corresponding projected
point f (i) on the k-segments. The fitting cost of f to P is the sum of these squared distances,
cost(P, f ) =
n
X
kpi ? f (i)k22 ,
(1)
i=1
where k ? k denotes the Euclidean distance. The function f is a k-segment mean of P if it minimizes
cost(P, f ).
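To make the cost function concrete, the following minimal Python sketch (our own illustration, not code from the paper) evaluates cost(P, f) for a k-segment given as a list of pieces; the (b, e, A, c) piece layout is an assumption we make for illustration only:

```python
import numpy as np

def ksegment_cost(points, pieces):
    # points: array of shape (n, d); points[i - 1] is p_i at time index i.
    # pieces: list of (b, e, A, c) covering times 1..n, where a piece maps
    # time t to the point A * t + c (A and c are vectors in R^d).
    cost = 0.0
    for b, e, A, c in pieces:
        for t in range(b, e + 1):
            cost += np.sum((points[t - 1] - (A * t + c)) ** 2)
    return cost
```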
Figure 1: For every k-segment f, the cost of input points (red) is approximated by the cost of the coreset (dashed blue lines). Left: An input signal and a 3-segment f (green), along with the regression distance to one point (dashed black vertical lines). The cost of f is the sum of these squared distances from all the input points. Right: The coreset consists of the projection of the input onto few segments, with approximate per-segment representation of the data.
For the case k = 1 the 1-segment mean is the solution to the linear regression problem. If we restrict each of the k segments to be a horizontal segment, then each segment will be the mean height of the corresponding input points. The resulting problem is similar to the k-means problem, except each of the Voronoi cells is forced to be a single region in time, instead of nearest center assignment, i.e., the regions are contiguous.
In this paper we are interested in seeking a compact representation D that approximates cost(P, f) for every k-segment f, using the definition of cost′(D, f) given below. We denote a set D as a (k, ε)-coreset according to the following definition.
Definition 2 ((k, ε)-coreset). Let P ⊂ R^{d+1}, let k ≥ 1 be an integer, and let ε > 0 be small. A set D with a cost function cost′(·) is a (k, ε)-coreset for P if for every k-segment f we have

(1 − ε) cost(P, f) ≤ cost′(D, f) ≤ (1 + ε) cost(P, f).
We present a new coreset construction with provable approximations for a family of natural k-segmentation optimization problems. This is the first such construction whose running time is linear in the number of data points n, their dimensionality d, and the number k of desired segments. The resulting coreset consists of O(dk/ε²) points and approximates the sum of squared distances for any k-piecewise linear function (k segments over time). In particular, we can use this coreset to compute the k-piecewise linear function that minimizes the sum of squared distances to the input points, given arbitrary constraints or weights (priors) on the desired segmentation. Such a generalization is useful, for example, when we are already given a set of candidate segments (e.g., maps or distributions of images) and wish to choose the right k segments that approximate the input signal.
Previous results on coresets for k-segmentation achieved running time or coreset size that are at least quadratic in d and cubic in k [12, 11]. As such, they cannot be used with very large data, for example long streaming video data, which is usually high-dimensional and contains a large number of scenes. This prior work is based on non-uniform sampling of the input data. In order to achieve our results, we had to replace the sampling approach with a new set of deterministic algorithms that carefully select the coreset points.
1.3 Related Work
Our work builds on several important contributions in coresets, k-segmentations, and video summarization.
Approximation Algorithms. One of the main challenges in providing provable guarantees for
segmentation w.r.t. segmentation size and quality is global optimization. Current provable algorithms
for data segmentation are cubic-time in the number of desired segments, quadratic in the dimension
of the signal, and cannot handle both parallel and streaming computation as desired for big data.
The closest work that provides provable approximations is that of [12].
Several works attempt to summarize high-dimensional data streams in various application domains. For example, [19] describe the video stream as a high-dimensional stream and run approximated clustering algorithms such as k-center on the points of the stream; see [14] for surveys on
stream summarization in robotics. The resulting k-centers of the clusters comprise the video summarization. The main disadvantages of these techniques are (i) They partition the data stream into
k clusters that do not provide k-segmentation over time. (ii) Computing the k-center takes time
exponential in both d and k [16]. In [19] heuristics were used for dimension reduction, and in [14]
a 2-approximation was suggested for the off-line case, which was replaced by a heuristic for streaming. (iii) In the context of analysis of video streams, they use a feature space that is often simplistic and does not utilize the large amounts of available data efficiently. In our work the feature space can be updated
on-line using a coreset for k-means clustering of the features seen so far.
k-segment Mean. The k-segment mean problem can be solved exactly using dynamic programming
[4]. However, this takes O(dn²k) time and O(dn²) memory, which is impractical for streaming data.
In [15, Theorem 8] a (1 + ε)-approximation was suggested using O(n(dk)⁴ log n/ε) time. While
the algorithm in [15] supports efficient streaming, it is not parallel. Since it returns a k-segmentation
and not a coreset, it cannot be used to solve other optimization problems with additional priors or
constraints. In [12] an improved algorithm that takes O(nd²k + ndk³) time was suggested. The
algorithm is based on a coreset of size O(dk³/ε³). Unlike the coreset in this paper, the running time
of [12] is cubic in both d and k. The result in [12] is the last in a line of research for the k-segment
mean problem and its variations; see survey in [11, 15, 13]. The application was segmentation of
3-dimensional GPS signal (time, latitude, longitude). The coreset construction in [12] and previous
papers takes time and memory that is quadratic in the dimension d and cubic in the number of
segments k. Conversely, our coreset construction takes time only linear in both k and d. While recent
results suggest running time linear in n, and space that is near-logarithmic in n, the computation time
is still cubic in k, the number of segments, and quadratic in d, the dimension. Since the number k
represents the number of scenes, and d is the feature dimensionality, this complexity is prohibitive.
Video Summarization. One motivating application for us is online video summarization, where an input video stream can be represented by a set of points over time in an appropriate feature space.
Every point in the feature space represents the frame, and we aim to produce a compact approximation of the video in terms of this space and its Euclidean norm. Application-aware summarization
and analysis of ad-hoc video streams is a difficult task with many attempts aimed at tackling it from
various perspectives [5, 18, 2]. The problem is highly related to video action classification, scene
classification, and object segmentation [18]. Applications where life-long video stream analysis is
crucial include mapping and navigation medical / assistive interaction, and augmented-reality applications, among others. Our goal differs from video compression in that compression is geared
towards preserving image quality for all frames, and therefore stores semantically redundant content. Instead, we seek a summarization approach that allows us to represent the video content by a
set of key segments, for a given feature space.
This paper is organized as follows. We begin by describing the k-segmentation problem and the
proposed coresets, describe their construction, and discuss their properties in Section 2. We perform
several experiments in order to validate the proposed approach on data collected from GPS and
wearable web-cameras, and demonstrate the aggregation and analysis of multiple long sequences of
wearable user video in Section 3. Section 4 concludes the paper and discusses future directions.
2 A Novel Coreset for k-segment Mean
The key insights for constructing the k-segment coreset are: i) We observe that for the case k = 1,
a 1-segment coreset can be easily obtained using SVD. ii) For the general case k ≥ 2, we can
partition the signal into a suitable number of intervals, and compute a 1-segment coreset for each
such interval. If the number of intervals and their lengths are carefully chosen, most of them will be
well approximated by every k-segmentation, and the remaining intervals will not incur a large error
contribution.
Based on these observations, we propose the following construction. 1) Estimate the signal's complexity, i.e., the approximated fitting cost to its k-segment mean; we denote this step as a call to the algorithm BICRITERIA. 2) Given a complexity measure for the data, approximate the data by a set of segments with auxiliary information; this is the proposed coreset, denoted as the output of algorithm BALANCEDPARTITION.
We then prove that the resulting coreset allows us to approximate with guarantees the fitting cost
for any k-segmentation over the data, as well as compute an optimal k-segmentation. We state the
main result in Theorem 4, and describe the proposed algorithms as Algorithms 1 and 2. We refer the
reader to the supplementary material for further details and proofs.
2.1 Computing a k-Segment Coreset
We would like to compute a (k, ε)-coreset for our data. A (k, ε)-coreset D for a set P approximates the fitting cost of any query k-segment to P up to a small multiplicative error of 1 ± ε. We note that a (1, 0)-coreset can be computed using SVD; see the supplementary material for details and proof. However, for k > 2, we cannot approximate the data by a representative point set (we prove this in the supplementary material). Instead, we define a data structure D as our proposed coreset, and define a new cost function cost′(D, f) that approximates the cost of P to any k-segment f.
The set D consists of tuples of the type (C, g, b, e). Each tuple corresponds to a different time interval [b, e] in R and represents the set P(b, e) of points in this interval. g is the 1-segment mean of the data P in the interval [b, e]. The set C is a (1, ε)-coreset for P(b, e).
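For intuition, the 1-segment mean of an interval is an ordinary least-squares line fit. The following sketch (our own, using least squares rather than the SVD-based construction detailed in the supplementary material) computes it, together with the induced 1-segment fitting cost; both helpers are reused in the later sketches:

```python
import numpy as np

def one_segment_mean(times, points):
    # Least-squares line g(t) = A * t + c minimizing sum_i ||p_i - g(t_i)||^2.
    X = np.column_stack([times, np.ones_like(times)])   # design matrix [t, 1]
    coef, *_ = np.linalg.lstsq(X, points, rcond=None)   # rows of coef: A, c
    return coef[0], coef[1]

def fit_cost(times, points):
    # 1-segment fitting cost of the interval.
    A, c = one_segment_mean(times, points)
    return float(np.sum((points - (np.outer(times, A) + c)) ** 2))
```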
We note the following: 1) If all the points of the k-segment f are on the same segment in this time interval, i.e., {f(t) | b ≤ t ≤ e} is a linear segment, then the cost from P(b, e) to f can be approximated well by C, up to a (1 + ε) multiplicative error. 2) If we project the points of P(b, e) on their 1-segment mean g, then the projected set L of points will approximate well the cost of P(b, e) to f, even if f corresponds to more than one segment in the time interval [b, e]. Unlike the previous case, the error here is additive. 3) Since f is a k-segment, there will be at most k − 1 time intervals that will intersect more than two segments of f, so the overall additive error is small. This motivates the following definition of D and cost′.
Definition 3 (cost′(D, f)). Let D = {(C_i, g_i, b_i, e_i)}_{i=1}^{m} where for every i ∈ [m] we have C_i ⊂ R^{d+1}, g_i : R → R^d and b_i ≤ e_i ∈ R. For a k-segment f : R → R^d and i ∈ [m] we say that C_i is served by one segment of f if {f(t) | b_i ≤ t ≤ e_i} is a linear segment. We denote by Good(D, f) ⊆ [m] the union of indexes i such that C_i is served by one segment of f. We also define L_i = {g_i(t) | b_i ≤ t ≤ e_i}, the projection of C_i on g_i. We define cost′(D, f) as

cost′(D, f) = \sum_{i \in Good(D,f)} cost(C_i, f) + \sum_{i \in [m] \setminus Good(D,f)} cost(L_i, f).
Our coreset construction for general k > 1 is based on an input parameter σ > 0 such that for an appropriate σ the output is a (k, ε)-coreset. σ characterizes the complexity of the approximation. The BICRITERIA algorithm, given as Algorithm 1, provides us with such an approximation. Properties of this algorithm are described in the supplementary material.
Theorem 4. Let P = {(1, p_1), …, (n, p_n)} such that p_i ∈ R^d for every i ∈ [n]. Let f be the output of BICRITERIA(P, k), let σ = cost(P, f), and let D be the output of a call to BALANCEDPARTITION(P, ε, σ). Then D is a (k, ε)-coreset for P of size |D| = O(k log n / ε²), and can be computed in O(dn/ε⁴) time.
Proof. We give a sketch of the proof; the full proof is given in Theorem 10 of the supplementary material and its accompanying theorems. Lemma 8 states that given an estimate σ of the optimal segmentation cost, BALANCEDPARTITION(P, ε, σ) provides a (k, ε)-coreset of the data P. This hinges on the observation that given a fine enough segmentation of the time domain, for each segment we can approximate the data by an SVD with bounded error. This approximation is exact for 1-segments (see Claim 2 in the supplementary material), and can be bounded for k-segments because of the bounded number of segment intersections. According to Theorem 9 of the supplementary material, σ as computed by BICRITERIA(P, k) provides such an approximation.
Algorithm 1: BICRITERIA(P, k)
Input: A set P ⊆ R^{d+1} and an integer k ≥ 1.
Output: A bicriteria (O(log n), O(log n))-approximation to the k-segment mean of P.
  if n ≤ 2k + 1 then
      f := a 1-segment mean of P; return f
  Set t_1 ≤ … ≤ t_n and p_1, …, p_n ∈ R^d such that P = {(t_1, p_1), …, (t_n, p_n)}
  m := |{t ∈ R | (t, p) ∈ P}|
  Partition P into 2k sets P_1, …, P_{2k} ⊆ P such that for every i ∈ [2k − 1]:
      (i) |{t | (t, p) ∈ P_i}| = ⌊m/(2k)⌋, and
      (ii) if (t, p) ∈ P_i and (t′, p′) ∈ P_{i+1} then t < t′
  for i := 1 to 2k do
      compute a 2-approximation g_i to the 1-segment mean of P_i
  Q := the union of the k + 1 signals P_i with the smallest value cost(P_i, g_i) among i ∈ [2k]
  h := BICRITERIA(P \ Q, k)    // repartition the segments that do not have a good approximation
  Set f(t) := g_i(t) for all (t, p) ∈ P_i such that P_i ⊆ Q, and f(t) := h(t) otherwise
  return f
Algorithm 2: BALANCEDPARTITION(P, ε, σ)
Input: A set P = {(1, p_1), …, (n, p_n)} in R^{d+1}, an error parameter ε ∈ (0, 1/10), and σ > 0.
Output: A set D that satisfies Theorem 4.
  Q := ∅; D := ∅; p_{n+1} := an arbitrary point in R^d
  for i := 1 to n + 1 do
      Q := Q ∪ {(i, p_i)}                          // add the new point to the current tuple
      f* := a linear approximation of Q; λ := cost(Q, f*)
      if λ > σ or i = n + 1 then
          T := Q \ {(i, p_i)}                      // take all but the new point into a tuple
          C := a (1, ε/4)-coreset for T            // approximate the points by a local representation
          g := a linear approximation of T; b := i − |T|; e := i − 1    // save the endpoints
          D := D ∪ {(C, g, b, e)}                  // save the tuple
          Q := {(i, p_i)}                          // proceed to a new tuple
  return D
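A compact Python rendering of BALANCEDPARTITION follows (our own sketch; for brevity the (1, ε/4)-coreset C is stored as the interval's raw (t, p) rows, and one_segment_mean/fit_cost are the least-squares helpers sketched earlier):

```python
import numpy as np

def balanced_partition(points, sigma):
    # points[i - 1] = p_i in R^d. Grow a window Q of consecutive time indices;
    # once its 1-segment fitting cost exceeds sigma, flush the window (minus
    # the newest point) as a tuple (C, g, b, e).
    n = len(points)
    D, Q = [], []
    for i in range(1, n + 2):
        Q.append(i)
        ts = np.array([t for t in Q if t <= n], dtype=float)
        lam = fit_cost(ts, points[ts.astype(int) - 1]) if len(ts) > 1 else 0.0
        if lam > sigma or i == n + 1:
            T = [t for t in Q if t != i and t <= n]
            if T:
                tsT = np.array(T, dtype=float)
                ptsT = points[np.asarray(T) - 1]
                g = one_segment_mean(tsT, ptsT)
                C = np.column_stack([tsT, ptsT])   # stand-in for a (1, eps/4)-coreset
                D.append((C, g, T[0], T[-1]))
            Q = [i]                                # start a new window at time i
    return D
```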
For efficient k-segmentation we run a k-segment mean algorithm on our small coreset instead of
the original large input. Since the coreset is small we can apply dynamic programming (as in [4])
in an efficient manner. In order to compute a (1 + ε) approximation to the k-segment mean of the original signal P, it suffices to compute a (1 + ε) approximation to the k-segment mean of the coreset, where cost is replaced by cost′. However, since D is not a simple signal, but a more involved data structure, it is not clear how to run existing algorithms on D. In the supplementary material we show how to apply such algorithms on our coresets. In particular, we can run naive dynamic programming [4] on the coreset and get a (1 + ε)-approximate solution in an efficient manner, as we summarize below.
Theorem 5. Let P be a d-dimensional signal. A (1 + ε) approximation to the k-segment mean of P can be computed in O(ndk/ε + d(k log(n)/ε)^{O(1)}) time.
2.2 Parallel and Streaming Implementation
One major advantage of coresets is that they can be constructed in parallel as well as in a streaming
setting. The main observation is that the union of coresets is a coreset: if a data set is split into
subsets, and we compute a coreset for every subset, then the union of the coresets is a coreset of the
whole data set. This allows us to have each machine separately compute a coreset for a part of the
data, with a central node which approximately solves the optimization problem; see [10, Theorem
10.1] for more details and a formal proof. As we show in the supplementary material, this allows us
to use coresets in the streaming and parallel model.
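The standard merge-and-reduce pattern behind this observation can be sketched as follows (our own illustration; build_coreset stands for an off-line construction such as BALANCEDPARTITION, and merge for taking the union of two coresets followed by re-compression):

```python
def stream_coreset(chunks, build_coreset, merge):
    # Maintain a binary counter of coresets over a stream of data chunks;
    # levels[h] holds at most one coreset of height h, so memory stays
    # logarithmic in the number of chunks seen so far.
    levels = {}
    for chunk in chunks:
        node, h = build_coreset(chunk), 0
        while h in levels:                    # carry, like binary addition
            node = merge(levels.pop(h), node)
            h += 1
        levels[h] = node
    result = None
    for node in levels.values():              # fold the remaining partial levels
        result = node if result is None else merge(result, node)
    return result
```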
3 Experimental Results
We now demonstrate the results of our algorithm on four data types of varying length and dimensionality. We compare our algorithms against several other segmentation algorithms. We also show
that the coreset effectively improves the performance of several segmentation algorithms by running
the algorithms on our coreset instead of the full data.
3.1 Segmentation of Large Datasets
We first examine the behavior of the algorithm on synthetic data, which provides us with easy ground truth, to evaluate the quality of the approximation, as well as the efficiency and scalability of
the coreset algorithms. We generate synthetic test data by drawing a discrete k-segment P with
k = 20, and then add Gaussian and salt-and-pepper noise. We then benchmark the computed (k, ε)-coreset D by comparing it against piecewise linear approximations with (1) a uniformly sampled subset of control points U and (2) randomly placed control points R. For a fair comparison between the (k, ε)-coreset D and the corresponding approximations U, R we allow the same number
of coefficients for each approximation. Coresets are evaluated by computing the fitting cost to a
query k-segment Q that is constructed based on the a-priori parameters used to generate P .
(a) Coreset size vs. coreset error. (b) (k, ε)-coreset size vs. CPU time. (c) Coreset dimension vs. coreset error.
Figure 2: Figure 2a shows the coreset error (ε) decreasing as a function of coreset size. The dotted black line indicates the point at which the coreset size is equal to the input size. Figure 2b shows the coreset construction time in minutes as a function of coreset size. Trendlines show the linear increase in construction time with coreset size. Figure 2c shows the reduction in coreset error as a function of the dimensionality of the 1-segment coreset, for fixed input size (dimensionality can often be reduced down to R²).
Figure 3: Segmentation from Google Glass. Black vertical lines present segment boundaries, overlayed on top
of the bags of word representation. Icon images are taken from the middle of each segment.
Approximation Power: Figure 2a shows the aggregated fitting cost error for 1500 experiments on synthetic data. We varied the assumed segment complexity k′. In the plot we show how well a given k′ performed as a guess for the true value of k. As Figure 2a shows, we significantly outperform the other schemes. As the coreset size approaches the size of P, the error decreases to zero, as expected.
Coreset Construction Time: Figure 2b shows the linear relationship between input size and
construction time of D for different coreset size. Figure 2c shows how a high dimensionality benefits
coreset construction. This is even more apparent in real data which tends to be sparse, so that in
practice we are typically able to further reduce the coreset dimension in each segment.
Scalability: The coresets presented in this work are parallelizable, as discussed in Section 2.2. We
demonstrate scalability by conducting very large scale experiments on both real and synthetic data,
running our algorithm on a network of 255 Amazon EC2 vCPU nodes. We compress a 256,000-frame bags-of-words (BOW) stream in approximately 20 minutes with almost-perfect scalability.
For a comparable single node running on the same dataset, we estimate a total running time of
approximately 42 hours.
3.2 Real Data Experiments
We compare our coreset against uniform sample and random sample coresets, as well as two other segmentation techniques: the Ramer-Douglas-Peucker (RDP) algorithm [20, 8] and the Dead Reckoning (DR) algorithm [23]. We also show that we can combine our coreset with segmentation algorithms, by running the algorithm on the coreset itself. We emphasize that these segmentation techniques were chosen as simple examples and are not intended to reflect the state of the art, but rather to demonstrate how the k-segment coreset can improve on any given algorithm.
To demonstrate the general applicability of our techniques, we run our algorithm using financial
(1D) time series data, as well as GPS data. For the 1D case we use price data from the Mt.Gox
Bitcoin exchange. Bitcoin is of interest because its price has grown exponentially with its popularity
in the past two years. Bitcoin has also sustained several well-documented market crashes [3, 6] that
we can relate to our analysis. For the 2D case we use GPS data from 343 taxis in San Francisco.
This is of interest because a taxi-route segmentation has an intuitive interpretation that we can easily
evaluate, and on the other hand GPS data forms an increasingly large information source in which
we are interested.
Figure 4a shows the results for Bitcoin data. Price extrema are highlighted by local price highs
(green) and lows (red). We observe that running the DR algorithm on our k-segment coreset captures
these events quite well. Figures 4b and 4c show example results for a single taxi. Again, we observe that
the DR segmentation produces segments with a meaningful spatial interpretation. Figure 5 shows
a plot of coreset errors for the first 50 taxis (right), and the table gives a summary of experimental
results for the Bitcoin and GPS experiments.
3.3 Semantic Video Segmentation
In addition, we demonstrate use of the proposed coreset for video streams summarization. While
different choices of frame representations for video summarization are available [22, 17, 18], we
used color-augmented SURF features, quantized into 5000 visual words, trained on the ImageNet
2013 dataset [7]. The resulting histograms are compressed into a streaming coreset. Computation on a single core runs at 6 Hz; a parallel version achieves 30 Hz on a single i7 machine, processing 6 hours of video in 4 hours on a single machine, i.e., faster than real-time.
In Figure 3 we demonstrate segmentation of a video taken from Google Glass. We visualize
BOWs, as well as the segments suggested by the k-segment mean algorithm [4] run on the coreset.
Inspecting the results, most segment transitions occur at scene and room changes.
Even though optimal segmentation cannot be done in real-time, the proposed coreset is computed in real-time and can further be used to automatically summarize the video by associating representative frames with segments. To evaluate the "semantic" quality of our segmentation, we compared the resulting segments to uniform segmentation by contrasting them with a human annotation of the video into scenes. Our method gave a 25% improvement (in Rand index [21]) over a 3000-frame sequence.
sequence.
?122.37
X1: Latitude (top)
X2: Longitude (bottom)
Dead Reckoning segmentation
Price (USD/BTC)
1000
MTGOXUSD D1 closing price
Dead Reckoning segmentation
Local price maxima
Local price minima
800
600
400
200
?122.39
?122.4
Longitude (X2)
1200
Latitude (top), Longitude (bottom)
MTGOXUSD
1400
?122.38
?122.42
?122.43
?122.44
?122.45
?122.46
0
?200
?122.41
Apr?2013
Jul?2013
Oct?2013
Date
Jan?2014
?122.47
37.6
Time
37.65
37.7
37.75
37.8
37.85
Latitude (X1)
(a) MTGOXUSD daily price data
(b) GPS taxi data
(c) GPS taxi data
Figure 4: (a) shows the Bitcoin prices from 2013 on, overlayed with a DR segmentation computed on our coreset. The red/green triangles indicate prominent market events. (b) shows normalized GPS data overlayed with a DR segmentation computed on our coreset. (c) shows a lat/long plot demonstrating that the segmentation yields a meaningful spatial interpretation.
Average ε                   Bitcoin data    GPS data
k-segment coreset           0.0092          0.0014
Uniform sample coreset      1.8726          0.0121
Random sample coreset       8.0110          0.0214
RDP on original data        0.0366          0.0231
RDP on k-segment            0.0335          0.0051
DeadRec on original data    0.0851          0.0417
DeadRec on k-segment        0.0619          0.0385
Figure 5: Table: Summary for Bitcoin / GPS data. Plot: Errors / standard deviations for the first 50 cabs.
[Figure 5 plot: coreset error (with standard deviations) vs. taxi ID for the first 50 cabs, with curves for the k-segment coreset (mean and std), uniform sample coreset, random sample coreset, RDP on points, and Dead Reckoning on points.]
4 Conclusions
In this paper we demonstrated a new framework for segmentation and event summarization of high-dimensional data. We have shown the effectiveness and scalability of the proposed algorithms, and their applicability to large distributed video analysis. In the context of video processing, we demonstrate how, using the right framework for analysis and clustering, even relatively straightforward representations of image content lead to a meaningful and reliable segmentation of video streams at real-time speeds.
References
[1] P. K. Agarwal, S. Har-Peled, and K. R. Varadarajan. Geometric approximations via coresets. Combinatorial and Computational Geometry - MSRI Publications, 52:1–30, 2005.
[2] S. Bandla and K. Grauman. Active learning of an action detector from untrimmed videos. In ICCV, 2013.
[3] BBC. Bitcoin panic selling halves its value, 2013.
[4] R. Bellman. On the approximation of curves by line segments using dynamic programming. Commun. ACM, 4(6):284, 1961.
[5] W. Churchill and P. Newman. Continually improving large scale long term visual navigation of a vehicle in dynamic urban environments. In Proc. IEEE Intelligent Transportation Systems Conference (ITSC), Anchorage, USA, September 2012.
[6] CNBC. Bitcoin crash spurs race to create new exchanges, April 2013.
[7] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In Computer Vision and Pattern Recognition, 2009.
[8] D. H. Douglas and T. K. Peucker. Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. Cartographica: The International Journal for Geographic Information and Geovisualization, 10(2):112–122, 1973.
[9] D. Feldman and M. Langberg. A unified framework for approximating and clustering data. In STOC, 2010. Manuscript available at arXiv.org.
[10] D. Feldman, M. Schmidt, and C. Sohler. Turning big data into tiny data: Constant-size coresets for k-means, PCA and projective clustering. SODA, 2013.
[11] D. Feldman, A. Sugaya, and D. Rus. An effective coreset compression algorithm for large scale sensor networks. In IPSN, pages 257–268, 2012.
[12] D. Feldman, C. Sung, and D. Rus. The single pixel GPS: learning big data signals from tiny coresets. In Proceedings of the 20th International Conference on Advances in Geographic Information Systems, pages 23–32. ACM, 2012.
[13] A. C. Gilbert, S. Guha, P. Indyk, Y. Kotidis, S. Muthukrishnan, and M. J. Strauss. Fast, small-space algorithms for approximate histogram maintenance. In STOC, pages 389–398. ACM, 2002.
[14] Y. Girdhar and G. Dudek. Efficient on-line data summarization using extremum summaries. In ICRA, pages 3490–3496. IEEE, 2012.
[15] S. Guha, N. Koudas, and K. Shim. Approximation and streaming algorithms for histogram construction problems. ACM Transactions on Database Systems (TODS), 31(1):396–438, 2006.
[16] D. S. Hochbaum. Approximation algorithms for NP-hard problems. PWS Publishing Co., 1996.
[17] Y. Li, D. J. Crandall, and D. P. Huttenlocher. Landmark classification in large-scale image collections. In ICCV, pages 1957–1964, 2009.
[18] Z. Lu and K. Grauman. Story-driven summarization for egocentric video. In CVPR, pages 2714–2721, 2013.
[19] R. Paul, D. Feldman, D. Rus, and P. Newman. Visual precis generation using coresets. In ICRA. IEEE Press, 2014. Accepted.
[20] U. Ramer. An iterative procedure for the polygonal approximation of plane curves. Computer Graphics and Image Processing, 1(3):244–256, 1972.
[21] W. Rand. Objective criteria for the evaluation of clustering methods. Journal of the American Statistical Association, 66(336):846–850, 1971.
[22] J. Sivic and A. Zisserman. Video Google: A text retrieval approach to object matching in videos. In ICCV, volume 2, pages 1470–1477, Oct. 2003.
[23] G. Trajcevski, H. Cao, P. Scheuermann, O. Wolfson, and D. Vaccaro. On-line data reduction and the quality of history in moving objects databases. In MobiDE, pages 19–26, 2006.
5,061 | 5,582 | Approximating Hierarchical MV-sets for Hierarchical
Clustering
Assaf Glazer
Omer Weissbrod
Michael Lindenbaum
Shaul Markovitch
Department of Computer Science, Technion - Israel Institute of Technology
{assafgr,omerw,mic,shaulm}@cs.technion.ac.il
Abstract
The goal of hierarchical clustering is to construct a cluster tree, which can be
viewed as the modal structure of a density. For this purpose, we use a convex optimization program that can efficiently estimate a family of hierarchical dense sets
in high-dimensional distributions. We further extend existing graph-based methods to approximate the cluster tree of a distribution. By avoiding direct density
estimation, our method is able to handle high-dimensional data more efficiently
than existing density-based approaches. We present empirical results that demonstrate the superiority of our method over existing ones.
1 Introduction
Data clustering is a classic unsupervised learning technique, whose goal is dividing input data into
disjoint sets. Standard clustering methods attempt to divide input data into discrete partitions. In
hierarchical clustering, the goal is to find nested partitions of the data. The nested partitions reveal
the modal structure of the data density, where clusters are associated with dense regions, separated
by relatively sparse ones [27, 13].
Under the nonparametric assumption that the data is sampled i.i.d. from a continuous distribution
F with Lebesgue density f in Rd , Hartigan observed that f has a hierarchical structure, called its
cluster tree. Denote L_f(c) = {x : f(x) ≥ c} as the level set of f at level c. Then, the connected components in L_f(c) are the high-density clusters at level c, and the collection of all high-density clusters for c ≥ 0 has a hierarchical structure, where for any two clusters A and B, either A ⊆ B, B ⊆ A, or A ∩ B = ∅.
[Figure 1 annotations: level sets L_f(c = 0.23) and L_f(c = 0.11).]
Figure 1: A univariate, tri-modal density function and its corresponding cluster tree are illustrated.
Figure 1 shows a plot of a univariate, tri-modal density function. The cluster tree of the density function is shown on top of the density function. The high-density clusters are nodes in the cluster tree. Leaves are associated with modes in the density function.
Given the density f, the cluster tree can be constructed in a straightforward manner via a recursive
algorithm [23]. We start by setting the root node with a single cluster containing the entire space,
corresponding to c = 0. We then recursively increase c until the number of connected components
increases, at which point we define a new level of the tree. The process is repeated as long as
the number of connected components increases. In Figure 1, for example, the root node has two
daughter nodes, which were found at level c = 0.11. The next two descendants of the left node were
found at level c = 0.23.
A common approach for hierarchical clustering is to first use a density estimation method to obtain f [18, 5, 23], and then estimate the cluster tree using the recursive method described above.
However, one major drawback in this approach is that a reliable density estimation is hard to obtain,
especially in high-dimensional data.
An alternative approach is to estimate the level sets directly, without a separate density estimation
step. To do so, we define the minimum volume set (MV-set) at level α as the subset of the input space with the smallest volume and probability mass of at least α. MV-sets of a distribution, which are also level sets of the density f (under sufficient regularity conditions), are hierarchical by definition. The well-known One-Class SVM (OCSVM) [20] can efficiently find the MV-set at a specified level α. A naive approach for finding a hierarchy of MV-sets is to train distinct OCSVMs, one for each
MV-set, and enforce hierarchy by intersection operations on the output. However, this solution is
not well suited for finding a set of hierarchical MV-sets, because the natural hierarchy of MV-sets is
not exploited, leading to a suboptimal solution.
In this study we propose a novel method for constructing cluster trees by directly estimating MV-sets,
while guaranteeing convergence to a globally optimum solution. Our method utilizes the q-OneClass SVM (q-OCSVM) method [11], which can be regarded as a natural extension of the OCSVM,
to jointly find the MV-sets at a set of levels {α_i}. By avoiding direct density estimation, our method
is able to handle high-dimensional data more efficiently than existing density-based approaches. By
jointly considering the entire spectrum of desired levels, a globally optimum solution can be found.
We combine this approach with a graph-based heuristic, found to be successful in high-dimensional
data [2, 23], for finding high density clusters in the approximated MV-sets. Briefly, we construct a
fully connected graph whose nodes correspond to feature vectors, and remove edges between nodes
connected by low-density regions. The connected components in the resulting graph correspond to
high density clusters.
The advantage of our method is demonstrated empirically on synthetic and real data, including
a reconstruction of an evolutionary tree of human populations using the high-dimensional 1000
genomes dataset.
2 Background
Our novel method for hierarchical clustering belongs to a family of non-parametric clustering methods. Unlike parametric methods, which assume that each group i is associated with a density fi
belonging to some family of parametric densities, non-parametric methods assume that each group
is associated with modes of a density f [27]. Non-parametric methods aim to reveal the modal
structure of f [13, 28, 14].
Hierarchical clustering methods can be divided into agglomerative (bottom up) and divisive (top
down) methods. Agglomerative methods (e.g. single-linkage) start with n singleton clusters, one for
each training feature vector, and work by iteratively linking two closest clusters. Divisive methods,
on the other hand, start with all feature vectors in a single cluster and recursively divide clusters into
smaller sub-clusters.
While single-linkage was found, in theory, to have better stability and convergence properties in
comparison to average-linkage and complete-linkage [4], it is frequently criticized by practitioners
due to the chaining effect. Single-linkage ignores the density of feature vectors in clusters, and thus
may erroneously connect two modes (clusters) with a few feature vectors connecting them, that is,
a ?chain? of feature vectors.
Wishart [27] suggested overcoming this effect by conducting a one-level analysis of the data. The
idea is to estimate a specific level set of the data density (Lf (c)), and to remove noisy features
2
outside this level that could otherwise lead to the chaining effect. The connected components left
in Lf (c) are the clusters; expansions of this idea can be found in [9, 26, 6, 3]. Indeed, this analysis
is more resistant to the chaining effect. However, one of its major drawbacks is that no single level
set can reveal all the modes of the density. Therefore, various studies have proposed estimating the
entire hierarchical structure of the data (the cluster tree) using density estimates [13, 1, 22, 18, 5, 23,
17, 19]. These methods are considered as divisive hierarchical clustering methods, as they start by
associating all feature vectors to the root node, which is then recursively divided to sub-clusters by
incrementally exploring level sets of denser regions. Our proposed method belongs to this group of
divisive methods.
Stuetzle [22] used the nearest neighbor density estimate to construct the cluster tree and pointed out
its connection to single-linkage clustering. Kernel density estimates were used in other studies [23,
19]. The bisecting K-means (BiKMean) method is another divisive method that was found to work
effectively in cluster analysis [16], although it provides no theoretical guarantee for finding the
correct cluster tree of the underlying density.
Hierarchical clustering methods can be used as an exploration tool for data understanding [16]. The
nonparametric assumption, by which density modes correspond to homogenous feature vectors with
respect to their class labels, can be used to infer the hierarchical class structure of the data [15].
An implicit assumption is that the closer two feature vectors are, the less likely they will be to
have different class labels. Interestingly, this assumption, which does not necessarily hold for all
distributions, is being discussed lately in the context of hierarchical sampling methods for active
learning [8, 7, 25], where the correctness of such a hierarchical modeling approach is said to depend
on the "Probabilistic Lipschitzness" assumption about the data distribution.
3 Approximating MV-sets for Hierarchical Clustering
Our proposed method consists of (a) estimating MV-sets using the q-OCSVM method; (b) using a
graph-based method for finding a hierarchy of high density regions in the MV-sets, and (c) constructing a cluster tree using these regions. These stages are described in detail below.
3.1 Estimating MV-Sets
We begin by briefly describing the One-Class SVM (OCSVM) method. Let X = {x1 , . . . , xn } be
a set of feature vectors sampled i.i.d. with respect to F . The function fC returned by the OCSVM
algorithm is specified by the solution of this quadratic program:
\min_{w \in \mathcal{F},\, \xi \in \mathbb{R}^n,\, \rho \in \mathbb{R}} \; \frac{1}{2}\|w\|^2 - \rho + \frac{1}{\nu n}\sum_i \xi_i \quad \text{s.t.} \quad (w \cdot \Phi(x_i)) \ge \rho - \xi_i, \; \xi_i \ge 0,    (1)

where ξ is a vector of the slack variables. Recall that all training examples x_i for which (w · Φ(x_i)) − ρ ≤ 0 are called support vectors (SVs). Outliers are referred to as examples that strictly satisfy (w · Φ(x)) − ρ < 0. By solving the program for ν = 1 − α, we can use the OCSVM to approximate the MV-set C(α).
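As a usage illustration (our own sketch, not code from the paper), an off-the-shelf OCSVM approximates the MV-set at level α when trained with ν = 1 − α; the Gaussian-kernel parameter below is an arbitrary choice:

```python
import numpy as np
from sklearn.svm import OneClassSVM

def approximate_mv_set(X, alpha, gamma=1.0):
    # Decision region {x : decision_function(x) >= 0} approximates C(alpha).
    return OneClassSVM(kernel="rbf", nu=1.0 - alpha, gamma=gamma).fit(X)

X = np.random.randn(500, 2)
model = approximate_mv_set(X, alpha=0.75)
inside = model.decision_function(X) >= 0   # covers roughly 75% of the sample
```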
Let 0 < α_1 < α_2 < ⋯ < α_q < 1 be a sequence of q quantiles. The q-OCSVM method generalizes the OCSVM algorithm for approximating a set of MV-sets {C_1, …, C_q} such that a hierarchy constraint C_i ⊆ C_j is satisfied for i < j. Given X, the q-OCSVM algorithm solves this primal program:

\min_{w, \rho_j, \xi_j} \; \frac{q}{2}\|w\|^2 - \sum_{j=1}^{q}\rho_j + \sum_{j=1}^{q}\frac{1}{\nu_j n}\sum_i \xi_{j,i} \quad \text{s.t.} \quad (w \cdot \Phi(x_i)) \ge \rho_j - \xi_{j,i}, \; \xi_{j,i} \ge 0, \; j \in [q], \; i \in [n],    (2)

where ν_j = 1 − α_j. This program generalizes Equation (1) to the case of finding multiple, parallel half-space decision functions by searching for a global minimum over their sum of objective functions: the coupling between the q half-spaces is done by summing q OCSVM programs, while forcing these programs to share the same w. As a result, the q half-spaces in the solution of Equation (2) differ only by their bias terms, and are thus parallel to each other. This program is convex, and thus a global minimum can be found in polynomial time.
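To make Equation (2) concrete, the following hedged sketch solves the primal for a linear feature map Φ(x) = x with cvxpy (the kernelized case is handled through the dual in [11], which we do not reproduce here):

```python
import cvxpy as cp
import numpy as np

def q_ocsvm_linear(X, alphas):
    # Solve the q-OCSVM primal of Eq. (2) with Phi(x) = x.
    n, d = X.shape
    q = len(alphas)
    nus = [1.0 - a for a in alphas]
    w = cp.Variable(d)
    rho = cp.Variable(q)
    xi = cp.Variable((q, n), nonneg=True)
    obj = (q / 2) * cp.sum_squares(w) - cp.sum(rho) \
          + sum(cp.sum(xi[j]) / (nus[j] * n) for j in range(q))
    constraints = [X @ w >= rho[j] - xi[j] for j in range(q)]
    cp.Problem(cp.Minimize(obj), constraints).solve()
    return w.value, rho.value
```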
Glazer et al. [11] prove that the q-OCSVM algorithm can be used to approximate the MV-sets of a
distribution.
3.1.1 Generalizing q-OCSVM for Finding an Infinite Number of Approximated MV-sets
The q-OCSVM finds a finite number of q approximated MV-sets, which capture the overall structure
of the cluster tree. However, in order to better resolve differences in density levels between data
points, we would like the solution to be extended for defining an infinite number of hierarchical sets.
Our approach for doing so relies on the parallelism property of the approximated MV-sets in the q-OCSVM solution. An infinite number of approximated MV-sets are associated with separating hyperplanes in F that are parallel to the q hyperplanes in the q-OCSVM solution. Note that every projected feature vector Φ(x) lies on a unique separating hyperplane that is parallel to the q hyperplanes defined by the solution, and the distance dis(x) = (w · Φ(x)) − ρ is sufficient to determine whether x is located inside each of the approximated MV-sets.
We would like to know the probability mass associated with each of the infinite hyperplanes. For this purpose, we could similarly estimate the expected probability mass of the approximated MV-set defined for any x ∈ R^d. When Φ(x) lies strictly on one of the i ∈ [q] hyperplanes, then x is considered as lying on the boundary of the set approximating C(α_i). When Φ(x) does not satisfy this condition, we use a linear interpolation to define α for its corresponding approximated MV-set: let ρ_i, ρ_{i+1} be the bias terms associated with the i and i + 1 approximated MV-sets that satisfy ρ_i > (w · Φ(x)) > ρ_{i+1}. Then we linearly interpolate (w · Φ(x)) along the [ρ_{i+1}, ρ_i] interval for an intermediate α ∈ (α_i, α_{i+1}). For the completeness of the definition, we set ρ_0 = max_{x∈X} (w · Φ(x)) and ρ_{q+1} = min_{x∈X} (w · Φ(x)).
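A sketch of this interpolation follows (our own code; the decision value stands for the learned (w · Φ(x)), e.g. evaluated through the kernel expansion of the q-OCSVM solution):

```python
import numpy as np

def alpha_of_x(decision, rhos, alphas):
    # rhos:   strictly decreasing biases [rho_0, rho_1, ..., rho_q, rho_{q+1}],
    #         where rho_0 / rho_{q+1} are the max / min of (w . Phi(x)) over X.
    # alphas: matching masses [0, alpha_1, ..., alpha_q, 1].
    rhos, alphas = np.asarray(rhos), np.asarray(alphas)
    v = np.clip(decision, rhos[-1], rhos[0])
    i = int(np.clip(np.searchsorted(-rhos, -v) - 1, 0, len(rhos) - 2))
    t = (rhos[i] - v) / (rhos[i] - rhos[i + 1])   # position inside [rho_{i+1}, rho_i]
    return float(alphas[i] + t * (alphas[i + 1] - alphas[i]))
```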
3.2 Finding a Hierarchy of High-Density Regions
To find a hierarchy of high density regions, we adopt a graph-based approach. We construct a
fully-connected graph whose nodes correspond to feature vectors, and remove edges between nodes
separated by low-density regions. The connected components in the resulting graph correspond to
high density regions. The method proceeds as follows.
Let α(x) be the expected probability mass of the approximated MV-set defined by x. Let α_{i,s} be the maximal value of α(x) over the line segment connecting the feature vectors x_i and x_s in X:

\alpha_{i,s} = \max_{t \in [0,1]} \alpha(t x_i + (1 - t) x_s).    (3)
Let G be a complete graph between pairs of feature vectors in X with edges equal to α_{i,s}.¹ High density clusters at level α are defined as the connected components in the graph G(α) induced by removing edges from G with α_{i,s} > α. This method guarantees that two feature vectors in the same cluster of the approximated MV-set at level α would surely lie in the same connected component in G(α). However, the opposite would not necessarily hold: when α_{i,s} > α and a curve connecting x_i and x_s exists in the cluster, x_i and x_s might erroneously be found in different connected components. Nevertheless, it was empirically shown that erroneous splits of clusters are rare if the density function is smooth [23].
One way to implement this method for finding high density clusters is to iteratively find connected components in G(α), where at each iteration α is incrementally increased (starting from α = 0), until all the clusters are found. However, [23] observed that we can simplify this method by working only on the graph G and its minimal spanning tree T. Consequently, we can compute a hierarchy of high-density regions in two steps: First, construct G and its minimal spanning tree T. Then, remove edges from T in descending order of their weights, such that the connected components left after removing an edge with weight α correspond to a high density cluster at level α. Connected components with a single feature vector are treated as outliers and removed.
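Using scipy, the two steps can be sketched as follows (our own rendering; alpha_fn is the interpolated mass α(x) from the previous sketch, and edge weights follow Eq. (3), sampled at 20 points per segment as in the footnote below):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def cluster_hierarchy(X, alpha_fn, n_samples=20):
    # Build the complete graph G with weights alpha_{i,s}, take its minimum
    # spanning tree T, and return T's edges sorted by decreasing weight;
    # removing them in this order yields the hierarchy of high-density clusters.
    n = len(X)
    W = np.zeros((n, n))
    ts = np.linspace(0.0, 1.0, n_samples)
    for i in range(n):
        for s in range(i + 1, n):
            seg = ts[:, None] * X[i] + (1.0 - ts)[:, None] * X[s]
            W[i, s] = max(alpha_fn(p) for p in seg)   # alpha_{i,s} of Eq. (3)
    T = minimum_spanning_tree(csr_matrix(W)).tocoo()
    return sorted(zip(T.data, T.row, T.col), reverse=True)
```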
¹ We calculated α_{i,s} in G by checking the α(x) values for 20 points sampled from the line segment between x_i and x_s. The same approach was also used by [2] and [23].
3.3 Constructing a Cluster Tree
The hierarchy resulting from the procedure described above does not form a full partition of the
data, as in each edge removal step a fraction of the data is left outside the newly formed high density
clusters. To construct a full partition, feature vectors left outside at each step are assigned to their
nearest cluster. Additionally, when a cluster is split into sub-clusters, all its assigned feature vectors
are assigned to one of the new sub-clusters.
The choice of kernel width has a strong effect on the resulting cluster tree. On the one hand, a large
bandwidth may lead to the inner products induced by the kernel function being constant; that is,
many examples in the train data are projected to the same point in F. Hence, the approximated
MV-sets could eventually be equal, resulting in a cluster tree with a single node. On the other hand,
a small bandwidth may lead to the inner products becoming closer to zero; that is, points in F tend
to lie on orthogonal axes, resulting in a cluster tree with many branches and leaves.
We believe that the best approach for choosing the correct bandwidth is based on the number of
modes that we expect to find for the density function. By using a grid search over possible σ values,
we can choose the bandwidth that results in a cluster tree in which the expected number of modes is
the same as the number we expect.
4 Empirical Analysis
We evaluate our hierarchical clustering method on synthetic and real data. While the quality of
an estimated cluster tree for the synthetic data can be evaluated by comparing the resulting tree
with the true modal structure of the density, alternative quality measures are required to estimate the
efficiency of hierarchical clustering methods on high-dimensional data when the density is unknown.
In the following section we introduce our proposed measure.
4.1 The Quality Measure
One prominent measure is the F -measure, which was extended by [16] to evaluate the quality of
estimated cluster trees. Recall that classes refer to the true (unobserved) class assignment of the
observed vectors, whereas clusters refer to their tree-assigned partition. For a cluster j and class i,
define ni,j as the number of feature vectors of class i in cluster j, and ni , nj as the number of feature
vectors associated with class i and with cluster j, respectively. The F -measure for cluster j and class
i is given by

F_{i,j} = \frac{2 \cdot \mathrm{Recall}_{i,j} \cdot \mathrm{Precision}_{i,j}}{\mathrm{Recall}_{i,j} + \mathrm{Precision}_{i,j}}, \quad \text{where} \quad \mathrm{Recall}_{i,j} = \frac{n_{i,j}}{n_i} \ \text{and} \ \mathrm{Precision}_{i,j} = \frac{n_{i,j}}{n_j}.

The F-measure for the cluster tree is

F = \sum_i \frac{n_i}{n} \max_j \{F_{i,j}\}.    (4)
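A direct implementation of these formulas (our own sketch), given the contingency counts n_{i,j}:

```python
import numpy as np

def tree_f_measure(counts):
    # counts[i, j] = n_{i,j}: number of class-i feature vectors in cluster j.
    counts = np.asarray(counts, dtype=float)
    n_i, n_j, n = counts.sum(axis=1), counts.sum(axis=0), counts.sum()
    recall = counts / n_i[:, None]
    precision = counts / n_j[None, :]
    with np.errstate(divide="ignore", invalid="ignore"):
        f = 2 * recall * precision / (recall + precision)
    f = np.nan_to_num(f)               # empty intersections contribute F = 0
    return float(np.sum((n_i / n) * f.max(axis=1)))
```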
The F -measure was found to be a useful tool for the evaluation of hierarchical clustering methods [21], as it quantifies how well we could extract k clusters, one for each class, that are relatively
"pure" and large enough with respect to their associated class. However, we found it difficult to
use this measure directly in our analysis, because it appears to prefer overfitted trees, with a large
number of spurious clusters.
We suggest correcting this bias via cross-validation. We split the data X into two equal-sized train
and test sets, and construct a tree using the train set. Test examples are recursively assigned to
clusters in the tree in a top-down manner, and the F -measure is calculated according to the resulting
tree. When analytical boundaries of clusters in the tree are not available (such as in our method), we
recursively assign each test example in a cluster to the sub-cluster containing its nearest neighbor in
the train set, using Euclidean distance.
4.2 Reference Methods
We compare our method with methods for density estimation, which can also be used to construct a graph G. For this purpose, since f(x) is used instead of α(x), we had to adjust the way we construct G and T.² A kernel density estimator (KDE) and nearest neighbor density estimator (NNE), similar
to the one used by [23], are used as competing methods. In addition, we compare our method with
the bisecting K-means (BiKMean) method [21] for hierarchical clustering.
4.3 Experiments with Synthetic Data
We run our hierarchical clustering method on data sampled from a synthetic, two-dimensional,
trimodal distribution. This distribution is defined by a 3-Gaussian mixture distribution. 20 i.i.d.
points were sampled for training our q-OCSVM method, with α_1 = 0.25, α_2 = 0.5, α_3 = 0.75 (3-quantiles), and with a bandwidth σ, which results in a cluster tree with 3 modes. The left side of Figure 2 shows the data sampled, and the 3 approximated hierarchical MV-sets. The resulting 3-mode cluster tree is shown in the right side of Figure 2.
[Figure 2 annotations: plot title Q=3, N=20, σ=15; cluster tree nodes: Leaf 1: {1}; Branch 4: {1 2}, P=0.68; Leaf 2: {2}; Branch 5: {1 2 3}, P=0.85; Leaf 3: {3}.]
Figure 2: Left: Data sampled for training our q-OCSVM method and the 3 approximated MV-sets;
Right: The cluster tree estimated from the synthetic data. The most frequent label in each mode,
denoted in curly brackets next to each leaf, defines the label of the mode. Branches are labeled with
the probability mass associated with their level set.
We used our proposed and reference methods on the data to obtain cluster trees with different numbers of modes (leaves). The number of modes can be tweaked by changing the value of σ for the q-OCSVM and KDE methods, and by pruning nodes of small size for the NNE and BiKMean methods.
20 test examples were i.i.d. sampled from the same distribution to estimate the resulting F -measures.
The left side of Figure 3 shows the F -measure for each method in terms of changes in the number of
modes in the resulting tree. For all methods, the F -measure is bounded by 0.8 as long as the number
of modes is greater than 3, correctly suggesting the presence of 3 modes for the data.
4.4 The olive oil dataset
The olive oil dataset [10] consists of 572 olive oil examples, with 8 features each, from 3 regions in
Italy (R1, R2, R3), each one further divided into 3 sub-areas. The right side of Figure 3 shows the
F -measure for each method in terms of changes in the number of modes in the tree. The q-OCSVM
method dominates the other three methods when the number of modes is higher than 5, with an
average F = 0.62, while its best competitor (KDE) has an average F = 0.55.
It can be seen that the variability of the F-measure plots is higher for the q-OCSVM and KDE methods than for the BiKMeans and NNE methods. This is a consequence of the fact that the structure of unpruned nodes remains the same for the BiKMeans and NNE methods, whereas different bandwidth values may lead to different tree structures for the q-OCSVM and KDE methods.
The cluster trees estimated using the q-OCSVM and KDE methods are shown in Figure 4. For
each method, we chose to show the cluster tree with the smallest number of modes with leaves
corresponding to all 8 labels. The q-OCSVM method groups leaves associated with the 8 areas into
3 clusters, which perfectly corresponds to the hierarchical structure of the labels. In contrast, modes
estimated using the KDE method cannot be grouped into 3 homogeneous clusters.
² When a density estimator f is used, p_{i,s} = min_{t∈[0,1]} f(t·x_i + (1−t)·x_s) are set to be the edge weights, G(c) is induced by removing edges from G with p_{i,s} < c, and T is defined as the maximal spanning tree of G (instead of the minimal).
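A hypothetical sketch of this footnote's edge weights; the segment between the two points is discretized on a fixed grid, which only approximates the minimum over the continuous segment.

```python
import numpy as np

def edge_weight(f, x_i, x_s, grid=100):
    """p_{i,s}: minimal estimated density along the segment between x_i and x_s."""
    ts = np.linspace(0.0, 1.0, grid)
    return min(f(t * x_i + (1.0 - t) * x_s) for t in ts)
```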
[Figure 3 here: two panels ("CC vs. F") plotting the F-measure against the number of modes for the q-OCSVM, KDE, BiKMeans and NNE methods.]
Figure 3: Left: The F-measures of each method are plotted in terms of the number of modes in the estimated cluster trees. The F-measures are calculated using the synthetic test data; Right: F-measure for the olive oil dataset, calculated using 286 test examples, is shown in terms of the number of modes in the cluster tree.
[Figure 4 here: the two estimated cluster trees, with leaves labeled by the regions R1, R2 and R3.]
Figure 4: Left: Cluster tree for the olive oil data estimated with q-OCSVM; Right: Cluster tree for the olive oil data estimated with KDE.
One prominent advantage of our method is that we can use the estimated probability mass of branches in the tree to better understand the modal structure of the data. For instance, we can learn from Figure 4 that the R2 cluster is found in a relatively sparse MV-set at level 0.89, while its two nodes are found in a much denser MV-set at level 0.12. Probability masses for high density clusters can also be estimated using the KDE method, but unlike our method, theoretical guarantees are not provided.
4.5 The 1000 genomes dataset
We have also evaluated our method on the 1000 genomes dataset [24]. Hierarchical clustering approaches naturally arise in genetic population studies, as they can reconstruct trees that describe evolutionary history and are often the first step in evolutionary studies [12]. The reconstruction of population structure is also crucial for genetic mapping studies, which search for genetic factors underlying genetic diseases.

In this experiment we evaluated our method's capability to reconstruct the evolutionary history of populations represented in the 1000 genomes dataset, which consists of whole genome sequences of 1,092 human individuals from 14 distinct populations. We used a trinary representation wherein each individual is represented as a vector of features corresponding to 0, 1 or 2. Every feature represents a known genetic variation (with respect to the standard human reference genome³), where the number indicates the number of varied genome copies. We used data processed by the 1000 Genomes Consortium, which initially contained 2.25 million variations. To reduce dimensionality, we used the 1,000 features that had the highest information gain with respect to the populations. We excluded from the analysis highly genetically admixed populations (Colombian, Mexican and Puerto Rican ancestry), because the evolutionary history of admixed populations cannot be represented by a tree. After exclusion, 911 individuals remained in the analysis.
³ http://genomereference.org
[Figure 5 here: the left panel ("CC vs. F") plots the F-measure against the number of modes for q-OCSVM, BiKMeans and SL; the right panel shows the q-OCSVM cluster tree with European, East Asian and African branches (R1, R2, R3).]
Figure 5: Left: F-measure for the 1000 genomes dataset, calculated using 455 test examples; Right: Cluster tree for the 1000 genomes data estimated with q-OCSVM. The labels are GBR (British in England and Scotland), TSI (Toscani in Italia), CEU (Utah Residents with Northern and Western European ancestry), FIN (Finnish in Finland), CHB (Han Chinese in Beijing, China), CHS (Southern Han Chinese), ASW (Americans of African Ancestry in SW USA), YRI (Yoruba in Ibadan, Nigeria), and LWK (Luhya in Webuye, Kenya).
The left side of Figure 5 shows that q-OCSVM dominates the other methods for every number of modes tested, demonstrating its superiority in high dimensional settings. Namely, it achieves an F-measure of 0.4 for >2 modes, whereas competing methods obtain an F-measure of 0.35. KDE was not evaluated as it is not applicable due to the high data dimensionality.

To obtain a meaningful tree, we increased the number of modes until leaves corresponding to all three major human population groups (African, East Asian and European) represented in the dataset appeared. The tree obtained by using 28 modes is shown in the right side of Figure 5, indicating that q-OCSVM clustering successfully distinguishes between these three population groups. Additionally, it corresponds with the well-established theory that a divergence of a single ancestral population into African and Eurasian populations took place in the distant past, and that Eurasians diverged into East Asian and European populations at a later time [12]. The larger number of leaves representing European populations may result from the larger number of European individuals and populations in the 1000 genomes dataset.
5 Discussion
In this research we use the q-OCSVM method as a plug-in method for hierarchical clustering in high-dimensional distributions. The q-OCSVM method estimates the level sets (MV-sets) directly without a density estimation step. Therefore, we expect to achieve more accurate results than approaches based on density estimation. Furthermore, since we know α for each approximated MV-set, we believe our solution would be more interpretable and informative than a solution provided by a density estimation-based method.
References
[1] Mihael Ankerst, Markus M Breunig, Hans-Peter Kriegel, and Jörg Sander. Optics: ordering points to identify the clustering structure. ACM SIGMOD Record, 28(2):49–60, 1999.
[2] Asa Ben-Hur, David Horn, Hava T Siegelmann, and Vladimir Vapnik. Support vector clustering. The Journal of Machine Learning Research, 2:125–137, 2002.
[3] Gérard Biau, Benoît Cadre, and Bruno Pelletier. A graph-based estimator of the number of clusters. ESAIM: Probability and Statistics, 11(1):272–280, 2007.
[4] Gunnar Carlsson and Facundo Mémoli. Characterization, stability and convergence of hierarchical clustering methods. The Journal of Machine Learning Research, 99:1425–1470, 2010.
[5] Gunnar Carlsson and Facundo Mémoli. Multiparameter hierarchical clustering methods. In Classification as a Tool for Research, pages 63–70. Springer, 2010.
[6] Antonio Cuevas, Manuel Febrero, and Ricardo Fraiman. Cluster analysis: a further approach based on density estimation. Computational Statistics & Data Analysis, 36(4):441–459, 2001.
[7] Sanjoy Dasgupta. Two faces of active learning. Theoretical Computer Science, 412(19):1767–1781, 2011.
[8] Sanjoy Dasgupta and Daniel Hsu. Hierarchical sampling for active learning. In ICML, pages 208–215. ACM, 2008.
[9] Martin Ester, Hans-Peter Kriegel, Jörg Sander, and Xiaowei Xu. A density-based algorithm for discovering clusters in large spatial databases with noise. In KDD, volume 96, pages 226–231, 1996.
[10] M Forina, C Armanino, S Lanteri, and E Tiscornia. Classification of olive oils from their fatty acid composition. Food Research and Data Analysis, pages 189–214, 1983.
[11] Assaf Glazer, Michael Lindenbaum, and Shaul Markovitch. q-OCSVM: A q-quantile estimator for high-dimensional distributions. In Advances in Neural Information Processing Systems, pages 503–511, 2013.
[12] I. Gronau, M. J. Hubisz, et al. Bayesian inference of ancient human demography from individual genome sequences. Nature Genetics, 43(10):1031–1034, Oct 2011.
[13] John A Hartigan. Clustering Algorithms. John Wiley & Sons, Inc., New York, 1975.
[14] Anil K Jain. Data clustering: 50 years beyond k-means. Pattern Recognition Letters, 31(8):651–666, 2010.
[15] Daphne Koller and Mehran Sahami. Hierarchically classifying documents using very few words. In ICML, pages 170–178. Morgan Kaufmann Publishers Inc., 1997.
[16] Bjornar Larsen and Chinatsu Aone. Fast and effective text mining using linear-time document clustering. In SIGKDD, ACM, pages 16–22, 1999.
[17] Álvaro Martínez-Pérez. A density-sensitive hierarchical clustering method. arXiv preprint arXiv:1210.6292, 2012.
[18] Philippe Rigollet and Régis Vert. Optimal rates for plug-in estimators of density level sets. Bernoulli, 15(4):1154–1178, 2009.
[19] Alessandro Rinaldo, Aarti Singh, Rebecca Nugent, and Larry Wasserman. Stability of density-based clustering. Journal of Machine Learning Research, 13:905–948, 2012.
[20] Bernhard Schölkopf, John C. Platt, John C. Shawe-Taylor, Alex J. Smola, and Robert C. Williamson. Estimating the support of a high-dimensional distribution. Neural Computation, 13(7):1443–1471, 2001.
[21] Michael Steinbach, George Karypis, and Vipin Kumar. A comparison of document clustering techniques. In KDD Workshop on Text Mining, 2000.
[22] Werner Stuetzle. Estimating the cluster tree of a density by analyzing the minimal spanning tree of a sample. Journal of Classification, 20(1):25–47, 2003.
[23] Werner Stuetzle and Rebecca Nugent. A generalized single linkage method for estimating the cluster tree of a density. Journal of Computational and Graphical Statistics, 19(2), 2010.
[24] The 1000 Genomes Project Consortium. An integrated map of genetic variation from 1,092 human genomes. Nature, 491:1, 2012.
[25] Ruth Urner, Sharon Wulff, and Shai Ben-David. PLAL: Cluster-based active learning. In COLT, pages 1–22, 2013.
[26] G. Walther. Granulometric smoothing. The Annals of Statistics, pages 2273–2299, 1997.
[27] David Wishart. Mode analysis: A generalization of nearest neighbor which reduces chaining effects. Numerical Taxonomy, 76:282–311, 1969.
[28] Rui Xu, Donald Wunsch, et al. Survey of clustering algorithms. IEEE Transactions on Neural Networks, 16(3):645–678, 2005.
Tight Continuous Relaxation of the Balanced k-Cut Problem
Syama Sundar Rangapuram, Pramod Kaushik Mudrakarta and Matthias Hein
Department of Mathematics and Computer Science
Saarland University, Saarbrücken
Abstract
Spectral Clustering as a relaxation of the normalized/ratio cut has become one of
the standard graph-based clustering methods. Existing methods for the computation of multiple clusters, corresponding to a balanced k-cut of the graph, are
either based on greedy techniques or heuristics which have weak connection to
the original motivation of minimizing the normalized cut. In this paper we propose a new tight continuous relaxation for any balanced k-cut problem and show
that a related recently proposed relaxation is in most cases loose leading to poor
performance in practice. For the optimization of our tight continuous relaxation
we propose a new algorithm for the difficult sum-of-ratios minimization problem
which achieves monotonic descent. Extensive comparisons show that our method
outperforms all existing approaches for ratio cut and other balanced k-cut criteria.
1 Introduction
Graph-based techniques for clustering have become very popular in machine learning as they allow for an easy integration of pairwise relationships in data. The problem of finding k clusters in
a graph can be formulated as a balanced k-cut problem [1, 2, 3, 4], where ratio and normalized
cut are famous instances of balanced graph cut criteria employed for clustering, community detection and image segmentation. The balanced k-cut problem is known to be NP-hard [4] and thus in
practice relaxations [4, 5] or greedy approaches [6] are used for finding the optimal multi-cut. The
most famous approach is spectral clustering [7], which corresponds to the spectral relaxation of the
ratio/normalized cut and uses k-means in the embedding of the vertices found by the first k eigenvectors of the graph Laplacian in order to obtain the clustering. However, the spectral relaxation has
been shown to be loose for k = 2 [8] and for k > 2 no guarantees are known of the quality of the
obtained k-cut with respect to the optimal one. Moreover, in practice even greedy approaches [6]
frequently outperform spectral clustering.
This paper is motivated by another line of recent work [9, 10, 11, 12] where it has been shown that
an exact continuous relaxation for the two cluster case (k = 2) is possible for a quite general class of
balancing functions. Moreover, efficient algorithms for its optimization have been proposed which
produce much better cuts than the standard spectral relaxation. However, the multi-cut problem has
still to be solved via the greedy recursive splitting technique.
Inspired by the recent approach in [13], in this paper we tackle directly the general balanced k-cut
problem based on a new tight continuous relaxation. We show that the relaxation for the asymmetric
ratio Cheeger cut proposed recently by [13] is loose when the data does not contain k well-separated
clusters and thus leads to poor performance in practice. Similar to [13] we can also integrate label
information leading to a transductive clustering formulation. Moreover, we propose an efficient
algorithm for the minimization of our continuous relaxation for which we can prove monotonic
descent. This is in contrast to the algorithm proposed in [13] for which no such guarantee holds.
In extensive experiments we show that our method outperforms all existing methods in terms of the
achieved balanced k-cuts. Moreover, our clustering error is competitive with respect to several other
clustering techniques based on balanced k-cuts and recently proposed approaches based on nonnegative matrix factorization. Also we observe that already with a small amount of label information the clustering error improves significantly.
2 Balanced Graph Cuts
Graphs are used in machine learning typically as similarity graphs, that is the weight of an edge
between two instances encodes their similarity. Given such a similarity graph of the instances, the
clustering problem into k sets can be transformed into a graph partitioning problem, where the goal
is to construct a partition of the graph into k sets such that the cut, that is the sum of weights of the
edge from each set to all other sets, is small and all sets in the partition are roughly of equal size.
Before we introduce balanced graph cuts, we briefly fix the setting and notation. Let G(V, W) denote an undirected, weighted graph with vertex set V with n = |V| vertices and weight matrix W ∈ ℝ_+^{n×n} with W = Wᵀ. There is an edge between two vertices i, j ∈ V if w_ij > 0. The cut between two sets A, B ⊆ V is defined as cut(A, B) = Σ_{i∈A, j∈B} w_ij and we write 1_A for the indicator vector of set A ⊆ V. A collection of k sets (C_1, …, C_k) is a partition of V if ∪_{i=1}^k C_i = V, C_i ∩ C_j = ∅ if i ≠ j and |C_i| ≥ 1, i = 1, …, k. We denote the set of all k-partitions of V by P_k. Furthermore, we denote by Δ_k the simplex {x : x ∈ ℝ^k, x ≥ 0, Σ_{i=1}^k x_i = 1}.
Finally, a set function Ŝ : 2^V → ℝ is called submodular if for all A, B ⊆ V, Ŝ(A∪B) + Ŝ(A∩B) ≤ Ŝ(A) + Ŝ(B). Furthermore, we need the concept of the Lovasz extension of a set function.

Definition 1 Let Ŝ : 2^V → ℝ be a set function with Ŝ(∅) = 0. Let f ∈ ℝ^V be ordered in increasing order f_1 ≤ f_2 ≤ … ≤ f_n and define C_i = {j ∈ V | f_j > f_i} where C_0 = V. Then S : ℝ^V → ℝ given by

    S(f) = Σ_{i=1}^n f_i (Ŝ(C_{i−1}) − Ŝ(C_i)),

is called the Lovasz extension of Ŝ. Note that S(1_A) = Ŝ(A) for all A ⊆ V.
The Lovasz extension of a set function is convex if and only if the set function is submodular [14].
The cut function cut(C, C̄), where C̄ = V\C, is submodular and its Lovasz extension is given by TV(f) = ½ Σ_{i,j=1}^n w_ij |f_i − f_j|.
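The following self-contained sketch (ours, not the authors') evaluates the Lovasz extension of Definition 1 and checks numerically that for the cut function it coincides with TV; it assumes the entries of f are distinct.

```python
import numpy as np

def lovasz_extension(S_hat, f):
    """S(f) = sum_i f_{(i)} (S_hat(C_{i-1}) - S_hat(C_i)), with C_0 = V."""
    val, C = 0.0, set(range(len(f)))
    for j in np.argsort(f):                 # indices in increasing order of f
        C_next = C - {j}                    # C_i: entries strictly above f_{(i)}
        val += f[j] * (S_hat(C) - S_hat(C_next))
        C = C_next
    return val

def cut_value(W, C):
    Cbar = [j for j in range(len(W)) if j not in C]
    return W[np.ix_(sorted(C), Cbar)].sum() if C and Cbar else 0.0

def tv(W, f):
    return 0.5 * np.sum(W * np.abs(f[:, None] - f[None, :]))

# Sanity check: the Lovasz extension of the cut is the total variation.
rng = np.random.default_rng(0)
W = rng.random((6, 6)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
f = rng.random(6)
assert np.isclose(lovasz_extension(lambda C: cut_value(W, C), f), tv(W, f))
```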
2.1 Balanced k-cuts
The balanced k-cut problem is defined as

    min_{(C_1,…,C_k) ∈ P_k}  Σ_{i=1}^k cut(C_i, C̄_i) / Ŝ(C_i)  =:  BCut(C_1, …, C_k)        (1)
where Ŝ : 2^V → ℝ_+ is a balancing function with the goal that all sets C_i are of the same "size". In this paper, we assume that Ŝ(∅) = 0 and for any C ⊊ V, C ≠ ∅, Ŝ(C) ≥ m, for some m > 0. In the literature one finds mainly the following submodular balancing functions (in brackets is the name of the overall balanced graph cut criterion BCut(C_1, …, C_k)):
    Ŝ(C) = |C|,                        (Ratio Cut)                          (2)
    Ŝ(C) = min{|C|, |C̄|},              (Ratio Cheeger Cut)
    Ŝ(C) = min{(k−1)|C|, |C̄|}.         (Asymmetric Ratio Cheeger Cut)
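Written as plain set functions on a ground set of n vertices (a small illustrative sketch of ours), the three balancing functions are:

```python
def ratio_cut_balance(C, n):
    return len(C)                               # S_hat(C) = |C|

def ratio_cheeger_balance(C, n):
    return min(len(C), n - len(C))              # S_hat(C) = min{|C|, |V \ C|}

def asym_ratio_cheeger_balance(C, n, k):
    return min((k - 1) * len(C), n - len(C))    # maximal at |C| = n / k
```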
The Ratio Cut is well studied in the literature e.g. [3, 7, 6] and corresponds to a balancing function without bias towards a particular size of the sets, whereas the Asymmetric Ratio Cheeger Cut recently proposed in [13] has a bias towards sets of size |V|/k (Ŝ(C) attains its maximum at this point) which makes perfect sense if one expects clusters which have roughly equal size. An intermediate version between the two is the Ratio Cheeger Cut which has a symmetric balancing function and strongly penalizes overly large clusters. For the ease of presentation we restrict ourselves to these balancing functions. However, we can also handle the corresponding weighted cases, e.g., Ŝ(C) = vol(C) = Σ_{i∈C} d_i, where d_i = Σ_{j=1}^n w_ij, leading to the normalized cut [4].
3 Tight Continuous Relaxation for the Balanced k-Cut Problem
In this section we discuss our proposed relaxation for the balanced k-cut problem (1). It turns out
that a crucial question towards a tight multi-cut relaxation is the choice of the constraints so that
the continuous problem also yields a partition (together with a suitable rounding scheme). The
motivation for our relaxation is taken from the recent work of [9, 10, 11], where exact relaxations
are shown for the case k = 2. Basically, they replace the ratio of set functions with the ratio of
the corresponding Lovasz extensions. We use the same idea for the objective of our continuous
relaxation of the k-cut problem (1) which is given as
    min_{F = (F_1,…,F_k), F ∈ ℝ_+^{n×k}}  Σ_{l=1}^k TV(F_l) / S(F_l)                          (3)

    subject to:  F_(i) ∈ Δ_k,      i = 1, …, n,    (simplex constraints)
                 max{F_(i)} = 1,   ∀i ∈ I,          (membership constraints)
                 S(F_l) ≥ m,       l = 1, …, k,     (size constraints)

where S is the Lovasz extension of the set function Ŝ and m = min_{C ⊊ V, C ≠ ∅} Ŝ(C).
We have m = 1 for Ratio Cut and Ratio Cheeger Cut, whereas m = k − 1 for Asymmetric Ratio Cheeger Cut. Note that TV is the Lovasz extension of the cut functional cut(C, C̄). In order to simplify notation we denote for a matrix F ∈ ℝ^{n×k} by F_l the l-th column of F and by F_(i) the i-th row of F. Note that the rows of F correspond to the vertices of the graph and the j-th column of F corresponds to the set C_j of the desired partition. The set I ⊆ V in the membership constraints is chosen adaptively by our method during the sequential optimization described in Section 4.
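Given the helpers sketched earlier, the relaxed objective in (3) can be evaluated column by column; this snippet (ours) assumes `tv` and `lovasz_extension` as defined above and a balancing set function `S_hat`.

```python
def relaxed_objective(F, W, S_hat):
    """sum_l TV(F_l) / S(F_l) over the columns F_l of F."""
    return sum(tv(W, F[:, l]) / lovasz_extension(S_hat, F[:, l])
               for l in range(F.shape[1]))
```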
An obvious question is how to get from the continuous solution F* of (3) to a partition (C_1, …, C_k) ∈ P_k, which is typically called rounding. Given F* we construct the sets by assigning each vertex i to the column where the i-th row attains its maximum. Formally,

    C_i = {j ∈ V | i = argmax_{s=1,…,k} F_{js}},   i = 1, …, k,        (Rounding)        (4)

where ties are broken randomly. If there exists a row such that the rounding is not unique, we say that the solution is weakly degenerated. If furthermore the resulting sets (C_1, …, C_k) do not form a partition, that is one of the sets is empty, then we say that the solution is strongly degenerated.
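Rounding (4) is a plain row-wise argmax; a minimal sketch (ours) follows. Note that `np.argmax` breaks ties deterministically rather than randomly.

```python
import numpy as np

def round_to_partition(F):
    labels = np.argmax(F, axis=1)      # column attaining the row maximum
    clusters = [set(np.flatnonzero(labels == s)) for s in range(F.shape[1])]
    return clusters                    # strongly degenerated if a set is empty
```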
First, we connect our relaxation to the previous work of [11] for the case k = 2. Indeed for symmetric balancing functions such as the Ratio Cheeger Cut, our continuous relaxation (3) is exact even without membership and size constraints.

Theorem 1 Let Ŝ be a non-negative symmetric balancing function, Ŝ(C) = Ŝ(C̄), and denote by p* the optimal value of (3) without membership and size constraints for k = 2. Then it holds

    p* = min_{(C_1,C_2) ∈ P_2}  Σ_{i=1}^2 cut(C_i, C̄_i) / Ŝ(C_i).

Furthermore there exists a solution F* of (3) such that F* = [1_{C*}, 1_{C̄*}], where (C*, C̄*) is the optimal balanced 2-cut partition.

Note that rounding trivially yields a solution in the setting of the previous theorem.
A second result shows that indeed our proposed optimization problem (3) is a relaxation of the
balanced k-cut problem (1). Furthermore, the relaxation is exact if I = V .
Proposition 1 The continuous problem (3) is a relaxation of the k-cut problem (1). The relaxation
is exact, i.e., both problems are equivalent, if I = V .
The row-wise simplex and membership constraints enforce that each vertex in I belongs to exactly
one component. Note that these constraints alone (even if I = V ) can still not guarantee that F
corresponds to a k-way partition since an entire column of F can be zero. This is avoided by the
column-wise size constraints that enforce that each component has at least one vertex.
If I = V it is immediate from the proof that problem (3) is no longer a continuous problem as the feasible set consists only of indicator matrices of partitions. In this case rounding yields trivially a partition. On the other hand, if I = ∅ (i.e., no membership constraints), and k > 2 it is not guaranteed
that rounding of the solution of the continuous problem yields a partition. Indeed, we will see in
the following that for symmetric balancing functions one can, under these conditions, show that
the solution is always strongly degenerated and rounding does not yield a partition (see Theorem
2). Thus we observe that the index set I controls the degree to which the partition constraint is
enforced. The idea behind our suggested relaxation is that it is well known in image processing that
minimizing the total variation yields piecewise constant solutions (in fact this follows from seeing
the total variation as Lovasz extension of the cut). Thus if |I| is sufficiently large, the vertices where
the values are fixed to 0 or 1 propagate this to their neighboring vertices and finally to the whole
graph. We discuss the choice of I in more detail in Section 4.
Simplex constraints alone are not sufficient to yield a partition: Our approach has been inspired by [13] who proposed the following continuous relaxation for the Asymmetric Ratio Cheeger Cut

    min_{F = (F_1,…,F_k), F ∈ ℝ_+^{n×k}}  Σ_{l=1}^k TV(F_l) / ‖F_l − quant_{k−1}(F_l) 1‖_1        (5)

    subject to:  F_(i) ∈ Δ_k,   i = 1, …, n,   (simplex constraints)

where S(f) = ‖f − quant_{k−1}(f) 1‖_1 is the Lovasz extension of Ŝ(C) = min{(k−1)|C|, |C̄|} and quant_{k−1}(f) is the (k−1)-quantile of f ∈ ℝ^n. Note that in their approach no membership constraints and size constraints are present.
We now show that the usage of simplex constraints in the optimization problem (3) is not sufficient to guarantee that the solution F* can be rounded to a partition for any symmetric balancing function in (1). For asymmetric balancing functions as employed for the Asymmetric Ratio Cheeger Cut by [13] in their relaxation (5) we can prove such a strong result only in the case where the graph is disconnected. However, note that if the number of components of the graph is less than the number of desired clusters k, the multi-cut problem is still non-trivial.
Theorem 2 Let Ŝ(C) be any non-negative symmetric balancing function. Then the continuous relaxation

    min_{F = (F_1,…,F_k), F ∈ ℝ_+^{n×k}}  Σ_{l=1}^k TV(F_l) / S(F_l)                          (6)

    subject to:  F_(i) ∈ Δ_k,   i = 1, …, n,   (simplex constraints)

of the balanced k-cut problem (1) is void in the sense that the optimal solution F* of the continuous problem can be constructed from the optimal solution of the 2-cut problem and F* cannot be rounded into a k-way partition, see (4). If the graph is disconnected, then the same holds also for any non-negative asymmetric balancing function.
The proof of Theorem 2 shows additionally that for any balancing function if the graph is disconnected, the solution of the continuous relaxation (6) is always zero, while clearly the solution of the
balanced k-cut problem need not be zero. This shows that the relaxation can be arbitrarily bad in
this case. In fact the relaxation for the asymmetric case can even fail if the graph is not disconnected
but there exists a cut of the graph which is very small as the following corollary indicates.
Corollary 1 Let Ŝ be an asymmetric balancing function, let C* = argmin_{C⊆V} cut(C, C̄)/Ŝ(C), and suppose that

    γ* := (k−1) · cut(C*, C̄*)/Ŝ(C̄*) + cut(C*, C̄*)/Ŝ(C*)  <  min_{(C_1,…,C_k) ∈ P_k} Σ_{i=1}^k cut(C_i, C̄_i)/Ŝ(C_i).

Then there exists a feasible F for (6) with F_1 = 1_{C*} and F_l = α_l 1_{C̄*}, l = 2, …, k, such that Σ_{l=2}^k α_l = 1, α_l > 0, which has objective Σ_{i=1}^k TV(F_i)/S(F_i) = γ* and which cannot be rounded to a k-way partition.
Theorem 2 shows that the membership and size constraints which we have introduced in our relaxation (3) are essential to obtain a partition for symmetric balancing functions.
[Figure 1 here: five panels (a)-(e) showing point clouds and the corresponding partition matrices.]

Figure 1: Toy example illustrating that the relaxation of [13] converges to a degenerate solution when applied to a graph with a dominating 2-cut. (a) 10NN-graph generated from three Gaussians in 10 dimensions (b) continuous solution of (5) from [13] for k = 3, (c) rounding of the continuous solution of [13] does not yield a 3-partition (d) continuous solution found by our method together with the vertices i ∈ I (black) where the membership constraint is enforced. Our continuous solution corresponds already to a partition. (e) clustering found by rounding of our continuous solution (trivial as we have converged to a partition). In (b)-(e), we color data point i according to F_(i) ∈ ℝ³.
For the asymmetric balancing function, failure of the relaxation (6) and thus also of the relaxation (5) of [13] is only guaranteed for disconnected graphs. However, Corollary 1 indicates that degenerated solutions should also be a problem when the graph is still connected but there exists a dominating cut. We illustrate this with a toy example in Figure 1 where the algorithm of [13] for solving (5) fails as it converges exactly to the solution predicted by Corollary 1 and thus only produces a 2-partition instead of the desired 3-partition. The algorithm for our relaxation enforcing membership constraints converges to a continuous solution which is in fact a partition matrix so that no rounding is necessary.
4 Monotonic Descent Method for Minimization of a Sum of Ratios
Apart from the new relaxation another key contribution of this paper is the derivation of an algorithm
which yields a sequence of feasible points for the difficult non-convex problem (3) and reduces
monotonically the corresponding objective. We would like to note that the algorithm proposed by
[13] for (5) does not yield monotonic descent. In fact it is unclear what the derived guarantee for
the algorithm in [13] implies for the generated sequence. Moreover, our algorithm works for any
non-negative submodular balancing function.
The key insight in order to derive a monotonic descent method for solving the sum-of-ratios minimization problem (3) is to eliminate the ratio by introducing a new set of variables λ = (λ_1, …, λ_k):

    min_{F = (F_1,…,F_k), F ∈ ℝ_+^{n×k}, λ ∈ ℝ_+^k}  Σ_{l=1}^k λ_l                          (7)

    subject to:  TV(F_l) ≤ λ_l S(F_l),   l = 1, …, k,   (descent constraints)
                 F_(i) ∈ Δ_k,            i = 1, …, n,    (simplex constraints)
                 max{F_(i)} = 1,         ∀i ∈ I,          (membership constraints)
                 S(F_l) ≥ m,             l = 1, …, k.     (size constraints)
Note that for the optimal solution (F*, λ*) of this problem it holds TV(F*_l) = λ*_l S(F*_l), l = 1, …, k (otherwise one can decrease λ*_l and hence the objective) and thus equivalence holds. This is still a non-convex problem as the descent, membership and size constraints are non-convex. Our algorithm proceeds now in a sequential manner. At each iterate we do a convex inner approximation of the constraint set, that is the convex approximation is a subset of the non-convex constraint set, based on the current iterate (F^t, λ^t). Then we optimize the resulting convex optimization problem and repeat the process. In this way we get a sequence of feasible points for the original problem (7) for which we will prove monotonic descent in the sum-of-ratios.
Convex approximation: As Ŝ is submodular, S is convex. Let s_l^t ∈ ∂S(F_l^t) be an element of the sub-differential of S at the current iterate F_l^t. We have by Prop. 3.2 in [14], (s_l^t)_{j_i} = Ŝ(C_{i−1}^l) − Ŝ(C_i^l), where j_i is the index of the i-th smallest component of F_l^t and C_i^l = {j ∈ V | (F_l^t)_j > (F_l^t)_{j_i}}. Moreover, using the definition of subgradient, we have S(F_l) ≥ S(F_l^t) + ⟨s_l^t, F_l − F_l^t⟩ = ⟨s_l^t, F_l⟩.
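The subgradient has the same level-set structure as the Lovasz extension itself; a sketch (ours) mirroring the formula above, again assuming distinct entries:

```python
import numpy as np

def lovasz_subgradient(S_hat, f):
    """(s)_{j_i} = S_hat(C_{i-1}) - S_hat(C_i), with f sorted increasingly."""
    s, C = np.zeros(len(f)), set(range(len(f)))   # C_0 = V
    for j in np.argsort(f):
        C_next = C - {j}
        s[j] = S_hat(C) - S_hat(C_next)
        C = C_next
    return s
```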
For the descent constraints, let λ_l^t = TV(F_l^t)/S(F_l^t) and introduce new variables δ_l = λ_l − λ_l^t that capture the amount of change in each ratio. We further decompose δ_l as δ_l = δ_l^+ − δ_l^−, δ_l^+ ≥ 0, δ_l^− ≥ 0. Let M = max_{f∈[0,1]^n} S(f) = max_{C⊆V} Ŝ(C); then for S(F_l) ≥ m,

    TV(F_l) − λ_l S(F_l) ≤ TV(F_l) − λ_l^t ⟨s_l^t, F_l⟩ − δ_l^+ S(F_l) + δ_l^− S(F_l)
                          ≤ TV(F_l) − λ_l^t ⟨s_l^t, F_l⟩ − δ_l^+ m + δ_l^− M.
Finally, note that because of the simplex constraints, the membership constraints can be rewritten as max{F_(i)} ≥ 1. Let i ∈ I and define j_i := argmax_j F_{ij}^t (ties are broken randomly). Then the membership constraints can be relaxed as follows: 0 ≤ 1 − max{F_(i)} ≤ 1 − F_{ij_i} ⟹ F_{ij_i} ≥ 1. As F_{ij} ≤ 1 we get F_{ij_i} = 1. Thus the convex approximation of the membership constraints fixes the assignment of the i-th point to a cluster and thus can be interpreted as "label constraint". However, unlike the transductive setting, the labels for the vertices in I are automatically chosen by our method. The actual choice of the set I will be discussed in Section 4.1. We use the notation L = {(i, j_i) | i ∈ I} for the label set generated from I (note that L is fixed once I is fixed).
Descent algorithm: Our descent algorithm for minimizing (7) solves at each iteration t the following convex optimization problem (8):

    min_{F ∈ ℝ_+^{n×k}, δ^+ ∈ ℝ_+^k, δ^− ∈ ℝ_+^k}  Σ_{l=1}^k (δ_l^+ − δ_l^−)                  (8)

    subject to:  TV(F_l) ≤ λ_l^t ⟨s_l^t, F_l⟩ + δ_l^+ m − δ_l^− M,   l = 1, …, k,   (descent constraints)
                 F_(i) ∈ Δ_k,                                        i = 1, …, n,    (simplex constraints)
                 F_{ij_i} = 1,                                       ∀(i, j_i) ∈ L,  (label constraints)
                 ⟨s_l^t, F_l⟩ ≥ m,                                   l = 1, …, k.     (size constraints)
As its solution F^{t+1} is feasible for (3) we update λ_l^{t+1} = TV(F_l^{t+1})/S(F_l^{t+1}) and s_l^{t+1} ∈ ∂S(F_l^{t+1}), l = 1, …, k, and repeat the process until the sequence terminates, that is no further descent is possible as the following theorem states, or the relative descent in Σ_{l=1}^k λ_l^t is smaller than a predefined ε. The following Theorem 3 shows the monotonic descent property of our algorithm.
Theorem 3 The sequence {F^t} produced by the above algorithm satisfies Σ_{l=1}^k TV(F_l^{t+1})/S(F_l^{t+1}) < Σ_{l=1}^k TV(F_l^t)/S(F_l^t) for all t ≥ 0 or the algorithm terminates.
The inner problem (8) is convex, but contains the non-smooth term TV in the constraints. We
eliminate the non-smoothness by introducing additional variables and derive an equivalent linear
programming (LP) formulation. We solve this LP via the PDHG algorithm [15, 16]. The LP and the
exact iterates can be found in the supplementary material.
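Putting the pieces together, the outer loop of the method can be sketched as follows; `solve_inner_lp` stands in for the LP reformulation of (8) solved with PDHG and is assumed to be given, as are `tv_of`, `S_of` and `subgrad_of` evaluating TV, S and a subgradient on a column.

```python
def monotonic_descent(F, tv_of, S_of, subgrad_of, solve_inner_lp, eps=1e-6):
    k = F.shape[1]
    lam = [tv_of(F[:, l]) / S_of(F[:, l]) for l in range(k)]
    while True:
        s = [subgrad_of(F[:, l]) for l in range(k)]   # s_l^t in the subdifferential
        F_new = solve_inner_lp(F, lam, s)             # solves problem (8)
        lam_new = [tv_of(F_new[:, l]) / S_of(F_new[:, l]) for l in range(k)]
        if sum(lam) - sum(lam_new) < eps * sum(lam):  # relative descent too small
            return F_new
        F, lam = F_new, lam_new
```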
4.1 Choice of membership constraints I
The overall algorithm scheme for solving the problem (1) is given in the supplementary material. For the membership constraints we start initially with I^0 = ∅ and sequentially solve the inner problem (8). From its solution F^{t+1} we construct a P'_k = (C_1, …, C_k) via rounding, see (4). We repeat this process until we either do not improve the resulting balanced k-cut or P'_k is not a partition. In this case we update I^{t+1} and double the number of membership constraints. Let (C*_1, …, C*_k) be the currently optimal partition. For each l ∈ {1, …, k} and i ∈ C*_l we compute

    b_i^l = cut(C*_l \ {i}, C̄*_l ∪ {i}) / Ŝ(C*_l \ {i}) + min_{s≠l} cut(C*_s ∪ {i}, C̄*_s \ {i}) / Ŝ(C*_s ∪ {i})        (9)

and define O_l = {(π_1, …, π_{|C*_l|}) | b_{π_1}^l ≥ b_{π_2}^l ≥ … ≥ b_{π_{|C*_l|}}^l}. The top-ranked vertices in O_l correspond to the ones which lead to the largest minimal increase in BCut when moved from C*_l to another component and thus are most likely to belong to their current component.
Thus it is natural to fix the top-ranked vertices for each component first. Note that the rankings O_l, l = 1, …, k are updated when a better partition is found. Thus the membership constraints correspond always to the vertices which lead to the largest minimal increase in BCut when moved to another component. In Figure 1 one can observe that the fixed labeled points are lying close to the centers of the found clusters. The number of membership constraints depends on the graph. The better separated the clusters are, the fewer membership constraints need to be enforced in order to avoid degenerate solutions. Finally, we stop the algorithm if we see no more improvement in the cut or the continuous objective and the continuous solution corresponds to a partition.
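A sketch (ours) of the ranking score (9), with `cut_of(C)` denoting cut(C, V\C) and `S_hat` the balancing function, both assumed given; clusters are represented as sets of vertex indices.

```python
def b_score(i, l, clusters, cut_of, S_hat):
    """Minimal increase in BCut when vertex i leaves cluster l (equation (9))."""
    C_l = clusters[l] - {i}
    score = cut_of(C_l) / S_hat(C_l)
    score += min(cut_of(clusters[s] | {i}) / S_hat(clusters[s] | {i})
                 for s in range(len(clusters)) if s != l)
    return score
```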
5 Experiments
We evaluate our method against a diverse selection of state-of-the-art clustering methods like spectral clustering (Spec) [7], BSpec [11], Graclus¹ [6], NMF based approaches PNMF [18], NSC [19], ONMF [20], LSD [21], NMFR [22] and MTV [13] which optimizes (5). We used the publicly available code [22, 13] with default settings. We run our method using 5 random initializations and 7 initializations based on the spectral clustering solution, similar to [13] (who use 30 such initializations). In addition to the datasets provided in [13], we also selected a variety of datasets from the UCI repository shown below. For all the datasets not in [13], symmetric k-NN graphs are built with Gaussian weights exp(−s‖x − y‖² / min{σ²_{x,k}, σ²_{y,k}}), where σ_{x,k} is the k-NN distance of point x. We chose the parameters s and k in a method independent way by testing for each dataset several graphs using all the methods over different choices of k ∈ {3, 5, 7, 10, 15, 20, 40, 60, 80, 100} and s ∈ {0.1, 1, 4}. The best choice in terms of the clustering error across all the methods and datasets, is s = 1, k = 15.
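Before the dataset table, here is a sketch (ours) of this graph construction; it uses dense O(n²) distances, so it is only practical for small datasets, and it symmetrizes by taking the union of the k-NN relations, which is one common reading of "symmetric k-NN graph".

```python
import numpy as np

def knn_graph(X, k=15, s=1.0):
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    sigma = np.sort(D, axis=1)[:, k]            # sigma_{x,k}: k-NN distance
    idx = np.argsort(D, axis=1)[:, 1:k + 1]     # k nearest neighbors (skip self)
    W = np.zeros_like(D)
    for i in range(len(X)):
        for j in idx[i]:
            w = np.exp(-s * D[i, j] ** 2 / min(sigma[i], sigma[j]) ** 2)
            W[i, j] = W[j, i] = w               # symmetrize (union of relations)
    return W
```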
              Iris  wine  vertebral  ecoli  4moons  webkb4  optdigits  USPS  pendigits  20news  MNIST
# vertices     150   178        310    336    4000    4196       5620  9298      10992   19928  70000
# classes        3     3          3      6       4       4         10    10         10      20     10
Quantitative results: In our first experiment we evaluate our method in terms of solving the balanced k-cut problem for various balancing functions, data sets and graph parameters. The following
table reports the fraction of times a method achieves the best as well as strictly best balanced k-cut
over all constructed graphs and datasets (in total 30 graphs per dataset). For reference, we also report
the obtained cuts for other clustering methods although they do not directly minimize this criterion
in italic; methods that directly optimize the criterion are shown in normal font. Our algorithm can
handle all balancing functions and significantly outperforms all other methods across all criteria.
For ratio and normalized cut cases we achieve better results than [7, 11, 6] which directly optimize
this criterion. This shows that the greedy recursive bi-partitioning badly affects the performance of
[11], which, otherwise, was shown to obtain the best cuts on several benchmark datasets [23]. This
further shows the need for methods that directly minimize the multi-cut. It is striking that the competing method of [13], which directly minimizes the asymmetric ratio cut, is beaten significantly by
Graclus as well as our method. As this clear trend is less visible in the qualitative experiments, we
suspect that extreme graph parameters lead to fast convergence to a degenerate solution.
                               Ours    MTV  BSpec   Spec  Graclus  PNMF   NSC  ONMF   LSD  NMFR
RCC-asym   Best (%)           80.54  25.50  23.49   7.38    38.26  2.01  5.37  2.01  4.03  1.34
           Strictly Best (%)  44.97  10.74   1.34   0.00     4.70  0.00  0.00  0.00  0.00  0.00
RCC-sym    Best (%)           94.63   8.72  19.46   6.71    37.58  0.67  4.03  0.00  0.67  0.67
           Strictly Best (%)  61.74   0.00   0.67   0.00     4.70  0.00  0.00  0.00  0.00  0.00
NCC-asym   Best (%)           93.29  13.42  20.13  10.07    38.26  0.67  5.37  2.01  4.70  2.01
           Strictly Best (%)  56.38   2.01   0.00   0.00     2.01  0.00  0.00  0.67  0.00  1.34
NCC-sym    Best (%)           98.66  10.07  20.81   9.40    40.27  1.34  4.03  0.67  3.36  1.34
           Strictly Best (%)  59.06   0.00   0.00   0.00     1.34  0.00  0.00  0.00  0.00  0.00
Rcut       Best (%)           85.91   7.38  20.13  10.07    32.89  0.67  4.03  0.00  1.34  1.34
           Strictly Best (%)  58.39   0.00   2.68   2.01     8.72  0.00  0.00  0.00  0.00  0.67
Ncut       Best (%)           95.97  10.07  20.13   9.40    37.58  1.34  4.70  0.67  3.36  0.67
           Strictly Best (%)  61.07   0.00   0.00   0.00     4.03  0.00  0.00  0.00  0.00  0.00
Qualitative results: In the following table, we report the clustering errors and the balanced k-cuts obtained by all methods using the graphs built with k = 15, s = 1 for all datasets. As the main goal is to compare to [13] we choose their balancing function (RCC-asym). Again, our method always achieved the best cuts across all datasets. In three cases, the best cut also corresponds to the best clustering performance. In case of vertebral, 20news, and webkb4 the best cuts actually result in high errors. However, we see in our next experiment that integrating ground-truth label information helps in these cases to improve the clustering performance significantly.

¹ Since [6], a multi-level algorithm directly minimizing Rcut/Ncut, is shown to be superior to METIS [17], we do not compare with [17].
                    Iris   wine  vertebral  ecoli  4moons  webkb4  optdigits   USPS  pendigits  20news  MNIST
BSpec    Err(%)    23.33  37.64      50.00  19.35   36.33   60.46      11.30  20.09      17.59   84.21  11.82
         BCut      1.495  6.417      1.890  2.550   0.634   1.056      0.386  0.822      0.081   0.966  0.471
Spec     Err(%)    22.00  20.22      48.71  14.88   31.45   60.32       7.81  21.05      16.75   79.10  22.83
         BCut      1.783  5.820      1.950  2.759   0.917   1.520      0.442  0.873      0.141   1.170  0.707
PNMF     Err(%)    22.67  27.53      50.00  16.37   35.23   60.94      10.37  24.07      17.93   66.00  12.80
         BCut      1.508  4.916      2.250  2.652   0.737   3.520      0.548  1.180      0.415   2.924  0.934
NSC      Err(%)    23.33  17.98      50.00  14.88   32.05   59.49       8.24  20.53      19.81   78.86  21.27
         BCut      1.518  5.140      2.046  2.754   0.933   3.566      0.482  0.850      0.101   2.233  0.688
ONMF     Err(%)    23.33  28.09      50.65  16.07   35.35   60.94      10.37  24.14      22.82   69.02  27.27
         BCut      1.518  4.881      2.371  2.633   0.725   3.621      0.548  1.183      0.548   3.058  1.575
LSD      Err(%)    23.33  17.98      39.03  18.45   35.68   47.93       8.42  22.68      13.90   67.81  24.49
         BCut      1.518  5.399      2.557  2.523   0.782   2.082      0.483  0.918      0.188   2.056  0.959
NMFR     Err(%)    22.00  11.24      38.06  22.92   36.33   40.73       2.08  22.17      13.13   39.97   fail
         BCut      1.627  4.318      2.713  2.556   0.840   1.467      0.369  0.992      0.240   1.241      -
Graclus  Err(%)    23.33   8.43      49.68  16.37    0.45   39.97       1.67  19.75      10.93   60.69   2.43
         BCut      1.534  4.293      1.890  2.414   0.589   1.581      0.350  0.815      0.092   1.431  0.440
MTV      Err(%)    22.67  18.54      34.52  22.02    7.72   48.40       4.11  15.13      20.55   72.18   3.77
         BCut      1.508  5.556      2.433  2.500   0.774   2.346      0.374  0.940      0.193   3.291  0.458
Ours     Err(%)    23.33   6.74      50.00  16.96    0.45   60.46       1.71  19.72      19.95   79.51   2.37
         BCut      1.495  4.168      1.890  2.399   0.589   1.056      0.350  0.802      0.079   0.895  0.439
Transductive Setting: We evaluate our method against [13] in a transductive setting. As in [13], we
randomly sample either one label or a fixed percentage of labels per class from the ground truth. We
report clustering errors and the cuts (RCC-asym) for both methods for different choices of labels.
For label experiments their initialization strategy seems to work better as the cuts improve compared
to the unlabeled case. However, observe that in some cases their method seems to fail completely
(Iris and 4moons for one label per class).
Labels                    Iris   wine  vertebral  ecoli  4moons  webkb4  optdigits   USPS  pendigits  20news  MNIST
1     MTV    Err(%)      33.33   9.55      42.26  13.99   35.75   51.98       1.69  12.91      14.49   50.96   2.45
             BCut        3.855  4.288      2.244  2.430   0.723   1.596      0.352  0.846      0.127   1.286  0.439
      Ours   Err(%)      22.67   8.99      50.32  15.48    0.57   45.11       1.69  12.98      10.98   68.53   2.36
             BCut        1.571  4.234      2.265  2.432   0.610   1.471      0.352  0.812      0.113   1.057  0.439
1%    MTV    Err(%)      33.33  10.67      39.03  14.29    0.45   48.38       1.67   5.21       7.75   40.18   2.41
             BCut        3.855  4.277      2.300  2.429   0.589   1.584      0.354  0.789      0.129   1.208  0.443
      Ours   Err(%)      22.67   6.18      41.29  13.99    0.45   41.63       1.67   5.13       7.75   37.42   2.33
             BCut        1.571  4.220      2.288  2.419   0.589   1.462      0.354  0.789      0.128   1.157  0.442
5%    MTV    Err(%)      17.33   7.87      40.65  14.58    0.45   40.09       1.51   4.85       1.79   31.89   2.18
             BCut        1.685  4.330      2.701  2.462   0.589   1.763      0.369  0.812      0.188   1.254  0.455
      Ours   Err(%)      17.33   6.74      37.10  13.99    0.45   38.04       1.53   4.85       1.76   30.07   2.18
             BCut        1.685  4.224      2.724  2.461   0.589   1.719      0.369  0.811      0.188   1.210  0.455
10%   MTV    Err(%)      18.67   7.30      39.03  13.39    0.38   40.63       1.41   4.19       1.24   27.80   2.03
             BCut        1.954  4.332      3.187  2.776   0.592   2.057      0.377  0.833      0.197   1.346  0.465
      Ours   Err(%)      14.67   6.74      33.87  13.10    0.38   41.97       1.41   4.25       1.24   26.55   2.02
             BCut        1.960  4.194      3.134  2.778   0.592   1.972      0.377  0.833      0.197   1.314  0.465
6 Conclusion

We presented a framework for directly minimizing the balanced k-cut problem based on a new tight continuous relaxation. Apart from the standard ratio/normalized cut, our method can also handle new application-specific balancing functions. Moreover, in contrast to a recursive splitting approach [24], our method enables the direct integration of prior information available in form of must/cannot-link constraints, which is an interesting topic for future research. Finally, the monotonic descent algorithm proposed for the difficult sum-of-ratios problem is another key contribution of the paper that is of independent interest.
References
[1] W. E. Donath and A. J. Hoffman. Lower bounds for the partitioning of graphs. IBM J. Res. Develop., 17:420–425, 1973.
[2] A. Pothen, H. D. Simon, and K.-P. Liou. Partitioning sparse matrices with eigenvectors of graphs. SIAM J. Matrix Anal. Appl., 11(3):430–452, 1990.
[3] L. Hagen and A. B. Kahng. Fast spectral methods for ratio cut partitioning and clustering. In ICCAD, pages 10–13, 1991.
[4] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 22:888–905, 2000.
[5] A. Ng, M. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In NIPS, pages 849–856, 2001.
[6] I. Dhillon, Y. Guan, and B. Kulis. Weighted graph cuts without eigenvectors: A multilevel approach. IEEE Trans. Pattern Anal. Mach. Intell., pages 1944–1957, 2007.
[7] U. von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17:395–416, 2007.
[8] S. Guattery and G. Miller. On the quality of spectral separators. SIAM J. Matrix Anal. Appl., 19:701–719, 1998.
[9] A. Szlam and X. Bresson. Total variation and Cheeger cuts. In ICML, pages 1039–1046, 2010.
[10] M. Hein and T. Bühler. An inverse power method for nonlinear eigenproblems with applications in 1-spectral clustering and sparse PCA. In NIPS, pages 847–855, 2010.
[11] M. Hein and S. Setzer. Beyond spectral clustering - tight relaxations of balanced graph cuts. In NIPS, pages 2366–2374, 2011.
[12] X. Bresson, T. Laurent, D. Uminsky, and J. H. von Brecht. Convergence and energy landscape for Cheeger cut clustering. In NIPS, pages 1394–1402, 2012.
[13] X. Bresson, T. Laurent, D. Uminsky, and J. H. von Brecht. Multiclass total variation clustering. In NIPS, pages 1421–1429, 2013.
[14] F. Bach. Learning with submodular functions: A convex optimization perspective. Foundations and Trends in Machine Learning, 6(2-3):145–373, 2013.
[15] A. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. J. of Math. Imaging and Vision, 40:120–145, 2011.
[16] T. Pock and A. Chambolle. Diagonal preconditioning for first order primal-dual algorithms in convex optimization. In ICCV, pages 1762–1769, 2011.
[17] G. Karypis and V. Kumar. A fast and high quality multilevel scheme for partitioning irregular graphs. SIAM J. Sci. Comput., 20(1):359–392, 1998.
[18] Z. Yang and E. Oja. Linear and nonlinear projective nonnegative matrix factorization. IEEE Transactions on Neural Networks, 21(5):734–749, 2010.
[19] C. Ding, T. Li, and M. I. Jordan. Nonnegative matrix factorization for combinatorial optimization: Spectral clustering, graph matching, and clique finding. In ICDM, pages 183–192, 2008.
[20] C. Ding, T. Li, W. Peng, and H. Park. Orthogonal nonnegative matrix tri-factorizations for clustering. In KDD, pages 126–135, 2006.
[21] R. Arora, M. R. Gupta, A. Kapila, and M. Fazel. Clustering by left-stochastic matrix factorization. In ICML, pages 761–768, 2011.
[22] Z. Yang, T. Hao, O. Dikmen, X. Chen, and E. Oja. Clustering by nonnegative matrix factorization using graph random walk. In NIPS, pages 1088–1096, 2012.
[23] A. J. Soper, C. Walshaw, and M. Cross. A combined evolutionary search and multilevel optimisation approach to graph-partitioning. J. of Global Optimization, 29(2):225–241, 2004.
[24] S. S. Rangapuram and M. Hein. Constrained 1-spectral clustering. In AISTATS, pages 1143–1151, 2012.
5,063 | 5,584 | Streaming, Memory Limited Algorithms for
Community Detection
Marc Lelarge ∗
Inria & ENS
23 Avenue d?Italie, Paris 75013
[email protected]
Se-Young. Yun
MSR-Inria
23 Avenue d?Italie, Paris 75013
[email protected]
Alexandre Proutiere †
KTH, EE School / ACL
Osquldasv. 10, Stockholm 100-44, Sweden
[email protected]
Abstract
In this paper, we consider sparse networks consisting of a finite number of nonoverlapping communities, i.e. disjoint clusters, so that there is higher density
within clusters than across clusters. Both the intra- and inter-cluster edge densities
vanish when the size of the graph grows large, making the cluster reconstruction
problem noisier and hence more difficult to solve. We are interested in scenarios where
the network size is very large, so that the adjacency matrix of the graph is hard to
manipulate and store. The data stream model in which columns of the adjacency
matrix are revealed sequentially constitutes a natural framework in this setting.
For this model, we develop two novel clustering algorithms that extract the clusters asymptotically accurately. The first algorithm is offline, as it needs to store
and keep the assignments of nodes to clusters, and requires a memory that scales
linearly with the network size. The second algorithm is online, as it may classify
a node when the corresponding column is revealed and then discard this information. This algorithm requires a memory growing sub-linearly with the network
size. To construct these efficient streaming memory-limited clustering algorithms,
we first address the problem of clustering with partial information, where only a
small proportion of the columns of the adjacency matrix is observed and develop,
for this setting, a new spectral algorithm which is of independent interest.
1
Introduction
Extracting clusters or communities in networks has numerous applications and constitutes a fundamental task in many disciplines, including social science, biology, and physics. Most methods
for clustering networks assume that pairwise ?interactions? between nodes can be observed, and
that from these observations, one can construct a graph which is then partitioned into clusters. The
resulting graph partitioning problem can be typically solved using spectral methods [1, 3, 5, 6, 12],
compressed sensing and matrix completion ideas [2, 4], or other techniques [10].
A popular model and benchmark to assess the performance of clustering algorithms is the Stochastic
Block Model (SBM) [9], also referred to as the planted partition model. In the SBM, it is assumed
∗ Work performed as part of MSR-INRIA joint research centre. M.L. acknowledges the support of the
French Agence Nationale de la Recherche (ANR) under reference ANR-11-JS02-005-01 (GAP project).
† A. Proutiere's research is supported by the ERC FSA grant, and the SSF ICT-Psi project.
that the graph to partition has been generated randomly, by placing an edge between two nodes with
probability p if the nodes belong to the same cluster, and with probability q otherwise, with q < p.
The parameters p and q typically depend on the network size n, and they are often assumed to
tend to 0 as n grows large, making the graph sparse. This model has attracted a lot of attention
recently. We know for example that there is a phase transition threshold for the value of (p − q)^2/(p + q). If
we are below the threshold, no algorithm can perform better than the algorithm randomly assigning
nodes to clusters [7, 14], and if we are above the threshold, it becomes indeed possible to beat the
naive random assignment algorithm [11]. A necessary and sufficient condition on p and q for the
existence of clustering algorithms that are asymptotically accurate (meaning that the proportion of
misclassified nodes tends to 0 as n grows large) has also been identified [15]. We finally know that
spectral algorithms can reconstruct the clusters asymptotically accurately as soon as this is at all
possible, i.e., they are in a sense optimal.
We focus here on scenarios where the network size can be extremely large (online social and biological networks can, already today, easily exceed several hundreds of millions of nodes), so that
the adjacency matrix A of the corresponding graph can become difficult to manipulate and store.
We revisit network clustering problems under memory constraints. Memory limited algorithms are
relevant in the streaming data model, where observations (i.e. parts of the adjacency matrix) are
collected sequentially. We assume here that the columns of the adjacency matrix A are revealed
one by one to the algorithm. An arriving column may be stored, but the algorithm cannot request it
later on if it was not stored. The objective of this paper is to determine how the memory constraints
and the data streaming model affect the fundamental performance limits of clustering algorithms,
and how the latter should be modified to accommodate these restrictions. Again to address these
questions, we use the stochastic block model as a performance benchmark. Surprisingly, we establish that when there exists an algorithm with unlimited memory that asymptotically reconstructs the
clusters accurately, then we can devise an asymptotically accurate algorithm that requires a memory scaling linearly in the network size n, except if the graph is extremely sparse. This claim is
proved for the SBM with parameters p = a f(n)/n and q = b f(n)/n, with constants a > b, under the
assumption that log n ≪ f(n). For this model, unconstrained algorithms can accurately recover the
clusters as soon as f(n) = ω(1) [15], so that the gap between memory-limited and unconstrained
algorithms is rather narrow. We further prove that the proposed algorithm reconstructs the clusters
accurately before collecting all the columns of the matrix A, i.e., it uses less than one pass on the
data. We also propose an online streaming algorithm with sublinear memory requirement. This
algorithm outputs the partition of the graph in an online fashion after a group of columns arrives.
Specifically, if f(n) = n^α with 0 < α < 1, our algorithm requires as little as n^δ memory with
δ > max{1 − α, 2/3}. To the best of our knowledge, our algorithm is the first sublinear streaming
algorithm for community detection. Although streaming algorithms for clustering data streams have
been analyzed [8], the focus in this theoretical computer science literature is on worst case graphs
and on approximation performance which is quite different from ours.
To construct efficient streaming memory-limited clustering algorithms, we first address the problem
of clustering with partial information. More precisely, we assume that a proportion γ (that may
depend on n) of the columns of A is available, and we wish to classify the nodes corresponding to
these columns, i.e., the observed nodes. We show that a necessary and sufficient condition for the
existence of asymptotically accurate algorithms is √γ f(n) = ω(1). We also show that to classify
the observed nodes efficiently, a clustering algorithm must exploit the information provided by the
edges between observed and unobserved nodes. We propose such an algorithm, which in turn,
constitutes a critical building block in the design of memory-limited clustering schemes.
To our knowledge, this paper is the first to address the problem of community detection in the
streaming model, and with memory constraints. Note that PCA has been recently investigated in
the streaming model and with limited memory [13]. Our model is different, and to obtain efficient
clustering algorithms, we need to exploit its structure.
2
Models and Problem Formulation
We consider a network consisting of a set V of n nodes. V admits a hidden partition of K non-overlapping subsets V_1, ..., V_K, i.e., V = ∪_{k=1}^{K} V_k. The size of community or cluster V_k is α_k n
for some α_k > 0. Without loss of generality, let α_1 ≥ α_2 ≥ ... ≥ α_K. We assume that when the
network size n grows large, the number of communities K and their relative sizes are kept fixed. To
recover the hidden partition, we have access to an n × n symmetric random binary matrix A whose
entries are independent and satisfy: for all v, w ∈ V, P[A_vw = 1] = p if v and w are in the same
cluster, and P[Avw = 1] = q otherwise, with q < p. This corresponds to the celebrated Stochastic
Block Model (SBM). If Avw = 1, we say that nodes v and w are connected, or that there is an edge
between v and w. p and q typically depend on the network size n. To simplify the presentation,
we assume that there exists a function f (n) , and two constants a > b such that p = a f (n)
n and
f (n)
q = b n . This assumption on the specific scaling of p and q is not crucial, and most of the results
derived in this paper hold for more general p and q (as can be seen in the proofs). For an algorithm
π, we denote by ε^π(n) the proportion of nodes that are misclassified by this algorithm. We say that
π is asymptotically accurate if lim_{n→∞} E[ε^π(n)] = 0. Note that in our setting, if f(n) = O(1),
there is a non-vanishing fraction of isolated nodes for which no algorithm will perform better than
a random guess. In particular, no algorithm can be asymptotically accurate. Hence, we assume that
f(n) = ω(1), which constitutes a necessary condition for the graph to be asymptotically connected,
i.e., the largest connected component to have size n − o(n).
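To make the model concrete, the following is a minimal sketch (illustrative, not from the paper) that samples an adjacency matrix from this SBM; the parameter values at the bottom are hypothetical.

```python
import numpy as np

def sample_sbm(n, alphas, a, b, f_n, seed=None):
    # Sample a symmetric SBM adjacency matrix with p = a*f(n)/n, q = b*f(n)/n.
    # alphas: relative community sizes (summing to 1); a > b as in the text.
    rng = np.random.default_rng(seed)
    sizes = np.round(np.asarray(alphas) * n).astype(int)
    sizes[-1] = n - sizes[:-1].sum()                  # force the sizes to sum to n
    labels = np.repeat(np.arange(len(sizes)), sizes)
    p, q = a * f_n / n, b * f_n / n
    same = labels[:, None] == labels[None, :]
    probs = np.where(same, p, q)
    upper = np.triu(rng.random((n, n)) < probs, k=1)  # independent edges, no self-loops
    A = (upper | upper.T).astype(np.int8)             # symmetric adjacency matrix
    return A, labels

# Example: two equal communities with f(n) = log(n)^2.
n = 1000
A, labels = sample_sbm(n, [0.5, 0.5], a=4.0, b=1.0, f_n=np.log(n) ** 2, seed=0)
```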
In this paper, we address the problem of reconstructing the clusters from specific observed entries
of A, and under some constraints related to the memory available to process the data and on the way
observations are revealed and stored. More precisely, we consider the two following problems.
Problem 1. Clustering with partial information. We first investigate the problem of detecting
communities under the assumption that the matrix A is partially observable. More precisely, we
assume that a proportion γ (that typically depends on the network size n) of the columns of A are
known. The γn observed columns are selected uniformly at random among all columns of A. Given
these observations, we wish to determine the set of parameters γ and f(n) such that there exists an
asymptotically accurate clustering algorithm.
Problem 2. Clustering in the streaming model and under memory constraints. We are interested
here in scenarios where the matrix A cannot be stored entirely, and restrict our attention to algorithms
that require memory less than M bits. Ideally, we would like to devise an asymptotically accurate
clustering algorithm that requires a memory M scaling linearly or sub-linearly with the network size
n. In the streaming model, we assume that at each time t = 1, . . . , n, we observe a column Av of
A uniformly distributed over the set of columns that have not been observed before t. The column
Av may be stored at time t, but we cannot request it later on if it has not been explicitly stored. The
problem is to design a clustering algorithm ? such that in the streaming model, ? is asymptotically
accurate, and requires less than M bits of memory. We distinguish offline clustering algorithms that
must store the mapping between all nodes and their clusters (here M has to scale linearly with n),
and online algorithms that may classify the nodes when the corresponding columns are observed,
and then discard this information (here M could scale sub-linearly with n).
3
Clustering with Partial Information
In this section, we solve Problem 1. In what follows, we assume that γn = ω(1), which simply
means that the number of observed columns of A grows large when n tends to ∞. However we
are typically interested in scenarios where the proportion of observed columns γ tends to 0 as the
network size grows large. Let (A_v, v ∈ V^(g)) denote the observed columns of A. V^(g) is referred to
as the set of green nodes and we denote by n^(g) = γn the number of green nodes. V^(r) = V \ V^(g)
is referred to as the set of red nodes. Note that we have no information about the connections among
the red nodes. For any k = 1, ..., K, let V_k^(g) = V^(g) ∩ V_k, and V_k^(r) = V^(r) ∩ V_k. We say that
a clustering algorithm π classifies the green nodes asymptotically accurately, if the proportion of
misclassified green nodes, denoted by ε^π(n^(g)), tends to 0 as the network size n grows large.
3.1
Necessary Conditions for Accurate Detection
We first derive necessary conditions for the existence of asymptotically accurate clustering algorithms. As it is usual in this setting, the hardest model to estimate (from a statistical point of view)
corresponds to the case of two clusters of equal sizes (see Remark 3 below). Hence, we state our
information theoretic lower bounds, Theorems 1 and 2, for the special case where K = 2, and
α_1 = α_2. Theorem 1 states that if the proportion of observed columns γ is such that √γ f(n) tends
to 0 as n grows large, then no clustering algorithm can perform better than the naive algorithm that
assigns nodes to clusters randomly.
Theorem 1 Assume that √γ f(n) = o(1). Then under any clustering algorithm π, the expected
proportion of misclassified green nodes tends to 1/2 as n grows large, i.e., lim_{n→∞} E[ε^π(n^(g))] = 1/2.
Theorem 2 (i) shows that this condition is tight in the sense that as soon as there exists a clustering
algorithm that classifies the green nodes asymptotically accurately, then we need to have √γ f(n) =
ω(1). Although we do not observe the connections among red nodes, we might ask to classify these
nodes through their connection patterns with green nodes. Theorem 2 (ii) shows that this is possible
only if γ f(n) tends to infinity as n grows large.
Theorem 2 (i) If there exists a clustering algorithm that classifies the green nodes asymptotically
accurately, then we have: √γ f(n) = ω(1).
(ii) If there exists an asymptotically accurate clustering algorithm (i.e., classifying all nodes asymptotically accurately), then we have: γ f(n) = ω(1).
Remark 3 Theorems 1 and 2 might appear restrictive as they only deal with the case of two clusters
of equal sizes. This is not the case as we will provide in the next section an algorithm achieving the
bounds of Theorem 2 (i) and (ii) for the general case (with a finite number K of clusters of possibly
different sizes). In other words, Theorems 1 and 2 translate directly into minimax lower bounds
thanks to the results we obtain in Section 3.2.
Note that as soon as γ f(n) = ω(1) (i.e., the mean degree in the observed graph tends to infinity),
then a standard spectral method applied on the square matrix A^(g) = (A_vw, v, w ∈ V^(g)) will allow
us to classify asymptotically accurately the green nodes, i.e., taking into account only the graph
induced by the green vertices is sufficient. However if γ f(n) = o(1) then no algorithm based on
the induced graph only will be able to classify the green nodes. Theorem 2 shows that in the range
of parameters 1/f(n)^2 ≪ γ ≪ 1/f(n), it is impossible to cluster asymptotically accurately the red
nodes but the question of clustering the green nodes is left open.
3.2
Algorithms
In this section, we deal with the general case and assume that the number K of clusters (of possibly
different sizes) is known. There are two questions of interest: clustering green and red nodes. It
seems intuitive that red nodes can be classified only if we are able to first classify green nodes.
Indeed as we will see below, once the green nodes have been classified, an easy greedy rule is
optimal for the red nodes.
Classifying green nodes. Our algorithm to classify green nodes relies on spectral methods. Note that
as suggested above, in the regime 1/f(n)^2 ≪ γ ≪ 1/f(n), any efficient algorithm needs to exploit
the observed connections between green and red nodes. We construct such an algorithm below. We
should stress that our algorithm does not require to know or estimate γ or f(n).
(r)
When
is connected to at most a single green node, i.e.,
P from the observations, a red node w ? V
if v?V (g) Avw ? 1, this red node is useless in the classification of green nodes. On the contrary,
when a red node is connected to two green nodes, say v_1 and v_2 (A_{v_1 w} = 1 = A_{v_2 w}), we may infer
that the green nodes v_1 and v_2 are likely to be in the same cluster. In this case, we say that there is
an indirect edge between v_1 and v_2.
To classify the green nodes, we will use the matrix A^(g) = (A_vw)_{v,w∈V^(g)}, as well as the graph
of indirect edges. However this graph is statistically different from the graphs arising in the classical stochastic block model. Indeed, when a red node is connected to three or more green nodes,
then the presence of indirect edges between these green nodes is not statistically independent. To
circumvent this difficulty, we only consider indirect edges created through red nodes connected to
exactly two green nodes. Let V^(i) = {v : v ∈ V^(r) and Σ_{w∈V^(g)} A_wv = 2}. We denote by A' the
(n^(g) × n^(g)) matrix reporting the number of such indirect edges between pairs of green nodes: for
all v, w ∈ V^(g), A'_vw = Σ_{z∈V^(i)} A_vz A_wz.
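As an illustration (not the authors' code), V^(i) and A' can be computed directly from the observed columns:

```python
import numpy as np

def indirect_edge_matrix(A, green):
    # Build the indirect-edge matrix A' defined above. Only the columns of A
    # indexed by the green nodes are used, consistent with the observation model.
    n = A.shape[0]
    green = np.asarray(green)
    red = np.setdiff1d(np.arange(n), green)
    B = A[np.ix_(red, green)]          # observed connections red -> green
    keep = B.sum(axis=1) == 2          # V^(i): red nodes with exactly 2 green neighbors
    B2 = B[keep]
    A_prime = B2.T @ B2                # A'_vw = #{z in V^(i) : A_vz = A_wz = 1}
    np.fill_diagonal(A_prime, 0)       # discard self-pairs on the diagonal
    return A_prime
```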
Algorithm 1 Spectral method with indirect edges
Input: A ∈ {0, 1}^{|V|×|V|}, V, V^(g), K
V^(r) ← V \ V^(g)
V^(i) ← {v : v ∈ V^(r) and Σ_{w∈V^(g)} A_wv = 2}
A^(g) ← (A_vw)_{v,w∈V^(g)} and A' ← (A'_vw = Σ_{z∈V^(i)} A_vz A_wz)_{v,w∈V^(g)}
p̂^(g) ← Σ_{v,w∈V^(g)} A_vw / |V^(g)|^2 and p̂' ← Σ_{v,w∈V^(g)} A'_vw / |V^(g)|^2
(Q^(g), σ_K^(g), Γ^(g)) ← Approx(A^(g), p̂^(g), V^(g), K) and (Q', σ_K', Γ') ← Approx(A', p̂', V^(g), K)
if σ_K^(g) / √(|V^(g)| p̂^(g)) · 1{|V^(g)| p̂^(g) ≥ 50} ≥ σ_K' / √(|V^(g)| p̂') · 1{|V^(g)| p̂' ≥ 50} then
  (S_k)_{1≤k≤K} ← Detection(Q^(g), Γ^(g), K)
  Randomly place nodes in V^(g) \ Γ^(g) in partitions (S_k)_{k=1,...,K}
else
  (S_k)_{1≤k≤K} ← Detection(Q', Γ', K)
  Randomly place nodes in V^(g) \ Γ' in partitions (S_k)_{k=1,...,K}
end if
Output: (S_k)_{1≤k≤K}
Our algorithm to classify the green nodes consists of the following steps:
Step 1. Construct the indirect edge matrix A' using red nodes connected to two green nodes only.
Step 2. Perform a spectral analysis of the matrices A^(g) and A' as follows: first trim A^(g) and A'
(to remove nodes with too many connections), then extract their K largest eigenvalues and the
corresponding eigenvectors.
Step 3. Select the matrix A^(g) or A' with the largest normalized K-th largest eigenvalue.
Step 4. Construct the K clusters V_1^(g), ..., V_K^(g) based on the eigenvectors of the matrix selected in
the previous step.
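The extraction in Step 2 can be implemented with a block power (orthogonal) iteration, which only needs matrix-vector products. The sketch below is illustrative (it omits the trimming step and is not the authors' code):

```python
import numpy as np

def top_k_eigenpairs(M, K, iters=200, seed=None):
    # Orthogonal iteration for the K leading eigenpairs of a symmetric matrix M.
    # Beyond M itself, memory use is O(n*K), which is why the power method is
    # attractive in the memory-limited setting of Problem 2.
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((M.shape[0], K)))
    for _ in range(iters):
        Q, _ = np.linalg.qr(M @ Q)
    eigvals = np.diag(Q.T @ (M @ Q))
    return eigvals, Q
```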
The detailed pseudo-code of the algorithm is presented in Algorithm 1. Steps 2 and 4 of the algorithm are standard techniques used in clustering for the SBM, see e.g. [5]. The algorithms involved
in these Steps are presented in the supplementary material (see Algorithms 4, 5, 6). Note that to
extract the K largest eigenvalues and the corresponding eigenvectors of a matrix, we use the power
method, which is memory-efficient (this becomes important when addressing Problem 2). Further
observe that in Step 3, the algorithm exploits the information provided by the red nodes: it selects,
between the direct edge matrix A(g) and the indirect edge matrix A0 , the matrix whose spectral
properties provide more accurate information about the K clusters. This crucial step is enough for
the algorithm to classify the green nodes asymptotically accurately whenever this is at all possible,
as stated in the following theorem:
Theorem 4 When √γ f(n) = ω(1), Algorithm 1 classifies the green nodes asymptotically accurately.
In view of Theorem 2 (i), our algorithm is optimal. It might be surprising to choose one of the
matrices A^(g) or A' and throw away the information contained in the other one. But the following simple
calculation gives the main idea. To simplify, consider the case γ f(n) = o(1) so that we know that
the matrix A^(g) alone is not sufficient to find the clusters. In this case, it is easy to see that the
matrix A' alone allows to classify as soon as √γ f(n) = ω(1). Indeed, the probability of getting
an indirect edge between two green nodes is of the order (a^2 + b^2) f(n)^2/(2n) if the two nodes are
in the same cluster and a b f(n)^2/n if they are in different clusters. Moreover the graph of indirect
edges has the same statistics as an SBM with these probabilities of connection. Hence standard results
show that spectral methods will work as soon as γ f(n)^2 tends to infinity, i.e., the mean degree in
the observed graph of indirect edges tends to infinity. In the case where γ f(n) is too large (indeed
≫ ln(f(n))), then the graph of indirect edges becomes too sparse for A' to be useful. But in this
regime, A^(g) allows to classify the green nodes. This argument gives some intuitions about the full
proof of Theorem 4 which can be found in the Appendix.
Algorithm 2 Greedy selections
Input: A ∈ {0, 1}^{|V|×|V|}, V, V^(g), (S_k^(g))_{1≤k≤K}.
V^(r) ← V \ V^(g) and S_k ← S_k^(g), for all k
for v ∈ V^(r) do
  Find k* = argmax_k { Σ_{w∈S_k^(g)} A_vw / |S_k^(g)| } (ties broken uniformly at random)
  S_{k*} ← S_{k*} ∪ {v}
end for
Output: (S_k)_{1≤k≤K}.
An attractive feature of our Algorithm 1 is that it does not require any parameter of the model as
input except the number of clusters K. In particular, our algorithm selects automatically the best
matrix among A' and A^(g) based on their spectral properties.
Classifying red nodes. From Theorem 2 (ii), in order to classify red nodes, we need to assume that
γ f(n) = ω(1). Under this assumption, the green nodes are well classified under Algorithm 1. To
classify the red nodes accurately, we show that it is enough to greedily assign these nodes to the
clusters of green nodes identified using Algorithm 1. More precisely, a red node v is assigned to the
cluster that maximizes the number of observed edges between v and the green nodes of this cluster.
The pseudo-code of this procedure is presented in Algorithm 2.
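A direct transcription of this greedy rule (illustrative only; `A` holds the observed columns) could read:

```python
import numpy as np

def greedy_assign(A, green_clusters, red_nodes, seed=None):
    # Algorithm 2: each red node joins the cluster whose green members have the
    # highest average number of observed edges to it; ties broken at random.
    rng = np.random.default_rng(seed)
    clusters = [list(S) for S in green_clusters]   # scores use the fixed green sets only
    for v in red_nodes:
        scores = np.array([A[v, list(S)].sum() / len(S) for S in green_clusters])
        k_star = rng.choice(np.flatnonzero(scores == scores.max()))
        clusters[k_star].append(v)
    return clusters
```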
Theorem 5 When γ f(n) = ω(1), combining Algorithms 1 and 2 yields an asymptotically accurate
clustering algorithm.
Again in view of Theorem 2 (ii), our algorithm is optimal. To summarize our results about Problem
1, i.e., clustering with partial information, we have shown that:
(a) If γ ≪ 1/f(n)^2, no clustering algorithm can perform better than the naive algorithm that assigns
nodes to clusters randomly (in the case of two clusters of equal sizes).
(b) If 1/f(n)^2 ≪ γ ≪ 1/f(n), Algorithm 1 classifies the green nodes asymptotically accurately,
but no algorithm can classify the red nodes asymptotically accurately.
(c) If 1/f(n) ≪ γ, the combination of Algorithm 1 and Algorithm 2 classifies all nodes asymptotically accurately.
4
Clustering in the Streaming Model under Memory Constraints
In this section, we address Problem 2 where the clustering problem has additional constraints.
Namely, the memory available to the algorithm is limited (memory constraints) and each column
Av of A is observed only once, hence if it is not stored, this information is lost (streaming model).
In view of previous results, when the entire matrix A is available (i.e., γ = 1) and when there
is no memory constraint, we know that a necessary and sufficient condition for the existence of
asymptotically accurate clustering algorithms is that f(n) = ω(1). Here we first devise a clustering algorithm adapted to the streaming model and using a memory scaling linearly with n that is
asymptotically accurate as soon as log(n) ≪ f(n). Algorithms 1 and 2 are the building blocks of
this algorithm, and its performance analysis leverages the results of previous section. We also show
that our algorithm does not need to sequentially observe all columns of A in order to accurately
reconstruct the clusters. In other words, the algorithm uses strictly less than one pass on the data and
is asymptotically accurate.
Clearly if the algorithm is asked (as above) to output the full partition of the network, it will require
a memory scaling linearly with n, the size of the output. However, in the streaming model, we can
remove this requirement and the algorithm can output the full partition sequentially similarly to an
online algorithm (however our algorithm is not required to take an irrevocable action after the arrival
of each column but will classify nodes after a group of columns arrives). In this case, the memory
requirement can be sublinear. We present an algorithm with a memory requirement which depends
on the density of the graph. In the particular case where f(n) = n^α with 0 < α < 1, our algorithm
requires as little as n^δ bits of memory with δ > max{1 − α, 2/3} to accurately cluster the nodes.
Note that when the graph is very sparse (α → 0), then the community detection is a hard statistical
task and the algorithm needs to gather a lot of columns so that the memory requirement is quite
Algorithm 3 Streaming offline
Input: {A_1, ..., A_T}, p, V, K
Initial: N ← n × K matrix filled with zeros and B ← n h(n) / (min{np, n^{1/3}} log n)
Subsampling: A_t ← randomly erase entries of A_t with probability max{0, 1 − n^{1/3}/(np)}
for τ = 1 to ⌊T/B⌋ do
  A^(B) ← n × B matrix where the i-th column is A_{i+(τ−1)B}
  (S_k^(τ)) ← Algorithm 1(A^(B), V, {(τ−1)B + 1, ..., τB}, K)
  if τ = 1 then
    V̂_k ← S_k^(1) for all k and N_{v,k} ← Σ_{w∈S_k^(1)} A_wv for all v ∈ V and k
  else
    V̂_{s(k)} ← V̂_{s(k)} ∪ S_k^(τ) for all k, where s(k) = argmax_{1≤i≤K} Σ_{v∈V̂_i} Σ_{w∈S_k^(τ)} A_vw / (|V̂_i| |S_k^(τ)|)
    N_{v,s(k)} ← N_{v,s(k)} + Σ_{w∈S_k^(τ)} A_wv for all v ∈ V and k
  end if
end for
Greedy improvement: V̂_k ← {v : k = argmax_{1≤i≤K} N_{v,i} / |V̂_i|} for all k
Output: (V̂_k)_{1≤k≤K}
high (δ → 1). As α increases, the graph becomes denser and the statistical task easier. As a result,
our algorithm needs to look at smaller blocks of columns and the memory requirement decreases.
However, for α ≥ 1/3, although the statistical task is much easier, our algorithm hits its memory
constraint and in order to store blocks with sufficiently many columns, it needs to subsample each
column. As a result, the memory requirement of our algorithm does not decrease for α ≥ 1/3.
The main idea of our algorithms is to successively treat blocks of B consecutive arriving columns.
Each column of a block is stored in the memory. After the last column of a block arrives, we apply
Algorithm 1 to classify the corresponding nodes accurately, and we then merge the obtained clusters
with the previously identified clusters. In the online version, the algorithm can output the partition
of the block and in the offline version, it stores this result. We finally remove the stored columns,
and proceed with the next block. For the offline algorithm, after a total of T observed columns, we
apply Algorithm 2 to classify the remaining nodes so that T can be less than n. The pseudo-code
of the offline algorithm is presented in Algorithm 3. Next we discuss how to tune B and T so that
the classification is asymptotically accurate, and we compute the required memory to implement the
algorithm.
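Schematically, the offline streaming loop looks as follows; this is a sketch with hypothetical helpers `cluster_block` (standing in for Algorithm 1) and `merge_block` (the cluster-merging step), not the authors' implementation.

```python
import numpy as np

def streaming_offline(column_stream, n, K, B, cluster_block, merge_block):
    # Buffer B columns at a time, cluster each block, merge with the clusters
    # found so far, then discard the block so memory stays O(n*h(n)).
    N = np.zeros((n, K))      # edge counts from every node to each current cluster
    V_hat = None              # clusters identified so far
    buffer = []
    for col in column_stream:                  # one pass over the columns of A
        buffer.append(col)
        if len(buffer) == B:
            block = np.stack(buffer, axis=1)   # n x B matrix
            S = cluster_block(block, K)        # cluster the block's nodes
            if V_hat is None:
                V_hat = S
            else:
                V_hat = merge_block(V_hat, S, N, block)
            buffer = []                        # free the block's memory
    return V_hat, N
```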
Block size. We denote by B the size of a block. Let h(n) be such that the block size is
B = n h(n) / (f̄(n) log(n)). Let f̄(n) = min{f(n), n^{1/3}}, which represents the order of the number of positive
entries of each column after the subsampling process. According to Theorem 4 (with γ = B/n),
to accurately classify the nodes arrived in a block, we just need that (B/n) f̄(n)^2 = ω(1), which is
equivalent to h(n) = ω(log(n) / min{f(n), n^{1/3}}). Now the merging procedure that combines the clusters found analyzing the current block with the previously identified clusters uses the number of
connections between the nodes corresponding to the columns of the current block to the previous
clusters. The number of these connections must grow large as n tends to ∞ to ensure the accuracy of the merging procedure. Since the number of these connections scales as B^2 f̄(n)/n, we
need that h(n)^2 = ω(min{f(n), n^{1/3}} log(n)^2 / n). Note that this condition is satisfied as long as
h(n) = ω(log(n) / min{f(n), n^{1/3}}).
columns are not observed, we will show that we need the total number of observed columns T
n
to satisfy T = ?( min{f (n),n
1/3 } ) (which is in agreement with Theorem 5).
Required memory for the offline algorithm. To store the columns of a block, we need ?(nh(n))
bits. To store the previously identified clusters, we need at most log2 (K)n bits, and we can store
the number of connections between the nodes corresponding to the columns of the current block to
the previous clusters using a memory linearly scaling with n. Finally, to execute Algorithm 1, the
Algorithm 4 Streaming online
Input: {A_1, ..., A_n}, p, V, K
Initial: B ← n h(n) / (min{np, n^{1/3}} log n) and τ* ← ⌊T/B⌋
Subsampling: A_t ← randomly erase entries of A_t with probability max{0, 1 − n^{1/3}/(np)}
for τ = 1 to τ* do
  A^(B) ← n × B matrix where the i-th column is A_{i+(τ−1)B}
  (S_k)_{1≤k≤K} ← Algorithm 1(A^(B), V, {(τ−1)B + 1, ..., τB}, K)
  if τ = 1 then
    V̂_k ← S_k for all k
    Output at B: (S_k)_{1≤k≤K}
  else
    s(k) ← argmax_{1≤i≤K} Σ_{v∈V̂_i} Σ_{w∈S_k} A_vw / (|V̂_i| |S_k|) for all k
    Output at τB: (S_{s(k)})_{1≤k≤K}
  end if
end for
power method used to perform the SVD (see Algorithm 5) requires the same amount of bits as
that used to store a block of size B. In summary, the required memory is M = Θ(n h(n) + n).
Theorem 6 Assume that h(n) = ω(log(n) / min{f(n), n^{1/3}}) and T = ω(n / min{f(n), n^{1/3}}). Then with
M = Θ(n h(n) + n) bits, Algorithm 3, with block size B = n h(n) / (min{f(n), n^{1/3}} log(n)) and acquiring
the T first columns of A, outputs clusters V̂_1, ..., V̂_K such that with high probability, there exists a
permutation σ of {1, ..., K} such that: (1/n) |∪_{1≤k≤K} V̂_k \ V_{σ(k)}| = O(exp(−cT min{f(n), n^{1/3}} / n))
with a constant c > 0.
Under the conditions of the above theorem, Algorithm 3 is asymptotically accurate. Now if f(n) =
ω(log(n)), we can choose h(n) = 1. Then Algorithm 3 classifies nodes accurately and uses a
memory linearly scaling with n. Note that increasing the number of observed columns T just reduces
the proportion of misclassified nodes. For example, if f(n) = log(n)^2, with high probability, the
proportion of misclassified nodes decays faster than 1/n if we acquire only T = n/log(n) columns,
whereas it decays faster than exp(−log(n)^2) if all columns are observed.
Our online algorithm is a slight variation of the offline algorithm. Indeed, it deals with the first block
exactly in the same manner and keeps in memory the partition of this first block. It then handles the
successive blocks as the first block and merges the partition of these blocks with those of the first
block as done in the offline algorithm for the second block. Once this is done, the online algorithm
just throws all the information away except the partition of the first block.
Theorem 7 Assume that h(n) = ω(log(n) / min{f(n), n^{1/3}}). Then Algorithm 4 with block size B =
n h(n) / (min{f(n), n^{1/3}} log n) is asymptotically accurate (i.e., after one pass, the fraction of misclassified nodes
vanishes) and requires Θ(n h(n)) bits of memory.
5
Conclusion
We introduced the problem of community detection with partial information, where only an induced
subgraph corresponding to a fraction of the nodes is observed. In this setting, we gave a necessary condition for accurate reconstruction and developed a new spectral algorithm which extracts
the clusters whenever this is at all possible. Building on this result, we considered the streaming,
memory limited problem of community detection and developed algorithms able to asymptotically
reconstruct the clusters with a memory requirement which is linear in the size of the network for
the offline version of the algorithm and which is sublinear for its online version. To the best of
our knowledge, these algorithms are the first community detection algorithms in the data stream
model. The memory requirement of these algorithms is non-increasing in the density of the graph
and determining the optimal memory requirement is an interesting open problem.
References
[1] R. B. Boppana. Eigenvalues and graph bisection: An average-case analysis. In Foundations of Computer Science, 1987, 28th Annual Symposium on, pages 280-285. IEEE, 1987.
[2] S. Chatterjee. Matrix estimation by universal singular value thresholding. arXiv preprint arXiv:1212.1247, 2012.
[3] K. Chaudhuri, F. C. Graham, and A. Tsiatas. Spectral clustering of graphs with general degrees in the extended planted partition model. Journal of Machine Learning Research - Proceedings Track, 23:35.1-35.23, 2012.
[4] Y. Chen, S. Sanghavi, and H. Xu. Clustering sparse graphs. In Advances in Neural Information Processing Systems 25, pages 2213-2221, 2012.
[5] A. Coja-Oghlan. Graph partitioning via adaptive spectral techniques. Combinatorics, Probability & Computing, 19(2):227-284, 2010.
[6] A. Dasgupta, J. Hopcroft, R. Kannan, and P. Mitra. Spectral clustering by recursive partitioning. In Algorithms - ESA 2006, pages 256-267. Springer, 2006.
[7] A. Decelle, F. Krzakala, C. Moore, and L. Zdeborová. Inference and phase transitions in the detection of modules in sparse networks. Phys. Rev. Lett., 107, Aug 2011.
[8] S. Guha, N. Mishra, R. Motwani, and L. O'Callaghan. Clustering data streams. In 41st Annual Symposium on Foundations of Computer Science (Redondo Beach, CA, 2000), pages 359-366. IEEE Comput. Soc. Press, Los Alamitos, CA, 2000.
[9] P. Holland, K. Laskey, and S. Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109-137, 1983.
[10] M. Jerrum and G. B. Sorkin. The Metropolis algorithm for graph bisection. Discrete Applied Mathematics, 82(1-3):155-175, 1998.
[11] L. Massoulié. Community detection thresholds and the weak Ramanujan property. CoRR, abs/1311.3085, 2013.
[12] F. McSherry. Spectral partitioning of random graphs. In Foundations of Computer Science, 2001. Proceedings. 42nd IEEE Symposium on, pages 529-537. IEEE, 2001.
[13] I. Mitliagkas, C. Caramanis, and P. Jain. Memory limited, streaming PCA. In NIPS, 2013.
[14] E. Mossel, J. Neeman, and A. Sly. Stochastic block models and reconstruction. arXiv preprint arXiv:1202.1499, 2012.
[15] S. Yun and A. Proutiere. Community detection via random and adaptive sampling. In COLT, 2014.
5,064 | 5,585 | Computing Nash Equilibria in Generalized
Interdependent Security Games
Hau Chan
Luis E. Ortiz
Department of Computer Science, Stony Brook University
{hauchan,leortiz}@cs.stonybrook.edu
Abstract
We study the computational complexity of computing Nash equilibria in generalized interdependent-security (IDS) games. Like traditional IDS games, originally introduced by economists and risk-assessment experts Heal and Kunreuther
about a decade ago, generalized IDS games model agents' voluntary investment
decisions when facing potential direct risk and transfer-risk exposure from other
agents. A distinct feature of generalized IDS games, however, is that full investment can reduce transfer risk. As a result, depending on the transfer-risk reduction level, generalized IDS games may exhibit strategic complementarity (SC)
or strategic substitutability (SS). We consider three variants of generalized IDS
games in which players exhibit only SC, only SS, and both SC+SS. We show that
determining whether there is a pure-strategy Nash equilibrium (PSNE) in SC+SStype games is NP-complete, while computing a single PSNE in SC-type games
takes worst-case polynomial time. As for the problem of computing all mixedstrategy Nash equilibria (MSNE) efficiently, we produce a partial characterization.
Whenever each agent in the game is indiscriminate in terms of the transfer-risk exposure to the other agents, a case that Kearns and Ortiz originally studied in the
context of traditional IDS games in their NIPS 2003 paper, we can compute all
MSNE that satisfy some ordering constraints in polynomial time in all three game
variants. Yet, there is a computational barrier in the general (transfer) case: we
show that the computational problem is as hard as the Pure-Nash-Extension problem, also originally introduced by Kearns and Ortiz, and that it is NP-complete
for all three variants. Finally, we experimentally examine and discuss the practical impact that the additional protection from transfer risk allowed in generalized
IDS games has on MSNE by solving several randomly-generated instances of
SC+SS-type games with graph structures taken from several real-world datasets.
1
Introduction
Interdependent Security (IDS) games [1] model the interaction among multiple agents where each
agent chooses whether to invest in some form of security to prevent a potential loss based on both
direct and indirect (transfer) risks. In this context, an agent's direct risk is that which is not the result
of the other agents? decisions, while indirect (transfer) risk is that which does.
Let us be more concrete and consider an application of IDS games. Imagine that you are an owner
of an apartment. One day, there was a fire alarm in the apartment complex. Luckily, it was nothing
major: nobody got hurt. As a result, you realize that your apartment can be easily burnt down
because you do not have any fire extinguishing mechanism such as a sprinkler system. However, as
you wonder about the cost and the effectiveness of the fire extinguishing mechanism, you notice that
the fire extinguishing mechanism can only protect your apartment if a small fire originates in your
apartment. If a fire originates in the floor below, or above, or even the apartment adjacent to yours,
then you are out of luck: by the time the fire gets to your apartment, the fire would be fierce enough
[Figure 1: three renderings of the 34-node Zachary Karate Club network, one per panel, for α drawn from N(0.4, 0.2), N(0.6, 0.2), and N(0.8, 0.2).]
Figure 1: α-IDS Game of Zachary Karate Club at a Nash Equilibrium. Legend: Square = SC player,
Circle = SS player, Colored = Invest, and Non-Colored = No Invest
Table 1: Complexity of α-IDS Games

Game type                 | One PSNE             | All MSNE                                            | Pure-Nash Extension
SC (n SC players)         | Always Exists, O(n^2)| Uniform Transfers (UT): O(n^4)                      | NP-Complete
SS (n SS players)         | Maybe Not Exist      | UT wrt Ordering 1: O(n^4)                           | NP-Complete
SC + SS (n_sc + n_ss = n) | NP-complete          | UT wrt Ordering 1: O(n_sc^4 n_ss^3 + n_sc^3 n_ss^4) | NP-Complete
already. You realize that if other apartment owners invest in the fire extinguishing mechanism, the
likelihood of their fires reaching you decreases drastically. As a result, you debate whether or not
to invest in the fire extinguishing mechanism given whether or not the other owners invest in the
fire extinguishing mechanism. Indeed, making things more interesting, you are not the only one
going through this decision process; assuming that everybody is concerned about their safety in the
apartment complex, everybody in the apartment complex wants to decide on whether or not to invest
in the fire extinguishing mechanism given the individual decision of other owners.
To be more specific, in the IDS games, the agents are the apartment owners, each apartment owner
needs to make a decision as to whether or not to invest in the fire extinguishing mechanism based on
cost, potential loss, as well as the direct and indirect (transfer) risks. The direct risk here is the chance
that an agent will start a fire (e.g., forgetting to turn off gas burners or overloading electrical outlets).
The transfer risk here is the chance that a fire from somebody else's (unprotected) apartment will
spread to other apartments. Moreover, transfer risk comes from the direct neighbors and cannot be
re-transferred. For example, if a fire from your neighbors is transferred to you, then, in this model,
this fire cannot be re-transferred to your neighbors. Of course, IDS games can be used to model
other practical real-world situations such as airline security [2], vaccination [3], and cargo shipment
[4]. See Laszka et al. [5] for a survey on IDS games.
Note that in the apartment complex example, the fire extinguishing mechanism does not protect an
agent from fires that originate from other apartments. In this work, we consider a more general,
and possibly also more realistic, framework of IDS games where investment can partially protect
the indirect risk (i.e., investment in the fire extinguishing mechanism can partially extinguish some
fires that originate from others). To distinguish the naming scheme, we will call these generalized
IDS games as α-IDS games, where α is a vector of probabilities, one for each agent, specifying the
probability that the investment fails to protect against the transfer risk. In other words, agent i's
investment can reduce indirect risk with probability (1 − α_i). Given an α, the players can be partitioned
into two types: the SC type and the SS type. The SC players exhibit strategic complementarity:
they invest if sufficiently many people invest. On the other hand, the SS players exhibit strategic
substitutability: they do not invest if too many people invest.
As a preview of how α can affect the number of SC and SS players and Nash equilibria, which is
the solution concept used here (formally defined in the next section), Figure 1 presents the result of
our simulation of an instance of SC+SS α-IDS games using the Zachary Karate Club network [6].
The nodes are the players, and the edge between nodes u and v represents the potential transfers
from u to v and v to u. As we increase α's value, the number of SC players increases while the
number of SS players decreases. Interestingly, almost all of the SC players invest, and all of the SS
players are "free riding" as they do not invest at the NE.
Our goal here is to understand the behavior of the players in α-IDS games. Achieving this goal will
depend on the type of players, as characterized by α, and our ability to efficiently compute NE,
among other things. While Heal and Kunreuther [1] and Chan et al. [7] previously proposed similar
models, we are unaware of any work on computing NE in α-IDS games and analyzing agents'
equilibrium behavior. The closest work to ours is Kearns and Ortiz [8], where they consider the
standard/traditional IDS model in which one cannot protect against the indirect risk (i.e., α ≡ 1).
In particular, we study the computational aspects of computing NE of α-IDS games in cases of
all game players being (1) SC, (2) SS, and (3) both SC and SS. Our contributions, summarized in
Table 1, follow.
• We show that determining whether there is a PSNE in (3) is NP-complete. However, there
is a polynomial-time algorithm to compute a PSNE for (1). We identify some instances for
(2) where PSNE does and does not exist.
• We study the instances of α-IDS games where we can compute all NE. We show that
if the transfer probabilities are uniform (independent of the destination), then there is a
polynomial-time algorithm to compute all NE in case (1). Cases (2) and (3) may still take
exponential time to compute all NE. However, based on some ordering constraints, we are
able to efficiently compute all NE that satisfy the ordering constraints.
• We consider the general-transfer case and show that the pure-Nash-extension problem [8],
which, roughly, is the problem of determining whether there is a PSNE consistent with
some partial assignments of actions to some players, is NP-complete for cases (1), (2), and
(3). This implies that computing all NE is likely as hard.
• We perform experiments on several randomly-generated instances of SC+SS α-IDS games
using various real-world graph structures to show α's effect on the number of SC and SS
players and on the NE of the games.
2
α-IDS games: preliminaries, model definition, and solution concepts
In this section, we borrow definitions and notations of (graphical) IDS games from Kearns et al.
[9], Kearns and Ortiz [8], and Chan et al. [7]. In an α-IDS game, we have an underlying (directed)
graph G = (V, E) where V = {1, 2, ..., n} represents the n players and E = {(i, j)|qij > 0} such
that qij is the transfer probability that player i will transfer the bad event to player j. As such, we
define Pa(i) and Ch(i) as the set of parents and children of player i in G, respectively.
In an α-IDS game, each player i has to make a decision as to whether or not to invest in protection.
Therefore, the action or pure-strategy of player i is binary, denoted here by a_i, with a_i = 1 if i
decides to invest and a_i = 0 otherwise. We denote the joint-action or joint-pure-strategy of all
players by the vector a ≡ (a_1, ..., a_n). For convenience, we denote by a_{−i} all components of a
except that for player i. Similarly, given S ⊆ V, we denote by a_S and a_{−S} all components of a
corresponding to players in S and V − S, respectively. We also use the notation a ≡ (a_i, a_{−i}) ≡
(a_S, a_{−S}) when clear from context.
In addition, in an α-IDS game, there is a cost of investment C_i and loss L_i associated with the bad
event occurring, either through direct or indirect (transferred) contamination. For convenience, we
denote the cost-to-loss ratio of player i by R_i ≡ C_i/L_i. We can parametrize the direct risk as p_i,
the probability that player i will experience the bad event from direct contamination.
Specific to α-IDS games, the parameter α_i denotes the probability of ineffectiveness of full investment in security (i.e., a_i = 1) against player i's transfer risk. Said differently, the parameter α_i models the degree to which investment in security can potentially reduce player i's transfer risk. Player
i's transfer-risk function is r_i(a_Pa(i)) ≡ 1 − s_i(a_Pa(i)), where s_i(a_Pa(i)) ≡ Π_{j∈Pa(i)} [1 − (1 − a_j) q_ji],
which is a function of the joint-actions of Pa(i) because of the potential overall transfer probability (and thus
risk) from Pa(i) to i given Pa(i)'s actions. One can think of the function s_i as the transfer-safety
function of player i. The expression of s_i makes explicit the implicit assumption that the transfers
of the bad event are independent. Putting the above together, the cost function of player i is
M_i(a_i, a_Pa(i)) ≡ a_i [C_i + α_i r_i(a_Pa(i)) L_i] + (1 − a_i) [p_i + (1 − p_i) r_i(a_Pa(i))] L_i.
Note that the safety function describes the situation where a player j can only be "risky" to player
i if and only if j does not invest in protection. We assume, without loss of generality (wlog), that
C_i < L_i, or equivalently, that R_i < 1; otherwise, not investing would be a dominant strategy.
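As a small illustration (not from the paper), the transfer-safety and cost functions translate directly into code:

```python
import numpy as np

def safety(a_parents, q_parents):
    # s_i(a_Pa(i)) = prod_{j in Pa(i)} [1 - (1 - a_j) * q_ji]
    a = np.asarray(a_parents, dtype=float)
    q = np.asarray(q_parents, dtype=float)
    return float(np.prod(1.0 - (1.0 - a) * q))

def cost(a_i, a_parents, q_parents, C_i, L_i, p_i, alpha_i):
    # M_i(a_i, a_Pa(i)) as defined above
    r_i = 1.0 - safety(a_parents, q_parents)   # transfer risk
    if a_i == 1:
        return C_i + alpha_i * r_i * L_i
    return (p_i + (1.0 - p_i) * r_i) * L_i
```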
While a syntactically minor addition to the traditional IDS model, the parameter α introduces a
major semantic difference and an additional complexity over the traditional model. The semantic
difference is perhaps clearer from examining the best response of the players: player i invests if
C_i + α_i r_i(a_Pa(i)) L_i < [p_i + (1 − p_i) r_i(a_Pa(i))] L_i ⟺ R_i − p_i < (1 − p_i − α_i) r_i(a_Pa(i)).
The expression (1 − p_i − α_i) is positive when α_i < 1 − p_i and negative when α_i > 1 − p_i. The best
response condition flips when the expression is negative. (When α_i = 1 − p_i, player i's investment
decision simplifies because the player's internal risk fully determines the optimal choice.)
In fact, the parameter α induces a partition of the set of players based on whether the corresponding
α_i value is higher or lower than 1 − p_i. We will call the set of players with α_i > 1 − p_i the
set of strategic complementarity (SC) players. SC players exhibit as optimal behavior that their
preference for investing increases as more players invest: they are "followers." The set of players
with α_i < 1 − p_i is the set of strategic substitutability (SS) players. In this case, SS players'
preference for investing decreases as more players invest: they are "free riders."
For all i ∈ SC, let Δ_i^sc ≡ 1 − (R_i − p_i)/(1 − p_i − α_i); similarly for Δ_i^ss, for i ∈ SS. We can define the best-response correspondence for player i ∈ SC as

BR_i^sc(a_Pa(i)) ≡ { 0,      if Δ_i^sc > s_i(a_Pa(i)),
                     1,      if Δ_i^sc < s_i(a_Pa(i)),
                     [0, 1], if Δ_i^sc = s_i(a_Pa(i)) }.

The best-response correspondence BR_i^ss for player i ∈ SS is similar, except that we replace Δ_i^sc by
Δ_i^ss and "reverse" the strict inequalities above. We use the best-response correspondence to define
NE (i.e., both PSNE and MSNE). We introduce randomized strategies: in a joint mixed-strategy x ∈
[0, 1]^n, each component x_i corresponds to player i's probability of investing (i.e., Pr(a_i = 1) = x_i).
Player i's decision depends on expected cost, and, with abuse of notation, we denote it by M_i(x).
Definition A joint-action a ∈ {0, 1}^n is a pure-strategy Nash equilibrium (PSNE) of an IDS game
if a_i ∈ BR_i(a_Pa(i)) for each player i. Replacing a with a joint mixed-strategy x ∈ [0, 1]^n in the
equilibrium condition and the respective functions it depends on leads to the condition for x being a
mixed-strategy Nash equilibrium (MSNE). Note that the set of PSNE ⊆ MSNE. Hence, we use NE
and MSNE interchangeably.
For general (and graphical) games, determining the existence of PSNE is NP-complete [10]. MSNE
always exist [11], but computing a MSNE is PPAD-complete [12?14].
3 Computational results for α-IDS games

In this section, we present and discuss the results of our computational study of α-IDS games. We begin by considering the problem of computing PSNE, then move to the more general problem of computing MSNE.

3.1 Finding a PSNE in α-IDS games

In this subsection, we look at the complexity of determining whether a PSNE exists in α-IDS games, and of finding one if it does. Our first result follows.

[Figure 2: 3-SAT-induced α-IDS game graph]
Theorem 1 Determining whether there is a PSNE in n-player SC+SS α-IDS games is NP-complete.
Proof (Sketch) We reduce an instance of a 3-SAT variant to our problem. Each clause
of the 3-SAT variant contains either only negated variables or only un-negated variables [15]. We
have an SC player for each clause and two SS players for each variable. The clause players invest
if there exists a neighbor (one of its literals) that invests. For each variable vi, we introduce two players
vi and ¬vi with preference for mutually opposite actions. They invest if there exists a neighbor
(its clause or its negation) that does not invest. Figure 2 depicts the basic structure of the game. Nodes
at the bottom row of the graph correspond to a variable, where the un-negated-variable clauses
and negated-variable clauses are connected to their corresponding un-negated variable and negated
variable with bidirectional transfer probability q.
Setting the parameters of the clause players. Wlog, we can set the parameters to be identical
for all clause players i: find Ri > 0 and α_i > 1 − pi such that (1 − q)^2 > Δ^sc_i > (1 − q)^3.
Setting the parameters of the variable players. Wlog, we can set the parameters to be identical
for all variable players i: find Ri > 0 and α_i < 1 − pi such that 1 > Δ^ss_i > (1 − q).
We now show that there exists a satisfying assignment if and only if there exists a PSNE.
Satisfying assignment ⟹ PSNE. Suppose that we have a satisfying assignment of the 3-SAT
variant. This implies that every clause player plays invest. Moreover, for each clause player,
there must be some corresponding variable players that play invest. Given a satisfying assignment,
negated and un-negated variable players cannot play the same action. One of them must play
invest and the other must play no-invest. The investing variable is best-responding because
at least one of its neighbors (namely its negation) plays no-invest. The non-investing variable is
best-responding because all of its neighbors are investing. Hence, all the players are best-responding
to each other, and thus we have a PSNE.
PSNE ⟹ satisfying assignment. (a) First we show that at every PSNE, all of the clause
players must play invest. For the sake of contradiction, suppose that there is a PSNE in which
some clause players play no-invest. For the no-invest clause players, all of their variables
must play no-invest at the PSNE. However, by the best-response conditions of the variable players, if
there exists a clause player that plays no-invest, then at least one of its variable players must play
invest, which contradicts the fact that we have a PSNE. (b) We now show that at every PSNE, the un-negated variable player and the corresponding negated variable player must play different actions.
Suppose that there is a PSNE in which both of the players play the same action: (i) no-invest or (ii)
invest. In the case of no-invest (i), by their best-response conditions (given that at every PSNE all
clause players play invest), neither of the variables is best-responding, so one of them must switch
from playing no-invest to invest. In the case of invest (ii), again by the best-response condition,
one of them must play no-invest. (c) Finally, we need to show that at every PSNE there must be
a variable player that makes every clause player play invest. To see this, note that, by the clause's
best-response condition, there must be at least one variable player playing invest. If there is a clause
that plays invest when none of its variable players plays invest, then the clause player would not be
best-responding. □
3.1.1 SC α-IDS games

What is the complexity of determining whether a PSNE exists in SC α-IDS games (i.e., α_i > 1 − pi for all i)?
It turns out that SC players have the characteristic of following the actions of other agents. If there
are enough SC players who invest, then some remaining SC player(s) will follow suit. This is
evident from the safety function and the best-response condition. Consider the dynamics in which
everybody starts off with no-invest. If some players are not best-responding, then their
best (dominant) strategy is to invest. We can safely change the actions of those players to invest.
Then, for the remaining players, we continue to check whether any of them is not best-responding.
If not, we have a PSNE; otherwise, we change the strategy of the non-best-responding players to
invest. The process continues until we have reached a PSNE.
Theorem 2 There is an O(n^2)-time algorithm to compute a PSNE of any n-player SC α-IDS game.

Note that once a player plays invest, other players will either stay at no-invest or move to invest. The
non-investing players do not affect the strategy of the players that have already decided to invest.
Players that have decided to invest will continue to invest because only more players will invest.
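A minimal Python sketch of the monotone best-response dynamics behind Theorem 2, under the assumption that the SC thresholds Δ^sc_i have been precomputed (all names are ours):

def psne_sc(players, parents, q, delta_sc):
    # delta_sc[i] is player i's SC threshold; q[i][j] = q_ji. Start at
    # all-no-invest and monotonically switch players to invest; each player
    # switches at most once, for O(n^2) best-response checks in total.
    a = {i: 0 for i in players}
    changed = True
    while changed:
        changed = False
        for i in players:
            if a[i] == 1:
                continue
            s_i = 1.0
            for j in parents[i]:
                s_i *= 1.0 - (1.0 - a[j]) * q[i][j]
            if s_i > delta_sc[i]:      # SC best response: invest
                a[i] = 1
                changed = True
    return a  # a PSNE of the SC alpha-IDS game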
3.1.2 SS α-IDS games

Unlike the SC case, an SS α-IDS game may not have a PSNE when n > 2.

Proposition 1 Suppose we have an n-player SS α-IDS game with 1 > Δ^ss_i > (1 − q_ji), where j is
the parent of i. (a) If the game graph is a directed tree, then the game has a PSNE. (b) If the game
graph is a directed cycle, then the game has a PSNE if and only if n is even.
Proof (a) The root of the tree will always play no-invest while the immediate children of the root
will always play invest at a PSNE. Moreover, assigning the action invest or no-invest to any node
that has an odd or even (undirected) distance to the root, respectively, completes the PSNE.
(b) For even n, any assignment in which an independent set of n/2 players plays invest forms a PSNE.
For odd n, suppose there is a PSNE in which I players invest and N players do not invest, such
that I + N = n. The investing players must have I parents that do not invest, and the non-investing
players must have N parents that play invest. Moreover, I ≤ N and N ≤ I imply that I = N.
Hence, an odd-n cycle cannot have a PSNE. □
We leave open the computational complexity of determining whether SS α-IDS games have a PSNE.
3.2 Computing all NE in α-IDS games

We now study whether we can compute all MSNE of α-IDS games. We prove that we can compute
all MSNE in polynomial time in the case of uniform-transfer SC α-IDS games, and a subset of all
MSNE in the case of SS and SC+SS games. A uniform-transfer α-IDS game is an α-IDS game
where the transfer probability from a particular player to any other player is the same regardless of
the destination. More formally, q_ij = δ_i for all players i and j (i ≠ j). Hence, we have a complete
graph with bidirectional transfer probabilities.
We can express the overall safety function, given a joint mixed strategy x ∈ [0, 1]^n, as s(x) = ∏_{i=1}^n [1 − (1 − xi) δ_i]. Now, we can determine the best response
of an SC or SS player exactly based solely on the value of Δ^sc_i [1 − (1 − ai) δ_i], for SC, relative to
s(x); similarly for SS.
We assume, wlog, that for all players i, Ri > 0, α_i > 0, pi > 0, and δ_i > 0. Given a joint mixed strategy x, we partition the players by type with respect to x: let I ≡ I(x) ≡ {i | xi = 1}, N ≡ N(x) ≡
{i | xi = 0}, and P ≡ P(x) ≡ {i | 0 < xi < 1} be the sets of players that, with respect to x, fully invest in
protection, do not invest in protection, and partially invest in protection, respectively.
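For concreteness, a small Python sketch (names are ours) of the overall safety function and the type partition used below:

def overall_safety(x, delta):
    # s(x) = prod_i [1 - (1 - x_i) * delta_i] under uniform transfer.
    s = 1.0
    for x_i, d_i in zip(x, delta):
        s *= 1.0 - (1.0 - x_i) * d_i
    return s

def partition_by_type(x, eps=1e-12):
    # I = fully investing, N = not investing, P = partially investing.
    I = [i for i, x_i in enumerate(x) if x_i >= 1.0 - eps]
    N = [i for i, x_i in enumerate(x) if x_i <= eps]
    P = [i for i in range(len(x)) if i not in I and i not in N]
    return I, N, P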
3.2.1 Uniform-transfer SC α-IDS games

The results of this section are non-trivial extensions of those of Kearns and Ortiz [8]. In particular, we can construct a polynomial-time algorithm to compute all MSNE of a uniform-transfer SC
α-IDS game, along the same lines as Kearns and Ortiz [8], by extending their Ordering Lemma
(their Lemma 3) and Partial-Ordering Lemma (their Lemma 4).1 Appendixes A.1 and B of the supplementary material contain our versions of the lemmas and detailed pseudocode for the algorithm,
respectively. A running-time analysis similar to that for traditional uniform-transfer IDS games done
by Kearns and Ortiz [8] yields our next algorithmic result.

Theorem 3 There exists an O(n^4)-time algorithm to compute all MSNE of a uniform-transfer
n-player SC α-IDS game.
The significance of the theorem lies in its simplicity: it is encouraging that we can extend almost the same computational results, and structural implications on the solution space, to a considerably more general, and
perhaps even more realistic, model via what in hindsight were simple adaptations.

3.2.2 Uniform-transfer SS α-IDS games

Unlike the SC case, the ordering we get for the SS case does not yield an analogous lemma. Nevertheless, it turns out that we can still determine the mixed strategies of the partially-investing players
in P relative to a partition. The result is a Partial-Investment Lemma analogous to that
of Kearns and Ortiz [8] for traditional IDS games.2 For completeness, Appendix A.2 of the supplementary material formally states the lemma. We remind the reader that the significance and strength
of this non-trivial extension lies in its simplicity, particularly when we note that the nature of
the SS case is the complete opposite of the version of IDS games studied by Kearns and Ortiz [8].

1 Take their Ri/pi's and replace them with our corresponding Δ^sc_i's.
2 Take their Lemma 4 and replace Ri/pi there by Δ^ss_i here, and replace the expression for V there by
V ≡ [max_{k∈N} (1 − δ_k) Δ^ss_k, min_{i∈I} Δ^ss_i].

Indeed, a naive way to compute all NE is to consider all of the possible partitions of the players
into the investment, partial-investment, and no-investment sets, and to apply the Partial-Investment
Lemma alluded to in the previous paragraph to compute the mixed strategies. However, this would
take O(n_ss 3^{n_ss}) worst-case time to compute any equilibrium. So, how can we perform
this computation efficiently? As mentioned earlier, SS players are less likely to invest when a large
number of players is investing, and they exhibit behavior "opposite" to that of the SC players (i.e., the best response
is flipped). Hence, imposing a "flipped" ordering (Ordering 1), opposite to that of the SC case, seems
natural. If we assume such a specific ordering of the players at equilibrium, then we can compute all
NE consistent with that specific ordering efficiently, as we discussed earlier for the SC case. Mirroring
the SC α-IDS game, we settle for computing all NE that satisfy the following ordering.
Ordering 1 For all i ∈ I^ss, j ∈ P^ss, and k ∈ N^ss,

    (1 − δ_k) Δ^ss_k ≤ (1 − δ_j) Δ^ss_j < Δ^ss_j,
    (1 − δ_j) Δ^ss_j ≤ Δ^ss_j ≤ Δ^ss_i,
    (1 − δ_k) Δ^ss_k ≤ (1 − δ_i) Δ^ss_i ≤ Δ^ss_i.
The first and last sets of inequalities (ignoring the middle one) follow from the consistency constraint
imposed by the overall safety function. The middle set of inequalities restricts and reduces the number
of possible NE configurations we need to check. It is possible that (1 − δ_k) Δ^ss_k > (1 − δ_j) Δ^ss_j or
(1 − δ_k) Δ^ss_k > (1 − δ_i) Δ^ss_i at an NE, but we do not consider those types of NE. Our hardness results
presented in the upcoming Section 3.2.4 suggest that, in general, computing all MSNE without any
of the constraints above is likely hard. (See Algorithm 2 of the supplementary material.)
Theorem 4 There exists an O(n^4)-time algorithm to compute all MSNE consistent with Ordering 1
of a uniform-transfer n-player SS α-IDS game.
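The next sketch is only a rough illustration (not the paper's Algorithm 2 of the supplementary material) of why the ordering helps: once the players are sorted by their Δ-based keys, only contiguous (N, P, I) splits need to be examined, so the search drops from 3^n partitions to O(n^2) candidates, each of which would then be verified with the Partial-Investment Lemma:

def candidate_partitions(players, sort_key):
    # Under an ordering like Ordering 1 it suffices to sort the players
    # once and enumerate "contiguous" (N, P, I) splits instead of all
    # 3^n partitions.
    order = sorted(players, key=sort_key)
    n = len(order)
    for a in range(n + 1):            # boundary between N and P
        for b in range(a, n + 1):     # boundary between P and I
            yield order[:a], order[a:b], order[b:]   # (N, P, I)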
3.2.3 Uniform-transfer SC+SS α-IDS games

For the uniform variant of the SC+SS α-IDS games, we could partition the players into SC and
SS classes and modify the respective algorithms to compute all NE. Unfortunately, this is computationally
infeasible because we can only compute all NE in polynomial time in the SC case. Again, if we settle
for computing all NE consistent with Ordering 1, then we can devise an efficient algorithm. From
now on, the fact that we are only considering NE consistent with Ordering 1 is implicit, unless noted
otherwise. The idea is to partition the players into a class of SC and a class of SS players. From
the characterizations stated earlier, it is clear that there is only a polynomial number of possible
partitions we need to check for each class of players. Since the ordering results are based on the same
overall safety function, the orderings of the SC and SS players do not affect each other. Hence, wlog,
starting with the algorithm described earlier as a base routine for the SC players, we do the following.
For each possible equilibrium configuration of the SC players, we first run the algorithm described
in the previous section for the SS players, and then test whether the resulting joint mixed strategy is an
NE. This guarantees that we check every possible equilibrium combination. A running-time analysis
yields our next result.
Theorem 5 There exists an O(n^4_sc n^3_ss + n^3_sc n^4_ss)-time algorithm to compute all NE consistent with
Ordering 1 of a uniform-transfer n-player SC+SS α-IDS game, where n = n_sc + n_ss.
3.2.4 Computing all MSNE of arbitrary α-IDS games is intractable, in general

In this section, we prove that determining whether there exists a PSNE consistent with a partial
assignment of the actions to some players is NP-complete, even if the transfer probability takes only
two values: δ_i ∈ {0, q} for q ∈ (0, 1).
We consider the pure-Nash-extension problem [8] for binary-action n-player games, which takes as
input a description of the game and a partial assignment a ∈ {0, 1, *}^n. We want to know whether
there is a complete assignment b ∈ {0, 1}^n consistent with a. Indeed, computing all NE is at least
as difficult as the pure-Nash-extension problem. Appendix C presents the proofs of our next results.
Table 2: Level of investment of SC+SS α-IDS games at Nash equilibrium

High Ci/Li          | α_i ∼ N(0.4, 0.2)           | α_i ∼ N(0.8, 0.2)           | α_i ∼ U[0, 1]
Datasets            | %SS    %SC Inv.  %SS Inv.   | %SS    %SC Inv.  %SS Inv.   | %SS    %SC Inv.  %SS Inv.
Karate Club         | 76.18  100.00    21.37      | 12.35  100.00    0.00       | 56.18  100.00    14.88
Les Miserables      | 75.45  100.00    17.93      | 11.82  99.85     0.67       | 55.06  99.40     14.84
College Football    | 75.65  100.00    15.47      | 11.57  100.00    0.00       | 55.39  100.00    13.46
Power Grid          | 75.47  97.76*    19.38*     | 12.82  98.79*    2.13*      | 55.01  97.31**   15.90**
Wiki Vote           | 75.55  97.46*    17.87*     | 12.78  98.92*    2.06*      | 55.02  97.00**   14.75**
Email Enron         | 75.29  95.97*    19.91*     | 12.53  97.92*    2.24*      | 54.78  94.39**   16.84**

Low Ci/Li           | α_i ∼ N(0.4, 0.2)           | α_i ∼ N(0.8, 0.2)           | α_i ∼ U[0, 1]
Datasets            | %SS    %SC Inv.  %SS Inv.   | %SS    %SC Inv.  %SS Inv.   | %SS    %SC Inv.  %SS Inv.
Karate Club         | 99.41  100.00    49.64      | 60.59  100.00    23.19      | 86.18  100.00    41.34
Les Miserables      | 98.96  100.00    51.17      | 59.22  100.00    28.34      | 85.71  100.00    49.26
College Football    | 98.87  100.00    60.42      | 61.48  100.00    28.30      | 86.35  100.00    54.87
Power Grid          | 98.68  99.13*    49.45*     | 59.41  98.81*    28.66*     | 85.20  99.13**   45.07**
Wiki Vote           | 98.62  98.30*    46.50*     | 59.89  97.38*    27.54*     | 85.01  98.51**   44.45**
Email Enron         | 98.73  97.96**   49.80**    | 59.85  96.48*    29.32*     | 84.94  98.0**    44.72**

*=0.001-NE, **=0.005-NE, %SS (%SC) = percentage of SS (SC) players, N(μ, σ^2) = normal distribution with mean μ and variance σ^2
Theorem 6 The pure-Nash-extension problem for n-player SC α-IDS games is NP-complete.

A similar proof argument yields the following computational-complexity result.

Theorem 7 The pure-Nash-extension problem for n-player SS α-IDS games is NP-complete.

Combining Theorems 6 and 7 yields the next corollary.

Corollary 1 The pure-Nash-extension problem for n-player SC+SS α-IDS games is NP-complete.

4 Preliminary Experimental Results
To illustrate the impact of the α parameter on α-IDS games, we perform experiments on randomly-generated instances of α-IDS games in which we compute a possibly approximate NE. Given ε > 0,
in an approximate ε-NE no individual's unilateral deviation can reduce that individual's expected cost by more than ε. The underlying structures of the instances use network graphs from
publicly available, real-world datasets [6, 16–20]. Appendix D of the supplementary material provides more specific information on the sizes of the different graphs in the real-world datasets. The
number of nodes/players ranges from 34 to ≈ 37K, while the number of edges ranges from 78 to
≈ 368K. The table lists the graphs in increasing size (from top to bottom). To generate each instance,
we generate (1) Ci/Li, where Ci = 10^3 (1 + random(0, 1)) and Li = 10^4 (or Li = 10^4/3) to obtain
a low (high) cost-to-loss ratio, and α_i values as specified in the experiments; (2) pi such that Δ^sc_i or
Δ^ss_i is in [0, 1]; and (3) q_ji's consistent with probabilistic constraints relative to the other parameters
(i.e., pi + Σ_{j∈Pa(i)} q_ji ≤ 1). On each instance, we initialize the players' mixed strategies uniformly
at random and run a simple gradient-dynamics heuristic based on regret minimization [21–23] until
we reach an (ε-)NE. In short, we update the strategy of every non-ε-best-responding player i at each
round t according to x_i^(t+1) ← x_i^(t) − 10 (Mi(1, x_Pa(i)^(t)) − Mi(0, x_Pa(i)^(t))). Note that for ε-NE to be
well-defined, all Mi values are normalized. Given that our main interest is to study the structural
properties of arbitrary α-IDS games, our hardness results on computing NE in such games justify
the use of a heuristic, as we do here. (Kearns and Ortiz [8] and Chan et al. [7] also used a similar
heuristic in their experiments.) Table 2 shows the average level of investment at NE over ten runs
on each graph instance. We observe that higher α values generate more SC players, consistent with
the nature of the game instances. Almost all of the SC players invest while most of the SS players
do not invest, regardless of the number of players in the games and the α values. This makes sense
because of the nature of the SC and SS players. Going from high to low cost-to-loss ratio, we see
that the number of SS players and the percentage of SS players investing at an NE increase across
all α values. In both the high and low cost-to-loss-ratio cases, we see similar behavior in which the
majority of the SS players do not invest (> 50%).
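A hedged Python sketch of this update loop (the callable gap and all parameter defaults are our own stand-ins; the paper's exact stopping rule may differ):

def regret_min_heuristic(x0, gap, eps=1e-3, eta=10.0, max_rounds=10**5):
    # gap(i, x) should return the normalized cost difference
    # M_i(1, x_Pa(i)) - M_i(0, x_Pa(i)) for player i under profile x.
    x = list(x0)
    for _ in range(max_rounds):
        g = [gap(i, x) for i in range(len(x))]
        # simplified eps-best-response test: player i can still improve
        active = [i for i in range(len(x))
                  if (g[i] < -eps and x[i] < 1) or (g[i] > eps and x[i] > 0)]
        if not active:
            return x                   # approximate eps-NE reached
        for i in active:
            x[i] = min(1.0, max(0.0, x[i] - eta * g[i]))
    return x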
Acknowledgments
This material is based upon work supported by an NSF Graduate Research Fellowship (first author)
and an NSF CAREER Award IIS-1054541 (second author).
References
[1] Geoffrey Heal and Howard Kunreuther. Interdependent security: A general model. Working Paper 10706, National Bureau of Economic Research, August 2004.
[2] Geoffrey Heal and Howard Kunreuther. IDS models of airline security. Journal of Conflict Resolution, 49(2):201–217, April 2005.
[3] Geoffrey Heal and Howard Kunreuther. The vaccination game. Working paper, Wharton Risk Management and Decision Processes Center, January 2005.
[4] Konstantinos Gkonis and Harilaos Psaraftis. Container transportation as an interdependent security problem. Journal of Transportation Security, 3:197–211, 2010.
[5] Aron Laszka, Mark Felegyhazi, and Levente Buttyan. A survey of interdependent information security games. ACM Comput. Surv., 47(2):23:1–23:38, August 2014.
[6] W.W. Zachary. An information flow model for conflict and fission in small groups. Journal of Anthropological Research, 33:452–473, 1977.
[7] Hau Chan, Michael Ceyko, and Luis E. Ortiz. Interdependent defense games: Modeling interdependent security under deliberate attacks. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, UAI '12, pages 152–162, 2012.
[8] Michael Kearns and Luis E. Ortiz. Algorithms for interdependent security games. In Advances in Neural Information Processing Systems, NIPS '04, pages 561–568, 2004.
[9] Michael Kearns, Michael Littman, and Satinder Singh. Graphical models for game theory. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, UAI '01, pages 253–260, 2001.
[10] Georg Gottlob, Gianluigi Greco, and Francesco Scarcello. Pure Nash equilibria: Hard and easy games. In Proceedings of the 9th Conference on Theoretical Aspects of Rationality and Knowledge, TARK '03, pages 215–230, 2003.
[11] John F. Nash. Equilibrium points in n-person games. Proceedings of the National Academy of Sciences of the United States of America, 35(1):48–49, Jan. 1950.
[12] Constantinos Daskalakis, Paul W. Goldberg, and Christos H. Papadimitriou. The complexity of computing a Nash equilibrium. In Proceedings of the Thirty-eighth Annual ACM Symposium on Theory of Computing, STOC '06, pages 71–78, 2006.
[13] Xi Chen, Xiaotie Deng, and Shang-Hua Teng. Settling the complexity of computing two-player Nash equilibria. J. ACM, 56(3):14:1–14:57, May 2009.
[14] Edith Elkind, Leslie Ann Goldberg, and Paul Goldberg. Nash equilibria in graphical games on trees revisited. In Proceedings of the 7th ACM Conference on Electronic Commerce, EC '06, pages 100–109, 2006.
[15] Michael R. Garey and David S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman & Co., New York, NY, USA, 1979.
[16] Donald E. Knuth. The Stanford GraphBase: A Platform for Combinatorial Computing. ACM, New York, NY, USA, 1993.
[17] M. Girvan and M. E. J. Newman. Community structure in social and biological networks. Proceedings of the National Academy of Sciences, 99(12):7821–7826, 2002.
[18] D.J. Watts and S.H. Strogatz. Collective dynamics of 'small-world' networks. Nature, 393:440–442, 1998.
[19] Jure Leskovec, Daniel Huttenlocher, and Jon Kleinberg. Signed networks in social media. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '10, pages 1361–1370, 2010.
[20] Bryan Klimt and Yiming Yang. Introducing the Enron corpus. In CEAS, 2004.
[21] Drew Fudenberg and David K. Levine. The Theory of Learning in Games, volume 1 of MIT Press Books. The MIT Press, June 1998.
[22] Noam Nisan, Tim Roughgarden, Éva Tardos, and Vijay V. Vazirani, editors. Algorithmic Game Theory. Cambridge University Press, 2007.
[23] Yoav Shoham and Kevin Leyton-Brown. Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations. Cambridge University Press, Cambridge, UK, 2009.
gottlob:1 strogatz:1 partially:4 hua:1 ch:1 corresponds:1 leyton:1 chance:2 determines:1 acm:5 goal:2 ann:1 replace:4 hard:4 experimentally:1 change:2 except:2 uniformly:1 justify:1 kearns:13 lemma:10 shang:1 teng:1 experimental:1 player:147 vote:2 formally:3 college:2 substitutability:3 internal:1 people:2 mark:1 |
5,065 | 5,586 | Learning Optimal Commitment to Overcome Insecurity
Avrim Blum
Carnegie Mellon University
Nika Haghtalab
Carnegie Mellon University
Ariel D. Procaccia
Carnegie Mellon University
[email protected]
[email protected]
[email protected]
Abstract
Game-theoretic algorithms for physical security have made an impressive real-world impact. These algorithms compute an optimal strategy for the defender
to commit to in a Stackelberg game, where the attacker observes the defender's
strategy and best-responds. In order to build the game model, though, the payoffs
of potential attackers for various outcomes must be estimated; inaccurate estimates can lead to significant inefficiencies. We design an algorithm that optimizes
the defender's strategy with no prior information, by observing the attacker's responses to randomized deployments of resources and learning his priorities. In
contrast to previous work, our algorithm requires a number of queries that is polynomial in the representation of the game.
1 Introduction
The US Coast Guard, the Federal Air Marshal Service, the Los Angeles Airport Police, and other
major security agencies are currently using game-theoretic algorithms, developed in the last decade,
to deploy their resources on a regular basis [13]. This is perhaps the biggest practical success story
of computational game theory, and it is based on a very simple idea. The interaction between
the defender and a potential attacker can be modeled as a Stackelberg game, in which the defender
commits to a (possibly randomized) deployment of his resources, and the attacker responds in a
way that maximizes his own payoff. The algorithmic challenge is to compute an optimal defender
strategy: one that would maximize the defender's payoff under the attacker's best response.
While the foregoing model is elegant, implementing it requires a significant amount of information.
Perhaps the most troubling assumption is that we can determine the attacker's payoffs for different
outcomes. In deployed applications, these payoffs are estimated using expert analysis and historical data, but an inaccurate estimate can lead to significant inefficiencies. The uncertainty about
the attacker's payoffs can be encoded into the optimization problem itself, either through robust
optimization techniques [12], or by representing payoffs as continuous distributions [5].
Letchford et al. [8] take a different, learning-theoretic approach to dealing with uncertain attacker
payoffs. Studying Stackelberg games more broadly (which are played by two players, a leader and
a follower), they show that the leader can efficiently learn the follower's payoffs by iteratively committing to different strategies, and observing the attacker's sequence of responses. In the context of
security games, this approach may be questionable when the attacker is a terrorist, but it is a perfectly
reasonable way to calibrate the defender's strategy for routine security operations when the attacker
is, say, a smuggler. And the learning-theoretic approach has two major advantages over modifying
the defender's optimization problem. First, the learning-theoretic approach requires no prior information. Second, the optimization-based approach deals with uncertainty by inevitably degrading the
quality of the solution, as, intuitively, the algorithm has to simultaneously optimize against a range
of possible attackers; this problem is circumvented by the learning-theoretic approach.
But let us revisit what we mean by "efficiently learn". The number of queries (i.e., observations of
follower responses to leader strategies) required by the algorithm of Letchford et al. [8] is polynomial
in the number of pure leader strategies. The main difficulty in applying their results to Stackelberg
security games is that even in the simplest security game, the number of pure defender strategies is
exponential in the representation of the game. For example, if each of the defender's resources can
protect one of two potential targets, there is an exponential number of ways in which resources can
be assigned to targets. 1
Our approach and results. We design an algorithm that learns an (additively) ε-optimal strategy
for the defender with probability 1 − δ, by asking a number of queries that is polynomial in the
representation of the security game, and logarithmic in 1/ε and 1/δ. Our algorithm is completely
different from that of Letchford et al. [8]. Its novel ingredients include:
• We work in the space of feasible coverage probability vectors, i.e., we directly reason about
the probability that each potential target is protected under a randomized defender strategy.
Denoting the number of targets by n, this is an n-dimensional space. In contrast, Letchford
et al. [8] study the exponential-dimensional space of randomized defender strategies. We
observe that, in the space of feasible coverage probability vectors, the region associated
with a specific best response for the attacker (i.e., a specific target being attacked) is convex.
• To optimize within each of these convex regions, we leverage techniques, developed
by Tauman Kalai and Vempala [14], for optimizing a linear objective function in an
unknown convex region using only membership queries. In our setting, it is straightforward
to build a membership oracle, but it is quite nontrivial to satisfy a key assumption of the
foregoing result: that the optimization process starts from an interior point of the convex
region. We do this by constructing a hierarchy of nested convex regions, and using smaller
regions to obtain interior points in larger regions.
• We develop a method for efficiently discovering new regions. In contrast, Letchford et
al. [8] find regions (in the high-dimensional space of randomized defender strategies) by
sampling uniformly at random; their approach is inefficient when some regions are small.
2 Preliminaries
A Stackelberg security game is a two-player general-sum game between a defender (or the leader)
and an attacker (or the follower). In this game, the defender commits to a randomized allocation
of his security resources to defend potential targets. The attacker, in turn, observes this randomized
allocation and attacks the target with the best expected payoff. The defender and the attacker receive payoffs that depend on the target that was attacked and whether or not it was defended. The
defender's goal is to choose an allocation that leads to the best payoff.
More precisely, a security game is defined by a 5-tuple (T, D, R, A, U ):
• T = {1, . . . , n} is a set of n targets.
• R is a set of resources.
• D ⊆ 2^T is a collection of subsets of targets, each called a schedule, such that for every
schedule D ∈ D, the targets in D can be simultaneously defended by one resource. It is
natural to assume that if a resource is capable of covering schedule D, then it can also
cover any subset of D. We call this property closure under the subset operation; it is also
known as "subsets of schedules are schedules (SSAS)" [7].
• A : R → 2^D, called the assignment function, takes a resource as input and returns the set of
all schedules that the resource is capable of defending. An allocation of resources is valid
if every resource r is allocated to a schedule in A(r).
• The payoffs of the players are given by functions Ud(t, pt) and Ua(t, pt), which return the
expected payoffs of the defender and the attacker, respectively, when target t is attacked and
it is covered with probability pt (as formally explained below). We make two assumptions
that are common to all papers on security games. First, these utility functions are linear.
Second, the attacker prefers it if the attacked target is not covered, and the defender prefers
it if the attacked target is covered, i.e., Ud(t, pt) and Ua(t, pt) are respectively increasing
and decreasing in pt. We also assume w.l.o.g. that the utilities are normalized to have
values in [−1, 1]. If the utility functions have coefficients that are rational with denominator
at most a, then the game's (utility) representation length is L = n log n + n log a.

1 Subsequent work by Marecki et al. [9] focuses on exploiting revealed information during the learning
process, via Monte Carlo Tree Search, to optimize total leader payoff. While their method provably
converges to the optimal leader strategy, no theoretical bounds on the rate of convergence are known.
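A minimal Python container for this 5-tuple (a hypothetical encoding, not from the paper) may help fix the notation:

from dataclasses import dataclass
from typing import Callable, Dict, FrozenSet, List, Set

@dataclass
class SecurityGame:
    targets: List[int]                           # T = {1, ..., n}
    resources: List[int]                         # R
    schedules: Set[FrozenSet[int]]               # D, closed under subsets
    assignments: Dict[int, Set[FrozenSet[int]]]  # A: resource -> schedules
    U_d: Callable[[int, float], float]           # defender payoff U_d(t, p_t)
    U_a: Callable[[int, float], float]           # attacker payoff U_a(t, p_t)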
A pure strategy of the defender is a valid assignment of resources to schedules. The set of pure
strategies is determined by T, D, R, and A. Let there be m pure strategies; we use the following
n × m zero-one matrix M to represent the set of all pure strategies. Every row in M represents
a target and every column represents a pure strategy. Mti = 1 if and only if target t is covered
using some resource in the ith pure strategy. A mixed strategy (hereinafter, called a strategy) is a
distribution over the pure strategies. To represent a strategy we use a 1 × m vector s, such that si is
the probability with which the ith pure strategy is played, and Σ_{i=1}^m si = 1.
Given a defender's strategy, the coverage probability of a target is the probability with which it is
defended. Let s be a defender's strategy; then the coverage probability vector is p^T = M s^T, where
pt is the coverage probability of target t. We call a probability vector implementable if there exists a
strategy that imposes that coverage probability on the targets.
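The coverage probability vector and the attacker's best response are then one-liners; the sketch below (names are ours) assumes M is a NumPy array whose columns are pure strategies:

import numpy as np

def coverage(M, s):
    # p^T = M s^T: coverage probability of each target under strategy s.
    return M @ s

def attacker_best_response(p, U_a):
    # b(p): attack a target maximizing Ua(t, p_t); tie-breaking in favor
    # of the "best" target is handled upstream, as in the text.
    return max(range(len(p)), key=lambda t: U_a(t, p[t]))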
Let p^s be the coverage probability vector corresponding to strategy s. The attacker's best response
to s is defined by b(s) = arg max_t Ua(t, p^s_t). Since the attacker's best response is determined by the
coverage probability vector irrespective of the strategy, we slightly abuse notation by using b(p^s)
to denote the best response as well. We say that target t is "better" than t' for the defender if the
highest payoff he receives when t is attacked is more than the highest payoff he receives when t' is
attacked. We assume that if multiple targets are tied for the best response, then ties are broken in
favor of the "best" target.
The defender's optimal strategy is defined as the strategy with the highest expected payoff for the defender, i.e., arg max_s Ud(b(s), p^s_{b(s)}). An optimal strategy p is called conservative if no other optimal
strategy has a strictly lower sum of coverage probabilities. For two coverage probability vectors we
use q ⪯ p to denote that for all t, qt ≤ pt.
3 Problem Formulation and Technical Approach

In this section, we give an overview of our approach for learning the defender's optimal strategy
when Ua is not known. To do so, we first review how the optimal strategy is computed in the case
where Ua is known.
Computing the defender's optimal strategy, even when Ua(·) is known, is NP-hard [6]. In practice
the optimal strategy is computed using two formulations: mixed-integer programming [11] and
multiple linear programs [1]; the latter provides some insight for our approach. The multiple-LP
approach creates a separate LP for every t ∈ T. This LP, as shown below, solves for the optimal
defender strategy under the restriction that the strategy is valid (second and third constraints) and the
attacker best-responds by attacking t (first constraint). Among these solutions, the optimal strategy
is the one where the defender has the highest payoff.
    maximize   Ud(t, Σ_{i : Mti = 1} si)
    s.t.       ∀t' ≠ t,  Ua(t', Σ_{i : Mt'i = 1} si) ≤ Ua(t, Σ_{i : Mti = 1} si)
               ∀i, si ≥ 0
               Σ_{i=1}^m si = 1
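As an illustration only: assuming the (exponentially many) pure strategies have been enumerated as columns of M, and writing each linear utility as U(t, pt) = w_t pt + c_t, one LP of the multiple-LP approach could be solved as follows (the coefficient arrays and names are ours, not the paper's):

import numpy as np
from scipy.optimize import linprog

def lp_for_target(M, t, U_a_coeff, U_d_coeff):
    # U_a(t, p_t) = w_a[t]*p_t + c_a[t]; U_d(t, p_t) = w_d[t]*p_t + c_d[t].
    n, m = M.shape
    w_a, c_a = U_a_coeff
    w_d, _ = U_d_coeff
    obj = -w_d[t] * M[t]              # maximize w_d[t] * (M[t] @ s)
    A_ub, b_ub = [], []
    for tp in range(n):               # for all t' != t: Ua(t', .) <= Ua(t, .)
        if tp == t:
            continue
        A_ub.append(w_a[tp] * M[tp] - w_a[t] * M[t])
        b_ub.append(c_a[t] - c_a[tp])
    return linprog(obj,
                   A_ub=np.array(A_ub) if A_ub else None,
                   b_ub=np.array(b_ub) if b_ub else None,
                   A_eq=np.ones((1, m)), b_eq=[1.0],
                   bounds=(0, 1))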
We make two changes to the above LP in preparation for finding the optimal strategy in polynomially
many queries when Ua is unknown. First, notice that when Ua is unknown, we do not have an
explicit definition of the first constraint. However, implicitly we can determine whether t has a better
payoff than t' by observing the attacker's best response to s. Second, the above LP has exponentially
many variables, one for each pure strategy. However, given the coverage probabilities, the attacker's
actions are independent of the strategy that induces those coverage probabilities. So, we can restate the
LP to use variables that represent the coverage probabilities, and add a constraint that enforces the
coverage probabilities to be implementable.
    maximize   Ud(t, pt)
    s.t.       t is attacked                            (1)
               p is implementable
This formulation requires optimizing a linear function over a region of the space of coverage probabilities, using only membership queries. We do so by examining some of the characteristics of the
above formulation and then leveraging an algorithm introduced by Tauman Kalai and Vempala [14]
that optimizes over a convex set, using only an initial point and a membership oracle. Here, we
restate their result in a slightly different form.

Theorem 2.1 [14, restated]. For any convex set H ⊆ R^n that is contained in a ball of radius R,
given a membership oracle, an initial point with margin r in H, and a linear function ℓ(·), with
probability 1 − δ we can find an ε-approximate optimal solution for ℓ in H, using O(n^4.5 log(nR^2/(rεδ)))
queries to the oracle.
4 Main Result

In this section, we design and analyze an algorithm that (ε, δ)-learns the defender's optimal strategy
in a number of best-response queries that is polynomial in the number of targets and the representation length, and logarithmic in 1/ε and 1/δ. Our main result is:

Theorem 1. Consider a security game with n targets and representation length L, such that for every target, the set of implementable coverage probability vectors that induce an attack on that target,
if non-empty, contains a ball of radius 1/2^L. For any ε, δ > 0, with probability 1 − δ, Algorithm 2
finds a defender strategy that is optimal up to an additive term of ε, using O(n^6.5 (log(n/(εδ)) + L))
best-response queries to the attacker.
The main assumption in Theorem 1 is that the set of implementable coverage probabilities for which
a given target is attacked is either empty or contains a ball of radius 1/2^L. This implies that if it is
possible to make the attacker prefer a target, then it is possible to do so with a small margin. This
assumption is very mild in nature and its variations have appeared in many well-known algorithms.
For example, interior-point methods for linear optimization require an initial feasible solution that
is within the region of optimization with a small margin [4]. Letchford et al. [8] make a similar
assumption, but their result depends linearly, instead of logarithmically, on the minimum volume of
a region (because they use uniformly random sampling to discover regions).
To informally see why such an assumption is necessary, consider a security game with n targets,
such that an attack on any target but target 1 is very harmful to the defender. The defender's goal
is therefore to convince the attacker to attack target 1. The attacker, however, only attacks target 1
under a very specific coverage probability vector, i.e., the defender's randomized strategy has to be
just so. In this case, the defender's optimal strategy is impossible to approximate.
The remainder of this section is devoted to proving Theorem 1. We divide our intermediate results
into sections based on the aspect of the problem that they address. The proofs of most lemmas are
relegated to the appendix; here we mainly aim to provide the structure of the theorem's overall proof.
4.1 Characteristics of the Optimization Region
One of the requirements of Theorem 2.1 is that the optimization region is convex. Let P denote the
space of implementable probability vectors, and let Pt = {p : p is implementable and b(p) = t}.
The next lemma shows that Pt is indeed convex.
Lemma 1. For all t ∈ T, Pt is the intersection of finitely many half-spaces.
Proof. Pt is defined by the set of all p ∈ [0, 1]^n such that there is an s that satisfies the LP with the
following constraints. There are m half-spaces of the form si ≥ 0, 2 half-spaces Σ_i si ≤ 1 and
Σ_i si ≥ 1, 2n half-spaces of the form M s^T − p^T ⪰ 0 and M s^T − p^T ⪯ 0, and n − 1 half-spaces
of the form Ua(t, pt) − Ua(t', pt') ≥ 0. Therefore, the set of (s, p) ∈ R^{m+n} such that p is
implemented by strategy s and causes an attack on t is the intersection of 3n + m + 1 half-spaces. Pt
is the reflection of this set on n dimensions; therefore, it is also the intersection of at most 3n + m + 1
half-spaces. □
Lemma 1, in particular, implies that Pt is convex. The lemma's proof also suggests a method
for finding the minimal half-space representation of P. Indeed, the set S = {(s, p) ∈ R^{m+n} :
valid strategy s implements p} is given by its half-space representation. Using the Double Description Method [2, 10], we can compute the vertex representation of S. Since P is a linear transformation of S, its vertex representation is the transformation of the vertex representation of S. Using the
Double Description Method again, we can find the minimal half-space representation of P.
Next, we establish some properties of P and the half-spaces that define it. The proofs of the following two lemmas appear in Appendices A.1 and A.2, respectively.

Lemma 2. Let p ∈ P. Then for any 0 ⪯ q ⪯ p, q ∈ P.

Lemma 3. Let A be a set of positive volume that is the intersection of finitely many half-spaces.
Then the following two statements are equivalent.

1. For all p ∈ A, p ⪰ 0; and for all 0 ⪯ q ⪯ p, q ∈ A.
2. A can be defined as the intersection of half-spaces ei · p ≥ 0 for all i, and a set H of half-spaces, such
that for any half-space h · p ≤ b in H, h ⪰ 0 and b ≥ 0.

Using Lemmas 2 and 3, we can refer to the set of half-spaces that define P by {(ei, 0) : for all i} ∪
H_P, where for all (h*, b*) ∈ H_P, h* ⪰ 0 and b* ≥ 0.
4.2 Finding Initial Points

An important requirement for many optimization algorithms, including the one developed by Tauman Kalai and Vempala [14], is having a "well-centered" initial feasible point in the region of
optimization. There are two challenges involved in discovering an initial feasible point in the interior of every region: first, establishing that a region is non-empty, possibly by finding a boundary
point; second, obtaining a point that has a significant margin from the boundary. We carry out these
tasks by executing the optimization in a hierarchy of sets, where at each level the optimization task
only considers a subset of the targets and the feasibility space. We then show that optimization in
one level of this hierarchy helps us find initial points in new regions that are well-centered in higher
levels of the hierarchy.
To this end, let us define restricted regions. These regions are obtained by first perturbing the
defining half-spaces of P so that they conform to a given representation length, and then trimming
the boundaries by a given width (see Figure 1).
In the remainder of this paper, we use γ = 1/((n + 1)2^{L+1}) to denote the accuracy of the representation
and the width of the trimming procedure for obtaining restricted regions. More precisely:

Definition 1 (restricted regions). The set R_k ⊆ R^n is defined by the intersection of the following half-spaces: for all i, (ei, kγ); and for all (h*, b*) ∈ H_P, a half-space (h, b + kγ), such that h = γ⌊γ^{-1} h*⌋
and b = γ⌈γ^{-1} b*⌉. Furthermore, for every t ∈ T, define R^t_k = R_k ∩ Pt.
The next lemma, whose proof appears in Appendix A.3, shows that the restricted regions are subsets
of the feasibility space, so we can make best-response queries within them.

Lemma 4. For any k ≥ 0, R_k ⊆ P.

The next two lemmas, whose proofs are relegated to Appendices A.4 and A.5, show that in R_k one
can reduce each coverage probability individually down to kγ, and that the optimal conservative strategy
in R_k indeed reduces the coverage probabilities of all targets outside the best-response set to kγ.

Lemma 5. Let p ∈ R_k, and let q be such that kγ ⪯ q ⪯ p. Then q ∈ R_k.

Lemma 6. Let s and its corresponding coverage probability vector p be a conservative optimal strategy
in R_k. Let t* = b(s) and B = {t : Ua(t, pt) = Ua(t*, pt*)}. Then for any t ∉ B, pt = kγ.
The following lemma, whose proof appears in Appendix A.6, shows that if every non-empty
Pt contains a large enough ball, then R^t_n ≠ ∅.

Lemma 7. For any t and k ≤ n such that Pt contains a ball of radius r > 1/2^L, R^t_k ≠ ∅.
[Figure 1: A security game with one resource that can cover one of two targets. The attacker receives utility 0.5 from attacking target 1 and utility 1 from attacking target 2 when they are not defended; he receives 0 utility from attacking a target that is being defended. The defender's utility is the zero-sum complement. Panel (a), utilities of the game: an attack on target 1 yields 0.5(1 − p1) for the attacker and −0.5(1 − p1) for the defender; an attack on target 2 yields (1 − p2) and −(1 − p2), respectively. Panel (b), regions: the feasibility half-spaces (including p1 + p2 ≤ 1), the utility half-space 0.5(1 − p1) = (1 − p2) separating the region where target 1 is attacked from the region P2 where target 2 is attacked, the nested restricted regions R^1_1, R^1_2, R^2_1, R^2_2, and the optimal strategy.]
The next lemma provides the main insight behind our search for the region with the highest-paying optimal strategy. It implies that we can restrict our search to strategies that are optimal for a subset of targets in R_k, if the attacker also agrees to play within that subset of targets. At any point, if the attacker chooses a target outside the known regions, he is providing us with a point in a new region. Crucially, Lemma 8 requires that we optimize exactly inside each restricted region, and we show below (Algorithm 1 and Lemma 11) that this is indeed possible.
Lemma 8. Assume that for every t, if Pt is non-empty, then it contains a ball of radius 1/2^L.
Given K ⊆ T and k ≤ n, let p ∈ R_k be the coverage probability vector of the strategy that has
kγ probability mass on the targets in T \ K and is optimal if the attacker were restricted to
attacking targets in K. Let p* be the optimal strategy in P. If b(p) ∈ K then b(p*) ∈ K.
Proof. Assume on the contrary that b(p*) = t* ∉ K. Since P_{t*} ≠ ∅, by Lemma 7 there
exists p' ∈ R^{t*}_k.
For ease of exposition, replace p with its corresponding conservative strategy in R_k. Let
B be the set of targets that are tied for the attacker's best response in p, i.e., B =
arg max_{t∈T} Ua(t, pt). Since b(p) ∈ K and ties are broken in favor of the "best" target, i.e., t*, it must be that t* ∉ B. Then, for any t ∈ B,
Ua(t, pt) > Ua(t*, kγ) ≥ Ua(t*, p'_{t*}) ≥ Ua(t, p'_t). Since Ua is decreasing in the coverage probability, for all t ∈ B, p'_t > pt. Note that there is a positive gap between the attacker's payoff for attacking
a best-response target versus another target, i.e., Δ = min_{t'∈K\B, t∈B} Ua(t, pt) − Ua(t', pt') > 0, so
it is possible to increase pt by a small amount without changing the best response. More precisely,
since Ua is continuous and decreasing in the coverage probability, for every t ∈ B there exists
ζ < p'_t − pt such that for all t' ∈ K \ B, Ua(t', pt') < Ua(t, p'_t − ζ) < Ua(t, pt).
Let q be such that for t ∈ B, qt = p'_t − ζ, and for t ∉ B, qt = pt = kγ (by Lemma 6 and the fact
that p was replaced by its conservative equivalent). By Lemma 5, q ∈ R_k. Since for all t ∈ B
and t' ∈ K \ B, Ua(t, qt) > Ua(t', qt'), we have b(q) ∈ B. Moreover, because Ud is increasing in the
coverage probability, for all t ∈ B, Ud(t, qt) > Ud(t, pt). So, q has a higher payoff for the defender
when the attacker is restricted to attacking K. This contradicts the optimality of p in R_k. Therefore,
b(p*) ∈ K. □
If the attacker attacks a target t outside the set of targets K whose regions we have already discovered, we can use the new feasible point in R^t_k to obtain a well-centered point in R^t_{k−1}, as the next
lemma formally states.

Lemma 9. For any k and t, let p be any strategy in R^t_k. Define q such that qt = pt − γ/2 and, for all
i ≠ t, qi = pi + γ/(4√n). Then q ∈ R^t_{k−1}, and q has distance γ/(2√n) from the boundaries of R^t_{k−1}.

The lemma's proof is relegated to Appendix A.7.
4.3 An Oracle for the Convex Region

We use a three-step procedure to define a membership oracle for P or R^t_k. Given a vector p, we
first use the half-space representation of P (or R_k) described in Section 4.1 to determine whether
p ∈ P (or p ∈ R_k). We then find a strategy s that implements p by solving a linear system with
constraints M s^T = p^T, s ⪰ 0, and ||s||_1 = 1. Lastly, we make a best-response query to the attacker
for strategy s. If the attacker responds by attacking t, then p ∈ Pt (or p ∈ R^t_k); else p ∉ Pt (or
p ∉ R^t_k).
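A hedged sketch of this three-step oracle (the half-space convention h · p ≤ b, the numerical tolerance, and all names are our assumptions):

import numpy as np
from scipy.optimize import linprog

def membership_oracle(p, M, halfspaces, query_attacker, t):
    # (1) check the half-space description of P (or R_k);
    # (2) find s implementing p: M s^T = p^T, s >= 0, ||s||_1 = 1;
    # (3) spend one best-response query and compare with t.
    p = np.asarray(p, dtype=float)
    if any(h @ p > b + 1e-9 for h, b in halfspaces):
        return False
    n, m = M.shape
    A_eq = np.vstack([M, np.ones((1, m))])
    b_eq = np.append(p, 1.0)
    feas = linprog(np.zeros(m), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    if not feas.success:
        return False
    return query_attacker(feas.x) == t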
4.4 The Algorithms

In this section, we define algorithms that use the results from previous sections to prove Theorem 1.
First, we define Algorithm 1, which receives an approximately optimal strategy in R^t_k as input
and finds the optimal strategy in R^t_k. As noted above, obtaining exact optimal solutions in R^t_k is
required in order to apply Lemma 8, thereby ensuring that we discover new regions when lucrative
undiscovered regions still exist.
Algorithm 1 LATTICE-ROUNDING (approximately optimal strategy p)

1. For all i ≠ t, make best-response queries to binary-search for the smallest p'_i ∈ [kγ, pi], up
   to accuracy 1/2^{5n(L+1)}, such that t = b(p'), where for all j ≠ i, p'_j ← pj.
2. For all i, set ri and qi respectively to the smallest and second-smallest rational numbers
   with denominator at most 2^{2n(L+1)} that are larger than p'_i − 1/2^{5n(L+1)}.
3. Define p* such that p*_t is the unique rational number with denominator at most 2^{2n(L+1)} in
   [pt, pt + 1/2^{4n(L+1)}) (see the proof for uniqueness), and for all i ≠ t, p*_i ← ri.
4. Query j ← b(p*).
5. If j ≠ t, let p*_j ← qj. Go to step 4.
6. Return p*.
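A minimal sketch of the binary search in step 1 (query_attacker is a stand-in for one best-response query per probe; names are ours):

def shrink_coordinate(p, i, t, k_gamma, acc, query_attacker):
    # Binary-search the smallest p_i' in [k*gamma, p_i], up to accuracy
    # acc, that keeps target t as the attacker's best response.
    lo, hi = k_gamma, p[i]
    while hi - lo > acc:
        mid = (lo + hi) / 2.0
        trial = list(p)
        trial[i] = mid
        if query_attacker(trial) == t:   # t still attacked: can go lower
            hi = mid
        else:
            lo = mid
    return hi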
The next two lemmas, whose proofs appear in Appendices A.8 and A.9, establish the guarantees of
Algorithm 1. The first is a variation of a well-known result in linear programming [3], adapted
specifically to our problem setting.

Lemma 10. Let p* be a basic optimal strategy in R^t_k; then for all i, p*_i is a rational number with
denominator at most 2^{2n(L+1)}.

Lemma 11. For any k and t, let p be a 1/2^{6n(L+1)}-approximate optimal strategy in R^t_k. Algorithm 1
finds the optimal strategy in R^t_k in O(nL) best-response queries.
At last, we are ready to prove our main result, which provides guarantees for Algorithm 2, given
below.

Theorem 1 (restated). Consider a security game with n targets and representation length L, such
that for every target, the set of implementable coverage probability vectors that induce an attack
on that target, if non-empty, contains a ball of radius 1/2^L. For any ε, δ > 0, with probability
1 − δ, Algorithm 2 finds a defender strategy that is optimal up to an additive term of ε, using
O(n^6.5 (log(n/(εδ)) + L)) best-response queries to the attacker.
Proof Sketch. For each K ⊆ T and k, the loop at step 5 of Algorithm 2 finds the optimal strategy in R^k if the attacker were restricted to attacking targets of K.
Every time the IF clause at step 5a is satisfied, the algorithm expands the set K by a target t' and adds x^{t'} to the set of initial points X, which is an interior point of R^{k-1}_{t'} (by Lemma 9). Then the algorithm restarts the loop at step 5. Therefore, every time the loop at step 5 is started, X is a set of initial points in K that have margin γ/(2√n) in R^k. This loop is restarted at most n - 1 times.
We reach step 6 only when the best response to the optimal strategy that only considers targets of K is in K. By Lemma 8, the optimal strategy is in P_t for some t ∈ K. By applying Theorem 2.1 to K,
Algorithm 2 OPTIMIZE (accuracy ε, confidence δ)
1. γ ← 1/((n+1)2^{L+1}), δ' ← δ/n², and k ← n.
2. Use R, D, and A to compute oracles (half-spaces) for P, R^0, . . . , R^n.
3. Query t ← b(kγ).
4. K ← {t}, X ← {x^t}, where x^t_t = kγ - γ/2 and, for i ≠ t, x^t_i = kγ + γ/(4√n).
5. For t ∈ K,
   (a) If during steps 5b to 5e a target t' ∉ K is attacked as a response to some strategy p:
       i. Let x^{t'}_{t'} ← p_{t'} - γ/2 and, for i ≠ t', x^{t'}_i ← p_i + γ/(4√n).
       ii. X ← X ∪ {x^{t'}}, K ← K ∪ {t'}, and k ← k - 1.
       iii. Restart the loop at step 5.
   (b) Use Theorem 2.1 with set of targets K. With probability 1 - δ', find a q^t that is a 1/2^{6n(L+1)}-approximate optimal strategy restricted to set K.
   (c) Use LATTICE-ROUNDING (Algorithm 1) on q^t to find q^{t*}, the optimal strategy in R^k_t restricted to K.
   (d) For all t' ∉ K, q^{t*}_{t'} ← kγ.
   (e) Query q^{t*}.
6. For all t ∈ K, use Theorem 2.1 to find a p^{t*} that is an ε-approximate strategy with probability 1 - δ', in P_t.
7. Return the p^{t*} that has the highest payoff to the defender.
with an oracle for P, using the initial set of points X, which has γ/(2√n) margin in R^0, we can find the ε-optimal strategy with probability 1 - δ'. There are at most n² applications of Theorem 2.1 and each succeeds with probability 1 - δ', so our overall procedure succeeds with probability 1 - n²δ' ≥ 1 - δ.
Regarding the number of queries, every time the loop at step 5 is restarted, |K| increases by 1. So, this loop is restarted at most n - 1 times. In a successful run of the loop for set K, the loop makes |K| calls to the algorithm of Theorem 2.1 to find a 1/2^{6n(L+1)}-approximate optimal solution. In each call, X has initial points with margin γ/(2√n), and furthermore, the total feasibility space is bounded by a sphere of radius √n (because of probability vectors), so each call makes O(n^{4.5}(log(n/δ) + L)) queries. The last call looks for an ε-approximate solution, and will take another O(n^{4.5}(log(n/δ) + L)) queries. In addition, our algorithm makes n² calls to Algorithm 1, for a total of O(n³L) queries. In conclusion, our procedure makes a total of O(n^{6.5}(log(n/δ) + L)) = poly(n, L, log(1/δ)) queries.
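To summarize the control flow of Algorithm 2, the following schematic Python skeleton spells out only the restart bookkeeping of step 5. The callbacks approx_opt, lattice_round, and best_response are placeholders for Theorem 2.1, Algorithm 1, and the attacker oracle respectively, and steps 5d, 6, and 7 are elided; this is a sketch of the loop structure, not an implementation.

```python
import math

class NewRegionDiscovered(Exception):
    """Raised when a best-response query hits a target outside K (step 5a)."""
    def __init__(self, target, strategy):
        self.target, self.strategy = target, strategy

def perturb(p, t, gamma):
    """Lemma 9 perturbation: pull coordinate t off the boundary, pad the rest."""
    q = [p_i + gamma / (4 * math.sqrt(len(p))) for p_i in p]
    q[t] = p[t] - gamma / 2
    return q

def optimize_skeleton(n, L, best_response, approx_opt, lattice_round):
    """Control-flow skeleton of Algorithm 2; targets are indexed 0..n-1."""
    gamma = 1.0 / ((n + 1) * 2 ** (L + 1))
    k = n
    t0 = best_response([k * gamma] * n)                      # step 3
    K, X = {t0}, {t0: perturb([k * gamma] * n, t0, gamma)}   # step 4
    while True:
        try:
            solutions = {}
            for t in sorted(K):                  # loop at step 5
                q = approx_opt(K, t, X)          # step 5b (Theorem 2.1)
                q_star = lattice_round(q)        # step 5c (Algorithm 1)
                j = best_response(q_star)        # step 5e
                if j not in K:
                    raise NewRegionDiscovered(j, q_star)
                solutions[t] = q_star
            return solutions                     # continue with steps 6-7
        except NewRegionDiscovered as e:         # step 5a
            X[e.target] = perturb(e.strategy, e.target, gamma)
            K.add(e.target)
            k -= 1                               # and restart the loop
```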
5 Discussion
Our main result focuses on the query complexity of our problem. We believe that, indeed, best-response queries are our most scarce resource, and it is therefore encouraging that an (almost) optimal
strategy can be learned with a polynomial number of queries.
It is worth noting, though, that some steps in our algorithm are computationally inefficient. Specifically, our membership oracle needs to determine whether a given coverage probability vector is
implementable. We also need to explicitly compute the feasibility half-spaces that define P. Informally speaking, (worst-case) computational inefficiency is inevitable, because computing an optimal
strategy to commit to is computationally hard even in simple security games [6].
Nevertheless, deployed security games algorithms build on integer programming techniques to
achieve satisfactory runtime performance in practice [13]. While beyond the reach of theoretical
analysis, a synthesis of these techniques with ours can yield truly practical learning algorithms for
dealing with payoff uncertainty in security games.
Acknowledgments. This material is based upon work supported by the National Science Foundation under grants CCF-1116892, CCF-1101215, CCF-1215883, and IIS-1350598.
References
[1] V. Conitzer and T. Sandholm. Computing the optimal strategy to commit to. In Proceedings of the 7th ACM Conference on Electronic Commerce (EC), pages 82–90, 2006.
[2] K. Fukuda and A. Prodon. Double description method revisited. In Combinatorics and Computer Science, pages 91–111. Springer, 1996.
[3] P. Gács and L. Lovász. Khachiyan's algorithm for linear programming. Mathematical Programming Studies, 14:61–68, 1981.
[4] M. Grötschel, L. Lovász, and A. Schrijver. Geometric Algorithms and Combinatorial Optimization. Springer, 2nd edition, 1993.
[5] C. Kiekintveld, J. Marecki, and M. Tambe. Approximation methods for infinite Bayesian Stackelberg games: Modeling distributional payoff uncertainty. In Proceedings of the 10th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), pages 1005–1012, 2011.
[6] D. Korzhyk, V. Conitzer, and R. Parr. Complexity of computing optimal Stackelberg strategies in security resource allocation games. In Proceedings of the 24th AAAI Conference on Artificial Intelligence (AAAI), pages 805–810, 2010.
[7] D. Korzhyk, Z. Yin, C. Kiekintveld, V. Conitzer, and M. Tambe. Stackelberg vs. Nash in security games: An extended investigation of interchangeability, equivalence, and uniqueness. Journal of Artificial Intelligence Research, 41:297–327, 2011.
[8] J. Letchford, V. Conitzer, and K. Munagala. Learning and approximating the optimal strategy to commit to. In Proceedings of the 2nd International Symposium on Algorithmic Game Theory (SAGT), pages 250–262, 2009.
[9] J. Marecki, G. Tesauro, and R. Segal. Playing repeated Stackelberg games with unknown opponents. In Proceedings of the 11th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), pages 821–828, 2012.
[10] T. S. Motzkin, H. Raiffa, G. L. Thompson, and R. M. Thrall. The double description method. Annals of Mathematics Studies, 2(28):51–73, 1953.
[11] P. Paruchuri, J. P. Pearce, J. Marecki, M. Tambe, F. Ordóñez, and S. Kraus. Playing games for security: An efficient exact algorithm for solving Bayesian Stackelberg games. In Proceedings of the 7th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), pages 895–902, 2008.
[12] J. Pita, M. Jain, M. Tambe, F. Ordóñez, and S. Kraus. Robust solutions to Stackelberg games: Addressing bounded rationality and limited observations in human cognition. Artificial Intelligence, 174(15):1142–1171, 2010.
[13] M. Tambe. Security and Game Theory: Algorithms, Deployed Systems, Lessons Learned. Cambridge University Press, 2012.
[14] A. Tauman Kalai and S. Vempala. Simulated annealing for convex optimization. Mathematics of Operations Research, 31(2):253–266, 2006.
5,066 | 5,587 | Diverse Randomized Agents Vote to Win
Albert Xin Jiang
Trinity University
Leandro Soriano Marcolino
USC
Ariel D. Procaccia
CMU
[email protected]
[email protected]
[email protected]
Tuomas Sandholm
CMU
Nisarg Shah
CMU
Milind Tambe
USC
[email protected]
[email protected]
[email protected]
Abstract
We investigate the power of voting among diverse, randomized software agents.
With teams of computer Go agents in mind, we develop a novel theoretical model
of two-stage noisy voting that builds on recent work in machine learning. This
model allows us to reason about a collection of agents with different biases (determined by the first-stage noise models), which, furthermore, apply randomized
algorithms to evaluate alternatives and produce votes (captured by the second-stage noise models). We analytically demonstrate that a uniform team, consisting
of multiple instances of any single agent, must make a significant number of mistakes, whereas a diverse team converges to perfection as the number of agents
grows. Our experiments, which pit teams of computer Go agents against strong
agents, provide evidence for the effectiveness of voting when agents are diverse.
1 Introduction
Recent years have seen a surge of work at the intersection of social choice and machine learning. In
particular, significant attention has been given to the learnability and applications of noisy preference
models [16, 2, 1, 3, 24]. These models enhance our understanding of voters' behavior in elections,
and provide a theoretical basis for reasoning about crowdsourcing systems that employ voting to
aggregate opinions [24, 8]. In contrast, this paper presents an application of noisy preference models
to the design of systems of software agents, emphasizing the importance of voting and diversity.
Our starting point is two very recent papers by Marcolino et al. [19, 20], which provide a new
perspective on voting among multiple software agents. Their empirical results focus on Computer
Go programs (see, e.g., [10]), which often use Monte Carlo tree search algorithms [7]. Taking the
team formation point of view, Marcolino et al. establish that a team consisting of multiple (four
to six) different computer Go programs that use plurality voting (each agent giving one point to a favorite alternative) to decide on each move outperforms a team consisting of multiple copies
of the strongest program (which is better than a single copy because the copies are initialized with
different random seeds). The insight is that even strong agents are likely to make poor choices in
some states, which is why diversity beats strength. And while the benefits of diversity in problem
solving are well studied [12, 13, 6, 14], the setting of Marcolino et al. combines several ingredients.
First, performance is measured across multiple states; as they point out, this is also relevant when
making economic decisions (such as stock purchases) across multiple scenarios, or selecting item
recommendations for multiple users. Second, agents' votes are based on randomized algorithms;
this is also a widely applicable assumption, and in fact even Monte Carlo tree search specifically
is used for problems ranging from traveling salesman to classical (deterministic) planning, not to
mention that randomization is often used in many other AI applications.
Focusing on the computer Go application, we find it exciting because it provides an ideal example
of voting among teams of software agents: It is difficult to compare quality scores assigned by
heterogeneous agents to different moves, so optimization approaches that rely on cardinal utilities
fall short, while voting provides a natural aggregation method. More generally, the setting's new
ingredients call for a novel model of social choice, which should be rich enough to explain the
empirical finding that diversity beats strength.
However, the model suggested by Marcolino et al. [19] is rather rudimentary: they prove that a
diverse team would outperform copies of the strongest agent only if one of the weaker agents outperforms the strongest agent in at least one state; their model cannot quantify the advantage of
diversity. Marcolino et al. [20] present a similar model, but study the effect of increasing the size of
the action space (i.e., the board size in the Go domain). More importantly, Marcolino et al. [19, 20]
(and other related work [6]) assume that each agent votes for a single alternative. In contrast, it
is potentially possible to design agents that generate a ranking of multiple alternatives, calling for a
principled way to harness this additional information.
1.1 Our Approach and Results
We introduce the following novel, abstract model of voting, and instantiate it using Computer Go.
In each state, which corresponds to a board position in Go, there is a ground truth, which captures
the true quality of different alternatives (feasible moves in Go). Heuristic agents have a noisy
perception of the quality of alternatives. We model this using a noise model for each agent, which
randomly maps the ground truth to a ranking of the alternatives, representing the agent's biased view
of their qualities. But if a single agent is presented with the same state twice, the agent may choose
two different alternatives. This is because agents are assumed to be randomized. For example, as
mentioned above, most computer Go programs, such as Fuego [10], rely on Monte Carlo Tree Search
to randomly decide between different moves. We model this additional source of noise via a second
noise model, which takes the biased ranking as input, and outputs the agent's vote (another ranking
of the alternatives). A voting rule is employed to select a single alternative (possibly randomly) by
aggregating the agents' votes. Our main theoretical result is the following theorem, which is, in a
sense, an extension of the classic Condorcet Jury Theorem [9].
Theorem 2 (simplified and informal). (i) Under extremely mild assumptions on the noise models
and voting rule, a uniform team composed of copies of any single agent (even the "strongest" one
with the most accurate noise models), for any number of agents and copies, is likely to vote for
suboptimal alternatives in a significant fraction of states; (ii) Under mild assumptions on the noise
models and voting rule, a diverse team composed of a large number of different agents is likely to
vote for optimal alternatives in almost every state.
We show that the assumptions in both parts of the theorem are indeed mild by proving that three well-known noise models (the Mallows-φ model [18], the Thurstone-Mosteller model [26, 21], and the Plackett-Luce model [17, 23]) satisfy the assumptions in both parts of the theorem. Moreover,
the assumptions on the voting rule are satisfied by almost all prominent voting rules.
We also present experimental results in the Computer Go domain. As stated before, our key methodological contributions are a procedure for automatically generating diverse teams by using different
parameterizations of a Go program, and a novel procedure for extracting rankings of moves from
algorithms that are designed to output only a single good move. We show that the diverse team
significantly outperforms the uniform team under the plurality rule. We also show that it is possible
to achieve better performance by extracting rankings from agents using our novel methodology, and
aggregating them via ranked voting rules.
2 Background
We use [k] as shorthand for {1, . . . , k}. A vote is a total order (ranking) over the alternatives, usually denoted by σ. The set of rankings over a set of alternatives A is denoted by L(A). For a ranking σ, we use σ(i) to denote the alternative in position i in σ, so, e.g., σ(1) is the most preferred alternative in σ. We also use σ([k]) to denote {σ(1), . . . , σ(k)}. A collection of votes is called a profile, denoted by π. A deterministic voting rule outputs a winning alternative on each profile. For a randomized voting rule f (or simply a voting rule), the output f(π) is a distribution over the alternatives. A
voting rule is neutral if relabeling the alternatives relabels the output accordingly; in other words,
the output of the voting rule is independent of the labels of the alternatives. All prominent voting
rules, when coupled with uniformly random tie breaking, are neutral.
Families of voting rules. Next, we define two families of voting rules. These families are quite
wide, disjoint, and together they cover almost all prominent voting rules.
- Condorcet consistency. An alternative is called the Condorcet winner in a profile if it is preferred to every other alternative in a majority of the votes. Note that there can be at most one Condorcet winner. A voting rule is called Condorcet consistent if it outputs the Condorcet winner (with probability 1) whenever it exists. Many famous voting rules such as Kemeny's rule, Copeland's rule, Dodgson's rule, the ranked pairs method, the maximin rule, and Schulze's method are Condorcet consistent.
- PD-c rules [8]. This family is a generalization of positional scoring rules that include prominent voting rules such as plurality and Borda count. While the definition of Caragiannis et al. [8] outputs rankings, we naturally modify it to output winning alternatives. Let T_π(k, a) denote the number of times alternative a appears among the first k positions in profile π. Alternative a is said to position-dominate alternative b in π if T_π(k, a) > T_π(k, b) for all k ∈ [m - 1], where m is the number of alternatives in π. An alternative is called the position-dominating winner if it position-dominates every other alternative in a profile. It is easy to check that there can be at most one position-dominating winner. A voting rule is called position-dominance consistent (PD-c) if it outputs the position-dominating winner (with probability 1) whenever it exists. Caragiannis et al. [8] show that all positional scoring rules (including plurality and Borda count) and Bucklin's rule are PD-c (as rules that output rankings). We show that this holds even when the rules output winning alternatives. This is presented as Proposition 1 in the online appendix (specifically, Appendix A).
Caragiannis et al. [8] showed that PD-c rules are disjoint from Condorcet consistent rules (actually,
for rules that output rankings, they use a natural generalization of Condorcet consistent rules that
they call PM-c rules). Their proof also establishes the disjointness of the two families for rules that
output winning alternatives.
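For concreteness, here is a minimal Python implementation of both winner notions (the function names are ours; ties and non-existence are reported as None):

```python
def condorcet_winner(profile):
    """Return the Condorcet winner of a profile (a list of rankings, each
    ordered from most to least preferred), or None if no such winner."""
    alts, n = profile[0], len(profile)
    for a in alts:
        if all(sum(r.index(a) < r.index(b) for r in profile) > n / 2
               for b in alts if b != a):
            return a
    return None

def position_dominating_winner(profile):
    """Return the position-dominating winner, or None. T(k, a) counts
    appearances of a in the first k positions across the profile."""
    alts, m = profile[0], len(profile[0])
    T = lambda k, a: sum(a in r[:k] for r in profile)
    for a in alts:
        if all(all(T(k, a) > T(k, b) for k in range(1, m))
               for b in alts if b != a):
            return a
    return None

# Example: three votes over {x, y, z}; x wins under both notions.
votes = [["x", "y", "z"], ["x", "z", "y"], ["y", "x", "z"]]
print(condorcet_winner(votes))            # -> 'x'
print(position_dominating_winner(votes))  # -> 'x'
```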
2.1 Noise Models
One view of computational social choice models the votes as noisy estimates of an unknown true order of the alternatives. These votes come from a distribution that is parametrized by some underlying
ground truth. The ground truth can itself be the true order of alternatives, in which case we say that
the noise model is of the rank-to-rank type. The ground truth can also be an objective true quality
level for each alternative, which is more fine-grained than a true ranking of alternatives. In this case,
we say that the noise model is of the quality-to-rank type. See [15] for examples of quality-to-rank
models and how they are learned. Note that the output votes are rankings over alternatives in both
cases. We denote the ground truth by θ. It defines a true ranking of the alternatives (even when the ground truth is a quality level for each alternative), which we denote by σ*.
Formally, a noise model P is a set of distributions over rankings; the distribution corresponding to the ground truth θ is denoted by P(θ). The probability of sampling a ranking σ from P(θ) is denoted by Pr_P[σ; θ].
Similarly to voting rules, a noise model is called neutral if relabeling the alternatives permutes the probabilities of various rankings accordingly. Formally, a noise model P is called neutral if Pr_P[σ; θ] = Pr_P[τσ; τθ], for every permutation τ of the alternatives, every ranking σ, and every ground truth θ. Here, τσ and τθ denote the result of applying τ on σ and θ, respectively.
Classic noise models. Below, we define three classical noise models:
- The Mallows-φ model [18]. This is a rank-to-rank noise model, where the probability of a ranking decreases exponentially in its distance from the true ranking. Formally, the Mallows-φ model for m alternatives is defined as follows. For all rankings σ and σ*,

    Pr[σ; σ*] = φ^{d_KT(σ, σ*)} / Z_φ^m,    (1)

  where d_KT is the Kendall tau distance that measures total pairwise disagreement between two rankings, and the normalization constant Z_φ^m = ∏_{k=1}^{m} ∑_{j=0}^{k-1} φ^j is independent of σ*.
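As a quick sanity check of Equation (1), the following Python snippet (our own illustration, not from the paper) computes Mallows-φ probabilities exactly for small m and verifies that they sum to 1:

```python
import math
from itertools import permutations

def kendall_tau(r1, r2):
    """Number of pairwise disagreements between two rankings."""
    pos = {a: i for i, a in enumerate(r2)}
    n = len(r1)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if pos[r1[i]] > pos[r1[j]])

def mallows_prob(sigma, sigma_star, phi):
    """Pr[sigma; sigma_star] under Mallows-phi, via Eq. (1);
    Z = prod_{k=1}^{m} sum_{j=0}^{k-1} phi^j."""
    m = len(sigma_star)
    Z = math.prod(sum(phi ** j for j in range(k)) for k in range(1, m + 1))
    return phi ** kendall_tau(sigma, sigma_star) / Z

truth = ["a", "b", "c"]
total = sum(mallows_prob(list(s), truth, phi=0.5) for s in permutations(truth))
print(round(total, 10))  # -> 1.0
```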
- The Thurstone-Mosteller (TM) [26, 21] and the Plackett-Luce (PL) [17, 23] models. Both models are of the quality-to-rank type, and are special cases of a more general random utility model (see [2] for its use in social choice). In a random utility model, each alternative a has an associated true quality parameter θ_a and a distribution μ_a parametrized by θ_a. In each sample from the model, a noisy quality estimate X_a ∼ μ_a(θ_a) is obtained, and the ranking where the alternatives are sorted by their noisy qualities is returned.
  For the Thurstone-Mosteller model, μ_a(θ_a) is taken to be the normal distribution N(θ_a, σ²) with mean θ_a and variance σ². Its PDF is

    f(x) = (1/(√(2π) σ)) e^{-(x-θ_a)²/(2σ²)}.

  For the Plackett-Luce model, μ_a(θ_a) is taken to be the Gumbel distribution G(θ_a). Its PDF is f(x) = e^{-(x-θ_a) - e^{-(x-θ_a)}}. The CDF of the Gumbel distribution G(θ_a) is given by F(x) = e^{-e^{-(x-θ_a)}}. Note that we do not include a variance parameter because this subset of Gumbel distributions is sufficient for our purposes.
  The Plackett-Luce model has an alternative, more intuitive, formulation. Taking λ_a = e^{θ_a}, the probability of obtaining a ranking is the probability of sequentially choosing its alternatives from the pool of remaining alternatives. Each time, an alternative is chosen from the pool with probability proportional to its λ value. Hence, Pr[σ; {θ_a}] = ∏_{i=1}^{m} λ_{σ(i)} / ∑_{j=i}^{m} λ_{σ(j)}, where m is the number of alternatives.
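Both random utility models are straightforward to sample from: add the appropriate noise to the true qualities and sort. A minimal sketch (our own illustration):

```python
import numpy as np

def sample_ranking_tm(theta, sigma=1.0, rng=None):
    """One Thurstone-Mosteller vote: Gaussian noise around the true
    qualities, then sort alternatives by noisy quality (best first)."""
    rng = rng or np.random.default_rng()
    noisy = np.asarray(theta) + sigma * rng.standard_normal(len(theta))
    return list(np.argsort(-noisy))

def sample_ranking_pl(theta, rng=None):
    """One Plackett-Luce vote: Gumbel noise is equivalent to sequentially
    choosing alternatives with probability proportional to exp(theta_a)."""
    rng = rng or np.random.default_rng()
    noisy = np.asarray(theta) + rng.gumbel(size=len(theta))
    return list(np.argsort(-noisy))

theta = [2.0, 1.0, 0.0]  # alternative 0 has the highest true quality
print(sample_ranking_tm(theta))  # usually [0, 1, 2], with occasional swaps
print(sample_ranking_pl(theta))
```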
3 Theoretical Results
In this section, we present our theoretical results. But, first, we develop a novel model that will provide the backdrop for these results. Let N = {1, . . . , n} be a set of agents. Let S be the set of states of the world, and let |S| = t. These states represent different scenarios in which the agents need to make decisions; in Go, these are board positions. Let D denote a probability distribution over the states in S, which represents how likely it is to encounter each state. Each state s ∈ S has a set of alternatives A_s, which is the set of possible actions the agents can choose in state s. Let |A_s| = m_s for each s ∈ S. We assume that the set of alternatives is fixed in each state. We will later see how our model and results can be adjusted for varying sets of alternatives. The ground truth in state s ∈ S is denoted by θ_s, and the true ranking in state s is denoted by σ*_s.
Votes of agents. The agents are presented with states sampled from D. Their goal is to choose the true best alternative, σ*_s(1), in each state s ∈ S (although we discuss why our results also hold when the goal is to maximize expected quality). The inability of the agents to do so arises from two different sources: the suboptimal heuristics encoded within the agents, and their inability to fully optimize according to their own heuristics; these are respectively modeled by two noise models P_i^1 and P_i^2 associated with each agent i.
The agents inevitably employ heuristics (in domains like Go) and therefore can only obtain a noisy evaluation of the quality of different alternatives, which is modeled by the noise model P_i^1 of agent i. The biased view of agent i for the true order of the alternatives in A_s, denoted σ_is, is modeled as a sample from the distribution P_i^1(θ_s). Moreover, we assume that the agents' decision making is randomized. For example, top computer Go programs use Monte Carlo tree search algorithms [7]. We therefore assume that each agent i has another associated noise model P_i^2 such that the final ranking that the agent returns is a sample from P_i^2(σ_is). To summarize, agent i's vote is obtained by first sampling its biased truth from P_i^1, and then sampling its vote from P_i^2. It is clear that the composition P_i^2 ◦ P_i^1 plays a crucial role in this process.
Agent teams. Since the agents make errors in estimating the best alternative, it is natural to form a team of agents and aggregate their votes. We consider two team formation methods: a uniform team comprising multiple copies of a single agent that share the same biased truths but have different final votes due to randomness; and a diverse team comprising a single copy of each agent, with different biased truths and different votes. We show that the diverse team outperforms the uniform team irrespective of the choice of the agent that is copied in the uniform team.
3.1 Restrictions on Noise Models
No team can perform well if the noise models P_i^1 and P_i^2 lose all useful information. Hence, we impose intuitive restrictions on the noise models; our restrictions are mild, as we demonstrate (Theorem 1) that the three classical noise models presented in Section 2.1 satisfy all our assumptions.
PM-α noise model. For α > 0, a neutral noise model P is called pairwise majority preserving with strength α (or PM-α) if for every ground truth θ (and the corresponding true ranking σ*) and every i < j, we have

    Pr_{σ∼P(θ)}[σ*(i) ≻_σ σ*(j)] ≥ Pr_{σ∼P(θ)}[σ*(j) ≻_σ σ*(i)] + α,    (2)

where ≻_σ is the preference relation of a ranking σ sampled from P(θ). Note that this definition applies to both quality-to-rank and rank-to-rank noise models. In other words, in PM-α noise models every pairwise comparison in the true ranking is preserved in a sample with probability at least α more than the probability of it not being preserved.
PD-α noise model. For α > 0, a neutral noise model is called position-dominance preserving with strength α (or PD-α) if for every ground truth θ (and the corresponding true ranking σ*), every i < j, and every k ∈ [m - 1] (where m is the number of alternatives),

    Pr_{σ∼P(θ)}[σ*(i) ∈ σ([k])] ≥ Pr_{σ∼P(θ)}[σ*(j) ∈ σ([k])] + α.    (3)

That is, for every k ∈ [m - 1], an alternative higher in the true ranking has probability higher by at least α of appearing among the first k positions in a vote than an alternative at a lower position in the true ranking.
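For intuition, the margins in Equations (2) and (3) can be evaluated exactly on a small Mallows instance. The snippet below (illustrative only) computes both margins for the pair (σ*(1), σ*(2)) with m = 3 and φ = 0.5 by enumerating all six rankings:

```python
from itertools import permutations

def mallows_dist(sigma_star, phi):
    """Exact Mallows-phi distribution over all rankings of sigma_star."""
    pos = {a: i for i, a in enumerate(sigma_star)}
    def d_kt(s):
        n = len(s)
        return sum(1 for i in range(n) for j in range(i + 1, n)
                   if pos[s[i]] > pos[s[j]])
    weights = {s: phi ** d_kt(s) for s in permutations(sigma_star)}
    Z = sum(weights.values())
    return {s: w / Z for s, w in weights.items()}

dist = mallows_dist(("a", "b", "c"), phi=0.5)
# PM margin for the pair (a, b), cf. Eq. (2).
pm = sum(p for s, p in dist.items() if s.index("a") < s.index("b")) \
   - sum(p for s, p in dist.items() if s.index("b") < s.index("a"))
# PD margin for (a, b) at k = 1, cf. Eq. (3).
pd = sum(p for s, p in dist.items() if "a" in s[:1]) \
   - sum(p for s, p in dist.items() if "b" in s[:1])
print(round(pm, 4), round(pd, 4))  # -> 0.3333 0.2857, both strictly positive
```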
Compositions of noise models with restrictions. As mentioned above, compositions of noise models play an important role in our work. The next lemma shows that our restrictions on noise models are preserved, in a sense, under composition; its proof appears in Appendix B.
Lemma 1. For α₁, α₂ > 0, the composition of a PD-α₁ noise model with a PD-α₂ noise model is a PD-(α₁ · α₂) noise model.
Unfortunately, a similar result does not hold for PM-α noise models; the composition of a PM-α₁ noise model and a PM-α₂ noise model may yield a noise model that is not PM-α for any α > 0. In Appendix C, we give such an example. While this is slightly disappointing, we show that a stronger assumption on the first noise model in the composition suffices.
PPM-α noise model. For α > 0, a neutral noise model P is called positional pairwise majority preserving (or PPM-α) if for every ground truth θ (and the corresponding true ranking σ*) and every i < j, the quantity

    Pr_{σ∼P(θ)}[σ(i') = σ*(i) ∧ σ(j') = σ*(j)] - Pr_{σ∼P(θ)}[σ(j') = σ*(i) ∧ σ(i') = σ*(j)]    (4)

is non-negative for every i' < j', and at least α for some i' < j'. That is, for i' < j', the probability that σ*(i) and σ*(j) go to positions i' and j' respectively in a vote should be at least as high as the probability of them going to positions j' and i' respectively (and at least α greater for some i' and j'). Summing Equation (4) over all i' < j' shows that every PPM-α noise model is also PM-α.
Lemma 2. For α₁, α₂ > 0, if noise models P¹ and P² are PPM-α₁ and PM-α₂ respectively, then their composition P² ◦ P¹ is PM-(α₁ · α₂).
The lemma's proof is relegated to Appendix D.
3.2 Team Formation and the Main Theoretical Result
Let us explain the process of generating votes for the uniform team and for the diverse team. Consider a state s ∈ S. For the uniform team consisting of k copies of agent i, the biased truth σ_is is drawn from P_i^1(θ_s), and is common to all the copies. Each copy j then individually draws a vote σ_is^j from P_i^2(σ_is); we denote the collection of these votes by π_is^k = (σ_is^1, . . . , σ_is^k). Under a voting rule f, let X_is^k = I[f(π_is^k) = σ*_s(1)] be the indicator random variable denoting whether the uniform team selects the best alternative, namely σ*_s(1). Finally, agent i is chosen to maximize the overall accuracy E[X_is^k], where the expectation is over the state s and the draws from P_i^1 and P_i^2.
The diverse team consists of one copy of each agent i ∈ N. Importantly, although we can take multiple copies of each agent and a total of k copies, we show that taking even a single copy of each agent outperforms the uniform team. Each agent i has its own biased truth σ_is drawn from P_i^1(θ_s), and it draws its vote σ_is from P_i^2(σ_is). This results in the profile π_s^n = (σ_1s, . . . , σ_ns). Let Y_s^n = I[f(π_s^n) = σ*_s(1)] be the indicator random variable denoting whether the diverse team selects the best alternative, namely σ*_s(1).
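The two vote-generation processes are easy to simulate. The toy experiment below is our own construction (Gaussian first-stage noise, Gumbel second-stage noise, plurality aggregation), not the paper's model or experiments; on such instances the diverse team typically beats a uniform team built from even the least-biased agent:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n_states, n_agents = 5, 2000, 15
# True qualities per state, sorted so that alternative 0 is the best.
theta = np.sort(rng.normal(size=(n_states, m)), axis=1)[:, ::-1]
bias_scale = np.linspace(0.8, 2.0, n_agents)  # first-stage noise per agent

def biased_truth(q, scale):
    """First-stage noise P_i^1: an agent-specific noisy view of theta_s."""
    return q + rng.normal(scale=scale, size=m)

def top_vote(biased_q):
    """Second-stage noise P_i^2 (PL-style): Gumbel noise, report top choice."""
    return int(np.argmax(biased_q + rng.gumbel(size=m)))

uniform_hits, diverse_hits = 0, 0
for s in range(n_states):
    # Uniform team: copies of the least-biased agent share one biased truth.
    shared = biased_truth(theta[s], bias_scale[0])
    votes_u = [top_vote(shared) for _ in range(n_agents)]
    uniform_hits += np.bincount(votes_u, minlength=m).argmax() == 0
    # Diverse team: every agent draws its own biased truth and one vote.
    votes_d = [top_vote(biased_truth(theta[s], sc)) for sc in bias_scale]
    diverse_hits += np.bincount(votes_d, minlength=m).argmax() == 0
print("uniform:", uniform_hits / n_states, "diverse:", diverse_hits / n_states)
```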
Below we put forward a number of assumptions on noise models; different subsets of assumptions are required for different results. We remark that each agent i ∈ N has two noise models for each possible number of alternatives m. However, for the sake of notational convenience, we refer to these noise models as P_i^1 and P_i^2 irrespective of m. This is natural, as the classic noise models defined in Section 2.1 describe a noise model for each m.
A1 For each agent i ∈ N, the associated noise models P_i^1 and P_i^2 are neutral.
A2 There exists a universal constant β > 0 such that for each agent i ∈ N, every possible ground truth θ (and the corresponding true ranking σ*), and every k ∈ [m] (where m is the number of alternatives), Pr_{σ∼P_i^1(θ)}[σ*(1) = σ(k)] ≤ 1 - β.
In words, assumption A2 requires that the true best alternative appear in any particular position with probability at most a constant which is less than 1. This ensures that the noise model indeed introduces a non-zero constant amount of noise in the position of the true best alternative.
A3 There exists a universal constant α > 0 such that for each agent i ∈ N, the noise models P_i^1 and P_i^2 are PD-α.
A4 There exists a universal constant α > 0 such that for each agent i ∈ N, the noise models P_i^1 and P_i^2 are PPM-α and PM-α, respectively.
We show that the preceding assumptions are indeed very mild in that the classical noise models introduced in Section 2.1 satisfy all four assumptions. The proof of the following result appears in Appendix E.
Theorem 1. With a fixed set of alternatives (such that the true qualities of every two alternatives are distinct in the case where the ground truth is the set of true qualities), the Mallows-φ model with φ ∈ [ε, 1 - ε], the Thurstone-Mosteller model with variance parameter σ² ∈ [L, U], and the Plackett-Luce model all satisfy assumptions A1, A2, A3, and A4, given that ε ∈ (0, 1/2), L > 0, and U > L are constants.
We are now ready to present our main result; its proof appears in Appendix F.
Theorem 2. Let D be a distribution over the state space S. Let the set of alternatives in all states {A_s}_{s∈S} be fixed.
1. Under assumptions A1 and A2, and for any neutral voting rule f, there exists a universal constant c > 0 such that for every k and every N = {1, . . . , n}, it holds that max_{i∈N} E[X_is^k] ≤ 1 - c, where the expectation is over the state s ∼ D, the biased truths σ_is ∼ P_i^1(θ_s) for all s ∈ S, and the votes σ_is^j ∼ P_i^2(σ_is) for all j ∈ [k].
2. Under each of the following two conditions, for a voting rule f, it holds that lim_{n→∞} E[Y_s^n] = 1, where the expectation is over the state s ∼ D, the biased truths σ_is ∼ P_i^1(θ_s) for all i ∈ N and s ∈ S, and the votes σ_is ∼ P_i^2(σ_is) for all i ∈ N and s ∈ S: (i) assumptions A1 and A3 hold, and f is PD-c; (ii) assumptions A1 and A4 hold, and f is Condorcet consistent.
4 Experimental Results
[Figure 1: Winning rates for Diverse (continuous line) and Uniform (dashed line), for a variety of team sizes and voting rules. Panel (a): the plurality voting rule, for 2 to 25 agents. Panel (b): plurality, Borda, harmonic, maximin, and Copeland, for 2 to 15 agents. Axes: number of agents vs. winning rate.]

We now present our experimental results in the Computer Go domain. We use a novel methodology for generating large teams, which we view as one of our main contributions. It is fundamentally
different from that of Marcolino et al. [19, 20], who created a diverse team by combining four
different, independently developed Go programs. Here we automatically create arbitrarily many
diverse agents by parameterizing one Go program. Specifically, we use different parametrizations
of Fuego 1.1 [10]. Fuego is a state-of-the-art, open source, publicly available Go program; it won
first place in 19×19 Go in the Fourth Computer Go UEC Cup, 2010, and also won first place in 9×9
Go in the 14th Computer Olympiad, 2009. We sample random values for a set of parameters for each
generated agent, in order to change its behavior. In Appendix G we list the sampled parameters, and
the range of sampled values. The original Fuego is the strongest agent, as we show in Appendix H.
All results were obtained by simulating 1000 9×9 Go games, on an HP dl165 with dual dodeca-core
2.33GHz processors and 48GB of RAM. We compare the winning rates of games played against a
fixed opponent. In all games the system under evaluation plays as white, against the original Fuego
playing as black. We evaluate two types of teams: Diverse is composed of different agents, and
Uniform is composed of copies of a specific agent (with different random seeds). In order to study
the performance of the uniform team, for each sample (which is an entire Go game) we construct
a team consisting of copies of a randomly chosen agent from the diverse team. Hence, the results
presented for Uniform are approximately the mean behavior of all possible uniform teams, given the
set of agents in the diverse team. In all graphs, the error bars show 99% confidence intervals.
Fuego (and, in general, all programs using Monte Carlo tree search algorithms) is not originally
designed to output a ranking over all possible moves (alternatives), but rather to output a single
move: the best one according to its search tree (of course, there is no guarantee that the selected
move is in fact the best one). In this paper, however, we wish to compare plurality (which only
requires each agent's top choice) with voting rules that require an entire ranking from each agent.
Hence, we modified Fuego to make it output a ranking over moves, by using the data available in its
search tree (we rank by the number of simulations per alternative). We ran games under 5 different
voting rules: plurality, Borda count, the harmonic rule, maximin, and Copeland. Plurality, Borda
count (which we limit to the top 6 positions in the rankings), and the harmonic rule (see Appendix A)
are PD-c rules, while maximin and Copeland are Condorcet-consistent rules (see, e.g., [24]).
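For reference, here is a small implementation of the positional scoring rules mentioned above. The exact truncated Borda weight vector is our own assumption, since the text only says the count is limited to the top 6 positions:

```python
def positional_scores(profile, weights):
    """Generic positional scoring rule: weights[p] points for position p."""
    scores = {}
    for ranking in profile:
        for p, alt in enumerate(ranking):
            if p < len(weights):
                scores[alt] = scores.get(alt, 0.0) + weights[p]
    return scores

def borda_top6(profile):
    """Borda count truncated to the top 6 positions (assumed weights)."""
    return positional_scores(profile, weights=[6, 5, 4, 3, 2, 1])

def harmonic(profile, m):
    """Harmonic rule: 1/p points for position p (1-indexed)."""
    return positional_scores(profile, weights=[1.0 / p for p in range(1, m + 1)])

votes = [["a", "b", "c"], ["b", "a", "c"], ["a", "c", "b"]]
scores = borda_top6(votes)
print(max(scores, key=scores.get))  # -> 'a'
```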
We first discuss Figure 1(a), which shows the winning rates of Diverse and Uniform for a varying
number of agents using the plurality voting rule. The winning rates of both teams increase as the
number of agents increases. Diverse and Uniform start with similar winning rates, around 35%
with 2 agents and 40% with 5 agents, but with 25 agents Diverse reaches 57%, while Uniform only
reaches 45.9%. The improvement of Diverse over Uniform is not statistically significant with 5
agents (p = 0.5836), but is highly statistically significant with 25 agents (p = 8.592 × 10⁻⁷). We
perform linear regression on the winning rates of the two teams to compare their rates of improvement in performance as the number of agents increases. Linear regression (shown as the dotted lines
in Figure 1(a)) gives the function y = 0.0094x + 0.3656 for Diverse (R² = 0.9206, p = 0.0024) and y = 0.0050x + 0.3542 for Uniform (R² = 0.8712, p = 0.0065). In particular, the linear approximation for the winning rate of Diverse increases roughly twice as fast as the one for Uniform
as the number of agents increases.
Despite the strong performance of Diverse (it beats the original Fuego more than 50% of the time),
it seems surprising that its winning rate converges to a constant that is significantly smaller than 1, in
light of Theorem 2. There are (at least) two reasons for this apparent discrepancy. First, Theorem 2
deals with the probability of making good moves in individual board positions (states), whereas
the figure shows winning rates. Even if the former probability is very high, a bad decision in a
single state of a game can cost Diverse the entire game. Second, our diverse team is formed by
randomly sampling different parametrizations of Fuego. Hence, there might still exist a subset of
world states where all agents would play badly, regardless of the parametrization. In other words,
the parametrization procedure may not be generating the idealized diverse team (see Appendix H).
Figure 1(b) compares the results across different voting rules. As mentioned above, to generate
ranked votes, we use the internal data in the search tree of an agent?s run (in particular, we rank
using the number of simulations per alternative). We can see that increasing the number of agents
has a positive impact for all voting rules under consideration. Moving from 5 to 15 agents for
Diverse, plurality has a 14% increase in the winning rate, whereas other voting rules have a mean
increase of only 6.85% (std = 2.25%), close to half the improvement of plurality. For Uniform,
the impact of increasing the number of agents is much smaller: Moving from 5 to 15 agents, the
increase for plurality is 5.3%, while the mean increase for other voting rules is 5.70% (std = 1.45%).
Plurality surprisingly seems to be the best voting rule in these experiments, even though it uses less
information from the submitted rankings. This suggests that the ranking method used does not
typically place good alternatives in high positions other than the very top.
Hence, we introduce a novel procedure to generate rankings, which we view as another major methodological contribution. To generate a ranked vote from an agent on a given board state, we run the agent on the board state 10 times (each run is independent of other runs), and rank the moves by the number of times they are played by the agent. We use these votes to compare plurality with the four other voting rules, for Diverse with 5 agents. Figure 2 shows the results. All voting rules outperform plurality; Borda and maximin are statistically significantly better (p < 0.007 and p = 0.06, respectively). All ranked voting rules are also statistically significantly better than the non-sampled (single-run) version of plurality.

[Figure 2: Winning rates of all voting rules for Diverse with 5 agents, using the new ranking methodology: plurality (non-sampled), plurality (sampled), Borda, harmonic, maximin, and Copeland.]
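The sampling-based ranking extraction is a few lines of code; here is a hedged sketch with a hypothetical randomized agent standing in for a Fuego run:

```python
import random
from collections import Counter

def sampled_ranking(choose_move, state, runs=10):
    """Rank moves by how often a randomized agent plays them over independent
    runs (the ranking-extraction procedure described above). Moves that never
    appear in any run are simply left unranked in this sketch."""
    counts = Counter(choose_move(state) for _ in range(runs))
    return [move for move, _ in counts.most_common()]

def toy_agent(state):
    """Hypothetical randomized agent, for illustration only."""
    return random.choices(["d5", "e5", "c4"], weights=[6, 3, 1])[0]

print(sampled_ranking(toy_agent, state=None))  # e.g. ['d5', 'e5', 'c4']
```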
5 Discussion
While we have focused on computer Go for motivation, we have argued in Section 1 that our theoretical model is more widely applicable. At the very least, it is relevant to modeling game-playing
agents in the context of other games. For example, random sampling techniques play a key role in
the design of computer poker programs [25]. A complication in some poker games is that the space
of possible moves, in some stages of the game, is infinite, but this issue can likely be circumvented
via an appropriate discretization.
Our theoretical model does have (at least) one major shortcoming when applied to multistage games
like Go or poker: it assumes that the state space is "flat". So, for example, making an excellent move
in one state is useless if the agent makes a horrible move in a subsequent state. Moreover, rather
than having a fixed probability distribution D over states, the agents' strategies actually determine
which states are more likely to be reached. To the best of our knowledge, existing models of voting
do not capture sequential decision making, with a few possible exceptions that are not relevant to our setting, such as the work of Parkes and Procaccia [22]. From a theoretical and conceptual
viewpoint, the main open challenge is to extend our model to explicitly deal with sequentiality.
Acknowledgments: Procaccia and Shah were partially supported by the NSF under grants IIS-1350598 and CCF-1215883, and Marcolino by MURI grant W911NF-11-1-0332.
References
[1] H. Azari Soufiani, W. Z. Chen, D. C. Parkes, and L. Xia. Generalized method-of-moments for rank aggregation. In Proc. of 27th NIPS, pages 2706–2714, 2013.
[2] H. Azari Soufiani, D. C. Parkes, and L. Xia. Random utility theory for social choice. In Proc. of 26th NIPS, pages 126–134, 2012.
[3] H. Azari Soufiani, D. C. Parkes, and L. Xia. Computing parametric ranking models via rank-breaking. In Proc. of 31st ICML, 2014. Forthcoming.
[4] P. Baudiš and J.-l. Gailly. PACHI: State of the art open source Go program. In Proc. of 13th ACG, pages 24–38, 2011.
[5] C. Boutilier, I. Caragiannis, S. Haber, T. Lu, A. D. Procaccia, and O. Sheffet. Optimal social choice functions: A utilitarian view. In Proc. of 13th EC, pages 197–214, 2012.
[6] Y. Braouezec. Committee, expert advice, and the weighted majority algorithm: An application to the pricing decision of a monopolist. Computational Economics, 35(3):245–267, 2010.
[7] C. Browne, E. J. Powley, D. Whitehouse, S. M. Lucas, P. I. Cowling, P. Rohlfshagen, S. Tavener, D. Perez, S. Samothrakis, and S. Colton. A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1):1–43, 2012.
[8] I. Caragiannis, A. D. Procaccia, and N. Shah. When do noisy votes reveal the truth? In Proc. of 14th EC, pages 143–160, 2013.
[9] M. de Condorcet. Essai sur l'application de l'analyse à la probabilité des décisions rendues à la pluralité des voix. Imprimerie Royale, 1785. Facsimile published in 1972 by Chelsea Publishing Company, New York.
[10] M. Enzenberger, M. Müller, B. Arneson, and R. Segal. Fuego: An open-source framework for board games and Go engine based on Monte Carlo tree search. IEEE Transactions on Computational Intelligence and AI in Games, 2(4):259–270, 2010.
[11] E. Hellinger. Neue Begründung der Theorie quadratischer Formen von unendlichvielen Veränderlichen. Journal für die reine und angewandte Mathematik, 136:210–271, 1909. In German.
[12] L. Hong and S. E. Page. Groups of diverse problem solvers can outperform groups of high ability problem solvers. Proceedings of the National Academy of Sciences of the United States of America, 101(46):16385–16389, 2004.
[13] L. Hong and S. E. Page. Some microfoundations of collective wisdom. In H. Landemore and J. Elster, editors, Collective Wisdom, pages 56–71. Cambridge University Press, 2009.
[14] M. LiCalzi and O. Surucu. The power of diversity over large solution spaces. Management Science, 58(7):1408–1421, 2012.
[15] T.-Y. Liu. Learning to Rank for Information Retrieval. Springer, 2011.
[16] T. Lu and C. Boutilier. Learning Mallows models with pairwise preferences. In Proc. of 28th ICML, pages 145–152, 2011.
[17] R. D. Luce. Individual Choice Behavior: A Theoretical Analysis. Wiley, 1959.
[18] C. L. Mallows. Non-null ranking models. Biometrika, 44:114–130, 1957.
[19] L. S. Marcolino, A. X. Jiang, and M. Tambe. Multi-agent team formation: Diversity beats strength? In Proc. of 23rd IJCAI, pages 279–285, 2013.
[20] L. S. Marcolino, H. Xu, A. X. Jiang, M. Tambe, and E. Bowring. Give a hard problem to a diverse team: Exploring large action spaces. In Proc. of 28th AAAI, 2014.
[21] F. Mosteller. Remarks on the method of paired comparisons: I. The least squares solution assuming equal standard deviations and equal correlations. Psychometrika, 16(1):3–9, 1951.
[22] D. C. Parkes and A. D. Procaccia. Dynamic social choice with evolving preferences. In Proc. of 27th AAAI, pages 767–773, 2013.
[23] R. Plackett. The analysis of permutations. Applied Statistics, 24:193–202, 1975.
[24] A. D. Procaccia, S. J. Reddi, and N. Shah. A maximum likelihood approach for selecting sets of alternatives. In Proc. of 28th UAI, pages 695–704, 2012.
[25] T. Sandholm. The state of solving large incomplete-information games, and application to poker. AI Magazine, 31(4):13–32, 2010.
[26] L. L. Thurstone. A law of comparative judgement. Psychological Review, 34:273–286, 1927.
Fairness in Multi-Agent Sequential Decision-Making
Chongjie Zhang and Julie A. Shah
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139
{chongjie,julie a shah}@csail.mit.edu
Abstract
We define a fairness solution criterion for multi-agent decision-making
problems, where agents have local interests. This new criterion aims
to maximize the worst performance of agents while taking the overall
performance into consideration. We develop a simple linear programming approach and a more scalable game-theoretic approach for computing an
optimal fairness policy. This game-theoretic approach formulates this
fairness optimization as a two-player zero-sum game and employs an
iterative algorithm for finding a Nash equilibrium, corresponding to an
optimal fairness policy. We scale up this approach by exploiting problem structure and value function approximation. Our experiments on
resource allocation problems show that this fairness criterion provides a
more favorable solution than the utilitarian criterion, and that our game-theoretic approach is significantly faster than linear programming.
Introduction
Factored multi-agent MDPs [4] offer a powerful mathematical framework for studying multi-agent
sequential decision problems in the presence of uncertainty. Its compact representation allows us to
model large multi-agent planning problems and to develop efficient methods for solving them. Existing approaches to solving factored multi-agent MDPs [4] have focused on the utilitarian solution
criterion, i.e., maximizing the sum of individual utilities. The computed utilitarian solution is optimal from the perspective of the system where the performance is additive. However, as the utilitarian
solution often discriminates against some agents, it is not desirable for many practical applications
where agents have their own interests and fairness is expected. For example, in manufacturing plants,
resources need to be fairly and dynamically allocated to work stations on assembly lines in order
to maximize the throughput; in telecommunication systems, wireless bandwidth needs to be fairly
allocated to avoid "unhappy" customers; in transportation systems, traffic lights are controlled so
that traffic flow is balanced.
In this paper, we define a fairness solution criterion, called regularized maximin fairness, for multiagent MDPs. This criterion aims to maximize the worst performance of agents with a consideration
on the overall performance. We show that its optimal solution is Pareto-efficient. In this paper, we
will focus on centralized joint policies, which are sensible for many practical resource allocation
problems. We develop a simple linear programming approach and a more scalable game-theoretic
approach for computing an optimal fairness policy. This game-theoretic approach formulates this
fairness optimization for factored multi-agent MDPs as a two-player, zero-sum game. Inspired by
theoretical results that two-player games tend to have a Nash equilibrium (NE) with a small support [7], we develop an iterative algorithm that incrementally solves this game by starting with a
small subgame. This game-theoretic approach can scale up to large problems by relaxing the termination condition, exploiting problem structure in factored multi-agent MDPs, and applying value
function approximation. Our experiments on a factory resource allocation problem show that this
fairness criterion provides a more favorable solution than the utilitarian criterion [4], and our game-theoretic approach is significantly faster than linear programming.
Multi-agent decision-making model and its fairness solution
We are interested in multi-agent sequential decision-making problems, where agents have their own
interests. We assume that agents are cooperating. Cooperation can be proactive, e.g., sharing resources with other agents to sustain cooperation that benefits all agents, or passive, where agents'
actions are controlled by a third party, as with centralized resource allocation. We use a factored
multi-agent Markov decision process (MDP) to model multi-agent sequential decision-making
problems [4]. A factored multi-agent MDP is defined by a tuple ⟨I, X, A, T, {R_i}_{i∈I}, b⟩, where
I = {1, . . . , n} is a set of agent indices.
X is a state space represented by a set of state variables X = {X1 , . . . , Xm }. A state is defined
by a vector x of value assignments to each state variable. We assume the domain of each
variable is finite.
A = ×_{i∈I} A_i is a finite set of joint actions, where A_i is a finite set of actions available for agent i.
The joint action a = ⟨a_1, . . . , a_n⟩ is defined by a vector of individual action choices.
T is the transition model. T(x′ | x, a) specifies the probability of transitioning to the next state x′
after a joint action a is taken in the current state x. As in [4], we assume that the transition
model can be factored and compactly represented by a dynamic Bayesian network (DBN).
R_i(x_i, a_i) is a local reward function of agent i, which is defined on a small subset of variables x_i ⊆ X
and a_i ⊆ A.
b is the initial distribution of states.
This model allows us to exploit problem structures to represent exponentially-large multi-agent
MDPs compactly. Unlike factored MDPs defined in [4], which have one single reward function represented by a sum of partial reward functions, this multi-agent model has a local reward function for
each agent. From the multi-agent perspective, existing approaches to factored MDPs [4] essentially
aim to compute a control policy that maximizes the utilitarian criterion (i.e., the sum of individual
utilities). As the utilitarian criterion often provides a solution that is not fair or satisfactory for some
agents (e.g., as shown in the experiment section), it may not be desirable for problems where agents
have local interests.
In contrast to the utilitarian criterion, an egalitarian criterion, called maximin fairness, has been
studied in networking [1, 9], where resources are allocated to optimize the worst performance. This
egalitarian criterion exploits the maximin principle in Rawlsian theory of justice [14], maximizing
the benefits of the least-advantaged members of society. In the following, we will define a fairness
solution criterion for multi-agent MDPs by adapting and combining the maximin fairness criterion
and the utilitarian criterion. Under this new criterion, an optimal policy for multi-agent MDPs aims
to maximize the worst performance of agents with a consideration on the overall performance.
A joint stochastic policy π : X × A → ℝ is a function that returns the probability of taking joint
action a ∈ A for any given state x ∈ X. The utility of agent i under a joint policy π is defined as its
infinite-horizon, total discounted reward, which is denoted by
η(i, π) = E[ Σ_{t=0}^{∞} γ^t R_i(x_t, a_t) | π, b ].   (1)
where γ is the discount factor, the expectation operator E(·) averages over stochastic action selection
and state transition, b is the initial state distribution, and xt and at are the state and the joint action
taken at time t, respectively.
To achieve both fairness and efficiency, our goal for a given multi-agent MDP is to find a joint control
policy π*, called a regularized maximin fairness policy, that maximizes the following objective
value function
V(π) = min_{i∈I} η(i, π) + (ε/n) Σ_{i∈I} η(i, π),   (2)
2
where n = |I| is the number of agents and ε is a strictly positive real number, chosen to be arbitrarily
small.¹ This fairness objective function can be seen as a lexicographic aggregation of the egalitarian
criterion (min) and the utilitarian criterion (sum of utilities) with priority to egalitarianism. This fairness
criterion can also be seen as a particular instance of the weighted Tchebycheff distance with respect
to a reference point, a classical scalarization function used to generate compromise solutions in
multi-objective optimization [16]. Note that the optimal policy under the egalitarian (or maximin)
criterion alone may not be Pareto efficient, but the optimal policy under this regularized fairness
criterion is guaranteed to be Pareto efficient.
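To make the criterion concrete, the following sketch evaluates objective (2) from a vector of per-agent utilities η(i, π); it is an illustration only (the example utilities and the value of ε are hypothetical), not code from the paper.

import numpy as np

def regularized_maximin_value(utilities, eps):
    # Objective (2): min_i eta(i, pi) + (eps / n) * sum_i eta(i, pi).
    utilities = np.asarray(utilities, dtype=float)
    n = utilities.size
    return utilities.min() + (eps / n) * utilities.sum()

# The utilitarian sum prefers the skewed utility profile, but the
# regularized maximin criterion prefers the balanced one.
print(regularized_maximin_value([5.0, 5.0, 5.0], eps=0.01))  # 5.05
print(regularized_maximin_value([1.0, 9.0, 9.0], eps=0.01))  # about 1.063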
Definition 1. A joint control policy π is said to be Pareto efficient if and only if there does not exist
another joint policy π′ such that the utility is at least as high for all agents and strictly higher for at
least one agent; that is, ∄π′ : ∀i, η(i, π′) ≥ η(i, π) ∧ ∃i, η(i, π′) > η(i, π).
Proposition 1. A regularized maximin fairness policy π* is Pareto efficient.
Proof. We prove by contradiction. Assume the regularized maximin fairness policy π* is not Pareto
efficient. Then there must exist a policy π such that ∀i, η(i, π) ≥ η(i, π*) and ∃i, η(i, π) > η(i, π*).
Then V(π) = min_{i∈I} η(i, π) + (ε/n) Σ_{i∈I} η(i, π) > min_{i∈I} η(i, π*) + (ε/n) Σ_{i∈I} η(i, π*) = V(π*), which
contradicts the pre-condition that π* is a regularized maximin fairness policy.
In this paper, we will mainly focus on centralized policies for multi-agent MDPs. This focus is
sensible because we assume that, although agents have local interests, they are also willing to cooperate. Many practical problems modeled by multi-agent MDPs use centralized policies to achieve
fairness, e.g., network bandwidth allocation by telecommunication companies, traffic congestion
control, public service allocation, and, more generally, fair resource allocation under uncertainty.
On the other hand, we can derive decentralized policies for individual agents from a maximin fairness policy π* by marginalizing it over the actions of all other agents. If the maximin fairness policy
is deterministic, then the derived decentralized policy profile is also optimal under the regularized
maximin fairness criterion. Although such a guarantee generally does not hold for stochastic policies, as indicated by the following proposition, the derived decentralized policy is a bounded solution
in the space of decentralized policies under the regularized maximin fairness criterion.
Proposition 2. Let π*_c be an optimal centralized policy and π*_dec be an optimal decentralized
policy profile under the regularized maximin fairness criterion. Let π_dec be a decentralized policy
profile derived from π*_c by marginalization. The values of the policies π*_c and π_dec provide bounds for
the value of π*_dec; that is,

V(π*_c) ≥ V(π*_dec) ≥ V(π_dec).
The proof of this proposition is quite straightforward. The first inequality holds because any decentralized policy profile can be converted to a centralized policy by taking the product, and the second inequality
holds because π*_dec is an optimal decentralized policy profile. When the bounds provided by V(π*_c)
and V(π_dec) are close, we can conclude that π_dec is almost an optimal decentralized policy profile
under the regularized maximin fairness criterion.
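As a concrete illustration of the marginalization step, the sketch below assumes the joint policy is stored as an array indexed by state and by each agent's action (our own storage convention, not the paper's):

import numpy as np

def marginalize_policy(joint_policy, agent):
    # joint_policy[x, a_1, ..., a_n] = pi(x, a).  Summing out the other
    # agents' action axes gives pi_i(x, a_i) = sum over a_{-i} of pi(x, a).
    axes = tuple(k for k in range(1, joint_policy.ndim) if k != agent + 1)
    return joint_policy.sum(axis=axes)

# Two agents with two actions each and a single state: a correlated
# joint policy whose marginals are uniform for both agents.
pi = np.array([[[0.4, 0.1],
                [0.1, 0.4]]])               # shape (1, 2, 2)
print(marginalize_policy(pi, agent=0))      # [[0.5 0.5]]
print(marginalize_policy(pi, agent=1))      # [[0.5 0.5]]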
In this paper, we are primarily concerned with total discounted rewards for an infinite horizon, but
the definition, analysis, and computation of regularized maximin fairness can be adapted to a finite
horizon with an undiscounted sum of rewards. In the next section, we will present approaches to
computing the regularized maximin fairness policy for infinite-horizon multi-agent MDPs.
Computing Regularized Maximin Fairness Policies
In this section, we present two approaches to computing regularized maximin fairness policies for
multi-agent MDPs: a simple linear programming approach and a game-theoretic approach. The
former approach is adapted from the linear programming formulation of single-agent MDPs. The
latter approach formulates this fairness problem as a two-player zero-sum game and employs an
iterative search method for finding a Nash equilibrium that contains a regularized maximin fairness
policy. This iterative algorithm allows us to scale up to large problems by exploiting structures in
multi-agent MDPs and value function approximation and employing a relaxed termination condition.
¹ In some applications, we may choose a properly large ε to trade off fairness and the overall performance.
3
A linear programming approach
For a multi-agent MDP, given a joint policy and the initial state distribution, frequencies of visiting
state-action pairs are uniquely determined. We use f_π(x, a) to denote the total discounted probability, under the policy π and initial state distribution b, that the system occupies state x and chooses
action a. Using this frequency function, we can rewrite the expected total discounted rewards as
follows:

η(i, π) = Σ_x Σ_a f_π(x, a) R_i(x_i, a_i),   (3)

where x_i ⊆ x and a_i ⊆ a.
Since the dynamics of multi-agent MDPs are Markovian, as for the single-agent MDP, we can
adapt the linear programming formulation of single-agent MDPs for finding an optimal centralized
policy for multi-agent MDPs under the regularized maximin fairness criterion as follows:

max_f   min_{i∈I} Σ_x Σ_a f(x, a) R_i(x_i, a_i) + (ε/n) Σ_{i∈I} Σ_x Σ_a f(x, a) R_i(x_i, a_i)
s.t.    Σ_a f(x′, a) = b(x′) + Σ_x Σ_a γ T(x′ | x, a) f(x, a),   ∀x′ ∈ X
        f(x, a) ≥ 0,   for all a ∈ A and x ∈ X.                  (4)
Constraints are included to ensure that f(x, a) is well-defined. The first set of constraints requires
that the probability of visiting state x′ is equal to the initial probability of state x′ plus the sum of
all probabilities of entering into state x′. We linearize this program by introducing another variable
z, which represents the minimum expected total discounted reward among all agents, as follows:
max_{f,z}   z + (ε/n) Σ_{i∈I} Σ_x Σ_a f(x, a) R_i(x_i, a_i)
s.t.        z ≤ Σ_x Σ_a f(x, a) R_i(x_i, a_i),                      ∀i ∈ I
            Σ_a f(x′, a) = b(x′) + Σ_x Σ_a γ T(x′ | x, a) f(x, a),   ∀x′ ∈ X
            f(x, a) ≥ 0,   for all a ∈ A and x ∈ X.                  (5)
We can employ existing linear programming solvers (e.g., the simplex method) to compute an optimal solution f* for problem (5) and derive a policy π* from f* by normalization:

π(x, a) = f(x, a) / Σ_{a∈A} f(x, a).   (6)
Using Theorem 6.9.1 in [13], we can easily show that the derived policy π* is optimal under the
regularized maximin fairness criterion. This linear programming approach is simple, but it is not scalable for multi-agent MDPs with large state spaces or large numbers of agents. This is because the
number of constraints of the linear program is |X| + |I|. In the next section, we present a more
scalable game-theoretic approach for large multi-agent MDPs.
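For small flat MDPs, program (5) can be handed directly to an off-the-shelf solver. The sketch below builds the constraint matrices for scipy.optimize.linprog under the assumption of dense arrays T[x, a, x'] and R[i, x, a]; it illustrates the formulation and is not the authors' implementation.

import numpy as np
from scipy.optimize import linprog

def maximin_fair_policy(T, R, b, gamma, eps):
    # T: (S, A, S) transitions, R: (n, S, A) local rewards, b: (S,) initial
    # distribution.  Decision variables are [z, f(x, a) flattened].
    n, S, A = R.shape
    nf = S * A
    Rf = R.reshape(n, nf)
    # Maximize z + (eps/n) * sum_i R_i . f, i.e., minimize its negation.
    c = np.concatenate(([-1.0], -(eps / n) * Rf.sum(axis=0)))
    # z - R_i . f <= 0 for every agent i.
    A_ub = np.hstack([np.ones((n, 1)), -Rf])
    # Flow constraints: sum_a f(x',a) - gamma * sum_{x,a} T(x'|x,a) f(x,a) = b(x').
    flow = np.zeros((S, nf))
    for x in range(S):
        for a in range(A):
            j = x * A + a
            flow[x, j] += 1.0                  # occupancy of state x
            flow[:, j] -= gamma * T[x, a, :]   # discounted inflow to each x'
    A_eq = np.hstack([np.zeros((S, 1)), flow])
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=b,
                  bounds=[(None, None)] + [(0, None)] * nf)
    f = res.x[1:].reshape(S, A)
    denom = f.sum(axis=1, keepdims=True)
    denom[denom == 0] = 1.0                    # unreached states: arbitrary policy
    return f / denom                           # normalization (6)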
A game-theoretic approach
Since the fairness objective function in (2) can be turned into a maximin form, inspired by von
Neumann's minimax theorem we can formulate this optimization problem as a two-player zero-sum game. Motivated by theoretical results that two-player games tend to have a Nash equilibrium
(NE) with a small support, we develop an iterative algorithm for solving zero-sum games.
Let Π_S and Π_D be the sets of stochastic Markovian policies and deterministic Markovian policies,
respectively. As shown in [13], every stochastic policy can be represented by a convex combination
of deterministic policies, and every convex combination of deterministic policies corresponds to a
stochastic policy. Specifically, for any stochastic policy π^s ∈ Π_S, we can represent π^s = Σ_i p_i π^d_i
using some set of {π^d_1, . . . , π^d_k} ⊆ Π_D with probability distribution p.
4
Algorithm 1: An iterative approach to computing the regularized maximin fairness policy
D
? D , I)
? with small subsets ?
?D
?
1 Initialize a zero-sum game G(?
s ? ? and I ? I ;
2 repeat
? D , I)
? ;
3
(p? , q ? , V ? ) ? compute a Nash equilibrium of game G(?
d
4
(? , Vp ) ? compute the best-response deterministic policy against q ? in G(?D , I) ;
?D ? ?
? D ? {? d } ;
5
if Vp > V ? then ?
6
(i, Vq ) ? compute the best response against p? among all agents I;
7
if Vq < V ? then I? ? I? ? {i} ;
? D , I)
? changes then expand its payoff matrix with U (? d , i) for new pairs (? d , i) ;
8
if G(?
? D , I)
? converges;
9 until game G(?
?D ;
10 return the regularized maximin fairness policy ?ps? = p? ? ?
Let U(π, i) = η(i, π) + (ε/n) Σ_{j∈I} η(j, π). We can construct a two-player zero-sum game G(Π_D, I)
as follows: the maximizing player, who aims to maximize the value of the game, chooses a deterministic policy π^d from Π_D; the minimizing player, who aims to minimize the value of the game,
chooses an agent indexed by i in the multi-agent MDP from I; and the payoff matrix has an entry
U(π^d, i) for each pair π^d ∈ Π_D and i ∈ I. The following proposition shows that we can compute
the regularized maximin fairness policy by solving G(Π_D, I).
Proposition 3. Let the strategy profile (p*, q*) be a NE of the game G(Π_D, I), and let π*_ps be the
stochastic policy derived from (p*, q*) with π*_ps(x, a) = Σ_i p*_i π^d_i(x, a), where p*_i is the ith
component of p*, i.e., the probability of choosing the deterministic policy π^d_i ∈ Π_D. Then π*_ps is a
regularized maximin fairness policy.
Proof. According to von Neumann's minimax theorem, p* is also the maximin strategy for the zero-sum game G(Π_D, I):

min_i U(π*_ps, i) = min_i Σ_j p*_j U(π^d_j, i)              (let π*_ps = Σ_j p*_j π^d_j)
                  = min_q Σ_j Σ_i p*_j q_i U(π^d_j, i)       (there always exists a pure best response strategy)
                  = max_p min_q Σ_j Σ_i p_j q_i U(π^d_j, i)  (p* is the maximin strategy)
                  ≥ max_p min_i Σ_j p_j U(π^d_j, i)          (consider i as a pure strategy)
                  = max_{π_p} min_i U(π_p, i)                (let π_p = Σ_j p_j π^d_j)

By definition, π*_ps is a regularized maximin fairness policy.
As the game G(Π_D, I) is usually extremely large, and computing the payoff matrix of the game
G(Π_D, I) is also non-trivial, it is impossible to directly use linear programming to solve this game.
On the other hand, existing work such as [7], which analyzes the theoretical properties of the NE of
games drawn from a particular distribution, shows that support sizes of Nash equilibria tend to be
balanced and small, especially for n = 2. Prior work [11] demonstrated that it is beneficial to exploit
these results in finding a NE, especially in 2-player games. Inspired by these results, we develop an
iterative method to compute a fairness policy, as shown in Algorithm 1.
Intuitively, Algorithm 1 works as follows. It starts by computing a NE for a small subgame (Line 3)
and then checks whether this NE is also a NE of the whole game (Lines 4-7); if not, it expands the
subgame and repeats this process until a NE is found for the whole game.
Line 1 initializes a small subgame of the original game, which can be arbitrary. In our experiments, it
is initialized with a random agent and a policy maximizing this agent's utility. Line 3 solves the two-player zero-sum game using linear programming or any other suitable technique. V* is the maximin
value of this subgame. The best response problem in Line 4 is to find a deterministic policy π that
maximizes the following payoff:

U(π, q*) = Σ_{i∈I} q*_i U(π, i) = Σ_{i∈I} q*_i [η(i, π) + (ε/n) Σ_{j∈I} η(j, π)] = Σ_{i∈I} (q*_i + ε/n) η(i, π).
Solving this optimization problem is equivalent to finding the optimal policy of a regular MDP with
the reward function R(x, a) = Σ_{i∈I} (q*_i + ε/n) R_i(x_i, a_i). We can use the dual linear programming
approach [13] for this MDP, which outputs the visitation frequency function f_{π^d}(x, a) representing the optimal policy. This representation facilitates the computation of the payoff U(π^d, i) using
Equation 3. V_p = Σ_i q*_i U(π^d, i) is the maximizing player's utility of its best response against q*.
Line 5 checks if the best response π^d is strictly better than p*. If this is true, we can infer that p* is
not the best response against q* in the whole game and that π^d is not yet in Π̂_D; π^d is then added
to Π̂_D to expand the subgame.
Line 6 finds the minimizing player's best response against p*, which minimizes the payoff of the
maximizing player. Note that there always exists a pure best response strategy. So we formulate this
best response problem as follows:

min_q U(π_{p*}, q) = min_{i∈I} Σ_j p*_j U(π^d_j, i),   (7)
where π_{p*} is the stochastic policy corresponding to the probability distribution p*. We can solve this
problem by directly searching for the agent i that yields the minimum utility, with linear time complexity. Similar to Line 5, Line 7 checks if the minimizing player strictly prefers i to q* against p*
and expands the subgame if needed. This algorithm terminates when the subgame does not change.
Proposition 4. Algorithm 1 converges to a regularized maximin fairness policy.
Proof. The convergence of this algorithm follows immediately because there exist only a finite number
of deterministic Markovian policies and agents for a given multi-agent MDP. The algorithm terminates if and only if neither of the If conditions of Lines 5 and 7 holds. This situation indicates that no
player strictly prefers a strategy outside the support of its current strategy, which implies that (p*, q*) is
a NE of the whole game G(Π_D, I). Using Proposition 3, we conclude that Algorithm 1 returns a
regularized maximin fairness policy.
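A minimal skeleton of the loop is sketched below. It assumes three helper routines that are not shown and whose names are ours: solve_zero_sum(U) returns an NE (p, q, value) of the matrix game U, best_response_policy(mdp, q, eps) solves the MDP with reward Σ_i (q_i + ε/n)R_i from Line 4, and payoff(mdp, pi, i, eps) evaluates U(π, i) via Equation (3).

import numpy as np

def iterative_fair_policy(mdp, eps, tol=1e-8):
    # Seed the subgame with agent 0 and the policy that favors agent 0.
    pols = [best_response_policy(mdp, np.eye(mdp.n)[0], eps)]
    agents = [0]
    while True:
        U = np.array([[payoff(mdp, pl, i, eps) for i in agents] for pl in pols])
        p, q, v = solve_zero_sum(U)                          # Line 3
        q_full = np.zeros(mdp.n)
        q_full[agents] = q
        pi = best_response_policy(mdp, q_full, eps)          # Line 4
        v_p = sum(q_full[i] * payoff(mdp, pi, i, eps) for i in agents)
        vals = [sum(pj * payoff(mdp, pols[j], i, eps) for j, pj in enumerate(p))
                for i in range(mdp.n)]                       # Line 6
        i_star, v_q = int(np.argmin(vals)), float(min(vals))
        grew = False
        if v_p > v + tol:                                    # Line 5
            pols.append(pi); grew = True
        if v_q < v - tol and i_star not in agents:           # Line 7
            agents.append(i_star); grew = True
        if not grew:                                         # Lines 9-10
            return p, pols   # fairness policy: mixture p over pols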
Algorithm 1 shares some similarities with the double oracle algorithm proposed in [8] for iteratively solving zero-sum games. The double oracle method is motivated by the Benders decomposition
technique, while our iterative algorithm exploits properties of the Nash equilibrium, which leads to a
more efficient implementation. For example, unlike our algorithm, the double oracle method checks
whether the computed best-response MDP policy already exists in the current subgame by comparison, which is
time-consuming for MDP policies with a large state space.
Scaling the game-theoretic approach
Both linear programming and the game-theoretic approach suffer scalability issues for large problems. In multi-agent MDPs, the state space is exponential in the number of state variables and
the action space is exponential in the number of agents. This results in an exponential number of
variables and constraints in the linear program formulation. In this section, we will investigate methods
to scale up the game-theoretic approach.
The major bottleneck of the iterative algorithm is the computation of the best response policy (Line
4 in Algorithm 1). As discussed in the previous section, this optimization is equivalent to finding
the optimal policy of a regular MDP with reward function R(x, a) = Σ_i (q*_i + ε/n) R_i(x_i, a_i). Due
to the exponential state-action space, exact algorithms (e.g., linear programming) are impractical in
most cases. Fortunately, this MDP is essentially a factored MDP [4] with a weighted sum of partial
reward functions. We can use existing approximate algorithms [4] to solve factored MDPs, which
exploit both factored structures in the problem and value function approximation. For example, the
approximate linear programming approach for factored MDPs can provide efficient policies with up
to an exponential reduction in computation time.
Table 1: Performance in sample problems with different cell sizes and total resources

#C | #R | #N  | Time-LP | Time-GT | Sol-LP | Sol-GT
4  | 12 | 7E4 | 68.22s  | 11.43s  | 157.67 | 154.24
4  | 20 | 3E5 | 22.39m  | 35.27s  | 250.59 | 239.87
5  | 10 | 4E5 | 89.77m  | 48.56s  | 104.33 | 97.48
5  | 20 | 6E6 | -       | 4.98m   | -      | 189.62
6  | 18 | 5E7 | -       | 43.36m  | -      | 153.63

Table 2: A comparison of three criteria in a 4-agent 20-resource problem

C   | MPE    | Utilitarian | Fairness
1   | 180.41 | 117.44      | 250.59
2   | 198.45 | 184.20      | 250.59
3   | 216.49 | 290.69      | 250.59
4   | 234.53 | 444.08      | 250.59
Min | 108.22 | 68.32       | 157.67
A few subtleties are worth noting when approximate linear programming is employed. First, the best
response's utility V_p should be computed by evaluating the computed approximate policy against q*,
instead of directly using the value from the approximate value function. Otherwise, the convergence
of Algorithm 1 will not be guaranteed. Similarly, the payoff U(π^d, i) should be calculated through
policy evaluation. Second, existing approximate algorithms for factored MDPs usually output a
deterministic policy π^d(x) that is not represented by the visitation frequency function f_π(x, a). In
order to facilitate the policy evaluation, we may convert a policy π^d(x) to a frequency function
f_{π^d}(x, a). Note that f_{π^d}(x, a) = 0 for all a ≠ π^d(x). For other state-action pairs, we can compute
their visitation frequencies by solving the following equation:

f_{π^d}(x′, π^d(x′)) = b(x′) + Σ_x γ T(x′ | x, π^d(x)) f_{π^d}(x, π^d(x)).   (8)
This equation can be solved approximately but more efficiently using an iterative method, similar
to MDP value iteration. Finally, Algorithm 1 is still guaranteed to converge, but may return a
sub-optimal solution.
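A sketch of that iterative computation, assuming dense arrays T[x, a, x'] and a deterministic policy stored as an integer array pi[x] (our notation):

import numpy as np

def policy_frequencies(T, pi, b, gamma, iters=200):
    # Fixed-point iteration for Equation (8); the update is a contraction
    # with modulus gamma, analogous to value iteration run on occupancies.
    S = b.shape[0]
    P = T[np.arange(S), pi, :]        # P[x, x'] = T(x' | x, pi(x))
    f = b.copy()                      # f[x] approximates f_pi(x, pi(x))
    for _ in range(iters):
        f = b + gamma * P.T @ f
    return f                          # implicitly zero for all a != pi(x)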
We can also speed up Algorithm 1 by relaxing its termination condition, which essentially reduces
the number of iterations. We can use the termination condition V_p − V_q < δ, which turns the iterative
approach into an approximation algorithm.
Proposition 5. The iterative approach using the termination condition V_p − V_q < δ has bounded
error δ.
Proof. Let V^opt be the value of the regularized maximin fairness policy and V(π*) be the value of
the computed policy π*. By definition, V^opt ≥ V(π*). Following von Neumann's minimax theorem,
we have V_p ≥ V^opt ≥ V_q. Since V_q is the value of the minimizing player's best response against π*,
V^opt ≥ V(π*) ≥ V_q > V_p − δ ≥ V^opt − δ.
Experiments
One motivating domain for our work is resource allocation in a pulse-line manufacturing plant. In a
pulse-line factory, the manufacturing process of complex products is divided into several stages, each
of which contains a set of tasks to be done in a corresponding work cell. The overall performance
of a pulse line is mainly determined by the worst performance among the work cells. Considering the dynamics
and uncertainty of the manufacturing environment, we need to dynamically allocate resources to
balance the progress of work cells in order to optimize the throughput of the pulse line.
We evaluate our fairness solution criterion and its computation approaches, linear programming (LP)
and the game-theoretic (GT) approach with approximation, on this resource allocation problem. For
simplicity, we focus on managing one type of resource. We view each work cell in a pulse line as an
agent. Each agent's state is represented by two variables: task level (i.e., high or low) and the number
of local resources. An agent's next task level is affected by the current task levels of itself and the
previous agent. An action is defined on a directed link between two agents, representing the transfer
of one unit of resource from one agent to another. There is one additional action for all agents: "no change".
We assume only neighboring agents can transfer resources. An agent's reward is measured by the
number of partially-finished products that will be processed between two decision points, given its
current task level and resources. We use a discount factor γ = 0.95. We use the approximate linear
programming technique presented in [4] for solving the factored MDPs generated in the GT approach.
We used Java for our implementation and Gurobi 2.6 [5] for solving linear programs, and ran
experiments on a 2.4GHz Intel Core i5 with 8GB RAM.
Table 1 shows the performance of linear programming and the game-theoretic approach on different
problems, varying the number of work cells #C and total resources #R. The third column, #N
= |X||A|, is the state-action space size. We can observe that the game-theoretic approach is significantly faster than linear programming. This speed improvement is largely due to the integration of
approximate linear programming, which exploits the problem structure and value function approximation. In addition, the game-theoretic approach scales well to large problems. With 6 cells
and 18 resources, the size of the state-action space is around 5 × 10^7. The last two columns show the
minimum expected reward among agents, which determines the performance of the pulse line. The
game-theoretic approach has only a less than 8% loss over the optimal solution computed by LP.
We also compare the regularized maximin fairness criterion against the utilitarian criterion (i.e.,
maximizing the sum of individual utilities) and the Markov perfect equilibrium (MPE). MPE is an extension of Nash equilibrium to stochastic games. One obvious MPE in our resource allocation problem
is that no agent transfers its resources to other agents. We evaluated the criteria in different problems,
but the results are qualitatively similar. Table 2 shows the performance of all work cells under the
optimal policy of each criterion in a problem with 4 agents and 20 resources. The fairness policy balanced the performance of all agents and provided a better solution (i.e., a greater minimum
utility) than the other criteria. The perfect balance is due to the stochasticity of the computed
policy. Even in terms of the sum of utilities, the fairness policy has only a less than 4% loss over
the optimal policy under the utilitarian criterion. The utilitarian criterion generated a highly skewed
solution with the lowest minimum utility among the three criteria. In addition, we can observe that,
under the fairness criterion, all agents performed better than under MPE, which suggests that
cooperation is beneficial for all of them in this problem.
Related Work
When using centralized policies, our multi-agent MDPs can also be viewed as multi-objective
MDPs [15]. Recently, Ogryczak et al. [10] defined a compromise solution for multi-objective MDPs
using the Tchebycheff scalarization function. They developed a linear programming approach for
finding such compromise solutions; however, this is computationally impractical for most real-world
problems. In contrast, we develop a more scalable game-theoretic approach for finding fairness solutions by exploiting structure in multi-agent factored MDPs and value function approximation.
The notion of maximin fairness is also widely used in the field of networking, such as bandwidth
sharing, congestion control, routing, load-balancing and network design [1, 9]. In contrast to our
work, maximin fairness in networking is defined without regularization, only addresses one-shot
resource allocation, and does not consider the dynamics and uncertainty of the environment.
Fair division is an active research area in economics, especially social choice theory. It is concerned
with the division of a set of goods among several people, such that each person receives his or
her due share. In the last few years, fair division has attracted the attention of AI researchers [2,
12], who envision the application of fair division in multi-agent systems, especially for multi-agent
resource allocation [3, 6]. Fair division theory focuses on proportional fairness and envy-freeness.
Most existing work in fair division involves a static setting, where all relevant information is known
upfront and is fixed. Only a few approaches deal with the dynamics of agent arrivals and departures [6,
17]. In contrast to our model and approach, these dynamic approaches to fair division do not address
uncertainty, or other dynamics such as changes of resource availability and users? resource demands.
Conclusion
In this paper, we defined a fairness solution criterion, called regularized maximin fairness, for multiagent decision-making under uncertainty. This solution criterion aims to maximize the worst performance among agents while considering the overall performance of the system. It has applications in various domains, including resource sharing, public service allocation, load balancing, and
congestion control. We also developed a simple linear programming approach and a more scalable
game-theoretic approach for computing the optimal policy under this new criterion. This gametheoretic approach can scale up to large problems by exploiting the problem structure and value
function approximation.
References
[1] Thomas Bonald and Laurent Massoulié. Impact of fairness on internet performance. In Proceedings of the
2001 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems,
pages 82–91, 2001.
[2] Yiling Chen, John Lai, David C. Parkes, and Ariel D. Procaccia. Truth, justice, and cake cutting. In
Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, 2010.
[3] Yann Chevaleyre, Paul E. Dunne, Ulle Endriss, Jérôme Lang, Michel Lemaître, Nicolas Maudet, Julian A.
Padget, Steve Phelps, Juan A. Rodríguez-Aguilar, and Paulo Sousa. Issues in multiagent resource allocation. Informatica (Slovenia), 30(1):3–31, 2006.
[4] C. Guestrin, D. Koller, R. Parr, and S. Venkataraman. Efficient solution algorithms for factored MDPs.
Journal of Artificial Intelligence Research, 19:399–468, 2003.
[5] Gurobi Optimization, Inc. Gurobi optimizer reference manual, 2014.
[6] Ian A. Kash, Ariel D. Procaccia, and Nisarg Shah. No agent left behind: dynamic fair division of multiple
resources. In International Conference on Autonomous Agents and Multi-Agent Systems, pages 351–358,
2013.
[7] Andrew McLennan and Johannes Berg. Asymptotic expected number of Nash equilibria of two-player
normal form games. Games and Economic Behavior, 51(2):264–295, 2005.
[8] H. Brendan McMahan, Geoffrey J. Gordon, and Avrim Blum. Planning in the presence of cost functions
controlled by an adversary. In Proceedings of the Twentieth International Conference on Machine Learning, pages 536–543, 2003.
[9] Dritan Nace and Michał Pióro. Max-min fairness and its applications to routing and load-balancing in
communication networks: A tutorial. IEEE Communications Surveys and Tutorials, 10(1-4):5–17, 2008.
[10] Wlodzimierz Ogryczak, Patrice Perny, and Paul Weng. A compromise programming approach to multiobjective Markov decision processes. International Journal of Information Technology and Decision
Making, 12(5):1021–1054, 2013.
[11] Ryan Porter, Eugene Nudelman, and Yoav Shoham. Simple search methods for finding a Nash equilibrium.
In Proceedings of the 19th National Conference on Artificial Intelligence, pages 664–669, 2004.
[12] Ariel D. Procaccia. Thou shalt covet thy neighbor's cake. In Proceedings of the 21st International Joint
Conference on Artificial Intelligence, pages 239–244, 2009.
[13] M. L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley Interscience, 2005.
[14] John Rawls. A Theory of Justice. Harvard University Press, Cambridge, MA, 1971.
[15] Diederik M. Roijers, Peter Vamplew, Shimon Whiteson, and Richard Dazeley. A survey of multi-objective
sequential decision-making. Journal of Artificial Intelligence Research, 48(1):67–113, October 2013.
[16] Ralph E. Steuer. Multiple Criteria Optimization: Theory, Computation, and Application. John Wiley,
1986.
[17] Toby Walsh. Online cake cutting. In Algorithmic Decision Theory - Second International Conference,
volume 6992 of Lecture Notes in Computer Science, pages 292–305, 2011.
Repeated Contextual Auctions with Strategic Buyers
Kareem Amin
University of Pennsylvania
[email protected]
Afshin Rostamizadeh
Google Research
[email protected]
Umar Syed
Google Research
[email protected]
Abstract
Motivated by real-time advertising exchanges, we analyze the problem of pricing
inventory in a repeated posted-price auction. We consider both the cases of a truthful and surplus-maximizing buyer, where the former makes decisions myopically
on every round, and the latter may strategically react to our algorithm, forgoing
short-term surplus in order to trick the algorithm into setting better prices in the
future. We further assume a buyer?s valuation of a good is a function of a context
vector that describes the good being sold. We give the first algorithm attaining
sublinear (Õ(T^{2/3})) regret in the contextual setting against a surplus-maximizing
buyer. We also extend this result to repeated second-price auctions with multiple
buyers.
1 Introduction
A growing fraction of Internet advertising is sold through automated real-time ad exchanges. In
a real-time ad exchange, after a visitor arrives on a webpage, information about that visitor and
webpage, called the context, is sent to several advertisers. The advertisers then compete in an auction
to win the impression, or the right to deliver an ad to that visitor. One of the great advantages of
online advertising compared to advertising in traditional media is the presence of rich contextual
information about the impression. Advertisers can be particular about whom they spend money
on, and are willing to pay a premium when the right impression comes along, a process known
as targeting. Specifically, advertisers can use context to specify which auctions they would like to
participate in, as well as how much they would like to bid. These auctions are most often secondprice auctions, wherein the winner is charged either the second highest bid or a prespecified reserve
price (whichever is larger), and no sale occurs if the reserve price isn?t cleared by one of the bids.
One side-effect of targeting, which has been studied only recently, is the tendency for such exchanges
to generate many auctions that are rather uncompetitive or thin, in which few advertisers are willing
to participate. Again, this stems from the ability of advertisers to examine information about the
impression before deciding to participate. While this selectivity is clearly beneficial for advertisers,
it comes at a cost to webpage publishers. Many auctions in real-time ad exchanges ultimately involve
just a single bidder, in which case the publisher's revenue is entirely determined by the selection of
reserve price. Although a lone advertiser may have a high valuation for the impression, a low reserve
price will fail to extract this as revenue for the seller if the advertiser is the only participant in the
auction.
As observed by [1], if a single buyer is repeatedly interacting with a seller, selecting revenue-maximizing reserve prices (for the seller) reduces to revenue-maximization in a repeated posted-price setting: On each round, the seller offers a good to the buyer at a price. The buyer observes her
known to the buyer, and the buyer behaves to maximize her (time-discounted) cumulative surplus,
i.e., the total difference between the buyer's value and the price on rounds where she accepts the
offer. The goal of the seller is to extract nearly as much revenue from the buyer as would have been
possible if the process generating the buyer's valuations for the goods had been known to the seller
before the start of the game. In [1] this goal is called minimizing strategic regret.
Online learning algorithms are typically designed to minimize regret in hindsight, which is defined
as the difference between the loss of the best action and the loss of the algorithm given the observed
sequence of events. Furthermore, it is assumed that the observed sequence of events are generated
adversarially. However, in our setting, the buyer behaves self-interestedly, which is not necessarily
the same as behaving adversarially, because the interaction between the buyer and seller is not
zero-sum. A seller algorithm designed to minimize regret against an adversary can perform very
suboptimally. Consider an example from [1]: a buyer who has a large valuation v for every good.
If the seller announces an algorithm that minimizes (standard) regret, then the buyer should respond
by only accepting prices below some ε ≪ v. In hindsight, posting a price of ε in every round would
appear to generate the most revenue for the seller given the observed sequence of buyer actions,
and therefore εT cumulative revenue is "no-regret". However, the seller was tricked by the strategic
buyer; there was (v − ε)T revenue left on the table. Moreover, this is a good strategy for the buyer
(it must have won the good for nearly nothing on Ω(T) rounds).
The main contribution of this paper is extending the setting described above to one where the buyer's
valuations in each round are a function of some context observed by both the buyer and seller.
While [1] is motivated by our same application, they imagine an overly simplistic model wherein
the buyer's value is generated by drawing an independent v_t from an unknown distribution D. This
ignores that vt will in reality be a function of contextual information xt , information that is available
to the seller, and the entire reason auctions are thin to begin with (without xt there would be no
targeting). We give the first algorithm that attains sublinear regret in the contextual setting, against a
surplus-maximizing buyer. We also note that in the non-contextual setting, regret is measured against
the revenue that could have been made if D were known, and the single fixed optimal price were
selected. Our comparator will be more challenging as we wish to compete with the best function (in
some class) from contexts xt to prices.
The rest of the paper is organized as follows. We first introduce a linear model by which values vt are
derived from contexts xt . We then demonstrate an algorithm based on stochastic gradient descent
(SGD) which achieves sublinear regret against a truthful buyer (one that accepts price p_t iff p_t ≤ v_t
on every round t). The analysis for the truthful buyer uses preexisting high-probability bounds for
SGD when minimizing strongly convex functions [15]. Our main result requires an extension of
this analysis to cases in which "incorrect" gradients are occasionally observed. This lets us study
a buyer that is allowed to best-respond to our algorithm, possibly rejecting offers that the truthful
buyer would not, in order to receive better offers on future rounds. We also adapt our algorithm
to non-linear settings via a kernelized version of the algorithm. Finally, we extend our results to
second-price auctions with multiple buyers.
Related Work: The pricing of digital goods in repeated auctions has been considered by many other
authors, including [1, 10, 3, 2, 5, 11]. However, most of these papers do not consider a buyer who
behaves strategically across rounds. Buyers either behave randomly [11], or only participate in a
single round [10, 3, 2, 5], or participate in multiple rounds but only desire a single good [13, 7]
and therefore, in each of these cases, are not incentivized to manipulate the seller?s behavior on
future rounds. In reality buyers repeatedly interact with the same seller. There is empirical evidence
suggesting that buyers are not myopic, and do in fact behave strategically to induce better prices in
the future [6], as well as literature studying different strategies for strategic buyers [4, 8, 9].
2 Preliminaries
Throughout this work, we will consider a repeated auction where at every round a single seller
prices an item to sell to a single buyer (extensions to multiple buyers are discussed in Section 5).
The good sold at step t in the repeated auction is represented by a context (feature) vector x_t ∈ X =
{x : ‖x‖₂ ≤ 1} and is drawn according to a fixed distribution D, which is unknown to the seller. The
good has a value v_t that is a linear function of a parameter vector w*, also unknown to the seller:
v_t = w*ᵀx_t (extensions to non-linear functions of the context are considered in Section 5). We
assume that w* ∈ W = {w : ‖w‖₂ ≤ 1} and also that 0 ≤ w*ᵀx ≤ 1 with probability one with
respect to D.
For rounds t = 1, . . . , T the repeated posted-price auction is defined as follows: (1) The buyer and
seller both observe x_t ∼ D. (2) The seller offers a price p_t. (3) The buyer selects a_t ∈ {0, 1}. (4)
The seller receives revenue a_t p_t.
Here, a_t is an indicator variable that represents whether or not the buyer accepted the offered price
(1 indicates yes). The goal of the seller is to select a price p_t in each round t such that the expected
regret R(T) = E[Σ_{t=1}^T v_t − a_t p_t] is o(T). The choice of a_t will depend on the buyer's behavior.
We will analyze two types of buyers in the subsequent sections of the paper: truthful and surplus-maximizing buyers, and will attempt to minimize regret against the truthful buyer and regret against
the surplus-maximizing buyer. Note, the regret is the difference between the maximum revenue
possible and the amount made by the algorithm that offers prices to the buyer.
3 Truthful Buyer
In this section we introduce the Learn-Exploit Algorithm for Pricing (LEAP), which we show has
regret of the form O(T^{2/3} √(log(T log T))) against a truthful buyer. A buyer is truthful if she accepts any offered price that gives a non-negative surplus, which is defined as the difference between
the buyer's value for the good and the price paid: v_t − p_t. Therefore, for a truthful buyer we define
a_t = 1{p_t ≤ v_t}.
At this point, we note that the loss function v_t − 1{p_t ≤ v_t}p_t, which we wish to minimize over
all rounds, is not convex, differentiable or even continuous. If the price is even slightly above the
truthful buyer's valuation, it is rejected and the seller makes zero revenue. To circumvent this, our
algorithm will attempt to learn w* directly by minimizing a surrogate loss function for which w*
is the minimizer. Our analysis hinges on recent results [15] which give optimal rates for gradient
descent when the function being minimized is strongly convex. Our key trick is to offer prices so
that, in each round, the buyer's behavior reveals the gradient of the surrogate loss at our current
estimate for w*. Below we define the LEAP algorithm (Algorithm 1), which we show addresses
these difficulties in the online setting.
Algorithm 1 LEAP algorithm
• Let 0 ≤ α ≤ 1, w_1 = 0 ∈ W, ε ≥ 0, λ > 0, T_α = ⌈αT⌉.
• For t = 1, . . . , T_α   (Learning phase)
  – Offer p_t ∼ U, where U is the uniform distribution on the interval [0, 1].
  – Observe a_t.
  – ĝ_t = 2(w_t · x_t − a_t)x_t.
  – w_{t+1} = Π_W(w_t − (1/(λt)) ĝ_t).
• For t = T_α + 1, . . . , T   (Exploit phase)
  – Offer p_t = w_{T_α+1} · x_t − ε.
The algorithm depends on input parameters α, ε and λ. The α parameter determines what fraction
of rounds are spent in the learning phase as opposed to the exploit phase. During the learning phase,
uniformly random prices are offered and the model parameters are updated as a function of the feedback given by the buyer. During the exploit phase, the model parameters are fixed and the offered
price is computed as a linear function of these parameters minus the value of the ε parameter. The
ε parameter can be thought of as inversely proportional to our confidence in the fixed model parameters and is used to hedge against the possibility of over-estimating the value of a good. The λ
parameter is a learning-rate parameter set according to the minimum eigenvalue of the covariance
matrix, and is defined below in Assumption 1. In order to prove a regret bound, we first show that
the learning phase of the algorithm is minimizing a strongly convex surrogate loss and then show
that this implies the seller enjoys near-optimal revenue during the exploit phase of the algorithm.
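A compact sketch of LEAP follows; the buyer callback, the projection helper, and the example w* are our own scaffolding for illustration.

import numpy as np

def project_ball(w):
    # Euclidean projection onto W = {w : ||w||_2 <= 1}.
    norm = np.linalg.norm(w)
    return w if norm <= 1.0 else w / norm

def leap(contexts, buyer, alpha, eps, lam, seed=0):
    # Uniform exploration prices and SGD on the surrogate loss, then
    # exploitation with prices shaded down by eps.
    rng = np.random.default_rng(seed)
    T, d = contexts.shape
    T_alpha = int(np.ceil(alpha * T))
    w, revenue = np.zeros(d), 0.0
    for t, x in enumerate(contexts, start=1):
        if t <= T_alpha:                        # learning phase
            p = rng.uniform()
            a = buyer(x, p)
            g = 2.0 * (w @ x - a) * x           # stochastic gradient g_t
            w = project_ball(w - g / (lam * t))
        else:                                   # exploit phase
            p = w @ x - eps
            a = buyer(x, p)
        revenue += a * p
    return w, revenue

# A truthful buyer under a hypothetical w*:
w_star = np.array([0.6, 0.8]) / np.sqrt(2.0)
truthful = lambda x, p: float(p <= w_star @ x)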
Let g_t = 2(w_tᵀx_t − 1{p_t ≤ v_t})x_t and F(w) = E_{x∼D}[(w*ᵀx − wᵀx)²]. Note that when the
buyer is truthful, ĝ_t = g_t. Against a truthful buyer, g_t is an unbiased estimate of the gradient of F.
Proposition 1. The random variable g_t satisfies E[g_t | w_t] = ∇F(w_t). Also, ‖g_t‖ ≤ 4 with
probability 1.
Proof. First note that E[g_t | w_t] = E_{x_t}[2(w_t · x_t − E_{p_t}[1{p_t ≤ v_t}])x_t] = E_{x_t}[2(w_t · x_t − Pr_{p_t}(p_t ≤
v_t))x_t]. Since p_t is drawn uniformly from [0, 1] and v_t is guaranteed to lie in [0, 1], we have that
Pr(p_t ≤ v_t) = ∫₀¹ 1{p_t ≤ v_t} dp_t = v_t. Plugging this back into g_t gives us exactly the expression
for ∇F(w_t). Furthermore, ‖g_t‖ = 2|w_tᵀx_t − 1{p_t ≤ v_t}| ‖x_t‖ ≤ 4 since |w_tᵀx_t| ≤ ‖w_t‖‖x_t‖ ≤
1 and ‖x_t‖ ≤ 1.
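The unbiasedness claim is easy to check numerically; a quick Monte Carlo sketch (all quantities below are made up for the test):

import numpy as np

rng = np.random.default_rng(1)
w_star = np.array([0.5, 0.3, 0.2])   # ||w*||_2 <= 1 and values in [0, 1]
w = np.array([0.1, 0.4, 0.1])

# Contexts on the simplex, so ||x||_2 <= 1 and 0 <= w*.x <= 1 hold.
x = rng.dirichlet(np.ones(3), size=200_000)
p = rng.uniform(size=len(x))                  # uniform exploration prices
a = (p <= x @ w_star).astype(float)           # truthful accept decisions
g_hat = 2.0 * ((x @ w - a)[:, None] * x).mean(axis=0)

grad_F = 2.0 * (x.T @ (x @ (w - w_star))) / len(x)   # empirical grad F(w)
print(np.allclose(g_hat, grad_F, atol=1e-2))          # True up to MC noise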
We now introduce the notion of strong convexity. A twice-differentiable function H(w) is λ-strongly convex if and only if the Hessian matrix ∇²H(w) is full rank and the minimum eigenvalue
of ∇²H(w) is at least λ. Note that the function F is λ-strongly convex if and only if the covariance
matrix of the data is full-rank, since ∇²F(w) = 2E_x[xxᵀ]. We make the following assumption.
Assumption 1. The minimum eigenvalue of 2E_x[xxᵀ] is at least λ > 0.
Note that if this is not the case then there is redundancy in the features and the data can be projected (for example using PCA) into a lower dimensional feature space with a full-rank covariance
matrix and without any loss of information. The seller can compute an offline estimate of both this
projection and λ by collecting a dataset of context vectors before starting to offer prices to the buyer.
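A sketch of that offline estimate (the sample below is illustrative):

import numpy as np

def estimate_lambda(contexts):
    # lambda = smallest eigenvalue of 2 E[x x^T], estimated from a sample.
    second_moment = 2.0 * (contexts.T @ contexts) / len(contexts)
    return float(np.linalg.eigvalsh(second_moment)[0])  # ascending order

rng = np.random.default_rng(0)
X = rng.dirichlet(np.ones(3), size=10_000)
lam = estimate_lambda(X)
assert lam > 0, "rank-deficient features: project (e.g., with PCA) first"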
Thus, in view of Proposition 1 and the strong convexity assumption, we see the learning phase of
the LEAP algorithm is conducting a stochastic gradient descent to minimize the λ-strongly convex
function F, where at each time step we update w_{t+1} = Π_W(w_t − (1/(λt)) ĝ_t), and ĝ_t = g_t is an unbiased
estimate of the gradient. We now make use of an existing bound ([14, 15]) for stochastic gradient
descent on strongly convex functions.
Lemma 1 ([14] Proposition 1). Let δ ∈ (0, 1/e), T_α ≥ 4 and suppose F is λ-strongly convex over
the convex set W. Also suppose E[g_t | w_t] = ∇F(w_t) and ‖g_t‖² ≤ G² with probability 1. Then
with probability at least 1 − δ, for any t ≤ T_α it holds that

‖w_t − w*‖² ≤ (624 log(log(T_α)/δ) + 1)G² / (λ²t),   where w* = argmin_w F(w).
This guarantees that, with high probability, the distance between the learned parameter vector w_t
and the target weight vector w* is bounded and decreasing as t increases. This allows us to carefully
tune the ε parameter that is used in the exploit phase of the algorithm (see Lemma 6 in the appendix).
We are now equipped to prove a bound on the regret of the LEAP algorithm.
Theorem 1. For any $T > 4$, $0 < \alpha < 1$, and assuming a truthful buyer, the LEAP algorithm with
$$\epsilon = \sqrt{\frac{(624\log(\sqrt{T_\alpha}\log(T_\alpha)) + 1)G^2}{\lambda^2 T_\alpha}}, \qquad \text{where } G = 4,$$
has regret against a truthful buyer at most
$$R(T) \le 2\alpha T + 4T\sqrt{\frac{(624\log(\sqrt{T_\alpha}\log(T_\alpha)) + 1)G^2}{\lambda^2 T_\alpha}},$$
which implies for $\alpha = T^{-1/3}$ a regret at most
$$R(T) \le 2T^{2/3} + 4T^{2/3}\sqrt{\frac{(624\log(T^{1/3}\log(T^{2/3})) + 1)G^2}{\lambda^2}} = O\Big(T^{2/3}\sqrt{\log(T\log(T))}\Big).$$

Proof. We first decompose the regret:
$$\mathbb{E}\Big[\sum_{t=1}^{T} v_t - a_t p_t\Big] = \mathbb{E}\Big[\sum_{t=1}^{T_\alpha} v_t - a_t p_t\Big] + \mathbb{E}\Big[\sum_{t=T_\alpha+1}^{T} v_t - a_t p_t\Big] \le T_\alpha + \mathbb{E}\Big[\sum_{t=T_\alpha+1}^{T} v_t - a_t p_t\Big], \quad (1)$$
where we have used the fact that $|v_t - a_t p_t| \le 1$. Let $A$ denote the event that, for all $t \in \{T_\alpha+1, \ldots, T\}$, $a_t = 1 \wedge v_t - p_t \le \epsilon$. Lemma 6 (see Appendix, Section A.1) proves that $A$ occurs with probability at least $1 - T_\alpha^{-1/2}$. For brevity let $N = (624\log(\sqrt{T_\alpha}\log(T_\alpha)) + 1)G^2/\lambda^2$; then we can decompose the expectation in the following way:
$$\mathbb{E}[v_t - a_t p_t] = \Pr[A]\,\mathbb{E}[v_t - a_t p_t \mid A] + (1 - \Pr[A])\,\mathbb{E}[v_t - a_t p_t \mid \neg A] \le \Pr[A]\,\epsilon + (1 - \Pr[A]) \le \epsilon + T_\alpha^{-1/2} = \sqrt{\frac{N}{T_\alpha}} + \sqrt{\frac{1}{T_\alpha}} \le 2\sqrt{\frac{N}{T_\alpha}},$$
where the inequalities follow from the definition of $A$, Lemma 6, and the fact that $|v_t - a_t p_t| < 1$. Plugging this back into equation (1) gives $T_\alpha + \sum_{t=T_\alpha+1}^{T}\mathbb{E}[v_t - a_t p_t] \le T_\alpha + \lceil(1-\alpha)T\rceil\,2\sqrt{N/T_\alpha} \le 2\alpha T + 4T\sqrt{N/T_\alpha}$, proving the first result of the theorem. Setting $\alpha = T^{-1/3}$ gives the final expression.
In the next section we consider the more challenging setting of a surplus-maximizing buyer, who
may accept/reject prices in a manner meant to lower the prices offered.
4 Surplus-Maximizing Buyer
In the previous section we considered a truthful buyer who myopically accepts every price below her value, i.e., she sets $a_t = \mathbf{1}\{p_t \le v_t\}$ for every round $t$. Let $S(T) = \mathbb{E}\big[\sum_{t=1}^{T}\gamma_t a_t(v_t - p_t)\big]$ be the buyer's cumulative discounted surplus, where $\{\gamma_t\}$ is a decreasing discount sequence, with $\gamma_t \in (0, 1)$. When prices are offered by the LEAP algorithm, the buyer's decisions about which prices to accept during the learning phase have an influence on the prices that she is offered in the exploit phase, and so a surplus-maximizing buyer may be able to increase her cumulative discounted surplus by occasionally behaving untruthfully. In this section we assume that the buyer knows the pricing algorithm and seeks to maximize $S(T)$.
Assumption 2. The buyer is surplus-maximizing, i.e., she behaves so as to maximize $S(T)$, given the seller's pricing algorithm.
We say that a lie occurs in any round $t$ where $a_t \ne \mathbf{1}\{p_t \le v_t\}$. Note that a surplus-maximizing buyer has no reason to lie during the exploit phase, since the buyer's behavior on exploit rounds has no effect on the prices offered. Let $\mathcal{L} = \{t : 1 \le t \le T_\alpha \wedge a_t \ne \mathbf{1}\{p_t \le v_t\}\}$ be the set of learning rounds where the buyer lies, and let $L = |\mathcal{L}|$ be the number of lies. Observe that $\tilde g_t \ne g_t$ in any lie round (recall that $\mathbb{E}[g_t \mid w_t] = \nabla F(w_t)$, i.e., $g_t$ is the stochastic gradient in round $t$).
We take a moment to note the necessity of the discount factor $\gamma_t$. This essentially models the buyer as valuing surplus at the current time step more than in the future. Another way of interpreting this is that the seller is more "patient" than the buyer. In [1] the authors show a lower bound of $\Omega(T_\gamma)$ on the regret against a surplus-maximizing buyer in the contextless setting, where $T_\gamma = \sum_{t=1}^{T}\gamma_t$. Thus, if no decreasing discount factor is used, i.e. $\gamma_t = 1$, then sublinear regret is not possible. Note that the lower bound of the contextless setting applies here as well, since the case of a distribution $D$ that induces a fixed context $x^*$ on every round is a special case of our setting. In that case the problem reduces to the fixed unknown value setting, since on every round $v_t = w^{*\top}x^*$.
In the rest of this section we prove an $O\big(T^{2/3}\sqrt{\log(T)/\log(1/\gamma)}\big)$ bound on the seller's regret under the assumption that the buyer is surplus-maximizing and that her discount sequence is $\gamma_t = \gamma^{t-1}$ for some $\gamma \in (0,1)$. The idea of the proof is to show that the buyer incurs a cost for telling lies, and therefore will not tell very many, and thus the lies she does tell will not significantly affect the seller's estimate of $w^*$.
Bounding the cost of lies: Observe that in any learning round where the surplus-maximizing buyer tells a lie, she loses surplus in that round relative to the truthful buyer, either by accepting a price higher than her value (when $a_t = 1$ and $v_t < p_t$) or by rejecting a price less than her value (when $a_t = 0$ and $v_t > p_t$). This observation can be used to show that lies result in a substantial loss of surplus relative to the truthful buyer, provided that in most of the lie rounds there is a nontrivial gap between the buyer's value and the seller's price. Because prices are chosen uniformly at random during the learning phase, this is in fact quite likely, and with high probability the surplus lost relative to the truthful buyer during the learning phase grows exponentially with the number of lies.
The precise quantity is stated in the lemma below. A full proof appears in the appendix, Section A.3.
Lemma 2. Let the discount sequence be defined as $\gamma_t = \gamma^{t-1}$ for $0 < \gamma < 1$ and assume the buyer has told $L$ lies. Then for $\delta > 0$, with probability at least $1 - \delta$, the buyer loses a surplus of at least $\frac{\delta}{8T_\alpha\log(\frac{1}{\delta})}\big(\gamma^{-(L+3)} - 1\big)\gamma^{T_\alpha}$ relative to the truthful buyer during the learning phase.
Bounding the number of lies: Although we argued in the previous lemma that lies during the learning phase cause the surplus-maximizing buyer to lose surplus relative to the truthful buyer, those lies may result in lower prices offered during the exploit phase, and thus the overall effect of lying may be beneficial to the buyer. However, we show that there is a limit on how large that benefit can be, and thus we have the following high-probability bound on the number of learning phase lies.
Lemma 3. Let the discount sequence be defined as $\gamma_t = \gamma^{t-1}$ for $0 < \gamma < 1$. Then for $\delta > 0$, with probability at least $1 - \delta$, the number of lies satisfies $L \le \frac{\log(32T_\alpha\delta^{-1}\log(\frac{2}{\delta}) + 1)}{\log(1/\gamma)}$.
The full proof is found in the appendix (Section A.4); we provide a proof sketch here. The argument compares the amount of surplus lost (relative to the truthful buyer) by telling lies in the learning phase to the amount of surplus that could possibly be gained (relative to the truthful buyer) in the exploit phase. Due to the discount factor, the surplus lost will eventually outweigh the surplus gained as the number of lies increases, implying a limit on the number of lies a surplus-maximizing buyer can tell.
Bounding the effect of lies: In Section 3 we argued that if the buyer is truthful then, in each learning round $t$ of the LEAP algorithm, $\tilde g_t$ is a stochastic gradient with expected value $\nabla F(w_t)$. We then use the analysis of stochastic gradient descent in [14] to prove that $w_{T_\alpha+1}$ converges to $w^*$ (Lemma 1). However, if the buyer can lie, then $\tilde g_t$ is not necessarily the gradient and Lemma 1 no longer applies. Below we extend the analysis in Rakhlin et al. [14] to a setting where the gradient may be corrupted by lies up to $L$ times.
Lemma 4. Let $\delta \in (0, 1/e)$ and $T_\alpha \ge 2$. If the buyer tells $L \le T_\alpha$ lies, then with probability at least $1 - \delta$,
$$\|w_{T_\alpha+1} - w^*\|^2 \le \frac{1}{T_\alpha + 1}\left(\frac{(624\log(\log(T_\alpha)/\delta) + e^2)G^2}{\lambda^2} + 4e^2 L\right).$$
The proof of the lemma is similar to that of Lemma 1, but with extra steps needed to bound the additional error introduced by the erroneous gradients. Due to space constraints, we present the proof in the appendix, Section A.6. Note that, modulo constants, the bound only differs by the additive term $L/T_\alpha$. That is, there is an extra additive error term that depends on the ratio of lies to the number of learning rounds. Thus, if no lies are told, then there is no additive error, while if many lies are told, e.g. $L = T_\alpha$, then the bound becomes vacuous.
Main result: We are now ready to prove an upper bound on the regret of the LEAP algorithm when
the buyer is surplus-maximizing.
Theorem 2. For any $0 < \alpha < 1$ (such that $T_\alpha \ge 4$), $0 < \gamma < 1$, and assuming a surplus-maximizing buyer with exponential discounting factor $\gamma_t = \gamma^{t-1}$, the LEAP algorithm using parameter
$$\epsilon = \sqrt{\frac{1}{T_\alpha}\frac{(624\log(2\sqrt{T_\alpha}\log(T_\alpha)) + e^2)G^2}{\lambda^2} + \frac{4e^2\log(128\sqrt{T_\alpha}\log(4\sqrt{T_\alpha}) + 1)}{T_\alpha\log(1/\gamma)}},$$
where $G = 4$, has regret against a surplus-maximizing buyer at most
$$R(T) \le 2\alpha T + 4\sqrt{\frac{T}{\alpha}\Big(\frac{(624\log(2\sqrt{T_\alpha}\log(T_\alpha)) + e^2)G^2}{\lambda^2} + \frac{4e^2\log(128\sqrt{T_\alpha}\log(4\sqrt{T_\alpha}) + 1)}{\log(1/\gamma)}\Big)},$$
which for $\alpha = T^{-1/3}$ implies $R(T) \le O\Big(T^{2/3}\sqrt{\frac{\log(T)}{\log(1/\gamma)}}\Big)$.
Proof. Taking the high probability statements of Lemma 3 and Lemma 4 with $\delta/2 \in [0, 1/e]$ tells us that with probability at least $1 - \delta$,
$$\|w_{T_\alpha} - w^*\|^2 \le \frac{1}{T_\alpha}\Big(\frac{(624\log(2\log(T_\alpha)/\delta) + e^2)G^2}{\lambda^2} + \frac{4e^2\log(64T_\alpha\delta^{-1}\log(\frac{4}{\delta}) + 1)}{\log(1/\gamma)}\Big).$$
Since we assume $T_\alpha \ge 4$, if we set $\delta = T_\alpha^{-1/2}$ it implies $\delta/2 = T_\alpha^{-1/2}/2 \le 1/e$, which is required for Lemma 4 to hold. Thus, if we set the algorithm parameter $\epsilon$ as indicated in the statement of the theorem, we have with probability at least $1 - T_\alpha^{-1/2}$, for all $t \in \{T_\alpha+1, \ldots, T\}$, that $a_t = 1$ and $v_t - p_t \le \epsilon$, which follows from the same argument used for Lemma 6.
Finally, the same steps as in the proof of Theorem 1 can be used to show the first inequality. Setting $\alpha = T^{-1/3}$ shows the second inequality and completes the theorem.
Note that the bound shows that if $\gamma \to 1$ (i.e., no discounting) the bound becomes vacuous, which is to be expected since the $\Omega(T)$ lower bound on regret demonstrates the necessity of a discounting factor. If $\gamma \to 0$ (i.e., the buyer becomes myopic, thereby truthful), then we retrieve the truthful bound modulo constants. Thus for any $\gamma < 1$, we have shown the first sublinear bound on the seller's regret against a surplus-maximizing buyer in the contextual setting.
5 Extensions
Doubling trick: A drawback of Theorem 2 is that optimally tuning the parameters $\epsilon$ and $\alpha$ requires knowledge of the horizon $T$. The usual way of handling this problem in the standard online learning setting is to apply the "doubling trick": if a learning algorithm that requires knowledge of $T$ has regret $O(T^c)$ for some constant $c$, then running independent instances of the algorithm during consecutive phases of exponentially increasing length (i.e., the $i$th phase has length $2^i$) will also have regret $O(T^c)$; a generic sketch of this wrapper is given below. We can also apply the doubling trick to our strategic setting, but we must exercise caution and argue that running the algorithm in phases does not affect the behavior of a surplus-maximizing buyer in a way that invalidates the proof of Theorem 2. We formally state and prove the relevant corollary in Section A.8 of the Appendix.
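The generic wrapper looks as follows; the interface (make_algorithm, play) is hypothetical, and, as noted above, in the strategic setting one must separately argue that phase restarts do not change the buyer's incentives.

def run_with_doubling(make_algorithm, total_rounds):
    """Sketch of the doubling trick: run a fresh instance of an algorithm
    that needs its horizon in advance over phases of length 1, 2, 4, 8, ...
    make_algorithm(horizon) is assumed to return an object with a
    play(round_index) method; both names are ours, for illustration only.
    """
    t = 0
    phase_length = 1
    while t < total_rounds:
        horizon = min(phase_length, total_rounds - t)
        algorithm = make_algorithm(horizon)   # independent instance per phase
        for i in range(horizon):
            algorithm.play(i)
            t += 1
        phase_length *= 2                     # phase i has length 2^i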
Kernelized Algorithm: In some cases, assuming that the value of a buyer is a linear function of the context may not be accurate. In this section we briefly introduce a kernelized version of LEAP, which allows for a non-linear model of the buyer value as a function of the context $x$. At the same time, the regret guarantees provided in the previous sections still apply, since we can view the model as a linear function of the induced features $\phi(x)$, where $\phi(\cdot)$ is a non-linear map and the kernel function $K$ is used to compute the inner product in this induced feature space: $K(x, x') = \phi(x)^\top\phi(x')$. For a more complete discussion of kernel methods see, for example, [12, 16]. For what follows, we define the projection operation $\Pi_K\big(\boldsymbol\beta, (x_1, \ldots, x_t)\big) = \boldsymbol\beta\,\big/\max\big\{1, \sqrt{\sum_{i,j=1}^{t}\beta_i\beta_j K(x_i, x_j)}\big\}$. The proof of Proposition 2 is moved to the appendix (Section A.7) in the interest of space.
Algorithm 2: Kernelized LEAP algorithm
? Let $K(\cdot, \cdot)$ be a PDS function s.t. $\forall x : |K(x, x)| \le 1$; let $0 \le \alpha \le 1$, $T_\alpha = \lceil\alpha T\rceil$, $\epsilon \ge 0$, $\lambda > 0$, and $\boldsymbol\beta = \mathbf{0} \in \mathbb{R}^{T_\alpha}$.
? For $t = 1, \ldots, T_\alpha$:
  - Offer $p_t \sim U[0,1]$
  - Observe $a_t$
  - $\beta_t = \frac{2}{\lambda t}\big(a_t - \sum_{i=1}^{t-1}\beta_i K(x_i, x_t)\big)$
  - $\boldsymbol\beta = \Pi_K\big(\boldsymbol\beta, (x_1, \ldots, x_t)\big)$
? For $t = T_\alpha + 1, \ldots, T$:
  - Offer $p_t = \sum_{i=1}^{T_\alpha}\beta_i K(x_i, x_t) - \epsilon$
Proposition 2. Algorithm 2 is a kernelized implementation of the LEAP algorithm with $W = \{w : \|w\|^2 \le 1\}$ and $w_1 = \mathbf{0}$. Furthermore, if we consider the feature space induced by the kernel $K$ via an explicit mapping $\phi(\cdot)$, the learned linear hypothesis is represented as $w_t = \sum_{i=1}^{t-1}\beta_i\phi(x_i)$, which satisfies $\|w_t\| = \sqrt{\sum_{i,j=1}^{t-1}\beta_i\beta_j K(x_i, x_j)} \le 1$. The gradient is $\tilde g_t = 2\big(\sum_{i=1}^{t-1}\beta_i\phi(x_i)^\top\phi(x_t) - a_t\big)\phi(x_t)$, and $\|\tilde g_t\| \le 4$.
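For concreteness, here is a minimal Python sketch of the learning phase of Algorithm 2 as written above. The callback buyer_accepts is a hypothetical interface, and the naive kernel-norm computation is kept simple for clarity; treat this as illustrative rather than as reference code.

import numpy as np

def kernelized_leap_learning(xs, buyer_accepts, T_alpha, lam, K):
    """Learning phase of the kernelized LEAP sketch: coefficient update
    followed by projection onto the unit ball in the kernel-induced norm."""
    rng = np.random.default_rng(0)
    beta = np.zeros(T_alpha)
    for t in range(1, T_alpha + 1):
        x_t = xs[t - 1]
        p = rng.uniform(0.0, 1.0)                 # uniform random price
        a = buyer_accepts(t, p)                   # observed accept/reject
        pred = sum(beta[i] * K(xs[i], x_t) for i in range(t - 1))
        beta[t - 1] = (2.0 / (lam * t)) * (a - pred)
        norm_sq = sum(beta[i] * beta[j] * K(xs[i], xs[j])
                      for i in range(t) for j in range(t))
        norm = np.sqrt(max(norm_sq, 0.0))
        if norm > 1.0:                            # Pi_K: project if outside ball
            beta[:t] /= norm
    return beta

# e.g. K = lambda x, y: np.exp(-np.sum((x - y) ** 2))  # a Gaussian kernel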
Multiple Buyers: So far we have assumed that the seller is interacting with a single buyer across multiple posted-price auctions. Recall that the motivation for considering this setting was repeated second-price auctions against a single buyer, a situation that happens often in online advertising because of targeting. One might nevertheless wonder whether the algorithm can be applied to a setting where there can be multiple buyers, and whether it remains robust in such a setting. We describe a way in which the analysis for the posted-price setting can carry over to multiple buyers.
Formally, suppose there are $K$ buyers, and on round $t$, buyer $k$ receives a valuation of $v_{k,t}$. We let $k^{val}(t) = \arg\max_k v_{k,t}$, $v_t^+ = v_{k^{val}(t),t}$, and $v_t^- = \max_{k \ne k^{val}(t)} v_{k,t}$: the buyer with the highest valuation, the highest valuation itself, and the second-highest valuation, respectively. In a second price auction, each buyer also submits a bid $b_{k,t}$, and we define $k^{bid}(t)$, $b_t^+$ and $b_t^-$ analogously to $k^{val}(t)$, $v_t^+$, $v_t^-$, corresponding to the highest bidder, the largest bid, and the second-largest bid. After the seller announces a reserve price $p_t$, buyers submit their bids $\{b_{k,t}\}$, and the seller receives round $t$ revenue of $r_t = \mathbf{1}\{p_t \le b_t^+\}\max\{b_t^-, p_t\}$. The goal of the seller is to minimize $R(T) = \mathbb{E}\big[\sum_{t=1}^{T} v_t^+ - r_t\big]$. We assume that buyers are surplus-maximizing, and select a strategy that maps previous reserve prices $p_1, \ldots, p_{t-1}, p_t$, and $v_{k,t}$ to a choice of bid on round $t$.
We call $v_t^+$ the market valuation for good $t$. The key to extending the LEAP algorithm to the multiple buyer setting will be to treat market valuations in the same way we treated the individual buyer's valuation in the single-buyer setting. In order to do so, we make an analogous modelling assumption to that of Section 2.1 Specifically, we assume that there is some $w^*$ such that $v_t^+ = w^{*\top}x_t$. Note that we assume a model on the market price itself.
At first glance, this might seem like a strange assumption, since $v_t^+$ is itself the result of a maximization over the $v_{k,t}$. However, we argue that it is actually rather unrestrictive. In fact, the individual valuations $v_{k,t}$ can be generated arbitrarily so long as $v_{k,t} \le w^{*\top}x_t$ and equality holds for some $k$. In other words, we can imagine that nature first computes the market valuation $v_t^+$, then arbitrarily (even adversarially) selects which buyer gets this valuation, and then the other buyers' valuations.
Now we can define $a_t = \mathbf{1}\{p_t \le b_t^+\}$, indicating whether the largest bid was greater than the reserve, and consider running the LEAP algorithm, but with this choice of $a_t$. Notice that for any $t$, $a_t p_t \le r_t$, thereby giving us the following key fact: $R(T) \le R'(T) := \mathbb{E}\big[\sum_{t=1}^{T} v_t^+ - a_t p_t\big]$. We also redefine $L$ to be the number of market lies: rounds $t \le T_\alpha$ where $a_t \ne \mathbf{1}\{p_t \le v_t^+\}$. Note that the market tells a lie if either all valuations were below $p_t$ but somebody bid above $p_t$ anyway, or if some valuation was above $p_t$ but no buyer decided to outbid $p_t$. With this choice of $L$, Lemma 4 holds exactly as written, but in the multiple buyer setting.
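For concreteness, one round of this second-price mechanism with a reserve can be simulated as follows (a toy sketch with our own function name; it also illustrates the key fact $a_t p_t \le r_t$):

def auction_round(p_t, bids):
    """One second-price auction round with reserve p_t, per the definitions
    above. bids is the list of b_{k,t}. Returns (a_t, r_t): whether the
    reserve was cleared, and the seller's revenue if it was."""
    b_plus = max(bids)
    b_minus = sorted(bids)[-2] if len(bids) > 1 else 0.0
    a_t = 1 if p_t <= b_plus else 0
    r_t = max(b_minus, p_t) if a_t else 0.0
    return a_t, r_t

# e.g. auction_round(0.5, [0.4, 0.7, 0.6]) == (1, 0.6); note a_t * p_t <= r_t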
It is well-known [17] that single-shot second price auctions are strategy-proof. Therefore, during the exploit phase of the algorithm, all buyers are incentivized to bid truthfully. Thus, in order to bound $R'(T)$ and therefore $R(T)$, we need only rederive Lemma 3 to bound the number of market lies. We begin by partitioning the market lies. Let $\mathcal{L} = \{t : t \le T_\alpha,\ \mathbf{1}\{p_t \le v_t^+\} \ne \mathbf{1}\{p_t \le b_t^+\}\}$, while letting $\mathcal{L}_k = \{t : t \le T_\alpha,\ v_t^+ < p_t \le b_t^+,\ k^{bid}(t) = k\} \cup \{t : t \le T_\alpha,\ b_t^+ < p_t \le v_t^+,\ k^{val}(t) = k\}$. In other words, we attribute a lie to buyer $k$ if (1) the reserve was larger than the market value, but buyer $k$ won the auction anyway, or (2) buyer $k$ had the largest valuation, but nobody cleared the reserve. Checking that $\mathcal{L} = \cup_k \mathcal{L}_k$ and letting $L_k = |\mathcal{L}_k|$ tells us that $L \le \sum_{k=1}^{K} L_k$. Furthermore, we can bound $L_k$ using nearly identical arguments to the posted-price setting, giving us the subsequent corollary for the multiple buyer setting.
Lemma 5. Let the discount sequence be defined as $\gamma_t = \gamma^{t-1}$ for $0 < \gamma < 1$. Then for $\delta > 0$, with probability at least $1 - \delta$, $L_k \le \frac{\log(32T_\alpha\delta^{-1}\log(\frac{2}{\delta}) + 1)}{\log(1/\gamma)}$, and $L \le K L_k$.
Proof. We first consider the surplus buyer $k$ loses during learning rounds, compared to if he had been truthful. Suppose buyer $k$ unilaterally switches to always bidding his value (i.e. $b_{k,t} = v_{k,t}$). For a single-shot second price auction, being truthful is a dominant strategy, and so he would only increase his surplus on learning rounds. Furthermore, on each round in $\mathcal{L}_k$ he would increase his (undiscounted) surplus by at least $|v_{k,t} - p_t|$. Now the analysis follows as in Lemmas 2 and 3.
Corollary 1. In the multiple surplus-maximizing buyers setting, the LEAP algorithm with $\alpha = T^{-1/3}$ and
$$\epsilon = \sqrt{\frac{1}{T_\alpha}\frac{(624\log(2\sqrt{T_\alpha}\log(T_\alpha)) + e^2)G^2}{\lambda^2} + \frac{4e^2 K\log(128\sqrt{T_\alpha}\log(4\sqrt{T_\alpha}) + 1)}{T_\alpha\log(1/\gamma)}}$$
has regret
$$R(T) \le R'(T) \le O\Big(T^{2/3}K\sqrt{\frac{\log(T)}{\log(1/\gamma)}}\Big).$$
6 Conclusion
In this work, we have introduced the scenario of contextual auctions in the presence of surplus-maximizing buyers and have presented an algorithm that is able to achieve sublinear regret in this setting, assuming buyers receive a discounted surplus. Once again, we stress the importance of the contextual setting, as it contributes to the rise of targeted bids that result in auctions with a single high bidder, essentially reducing the auction to the posted-price scenario studied in this paper. Future directions for extending this work include considering different surplus discount rates, as well as understanding whether small modifications to standard contextual online learning algorithms can lead to no-strategic-regret guarantees.
1 Note that we could also apply the kernelized LEAP algorithm (Algorithm 2) in the multiple buyer setting.
References
[1] Kareem Amin, Afshin Rostamizadeh, and Umar Syed. Learning prices for repeated auctions with strategic buyers. In Advances in Neural Information Processing Systems, pages 1169-1177, 2013.
[2] Ziv Bar-Yossef, Kirsten Hildrum, and Felix Wu. Incentive-compatible online auctions for digital goods. In Proceedings of the Symposium on Discrete Algorithms, pages 964-970. SIAM, 2002.
[3] Avrim Blum, Vijay Kumar, Atri Rudra, and Felix Wu. Online learning in online auctions. In Proceedings of the Symposium on Discrete Algorithms, pages 202-204. SIAM, 2003.
[4] Matthew Cary, Aparna Das, Ben Edelman, Ioannis Giotis, Kurtis Heimerl, Anna R. Karlin, Claire Mathieu, and Michael Schwarz. Greedy bidding strategies for keyword auctions. In Proceedings of the 8th ACM Conference on Electronic Commerce, pages 262-271. ACM, 2007.
[5] Nicolo Cesa-Bianchi, Claudio Gentile, and Yishay Mansour. Regret minimization for reserve prices in second-price auctions. In Proceedings of the Symposium on Discrete Algorithms. SIAM, 2013.
[6] Benjamin Edelman and Michael Ostrovsky. Strategic bidder behavior in sponsored search auctions. Decision Support Systems, 43(1):192-198, 2007.
[7] Mohammad Taghi Hajiaghayi, Robert Kleinberg, and David C. Parkes. Adaptive limited-supply online auctions. In Proceedings of the 5th ACM Conference on Electronic Commerce, pages 71-80. ACM, 2004.
[8] Brendan Kitts and Benjamin Leblanc. Optimal bidding on keyword auctions. Electronic Markets, 14(3):186-201, 2004.
[9] Brendan Kitts, Parameshvyas Laxminarayan, Benjamin Leblanc, and Ryan Meech. A formal analysis of search auctions including predictions on click fraud and bidding tactics. In Workshop on Sponsored Search Auctions, 2005.
[10] Robert Kleinberg and Tom Leighton. The value of knowing a demand curve: Bounds on regret for online posted-price auctions. In Symposium on Foundations of Computer Science, pages 594-605. IEEE, 2003.
[11] Andres Munoz Medina and Mehryar Mohri. Learning theory and algorithms for revenue optimization in second price auctions with reserve. In Proceedings of the 31st International Conference on Machine Learning, pages 262-270, 2014.
[12] Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. Foundations of Machine Learning. MIT Press, 2012.
[13] David C. Parkes. Online mechanisms. In Noam Nisan, Tim Roughgarden, Eva Tardos, and Vijay Vazirani, editors, Algorithmic Game Theory. Cambridge University Press, 2007.
[14] Alexander Rakhlin, Ohad Shamir, and Karthik Sridharan. Making gradient descent optimal for strongly convex stochastic optimization. arXiv preprint arXiv:1109.5647, 2011.
[15] Alexander Rakhlin, Ohad Shamir, and Karthik Sridharan. Making gradient descent optimal for strongly convex stochastic optimization. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), pages 449-456, 2012.
[16] Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2002.
[17] Hal R. Varian and Jack Repcheck. Intermediate Microeconomics: A Modern Approach, volume 6. W. W. Norton & Company, New York, NY, 2010.
5,069 | 559 | Learning How To Teach
or
Selecting Minimal Surface Data
Davi Geiger
Siemens Corporate Research, Inc
755 College Rd. East
Princeton, NJ 08540
USA
Ricardo A. Marques Pereira
Dipartimento di Informatica
Universita di Trento
Via Inama 7, Trento, TN 38100
ITALY
Abstract
Learning a map from an input set to an output set is similar to the problem of reconstructing hypersurfaces from sparse data (Poggio and Girosi,
1990). In this framework, we discuss the problem of automatically selecting "minimal" surface data. The objective is to be able to approximately
reconstruct the surface from the selected sparse data. We show that this
problem is equivalent to the one of compressing information by data removal and the one of learning how to teach. Our key step is to introduce a
process that statistically selects the data according to the model. During
the process of data selection (learning how to teach) our system (teacher)
is capable of predicting the new surface, the approximated one provided
by the selected data. We concentrate on piecewise smooth surfaces, e.g.
images, and use mean field techniques to obtain a deterministic network
that is shown to compress image data.
1 Learning and surface reconstruction
Given dense input data that represents a hypersurface, how could we automatically select very few data points so as to be able to use these fewer data points (sparse data) to approximately reconstruct the hypersurface?
We will be using the term surface to refer to hypersurface (surface in multiple dimensions) throughout the paper.
It has been shown (Poggio and Girosi, 1990) that the problem of reconstructing a surface from sparse and noisy data is equivalent to the problem of learning from examples. For instance, learning how to add numbers can be cast as finding the map from $X = \{\text{pairs of numbers}\}$ to $F = \{\text{sums}\}$ from a set of noisy examples. The surface is $F(X)$ and the sparse and noisy data are the set of $N$ examples $\{(X_i, d_i)\}$, where $i = 0, 1, \ldots, N$ and $X_i = (a_i, b_i) \in X$, such that $a_i + b_i = d_i + \eta_i$ ($\eta_i$ being the noise term). Some a priori information about the surface, e.g. the smoothness one, is necessary for reconstruction.
Consider a set of $N$ input-output examples, $\{(X_i, d_i)\}$, and a form $\|Pf\|^2$ for the cost of the deviation of $f$, the approximated surface, from smoothness. $P$ is a differential operator and $\|\cdot\|$ is a norm (usually $L_2$). To find the surface $f$ that best fits (i) the data and (ii) the smoothness criteria is to solve the problem of minimizing the functional
$$V(f) = \sum_{i=0}^{N-1}\big(d_i - f(X_i)\big)^2 + \mu\|Pf\|^2.$$
Different methods of solving the functional can yield different types of networks. In particular, using the Green's function method gives supervised backprop-type networks (Poggio and Girosi, 1990), and using optimization techniques (like gradient descent) we obtain unsupervised (with feedback) types of networks.
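As a concrete toy instance (entirely our own construction, 1-D, with $P$ taken to be the discrete second-difference operator), the minimizer of $V(f)$ can be obtained by solving the normal equations directly:

import numpy as np

def reconstruct_1d(samples, n, mu):
    """Toy sketch of minimizing V(f) = sum_i (d_i - f(x_i))^2 + mu ||P f||^2
    on a 1-D grid of n points, with P the second-difference operator (a
    discrete stand-in for the smoothness penalty in the text).
    samples: list of (grid_index, observed_value) pairs."""
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i, d in samples:                     # data term
        A[i, i] += 1.0
        b[i] += d
    P = np.zeros((n - 2, n))                 # second differences
    for k in range(n - 2):
        P[k, k], P[k, k + 1], P[k, k + 2] = 1.0, -2.0, 1.0
    A += mu * P.T @ P                        # normal equations of V(f)
    return np.linalg.solve(A, b)

f = reconstruct_1d([(2, 1.0), (10, 3.0), (17, 2.0)], n=20, mu=1.0)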
2 Learning how to teach arithmetic operations
The problem of learning how to add and multiply is a simple one, and yet it provides insights into our approach of selecting the minimum set of examples.
Learning arithmetic operations: The surface given by the addition of two numbers, namely $f(x, y) = x + y$, is a plane passing through the origin. The multiplication surface, $f(x, y) = xy$, is hyperbolic. The a priori knowledge of the addition and multiplication surfaces can be expressed as a minimum of the functional
$$V(f) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\|\nabla^2 f(x, y)\|\,dx\,dy, \qquad \text{where} \qquad \nabla^2 f(x, y) = \Big(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\Big)f(x, y).$$
Other functions also minimize $V(f)$, like $f(x, y) = x^2 - y^2$, and so a few examples are necessary to learn how to add and multiply given the above prior knowledge. If the prior assumption considers a larger class of basis functions, then more examples will be required. Given $p$ input-output examples, $\{(x_i, y_i); d_i\}$, the learning problem of adding and multiplying can be cast as the optimization of
$$V(f) = \sum_{i=0}^{p-1}\big(f(x_i, y_i) - d_i\big)^2 + \mu\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\|\nabla^2 f(x, y)\|\,dx\,dy.$$
We now consider the problem of selecting the examples from the full surface data.
A sparse process for selecting data: Let us assume that the full set of data is given on a 2-dimensional lattice, so we have a finite amount of data ($N^2$ data points), with the input-output set being $\{(x_i, y_j); d_{ij}\}$, where $i, j = 0, 1, \ldots, N-1$. To select $p$ examples we introduce a sparse process $s$ that selects out data by modifying the cost function according to
$$V(f, s) = \sum_{i,j=0}^{N-1}(1 - s_{ij})\big(f(x_i, y_j) - d_{ij}\big)^2 + \mu\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\|\nabla^2 f(x, y)\|\,dx\,dy + \lambda\Big(p - \sum_{i,j=0}^{N-1}(1 - s_{ij})\Big)^2,$$
where $s_{ij} = 1$ selects out the data, and we have added the last term to assure that $p$ examples are selected. The data term forces noisy data to be thrown out first, and the second-order smoothness of $f$ reduces the need for many examples ($p \approx 10$) to learn these arithmetic operations. Learning $s$ is equivalent to learning how to select the examples, or learning how to teach (a toy sketch of this selection idea is given below). The system (teacher) has to learn a set of examples (sparse data) that contains all the "relevant" information. The redundant information can be "filled in" by the prior knowledge. Once the teacher has learned these selected examples, he, she or it (machine) presents them to the student, who with the a priori knowledge about surfaces is able to approximately learn the full input-output map (surface).
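The following toy sketch conveys the selection idea in 1-D, reusing reconstruct_1d from the sketch in Section 1. Note that it replaces the joint optimization over the binary field $s$ with a simple greedy heuristic, so it only illustrates what "learning which examples to keep" means; it is not the paper's algorithm.

import numpy as np

def select_examples(d, p, mu):
    """Greedily set s_i = 1 (discard) for the points whose removal least
    degrades the smooth reconstruction, until only p examples remain.
    Returns the indices of the kept examples (those with s_i = 0)."""
    n = len(d)
    keep = list(range(n))
    while len(keep) > p:
        errors = []
        for j in keep:
            trial = [k for k in keep if k != j]
            f = reconstruct_1d([(k, d[k]) for k in trial], n, mu)
            errors.append((np.sum((f - d) ** 2), j))
        _, j_star = min(errors)              # cheapest point to discard
        keep.remove(j_star)
    return keep

data = np.sin(np.linspace(0, 3, 20)); data[7] += 0.5   # a kink worth keeping
print(select_examples(data, p=5, mu=1.0))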
3 Teaching piecewise smooth surfaces
We first briefly introduce the weak membrane model, a coupled Markov random field for modeling piecewise smooth surfaces. Then we lay down the framework for learning to teach this surface.
3.1 Weak membrane model
Within the Bayesian approach, the a priori knowledge that surfaces are smooth (first-order smoothness) but not at the discontinuities has been analyzed by (Geman and Geman, 1984), (Blake and Zisserman, 1987), (Mumford and Shah, 1985), and (Geiger and Girosi, 1991). If we consider the noise to be white Gaussian, the final posterior probability becomes $P(f, l \mid g) = \frac{1}{Z}e^{-\beta V(f, l)}$, where
$$V(f, l) = \sum_{i,j}\big[(f_{ij} - g_{ij})^2 + \mu\|\nabla f\|_{ij}^2(1 - l_{ij}) + \gamma_{ij}l_{ij}\big]. \quad (1)$$
We represent surfaces by $f_{ij}$ at pixel $(i, j)$, and discontinuities by $l_{ij}$. The input data is $g_{ij}$, and $\|\nabla f\|_{ij}$ is the norm of the gradient at pixel $(i, j)$. $Z$ is a normalization constant, known as the partition function. $\beta$ is a global parameter of the model, inspired by thermodynamics, and $\mu$ and $\gamma_{ij}$ are parameters to be estimated. This model, when used for image segmentation, has been shown to give a good pattern of discontinuities and to eliminate the noise, suggesting that the piecewise smoothness assumption is valid for images.
3.2 Redundant data
We have assumed the surface to be smooth, and therefore there is redundant information within smooth regions. We then propose a model that selects the "relevant" information according to two criteria.
1. Discontinuity data: Discontinuities usually capture relevant information, and it is possible to roughly approximate surfaces using just edge data (see Geiger and Pereira, 1990). A limitation of using just edge data is that an oversmoothed surface is represented.
2. Texture data: Data points that have significant gradients (not large enough to be a discontinuity) are here considered texture data. Keeping texture data allows us to distinguish between flat surfaces, as for example a clean sky in an image, and textured surfaces, as for example the leaves in a tree (see figure 2).
3.3 The sparse process
Again, our proposal is first to extend the weak membrane model by including an additional binary field, the sparse process $s$, that is 1 when data is selected out and 0 otherwise. There are natural connections between the process $s$ and robust statistics (Huber, 1981), as discussed in (Geiger and Yuille, 1990) and (Geiger and Pereira, 1991). We modify (1) by considering (see also Geiger and Pereira, 1990)
$$V(f, l, s) = \sum_{i,j}\big[(1 - s_{ij})(f_{ij} - g_{ij})^2 + \mu\|\nabla f\|_{ij}^2(1 - l_{ij}) + \eta_{ij}s_{ij} + \gamma_{ij}l_{ij}\big]. \quad (2)$$
where we have introduced the term $\eta_{ij}s_{ij}$ to keep some data; otherwise $s_{ij} = 1$ everywhere. If the data term is too large, the process $s = 1$ can suppress it. We will now assume that the data is noise-free, or that the noise has already been smoothed out. We then want to find which data points ($s = 0$) are necessary to keep in order to reconstruct $f$.
3.4 Mean field equations and unsupervised networks
To impose the discontinuity data constraint we use the hard constraint technique (Geiger and Yuille, 1990, and its references). We do not allow states that throw out data ($s_{ij} = 1$) at an edge location ($l_{ij} = 1$). More precisely, within the statistical framework we reduce the possible states for the processes $s$ and $l$ to $s_{ij}l_{ij} = 0$, therefore excluding the state $(s_{ij} = 1, l_{ij} = 1)$. Applying the saddle point approximation, a well known mean field technique (Geiger and Girosi, 1989, and its references), on the field $f$, we can compute the partition function
$$Z = \sum_{f \in \{0,\ldots,255\}^{N^2}}\;\sum_{s,l \in \{0,1\}^{N^2}:\,sl=0} e^{-\beta V(f, l, s)} \approx \sum_{s,l \in \{0,1\}^{N^2}:\,sl=0} e^{-\beta V(\bar f, l, s)} = \prod_{ij} Z_{ij},$$
$$Z_{ij} = e^{-\beta[\gamma_{ij} + (\bar f_{ij} - g_{ij})^2]} + e^{-\beta[\mu\|\nabla\bar f\|_{ij}^2 + \eta_{ij}]} + e^{-\beta[\mu\|\nabla\bar f\|_{ij}^2 + (\bar f_{ij} - g_{ij})^2]} \quad (3)$$
where $\bar f$ maximizes $Z$. After applying mean field techniques we obtain the following equations for the processes $l$ and $s$:
$$\bar l_{ij} = \frac{e^{-\beta[\gamma_{ij} + (\bar f_{ij} - g_{ij})^2]}}{Z_{ij}}, \qquad \bar s_{ij} = \frac{e^{-\beta[\mu\|\nabla\bar f\|_{ij}^2 + \eta_{ij}]}}{Z_{ij}}, \quad (4)$$
and, using the definition $\|\nabla f\|_{ij}^2 = (f_{i,j+1} - f_{i+1,j})^2 + (f_{i+1,j+1} - f_{i,j})^2$, the mean field self-consistent equation (Geiger and Pereira, 1991) becomes
$$(1 - \bar s_{ij})(\bar f_{ij} - g_{ij}) = -\mu\big\{K_{ij}(1 - \bar l_{ij}) + K_{i-1,j-1}(1 - \bar l_{i-1,j-1}) + M_{i-1,j}(1 - \bar l_{i-1,j}) + M_{i,j-1}(1 - \bar l_{i,j-1})\big\}, \quad (5)$$
where $K_{ij} = \bar f_{i+1,j+1} - \bar f_{i,j}$ and $M_{ij} = \bar f_{i+1,j} - \bar f_{i,j+1}$. The set of coupled equations (4)-(5) can be mapped to an unsupervised network, which we call a minimal surface representation network (MSRN), and can efficiently be solved on a massively parallel machine. Notice that $\bar s_{ij} + \bar l_{ij} \le 1$, because of the hard constraint, and in the limit of $\beta \to \infty$ the processes $s$ and $l$ become either 0 or 1. In order to throw away redundant (smooth) data while keeping some of the texture, we adapt the cost $\eta_{ij}$ according to the gradient of the surface. More precisely, we set
$$\eta_{ij} = \eta\big[(\tilde\Delta_{ij}g)^2 + (\hat\Delta_{ij}g)^2\big], \quad (6)$$
where $(\tilde\Delta_{ij}g)^2 = (g_{i+1,j} - g_{i-1,j})^2$ and $(\hat\Delta_{ij}g)^2 = (g_{i,j+1} - g_{i,j-1})^2$. The smoother the data, the lower the cost to discard it ($s_{ij} = 1$). In the limit of $\eta \to 0$ only edge data ($l_{ij} = 1$) is kept, since from (4) $\lim_{\eta\to 0}\bar s_{ij} = 1 - \bar l_{ij}$.
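To convey the structure of the resulting network, here is a schematic 1-D Python sweep of the coupled updates. The update forms are written from the Boltzmann weights of (3) plus a small gradient step on the energy; treat the sketch as illustrative rather than as the exact MSRN iteration.

import numpy as np

def msrn_step(f, g, mu, eta0, gamma, beta):
    """One schematic sweep: mean-field estimates of l and s from the three
    Boltzmann weights in Z_ij (edge; discarded; kept-and-smooth), then a
    gradient step on sum (1-s)(f-g)^2 + mu ||grad f||^2 (1-l)."""
    grad2 = np.zeros_like(f)
    grad2[:-1] = (f[1:] - f[:-1]) ** 2                 # ||grad f||^2, 1-D
    eta = eta0 * np.gradient(g) ** 2                   # discard cost, cf. (6)
    e_edge = np.exp(-beta * (gamma + (f - g) ** 2))
    e_drop = np.exp(-beta * (mu * grad2 + eta))
    e_keep = np.exp(-beta * (mu * grad2 + (f - g) ** 2))
    Z = e_edge + e_drop + e_keep
    l_bar, s_bar = e_edge / Z, e_drop / Z
    df = 2 * (1 - s_bar) * (f - g)                     # data-term gradient
    smooth = np.zeros_like(f)
    smooth[:-1] += 2 * (f[:-1] - f[1:]) * (1 - l_bar[:-1])
    smooth[1:] += 2 * (f[1:] - f[:-1]) * (1 - l_bar[:-1])
    f_new = f - 0.1 * (df + mu * smooth)
    return f_new, l_bar, s_bar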
3.5 Learning how to teach and the approximated surface
With the mean field equations we compute the approximated surface $\bar f$ simultaneously with $\bar s$ and $\bar l$. Thus, while learning the process $s$ (the selected data), the system also predicts the approximated surface $\bar f$ that the student will learn from the selected examples. By changing the parameters, say $\mu$ and $\eta$, the teacher can choose the optimal parameters so as to select less data while preserving the quality of the approximated surface. Once $s$ has been learned, the system feeds only the selected data points to the learner machinery. We actually relax this condition and feed the learner with the selected data and the corresponding discontinuity map ($l$). Notice that in the limit of $\eta \to 0$ the selected data points coincide with the discontinuities ($l = 1$).
4 Results: Image compression
We show the results of the algorithm for learning the minimal representation of images. The algorithm is capable of image compression, and one advantage over the cosine transform (a traditional method) is that it does not have the problem of breaking the images into blocks. However, a more careful comparison is needed.
4.1 Learning s, f, and l
To analyze the quality of the surface approximation, we show in figure 1 the performance of the network as we vary the threshold $\eta$. We first show a face image and the line process, and then the predicted approximated surfaces together with the corresponding sparse process $s$.
4.2 Reconstruction, generalization, or "the student performance"
We can now test how the student learns from the selected examples, that is, how good the surface reconstruction from the selected data is. We reconstruct the approximate surfaces by running (5) again, but with the selected surface data points ($s_{ij} = 0$) and the discontinuities ($l_{ij} = 1$) given from the previous step. We show in figure 2f that we indeed obtain the predicted surfaces (the student has learned).
References:
E. B. Baum and Y. Lyuu. 1991. The transition to perfect generalization in perceptrons. Neural Computation, vol. 3, no. 3, pp. 386-401.
A. Blake and A. Zisserman. 1987. Visual Reconstruction. MIT Press, Cambridge, Mass.
D. Geiger and F. Girosi. 1989. Coupled Markov random fields and mean field theory. Advances in Neural Information Processing Systems 2, Morgan Kaufmann, D. Touretzky (ed.).
D. Geiger and A. Yuille. 1991. A common framework for image segmentation. Int. Jour. Comp. Vis., vol. 6:3, pp. 227-243.
D. Geiger and F. Girosi. 1991. Parallel and deterministic algorithms for MRFs: surface reconstruction. PAMI, May 1991, vol. PAMI-13, 5, pp. 401-412.
D. Geiger and R. M. Pereira. 1991. The outlier process. IEEE Workshop on Neural Networks for Signal Processing, Princeton, NJ.
S. Geman and D. Geman. 1984. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. PAMI, vol. PAMI-6, pp. 721-741.
J. J. Hopfield. 1982. Neural networks and physical systems with emergent collective computational abilities. Proc. Nat. Acad. Sci., 79, pp. 2554-2558.
P. J. Huber. 1981. Robust Statistics. John Wiley and Sons, New York.
D. Mumford and J. Shah. 1985. Boundary detection by minimizing functionals, I. Proc. IEEE Conf. on Computer Vision & Pattern Recognition, San Francisco, CA.
T. Poggio and F. Girosi. 1990. Regularization algorithms for learning that are equivalent to multilayer networks. Science, vol. 247, pp. 978-982.
D. E. Rumelhart, G. Hinton and R. J. Williams. 1986. Learning internal representations by error backpropagation. Nature, 323, 533.
[Figure 1 appears here: six image panels (a)-(f).]
Figure 1: (a) 8-bit image of 128 x 128 pixels. (b) The edge map for $\mu = 1.0$, $\gamma_{ij} = 100.0$, after 200 iterations and final $\beta = 25$ ($\approx \infty$). (c) The approximated image for $\mu = 0.01$, $\gamma_{ij} = 1.0$ and $\eta = 0.0009$. (d) The corresponding sparse process. (e) The approximated image for $\mu = 0.01$, $\gamma_{ij} = 1.0$ and $\eta = 0.0001$. (f) The corresponding sparse process.
5,070 | 5,590 | Universal Option Models
Hengshuai Yao, Csaba Szepesvári, Rich Sutton, Joseph Modayil
Department of Computing Science
University of Alberta
Edmonton, AB, Canada, T6H 4M5
hengshua,szepesva,sutton,[email protected]
Shalabh Bhatnagar
Department of Computer Science and Automation
Indian Institute of Science
Bangalore-560012, India
[email protected]
Abstract
We consider the problem of learning models of options for real-time abstract planning, in the setting where reward functions can be specified at any time and their
expected returns must be efficiently computed. We introduce a new model for
an option that is independent of any reward function, called the universal option
model (UOM). We prove that the UOM of an option can construct a traditional
option model given a reward function, and also supports efficient computation of
the option-conditional return. We extend the UOM to linear function approximation, and we show the UOM gives the TD solution of option returns and the
value function of a policy over options. We provide a stochastic approximation
algorithm for incrementally learning UOMs from data and prove its consistency.
We demonstrate our method in two domains. The first domain is a real-time strategy game, where the controller must select the best game unit to accomplish a
dynamically-specified task. The second domain is article recommendation, where
each user query defines a new reward function and an article?s relevance is the expected return from following a policy that follows the citations between articles.
Our experiments show that UOMs are substantially more efficient than previously
known methods for evaluating option returns and policies over options.
1 Introduction
Conventional methods for real-time abstract planning over options in reinforcement learning require
a single pre-specified reward function, and these methods are not efficient in settings with multiple
reward functions that can be specified at any time. Multiple reward functions arise in several contexts. In inverse reinforcement learning and apprenticeship learning there is a set of reward functions
from which a good reward function is extracted [Abbeel et al., 2010, Ng and Russell, 2000, Syed,
2010]. Some system designers iteratively refine their provided reward functions to obtain desired
behavior, and will re-plan in each iteration. In real-time strategy games, several units on a team can
share the same dynamics but have different time-varying capabilities, so selecting the best unit for
a task requires knowledge of the expected performance for many units. Even article recommendation can be viewed as a multiple-reward planning problem, where each user query has an associated
reward function and the relevance of an article is given by walking over the links between the articles [Page et al., 1998, Richardson and Domingos, 2002]. We propose to unify the study of such
problems within the setting of real-time abstract planning, where a reward function can be specified at any time and the expected option-conditional return for a reward function must be efficiently computed.
Abstract planning, or planning with temporal abstractions, enables one to make abstract decisions
that involve sequences of low level actions. Options are often used to specify action abstraction
[Precup, 2000, Sorg and Singh, 2010, Sutton et al., 1999]. An option is a course of temporally
extended actions, which starts execution at some states, and follows a policy in selecting actions
until it terminates. When an option terminates, the agent can start executing another option. The
traditional model of an option takes in a state and predicts the sum of the rewards in the course till
termination, and the probability of terminating the option at any state. When the reward function is
changed, abstract planning with the traditional option model has to start from scratch.
We introduce universal option models (UOM) as a solution to this problem. The UOM of an option
has two parts. A state prediction part, as in the traditional option model, predicts the states where
the option terminates. An accumulation part, new to the UOM, predicts the occupancies of all the
states by the option after it starts execution. We also extend UOMs to linear function approximation,
which scales to problems with a large state space. We show that the UOM outperforms existing
methods in two domains.
2 Background
A finite Markov Decision Process (MDP) is defined by a discount factor $\gamma \in (0, 1)$, the state set $S$, the action set $A$, the immediate rewards $\langle R^a\rangle$, and transition probabilities $\langle P^a\rangle$. We assume that the numbers of states and actions are both finite. We also assume the states are indexed by integers, i.e., $S = \{1, 2, \ldots, N\}$, where $N$ is the number of states. The immediate reward function $R^a : S \times S \to \mathbb{R}$ for a given action $a \in A$ and a pair of states $(s, s') \in S \times S$ gives the mean immediate reward underlying the transition from $s$ to $s'$ while using $a$. The transition probability function is a function $P^a : S \times S \to [0, 1]$, and for $(s, s') \in S \times S$, $a \in A$, $P^a(s, s')$ gives the probability of arriving at state $s'$ given that action $a$ is executed at state $s$.
A (stationary, Markov) policy $\pi$ is defined as $\pi : S \times A \to [0, 1]$, where $\sum_{a \in A}\pi(s, a) = 1$ for any $s \in S$. The value of a state $s$ under a policy $\pi$ is defined as the expected return given that one starts executing $\pi$ from $s$:
$$V^\pi(s) = \mathbb{E}_{s,\pi}\{r_1 + \gamma r_2 + \gamma^2 r_3 + \cdots\}.$$
Here $(r_1, r_2, \ldots)$ is a process with the following properties: $s_0 = s$ and, for $k \ge 0$, $s_{k+1}$ is sampled from $P^{a_k}(s_k, \cdot)$, where $a_k$ is the action selected by policy $\pi$ and $r_{k+1}$ is such that its conditional mean, given $s_k, a_k, s_{k+1}$, is $R^{a_k}(s_k, s_{k+1})$. The definition works also in the case when at any time step $t$ the policy is allowed to take into account the history $s_0, a_1, r_1, s_1, a_2, r_2, \ldots, s_k$ in coming up with $a_k$. We will also assume that the conditional variance of $r_{k+1}$ given $s_k$, $a_k$ and $s_{k+1}$ is bounded.
The terminology, ideas and results in this section are based on the work of [Sutton et al., 1999] unless otherwise stated. An option, $o = o\langle\pi, \beta\rangle$, has two components: a policy $\pi$ and a continuation function $\beta : S \to [0, 1]$. The latter maps a state into the probability of continuing the option from that state. An option $o$ is executed as follows. At time step $k$, when visiting state $s_k$, the next action $a_k$ is selected according to $\pi(s_k, \cdot)$. The environment then transitions to the next state $s_{k+1}$, and a reward $r_{k+1}$ is observed.1 The option terminates at the new state $s_{k+1}$ with probability $1 - \beta(s_{k+1})$. Otherwise it continues: a new action is chosen from the policy of the option, etc. When one option terminates, another option can start.
The option model for option $o$ helps with planning. Formally, the model of option $o$ is a pair $\langle R^o, p^o\rangle$, where $R^o$ is the so-called option return and $p^o$ is the so-called (discounted) terminal distribution of option $o$. In particular, $R^o : S \to \mathbb{R}$ is a mapping such that for any state $s$, $R^o(s)$ gives the total expected discounted return until the option terminates:
$$R^o(s) = \mathbb{E}_{s,o}\big[r_1 + \gamma r_2 + \cdots + \gamma^{T-1}r_T\big],$$
where $T$ is the random termination time of the option, assuming that the process $(s_0, r_1, s_1, r_2, \ldots)$ starts at time 0 at state $s_0 = s$ (initiation), and at every time step the policy underlying $o$ is followed to get the reward and the next state until termination.
1 Here, $s_{k+1}$ is sampled from $P^{a_k}(s_k, \cdot)$ and the mean of $r_{k+1}$ is $R^{a_k}(s_k, s_{k+1})$.
The mapping $p^o : S \times S \to [0, \infty)$ is a function
that, for any given $s, s' \in S$, gives the discounted probability of terminating at state $s'$ provided that the option is followed from the initial state $s$:
$$p^o(s, s') = \mathbb{E}_{s,o}\big[\gamma^T \mathbb{I}_{\{s_T = s'\}}\big] = \sum_{k=1}^{\infty}\gamma^k\,\mathbb{P}_{s,o}\{s_T = s', T = k\}. \quad (1)$$
Here, $\mathbb{I}_{\{\cdot\}}$ is the indicator function, and $\mathbb{P}_{s,o}\{s_T = s', T = k\}$ is the probability of terminating the option at $s'$ after $k$ steps away from $s$.
A semi-MDP (SMDP) is like an MDP, except that it allows multi-step transitions between states. An MDP with a fixed set of options gives rise to an SMDP, because the execution of options lasts multiple time steps. Given a set of options $O$, an option policy is a mapping $h : S \times O \to [0, 1]$ such that $h(s, o)$ is the probability of selecting option $o$ at state $s$ (provided the previous option has terminated). We shall also call these policies high-level policies. Note that a high-level policy selects options, which in turn select actions. Thus a high-level policy gives rise to a standard MDP policy (albeit one that needs to remember which option was selected the last time, i.e., a history-dependent policy). Let flat($h$) denote the standard MDP policy of a high-level policy $h$. The value function underlying $h$ is defined as that of flat($h$): $V^h(s) = V^{flat(h)}(s)$, $s \in S$. The process of constructing flat($h$) given $h$ and the options $O$ is the flattening operation. The model of options is constructed in such a way that if we think of the option return as the immediate reward obtained when following the option, and if we think of the terminal distribution as transition probabilities, then Bellman's equations will formally hold for the tuple $\langle\bar\gamma = 1, S, O, \langle R^o\rangle, \langle p^o\rangle\rangle$.
3 Universal Option Models (UOMs)
In this section, we define the UOM for an option, and prove a universality theorem stating that the traditional model of an option can be constructed from the UOM and a reward vector of the option. The goal of UOMs is to make models of options that are independent of the reward function. We use the adjective "universal" because the option model becomes universal with respect to the rewards. In the case of MDPs, it is well known that the value function of a policy $\pi$ can be obtained from the so-called discounted occupancy function underlying $\pi$, e.g., see [Barto and Duff, 1994]. This technique has been used in inverse reinforcement learning to compute a value function with basis reward functions [Ng and Russell, 2000]. The generalization to options is as follows. First we introduce the discounted state occupancy function, $u^o$, of option $o\langle\pi, \beta\rangle$:
$$u^o(s, s') = \mathbb{E}_{s,o}\Big[\sum_{k=0}^{T-1}\gamma^k\,\mathbb{I}_{\{s_k = s'\}}\Big]. \quad (2)$$
Then,
$$R^o(s) = \sum_{s' \in S} r_\pi(s')\,u^o(s, s'),$$
where $r_\pi$ is the expected immediate reward vector under $\pi$ and $\langle R^a\rangle$, i.e., for any $s \in S$, $r_\pi(s) = \mathbb{E}_{s,\pi}[r_1]$. For convenience, we shall also treat $u^o(s, \cdot)$ as a vector and write $u^o(s)$ to denote it as a vector. To clarify the independence of $u^o$ from the reward function, it is helpful to first note that every MDP can be viewed as the combination of an immediate reward function, $\langle R^a\rangle$, and a reward-less MDP, $M = \langle\gamma, S, A, \langle P^a\rangle\rangle$.
Definition 1 The UOM of option o in a reward-less MDP is defined by ?uo , po ?, where uo is the option?s discounted state occupancy function, defined by (2), and po is the option?s discounted terminal
state distribution, defined by (1).
The main result of this section is the following theorem. All the proofs of the theorems in this paper can be found in an extended paper.

Theorem 1 Fix an option $o = o\langle\pi, \beta\rangle$ in a reward-less MDP M, and let $u^o$ be the occupancy function underlying o in M. Let $\langle R^a\rangle$ be some immediate reward function. Then, for any state $s \in S$, the return of option o with respect to M and $\langle R^a\rangle$ is given by $R^o(s) = (u^o(s))^\top r^\pi$.
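As a concrete illustration of Theorem 1, using the same hypothetical tabular interface as in the sketch above, one can estimate the occupancy function $u^o(s, \cdot)$ of eq. (2) once by Monte Carlo, after which the option return for any reward vector is a single dot product; this sketch is ours, not the authors' code.

```python
import numpy as np

def estimate_occupancy(env, option, s, n_states, gamma=0.9, n_rollouts=2000):
    """Monte Carlo estimate of u^o(s, .) from eq. (2)."""
    u = np.zeros(n_states)
    for _ in range(n_rollouts):
        state, k = s, 0
        while True:
            u[state] += gamma**k / n_rollouts           # s_k contributes gamma^k, k < T
            state = env.step(state, option.act(state))
            k += 1
            if option.terminates(state):                # s_T itself is excluded
                break
    return u

# Theorem 1: R^o(s) = u^o(s)^T r_pi for ANY reward vector r_pi, e.g.
#   u = estimate_occupancy(env, option, s=0, n_states=25)
#   R_o = u @ r_pi      # no re-simulation needed when the reward changes
```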
4 UOMs with Linear Function Approximation
In this section, we introduce linear universal option models which use linear function approximation to compactly represent reward independent option-models over a potentially large state space.
In particular, we build upon previous work where the approximate solution has been obtained by
solving the so-called projected Bellman equations.
We assume that we are given a function $\phi : S \to \mathbb{R}^n$, which maps any state $s \in S$ into its n-dimensional feature representation $\phi(s)$. Let $V_\theta : S \to \mathbb{R}$ be defined by $V_\theta(s) = \theta^\top \phi(s)$, where the vector $\theta$ is a so-called weight vector.² Fix an initial distribution $\nu$ over the states and an option $o = o\langle\pi, \beta\rangle$. Given a reward function $R = \langle R^a\rangle$, the TD(0) approximation $V_{\theta^{(TD,R)}}$ to $R^o$ is defined as the solution to the following projected Bellman equations [Sutton and Barto, 1998]:
$$\mathbb{E}_{\nu,o}\Big[\sum_{k=0}^{T-1} \{r_{k+1} + \gamma V_\theta(s_{k+1}) - V_\theta(s_k)\}\,\phi(s_k)\Big] = 0. \qquad (3)$$
Here $s_0$ is sampled from $\nu$, and the random variables $(r_1, s_1, r_2, s_2, \ldots)$ and T (the termination time) are obtained by following o from this initial state until termination. It is easy to see that if $\gamma = 0$ then $V_{\theta^{(TD,R)}}$ becomes the least-squares approximation $V_{f^{(LS,R)}}$ to the immediate rewards R under o given the features $\phi$. The least-squares approximation to R is given by $f^{(LS,R)} = \arg\min_f J(f)$, where $J(f) = \mathbb{E}_{\nu,o}\big[\sum_{k=0}^{T-1} \{r_{k+1} - f^\top \phi(s_k)\}^2\big]$. We restrict our attention to this TD(0) solution in this paper, and refer to f as an (approximate) immediate reward model.
The TD(0)-based linear UOM (in short, linear UOM) underlying o (and $\nu$) is a pair of $n \times n$ matrices $(U^o, M^o)$, which generalize the tabular model $(u^o, p^o)$. Given the same sequence as used in defining the approximation to $R^o$ (equation 3), $U^o$ is the solution to the following system of linear equations:
$$\mathbb{E}_{\nu,o}\Big[\sum_{k=0}^{T-1} \{\phi(s_k) + \gamma U^o \phi(s_{k+1}) - U^o \phi(s_k)\}\,\phi(s_k)^\top\Big] = 0.$$
Let $(U^o)^\top = [u_1, \ldots, u_n]$, $u_i \in \mathbb{R}^n$. If we introduce an artificial "reward" function, $\hat r_i = \phi_i$, which is the ith feature, then $u_i$ is the weight vector such that $V_{u_i}$ is the TD(0) approximation to the return of o for the artificial reward function. Note that if we use a tabular representation, then $u_{i,s} = u^o(s, i)$ holds for all $s, i \in S$. Therefore our extension to linear function approximation is backward consistent with the UOM definition in the tabular case. However, this alone would not be a satisfactory justification of this choice of linear UOMs. The following theorem shows that, just like the UOMs of the previous section, the $U^o$ matrix allows the separation of the reward from the option models without losing information.
Theorem 2 Fix an option $o = o\langle\pi, \beta\rangle$ in a reward-less MDP $M = \langle\gamma, S, A, \langle P^a\rangle\rangle$, an initial state distribution $\nu$ over the states S, and a function $\phi : S \to \mathbb{R}^n$. Let U be the linear UOM of o w.r.t. $\phi$ and $\nu$. Pick some reward function R and let $V_{\theta^{(TD,R)}}$ be the TD(0) approximation to the return $R^o$. Then, for any $s \in S$,
$$V_{\theta^{(TD,R)}}(s) = (f^{(LS,R)})^\top (U\phi(s)).$$
The significance of this result is that it shows that to compute the TD approximation of an option return corresponding to a reward function R, it suffices to find $f^{(LS,R)}$ (the least-squares approximation of the expected one-step reward under the option and the reward function R), provided one is given the U matrix of the option. We expect that finding a least-squares approximation (solving a regression problem) is easier than solving a TD fixed-point equation. Note that the result also holds for standard policies, but we do not explore this direction in this paper.
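In code, the reduction in Theorem 2 is a least-squares fit followed by a matrix-vector product. This sketch is ours; the array `Phi`, stacking the features $\phi(s_k)^\top$ of visited states as rows, and `r`, holding the observed one-step rewards, are hypothetical inputs.

```python
import numpy as np

def td_option_value(Phi, r, U, phi_s):
    """Theorem 2: V(s) = f^T (U phi(s)), with f the least-squares reward model."""
    f = np.linalg.lstsq(Phi, r, rcond=None)[0]   # immediate reward model f^(LS,R)
    return f @ (U @ phi_s)
```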
The definition of $M^o$. The matrix $M^o$ serves as a state predictor, and we call $M^o$ the transient matrix associated with option o. Given a feature vector $\phi$, $M^o\phi$ predicts the (discounted) expected feature vector where the option stops. When option o is started from state s and stopped at state $s_T$ in T time steps, we update an estimate of $M^o$ by
$$M^o \leftarrow M^o + \alpha\,(\gamma^T \phi(s_T) - M^o \phi(s))\,\phi(s)^\top.$$
²Note that the subscript in $V_\theta$ always means the TD weight vector throughout this paper.
Formally, $M^o$ is the solution to the associated linear system,
$$\mathbb{E}_{\nu,o}\left[\gamma^T \phi(s_T)\phi(s)^\top\right] = M^o\, \mathbb{E}_{\nu,o}\left[\phi(s)\phi(s)^\top\right]. \qquad (4)$$
Notice that $M^o$ is thus just the least-squares solution of the problem when $\gamma^T\phi(s_T)$ is regressed on $\phi(s)$, given that we know that option o is executed. Again, this way we obtain the terminal distribution of option o in the tabular case.
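Because eq. (4) is an ordinary least-squares condition, $M^o$ can also be fit in batch from logged option executions. The sketch below is ours; the arrays `Phi0` (rows $\phi(s)^\top$ at option starts) and `PhiT` (matching rows $\gamma^T \phi(s_T)^\top$ at termination) are hypothetical.

```python
import numpy as np

def fit_transient_matrix(Phi0, PhiT):
    """Least-squares solution of eq. (4): regress gamma^T phi(s_T) on phi(s)."""
    # lstsq solves Phi0 @ W = PhiT; transposing W gives M with M phi(s) = gamma^T phi(s_T).
    return np.linalg.lstsq(Phi0, PhiT, rcond=None)[0].T
```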
A high-level policy h defines a Markov chain over $S \times O$. Assume that this Markov chain has a unique stationary distribution, $\mu_h$. Let $(s, o) \sim \mu_h$ be a draw from this stationary distribution. Our goal is to find an option model that can be used to compute a TD approximation to the value function of a high-level policy h (flattened) over a set of options O. The following theorem shows that the value function of h can be computed from option returns and transient matrices.

Theorem 3 Let $V_\theta(s) = \phi(s)^\top\theta$. Under the above conditions, if $\theta$ solves
$$\mathbb{E}_{\mu_h}\big[(R^o(s) + (M^o\phi(s))^\top\theta - \phi(s)^\top\theta)\,\phi(s)\big] = 0 \qquad (5)$$
then $V_\theta$ is the TD(0) approximation to the value function of h.
Recall that Theorem 2 states that the U matrices can be used to compute the option returns given
an arbitrary reward function. Thus given a reward function, the U and M matrices are all that one
would need to compute the TD solution of the high-level policy. The merit of U and M is that they
are reward independent. Once they are learned, they can be saved and used for different reward
functions for different situations at different times.
5 Learning and Planning with UOMs
In this section we give incremental, TD-style algorithms for learning and planning with linear
UOMs. We start by describing the learning of UOMs while following some high-level policy h,
and then describe a Dyna-like algorithm that estimates the value function of h with learned UOMs
and an immediate reward model.
5.1 Learning Linear UOMs
Assume that we are following a high-level policy h over a set of options O, and that we want to estimate linear UOMs for the options in O. Let the trajectory generated by following this high-level policy be $\ldots, s_k, q_k, o_k, a_k, s_{k+1}, q_{k+1}, \ldots$. Here, $q_k = 1$ is the indicator for the event that option $o_{k-1}$ is terminated at state $s_k$, and so $o_k \sim h(s_k, \cdot)$. Also, when $q_k = 0$, $o_k = o_{k-1}$. Upon the transition from $s_k$ to $s_{k+1}, q_{k+1}$, the matrix $U^{o_k}$ is updated as follows:
$$U^{o_k}_{k+1} = U^{o_k}_k + \alpha^{o_k}_k\, \delta_{k+1}\, \phi(s_k)^\top, \quad\text{where}\quad \delta_{k+1} = \phi(s_k) + \gamma\, U^{o_k}_k \phi(s_{k+1})\, I\{q_{k+1}=0\} - U^{o_k}_k \phi(s_k),$$
and $\alpha^{o_k}_k \ge 0$ is the learning rate at time k associated with option $o_k$. Note that when option $o_k$ is terminated, the temporal difference $\delta_{k+1}$ is modified so that the next predicted value is zero.
The $M^o$ matrices are updated using the least-mean-square algorithm. In particular, matrix $M^{o_k}$ is updated when option $o_k$ is terminated at time k + 1, i.e., when $q_{k+1} = 1$. In the update we need the feature ($\bar\phi$) of the state which was visited at the time option $o_k$ was selected, and also the time elapsed since then ($\tau$):
$$M^{o_k}_{k+1} = M^{o_k}_k + \tilde\alpha^{o_k}_k\, I\{q_{k+1}=1\}\,\{\gamma^{\tau_k}\phi(s_{k+1}) - M^{o_k}_k \bar\phi_k\}\,\bar\phi_k^\top,$$
$$\bar\phi_{k+1} = I\{q_{k+1}=0\}\,\bar\phi_k + I\{q_{k+1}=1\}\,\phi(s_{k+1}), \qquad \tau_{k+1} = I\{q_{k+1}=0\}\,\tau_k + 1.$$
These variables are initialized to $\tau_0 = 0$ and $\bar\phi_0 = \phi(s_0)$.
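The two updates are a few lines of NumPy. The sketch below is ours, showing one transition $(s_k \to s_{k+1})$ for the current option's matrices; `q_k1` indicates termination at $s_{k+1}$, and we discount the termination target by the elapsed option duration (cf. eq. (4)).

```python
import numpy as np

def uom_transition_update(U, M, phi_k, phi_k1, q_k1, phi_bar, tau,
                          alpha=0.1, alpha_m=0.1, gamma=0.9):
    """One incremental update of U^{o_k} and, on termination, of M^{o_k}."""
    cont = 0.0 if q_k1 else 1.0
    delta = phi_k + gamma * cont * (U @ phi_k1) - U @ phi_k
    U = U + alpha * np.outer(delta, phi_k)
    tau += 1                                     # steps taken since the option began
    if q_k1:
        err = gamma**tau * phi_k1 - M @ phi_bar
        M = M + alpha_m * np.outer(err, phi_bar)
        phi_bar, tau = phi_k1, 0                 # restart accumulators for next option
    return U, M, phi_bar, tau
```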
The following theorem states the convergence of the algorithm.
Theorem 4 Assume that the stationary distribution of h is unique, all options in O terminate with probability one, and all options in O are selected at some state with positive probability.³ If the step-sizes of the options are decreased towards zero so that the Robbins-Monro conditions hold for them, i.e., $\sum_{i(k)} \alpha^o_{i(k)} = \infty$, $\sum_{i(k)} (\alpha^o_{i(k)})^2 < \infty$, and $\sum_{j(k)} \tilde\alpha^o_{j(k)} = \infty$, $\sum_{j(k)} (\tilde\alpha^o_{j(k)})^2 < \infty$,⁴ then for any $o \in O$, $M^o_k \to M^o$ and $U^o_k \to U^o$ with probability one, where $(U^o, M^o)$ are defined in the previous section.
5.2 Learning Reward Models
In conventional settings, a single reward signal will be contained in the trajectory when following the high-level policy, $\ldots, s_k, q_k, o_k, a_k, r_{k+1}, s_{k+1}, q_{k+1}, \ldots$. We can learn for each option an immediate reward model for this reward signal. For example, $f^{o_k}$ is updated using the least-mean-square rule:
$$f^{o_k}_{k+1} = f^{o_k}_k + \hat\alpha^{o_k}_k\, I\{q_{k+1}=0\}\,\{r_{k+1} - (f^{o_k}_k)^\top \phi(s_k)\}\,\phi(s_k).$$
In other settings, immediate reward models can be constructed in different ways. For example, more
than one reward signal can be of interest, so multiple immediate reward models can be learned in
parallel. Moreover, such additional reward signals might be provided at any time. In some settings,
an immediate reward model for a reward function can be provided directly from knowledge of the
environment and features where the immediate reward model is independent of the option.
5.3 Policy Evaluation with UOMs and Reward Models
Consider the process of policy evaluation for a high-level policy over options from a given set of UOMs when learning a reward model. When starting from a state s with feature vector $\phi(s)$ and following option o, the return $R^o(s)$ is estimated from the reward model $f^o$ and the expected feature occupancy matrix $U^o$ by $R^o(s) \approx (f^o)^\top U^o \phi(s)$. The TD(0) approximation to the value function of a high-level policy h can then be estimated online from Theorem 3. Interleaving updates of the reward model learning with these planning steps for h gives a Dyna-like algorithm.
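A single Dyna-style planning update then combines a sampled option's learned $(U^o, M^o, f^o)$ with the TD condition (5); the following sketch is ours.

```python
import numpy as np

def planning_update(theta, U, M, f, phi, alpha=0.1):
    """One TD(0) planning step toward the fixed point of eq. (5)."""
    R_o = f @ (U @ phi)                          # option return via Theorem 2
    td_error = R_o + (M @ phi) @ theta - phi @ theta
    return theta + alpha * td_error * phi
```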
6 Empirical Results
In this section, we provide empirical results on choosing game units to execute specific policies
in a simplified real-time strategy game and recommending articles in a large academic database
with more than one million articles. We compare the UOM method with a method of Sorg and
Singh (2010), who introduced the linear-option expectation model (LOEM) that is applicable for
evaluating a high-level policy over options. Their method estimates $(M^o, b^o)$ from experience, where $b^o$ is equal to $(U^o)^\top f^o$ in our formulation. This term $b^o$ is the expected return from following the option, and can be computed incrementally from experience once a reward signal or an immediate reward model is available.
A simplified Star Craft 2 mission. We examined the use of the UOMs and LOEMs for policy evaluation in a simplified variant of the real-time strategy game Star Craft 2, where the task for the player
was to select the best game unit to move to a particular goal location. We assume that the player has
access to a black-box game simulator. There are four game units with the same constant dynamics.
The internal status of these units dynamically changes during the game and this affects the reward
they receive in enemy controlled territory. We evaluated these units, when their rewards are as listed
in the table below (the rewards are associated with the previous state and are not action-contingent).
A game map is shown in Figure 1 (a). The four actions could move a unit left, right, up, or down.
With probability 2/3, the action moved the unit one grid in the intended direction. With probability
1/3, the action failed, and the agent was moved in a random direction chosen uniformly from the
other three directions. If an action would move a unit into the boundary, it remained in the original
location (with probability one). The discount factor was 0.9. Features were a lookup table over the
11 × 11 grid. For all algorithms, only one step of planning was applied per action selection. The
³Otherwise, we can drop the options in O which are never selected by h.
⁴The index i(k) is advanced for $\alpha^o_{i(k)}$ when following option o, and the index j(k) is advanced for $\tilde\alpha^o_{j(k)}$ when o is terminated. Note that in the algorithm, we simply wrote $\alpha^o_{i(k)}$ as $\alpha^o_k$ and $\tilde\alpha^o_{j(k)}$ as $\tilde\alpha^o_k$.
[Figure 1 appears here: panels (a) and (b) show the 11 × 11 map with goal G, base B, and options o1-o9; panel (c) plots RMSE against the number of episodes for UOM and LOEM.]
Figure 1: (a) A Star Craft local mission map, consisting of four bridged regions, and nine options for the mission. (b) A high-level policy $h = \langle o_1, o_2, o_3, o_6\rangle$ initiates the options in the regions, with deterministic policies in the regions as given by the arrows: $o_1$ (green), $o_2$ (yellow), $o_3$ (purple), and $o_6$ (white). Outside these regions, the policies select actions uniformly at random. (c) The expected performance of different units can be learned by simulating trajectories (with the standard deviation shown by the bars), and the UOM method reduces the error faster than the LOEM method.
planning step size for each algorithm was chosen from {0.001, 0.01, 0.1, 1.0}, and only the best one was reported for each algorithm. All data reported were averaged over 30 runs.
We defined a set of nine options and their corresponding policies, shown in Figure 1 (a), (b). These options are specified by the locations where they terminate, and the policies. The termination location is the square pointed to by each option's arrows. The rewards of the game units at the enemy locations were:

                       |           Game Units
Enemy Locations        | Battlecruiser | Reapers | Thor | SCV
fortress (yellow)      | 0.3           | -1.0    | 1.0  | -1.0
ground forces (green)  | 1.0           | 0.3     | 1.0  | -1.0
viking (red)           | -1.0          | -1.0    | 1.0  | -1.0
cobra (pink)           | 1.0           | 0.5     | -1.0 | -1.0
minerals (blue)        | 0             | 0       | 0    | 1.0
Four of these are "bridges" between regions, and one is the position labeled "B" (which is the player's base at position (1, 1)). Each of the options could be initiated from anywhere in the region in which the policy was defined. The policies for these options were defined by a shortest-path traversal from the initial location to the terminal location, as shown in the figure. These policies were not optimized for the reward functions of the game units or the enemy locations.
To choose among units for a mission in real time, a player must be able to efficiently evaluate many
options for many units, compute the value functions of the various high-level policies, and select
the best unit for a particular high-level goal. A high-level policy for dispatching the game units is
defined by initiating different options from different states. For example, a policy for moving units from the base "B" to position "G" can be $h = \langle o_1, o_2, o_3\rangle$. Another high-level policy could move another unit from the upper-left terrain to "G" by a different route with $h' = \langle o_8, o_5, o_6\rangle$.
We evaluated policy h for the Reaper unit above using UOMs and LOEMs. We first pre-learned the $U^o$ and $M^o$ models using the experience from 3000 trajectories. Using a reward function that is described in the table above, we then learned $f^o$ for the UOM and $b^o$ for the LOEM over 100 simulated trajectories, and concurrently learned $\theta$. As shown in Figure 1(c), the UOM model learns a more accurate estimate of the value function from fewer episodes, when the best performance is taken across the planning step size. Learning $f^o$ is easier than learning $b^o$ because the stochastic dynamics of the environment is factored out through the pre-learned $U^o$. These constructed value functions can be used to select the best game unit for the task of moving to the goal location.
This approach is computationally efficient for multiple units. We compared the computation time of LOEMs and UOMs with linear Dyna on a modern PC with an Intel 1.7GHz processor and 8GB RAM in a MATLAB implementation. Learning $U^o$ took 81 seconds. We used a recursive least-squares update to learn $M^o$, which took 9.1 seconds. Thus, learning an LOEM model is faster than learning a UOM for a single fixed reward function, but the UOM can produce an accurate option return quickly for each new reward function. Learning the value function incrementally from the 100 trajectories took 0.44 seconds for the UOM and 0.61 seconds for the LOEM. The UOM is slightly more efficient as $f^o$ is more sparse than $b^o$, but it is substantially more accurate, as shown in Figure 1(c). We evaluated all the units and the results are similar.
Article recommendation. Recommending relevant articles for a given user query can be thought of
as predicting an expected return of an option for a dynamically specified reward model. Ranking
an article as a function of the links between articles in the database has proven to be a successful
approach to article recommendation, with PageRank and other link analysis algorithms using a random surfer model [Page et al., 1998]. We build on this idea, by mapping a user query to a reward
model and pre-specified option for how a reader might transition between articles. The ranking of
an article is then the expected return from following references in articles according to the option.
Consider the policy of performing a random-walk between articles in a database by following a reference from an article that is selected uniformly at random. An article receives a positive reward if it
matches a user query (and is otherwise zero), and the value of the article is the expected discounted
return from following the random-walk policy over articles. More focused reader policies can be
specified as following references from an article with a common author or keyword.
We experimented with a collection from DBLP that has about 1.5 million articles, 1 million authors, and 2 million citations [Tang et al., 2008]. We assume that a user query q is mapped directly to an option o and an immediate reward model $f^o_q$. For simplicity in our experiment, the reward models are all binary, with three non-zero features drawn uniformly at random. In total we used about 58 features, and the discount factor was 0.9. There were three policies. The first followed a reference selected uniformly at random, the second selected a reference written by an author of the current article (selected at random), and the third selected a reference with a keyword in common with the current article. Three options were defined from these policies, where the termination probability $\beta$ was 1.0 if no suitable outgoing reference was available and 0.25 otherwise. High-level policies of different option sequences could also be applied, but were not tested here. We used bibliometric features for the articles, extracted from the author, title, and venue fields.
We generated queries q at random, where each query specified an associated option o and an option-independent immediate reward model $f^o_q = f_q$. We then computed their value functions. The immediate reward model is naturally constructed for these problems, as the reward comes from the starting article based on its features, so it is not dependent on the action taken (and thus not on the option). This approach is appropriate in article recommendation, as a query can provide both terms for relevant features (such as the venue) and how the reader intends to follow references in the paper. For the UOM-based approach we pre-learned $U^o$, and then computed $U^o f^o_q$ for each query. For the LOEM approach, we learned a $b_q$ for each query by simulating 3000 trajectories in the database (the simulated trajectories were shared for all the queries). The computation time (in seconds) for the UOM and LOEM approaches is shown in the table below, which shows that UOMs are much more computationally efficient than LOEM.
Number of reward functions | 10   | 100  | 500  | 1,000 | 10,000
LOEM                       | 0.03 | 0.09 | 0.47 | 0.86  | 9.65
UOM                        | 0.01 | 0.04 | 0.07 | 0.12  | 1.21

7 Conclusion
We proposed a new way of modelling options, in both tabular representation and linear function approximation, called the universal option model. We showed how to learn UOMs and how to use them to construct the TD solution of option returns and value functions of policies, and proved theoretical guarantees for them. UOMs are advantageous in large online systems. Estimating the return of an option given a new reward function with the UOM of the option reduces to a one-step regression. Computing option returns dependent on many reward functions in large online games and search systems using UOMs is much faster than using previous methods for learning option models.
Acknowledgment
We thank the reviewers for their comments. This work was supported by grants from Alberta Innovates Technology Futures, NSERC, and the Department of Science and Technology, Government of India.
References
Abbeel, P., Coates, A., and Ng, A. Y. (2010). Autonomous helicopter aerobatics through apprenticeship learning. Int. J. Rob. Res., 29(13):1608-1639.
Barto, A. and Duff, M. (1994). Monte Carlo matrix inversion and reinforcement learning. NIPS, pages 687-694.
Bertsekas, D. P. and Tsitsiklis, J. N. (1996). Neuro-dynamic Programming. Athena.
Jaakkola, T., Jordan, M., and Singh, S. (1994). On the convergence of stochastic iterative dynamic programming algorithms. Neural Computation, 6(6):1185-1201.
Ng, A. Y. and Russell, S. J. (2000). Algorithms for inverse reinforcement learning. ICML, pages 663-670.
Page, L., Brin, S., Motwani, R., and Winograd, T. (1998). The PageRank citation ranking: Bringing order to the web. Technical report, Stanford University.
Precup, D. (2000). Temporal Abstraction in Reinforcement Learning. PhD thesis, University of Massachusetts, Amherst.
Richardson, M. and Domingos, P. (2002). The intelligent surfer: Probabilistic combination of link and content information in PageRank. NIPS.
Sorg, J. and Singh, S. (2010). Linear options. AAMAS, pages 31-38.
Sutton, R. S. and Barto, A. G. (1998). Reinforcement Learning: An Introduction. MIT Press.
Sutton, R. S., Precup, D., and Singh, S. (1999). Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112:181-211.
Syed, U. A. (2010). Reinforcement Learning Without Rewards. PhD thesis, Princeton University.
Tang, J., Zhang, J., Yao, L., Li, J., Zhang, L., and Su, Z. (2008). Arnetminer: extraction and mining of academic social networks. SIGKDD, pages 990-998.
5,071 | 5,591 | Semi-Separable Hamiltonian Monte Carlo
for Inference in Bayesian Hierarchical Models
Yichuan Zhang
School of Informatics
University of Edinburgh
[email protected]
Charles Sutton
School of Informatics
University of Edinburgh
[email protected]
Abstract
Sampling from hierarchical Bayesian models is often difficult for MCMC methods, because of the strong correlations between the model parameters and
the hyperparameters. Recent Riemannian manifold Hamiltonian Monte Carlo
(RMHMC) methods have significant potential advantages in this setting, but are
computationally expensive. We introduce a new RMHMC method, which we call
semi-separable Hamiltonian Monte Carlo, which uses a specially designed mass
matrix that allows the joint Hamiltonian over model parameters and hyperparameters to decompose into two simpler Hamiltonians. This structure is exploited by
a new integrator which we call the alternating blockwise leapfrog algorithm. The
resulting method can mix faster than simpler Gibbs sampling while being simpler
and more efficient than previous instances of RMHMC.
1 Introduction
Bayesian statistics provides a natural way to manage model complexity and control overfitting, with
modern problems involving complicated models with a large number of parameters. One of the
most powerful advantages of the Bayesian approach is hierarchical modeling, which allows partial
pooling across a group of datasets, allowing groups with little data to borrow information from
similar groups with larger amounts of data. However, such models pose problems for Markov chain
Monte Carlo (MCMC) methods, because the joint posterior distribution is often pathological due to
strong correlations between the model parameters and the hyperparameters [3]. For example, one of
the most powerful MCMC methods is Hamiltonian Monte Carlo (HMC). However, for hierarchical
models even the mixing speed of HMC can be unsatisfactory in practice, as has been noted several
times in the literature [3, 4, 11]. Riemannian manifold Hamiltonian Monte Carlo (RMHMC) [7] is a
recent extension of HMC that aims to efficiently sample from challenging posterior distributions by
exploiting local geometric properties of the distribution of interest. However, it is computationally
too expensive to be applicable to large scale problems.
In this work, we propose a simplified RMHMC method, called Semi-Separable Hamiltonian Monte
Carlo (SSHMC), in which the joint Hamiltonian over parameters and hyperparameters has special
structure, which we call semi-separability, that allows it to be decomposed into two simpler, separable Hamiltonians. This condition allows for a new efficient algorithm which we call the alternating
blockwise leapfrog algorithm. Compared to Gibbs sampling, SSHMC can make significantly larger
moves in hyperparameter space due to shared terms between the two simple Hamiltonians. Compared to previous RMHMC methods, SSHMC yields simpler and more computationally efficient
samplers for many practical Bayesian models.
2 Hierarchical Bayesian Models
Let $D = \{D_i\}_{i=1}^N$ be a collection of data groups, where the ith data group is a collection of iid observations $y_i = \{y_{ij}\}_{j=1}^{N_i}$ and their inputs $x_i = \{x_{ij}\}_{j=1}^{N_i}$. We assume the data follow a parametric distribution $p(y_i|x_i, \theta_i)$, where $\theta_i$ is the model parameter for group i. The parameters are assumed to be drawn from a prior $p(\theta_i|\phi)$, where $\phi$ is the hyperparameter with a prior distribution $p(\phi)$. The joint posterior over model parameters $\theta = (\theta_1, \ldots, \theta_N)$ and hyperparameters $\phi$ is then
$$p(\theta, \phi|D) \propto \prod_{i=1}^{N} p(y_i|x_i, \theta_i)\, p(\theta_i|\phi)\, p(\phi). \qquad (1)$$
This hierarchical Bayesian model is popular because the parameters $\theta_i$ for each group are coupled, allowing the groups to share statistical strength. However, this property causes difficulties when approximating the posterior distribution. In the posterior, the model parameters and hyperparameters are strongly correlated. In particular, $\phi$ usually controls the variance of $p(\theta|\phi)$ to promote partial pooling, so the variance of $\theta|\phi, D$ depends strongly on $\phi$. This causes difficulties for many MCMC methods, such as the Gibbs sampler and HMC. An illustrative example of pathological structure in hierarchical models is the Gaussian funnel distribution [11]. Its density function is defined as $p(x, v) = \prod_{i=1}^{n} \mathcal{N}(x_i|0, e^{-v})\,\mathcal{N}(v|0, 3^2)$, where x is the vector of low-level parameters and v is the variance hyperparameter. The pathological correlation between x and v is illustrated by Figure 1.
3 Hamiltonian Monte Carlo on Posterior Manifold
Hamiltonian Monte Carlo (HMC) is a gradient-based MCMC method with auxiliary variables. To generate samples from a target density $\pi(z)$, HMC constructs an ergodic Markov chain with the invariant distribution $\pi(z, r) = \pi(z)\pi(r)$, where r is an auxiliary variable. The most common choice of $\pi(r)$ is a Gaussian distribution $\mathcal{N}(0, G^{-1})$ with precision matrix G. Given the current sample z, the transition kernel of the HMC chain includes three steps: first sample $r \sim \pi(r)$, second propose a new sample $(z', r')$ by simulating the Hamiltonian dynamics, and finally accept the proposed sample with probability $\alpha = \min\{1, \pi(z', r')/\pi(z, r)\}$, otherwise leave z unchanged. The last step is a Metropolis-Hastings (MH) correction. Define $H(z, r) := -\log \pi(z, r)$. The Hamiltonian dynamics is defined by the differential equations $(\dot z, \dot r) = (\partial_r H, -\partial_z H)$, where z is called the position and r is called the momentum.
It is easy to see that $\dot H(z, r) = \partial_z H\,\dot z + \partial_r H\,\dot r = 0$, which is called the energy preservation property [10, 11]. In physics, H(z, r) is known as the Hamiltonian energy, and is decomposed into the sum of the potential energy $U(z) := -\log \pi(z)$ and the kinetic energy $K(r) := -\log \pi(r)$. The most used discretized simulation in HMC is the leapfrog algorithm, which is given by the recursion
$$r(\tau + \epsilon/2) = r(\tau) - \tfrac{\epsilon}{2}\nabla_z U(z(\tau)) \qquad (2a)$$
$$z(\tau + \epsilon) = z(\tau) + \epsilon\,\nabla_r K(r(\tau + \epsilon/2)) \qquad (2b)$$
$$r(\tau + \epsilon) = r(\tau + \epsilon/2) - \tfrac{\epsilon}{2}\nabla_z U(z(\tau + \epsilon)), \qquad (2c)$$
where $\epsilon$ is the step size of the discretized simulation time. After L steps from the current sample $(z(0), r(0)) = (z, r)$, the new sample is proposed as the last point $(z', r') = (z(L\epsilon), r(L\epsilon))$. In
Hamiltonian dynamics, the matrix G is called the mass matrix. If G is constant w.r.t. z, then z and r are independent in $\pi(z, r)$. In this case we say that H(z, r) is a separable Hamiltonian. In particular, we use the term standard HMC to refer to HMC using the identity matrix as G. Although HMC methods often outperform other popular MCMC methods, they may mix slowly if there are strong correlations between variables in the target distribution. Neal [11] showed that HMC can mix faster if G is not the identity matrix. Intuitively, such a G acts like a preconditioner. However, if the curvature of $\pi(z)$ varies greatly, a global preconditioner can be inadequate.
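In code, eqs. (2a)-(2c) amount to a few lines. The sketch below is ours, not from the paper; it simulates L leapfrog steps for a separable Hamiltonian with a constant mass matrix G, with the gradient function `grad_U` supplied by the caller.

```python
import numpy as np

def leapfrog(z, r, grad_U, G, eps, L):
    """Simulate L leapfrog steps (2a)-(2c) for the separable Hamiltonian
    H(z, r) = U(z) + 0.5 * r^T G r, so that grad_r K(r) = G r."""
    z, r = z.copy(), r.copy()
    for _ in range(L):
        r -= 0.5 * eps * grad_U(z)   # (2a) half step in the momentum
        z += eps * (G @ r)           # (2b) full step in the position
        r -= 0.5 * eps * grad_U(z)   # (2c) half step in the momentum
    return z, r
```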
For this reason, recent work, notably that on Riemannian manifold HMC (RMHMC) [7], has considered non-separable Hamiltonian methods, in which G(z) varies with the position z, so that z and r are no longer independent in $\pi(z, r)$. The resulting Hamiltonian $H(z, r) = -\log \pi(z, r)$ is called a non-separable Hamiltonian. For example, for Bayesian inference problems, Girolami and Calderhead [7] proposed using the Fisher information matrix (FIM) of $\pi(\theta)$, which is the metric tensor of the posterior manifold. However, for a non-separable Hamiltonian, the simple leapfrog dynamics (2a)-(2c) do not yield a valid MCMC method, as they are no longer reversible. Simulation of general non-separable systems requires the generalized leapfrog integrator (GLI) [7], which requires computing higher-order derivatives to solve a system of non-linear differential equations. The computational cost of GLI in general is $O(d^3)$ where d is the number of parameters, which is prohibitive for large d.
In hierarchical models, there are two ways to sample the posterior using HMC. One way is to sample the joint posterior $\pi(\theta, \phi)$ directly. The other way is to sample the conditionals $\pi(\theta|\phi)$ and $\pi(\phi|\theta)$, simulating from each conditional distribution using HMC. This strategy is called HMC within Gibbs [11]. In either case, HMC chains tend to mix slowly in hyperparameter space, because the huge variation of potential energy across different hyperparameter values can easily overwhelm the kinetic energy in separable HMC [11]. Hierarchical models also pose a challenge to RMHMC, if we want to sample the model parameters and hyperparameters jointly. In particular, the closed-form FIM of the joint posterior $\pi(\theta, \phi)$ is usually unavailable. Due to this problem, even sampling some toy models like the Gaussian funnel using RMHMC becomes challenging. Betancourt [2] proposed a new metric that uses a transformed Hessian matrix of $\pi(\theta)$, and Betancourt and Girolami [3] demonstrated the power of this method for efficiently sampling hyperparameters of hierarchical models on some simple benchmarks like the Gaussian funnel. However, the transformation requires computing an eigendecomposition of the Hessian matrix, which is infeasible in high dimensions.

Because of these technical difficulties, RMHMC for hierarchical models is usually used within a block Gibbs sampling scheme, alternating between $\theta$ and $\phi$. This RMHMC within Gibbs strategy is useful because the simulation of the non-separable dynamics for the conditional distributions may have much lower computational cost than that for the joint one. However, as we have discussed, in hierarchical models these variables tend to be very strongly correlated, and it is well-known that Gibbs samplers mix slowly in such cases [13]. So, the Gibbs scheme limits the true power of RMHMC.
4 Semi-Separable Hamiltonian Monte Carlo
In this section we propose a non-separable HMC method that does not have the limitations of
Gibbs sampling and that scales to relatively high dimensions, based on a novel property that we
will call semi-separability. We introduce new HMC methods that rely on semi-separable Hamiltonians, which we call semi-separable Hamiltonian Monte Carlo (SSHMC).
4.1 Semi-Separable Hamiltonian
In this section, we define the semi-separable Hamiltonian system. Our target distribution will be the posterior $\pi(\theta, \phi) = p(\theta, \phi|D)$ of a hierarchical model (1), where $\theta \in \mathbb{R}^n$ and $\phi \in \mathbb{R}^m$. Let $r_\theta \in \mathbb{R}^n$ and $r_\phi \in \mathbb{R}^m$ be the momentum variables corresponding to $\theta$ and $\phi$ respectively. The non-separable Hamiltonian is defined as
$$H(\theta, \phi, r_\theta, r_\phi) = U(\theta, \phi) + K(r_\theta, r_\phi|\theta, \phi), \qquad (3)$$
where the potential energy is $U(\theta, \phi) = -\log \pi(\theta, \phi)$ and the kinetic energy is $K(r_\theta, r_\phi|\theta, \phi) = -\log \mathcal{N}(r_\theta, r_\phi; 0, G(\theta, \phi)^{-1})$, which includes the normalization term $\log |G(\theta, \phi)|$. The mass matrix $G(\theta, \phi)$ can be an arbitrary p.d. matrix. For example, previous work on RMHMC [7] has chosen $G(\theta, \phi)$ to be the FIM of the joint posterior $\pi(\theta, \phi)$, resulting in an HMC method that requires $O((m + n)^3)$ time. This limits applications of RMHMC to large-scale problems.
To attack these computational challenges, we introduce restrictions on the mass matrix $G(\theta, \phi)$ to enable efficient simulation. In particular, we restrict $G(\theta, \phi)$ to have the form
$$G(\theta, \phi) = \begin{pmatrix} G_\theta(\phi, x) & 0 \\ 0 & G_\phi(\theta) \end{pmatrix},$$
where $G_\theta$ and $G_\phi$ are the precision matrices of $r_\theta$ and $r_\phi$, respectively. Importantly, we restrict $G_\theta(\phi, x)$ to be independent of $\theta$ and $G_\phi(\theta)$ to be independent of $\phi$. If G has these properties, we call the resulting Hamiltonian a semi-separable Hamiltonian. A semi-separable Hamiltonian is still in general non-separable, as the two random vectors $(\theta, \phi)$ and $(r_\theta, r_\phi)$ are not independent.
The semi-separability property has important computational advantages. First, because G is block diagonal, the cost of matrix operations reduces from $O((n + m)^k)$ to $O(n^k)$. Second, and more important, substituting the restricted mass matrix into (3) results in the potential and kinetic energy:
$$U(\theta, \phi) = -\sum_i [\log p(y_i|\theta_i, x_i) + \log p(\theta_i|\phi)] - \log p(\phi), \qquad (4)$$
$$K(r_\theta, r_\phi|\theta, \phi) = \tfrac{1}{2}\big(r_\theta^\top G_\theta(x, \phi)\, r_\theta + r_\phi^\top G_\phi(\theta)\, r_\phi + \log|G_\theta(x, \phi)| + \log|G_\phi(\theta)|\big). \qquad (5)$$
Equation (3) can be seen as a separable Hamiltonian plus some constant terms. In particular, define the notation
$$A(r_\theta|\phi) = \tfrac{1}{2} r_\theta^\top G_\theta(x, \phi)\, r_\theta, \qquad A(r_\phi|\theta) = \tfrac{1}{2} r_\phi^\top G_\phi(\theta)\, r_\phi.$$
Then, considering $(\phi, r_\phi)$ as fixed, the non-separable Hamiltonian H in (3) is different from the following separable Hamiltonian
$$H_1(\theta, r_\theta) = U_1(\theta|\phi, r_\phi) + K_1(r_\theta|\phi), \qquad (6)$$
$$U_1(\theta|\phi, r_\phi) = -\sum_i [\log p(y_i|\theta_i, x_i) + \log p(\theta_i|\phi)] + A(r_\phi|\theta) + \tfrac{1}{2}\log|G_\phi(\theta)|, \qquad (7)$$
$$K_1(r_\theta|\phi) = A(r_\theta|\phi) \qquad (8)$$
only by some constant terms that do not depend on $(\theta, r_\theta)$. What this means is that any update to $(\theta, r_\theta)$ that leaves $H_1$ invariant leaves the joint Hamiltonian H invariant as well. An example is the leapfrog dynamics on $H_1$, where $U_1$ is considered the potential energy, and $K_1$ the kinetic energy. Similarly, if $(\theta, r_\theta)$ are fixed, then H differs from the following separable Hamiltonian
$$H_2(\phi, r_\phi) = U_2(\phi|\theta, r_\theta) + K_2(r_\phi|\theta), \qquad (9)$$
$$U_2(\phi|\theta, r_\theta) = -\sum_i \log p(\theta_i|\phi) - \log p(\phi) + A(r_\theta|\phi) + \tfrac{1}{2}\log|G_\theta(x, \phi)|, \qquad (10)$$
$$K_2(r_\phi|\theta) = A(r_\phi|\theta) \qquad (11)$$
only by terms that are constant with respect to $(\phi, r_\phi)$.

Notice that $H_1$ and $H_2$ are coupled by the terms $A(r_\theta|\phi)$ and $A(r_\phi|\theta)$. Each of these terms appears in the kinetic energy of one of the separable Hamiltonians, but in the potential energy of the other one. We call these terms auxiliary potentials because they are potential energy terms introduced by the auxiliary variables. These auxiliary potentials are key to our method (see Section 4.3).

References
[1] K. Bache and M. Lichman. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.
[2] M. J. Betancourt. A General Metric for Riemannian Manifold Hamiltonian Monte Carlo. ArXiv e-prints, Dec. 2012.
[3] M. J. Betancourt and M. Girolami. Hamiltonian Monte Carlo for Hierarchical Models. ArXiv e-prints, Dec. 2013.
[4] K. Choo. Learning hyperparameters for neural network models using Hamiltonian dynamics. PhD thesis, Citeseer, 2000.
[5] O. F. Christensen, G. O. Roberts, and J. S. Rosenthal. Scaling limits for the transient phase of local Metropolis-Hastings algorithms. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):253-268, 2005.
[6] C. J. Geyer. Practical Markov chain Monte Carlo. Statistical Science, pages 473-483, 1992.
[7] M. Girolami and B. Calderhead. Riemann manifold Langevin and Hamiltonian Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(2):123-214, 2011. ISSN 1467-9868. doi: 10.1111/j.1467-9868.2010.00765.x. URL http://dx.doi.org/10.1111/j.1467-9868.2010.00765.x.
[8] M. D. Hoffman and A. Gelman. The no-U-turn sampler: Adaptively setting path lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research, in press.
[9] S. Kim, N. Shephard, and S. Chib. Stochastic volatility: likelihood inference and comparison with ARCH models. The Review of Economic Studies, 65(3):361-393, 1998.
[10] B. Leimkuhler and S. Reich. Simulating Hamiltonian Dynamics, volume 14. Cambridge University Press, 2004.
[11] R. Neal. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, pages 113-162, 2011.
[12] A. Pakman and L. Paninski. Auxiliary-variable exact Hamiltonian Monte Carlo samplers for binary distributions. In Advances in Neural Information Processing Systems 26, pages 2490-2498. 2013.
[13] C. P. Robert and G. Casella. Monte Carlo Statistical Methods, volume 319. Citeseer, 2004.
[14] Z. Wang, S. Mohamed, and N. de Freitas. Adaptive Hamiltonian and Riemann manifold Monte Carlo samplers. In International Conference on Machine Learning (ICML), pages 1462-1470, 2013. URL http://jmlr.org/proceedings/papers/v28/wang13e.pdf. JMLR W&CP 28(3):1462-1470, 2013.
[15] Y. Zhang, C. Sutton, A. Storkey, and Z. Ghahramani. Continuous relaxations for discrete Hamiltonian Monte Carlo. In Advances in Neural Information Processing Systems (NIPS), 2012.

4.2 Alternating Block-wise Leapfrog Algorithm
Now we introduce an efficient SSHMC method that exploits the semi-separability property. As described in the previous section, any update to $(\theta, r_\theta)$ that leaves $H_1$ invariant also leaves the joint Hamiltonian H invariant, as does any update to $(\phi, r_\phi)$ that leaves $H_2$ invariant. So a natural idea is simply to alternate between simulating the Hamiltonian dynamics for $H_1$ and that for $H_2$. Crucially, even though the total Hamiltonian H is not separable in general, both $H_1$ and $H_2$ are separable. Therefore when simulating $H_1$ and $H_2$, the simple leapfrog method can be used, and the more complex GLI method is not required.

Algorithm 1 SSHMC by ABLA
Require: $(\theta, \phi)$
  Sample $r_\theta \sim \mathcal{N}(0, G_\theta(\phi, x))$ and $r_\phi \sim \mathcal{N}(0, G_\phi(\theta))$
  for $l$ in $1, 2, \ldots, L$ do
    $(\theta^{(l+\epsilon/2)}, r_\theta^{(l+\epsilon/2)}) \leftarrow \mathrm{leapfrog}(\theta^{(l)}, r_\theta^{(l)}, H_1, \epsilon/2)$
    $(\phi^{(l+\epsilon)}, r_\phi^{(l+\epsilon)}) \leftarrow \mathrm{leapfrog}(\phi^{(l)}, r_\phi^{(l)}, H_2, \epsilon)$
    $(\theta^{(l+\epsilon)}, r_\theta^{(l+\epsilon)}) \leftarrow \mathrm{leapfrog}(\theta^{(l+\epsilon/2)}, r_\theta^{(l+\epsilon/2)}, H_1, \epsilon/2)$
  end for
  Draw $u \sim \mathcal{U}(0, 1)$
  if $u < \min\big(1, \exp\{H(\theta, \phi, r_\theta, r_\phi) - H(\theta^{(L\epsilon)}, \phi^{(L\epsilon)}, r_\theta^{(L\epsilon)}, r_\phi^{(L\epsilon)})\}\big)$ then
    $(\theta', \phi', r'_\theta, r'_\phi) \leftarrow (\theta^{(L\epsilon)}, \phi^{(L\epsilon)}, r_\theta^{(L\epsilon)}, r_\phi^{(L\epsilon)})$
  else
    $(\theta', \phi', r'_\theta, r'_\phi) \leftarrow (\theta, \phi, r_\theta, r_\phi)$
  end if
  return $(\theta', \phi')$

We call this method the alternating block-wise leapfrog algorithm (ABLA), shown in Algorithm 1. In this figure the function "leapfrog" returns the result of the leapfrog dynamics (2a)-(2c) for the given starting point, Hamiltonian, and step size. We call each iteration of the loop from $1 \ldots L$ an ABLA step. For simplicity, we have shown one leapfrog step for $H_1$ and $H_2$ for each ABLA step, but in practice it is useful to use multiple leapfrog steps per ABLA step. ABLA has discretization error due to the leapfrog discretization,
so the MH correction is required. If it is possible to simulate $H_1$ and $H_2$ exactly, then H is preserved exactly and there is no need for an MH correction.
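To make ABLA concrete, here is a minimal NumPy sketch, ours rather than the authors' code, for the Gaussian funnel of Section 2 with $p(x, v) = \prod_i \mathcal{N}(x_i|0, e^{-v})\mathcal{N}(v|0, 3^2)$. We assume momentum distributions $r_x \sim \mathcal{N}(0, e^v I)$ and $r_v \sim \mathcal{N}(0, n + 1/9)$, which match the masses quoted in Section 5.1 up to the covariance-versus-precision convention for G; with this choice the $\pm(n/2)v$ log-normalizer terms of U and K cancel, which is exactly the auxiliary-potential effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                  # dimension of the low-level parameters x
m_v = n + 1.0 / 9        # momentum variance for v

def joint_H(x, v, rx, rv):
    # H = U + K up to constants; the (n/2)v log-normalizers of p(x|v) and of
    # the x-momentum density have cancelled out of this expression.
    return (0.5 * np.exp(v) * (x @ x) + v * v / 18.0
            + 0.5 * np.exp(-v) * (rx @ rx) + 0.5 * rv * rv / m_v)

def leap_x(x, rx, v, h):
    # one leapfrog step for H_1: U_1 = 0.5 e^v |x|^2, K_1 = 0.5 e^{-v} |r_x|^2
    rx = rx - 0.5 * h * np.exp(v) * x
    x = x + h * np.exp(-v) * rx
    rx = rx - 0.5 * h * np.exp(v) * x
    return x, rx

def leap_v(v, rv, x, rx, h):
    # one leapfrog step for H_2; U_2 contains the auxiliary potential 0.5 e^{-v}|r_x|^2
    def dU2(u):
        return 0.5 * np.exp(u) * (x @ x) + u / 9.0 - 0.5 * np.exp(-u) * (rx @ rx)
    rv = rv - 0.5 * h * dU2(v)
    v = v + h * rv / m_v
    rv = rv - 0.5 * h * dU2(v)
    return v, rv

def abla_step(x, v, eps=0.1, L=20):
    rx = rng.normal(size=n) * np.exp(v / 2)   # r_x ~ N(0, e^v I)
    rv = rng.normal() * np.sqrt(m_v)          # r_v ~ N(0, n + 1/9)
    x1, v1, rx1, rv1 = x, v, rx, rv
    for _ in range(L):
        x1, rx1 = leap_x(x1, rx1, v1, eps / 2)   # H_1, step eps/2
        v1, rv1 = leap_v(v1, rv1, x1, rx1, eps)  # H_2, step eps
        x1, rx1 = leap_x(x1, rx1, v1, eps / 2)   # H_1, step eps/2
    # Metropolis-Hastings correction on the joint Hamiltonian
    if np.log(rng.random()) < joint_H(x, v, rx, rv) - joint_H(x1, v1, rx1, rv1):
        return x1, v1
    return x, v
```

Iterating `x, v = abla_step(x, v)` from, e.g., `x = np.zeros(n); v = 0.0` gives a chain whose v-marginal should recover $\mathcal{N}(0, 3^2)$, whereas standard HMC with a constant mass tends to get stuck in the funnel's neck.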
To show that the SSHMC method by ABLA preserves the distribution $\pi(\theta, \phi)$, we also need to show that ABLA is a time-reversible and volume-preserving transformation in the joint space of $(\theta, r_\theta, \phi, r_\phi)$. Let $\mathcal{X} = \mathcal{X}_{\theta, r_\theta} \times \mathcal{X}_{\phi, r_\phi}$, where $(\theta, r_\theta) \in \mathcal{X}_{\theta, r_\theta}$ and $(\phi, r_\phi) \in \mathcal{X}_{\phi, r_\phi}$. Obviously, any reversible and volume-preserving transformation in a subspace of $\mathcal{X}$ is also reversible and volume-preserving in $\mathcal{X}$. It is easy to see that each leapfrog step in the ABLA algorithm is reversible and volume-preserving in either $\mathcal{X}_{\theta, r_\theta}$ or $\mathcal{X}_{\phi, r_\phi}$. One more property of the integrator of interest is symplecticity. Because each leapfrog integrator is symplectic in a subspace of $\mathcal{X}$ [10], it is also symplectic in $\mathcal{X}$. Then because ABLA is a composition of symplectic leapfrog integrators, and the composition of symplectic transformations is symplectic, we know ABLA is symplectic.
We emphasize that ABLA is actually not a discretized simulation of the semi-separable Hamiltonian system H. That is, if starting at a point $(\theta, r_\theta, \phi, r_\phi)$ in the joint space we run the exact Hamiltonian dynamics for H for a length of time L, the resulting point will not be the same as that returned by ABLA at time L, even if the discretized time step is infinitely small. For example, ABLA simulates $H_1$ with step size $\epsilon_1$ and $H_2$ with step size $\epsilon_2$ where $\epsilon_1 = 2\epsilon_2$; even as $\epsilon_2 \to 0$, this alternating scheme is not the exact flow of H, yet it still preserves H.
4.3 Connection to Other Methods
Although the SSHMC method may seem similar to RMHMC within Gibbs (RMHMCWG), SSHMC is actually very different. The difference is in the last two terms of (7) and (10); if these are omitted from the Hamiltonians, then we obtain HMC within Gibbs. Particularly important among these two terms is the auxiliary potential, because it allows each of the separable Hamiltonian systems to borrow energy from the other one. For example, if the previous leapfrog step increases the kinetic energy $K_1(r_\theta|\phi)$ in $H_1(\theta, r_\theta)$, then, in the next leapfrog step for $H_2(\phi, r_\phi)$, we see that $\phi$ will have greater potential energy $U_2(\phi|\theta, r_\theta)$, because the auxiliary potential $A(r_\theta|\phi)$ is shared. That allows the leapfrog step to accommodate a larger change of $\log p(\theta|\phi)$ using $A(r_\theta|\phi)$. So, the chain will mix faster in $\mathcal{X}_\phi$. By the symmetry of $\theta$ and $\phi$, the auxiliary potential will also accelerate the mixing in $\mathcal{X}_\theta$.

Another way to see this is that the dynamics in RMHMCWG for $(r_\theta, \theta)$ preserves the distribution $\pi(\theta, r_\theta, \phi) = \pi(\theta, \phi)\mathcal{N}(r_\theta; 0, G_\theta(\phi)^{-1})$ but not the joint $\pi(\theta, \phi, r_\theta, r_\phi)$. That is because the Gibbs sampler does not take into account the effect of $\theta$ on $r_\phi$. In other words, the Gibbs step has the stationary distribution $\pi(\theta, r_\theta|\phi)$ rather than $\pi(\theta, r_\theta|\phi, r_\phi)$. The difference between the two is the auxiliary potential. In contrast, the SSHMC methods preserve the Hamiltonian of $\pi(\theta, \phi, r_\theta, r_\phi)$.
4.4 Choice of Mass Matrix
The choice of $G_\theta$ and $G_\phi$ in SSHMC is usually similar to RMHMCWG. If the Hessian matrix of $-\log p(\theta|y, x, \phi)$ is independent of $\theta$ and always p.d., it is natural to define $G_\theta$ as the inverse of the Hessian matrix. However, for some popular models, e.g., logistic regression, the Hessian matrix of the likelihood function depends on the parameters $\theta$. In this case, one can use any approximate Hessian B, like the Hessian at the mode, and define $G_\theta := (B + B(\phi))^{-1}$, where $B(\phi)$ is the Hessian of the prior distribution. Such a rough approximation is usually good enough to improve the mixing speed, because the main difficulty is the correlation between model parameters and hyperparameters.

In general, because the computational bottleneck in HMC and SSHMC is computing the gradient of the target distribution, both methods have the same computational complexity O(lg), where g is the cost of computing the gradient and l is the total number of leapfrog steps per iteration. However, in practice we find it very beneficial to use multiple steps in each blockwise leapfrog update in ABLA; this can cause SSHMC to require more time than HMC. Also, depending on the mass matrix $G_\theta$, the cost of a leapfrog step in ABLA may be different from that in standard HMC. For some choices of $G_\theta$, the leapfrog step in ABLA can be even faster than one leapfrog step of HMC. For example, in many models the computational bottleneck is the gradient $\nabla_\phi \log Z(\phi)$, where $Z(\phi)$ is the normalization term in the prior. Recall that $G_\theta$ is a function of $\phi$. If $|G_\theta| = Z(\phi)^{-1}$, then $Z(\phi)$ will be canceled out, avoiding computation of $\nabla_\phi \log Z(\phi)$. One example is using $G_x = e^v I$ in the Gaussian funnel distribution aforementioned in Section 2. A potential problem of such a $G_\theta$ is that the curvature of the likelihood function $p(D|\theta)$ is ignored. But when the data in each group is sparse and the parameters $\theta$ are strongly correlated, this $G_\theta$ can give nearly optimal mixing speed and make SSHMC much faster. In general, any choice of $G_\theta$ and $G_\phi$ that would be valid for separable HMC within Gibbs is also valid for SSHMC.
5 Experimental Results
In this section, we compare the performance of SSHMC with standard HMC and RMHMC within Gibbs [7] on four benchmark models.¹ The step sizes of all methods are manually tuned so
¹Our use of a Gibbs scheme for RMHMC follows standard practice [7].
[Figure 1 appears here: traces of the potential, kinetic, and Hamiltonian energy over simulation time, and the trajectory in (x_1, v); left panel: HMC with diagonal constant mass; right panel: SSHMC (semi-separable mass).]
Figure 1: The trace of energy over the simulation time and the trajectory of the first dimension $x_1$ of the 100-dimensional Gaussian (vertical axis) against the hyperparameter v (horizontal axis). The two simulations start with the same initial point sampled from the Gaussian funnel.
Method        | time(s) | min ESS (x, v)     | min ESS/s (x, v) | MSE(E[v], E[v²])
HMC           | 36.63   | (115.35, 38.96)    | (3.14, 1.06)     | (0.6, 0.18)
RMHMC (Gibbs) | 18.92   | (1054.33, 31.69)   | (55.15, 1.6)     | (1.58, 0.72)
SSHMC         | 22.12   | (3868.79, 1541.67) | (103.57, 41.27)  | (0.04, 0.03)
Table 1: The ESS of 5000 samples on the (100+1)-dimensional Gaussian funnel distribution. x are the model parameters and v is the hyperparameter. The last column is the mean squared error of the sample-estimated mean and variance of the hyperparameter.
Method        | running time(s) | ESS θ (min, med, max)  | ESS v | min ESS/s
HMC           | 378             | (2.05, 3.68, 4.79)×10³ | 815   | 2.15
RMHMC (Gibbs) | 411             | (0.8, 4.08, 4.99)×10³  | 271   | 0.6
SSHMC         | 385.82          | (2.5, 3.42, 4.27)×10³  | 2266  | 5.83
Table 2: The ESS of 5000 samples after 1000 burn-in on hierarchical Bayesian logistic regression. θ are the 200-dimensional model parameters and v is the hyperparameter.
Method        | time(s) | ESS x (min, med, max)   | ESS (φ, σ, β)      | min ESS/s
HMC           | 162     | (1.6, 2.2, 5.2)×10²     | (50, 50, 128)      | 0.31
RMHMC (Gibbs) | 183     | (12.1, 18.4, 33.5)×10²  | (385, 163, 411)    | 0.89
SSHMC         | 883     | (78.4, 98.9, 120.7)×10² | (4434, 1706, 1390) | 1.57
Table 3: The ESS of 20000 posterior samples for stochastic volatility after 10000 burn-in. x are the latent volatilities over 2000 time lags and (φ, σ, β) are the hyperparameters. Min ESS/s is the lowest ESS over all parameters normalized by running time.
that the acceptance rate is around 70-85%. The number of leapfrog steps is tuned for each method
using preliminary runs. The implementation of RMHMC we used is from [7]. The running time
is wall-clock time measured after burn-in. The performance is evaluated by the minimum Effective
Sample Size (ESS) over all dimensions (see [6]). When considering the different computational
complexity of methods, our main efficiency metric is time normalized ESS.
5.1 Demonstration on Gaussian Funnel
We demonstrate SSHMC by sampling the Gaussian funnel (GF) defined in Section 2. We consider n = 100 dimensional low-level parameters x and 1 hyperparameter v. RMHMC within Gibbs on GF has a block-diagonal mass matrix defined as $G_x = [-\nabla_x^2 \log p(x, v)]^{-1} = e^v I$ and $G_v = [-\mathbb{E}_x[\nabla_v^2 \log p(x, v)]]^{-1} = (n + \tfrac{1}{9})^{-1}$. We use the same mass matrix in SSHMC, because it is semi-separable. We use 2 leapfrog steps for the low-level parameters and 1 leapfrog step for the hyperparameter in ABLA, with the same leapfrog step size for the two separable Hamiltonians.
We generate 5000 samples from each method after 1000 burn-in iterations. The ESS per second
(ESS/s) and mean squared error (MSE) of the sample estimated mean and variance of the hyperparameter are given in Table 1. Notice that RMHMC within Gibbs is much more efficient for the
low-level variables because the mass matrix adapts with the hyperparameter. Figure 1 illustrates a
dramatic difference between HMC and SSHMC. It is clear that HMC suffers from oscillation of the
hyperparameter in a narrow region. That is because the kinetic energy limits the change of the hyperparameters [3, 11]. In contrast, SSHMC has much wider energy variation and the trajectory spans a larger range of the hyperparameter v. The energy variation of SSHMC is similar to that of RMHMC with the Soft-Abs metric (RMHMC-Soft-Abs) reported in [2], an instance of general RMHMC without Gibbs. But compared with [2], each ABLA step is about 100 times faster than each generalized leapfrog step, and SSHMC can generate around 2.5 times more effective samples per second than RMHMC-Soft-Abs. Although RMHMC within Gibbs has better ESS/s on the low-level variables, its estimation of the mean and variance is biased, indicating that the chain has not yet mixed. More important, Table 1 shows that the samples generated by SSHMC give nearly unbiased estimates of the mean and variance of the hyperparameter, which neither of the other methods is able to do.

[Figure 2 appears here: three normalized histograms comparing RMHMC, SSHMC, and HMC.]
Figure 2: The normalized histogram of 20000 posterior samples of hyperparameters of the stochastic volatility model (from left to right φ, σ, β) after 10000 burn-in samples. The data is generated by the hyperparameters (φ = 0.98, σ = 0.15, β = 0.65). All three methods produce accurate estimates, but SSHMC and RMHMC within Gibbs converge faster than HMC.
5.2 Hierarchical Bayesian Logistic Regression
In this experiment, we consider hierarchical Bayesian logistic regression with an exponential prior for the variance hyperparameter v, that is
$$p(w, v|D) \propto \prod_i \prod_j \sigma(y_{ij} w_i^\top x_{ij})\, \mathcal{N}(w_i|0, vI)\, \mathrm{Exp}(v|\lambda),$$
where $\sigma$ is the logistic function $\sigma(z) = 1/(1 + \exp(-z))$ and $(y_{ij}, x_{ij})$ is the jth data point in the ith group. We use the Statlog (German credit) dataset from [1]. This dataset includes 1000 data points, and each data point has 16 categorical features and 4 numeric features. Bayesian logistic regression on
this dataset has been considered as a benchmark for HMC [7, 8], but the previous work uses only
one group in their experiments. To make the problem more interesting, we partition the dataset into
10 groups according to the feature Purpose. The size of group varies from 9 to 285. There are 200
model parameters (20 parameters for each group) and 1 hyperparameter.
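As a concrete reference, a sketch of the log joint of this model follows (up to additive constants). The group data layout and the rate parameter lam of the exponential prior are illustrative assumptions; labels are taken as y in {-1, +1}.

import numpy as np

def log_joint(W, v, data, lam=1.0):
    # W: list of per-group weight vectors w_i; data: list of (y, X) per group.
    lp = np.log(lam) - lam * v                   # Exp(v | lam) prior
    for w_i, (y, X) in zip(W, data):
        z = y * (X @ w_i)                        # y_ij * w_i^T x_ij
        lp += -np.sum(np.log1p(np.exp(-z)))      # sum_j log sigma(y_ij w_i^T x_ij)
        d = w_i.size                             # N(w_i | 0, v I) term
        lp += -0.5 * (w_i @ w_i) / v - 0.5 * d * np.log(2 * np.pi * v)
    return lp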
We consider the reparameterization of the hyperparameter $\alpha = \log v$. For RMHMC within Gibbs, the mass matrix for group i is $G_i := I(x, \alpha)^{-1}$, where $I(x, \alpha)$ is the Fisher Information matrix for the model parameter $w_i$, together with a constant mass $G_v$. In each iteration of the Gibbs sampler, each $w_i$ is sampled by RMHMC using 6 generalized leapfrog steps and v is sampled using 6 leapfrog steps. For SSHMC, $G_i := \mathrm{Cov}(x) + \exp(\alpha) I$ with the same constant mass $G_v$.
The results are shown in Table 2. SSHMC again has much higher ESS/s than the other methods.
5.3 Stochastic Volatility
The stochastic volatility model we consider is studied in [9], in which the latent volatilities are modeled by an auto-regressive AR(1) process, such that the observations are $y_t = \epsilon_t \beta \exp(x_t / 2)$ with latent variable $x_{t+1} = \phi x_t + \eta_{t+1}$. We consider the distributions $x_1 \sim N(0, \sigma^2 / (1 - \phi^2))$, $\epsilon_t \sim N(0, 1)$ and $\eta_t \sim N(0, \sigma^2)$. The joint probability is defined as
$$p(y, x, \beta, \phi, \sigma) = \prod_{t=1}^{T} p(y_t \mid x_t, \beta)\, p(x_1) \prod_{t=2}^{T} p(x_t \mid x_{t-1}, \phi, \sigma)\, \pi(\beta)\, \pi(\phi)\, \pi(\sigma),$$
where the priors are $\pi(\beta) \propto 1/\beta$, $\sigma^2 \sim \mathrm{Inv}\text{-}\chi^2(10, 0.05)$ and $(\phi + 1)/2 \sim \mathrm{Beta}(20, 1.5)$. The FIM of $p(x \mid \beta, \phi, \sigma, y)$ depends on the hyperparameters but not on x, while the FIM of $p(\beta, \phi, \sigma \mid x, y)$ depends on $(\beta, \phi, \sigma)$. For RMHMC within Gibbs we take the FIM as the metric tensor, following [7]. For SSHMC, we define $G_x$ as the inverse Hessian of $\log p(x \mid \beta, \phi, \sigma, y)$, and $G_{(\beta,\phi,\sigma)}$ as an identity matrix. In each ABLA step, we use 5 leapfrog steps for updates of x and 2 leapfrog steps for updates of the hyperparameters, so that the running time of SSHMC is about 7 times that of standard HMC.
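A sketch of the corresponding log joint follows (up to additive constants, with the hyperparameter priors omitted for brevity); the function and variable names are ours.

import numpy as np

def sv_log_joint(x, y, beta, phi, sigma):
    # x_1 ~ N(0, sigma^2 / (1 - phi^2))
    lp = (-0.5 * x[0]**2 * (1 - phi**2) / sigma**2
          + 0.5 * np.log(1 - phi**2) - np.log(sigma))
    # x_t | x_{t-1} ~ N(phi * x_{t-1}, sigma^2)
    innov = x[1:] - phi * x[:-1]
    lp += -0.5 * (innov @ innov) / sigma**2 - (len(x) - 1) * np.log(sigma)
    # y_t = eps_t * beta * exp(x_t / 2)  =>  y_t | x_t ~ N(0, beta^2 * exp(x_t))
    lp += np.sum(-0.5 * (y / beta)**2 * np.exp(-x) - 0.5 * x - np.log(beta))
    return lp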
Figure 3: Sample means of the latent fields of the LGCPP model from (a) RMHMC and (b) SSHMC, and the normalized histograms of the sampled hyperparameters (c) $\sigma$ and (d) $\beta$. We draw 5000 samples from both methods after 1000 burn-in. The true hyperparameter values are $(\sigma = 1.9, \beta = 0.03)$.
Table 4: The ESS of 5000 posterior samples from the 32x32 LGCPP after 1000 burn-in samples. x is the 1024-dimensional vector of latent variables and $(\sigma, \beta)$ are the hyperparameters of the Gaussian Process prior. "min ESS/h" means minimum ESS per hour.

Method        | time(h) | ESS x (min, med, max) | ESS($\sigma$, $\beta$) | min ESS/h
SSHMC         | 2.6     | (7.8, 30, 39)x10^2    | (2101, 270)            | 103.8
RMHMC(Gibbs)  | 2.64    | (1, 29, 38.3)x10^2    | (200, 46)              | 16
We generate 20000 samples using each method after 10000 burn-in samples. As shown in Figure 2, the histograms of the hyperparameters from all methods converge to the same distribution, so all methods are mixing well. But from Table 3, we see that SSHMC generates almost twice as many effective samples per second as RMHMC within Gibbs.
5.4 Log-Gaussian Cox Point Process
The log-Gaussian Cox Point Process (LGCPP) is another popular testing benchmark [5, 7, 14]. We
follow the experimental setting of Girolami and Calderhead [7]. The observations Y = {yij } are
counts at the locations $(i, j)$, $i, j = 1, \ldots, d$, on a regular spatial grid, which are conditionally independent given a latent intensity process $\Lambda = \{\lambda(i, j)\}$ with means $m\lambda(i, j) = m \exp(x_{i,j})$, where $m = 1/d^2$, $X = \{x_{i,j}\}$, $x = \mathrm{Vec}(X)$ and $y = \mathrm{Vec}(Y)$. X is assigned a Gaussian process prior, with mean function $m(x_{i,j}) = \mu 1$ and covariance function $\Sigma(x_{i,j}, x_{i',j'}) = \sigma^2 \exp(-\delta(i, i', j, j')/(\beta d))$, where $\delta(\cdot)$ is the Euclidean distance between $(i, j)$ and $(i', j')$. The log joint probability is given by
$$\log p(y, x \mid \mu, \sigma, \beta) = \sum_{i,j} \left[ y_{i,j} x_{i,j} - m \exp(x_{i,j}) \right] - \frac{1}{2} (x - \mu 1)^\top \Sigma^{-1} (x - \mu 1).$$
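A direct transcription of the log joint above (up to constants); Sigma_inv is the precomputed inverse GP covariance, and the names are ours.

import numpy as np

def lgcpp_log_joint(x, y, mu, Sigma_inv, m):
    # sum_ij [y_ij x_ij - m exp(x_ij)] - 0.5 (x - mu 1)^T Sigma^{-1} (x - mu 1)
    r = x - mu
    return y @ x - m * np.sum(np.exp(x)) - 0.5 * (r @ (Sigma_inv @ r))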
We consider a 32x32 grid, which has 1024 latent variables; each latent variable $x_{i,j}$ corresponds to a single observation $y_{i,j}$. We consider RMHMC within Gibbs with the FIM of the conditional posteriors; see [7] for the FIM of this model. Generalized leapfrog steps are required for updating $(\sigma, \beta)$, but only plain leapfrog steps are required for updating x. Each Gibbs iteration takes 20 leapfrog steps for x and 1 generalized leapfrog step for $(\sigma, \beta)$. In SSHMC, we use $G_x = \Sigma^{-1}$ and $G_{(\sigma,\beta)} = I$. In each ABLA step, the update of x takes 2 leapfrog steps and the update of $(\sigma, \beta)$ takes 1 leapfrog step. Each SSHMC transition takes 10 ABLA steps. We do not consider HMC on LGCPP, because it mixes extremely slowly for the hyperparameters.
The results of ESS are given in Table 4. The mean of the sampled latent variables and the histogram
of sampled hyperparameters are given in Figure 3. It is clear that the samples of RMHMC and
SSHMC are consistent, so both methods are mixing well. However, SSHMC generates about six
times as many effective samples per hour as RMHMC within Gibbs.
6 Conclusion
We have presented Semi-Separable Hamiltonian Monte Carlo (SSHMC), a new version of Riemannian manifold Hamiltonian Monte Carlo (RMHMC) that aims to retain the flexibility of RMHMC for difficult Bayesian sampling problems, while achieving greater simplicity and lower computational complexity. We tested SSHMC on several different hierarchical models; on all the models we considered, SSHMC outperforms both HMC and RMHMC within Gibbs in terms of the number of effective samples produced in a fixed amount of computation time. Future work could consider other choices of mass matrix within the semi-separable framework, or the use of SSHMC within discrete models, following previous work on discrete HMC [12, 15].
References
[1] K. Bache and M. Lichman. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.
[2] M. J. Betancourt. A general metric for Riemannian manifold Hamiltonian Monte Carlo. ArXiv e-prints, Dec. 2012.
[3] M. J. Betancourt and M. Girolami. Hamiltonian Monte Carlo for hierarchical models. ArXiv e-prints, Dec. 2013.
[4] K. Choo. Learning hyperparameters for neural network models using Hamiltonian dynamics. PhD thesis, Citeseer, 2000.
[5] O. F. Christensen, G. O. Roberts, and J. S. Rosenthal. Scaling limits for the transient phase of local Metropolis-Hastings algorithms. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):253-268, 2005.
[6] C. J. Geyer. Practical Markov Chain Monte Carlo. Statistical Science, pages 473-483, 1992.
[7] M. Girolami and B. Calderhead. Riemann manifold Langevin and Hamiltonian Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(2):123-214, 2011. doi: 10.1111/j.1467-9868.2010.00765.x.
[8] M. D. Hoffman and A. Gelman. The no-U-turn sampler: Adaptively setting path lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research, 15:1593-1623, 2014.
[9] S. Kim, N. Shephard, and S. Chib. Stochastic volatility: likelihood inference and comparison with ARCH models. The Review of Economic Studies, 65(3):361-393, 1998.
[10] B. Leimkuhler and S. Reich. Simulating Hamiltonian dynamics, volume 14. Cambridge University Press, 2004.
[11] R. Neal. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, pages 113-162, 2011.
[12] A. Pakman and L. Paninski. Auxiliary-variable exact Hamiltonian Monte Carlo samplers for binary distributions. In Advances in Neural Information Processing Systems 26, pages 2490-2498, 2013.
[13] C. P. Robert and G. Casella. Monte Carlo Statistical Methods. Springer, 2004.
[14] Z. Wang, S. Mohamed, and N. de Freitas. Adaptive Hamiltonian and Riemann manifold Monte Carlo samplers. In International Conference on Machine Learning (ICML), pages 1462-1470, 2013. URL http://jmlr.org/proceedings/papers/v28/wang13e.pdf. JMLR W&CP 28 (3): 1462-1470, 2013.
[15] Y. Zhang, C. Sutton, A. Storkey, and Z. Ghahramani. Continuous relaxations for discrete Hamiltonian Monte Carlo. In Advances in Neural Information Processing Systems (NIPS), 2012.
Bayesian Sampling Using Stochastic Gradient Thermostats
Nan Ding*
Google Inc.
[email protected]
Changyou Chen
Duke University
[email protected]
Youhan Fang*
Purdue University
[email protected]
Robert D. Skeel
Purdue University
[email protected]
Ryan Babbush
Google Inc.
[email protected]
Hartmut Neven
Google Inc.
[email protected]
Abstract
Dynamics-based sampling methods, such as Hybrid Monte Carlo (HMC) and
Langevin dynamics (LD), are commonly used to sample target distributions. Recently, such approaches have been combined with stochastic gradient techniques
to increase sampling efficiency when dealing with large datasets. An outstanding
problem with this approach is that the stochastic gradient introduces an unknown
amount of noise which can prevent proper sampling after discretization. To remedy this problem, we show that one can leverage a small number of additional
variables to stabilize momentum fluctuations induced by the unknown noise. Our
method is inspired by the idea of a thermostat in statistical physics and is justified
by a general theory.
1 Introduction
The generation of random samples from a posterior distribution is a pervasive problem in Bayesian
statistics, which has many important applications in machine learning. The Markov Chain Monte Carlo method (MCMC), proposed by Metropolis et al. [16], generates unbiased samples from a
desired distribution when the density function is known up to a normalizing constant. However,
traditional MCMC methods are based on random walk proposals which lead to highly correlated
samples. On the other hand, dynamics-based sampling methods, e.g. Hybrid Monte Carlo (HMC)
[6, 10], avoid this high degree of correlation by combining dynamic systems with the Metropolis
step. The dynamic system uses information from the gradient of the log density to reduce the random walk effect, and the Metropolis step serves as a correction of the discretization error introduced
by the numerical integration of the dynamic systems.
The computational cost of HMC methods depends primarily on the gradient evaluation. In many
machine learning problems, expensive gradient computations are a consequence of working with
extremely large datasets. In such scenarios, methods based on stochastic gradients have been very
successful. A stochastic gradient uses the gradient obtained from a random subset of the data to
approximate the true gradient. This idea was first used in optimization [9, 19] but was recently
adapted for sampling methods based on stochastic differential equations (SDEs) such as Brownian
dynamics [1, 18, 24] and Langevin dynamics [5].
Due to discretization, stochastic gradients introduce an unknown amount of noise into the dynamic
system. Existing methods sample correctly only when the step size is small or when a good estimate
of the noise is available. In this paper, we propose a method based on SDEs that self-adapts to the
unknown noise with the help of a small number of additional variables. This allows for the use of a larger discretization step, a smaller diffusion factor, or a smaller minibatch to improve the sampling efficiency without sacrificing accuracy.
From the statistical physics perspective, all these dynamics-based sampling methods are approaches that use dynamics to approximate a canonical ensemble [23]. In a canonical ensemble, the distribution of the states follows the canonical distribution, which corresponds to the target posterior distribution of interest. In attempting to sample from the canonical ensemble, existing methods have neglected the condition that the system temperature must remain near a target temperature (Eq. (4) of Sec. 3). When this requirement is ignored, noise introduced by stochastic gradients may drive the system temperature away from the target temperature and cause inaccurate sampling. The additional variables in our method essentially play the role of a thermostat which controls the temperature and, as a consequence, handles the unknown noise. This approach can also be found by following a general recipe which helps in designing dynamic systems that produce correct samples.
The rest of the paper is organized as follows. Section 2 briefly reviews the related background. Section 3 proposes the stochastic gradient Nosé-Hoover thermostat method, which maintains the canonical ensemble. Section 4 presents the general recipe for finding proper SDEs and mathematically shows that the proposed method produces samples from the correct target distribution. Section 5 compares our method with previous methods on synthetic and real-world machine learning applications. The paper is concluded in Section 6.
2 Background
Our objective is to generate random samples from the posterior probability density $p(\theta \mid X) \propto p(X \mid \theta)\, p(\theta)$, where $\theta$ represents an n-dimensional parameter vector and X represents the data. The canonical form is $p(\theta \mid X) = (1/Z) \exp(-U(\theta))$, where $U(\theta) = -\log p(X \mid \theta) - \log p(\theta)$ is referred to as the potential energy and Z is the normalizing constant. Here, we briefly review a few dynamics-based sampling methods, including HMC, LD, stochastic gradient LD (SGLD) [24], and stochastic gradient HMC (SGHMC) [5], while relegating a more comprehensive review to Appendix A.
HMC [17] works in an extended space $\Gamma = (\theta, p)$, where $\theta$ and p simulate the positions and the momenta of particles in a system. Although some works, e.g. [7, 8], make use of variable mass, we assume that all particles have unit constant mass (i.e. $m_i = 1$). The joint density of $\theta$ and p can be written as $\rho(\theta, p) \propto \exp(-H(\theta, p))$, where $H(\theta, p) = U(\theta) + K(p)$ is called the Hamiltonian (the total energy). $U(\theta)$ is called the potential energy and $K(p) = p^\top p / 2$ is called the kinetic energy. Note that p has a standard normal distribution. The force on the system is defined as $f(\theta) = -\nabla U(\theta)$. It can be shown that the Hamiltonian dynamics
$$d\theta = p\, dt, \qquad dp = f(\theta)\, dt,$$
maintain a constant total energy [17]. In each step of the HMC algorithm, one first randomizes p according to the standard normal distribution; then evolves $(\theta, p)$ according to the Hamiltonian dynamics (solved by numerical integrators); and finally uses the Metropolis step to correct the discretization error.
Langevin dynamics (with diffusion factor A) are described by the following SDE,
$$d\theta = p\, dt, \qquad dp = f(\theta)\, dt - A p\, dt + \sqrt{2A}\, dW, \quad (1)$$
where W is n independent Wiener processes (see Appendix A), and dW can be informally written as $N(0, I\, dt)$, or simply $N(0, dt)$ as in [5]. Brownian dynamics
$$d\theta = f(\theta)\, dt + N(0, 2\, dt)$$
are obtained from Langevin dynamics by rescaling time $t \leftarrow At$ and letting $A \to \infty$; i.e., on long time scales inertia effects can be neglected [11]. When the size of the dataset is big, the computation
of the gradient of $-\log p(X \mid \theta) = -\sum_{i=1}^{N} \log p(x_i \mid \theta)$ can be very expensive. In such situations, one could use the likelihood of a random subset of the data $x_i$'s to approximate the true likelihood,
$$\tilde{U}(\theta) = -\frac{N}{\tilde{N}} \sum_{i=1}^{\tilde{N}} \log p(x_{(i)} \mid \theta) - \log p(\theta), \quad (2)$$
where $x_{(i)}$ represents a random subset of $\{x_i\}$ and $\tilde{N} \ll N$. Define the stochastic force $\tilde{f}(\theta) = -\nabla \tilde{U}(\theta)$. The SGLD algorithm [24] uses $\tilde{f}(\theta)$ and the Brownian dynamics to generate samples,
$$d\theta = \tilde{f}(\theta)\, dt + N(0, 2\, dt).$$
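A minimal sketch of the stochastic force of Eq. (2) and the resulting SGLD update; the data container and function names are illustrative assumptions.

import numpy as np

def stochastic_force(theta, X, grad_log_lik, grad_log_prior, n_tilde, rng):
    # f~(theta) = -grad U~(theta): rescaled minibatch gradient of the log
    # likelihood plus the gradient of the log prior, as in Eq. (2).
    idx = rng.choice(len(X), size=n_tilde, replace=False)
    g = sum(grad_log_lik(X[i], theta) for i in idx)
    return (len(X) / n_tilde) * g + grad_log_prior(theta)

def sgld_step(theta, force, h, rng):
    # One discretized SGLD update: theta <- theta + f~(theta) h + N(0, 2h).
    return theta + force(theta) * h + np.sqrt(2 * h) * rng.standard_normal(theta.shape)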
In [5], the stochastic force with a discretization step h is approximated as $h \tilde{f}(\theta) \simeq h f(\theta) + N(0, 2h B(\theta))$ (note that the argument is not rigorous and that other significant artifacts of discretization may have been neglected). The SGHMC algorithm uses a modified LD,
$$d\theta = p\, dt, \qquad dp = \tilde{f}(\theta)\, dt - A p\, dt + N(0, 2(A I - \hat{B}(\theta))\, dt), \quad (3)$$
where $\hat{B}(\theta)$ is intended to offset $B(\theta)$, the noise from the stochastic force.
However, $\hat{B}(\theta)$ is hard to estimate in practice and cannot be omitted when the discretization step h is not small enough. Since poor estimation of $\hat{B}(\theta)$ may lead to inaccurate sampling, we attempt to find a dynamic system which is able to adaptively fit to the noise without explicit estimation. The intuition comes from the practice of sampling a canonical ensemble in statistical physics.
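For comparison with the method proposed below, here is a one-step sketch of the SGHMC update (3) with a scalar friction A and a scalar noise estimate B_hat (the names are ours; B_hat = 0 corresponds to simply ignoring the stochastic-gradient noise).

import numpy as np

def sghmc_step(theta, p, stochastic_force, h, A, B_hat, rng):
    # One discretized step of Eq. (3); requires A >= B_hat so that the
    # injected noise variance 2 (A - B_hat) h stays non-negative.
    noise = np.sqrt(2.0 * (A - B_hat) * h) * rng.standard_normal(p.shape)
    p = p + stochastic_force(theta) * h - A * p * h + noise
    theta = theta + p * h
    return theta, p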
The Metropolis step in SDE-based samplers with stochastic gradients is sometimes omitted on large datasets, because the evaluation of the potential energy requires using the entire dataset, which cancels the benefit of using stochastic gradients. There is some recent work [2, 3, 14] which attempts to estimate the Metropolis step using partial data. Although this is an interesting direction for future work, in this paper we do not consider applying the Metropolis step in conjunction with stochastic gradients.
3 Stochastic Gradient Thermostats
In statistical physics, a canonical ensemble represents the possible states of a system in thermal equilibrium with a heat bath at fixed temperature T [23]. The probability of the states in a canonical ensemble follows the canonical distribution $\rho(\theta, p) \propto \exp(-H(\theta, p)/(k_B T))$, where $k_B$ is the Boltzmann constant. A critical characteristic of the canonical ensemble is that the system temperature, defined as the mean kinetic energy, satisfies the following thermal equilibrium condition,
$$\frac{k_B T}{2} = \frac{1}{n} E[K(p)], \quad \text{or equivalently,} \quad k_B T = \frac{1}{n} E[p^\top p]. \quad (4)$$
All dynamics-based sampling methods approximate the canonical ensemble to generate samples. In Bayesian statistics, n is the dimension of $\theta$, and $k_B T = 1$, so that $\rho(\theta, p) \propto \exp(-H(\theta, p))$ and, more importantly, $\rho_\theta(\theta) \propto \exp(-U(\theta))$. However, one key fact that was overlooked in previous methods is that dynamics that correctly simulate the canonical ensemble must maintain the thermal equilibrium condition (4). Besides its physical meaning, the condition is necessary for p to be distributed as its marginal canonical distribution $\rho_p(p) \propto \exp(-K(p))$.
It can be verified that ordinary HMC and LD (1) with the true force both maintain (4). However, after combination with the stochastic force $\tilde{f}(\theta)$, the dynamics (3) may drift away from thermal equilibrium if $\hat{B}(\theta)$ is poorly estimated. Therefore, to generate correct samples, one needs to introduce a proper thermostat, which adaptively controls the mean kinetic energy. To this end, we introduce an additional variable $\xi$, and use the following dynamics (with diffusion factor A and $k_B T = 1$),
$$d\theta = p\, dt, \qquad dp = \tilde{f}(\theta)\, dt - \xi p\, dt + \sqrt{2A}\, N(0, dt), \quad (5)$$
$$d\xi = \left( \frac{1}{n} p^\top p - 1 \right) dt. \quad (6)$$
Intuitively, if the mean kinetic energy is higher than 1/2, then $\xi$ gets bigger and p experiences more friction in (5); on the other hand, if the mean kinetic energy is lower, then $\xi$ gets smaller and p experiences less friction. Because (6) appears to be the same as the Nosé-Hoover thermostat [13] in statistical physics, we call our method the stochastic gradient Nosé-Hoover thermostat (SGNHT, Algorithm 1). In Section 4, we will show that (6) is a simplified version of a more general SGNHT method that is able to handle high-dimensional non-isotropic noise from $\tilde{f}$. But before that, let us first look at a 1-D illustration of SGNHT sampling in the presence of unknown noise.
Algorithm 1: Stochastic Gradient Nosé-Hoover Thermostat
Input: Parameters h, A.
Initialize $\theta_{(0)} \in \mathbb{R}^n$, $p_{(0)} \sim N(0, I)$, and $\xi_{(0)} = A$;
for t = 1, 2, ... do
    Evaluate $\nabla \tilde{U}(\theta_{(t-1)})$ from (2);
    $p_{(t)} = p_{(t-1)} - \xi_{(t-1)} p_{(t-1)} h - \nabla \tilde{U}(\theta_{(t-1)}) h + \sqrt{2A}\, N(0, h)$;
    $\theta_{(t)} = \theta_{(t-1)} + p_{(t)} h$;
    $\xi_{(t)} = \xi_{(t-1)} + (\frac{1}{n} p_{(t)}^\top p_{(t)} - 1) h$;
end
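A minimal NumPy transcription of Algorithm 1 follows; the function name and interface are ours, and grad_U_tilde stands for the minibatch stochastic gradient of Eq. (2).

import numpy as np

def sgnht_sample(grad_U_tilde, theta0, h, A, num_iters, rng=None):
    # Stochastic Gradient Nose-Hoover Thermostat (Algorithm 1).
    rng = np.random.default_rng() if rng is None else rng
    n = theta0.size
    theta = theta0.astype(float).copy()
    p = rng.standard_normal(n)
    xi = A                                   # thermostat variable, initialised at A
    samples = np.empty((num_iters, n))
    for t in range(num_iters):
        p = (p - xi * p * h - grad_U_tilde(theta) * h
             + np.sqrt(2.0 * A) * rng.normal(0.0, np.sqrt(h), n))
        theta = theta + p * h
        xi = xi + (p @ p / n - 1.0) * h
        samples[t] = theta
    return samples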
Illustrations of a Double-well Potential. To illustrate that the adaptive update (6) is able to control the mean kinetic energy and, more importantly, produce correct sampling with unknown noise on the gradient, we consider the following double-well potential,
$$U(\theta) = (\theta + 4)(\theta + 1)(\theta - 1)(\theta - 3)/14 + 0.5.$$
The target distribution is $\rho(\theta) \propto \exp(-U(\theta))$. To simulate the unknown noise, we let $\nabla \tilde{U}(\theta) h = \nabla U(\theta) h + N(0, 2Bh)$, where h = 0.01 and B = 1. In the interest of clarity we did not inject additional noise other than the noise from $\nabla \tilde{U}(\theta)$, namely A = 0. In Figure 1 we plot the estimated density based on $10^6$ samples and the mean kinetic energy over iterations, when $\xi$ is fixed at 0.1, 1, 10 successively, as well as when $\xi$ follows our thermostat update in (6).
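The double-well experiment can be reproduced with the sgnht_sample sketch above; the noisy gradient below injects N(0, 2Bh) per step into $h \nabla \tilde{U}$, matching the setup in the text (the seed and array shapes are our own choices).

def grad_U(theta):
    # d/dtheta of (theta+4)(theta+1)(theta-1)(theta-3)/14 + 0.5
    t = theta[0]
    return np.array([(4 * t**3 + 3 * t**2 - 26 * t - 1) / 14.0])

h, B, A = 0.01, 1.0, 0.0
rng = np.random.default_rng(0)

def grad_U_noisy(theta):
    # h * grad_U_tilde = h * grad_U + N(0, 2Bh)  <=>  add N(0, 2B/h) noise here
    return grad_U(theta) + np.sqrt(2 * B / h) * rng.standard_normal(1)

samples = sgnht_sample(grad_U_noisy, np.array([0.0]), h, A, num_iters=10**6, rng=rng)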
From Figure 1, when $\xi = B = 1$, the SDE is the ordinary Langevin dynamics. In this case, the sampling is accurate and the kinetic energy is controlled around 0.5. When $\xi > B$, the kinetic energy drops to a low value, and the sampling gets stuck in one local minimum; this is what happens in SGD optimization with momentum. When $\xi < B$, the kinetic energy gets too high, and the sampling looks like a random walk. For SGNHT, the sampling looks as accurate as the one with $\xi = B$ and the kinetic energy is also controlled around 0.5. In fact, in Appendix B we see that the value of $\xi$ in SGNHT quickly converges to B = 1.
Figure 1: The samples of $\rho(\theta)$ and the mean kinetic energy over iterations K(p) with $\xi = 1$ (1st), $\xi = 10$ (2nd), $\xi = 0.1$ (3rd), and the SGNHT (4th). The first three do not use a thermostat. The fourth column shows that the SGNHT method samples accurately and maintains the mean kinetic energy under unknown noise.
4 The General Recipe
In this section, we mathematically justify the proposed SGNHT method. We begin with a theorem showing why and how a sampler based on SDEs using stochastic gradients can produce the correct target distribution. The theorem serves two purposes. First, one can examine whether a given SDE sampler is correct or not. The theorem is more general than previous ones in [5, 24], which focus on justifying individual methods. Second, the theorem can be a general recipe for proposing new methods. As a concrete example of using this approach, we show how to obtain SGNHT from the main theorem.
4.1 The Main Theorem
Consider the following general stochastic differential equations that use the stochastic force:
$$d\Gamma = v(\Gamma)\, dt + N(0, 2 D(\Gamma)\, dt) \quad (7)$$
where $\Gamma = (\theta, p, \xi)$, and both p and $\xi$ are optional. v is a vector field that characterizes the deterministic part of the dynamics. $D(\Gamma) = A + \mathrm{diag}(0, B(\theta), 0)$, where the injected noise A is known and constant, whereas the noise of the stochastic gradient $B(\theta)$ is unknown, may vary, and only appears in blocks corresponding to rows of the momentum. Both A and B are symmetric positive semidefinite. Taking the dynamics of SGHMC as an example, it has $\Gamma = (\theta, p)$, $v = (p, f - Ap)$ and $D(\Gamma) = \mathrm{diag}(0, A I - \hat{B}(\theta) + B(\theta))$.
Let $\rho(\Gamma) = (1/Z) \exp(-H(\Gamma))$ be the joint probability density of all variables, and write H as $H(\Gamma) = U(\theta) + Q(\theta, p, \xi)$. The marginal density for $\theta$ must equal the target density,
$$\exp(-U(\theta)) \propto \iint \exp(-U(\theta) - Q(\theta, p, \xi))\, dp\, d\xi \quad (8)$$
which will be referred to as the marginalization condition.
Main Theorem. The stochastic process of $\theta$ generated by the stochastic differential equation (7) has the target distribution $\rho_\theta(\theta) = (1/Z) \exp(-U(\theta))$ as its stationary distribution, if $\rho \propto \exp(-H)$ satisfies the marginalization condition (8), and
$$\nabla \cdot (\rho v) = \nabla \nabla^\top : (\rho D), \quad (9)$$
where we use concise notation: $\nabla = (\partial/\partial\theta, \partial/\partial p, \partial/\partial\xi)$ is a column vector, $\cdot$ represents a vector inner product $x \cdot y = x^\top y$, and : represents a matrix double dot product $X : Y = \mathrm{trace}(X^\top Y)$.
Proof. See Appendix C.
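As a sanity check of condition (9), the following symbolic computation (our own illustration, not from the paper) verifies it for one-dimensional Langevin dynamics (1) with the true force: here $\Gamma = (\theta, p)$, $v = (p, f - Ap)$, $D = \mathrm{diag}(0, A)$ and $\rho \propto \exp(-U(\theta) - p^2/2)$.

import sympy as sp

theta, p, A = sp.symbols('theta p A', real=True)
U = sp.Function('U')(theta)
rho = sp.exp(-U - p**2 / 2)
f = -sp.diff(U, theta)

lhs = sp.diff(rho * p, theta) + sp.diff(rho * (f - A * p), p)  # div(rho v)
rhs = sp.diff(rho * A, p, 2)                                   # grad grad^T : (rho D)
print(sp.simplify(lhs - rhs))  # prints 0, so (9) holds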
Remark. The theorem implies that when the SDE is solved exactly (namely $h \to 0$), the noise of the stochastic force has no effect, because $\lim_{h \to 0} D = A$ [5]. In this case, any dynamics that produce the correct distribution with the true gradient, such as the original Langevin dynamics, can also produce the correct distribution with the stochastic gradient.
However, when there is discretization error, one must find the proper H, v and A to ensure production of the correct distribution of $\theta$. Towards this end, the theorem provides a general recipe for finding proper dynamics that can sample correctly in the presence of stochastic forces. To use this prescription, one may freely select the dynamics characterized by v and A, as well as the joint stationary distribution for which the marginalization condition holds. Together, the selected v, A and $\rho$ must satisfy this main theorem.
The marginalization condition is important because for some stochastic differential equations there exists a $\rho$ that makes (9) hold even though the marginalized distribution is not the target distribution. Therefore, care must be taken when designing the dynamics. In the following subsection, we will use the proposed stochastic gradient Nosé-Hoover thermostat as an illustrative example of how our recipe may be used to discover new methods. We will show more examples in Appendix D.
4.2 Revisiting the Stochastic Gradient Nosé-Hoover Thermostat
Let us start from the following dynamics:
$$d\theta = p\, dt, \qquad dp = f\, dt - \Xi p\, dt + N(0, 2 D\, dt),$$
where both $\Xi$ and D are $n \times n$ matrices. Apparently, when $\Xi \neq D$, the dynamics will not generate the correct target distribution (see Appendix D). Now let us add dynamics for $\Xi$, denoted by $d\Xi = v(\Xi)\, dt$, and demonstrate application of the main theorem.
Let $\rho(\theta, p, \Xi) = (1/Z) \exp(-H(\theta, p, \Xi))$ be our target distribution, where $H(\theta, p, \Xi) = U(\theta) + Q(p, \Xi)$ and $Q(p, \Xi)$ is also to be determined. Clearly, the marginalization condition is satisfied for such $H(\theta, p, \Xi)$.
Let $R_z$ denote the gradient of a function R, and $R_{zz}$ denote the Hessian. For simplicity, we constrain $\nabla_\Xi \cdot v(\Xi) = 0$, and assume that D is a constant matrix. Then the LHS and RHS of (9) become
$$\mathrm{LHS} = (\nabla \cdot v - \nabla H \cdot v)\rho = \left( -\mathrm{trace}(\Xi) + f^\top p - Q_p^\top f + Q_p^\top \Xi p - Q_\Xi : v(\Xi) \right) \rho,$$
$$\mathrm{RHS} = D : \rho_{pp} = D : \left( Q_p Q_p^\top - Q_{pp} \right) \rho.$$
Equating both sides, one gets
$$-\mathrm{trace}(\Xi) + f^\top p - Q_p^\top f + Q_p^\top \Xi p - Q_\Xi : v(\Xi) = D : (Q_p Q_p^\top) - D : Q_{pp}.$$
To cancel the f terms, set $Q_p = p$; then $Q(p, \Xi) = \frac{1}{2} p^\top p + S(\Xi)$, which leaves $S(\Xi)$ to be determined. The equation becomes
$$-\Xi : I + \Xi : (p p^\top) - S_\Xi : v(\Xi) = D : (p p^\top) - D : I. \quad (10)$$
Obviously, $v(\Xi)$ must be a function of $p p^\top$, since $S_\Xi$ is independent of p. Also, D must only appear in $S_\Xi$, since we want $v(\Xi)$ to be independent of the unknown D. Finally, $v(\Xi)$ should be independent of $\Xi$, since we let $\nabla_\Xi \cdot v(\Xi) = 0$. Combining all three observations, we let $v(\Xi)$ be a linear function of $p p^\top$, and $S_\Xi$ a linear function of $\Xi$. With some algebra, one finds that
$$v(\Xi) = (p p^\top - I)/\mu, \quad (11)$$
and $S_\Xi = \mu(\Xi - D)$, which means $Q(p, \Xi) = \frac{1}{2} p^\top p + \frac{\mu}{2} (\Xi - D) : (\Xi - D)$. Equation (11) defines a general stochastic gradient Nosé-Hoover thermostat. When $D = D I$ and $\Xi = \xi I$ (here D and $\xi$ are both scalars and I is the identity matrix), one can simplify (10) and obtain $v(\xi) = (p^\top p - n)/\mu$. It reduces to (6) of the SGNHT in Section 3 when $\mu = n$.
The Nosé-Hoover thermostat without stochastic terms has $\xi \sim N(0, \mu^{-1})$. When there is a stochastic term $N(0, 2 D\, dt)$, the distribution of $\Xi$ changes to a matrix normal distribution $\mathcal{MN}(D, \mu^{-1} I, I)$ (in the scalar case, $N(D, \mu^{-1})$). This indicates that the thermostat absorbs the stochastic term D, since the expected value of $\Xi$ is equal to D, and leaves the marginal distribution of $\theta$ invariant.
In the derivation above, we assumed that D is constant (by assuming B constant). This assumption is reasonable when the data size is large, so that the posterior of $\theta$ has small variance. In addition, the full dynamics of $\Xi$ require an additional $n \times n$ set of equations of motion, which is generally too costly. In practice, we found that Algorithm 1 with a single scalar $\xi$ works well.
5 Experiments
5.1 Gaussian Distribution Estimation Using Stochastic Gradient
We first demonstrate our method on a simple example: Bayesian inference on 1D normal distributions. The first part of the experiment tries to estimate the mean of the normal distribution with known variance, given N = 100 random examples from N(0, 1). The likelihood is $N(x_i \mid \mu, 1)$, and an improper uniform prior on $\mu$ is assigned. In each iteration we randomly select $\tilde{N} = 10$ examples. The noise of the stochastic gradient is a constant given $\tilde{N}$ (Appendix E).
Figure 2 shows the density of $10^6$ samples obtained by SGNHT (1st plot) and SGHMC (2nd plot). As we can see, SGNHT samples accurately without knowing the variance of the noise of the stochastic force under all parameter settings, whereas SGHMC samples accurately only when h is small and A is large. The 3rd plot shows the mean of the $\xi$ values in SGNHT. When h = 0.001, $\xi$ and A are close. However, when h = 0.01, $\xi$ becomes much larger than A. This indicates that the discretization introduces a large noise from the stochastic gradient, and the $\xi$ variable effectively absorbs the noise.
The second part of the experiment estimates both the mean and the variance of the normal distribution. We use the likelihood function $N(x_i \mid \mu, \gamma^{-1})$ and the Normal-Gamma distribution $\mu, \gamma \sim N(\mu \mid 0, \gamma)\, \mathrm{Gam}(\gamma \mid 1, 1)$ as the prior. The variance of the stochastic gradient noise is no longer a constant and depends on the values of $\mu$ and $\gamma$ (see Appendix E).
Similar density plots are available in Appendix E. Here we plot the Root Mean Square Error (RMSE) of the density estimation vs. the autocorrelation time of the observable $\mu + \gamma$ under various h and A in the 4th plot of Figure 2. We can see that SGNHT has significantly lower autocorrelation time than SGHMC at similar sampling accuracy. More details about the h and A values which produce the plot are available in Appendix E.
Figure 2: Density of $\mu$ obtained by SGNHT with known variance (1st), density of $\mu$ obtained by SGHMC with known variance (2nd), mean of $\xi$ over iterations with known variance in SGNHT (3rd), and RMSE vs. autocorrelation time for both methods with unknown variance (4th).
5.2 Machine Learning Applications
In the following machine learning experiments, we used a reformulation of (5) and (6) similar to [5], obtained by letting $u = p h$, $\eta = h^2$, $\alpha = \xi h$ and $a = A h$; the algebra is sketched below. The resulting Algorithm 2 is provided in Appendix F. In [5], SGHMC has been extensively compared with SGLD, SGD and SGD-momentum. Our experiments will focus on comparing SGHMC and SGNHT. Details of the experiment settings are described below. The test results over various parameters are reported in Figure 3.
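Substituting these definitions into Algorithm 1 (multiplying the momentum and thermostat updates by h) gives the algebraically equivalent update below; this is our own transcription of the reformulation, with Appendix F of the paper being the authoritative statement of Algorithm 2:
$$u_{(t)} = u_{(t-1)} - \alpha_{(t-1)} u_{(t-1)} - \eta \nabla \tilde{U}(\theta_{(t-1)}) + \sqrt{2a}\, N(0, \eta),$$
$$\theta_{(t)} = \theta_{(t-1)} + u_{(t)}, \qquad \alpha_{(t)} = \alpha_{(t-1)} + \frac{1}{n} u_{(t)}^\top u_{(t)} - \eta.$$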
Bayesian Neural Network. We first evaluate the benchmark MNIST dataset, using a Bayesian Neural Network (BNN) as in [5]. The MNIST dataset contains 50,000 training examples, 10,000 validation examples, and 10,000 test examples. To show that our algorithm can handle the large stochastic gradient noise due to a small minibatch, we chose a minibatch size of 20. Each algorithm is run for a total of 50k iterations, with a burn-in of the first 10k iterations. The hidden layer size is 100; parameter a is chosen from {0.001, 0.01} and $\eta$ from $\{2, 4, 6, 8\} \times 10^{-7}$.
Bayesian Matrix Factorization. Next, we evaluate our method on two collaborative filtering tasks: the Movielens ml-1m dataset and the Netflix dataset, using the Bayesian probabilistic matrix factorization (BPMF) model [21]. The Movielens dataset contains 6,050 users and 3,883 movies with about 1M ratings, and the Netflix dataset contains 480,046 users and 17,000 movies with about 100M ratings. To conduct the experiments, each dataset is partitioned into training (80%) and testing (20%), and the training set is further partitioned for 5-fold cross validation. Each minibatch contains 400 ratings for Movielens1M and 40k ratings for Netflix. Each algorithm is run for 100k iterations with a burn-in of the first 20k iterations. The base number is chosen as 10; parameter a is chosen from {0.01, 0.1} and $\eta$ from $\{2, 4, 6, 8\} \times 10^{-7}$.
Latent Dirichlet Allocation. Finally, we evaluate our method on the ICML dataset using Latent Dirichlet Allocation [4]. The ICML dataset contains 765 documents from the abstracts of ICML proceedings from 2007 to 2011. After simple stopword removal, we obtained a vocabulary size of about 2K and a total of about 44K words. We used 80% of the documents for 5-fold cross validation and the remaining 20% for testing. Similar to [18], we used the semi-collapsed LDA whose posterior of $\theta_{kw}$ is provided in Appendix H. The Dirichlet prior parameter for the topic distribution of each document is set to 0.1 and the Gaussian prior for $\theta_{kw}$ is set to N(0.1, 1). Each minibatch contains 100 documents. Each algorithm is run for 50k iterations with the first 10k iterations as burn-in. The topic number is 30; parameter a is chosen from {0.01, 0.1} and $\eta$ from $\{2, 4, 6, 8\} \times 10^{-5}$.
5.2.1 Result Analysis
From Figure 3, SGNHT is apparently more stable than SGHMC when the discretization step $\eta$ is larger. In all four datasets, especially with the smaller a, SGHMC gets worse and worse results as $\eta$ increases. With the largest $\eta$, SGHMC diverges (the green curve is far beyond the plotted range) due to its failure to handle the large unknown noise with small a.
Figure 3 also gives a comprehensive view of the critical role that a plays. On one hand, a larger a may cause more random walk effects, which slows down convergence (as in Movielens1M and Netflix). On the other hand, it helps to increase ergodicity and to compensate for the unknown noise from the stochastic gradient (as in MNIST and ICML).
Throughout the experiments, we find that the kinetic energy of SGNHT is always maintained around 0.5, while that of SGHMC is usually higher. Overall, SGNHT has better test performance with the parameters selected by cross validation (see Table 2 of Appendix G).
Figure 3: The test error on MNIST (1st row), test RMSE on Movielens1M (2nd row), test RMSE on Netflix (3rd row) and test perplexity on ICML (4th row), with their standard deviations (close to 0 in rows 2 and 3), under various $\eta$ and a.
6 Conclusion and Discussion
In this paper, we find proper dynamics that adaptively fit to the noise introduced by stochastic gradients. Experiments show that our method can control the temperature, estimate the unknown noise, and perform competitively in practice. Our method can be justified in continuous time by a general theorem. The discretization of continuous SDEs, however, introduces bias. This issue has been extensively studied in previous work such as [20, 22, 15, 12]. The existence of an invariant measure has been proved (e.g., Theorem 3.2 [22] and Proposition 2.5 [12]) and a bound on the error has been obtained (e.g., $O(h^2)$ for a symmetric splitting scheme [12]). Due to space limitations, we leave a deeper discussion of this topic and a more rigorous justification to future work.
Acknowledgments
We acknowledge Kevin P. Murphy and Julien Cornebise for helpful discussions and comments.
References
[1] S. Ahn, A. K. Balan, and M. Welling. Bayesian Posterior Sampling via Stochastic Gradient Fisher Scoring. Proceedings of the 29th International Conference on Machine Learning, pages 1591-1598, 2012.
[2] A. K. Balan, Y. Chen, and M. Welling. Austerity in MCMC Land: Cutting the Metropolis-Hastings Budget. Proceedings of the 31st International Conference on Machine Learning, 2014.
[3] R. Bardenet, A. Doucet, and C. Holmes. Towards Scaling up Markov Chain Monte Carlo: an Adaptive Subsampling Approach. Proceedings of the 31st International Conference on Machine Learning, pages 405-413, 2014.
[4] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet Allocation. J. Mach. Learn. Res., 3:993-1022, March 2003.
[5] T. Chen, E. B. Fox, and C. Guestrin. Stochastic Gradient Hamiltonian Monte Carlo. Proceedings of the 31st International Conference on Machine Learning, 2014.
[6] S. Duane, A. D. Kennedy, B. J. Pendleton, and D. Roweth. Hybrid Monte Carlo. Phys. Lett. B, 195:216-222, 1987.
[7] Y. Fang, J. M. Sanz-Serna, and R. D. Skeel. Compressible Generalized Hybrid Monte Carlo. J. Chem. Phys., 140:174108 (10 pages), 2014.
[8] M. Girolami and B. Calderhead. Riemann Manifold Langevin and Hamiltonian Monte Carlo Methods. J. R. Statist. Soc. B, 73, Part 2:123-214 (with discussion), 2011.
[9] M. D. Hoffman, D. M. Blei, C. Wang, and J. Paisley. Stochastic Variational Inference. Journal of Machine Learning Research, 14(1):1303-1347, 2013.
[10] A. M. Horowitz. A Generalized Guided Monte-Carlo Algorithm. Phys. Lett. B, 268:247-252, 1991.
[11] B. Leimkuhler and C. Matthews. Rational Construction of Stochastic Numerical Methods for Molecular Sampling. arXiv:1203.5428, 2012.
[12] B. Leimkuhler, C. Matthews, and G. Stoltz. The Computation of Averages from Equilibrium and Nonequilibrium Langevin Molecular Dynamics. IMA J. Num. Anal., 2014.
[13] B. Leimkuhler and S. Reich. A Metropolis Adjusted Nosé-Hoover Thermostat. Math. Modelling Numer. Anal., 43(4):743-755, 2009.
[14] D. Maclaurin and R. P. Adams. Firefly Monte Carlo: Exact MCMC with Subsets of Data. arXiv:1403.5693, 2014.
[15] J. C. Mattingly, A. M. Stuart, and M. Tretyakov. Convergence of Numerical Time-averaging and Stationary Measures via Poisson Equations. SIAM J. Num. Anal., 48:552-577, 2014.
[16] N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller, and E. Teller. Equation of State Calculations by Fast Computing Machines. J. Chem. Phys., 21:1087-1092, 1953.
[17] R. M. Neal. MCMC Using Hamiltonian Dynamics. arXiv:1206.1901, 2012.
[18] S. Patterson and Y. W. Teh. Stochastic Gradient Riemannian Langevin Dynamics on the Probability Simplex. Advances in Neural Information Processing Systems 26, pages 3102-3110, 2013.
[19] H. Robbins and S. Monro. A Stochastic Approximation Method. The Annals of Mathematical Statistics, 22(3):400-407, 1951.
[20] G. Roberts and R. Tweedie. Exponential Convergence of Langevin Distributions and Their Discrete Approximations. Bernoulli, 2:341-363, 1996.
[21] R. Salakhutdinov and A. Mnih. Bayesian Probabilistic Matrix Factorization Using Markov Chain Monte Carlo. Proceedings of the 25th International Conference on Machine Learning, pages 880-887, 2008.
[22] D. Talay. Second Order Discretization Schemes of Stochastic Differential Systems for the Computation of the Invariant Law. Stochastics and Stochastics Reports, 29:13-36, 1990.
[23] M. E. Tuckerman. Statistical Mechanics: Theory and Molecular Simulation. Oxford University Press, 2010.
[24] M. Welling and Y. W. Teh. Bayesian Learning via Stochastic Gradient Langevin Dynamics. Proceedings of the 28th International Conference on Machine Learning, 2011.
Distributed Variational Inference in Sparse Gaussian Process Regression and Latent Variable Models
Yarin Gal*
Mark van der Wilk*
Carl E. Rasmussen
University of Cambridge
{yg279,mv310,cer54}@cam.ac.uk
*Authors contributed equally to this work.
Abstract
Gaussian processes (GPs) are a powerful tool for probabilistic inference over functions. They have been applied to both regression and non-linear dimensionality
reduction, and offer desirable properties such as uncertainty estimates, robustness
to over-fitting, and principled ways for tuning hyper-parameters. However the
scalability of these models to big datasets remains an active topic of research.
We introduce a novel re-parametrisation of variational inference for sparse GP
regression and latent variable models that allows for an efficient distributed algorithm. This is done by exploiting the decoupling of the data given the inducing
points to re-formulate the evidence lower bound in a Map-Reduce setting.
We show that the inference scales well with data and computational resources,
while preserving a balanced distribution of the load among the nodes. We further
demonstrate the utility in scaling Gaussian processes to big data. We show that
GP performance improves with increasing amounts of data in regression (on flight
data with 2 million records) and latent variable modelling (on MNIST). The results
show that GPs perform better than many common models often used for big data.
1 Introduction
Gaussian processes have been shown to be flexible models that are able to capture complicated
structure, without succumbing to over-fitting. Sparse Gaussian process (GP) regression [Titsias,
2009] and the Bayesian Gaussian process latent variable model (GPLVM, Titsias and Lawrence
[2010]) have been applied in many tasks, such as regression, density estimation, data imputation,
and dimensionality reduction. However, the use of these models with big datasets has been limited
by the scalability of the inference. For example, the use of the GPLVM with big datasets such
as the ones used in continuous-space natural language disambiguation is quite cumbersome and
challenging, and thus the model has largely been ignored in such communities.
It is desirable to scale the models up to be able to handle large amounts of data. One approach
is to spread computation across many nodes in a distributed implementation. Brockwell [2006];
Wilkinson [2005]; Asuncion et al. [2008], among others, have reasoned about the requirements such
distributed algorithms should satisfy. The inference procedure should:
1. distribute the computational load evenly across nodes,
2. scale favourably with the number of nodes,
3. and have low overhead in the global steps.
In this paper we scale sparse GP regression and latent variable modelling, presenting the first distributed inference algorithm for the models able to process datasets with millions of points. We
derive a re-parametrisation of the variational inference proposed by Titsias [2009] and Titsias and
Lawrence [2010], unifying the two, which allows us to perform inference using the original guarantees. This is achieved through the fact that conditioned on the inducing inputs, the data decouples and
the variational parameters can be updated independently on different nodes, with the only communication between nodes requiring constant time. This also allows the optimisation of the embeddings in the GPLVM to be done in parallel.
(* Authors contributed equally to this work.)
We experimentally study the properties of the suggested inference showing that the inference scales
well with data and computational resources, and showing that the inference running time scales
inversely with computational power. We further demonstrate the practicality of the inference, inspecting load distribution over the nodes and comparing run-times to sequential implementations.
We demonstrate the utility in scaling Gaussian processes to big data showing that GP performance
improves with increasing amounts of data. We run regression experiments on 2008 US flight data
with 2 million records and perform classification tests on MNIST using the latent variable model.
We show that GPs perform better than many common models which are often used for big data.
The proposed inference was implemented in Python using the Map-Reduce framework [Dean and
Ghemawat, 2008] to work on multi-core architectures, and is available as an open-source package¹.
The full derivation of the inference is given in the supplementary material as well as additional experimental results (such as robustness tests to node failure by dropping out nodes at random). The
open source software package contains an extensively documented implementation of the derivations, with references to the equations presented in the supplementary material for explanation.
2 Related Work
Recent research carried out by Hensman et al. [2013] proposed stochastic variational inference (SVI,
Hoffman et al. [2013]) to scale up sparse Gaussian process regression. Their method trained a Gaussian process using mini-batches, which allowed them to successfully learn from a dataset containing
700,000 points. Hensman et al. [2013] also note the applicability of SVI to GPLVMs and suggest
that SVI for GP regression can be carried out in parallel. However SVI also has some undesirable
properties. The variational marginal likelihood bound is less tight than the one proposed in Titsias
[2009]. This is a consequence of representing the variational distribution over the inducing targets
q(u) explicitly, instead of analytically deriving and marginalising the optimal form. Additionally
SVI needs to explicitly optimise over q(u), which is not necessary when using the analytic optimal
form. The noisy gradients produced by SVI also complicate optimisation; the inducing inputs need
to be fixed in advance because of their strong correlation with the inducing targets, and additional
optimiser-specific parameters, such as step-length, have to be introduced and fine-tuned by hand.
Heuristics do exist, but these points can make SVI rather hard to work with.
Our approach results in the same lower bound as presented in Titsias [2009], which averts the difficulties with the approach above, and enables us to scale GPLVMs as well.
3 The Gaussian Process Latent Variable Model and Sparse GP Regression
We now briefly review the sparse Gaussian process regression model [Titsias, 2009] and the Gaussian process latent variable model (GPLVM) [Lawrence, 2005; Titsias and Lawrence, 2010], in terms
of model structure and inference.
3.1 Sparse Gaussian Process Regression
We consider the standard Gaussian process regression setting, where we aim to predict the output of
some unknown function at new input locations, given a training set of n inputs {X1 , . . . , Xn } and
corresponding observations {Y1 , . . . , Yn }. The observations consist of the latent function values
{F_1, ..., F_n} corrupted by some i.i.d. Gaussian noise with precision \beta. This gives the following generative model²:

F(X_i) \sim GP(0, k(X, X)),    Y_i \sim N(F_i, \beta^{-1} I)

For convenience, we collect the data in a matrix and denote single data points by subscripts:

X \in R^{n \times q},    F \in R^{n \times d},    Y \in R^{n \times d}
(¹ see http://github.com/markvdw/GParML)
(² We follow the definition of matrix normal distribution [Arnold, 1981]. For a full treatment of Gaussian Processes, see Rasmussen and Williams [2006].)
We can marginalise out the latent F analytically in order to find the predictive distribution and
marginal likelihood. However, this involves the inversion of an n \times n matrix, thus requiring O(n^3) time complexity, which is prohibitive for large datasets.
To address this problem, many approximations have been developed which aim to summarise the
behaviour of the regression function using a sparse set of m input-output pairs, instead of the entire
dataset³. These input-output pairs are termed "inducing points" and are taken to be sufficient statistics for any predictions. Given the inducing inputs Z \in R^{m \times q} and targets u \in R^{m \times d}, predictions can be made in O(m^3) time complexity:
p(F_* | X_*, Y) = \int N\big( F_*;\; k_{*m} K_{mm}^{-1} u,\; k_{**} - k_{*m} K_{mm}^{-1} k_{m*} \big)\, p(u | Y, X)\, du    (3.1)

where K_{mm} is the covariance between the m inducing inputs, and likewise for the other subscripts.
Learning the function corresponds to inferring the posterior distribution over the inducing targets u.
Predictions are then made by marginalising u out of equation 3.1. Efficiently learning the posterior
over u requires an additional assumption to be made about the relationship between the training
data and the inducing points, such as a deterministic link using only the conditional GP mean F = K_{nm} K_{mm}^{-1} u. This results in an overall computational complexity of O(nm^2).
Quiñonero-Candela and Rasmussen [2005] view this procedure as changing the prior to make inference more tractable, with Z as hyperparameters which can be tuned using optimisation. However,
modifying the prior in response to training data has led to over-fitting. An alternative sparse approximation was introduced by Titsias [2009]. Here a variational distribution over u is introduced, with
Z as variational parameters which tighten the corresponding evidence lower bound. This greatly
reduces over-fitting, while retaining the improved computational complexity. It is this approximation which we further develop in this paper to give a distributed inference algorithm. A detailed
derivation is given in section 3 of the supplementary material.
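To make the inducing-point computation concrete, here is a minimal NumPy sketch of the predictive distribution (3.1), assuming an RBF kernel and an already-learned Gaussian posterior q(u) = N(mu_u, S_u); the names are illustrative rather than the released package's API.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def predict(Xstar, Z, mu_u, S_u, jitter=1e-6):
    """Mean and variance of eq. (3.1) under q(u) = N(mu_u, S_u)."""
    Kmm = rbf(Z, Z) + jitter * np.eye(len(Z))
    Ksm = rbf(Xstar, Z)                              # k_{*m}
    A = np.linalg.solve(Kmm, Ksm.T).T                # k_{*m} K_mm^{-1}, O(m^3) once
    mean = A @ mu_u
    var = (rbf(Xstar, Xstar).diagonal()
           - (A * Ksm).sum(axis=1)                   # k_{*m} K_mm^{-1} k_{m*}
           + np.einsum('ij,jk,ik->i', A, S_u, A))    # extra variance from q(u)
    return mean, var
```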
3.2 Gaussian Process Latent Variable Models
The Gaussian process latent variable model (GPLVM) can be seen as an unsupervised version of the
regression problem above. We aim to infer both the inputs, which are now latent, and the function
mapping at the same time. This can be viewed as a non-linear generalisation of PCA [Lawrence,
2005]. The model set-up is identical to the regression case, only with a prior over the latents X.
X_i \sim N(X_i; 0, I),    F(X_i) \sim GP(0, k(X, X)),    Y_i \sim N(F_i, \beta^{-1} I)
A Variational Bayes approximation for this model has been developed by Titsias and Lawrence
[2010] using similar techniques as for variational sparse GPs. In fact, the sparse GP can be seen as
a special case of the GPLVM where the inputs are given zero variance. The main task in deriving
approximate inference revolves around finding a variational lower bound to:
p(Y) = \int p(Y | F)\, p(F | X)\, p(X)\, d(F, X)
This leads to a Gaussian approximation to the posterior q(X) \approx p(X | Y), explained in detail in section 4 of the supplementary material. In the next section we derive a distributed inference scheme
for both models following a re-parametrisation of the derivations of Titsias [2009].
4 Distributed Inference
We now exploit the conditional independence of the data given the inducing points to derive a distributed inference scheme for both the sparse GP model and the GPLVM, which will allow us to
easily scale these models to large datasets. The key equations are given below, with an in-depth
explanation given in sections 3 and 4 of the supplementary material. We present a unifying
derivation of the inference procedures for both the regression case and the latent variable modelling
(LVM) case, by identifying that the explicit inputs in the regression case are identical to the latent
inputs in the LVM case when their mean is set to the observed inputs and used with variance 0 (i.e.
the latent inputs are fixed and not optimised).
We start with the general expression for the log marginal likelihood of the sparse GP regression
model, after introducing the inducing points,
\log p(Y | X) = \log \int p(Y | F)\, p(F | X, u)\, p(u)\, d(u, F).

(³ See Quiñonero-Candela and Rasmussen [2005] for a comprehensive review.)
The LVM derivation encapsulates this expression by multiplying with the prior over X and then
marginalising over X:
\log p(Y) = \log \int p(Y | F)\, p(F | X, u)\, p(u)\, p(X)\, d(u, F, X).
We then introduce a free-form variational distribution q(u) over the inducing points, and another
over X (where in the regression case, p(X)'s and q(X)'s variance is set to 0 and their mean set to X). Using Jensen's inequality we get the following lower bound:
\log p(Y | X) \geq \int p(F | X, u)\, q(u) \log \frac{ p(Y | F)\, p(u) }{ q(u) }\, d(u, F)
             = \int q(u) \Big[ \int p(F | X, u) \log p(Y | F)\, d(F) + \log \frac{ p(u) }{ q(u) } \Big] d(u)    (4.1)
All distributions that involve u also depend on Z, which we have omitted for brevity. Next we integrate p(Y) over X to be able to use eq. 4.1:
\log p(Y) = \log \int q(X) \frac{ p(Y | X)\, p(X) }{ q(X) }\, d(X) \geq \int q(X) \Big[ \log p(Y | X) + \log \frac{ p(X) }{ q(X) } \Big] d(X)    (4.2)
and obtain a bound which can be used for both models. Up to here the derivation is identical to the
two derivations given in [Titsias and Lawrence, 2010; Titsias, 2009]. However, now we exploit the
conditional independence given u to break the inference into small independent components.
4.1 Decoupling the Data Conditioned on the Inducing Points
The introduction of the inducing points decouples the function values from each other in the following sense. If we represent Y as the individual data points (Y_1; Y_2; ...; Y_n) with Y_i \in R^{1 \times d}, and similarly for F, we can write the lower bound as a sum over the data points, since Y_i are independent of F_j for j \neq i:

\int p(F | X, u) \log p(Y | F)\, d(F) = \int p(F | X, u) \sum_{i=1}^{n} \log p(Y_i | F_i)\, d(F)
                                      = \sum_{i=1}^{n} \int p(F_i | X_i, u) \log p(Y_i | F_i)\, d(F_i)
Simplifying this expression and integrating over X we get that each term is given by
-\frac{d}{2} \log(2\pi \beta^{-1}) - \frac{\beta}{2} \Big( Y_i Y_i^T - 2 \langle F_i \rangle_{p(F_i | X_i, u)\, q(X_i)} Y_i^T + \langle F_i F_i^T \rangle_{p(F_i | X_i, u)\, q(X_i)} \Big)

where we use triangular brackets \langle F \rangle_{p(F)} to denote the expectation of F with respect to the distribution p(F).
Now, using calculus of variations we can find the optimal q(u) analytically. Plugging the optimal distribution into eq. 4.1 and using further algebraic manipulations we obtain the following lower bound:

\log p(Y) \geq -\frac{nd}{2} \log 2\pi + \frac{nd}{2} \log \beta + \frac{d}{2} \log |K_{mm}| - \frac{d}{2} \log |K_{mm} + \beta D|
    - \frac{\beta}{2} A - \frac{\beta d}{2} B + \frac{\beta d}{2} \mathrm{Tr}(K_{mm}^{-1} D) + \frac{\beta^2}{2} \mathrm{Tr}\big( C^T (K_{mm} + \beta D)^{-1} C \big) - KL    (4.3)

where

A = \sum_{i=1}^{n} Y_i Y_i^T,    B = \sum_{i=1}^{n} \langle K_{ii} \rangle_{q(X_i)},    C = \sum_{i=1}^{n} \langle K_{mi} \rangle_{q(X_i)} Y_i,    D = \sum_{i=1}^{n} \langle K_{mi} K_{im} \rangle_{q(X_i)}

and

KL = \sum_{i=1}^{n} KL(q(X_i) \| p(X_i))

when the inputs are latent, or KL = 0 when they are observed.
Notice that the obtained unifying bound is identical to the ones derived in [Titsias, 2009] for the regression case and [Titsias and Lawrence, 2010] for the LVM case, since \langle K_{mi} \rangle_{q(X_i)} = K_{mi} for q(X_i) with variance 0 and mean X_i. However, the terms are re-parametrised as independent sums over the input points: sums that can be computed on different nodes in a network without intercommunication. An in-depth explanation of the different transitions is given in the supplementary material, sections 3 and 4.
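To make the decoupling concrete, here is a minimal NumPy sketch of the per-shard partial sums for the regression case (observed inputs, so \langle K_{mi} \rangle_{q(X_i)} = K_{mi}); the names are illustrative rather than the released package's API.

```python
import numpy as np

def partial_stats(Y, Knm, Kdiag):
    """Partial sums of the terms A, B, C, D in eq. (4.3) for one data shard.

    Y:     (n_shard, d) outputs for this shard
    Knm:   (n_shard, m) cross-covariances K_{nm}
    Kdiag: (n_shard,)   diagonal entries K_{ii}
    """
    A = float(np.sum(Y * Y))   # sum_i Y_i Y_i^T (a scalar, since Y_i is 1 x d)
    B = float(Kdiag.sum())     # sum_i <K_ii>
    C = Knm.T @ Y              # sum_i <K_mi> Y_i,   shape (m, d)
    D = Knm.T @ Knm            # sum_i <K_mi K_im>,  shape (m, m)
    return A, B, C, D

# Reduce step: shard statistics simply add, independent of shard order.
# A, B, C, D = [sum(parts) for parts in zip(*(partial_stats(*s) for s in shards))]
```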
4.2 Distributed Inference Algorithm
A parallel inference algorithm can be easily derived based on this factorisation. Using the MapReduce framework [Dean and Ghemawat, 2008] we can maintain different subsets of the inputs
and their corresponding outputs on each node in a parallel implementation and distribute the global
parameters (such as the kernel hyper-parameters and the inducing inputs) to the nodes, collecting
only the partial terms calculated on each node.
We denote by G the set of global parameters over which we need to perform optimisation. These include Z (the inducing inputs), \beta (the observation noise), and k (the set of kernel hyper-parameters). Additionally we denote by L_k the set of local parameters on each node k that need to be optimised. These include the mean and variance for each input point for the LVM model. First, we send to all end-point nodes the global parameters G for them to calculate the partial terms \langle K_{mi} \rangle_{q(X_i)} Y_i, \langle K_{mi} K_{im} \rangle_{q(X_i)}, \langle K_{ii} \rangle_{q(X_i)}, Y_i Y_i^T, and KL(q(X_i) \| p(X_i)). The calculation of these terms is explained in more detail in the supplementary material section 4. The end-point nodes return these partial terms to the central node (these are m \times m \times q matrices, i.e. constant space complexity for fixed m). The central node then sends the accumulated terms and partial derivatives back to the nodes and performs global optimisation over G. In the case of the GPLVM, the nodes then concurrently perform local optimisation on L_k, the embedding posterior parameters. In total, we have two
Map-Reduce steps between the central node and the end-point nodes to follow:
1. The central node distributes G,
2. Each end-point node k returns a partial sum of the terms A, B, C, D and KL based on Lk ,
3. The central node calculates F, \partial F (m \times m \times q matrices) and distributes them to the end-point nodes,
4. The central node optimises G; at the same time the end-point nodes optimise Lk .
When performing regression, the third step and the second part of the fourth step are not required.
The appendices of the supplementary material contain the derivations of all the partial derivatives
required for optimisation.
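A minimal sketch of the resulting Map-Reduce round with Python's multiprocessing, reusing a per-shard worker like partial_stats above (the wrapper is illustrative; the released package organizes the details differently):

```python
from multiprocessing import Pool

def map_reduce_round(shards, n_workers=4):
    """Map: each worker computes its shard's partial statistics.
    Reduce: a constant-time sum of (at most) m x m arrays per term."""
    with Pool(n_workers) as pool:
        partials = pool.starmap(partial_stats, shards)
    A = sum(p[0] for p in partials)
    B = sum(p[1] for p in partials)
    C = sum(p[2] for p in partials)
    D = sum(p[3] for p in partials)
    return A, B, C, D
```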
Optimisation of the global parameters can be done using any procedure that utilises the calculated partial derivatives (such as scaled conjugate gradient [Møller, 1993]), and the optimisation of the
local variables can be carried out by parallelising SCG or using local gradient descent. We now
explore the developed inference empirically and evaluate its properties on a range of tasks.
5 Experimental Evaluation
We now demonstrate that the proposed inference meets the criteria set out in the introduction. We
assess the inference on its scalability with increased computational power for a fixed problem size
(strong scaling) as well as with proportionally increasing data (weak scaling) and compare to existing inference. We further explore the distribution of the load over the different nodes, which is a
major inhibitor in large scale distributed systems.
In the following experiments we used a squared exponential ARD kernel over the latent space to
automatically determine the dimensionality of the space, as in Titsias and Lawrence [2010]. We
initialise our latent points using PCA and our inducing inputs using k-means with added noise. We
optimise using both L-BFGS and scaled conjugate gradient [Møller, 1993].
5.1 Scaling with Computation Power
[Figure 1: Running time per iteration for the 100K-point synthetic dataset, as a function of available cores, on log scale.]
[Figure 2: Time per iteration when scaling the computational resources proportionally to dataset size up to 50K points; standard inference (GPy) is shown for comparison.]

We investigate how much inference on a given dataset can be sped up using the proposed algorithm given more computational resources. We assess the improvement of the running time of the algorithm on a synthetic dataset of which large amounts of data could easily be generated. The dataset
was obtained by simulating a 1D latent space and transforming this non-linearly into 3D observations. 100K points were generated and the algorithm was run using an increasing number of cores
and a 2D latent space. We measured the total running time the algorithm spent in each iteration.
Figure 1 shows the improvement of run-time as a function of available cores. We obtain a relation
very close to the ideal t = c \cdot (\text{cores})^{-1}. When doubling the number of cores from 5 to 10 we achieve
a factor 1.93 decrease in computation time ? very close to ideal. In a higher range, a doubling from
15 to 30 cores improves the running time by a factor of 1.90, so there is very little sign of diminishing
returns. It is interesting to note that we observed a minuscule overhead of about 0.05 seconds per
iteration in the global steps. This is due to the m ? m matrix inversion carried out in each global
step, which amounts to an additional time complexity of O(m^3), constant for fixed m.
5.2 Scaling with Data and Comparison to Standard Inference
Using the same setup, we assessed the scaling of the running time as we increased both the dataset
size and computational resources equally. For a doubling of data, we doubled the number of available CPUs. In the ideal case of an algorithm with only distributable components, computation time
should be constant. Again, we measure the total running time of the algorithm per iteration. Figure
2 shows that we are able to effectively utilise the extra computational resources. Our total running
time takes 4.3% longer for a dataset scaled by 30 times.
Comparing the computation time to the standard inference scheme we see a significant improvement
in performance in terms of running time. We compared to the sequential but highly optimised GPy
implementation (see figure 2). The suggested inference significantly outperforms GPy in terms of
running time given more computational resources. Our parallel inference allows us to run sparse
GPs and the GPLVM on datasets which would simply take too long to run with standard inference.
5.3 Distribution of the Load
The development of parallel inference procedures is an active field of research for Bayesian nonparametric models [Lovell et al., 2012; Williamson et al., 2013]. However, it is important to study
[Figure 3: Load distribution for each iteration. The maximum time spent in a node is the rate-limiting step. Shown are the minimum, mean and maximum execution times of all nodes when using 5 (left) and 30 (right) cores.]
Table 1: RMSE of flight delay (measured in minutes) for regression over flight data with 7K-700K points, by predicting the mean, linear regression, ridge regression, random forest regression (RF), Stochastic Variational Inference (SVI) GP regression with 100 and 200 inducing points, and the proposed inference with 100 inducing points (Dist GP 100).

Dataset     | Mean  | Linear | Ridge | RF    | SVI 100 | SVI 200 | Dist GP 100
Flight 7K   | 36.62 | 34.97  | 35.05 | 34.78 | NA      | NA      | 33.56
Flight 70K  | 36.61 | 34.94  | 34.98 | 34.88 | NA      | NA      | 33.11
Flight 700K | 36.61 | 34.94  | 34.95 | 34.96 | 33.20   | 33.00   | 32.95
the characteristics of the parallel algorithm, which are sometimes overlooked [Gal and Ghahramani,
2014]. One of our stated requirements for a practical parallel inference algorithm is an approximately equal distribution of the load on the nodes. This is especially relevant in a Map-Reduce
framework, where the reduce step can only happen after all map computations have finished, so
the maximum execution time of one of the workers is the rate limiting step. Figure 3 shows the
minimum, maximum and average execution time of all nodes. For 30 cores, there is on average a
1.9% difference between the minimum and maximum run-time of the nodes, suggesting an even
distribution of the load.
6 GP Regression and Latent Variable Modelling on Real-World Big Data
Next we describe a series of experiments demonstrating the utility in scaling Gaussian processes to
big data. We show that GP performance improves with increasing amounts of data in regression
and latent variable modelling tasks. We further show that GPs perform better than common models
often used for big data.
We evaluate GP regression on the US flight dataset [Hensman et al., 2013] with up to 2 million
points, and compare the results that we got to an array of baselines demonstrating the utility of
using GPs for large scale regression. We then present density modelling results over the MNIST
dataset, performing imputation tests and digit classification based on model comparison [Titsias and
Lawrence, 2010]. As far as we are aware, this is the first GP experiment to run on the full MNIST
dataset.
6.1 Regression on US Flight Data
In the regression test we predict flight delays from various flight-record characteristics such as flight
date and time, flight distance, and others. The US 2008 flight dataset [Hensman et al., 2013] was
used with different subset sizes of data: 7K, 70K, and 700K. We selected the first 800K points from
the dataset and then split the data randomly into a test set and a training set, using 100K points
for testing. We then used the first 7K and 70K points from the large training set to construct the
smaller training sets, using the same test set for comparison. This follows the experiment setup of
[Hensman et al., 2013] and allows us to compare our results to the Stochastic Variational Inference
suggested for GP regression. In addition to that we constructed a 2M points dataset based on a
different split using 100K points for test. This test is not comparable to the other experiments due to
the non-stationary nature of the data, but it allows us to investigate the performance of the proposed
inference compared to the baselines on even larger datasets.
For baselines we predicted the mean of the data, used linear regression, ridge regression with parameter 0.5, and MSE random forest regression at depth 2 with 100 estimators. We report the best
results we got for each model for different parameter settings with available resources. We trained
our model with 100 inducing points for 500 iterations using LBFGS optimisation and compared the
Table 2: RMSE for flight data with 2M points, by predicting the mean, linear regression, ridge regression, random forest regression (RF), and the proposed inference with 100 inducing points (Dist GP).

Dataset   | Mean  | Linear | Ridge | RF    | Dist GP 100
Flight 2M | 38.92 | 37.65  | 37.65 | 37.33 | 35.31
[Figure 4: Log likelihood as a function of function evaluation for the 70K flight dataset using SCG and LBFGS optimisation.]
[Figure 5: Digit from MNIST with missing data (left) and reconstruction using GPLVM (right).]
root mean square error (RMSE) to the baselines as well as SVI with 100 and 200 inducing points
(table 1). The results for 2M points are given in table 2. Our inference with 2M data points on
a 64-core machine took ≈ 13.8 minutes per iteration. Even though the training of the baseline
models took several minutes, the use of GPs for big data allows us to take advantage of their desirable properties of uncertainty estimates, robustness to over-fitting, and principled ways for tuning
hyper-parameters.
One unexpected result was observed while doing inference with SCG. When increasing the number
of data points, the SCG optimiser converged to poor values. When using the final parameters of a
model trained on a small dataset to initialise a model to be trained on a larger dataset, performance
was as expected. We concluded that SCG was not converging to the correct optimum, whereas LBFGS performed better (figure 4). We suspect this happens because the modes in the optimisation
surface sharpen with more data. This is due to the increased weight of the likelihood terms.
6.2 Latent Variable Modelling on MNIST
We also run the GP latent variable model on the full MNIST dataset, which contains 60K examples
of 784 dimensions and is considered large in the Gaussian processes community. We trained one
model for each digit and used it as a density model, using the predictive probabilities to perform
classification. We classify a test point to the model with the highest posterior predictive probability.
We follow the calculation in [Titsias and Lawrence, 2010], by taking the ratio of the exponentiated log marginal likelihoods: p(y_* | Y) = p(y_*, Y) / p(Y) \approx e^{L_{y_*,Y} - L_Y}. Due to the randomness in the
initialisation of the inducing inputs and latent point variances, we performed 10 random restarts on
each model and chose the model with the largest marginal likelihood lower bound.
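In code, the classification rule is just an argmax over bound differences. A sketch, assuming each trained per-digit model exposes its training bound L_Y and the bound L_{y*,Y} with the test point appended (these method names are hypothetical):

```python
import numpy as np

def classify(y_star, models):
    # Score each digit model by log p(y*, Y) - log p(Y), approximated via its bounds.
    scores = [m.bound_with(y_star) - m.bound() for m in models]
    return int(np.argmax(scores))
```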
We observed that the models converged to a point where they performed similarly, occasionally
getting stuck in bad local optima. No pre-processing was performed on the training data as our
main aim here is to show the benefit of training GP models using larger amounts of data, rather than
proving state-of-the-art performance.
We trained the models on a subset of the data containing 10K points as well as the entire dataset
with all 60K points, using additional 10K points for testing. We observed an improvement of 3.03
percentage points in classification error, decreasing the error from 8.98% to 5.95%. Training on the
full MNIST dataset took 20 minutes for the longest running model, using 500 iterations of SCG. We
demonstrate the reconstruction abilties of the GPLVM in figure 5.
7 Conclusions
We have scaled sparse GP regression and latent variable modelling, presenting the first distributed
inference algorithm able to process datasets with millions of data points. An extensive set of experiments demonstrated the utility in scaling Gaussian processes to big data showing that GP performance improves with increasing amounts of data. We studied the properties of the suggested
inference, showing that the inference scales well with data and computational resources, while preserving a balanced distribution of the load among the nodes. Finally, we showed that GPs perform
better than many common models used for big data.
The algorithm was implemented in the Map-Reduce architecture and is available as an open-source
package, containing an extensively documented implementation of the derivations, with references
to the equations presented in the supplementary material for explanation.
References
Arnold, S. (1981). The theory of linear models and multivariate analysis. Wiley Series in Probability and Mathematical Statistics. Wiley.
Asuncion, A. U., Smyth, P., and Welling, M. (2008). Asynchronous distributed learning of topic models. In Advances in Neural Information Processing Systems, pages 81-88.
Brockwell, A. E. (2006). Parallel Markov chain Monte Carlo simulation by pre-fetching. Journal of Computational and Graphical Statistics, 15(1):246-261.
Dean, J. and Ghemawat, S. (2008). MapReduce: Simplified data processing on large clusters. Commun. ACM, 51(1):107-113.
Gal, Y. and Ghahramani, Z. (2014). Pitfalls in the use of parallel inference for the Dirichlet process. In Proceedings of the 31st International Conference on Machine Learning (ICML-14).
Hensman, J., Fusi, N., and Lawrence, N. D. (2013). Gaussian processes for big data. In Nicholson, A. and Smyth, P., editors, UAI. AUAI Press.
Hoffman, M. D., Blei, D. M., Wang, C., and Paisley, J. (2013). Stochastic variational inference. Journal of Machine Learning Research, 14:1303-1347.
Lawrence, N. (2005). Probabilistic non-linear principal component analysis with Gaussian process latent variable models. Journal of Machine Learning Research, 6:1783-1816.
Lovell, D., Adams, R. P., and Mansinghka, V. (2012). Parallel Markov chain Monte Carlo for Dirichlet process mixtures. In Workshop on Big Learning, NIPS.
Møller, M. F. (1993). A scaled conjugate gradient algorithm for fast supervised learning. Neural Networks, 6(4):525-533.
Quiñonero-Candela, J. and Rasmussen, C. E. (2005). A unifying view of sparse approximate Gaussian process regression. Journal of Machine Learning Research, 6:1939-1959.
Rasmussen, C. E. and Williams, C. K. I. (2006). Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning). The MIT Press.
Titsias, M. and Lawrence, N. (2010). Bayesian Gaussian process latent variable model. In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics (AISTATS).
Titsias, M. K. (2009). Variational learning of inducing variables in sparse Gaussian processes. In Proceedings of the 12th International Conference on Artificial Intelligence and Statistics (AISTATS).
Wilkinson, D. J. (2005). Parallel Bayesian computation. In Kontoghiorghes, E. J., editor, Handbook of Parallel Computing and Statistics, volume 184, pages 477-508. Chapman and Hall/CRC, Boca Raton, FL, USA.
Williamson, S., Dubey, A., and Xing, E. P. (2013). Parallel Markov chain Monte Carlo for nonparametric mixture models. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 98-106.
5,074 | 5,594 | Incremental Local Gaussian Regression
Franziska Meier¹ ([email protected])    Philipp Hennig² ([email protected])    Stefan Schaal¹,² ([email protected])
¹ University of Southern California, Los Angeles, CA 90089, USA
² Max Planck Institute for Intelligent Systems, Spemannstraße 38, Tübingen, Germany
Abstract
Locally weighted regression (LWR) was created as a nonparametric method that
can approximate a wide range of functions, is computationally efficient, and can
learn continually from very large amounts of incrementally collected data. As
an interesting feature, LWR can regress on non-stationary functions, a beneficial
property, for instance, in control problems. However, it does not provide a proper
generative model for function values, and existing algorithms have a variety of
manual tuning parameters that strongly influence bias, variance and learning speed
of the results. Gaussian (process) regression, on the other hand, does provide
a generative model with rather black-box automatic parameter tuning, but it has
higher computational cost, especially for big data sets and if a non-stationary model
is required. In this paper, we suggest a path from Gaussian (process) regression to
locally weighted regression, where we retain the best of both approaches. Using
a localizing function basis and approximate inference techniques, we build a
Gaussian (process) regression algorithm of increasingly local nature and similar
computational complexity to LWR. Empirical evaluations are performed on several
synthetic and real robot datasets of increasing complexity and (big) data scale, and
demonstrate that we consistently achieve on par or superior performance compared
to current state-of-the-art methods while retaining a principled approach to fast
incremental regression with minimal manual tuning parameters.
1 Introduction
Besides accuracy and sample efficiency, computational cost is a crucial design criterion for machine
learning algorithms in real-time settings, such as control problems. An example is the modeling of
robot dynamics: The sensors in a robot can produce thousands of data points per second, quickly
amassing a coverage of the task related workspace, but what really matters is that the learning
algorithm incorporates this data in real time, as a physical system can not necessarily stop and
wait in its control ? e.g., a biped would simply fall over. Thus, a learning method in such settings
should produce a good local model in fractions of a second, and be able to extend this model as the
robot explores new areas of a very high dimensional workspace that can often not be anticipated
by collecting ?representative? training data. Ideally, it should rapidly produce a good (local) model
from a large number N of data points by adjusting a small number M of parameters. In robotics,
local learning approaches such as locally weighted regression [1] have thus been favored over global
approaches such as Gaussian process regression [2] in the past.
Local regression models approximate the function in the neighborhood of a query point x_*. Each local model's region of validity is defined by a kernel. Learning the shape of that kernel [3] is the
key component of locally weighted learning. Schaal & Atkeson [4] introduced a non-memory-based
version of LWR to compress large amounts of data into a small number of parameters. Instead
of keeping data in memory and constructing local models around query points on demand, their
algorithm incrementally compresses data into M local models, where M grows automatically to
cover the experienced input space of the data. Each local model can have its own distance metric,
allowing local adaptation to local characteristics like curvature or noise. Furthermore, each local
model is trained independently, yielding a highly efficient parallelizable algorithm. Both its local
adaptiveness and its low computation cost (linear, O(NM)) have made LWR feasible and successful
in control learning. The downside is that LWR requires several tuning parameters, whose optimal
values can be highly data dependent. This is at least partly a result of the strongly localized training,
which does not allow models to ?coordinate?, or to benefit from other local models in their vicinity.
Gaussian process regression (GPR) [2], on the other hand, offers principled inference for hyperparameters, but at high computational cost. Recent progress in sparsifying Gaussian processes [5, 6]
has resulted in computationally efficient variants of GPR . Sparsification is achieved either through a
subset selection of support points [7, 8] or through sparsification of the spectrum of the GP [9, 10].
Online versions of such sparse GPs [11, 12, 13] have produced a viable alternative for real-time
model learning problems [14]. However, these sparse approaches typically learn one global distance
metric, making it difficult to fit the non-stationary data encountered in robotics. Moreover, restricting
the resources in a GP also restricts the function space that can be covered, such that with the need to
cover a growing workspace, the accuracy of learning with naturally diminish.
Here we develop a probabilistic alternative to LWR that, like GPR, has a global generative model, but
is locally adaptive and retains LWRs fast incremental training. We start in the batch setting, where
rethinking LWRs localization strategy results in a loss function coupling local models that can be
modeled within the Gaussian regression framework (Section 2). Modifying and approximating the
global model, we arrive at a localized batch learning procedure (Section 3), which we term Local
Gaussian Regression (LGR). Finally, we develop an incremental version of LGR that processes
streaming data (Section 4). Previous probabilistic formulations of local regression [15, 16, 17] are
bottom-up constructions?generative models for one local model at a time. Ours is a top-down
approach, approximating a global model to give a localized regression algorithm similar to LWR.
2 Background
Locally weighted regression (LWR) with a fixed set of M local models minimizes the loss function
L(w) = \sum_{n=1}^{N} \sum_{m=1}^{M} \eta_m(x_n) \big( y_n - \phi_m(x_n)^T w_m \big)^2 = \sum_{m=1}^{M} L(w_m).    (1)
The right hand side decomposes L(w) into independent losses for M models. We assume each
model has K local feature functions \phi_{mk}(x), so that the m-th model's prediction at x is

f_m(x) = \sum_{k=1}^{K} \phi_{mk}(x) w_{mk} = \phi_m(x)^T w_m    (2)
K = 2, \phi_{m1}(x) = 1, \phi_{m2}(x) = (x - c_m) gives a linear model around c_m. Higher polynomials can be used, too, but linear models have a favorable bias-variance trade-off [18]. The models are localized by a non-negative, symmetric and integrable weighting \eta_m(x), typically the radial basis function
\eta_m(x) = \exp\Big[ -\frac{(x - c_m)^2}{2\lambda_m^2} \Big]    or    \eta_m(x) = \exp\Big[ -\frac{1}{2} (x - c_m) \Lambda_m^{-1} (x - c_m)^T \Big]    (3)
for x \in R^D, with center c_m and length scale \lambda_m or positive definite metric \Lambda_m. \eta_m(x_n) localizes the effect of errors on the least-squares estimate of w_m: data points far away from c_m have little effect.
The prediction y_* at a test point x_* is a normalized weighted average of the local predictions y_{*,m}:

y_* = \frac{ \sum_{m=1}^{M} \eta_m(x_*) f_m(x_*) }{ \sum_{m=1}^{M} \eta_m(x_*) }    (4)
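For concreteness, a small NumPy sketch of the LWR prediction (4) with the Gaussian weighting (3) and linear local models (all names illustrative):

```python
import numpy as np

def lwr_predict(x, centers, lengthscales, W):
    """Normalized weighted average of M local linear models, eq. (4).

    x: (D,) query; centers: (M, D); lengthscales: (M,);
    W: (M, D+1) rows w_m for phi_m(x) = [1, x - c_m].
    """
    eta = np.exp(-0.5 * np.sum((x - centers) ** 2, axis=1) / lengthscales ** 2)
    phi = np.hstack([np.ones((len(centers), 1)), x - centers])
    f = np.sum(phi * W, axis=1)          # f_m(x) = phi_m(x)^T w_m
    return np.sum(eta * f) / np.sum(eta)
```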
LWR effectively trains M linear models on M separate datasets \tilde{y}_m(x_n) = \sqrt{\eta_m(x_n)}\, y_n. These
models differ from the one of Eq. (4), used at test time. This smoothes discontinuous transitions
between models, but also means that LWR can not be cast probabilistically as one generative model
for training and test data simultaneously. (This holds for any bottom-up construction that learns local
[Figure 1: Left: Bayesian linear regression with M feature functions \xi_m^n = \xi_m(x_n) = \eta_m^n \phi_m^n, where \eta_m can be a function localizing the effect of the m-th input function \phi_m towards the prediction of y_n. Right: Latent variables f_m^n placed between the features and y_n decouple the M regression parameters w_m and effectively create M local models connected only through the latent f_m^n.]
models independently and combines them as above, e.g., [15, 16]). The independence of local models is key to LWR's training: changing one local model does not affect the others. While this lowers cost, we believe it is also partially responsible for LWR's sensitivity to manually tuned parameters.
Here, we investigate a different strategy to achieve localization, aiming to retain the computational complexity of LWR, while adding a sense of globality. Instead of using \eta_m to localize the training error of data points, we localize a model's contribution \hat{y}_m = \phi(x)^T w_m towards the global fit of training point y, similar to how LWR operates during test time (Eq. 4). Thus, already during training, local models must collaborate to fit a data point \hat{y} = \sum_{m=1}^{M} \eta_m(x) \phi(x)^T w_m. Our loss function is
L(w) = \sum_{n=1}^{N} \Big( y_n - \sum_{m=1}^{M} \eta_m(x_n) \phi_m(x_n)^T w_m \Big)^2 = \sum_{n=1}^{N} \Big( y_n - \sum_{m=1}^{M} \xi_m(x_n)^T w_m \Big)^2,    (5)
combining the localizer \eta_m(x_n) and the m-th input function \phi_m(x_n) to form the feature \xi_m(x_n) = \eta_m(x_n) \phi_m(x_n). This form of localization couples all local models, as in classical radial basis function networks [19]. At test time, all local predictions form a joint prediction
y_* = \sum_{m=1}^{M} \hat{y}_m = \sum_{m=1}^{M} \xi_m(x_*)^T w_m    (6)
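Contrast this with LWR's normalized average: here the prediction is an unnormalized sum of localized contributions. A sketch, following the same illustrative conventions as the LWR example above:

```python
import numpy as np

def lgr_predict(x, centers, lengthscales, W):
    """Unnormalized sum of localized models, eq. (6): y = sum_m xi_m(x)^T w_m."""
    eta = np.exp(-0.5 * np.sum((x - centers) ** 2, axis=1) / lengthscales ** 2)
    phi = np.hstack([np.ones((len(centers), 1)), x - centers])
    return np.sum(eta * np.sum(phi * W, axis=1))
```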
This loss can be minimized through a regularized least-squares estimator for w (the concatenation of all w_m). We follow the probabilistic interpretation of least-squares estimation as inference on the weights w, from a Gaussian prior p(w) = N(w; \mu_0, \Sigma_0) and likelihood p(y | \xi, w) = N(y; \xi^T w, \beta_y^{-1} I). The probabilistic formulation has additional value as a generative model for all (training and test) data points y, which can be used to learn hyperparameters (Figure 1, left). The posterior is
p(w | y, \xi) = N(w; \mu_N, \Sigma_N)    with    (7)

\mu_N = (\Sigma_0^{-1} + \beta_y \xi \xi^T)^{-1} (\beta_y \xi y + \Sigma_0^{-1} \mu_0)    and    \Sigma_N = (\Sigma_0^{-1} + \beta_y \xi \xi^T)^{-1}    (8)
(Heteroscedastic data will be addressed below.) The prediction for f(x_*) with features \xi(x_*) \equiv \xi_* is also Gaussian, with p(f(x_*) | y, \xi) = N(f(x_*); \xi_*^T \mu_N, \xi_*^T \Sigma_N \xi_*). As is widely known, this framework can be extended nonparametrically by a limit that replaces all inner products \xi(x_i)^T \Sigma_0 \xi(x_j) with a Mercer (positive semi-definite) kernel k(x_i, x_j), corresponding to a Gaussian process prior. The direct connection between Gaussian regression and the elegant theory of Gaussian processes is a conceptual strength. The main downside, relative to LWR, is computational cost: calculating the posterior (7) requires solving the least-squares problem for all F parameters w jointly, by inverting the Gram matrix (\Sigma_0^{-1} + \beta_y \xi \xi^T). In general, this requires O(F^3) operations.
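For reference, the posterior (7)-(8) in a few lines of NumPy; the F x F inversion is the cubic bottleneck just mentioned (a generic sketch, not yet the localized algorithm):

```python
import numpy as np

def posterior(Xi, y, Sigma0_inv, mu0, beta_y):
    """mu_N, Sigma_N of eqs. (7)-(8); Xi is the F x N feature matrix [xi(x_1) ... xi(x_N)]."""
    G = Sigma0_inv + beta_y * Xi @ Xi.T     # F x F Gram matrix
    Sigma_N = np.linalg.inv(G)              # O(F^3): the bottleneck
    mu_N = Sigma_N @ (beta_y * Xi @ y + Sigma0_inv @ mu0)
    return mu_N, Sigma_N
```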
Below we propose approximations to lower the computational cost of this operation to a level comparable to LWR, while retaining the probabilistic interpretation, and the modeling robustness of the full
Gaussian model.
3 Local Parametric Gaussian Regression
The above shows that Gaussian regression with features \xi_m(x) = \eta_m(x) \phi_m(x) can be interpreted as global regression with M models, where \eta_m(x_n) localizes the contribution of the model \phi_m(x) towards the joint prediction of y_n. The choice of local parametric model \phi_m is essentially free. Local linear regression in a K-dimensional input space takes the form \phi_m(x_n) = x_n - c_m, and can be
viewed as the analog of locally weighted linear regression. Locally constant models \phi_m(x) = 1 correspond to Gaussian regression with RBF features. Generalizing to M local models with K parameters each, the feature function \xi_{mk}^n combines the k-th component of the local model \phi_{km}(x_n), localized by the m-th weighting function \eta_m(x_n):

\xi_{mk}^n := \xi_{mk}(x_n) = \eta_m(x_n) \phi_{km}(x_n).    (9)
Treating mk as indices of a vector in R^{MK}, Equation (7) gives localized linear Gaussian regression. Since it will become necessary to prune the model, we adopt the classic idea of automatic relevance determination [20, 21] using a factorizing prior

p(w | A) = \prod_{m=1}^{M} N(w_m; 0, A_m^{-1})    with    A_m = diag(\alpha_{m1}, ..., \alpha_{mK}).    (10)
Thus every component k of local model m has its own precision, and can be pruned out by setting \alpha_{mk} \to \infty. Section 3.1 assumes a fixed number M of local models with fixed centers c_m. The parameters are \theta = \{\beta_y, \{\alpha_{mk}\}, \{\lambda_{md}\}\}, where K is the dimension of the local model \phi(x) and D is the dimension of the input x. We propose an approximation for estimating \theta. Section 4 then describes an incremental algorithm allocating local models as needed, adapting M and c_m.
3.1 Learning in Local Gaussian Regression
Exact Gaussian regression with localized features still has cubic cost. However, because of the
localization, correlation between distant local models approximately vanishes, and inference is
approximately independent between local models. To use this near-independence for cheap local approximate inference, similar to LWR, we introduce a latent variable f_m^n for each local model m and datum x_n, as in probabilistic backfitting [22]. Intuitively, the f form approximate local targets, against which the local parameters fit (Figure 1, right). Moreover, as formalized below, each f_m^n has its own variance parameter, which re-introduces the ability to model heteroscedastic data.
This modified model motivates a factorizing variational bound (Section 3.1.1). Rendering the local
models computationally independent, it allows for fast approximate inference in the local Gaussian
model. Hyperparameters can be learned by approximate maximum likelihood (Section 3.1.2),
i.e. iterating between constructing a bound q(z | \theta) on the posterior over hidden variables z (defined below) given current parameter estimates \theta, and optimizing q with respect to \theta.
3.1.1 Variational Bound
The complete data likelihood of the modified model (Figure 1, right) is

p(y, f, w | \xi, \theta) = \prod_{n=1}^{N} N(y_n; \mathbf{1}^T f^n, \beta_y^{-1}) \prod_{n=1}^{N} \prod_{m=1}^{M} N(f_m^n; \xi_m^{nT} w_m, \beta_{fm}^{-1}) \prod_{m=1}^{M} N(w_m; 0, A_m^{-1})    (11)
Our Gaussian model involves the latent variables w and f, the precisions \beta = \{\beta_y, \beta_{f1}, ..., \beta_{fM}\} and the model parameters \lambda_m, c_m. We treat w and f as probabilistic variables and estimate \theta = \{\beta, \lambda, c\}. On w, f, we construct a variational bound q(w, f) imposing the factorization q(w, f) = q(w) q(f). The variational free energy is a lower bound on the log evidence for the observations y:
\log p(y | \theta) \geq \int q(w, f) \log \frac{ p(y, w, f | \theta) }{ q(w, f) }\, d(w, f).    (12)
This bound is maximized by the q(w, f) minimizing the relative entropy D_{KL}[q(w, f) \| p(w, f | y, \theta)], the distribution for which \log q(w) = E_f[\log p(y | f, w) p(w, f)] and \log q(f) = E_w[\log p(y | f, w) p(w, f)]. It is relatively easy to show (e.g. [23]) that these distributions are Gaussian in both w and f. The approximation on w is
\log q(w) = E_f\Big[ \sum_{n=1}^{N} \log p(f^n | \xi^n, w) + \log p(w | A) \Big] = \log \prod_{m=1}^{M} N(w_m; \mu_{w_m}, \Sigma_{w_m})

where

\Sigma_{w_m} = \Big( \beta_{fm} \sum_{n=1}^{N} \xi_m^n \xi_m^{nT} + A_m \Big)^{-1} \in R^{K \times K}    (13)

and

\mu_{w_m} = \beta_{fm} \Sigma_{w_m} \Big( \sum_{n=1}^{N} \xi_m^n E[f_m^n] \Big) \in R^{K \times 1}    (14)
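A minimal sketch of the per-model update (13)-(14); each model only touches its own K x K matrices, so the loop over m parallelizes trivially (names illustrative):

```python
import numpy as np

def update_w(xi_m, Ef_m, beta_fm, A_m):
    """Eqs. (13)-(14) for one local model.

    xi_m: (N, K) localized features xi_m^n; Ef_m: (N,) posterior means E[f_m^n];
    A_m:  (K, K) diagonal precision matrix of the ARD prior.
    """
    Sigma_wm = np.linalg.inv(beta_fm * xi_m.T @ xi_m + A_m)
    mu_wm = beta_fm * Sigma_wm @ (xi_m.T @ Ef_m)
    return mu_wm, Sigma_wm
```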
The posterior update equations for the weights are local: each of the local models updates its parameters independently. This comes at the cost of having to update the belief over the variables f_m^n, which achieves a coupling between the local models. The Gaussian variational bound on f is
\log q(f^n) = E_w[\log p(y_n | f^n, \beta_y) + \log p(f^n | \xi_m^n, w)] = \log N(f^n; \mu_{f^n}, \Sigma_f),    (15)

where

\Sigma_f = B^{-1} - B^{-1} \mathbf{1} (\beta_y^{-1} + \mathbf{1}^T B^{-1} \mathbf{1})^{-1} \mathbf{1}^T B^{-1} = B^{-1} - \frac{ B^{-1} \mathbf{1} \mathbf{1}^T B^{-1} }{ \beta_y^{-1} + \mathbf{1}^T B^{-1} \mathbf{1} }    (16)

\mu_{f_m^n} = E_w[w_m]^T \xi_m^n + \frac{ \beta_{fm}^{-1} }{ \beta_y^{-1} + \sum_{m'=1}^{M} \beta_{fm'}^{-1} } \Big( y_n - \sum_{m'=1}^{M} E_w[w_{m'}]^T \xi_{m'}^n \Big)    (17)
and B = diag(\beta_{f1}, ..., \beta_{fM}). \mu_{f_m^n} is the posterior mean of the m-th model's virtual target for data point n. These updates can be performed in O(MK). Note how the posterior over hidden variables f couples the local models, allowing for a form of message passing between local models.
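The coupling lives entirely in eq. (17): all models share the same residual. A sketch for one data point (names illustrative):

```python
import numpy as np

def update_f(y_n, preds, beta_f, beta_y):
    """Eq. (17): preds[m] = mu_wm^T xi_m^n, beta_f[m] = beta_fm. Returns mu_{f_m^n}."""
    resid = y_n - preds.sum()                                   # one shared residual
    gain = (1.0 / beta_f) / (1.0 / beta_y + np.sum(1.0 / beta_f))
    return preds + gain * resid                                 # O(M)
```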
3.1.2 Optimizing Hyperparameters
To set the parameters \theta = \{\beta_y, \{\beta_{fm}, \lambda_m\}_{m=1}^{M}, \{\alpha_{mk}\}\}, we maximize the expected complete log likelihood under the variational bound

E_{f,w}[\log p(y, f, w | \xi, \theta)] = E_{f,w}\Big\{ \sum_{n=1}^{N} \Big[ \log N\big( y_n; \sum_{m=1}^{M} f_m^n, \beta_y^{-1} \big) + \sum_{m=1}^{M} \log N(f_m^n; w_m^T \xi_m^n, \beta_{fm}^{-1}) \Big] + \sum_{m=1}^{M} \log N(w_m; 0, A_m^{-1}) \Big\}.    (18)
Setting the gradient of this expression to zero leads to the following update equations for the variances:

\beta_y^{-1} = \frac{1}{N} \sum_{n=1}^{N} (y_n - \mathbf{1}^T \mu_{f^n})^2 + \mathbf{1}^T \Sigma_f \mathbf{1}    (19)

\beta_{fm}^{-1} = \frac{1}{N} \sum_{n=1}^{N} \big[ (\mu_{f_m^n} - \mu_{w_m}^T \xi_m^n)^2 + \xi_m^{nT} \Sigma_{w_m} \xi_m^n \big] + (\Sigma_f)_{mm}    (20)

\alpha_{mk}^{-1} = \mu_{w_{mk}}^2 + (\Sigma_{w_m})_{kk}    (21)
The gradient with respect to the scales of each local model is completely localized:

\frac{ \partial E_{f,w}[\log p(y, f, w | \xi, \theta)] }{ \partial \lambda_{md} } = \frac{ \partial E_{f,w}\big[ \sum_{n=1}^{N} \log N(f_m^n; w_m^T \xi_m^n, \beta_{fm}^{-1}) \big] }{ \partial \lambda_{md} }    (22)
We use gradient ascent to optimize the length scales \lambda_{md}. All necessary equations are of low cost and, with the exception of the variance 1/\beta_y, all hyper-parameter updates are solved independently for each local model, similar to LWR. In contrast to LWR, however, these local updates do not cause a potentially catastrophic shrinking of the length scales: in LWR, both inputs and outputs are weighted by the localizing function, so reducing the length scale improves the fit. The localization in Equation (22) only affects the influence of regression model m, but the targets still need to be fit accordingly. Shrinking of local models only happens if it actually improves the fit against the unweighted targets f_m^n, such that no complex cross-validation procedures are required.
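The closed-form updates (19)-(21) in a sketch for one model m (names illustrative; eq. (19) is shared by all models):

```python
import numpy as np

def update_precisions(y, mu_f, Sigma_f, mu_fm, pred_m, quad_m, mu_wm, Sigma_wm, m):
    """Eqs. (19)-(21). mu_f: (N, M) virtual-target means; for model m,
    pred_m[n] = mu_wm^T xi_m^n and quad_m[n] = xi_m^{nT} Sigma_wm xi_m^n."""
    beta_y_inv = np.mean((y - mu_f.sum(axis=1)) ** 2) + np.sum(Sigma_f)       # (19)
    beta_fm_inv = np.mean((mu_fm - pred_m) ** 2 + quad_m) + Sigma_f[m, m]     # (20)
    alpha_mk_inv = mu_wm ** 2 + np.diag(Sigma_wm)                             # (21)
    return 1.0 / beta_y_inv, 1.0 / beta_fm_inv, 1.0 / alpha_mk_inv
```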
3.1.3 Prediction
Predictions at a test point x_* arise from marginalizing over both f and w, using

\int \Big[ \int N(y_*; \mathbf{1}^T f_*, \beta_y^{-1})\, N(f_*; W^T \xi(x_*), B^{-1})\, df_* \Big] N(w; \mu_w, \Sigma_w)\, dw = N\Big( y_*; \sum_m \mu_{w_m}^T \xi_m^*,\; \sigma^2(x_*) \Big)    (23)

where \sigma^2(x_*) = \beta_y^{-1} + \sum_{m=1}^{M} \beta_{fm}^{-1} + \sum_{m=1}^{M} \xi_m^{*T} \Sigma_{w_m} \xi_m^*, which is linear in M and K.
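Eq. (23) as a sketch, reusing the localized feature construction from the earlier prediction example (names illustrative):

```python
import numpy as np

def lgr_predict_moments(x, centers, lengthscales, mu_w, Sigma_w, beta_y, beta_f):
    """Predictive mean and variance of eq. (23); Sigma_w is a list of M K x K matrices."""
    eta = np.exp(-0.5 * np.sum((x - centers) ** 2, axis=1) / lengthscales ** 2)
    phi = np.hstack([np.ones((len(centers), 1)), x - centers])
    xi = eta[:, None] * phi                        # xi_m(x_*), shape (M, K)
    mean = np.sum(xi * mu_w)                       # sum_m mu_wm^T xi_m^*
    var = (1.0 / beta_y + np.sum(1.0 / beta_f)
           + sum(x_m @ S_m @ x_m for x_m, S_m in zip(xi, Sigma_w)))
    return mean, var
```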
4 Incremental Local Gaussian Regression
The above approximate posterior updates apply in the batch setting, assuming the number M and
locations c of local models are fixed. This section constructs an online algorithm for incrementally
incoming data, creating new local models when needed. There has been recent interest in variational
online algorithms for efficient learning on large data sets [24, 25]. Stochastic variational inference
[24] operates under the assumption that the data set has a fixed size N and optimizes the variational
lower bound for N data points via stochastic gradient descent. Here, we follow algorithms for
streaming datasets of unknown size. Probabilistic methods in this setting typically follow a Bayesian
filtering approach [26, 25, 27] in which the posterior after n - 1 data points becomes the prior for
the n-th incoming data point. Following this principle we extend the model presented in Section 3
and treat the precision variables \{\beta_{fm}, \alpha_{mk}\} as random variables, assuming Gamma priors p(\beta_{fm}) = G(\beta_{fm} | a_0^\beta, b_0^\beta) and p(\alpha_m) = \prod_{k=1}^{K} G(\alpha_{mk} | a_0^\alpha, b_0^\alpha). Thus, the factorized approximation on the posterior q(z) over all random variables z = \{f, w, \alpha, \beta_f\} is changed to
q(z) = q(f, w, \beta_f, \alpha) = q(f)\, q(w)\, q(\beta_f)\, q(\alpha)    (24)
A batch version of this was introduced in [28]. Given that, the recursive application of Bayes' theorem results in the approximate posterior

p(z | x_1, ..., x_n) \approx p(x_n | z)\, q(z | x_1, ..., x_{n-1})    (25)
after n data points. In essence, this formulates the (approximate) posterior updates in terms
of sufficient statistics, which are updated with each new incoming data point. The batch updates (listed in [28]) can be rewritten such that they depend on the following sufficient statistics: \sum_{n=1}^{N} \xi_m^n \xi_m^{nT}, \sum_{n=1}^{N} \xi_m^n \mu_{f_m^n} and \sum_{n=1}^{N} (\mu_{f_m^n})^2. Although the length-scales \lambda_m could be treated as random variables too, here we update them using the noisy (stochastic) gradients produced by each incoming data point. Due to space limitations, we only summarize these update equations in the algorithm below, where we have replaced the expectation operator by \langle \cdot \rangle.
Finally, we use an extension analogous to incremental training of the relevance vector machine [29] to iteratively add local models at new, greedily selected locations c_{M+1}. Starting with one local model, each iteration adds one local model in the variational step, and prunes out existing local models for which all components \alpha_{mk} \to \infty. This works well in practice, with the caveat that the model number M can grow fast initially, before the pruning becomes effective. Thus, we check for each selected location c_{M+1} whether any of the existing local models c_{1...M} produces a localizing weight \eta_m(c_{M+1}) \geq w_{gen}, where w_{gen} is a parameter between 0 and 1 that regulates how many parameters are added. Algorithm 1 gives an overview of the entire incremental algorithm.
Algorithm 1 Incremental LGR
1:  M = 0; C = {}; priors a_0^β, b_0^β, a_0^α, b_0^α; initial length-scale λ_0; forgetting rate γ; learning rate ν
2:  for all (x_n, y_n) do                              // for each data point
3:      if η_m(x_n) < w_gen ∀m = 1, ..., M then c_m ← x_n; C ← C ∪ {c_m}; M ← M + 1 end if
4:      for m = 1 to M do
5:          if η_m(x_n) < 0.01 then continue end if
6:          S_φφ_m ← γ S_φφ_m + η_m^n φ_m^n (φ_m^n)^T;   S_φf_m ← γ S_φf_m + η_m^n φ_m^n f_m^n;   N_m ← γ N_m + 1
7:          update q(f): recompute the posterior over the local function values f_m^n
8:          update q(w): Σ_wm = (⟨β_fm⟩ S_φφ_m + ⟨A_m⟩)^{-1};   μ_wm = ⟨β_fm⟩ Σ_wm S_φf_m
9:          update q(β_f): a_Nm^β = a_0^β + N_m;   b_Nm^β = b_0^β + S_f²_m − 2 μ_wm^T S_φf_m + tr[S_φφ_m (Σ_wm + μ_wm μ_wm^T)] + N_m σ_fm²
10:         update q(α): a_Nmk^α = a_0^α + 0.5;   b_Nmk^α = b_0^α + (μ_wm,k² + Σ_wm,kk)/2
11:         ⟨β_fm⟩ = a_Nm^β / b_Nm^β;   ⟨A_m⟩ = diag(a_Nmk^α / b_Nmk^α)
12:         λ_m ← λ_m + ν ∂/∂λ_m ⟨log N(f_m^n; μ_wm^T φ_m^n, ⟨β_fm⟩^{-1})⟩   // stochastic gradient step on the length-scales
13:         if ⟨α_mk⟩ > 10³ ∀k = 1, ..., K then prune local model m; M ← M − 1 end if
14:     end for
15: end for
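The allocation and pruning checks in lines 3 and 13 can be sketched as follows; the squared-exponential form of the localizer η_m and all names are assumptions made for illustration, not the paper's code.

```python
import numpy as np

def localizing_weight(x, c_m, lam_m):
    # squared-exponential localizer eta_m(x) centered at c_m; the kernel
    # form is an illustrative assumption
    d = x - c_m
    return float(np.exp(-0.5 * np.sum(d * d / lam_m)))

def maybe_add_model(x_n, centers, lam, w_gen=0.3):
    """Add a new local model at x_n if no existing model claims it (line 3)."""
    if all(localizing_weight(x_n, c, lam) < w_gen for c in centers):
        centers.append(np.array(x_n))
    return centers

def prune_models(centers, alpha_means, threshold=1e3):
    """Drop models whose ARD precisions have all diverged (line 13)."""
    keep = [i for i, a in enumerate(alpha_means) if not np.all(a > threshold)]
    return [centers[i] for i in keep], [alpha_means[i] for i in keep]
```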
Table 1: Datasets for inverse dynamics tasks: KUKA1, KUKA2 are different splits of the same data. Rightmost
column indicates the overlap in input space coverage between offline (IS_offline) and online training (IS_online) sets.

Dataset    | freq | Motion                     | N_offline train | N_online train | N_test | IS_offline ∩ IS_online
Sarcos [2] | 100  | rhythmic                   | 4449            | 44484          | -      | large overlap
KUKA1      | 500  | rhythmic at various speeds | 17560           | 180360         | -      | small overlap
KUKA2      | 500  | rhythmic at various speeds | 17560           | 180360         | -      | no overlap
KUKAsim    | 500  | rhythmic + discrete        | -               | 1984950        | 20050  | -
5 Experiments
We evaluate our LGR on inverse dynamics learning tasks, using data from two robotic platforms:
a SARCOS anthropomorphic arm and a KUKA lightweight arm. For both robots, learning the
inverse dynamics involves learning a map from the joint positions q (rad), velocities q̇ (rad/s) and
accelerations q̈ (rad/s²), to torques τ (Nm) for each of 7 joints (degrees of freedom). We compare to
two methods previously used for inverse dynamics learning: LWPR¹ – an extension of LWR for high
dimensional spaces [31] – and I-SSGPR² [13] – an incremental version of Sparse Spectrum GPR.
I-SSGPR differs from LGR and LWPR in that it is a global method and does not learn the distance
metric online. Instead, I-SSGPR needs offline training of hyperparameters before it can be used
online. We mimic the procedure used in [13]: an offline training set is used to learn an initial model
and hyperparameters, then an online training set is used to evaluate incremental learning. Where
indicated we use initial offline training for all three methods. I-SSGPR uses typical GPR optimization
procedures for offline training, and is thus only available in batch mode. For LGR, we use the batch
version for pre-training/hyperparameter learning. For all experiments we initialized the length scales
to λ = 0.3, and used w_gen = 0.3 for both LWPR and LGR.
We evaluate on four different data sets, listed in Table 1. These sets vary in scale, types of motion,
and how well the offline training set represents the data encountered during online learning. All
results were averaged over 5 randomly seeded runs; mean-squared error (MSE) and normalized
mean-squared error (nMSE) are reported on the online training dataset. The nMSE is the
mean-squared error normalized by the variance of the outputs.
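For reference, a minimal helper for the reported metric (an illustrative snippet, not the evaluation code used here):

```python
import numpy as np

def nmse(y_true, y_pred):
    # mean-squared error normalized by the variance of the targets
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)
```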
Table 2: Predictive performance on online training data of Sarcos after one sweep. I-SSGPR has been trained
with 200 (400) features; MSE for 400 features is reported in brackets.

      | I-SSGPR200(400)         | LWPR                       | LGR
Joint | MSE             | nMSE  | MSE    | nMSE  | # of LM   | MSE    | nMSE  | # of LM
J1    | 13.699 (10.832) | 0.033 | 19.180 | 0.046 | 461.4     | 11.434 | 0.027 | 321.4
J2    | 6.158 (4.788)   | 0.027 | 9.783  | 0.044 | 495.0     | 8.342  | 0.037 | 287.4
J3    | 1.803 (1.415)   | 0.018 | 3.595  | 0.036 | 464.6     | 2.237  | 0.023 | 298.0
J4    | 1.198 (0.857)   | 0.006 | 4.807  | 0.025 | 382.8     | 5.079  | 0.027 | 303.2
J5    | 0.034 (0.027)   | 0.036 | 0.071  | 0.075 | 431.2     | 0.031  | 0.033 | 344.2
J6    | 0.129 (0.096)   | 0.044 | 0.248  | 0.085 | 510.2     | 0.101  | 0.034 | 344.2
J7    | 0.093 (0.063)   | 0.014 | 0.231  | 0.034 | 378.8     | 0.170  | 0.025 | 348.8
Sarcos: Table 2 summarizes results on the popular Sarcos benchmark for inverse dynamics learning
tasks [2]. The traditional test set is used as the offline training data to pre-train all three models.
I-SSGPR is trained with 200 and 400 sparse spectrum features, indicated as I-SSGPR200(400), where
200 features is the optimal design choice according to [13]. We report the (normalized) mean-squared
error on the online training data after one sweep through it, i.e. after each data point has been used once.
All three methods perform well on this data, with I-SSGPR and LGR having a
slight edge over LWPR in terms of accuracy, and LGR uses fewer local models than LWPR. The
Sarcos data offline training set represents the data encountered during online training very well. Thus,
here online distance metric learning is not necessary to achieve good performance.
¹ We use the LWPR implementation found in the SL simulation software package [30].
² We use code from the learningMachine library in the RobotCub framework, from http://eris.liralab.it/iCub
Table 3: Predictive performance on online training data of KUKA1 and KUKA2 after one sweep. KUKA2
results are averages across joints. I-SSGPR was trained on 200 and 400 features (results for I-SSGPR400 shown
in brackets).

data  | Joint | I-SSGPR200(400)         | LWPR                       | LGR
      |       | MSE             | nMSE  | MSE   | nMSE  | # of LM    | MSE   | nMSE  | # of LM
KUKA1 | J1    | 7.021 (7.680)   | 0.233 | 2.362 | 0.078 | 3476.8     | 2.238 | 0.074 | 3188.6
      | J2    | 16.385 (18.492) | 0.265 | 2.359 | 0.038 | 3508.6     | 2.738 | 0.044 | 3363.8
      | J3    | 1.872 (1.824)   | 0.289 | 0.457 | 0.071 | 3477.2     | 0.528 | 0.082 | 3246.6
      | J4    | 3.124 (3.460)   | 0.256 | 0.503 | 0.041 | 3494.6     | 0.571 | 0.047 | 3333.6
      | J5    | 0.095 (0.143)   | 0.196 | 0.019 | 0.039 | 3512.4     | 0.017 | 0.036 | 3184.4
      | J6    | 0.142 (0.296)   | 0.139 | 0.043 | 0.042 | 3561.0     | 0.029 | 0.029 | 3372.4
      | J7    | 0.129 (0.198)   | 0.174 | 0.023 | 0.031 | 3625.6     | 0.033 | 0.044 | 3232.6
KUKA2 | -     | 9.740 (9.985)   | 0.507 | 1.064 | 0.056 | 3617.7     | 1.012 | 0.054 | 3290.2
Figure 2: Right: nMSE on the first joint of simulated KUKA arm. Left: average number of local models used.
KUKA1 and KUKA2: The two KUKA datasets consist of rhythmic motions at various speeds, and
represent a more realistic setting in robotics: while one can collect some data for offline training, it is
not feasible to cover the whole state-space. Offline data of KUKA1 has been chosen to give partial
coverage of the range of available speeds, while KUKA2 consists of motion at only one speed. In this
setting, both LWPR and LGR excel (Table 3). As they can learn local distance metrics on the fly, they
adapt to incoming data in previously unexplored input areas. Performance of I-SSGPR200 degrades
as the offline training data is less representative, while LGR and LWPR perform almost equally well
on KUKA1 and KUKA2. While there is little difference in accuracy between LGR and LWPR, LGR
consistently uses fewer local models and does not require careful manual meta-parameter tuning.
Since both LGR and LWPR use more local models on this data (compared to the Sarcos data) we
also tried increasing the feature space of I-SSGPR to 400 features. This did not improve I-SSGPR's
performance on the online data (see Table 3). Finally, it is noteworthy that LGR processes both of
these data sets at ≈ 500 Hz (C++ code, on a 3.4 GHz Intel Core i7), making it a realistic alternative for
real-time inverse dynamics learning tasks.
KUKAsim: Finally, we evaluate LGR's ability to learn from scratch on KUKAsim, a large data set
of 2 million simulated data points, collected using [30]. We randomly drew 1% of the points as a test
set, on which we evaluate convergence during online training. Figure 2 (left) shows convergence
and the number of local models used, averaged over 5 randomly seeded runs for joint 1. After the first
10⁵ data points, both LWPR and LGR achieve a normalized mean squared error below 0.07, and
eventually converge to an nMSE of ≈ 0.01. LGR converges slightly faster, while using fewer local
models (Figure 2, right).
6 Conclusion
We proposed a top-down approach to probabilistic localized regression. Local Gaussian Regression
decouples inference over M local models, resulting in efficient and principled updates for all
parameters, including local distance metrics. These localized updates can be used in batch as well as
incrementally, yielding computationally efficient learning in either case and applicability to big data
sets. Evaluated on a variety of simulated and real robotic inverse dynamics tasks, and compared to
I-SSGPR and LWPR, incremental LGR shows an ability to add resources (local models) and to update
its distance metrics online. This is essential to consistently achieve high accuracy. Compared to
LWPR, LGR matches or improves precision, while consistently using fewer resources (local models)
and having significantly fewer manual tuning parameters.
References
[1] Christopher G. Atkeson, Andrew W. Moore, and Stefan Schaal. Locally weighted learning for control.
Artificial Intelligence Review, (1-5):75–113, 1997.
[2] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. MIT
Press, 2006.
[3] Jianqing Fan and Irene Gijbels. Data-driven bandwidth selection in local polynomial fitting: variable
bandwidth and spatial adaptation. Journal of the Royal Statistical Society, pages 371–394, 1995.
[4] Stefan Schaal and Christopher G. Atkeson. Constructive incremental learning from only local information.
Neural Computation, 10(8):2047–2084, 1998.
[5] Joaquin Quiñonero-Candela and Carl Edward Rasmussen. A unifying view of sparse approximate Gaussian
process regression. JMLR, 6:1939–1959, 2005.
[6] Krzysztof Chalupka, Christopher K. I. Williams, and Iain Murray. A framework for evaluating approximation
methods for Gaussian process regression. JMLR, 14(1):333–350, 2013.
[7] Michalis K. Titsias. Variational learning of inducing variables in sparse Gaussian processes. In International
Conference on Artificial Intelligence and Statistics, pages 567–574, 2009.
[8] Edward Snelson and Zoubin Ghahramani. Sparse Gaussian processes using pseudo-inputs. Advances in
Neural Information Processing Systems, 18:1257, 2006.
[9] Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In NIPS, 2007.
[10] Miguel Lázaro-Gredilla, Joaquin Quiñonero-Candela, Carl Edward Rasmussen, and Aníbal R. Figueiras-Vidal. Sparse spectrum Gaussian process regression. JMLR, 11:1865–1881, 2010.
[11] Marco F. Huber. Recursive Gaussian process: On-line regression and learning. Pattern Recognition Letters,
45:85–91, 2014.
[12] Lehel Csató and Manfred Opper. Sparse on-line Gaussian processes. Neural Computation, 2002.
[13] Arjan Gijsberts and Giorgio Metta. Real-time model learning using incremental sparse spectrum Gaussian
process regression. Neural Networks, 41:59–69, 2013.
[14] James Hensman, Nicolo Fusi, and Neil D. Lawrence. Gaussian processes for big data. UAI, 2013.
[15] Jo-Anne Ting, Mrinal Kalakrishnan, Sethu Vijayakumar, and Stefan Schaal. Bayesian kernel shaping for
learning control. Advances in Neural Information Processing Systems, 6:7, 2008.
[16] Duy Nguyen-Tuong, Jan R. Peters, and Matthias Seeger. Local Gaussian process regression for real time
online model learning. In Advances in Neural Information Processing Systems, pages 1193–1200, 2008.
[17] Edward Snelson and Zoubin Ghahramani. Local and global sparse Gaussian process approximations. In
International Conference on Artificial Intelligence and Statistics, pages 524–531, 2007.
[18] Trevor Hastie and Clive Loader. Local regression: Automatic kernel carpentry. Statistical Science, 1993.
[19] J. Moody and C. Darken. Learning with localized receptive fields. In Proceedings of the 1988 Connectionist
Summer School, pages 133–143. San Mateo, CA, 1988.
[20] Radford M. Neal. Bayesian Learning for Neural Networks, volume 118. Springer, 1996.
[21] Michael E. Tipping. Sparse Bayesian learning and the relevance vector machine. The Journal of Machine
Learning Research, 1:211–244, 2001.
[22] Aaron D'Souza, Sethu Vijayakumar, and Stefan Schaal. The Bayesian backfitting relevance vector machine.
In ICML, 2004.
[23] Martin J. Wainwright and Michael I. Jordan. Graphical models, exponential families, and variational
inference. Foundations and Trends in Machine Learning, 2008.
[24] Matthew D. Hoffman, David M. Blei, Chong Wang, and John Paisley. Stochastic variational inference. J.
Mach. Learn. Res., 14(1):1303–1347, May 2013.
[25] Tamara Broderick, Nicholas Boyd, Andre Wibisono, Ashia C. Wilson, and Michael Jordan. Streaming
variational Bayes. In Advances in Neural Information Processing Systems, pages 1727–1735, 2013.
[26] Jan Luts, Tamara Broderick, and Matt Wand. Real-time semiparametric regression. arXiv, 2013.
[27] Antti Honkela and Harri Valpola. On-line variational Bayesian learning. In 4th International Symposium
on Independent Component Analysis and Blind Signal Separation, pages 803–808, 2003.
[28] Franziska Meier, Philipp Hennig, and Stefan Schaal. Efficient Bayesian local model learning for control.
In Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS), 2014.
[29] Joaquin Quiñonero-Candela and Ole Winther. Incremental Gaussian processes. In NIPS, 2002.
[30] Stefan Schaal. The SL simulation and real-time control software package. Technical report, 2009.
[31] Sethu Vijayakumar and Stefan Schaal. Locally weighted projection regression: Incremental real time
learning in high dimensional space. In ICML, pages 1079–1086, 2000.
Just-In-Time Learning for Fast and Flexible Inference
S. M. Ali Eslami, Daniel Tarlow, Pushmeet Kohli and John Winn
Microsoft Research
{alie,dtarlow,pkohli,jwinn}@microsoft.com
Abstract
Much of research in machine learning has centered around the search for inference
algorithms that are both general-purpose and efficient. The problem is extremely
challenging and general inference remains computationally expensive. We seek to
address this problem by observing that in most specific applications of a model,
we typically only need to perform a small subset of all possible inference computations. Motivated by this, we introduce just-in-time learning, a framework for
fast and flexible inference that learns to speed up inference at run-time. Through
a series of experiments, we show how this framework can allow us to combine the
flexibility of sampling with the efficiency of deterministic message-passing.
1 Introduction
We would like to live in a world where we can define a probabilistic model, press a button, and
get accurate inference results within a matter of seconds or minutes. Probabilistic programming
languages allow for the rapid definition of rich probabilistic models to this end, but they also raise a
crucial question: what algorithms can we use to efficiently perform inference for the largest possible
set of programs in the language? Much of recent research in machine learning has centered around
the search for inference algorithms that are both flexible and efficient.
The general inference problem is extremely challenging and remains computationally expensive.
Sampling based approaches (e.g. [5, 19]) can require many evaluations of the probabilistic program
to obtain accurate inference results. Message-passing based approaches (e.g. [12]) are typically
faster, but require the program to be expressed in terms of functions for which efficient messagepassing operators have been implemented. However, implementing a message-passing operator for
a new function either requires technical expertise, or is computationally expensive, or both.
In this paper we propose a solution to this problem that is automatic (it doesn't require the user
to build message passing operators) and efficient (it learns from past experience to make future
computations faster). The approach is motivated by the observation that general algorithms are
solving problems that are harder than they need to be: in most specific inference problems, we only
ever need to perform a small subset of all possible message-passing computations. For example,
in Expectation Propagation (EP) the range of input messages to a logistic factor, for which it needs
to compute output messages, is highly problem specific (see Fig. 1a). This observation raises the
central question of our work: can we automatically speed up the computations required for general
message-passing, at run-time, by learning about the statistics of the specific problems at hand?
Our proposed framework, which we call just-in-time learning (JIT learning), initially uses highly
general algorithms for inference. It does so by computing messages in a message-passing algorithm
using Monte Carlo sampling, freeing us from having to implement hand-crafted message update
operators. However, it also gradually learns to increase the speed of these computations by regressing from input to output messages (in a similar way to [7]) at run-time. JIT learning enables us
to combine the flexibility of sampling (by allowing arbitrary factors) and the speed of hand-crafted
message-passing operators (by using regressors), without having to do any pre-training. This constitutes our main contribution and we describe the details of our approach in Sec. 3.
Figure 1: (a) Parameters of Gaussian messages input to a logistic factor in logistic regression vary
significantly in four random UCI datasets (banknote_authentication, blood_transfusion, ionosphere,
fertility_diagnosis). (b) Figure for Sec. 4: A regression forest performs
1D regression (1,000 trees, 2 feature samples per node, maximum depth 4, regressor polynomial
degree 2). The red shaded area indicates one standard deviation of the predictions made by the
different trees in the forest, indicating its uncertainty. (c) Figure for Sec. 6: The yield factor relates
temperatures and yields recorded at farms to the optimal temperatures of their planted grain. JIT
learning enables us to incorporate arbitrary factors with ease, whilst maintaining inference speed.
Our implementation relies heavily on the use of regressors that are aware of their own uncertainty.
Their awareness about the limits of their knowledge allows them to decide when to trust their predictions and when to fall back to computationally intensive Monte Carlo sampling (similar to [8]
and [9]). We show that random regression forests [4] form a natural and efficient basis for this
class of "uncertainty aware" regressors and we describe how they can be modified for this purpose in
Sec. 4. To the best of our knowledge this is the first application of regression forests to the self-aware
learning setting and it constitutes our second contribution.
To demonstrate the efficacy of the JIT framework, we employ it for inference in a variety of graphical
models. Experimental results in Sec. 6 show that for general graphical models, our approach leads
to significant improvements in inference speed (often several orders of magnitude) over importance
sampling whilst maintaining overall accuracy, even boosting performance for models where hand
designed EP message-passing operators are available. Although we demonstrate JIT learning in the
context of expectation propagation, the underlying ideas are general and the framework can be used
for arbitrary inference problems.
2 Background
A wide class of probabilistic models can be represented using the framework of factor graphs. In this
context a factor graph represents the factorization of the joint distribution over a set of random variables x = {x_1, ..., x_V} via non-negative factors φ_1, ..., φ_F, given by p(x) = ∏_f φ_f(x_ne(φ_f)) / Z,
where x_ne(φ_f) is the set of variables that factor φ_f is defined over. We will focus on directed factors
of the form φ(x_out | x_in) which directly specify the conditional density over the output variables x_out
as a function of the inputs x_in, although our approach can be extended to factors of arbitrary form.
Belief propagation (or sum-product) is a message-passing algorithm for performing inference in factor graphs with discrete and real-valued variables, and it includes sub-routines that compute variable-to-factor and factor-to-variable messages. The bottleneck is mainly in computing the latter kind, as
they often involve intractable integrals. The message from factor φ to variable i is:

m_{φ→i}(x_i) = ∫_{x_{−i}} φ(x_out | x_in) ∏_{k∈ne(φ)\i} m_{k→φ}(x_k),   (1)

where x_{−i} denotes all random variables in x_ne(φ) except i. To further complicate matters, the
messages are often not even representable in a compact form. Expectation Propagation [11] extends
the applicability of message-passing algorithms by projecting messages back to a pre-determined,
tractable family distribution:

m_{φ→i}(x_i) = proj[ ∫_{x_{−i}} φ(x_out | x_in) ∏_{k∈ne(φ)} m_{k→φ}(x_k) ] / m_{i→φ}(x_i).   (2)
2
The proj[?] operator ensures that the message is a distribution of the correct type and only has an
effect if its argument is outside the approximating family used for the target message.
The integral in the numerator of Eq. 2 can be computed using Monte Carlo methods [2, 7], e.g. by
using the generally applicable technique of importance sampling. After multiplying and dividing by
a proposal distribution q(xin ) we get:
"Z
#
m??i (xi ) ? proj
v(xin , xout ) ? w(xin , xout ) /mi?? (xi ),
(3)
x?i
Q
where v(xin , xout ) = q(xin )?(xout |xin ) and w(xin , xout ) = k?ne(?) mk?? (xk )/q(xin ). Therefore
P
w(xs , xs )?(x )
s
P in s out s i /mi?? (xi ),
m??i (xi ) ' proj
(4)
s w(xin , xout )
where xsin and xsout are samples from v(xin , xout ). To sample from v, we first draw values xsin from q
then pass them through the forward-sampling procedure defined by ? to get a value for xsout .
Crucially, note that we require no knowledge of φ other than the ability to sample from φ(x_out | x_in).
This allows the model designer to incorporate arbitrary factors simply by providing an implementation of this forward sampler, which could be anything from a single line of deterministic code to
a large stochastic image renderer. However, drawing a single sample from φ can itself be a time-consuming operation, and the complexity of φ and the arity of x_in can both have a dramatic effect
on the number of samples required to compute messages accurately.
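The following sketch shows one way Eq. 4 could be realized for a black-box factor with a single scalar Gaussian input, using the incoming message itself as the proposal q so that the weights reduce to the message arriving on the output edge. It is a simplified illustration, not the paper's implementation; all names are hypothetical.

```python
import numpy as np

def is_message_to_input(forward, msg_out_logpdf, mu0, v0,
                        n_samples=20_000, rng=None):
    """Importance-sampled EP message from a black-box factor to its scalar
    Gaussian input, in the spirit of Eq. 4. Simplifying assumptions: the
    incoming Gaussian N(mu0, v0) doubles as the proposal q, so the weight
    w_s reduces to the message arriving on the output edge.

    forward:         vectorized forward sampler, x_out = forward(x_in)
    msg_out_logpdf:  log of the output-edge message, e.g. an (unnormalized)
                     Beta log-density for the logistic factor
    """
    rng = np.random.default_rng() if rng is None else rng
    x_in = rng.normal(mu0, np.sqrt(v0), size=n_samples)
    x_out = forward(x_in)
    log_w = msg_out_logpdf(x_out)
    w = np.exp(log_w - log_w.max())         # self-normalized weights
    w /= w.sum()
    mean = np.sum(w * x_in)                 # proj[.]: moment-match belief b_i
    var = np.sum(w * (x_in - mean) ** 2)
    prec = 1.0 / var - 1.0 / v0             # divide out the incoming message
    return prec, mean / var - mu0 / v0      # natural parameters of m_phi->i

# Example use for the logistic factor with a Beta(a, b) output message:
#   forward = lambda x: 1.0 / (1.0 + np.exp(-x))
#   msg_out_logpdf = lambda p: (a - 1) * np.log(p) + (b - 1) * np.log1p(-p)
```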
3 Just-in-time learning of message mappings
Monte Carlo methods (as defined above) are computationally expensive and can lead to slow inference. In this paper, we adopt an approach in which we learn a direct mapping, parameterized by θ,
from variable-to-factor messages {m_{k→φ}}_{k∈ne(φ)} to a factor-to-variable message m_{φ→i}:

m_{φ→i}(x_i) ≈ f({m_{k→φ}}_{k∈ne(φ)} | θ).   (5)

Using this direct mapping function f, factor-to-variable messages can be computed in a fraction
of the time required to perform full Monte Carlo estimation. Heess et al. [7] recently used neural
networks to learn this mapping offline for a broad range of input message combinations.
Motivated by the observation that the distribution of input messages that a factor sees is often problem specific (Fig. 1a), we consider learning the direct mapping just-in-time in the context of a specific model. For this we employ "uncertainty aware" regressors. Along with each prediction m, the
regressor produces a scalar measure u of its uncertainty about that prediction:

u_{φ→i} ≡ u({m_{k→φ}}_{k∈ne(φ)} | θ).   (6)
We adopt a framework similar to that of uncertainty sampling [8] (also [9]) and use these uncertainties at run-time to choose between the regressor's estimate and slower "oracle" computations:

m_{φ→i}(x_i) = m̂_{φ→i}(x_i) if u_{φ→i} < u_max, and m_{φ→i}(x_i) = m^oracle_{φ→i}(x_i) otherwise,   (7)

where m̂ denotes the regressor's prediction (Eq. 5),
and u_max is the maximum tolerated uncertainty for a prediction. In this paper we consider importance sampling or hand-implemented Infer.NET operators as oracles; however other methods such as
MCMC-based samplers could be used. The regressor is updated after every oracle consultation in
order to incorporate the newly acquired information.
An appropriate value for u_max can be found by collecting a small number of Monte Carlo messages for the target model offline: the uncertainty aware regressor is trained on some portion of the
collected messages and evaluated on the held out portion, producing predictions m_{φ→i} and confidences u_{φ→i} for every held out message. We then set u_max such that no held out prediction has an
error above a user-specified, problem-specific maximum tolerated value D_max.
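In pseudocode-like Python, the run-time gate of Eq. 7 could look as follows; `regressor` and `oracle` are hypothetical objects with the interfaces shown, not an existing API.

```python
def jit_message(regressor, oracle, msgs_in, u_max):
    """Uncertainty-gated message computation (Eq. 7). Minimal sketch."""
    m_pred, u = regressor.predict(msgs_in)      # Eqs. 5 and 6
    if u < u_max:
        return m_pred                           # trust the fast regressor
    m_oracle = oracle.compute(msgs_in)          # e.g. importance sampling
    regressor.update(msgs_in, m_oracle)         # learn from the consultation
    return m_oracle
```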
A natural choice for this error measure is the mean squared error of the parameters of the messages (e.g.
natural parameters for the exponential family), however this is sensitive to the particular parameterization chosen for the target distribution type. Instead, for each pair of predicted and oracle messages
from factor φ to variable i, we calculate the marginals b_i and b_i^oracle they each induce on the target
random variable, and compute the Kullback-Leibler (KL) divergence between the two:

D_KL^mar(m_{φ→i} ‖ m^oracle_{φ→i}) ≡ D_KL(b_i ‖ b_i^oracle),   (8)

where b_i = m_{φ→i} · m_{i→φ} and b_i^oracle = m^oracle_{φ→i} · m_{i→φ}, using the fact that beliefs can be computed
as the product of incoming and outgoing messages on any edge. We refer to the error measure D_KL^mar
as marginal KL and use it throughout the JIT framework, as it encourages the system to focus efforts
on the quantity that is ultimately of interest: the accuracy of the posterior marginals.
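For scalar Gaussian messages, the marginal KL of Eq. 8 can be computed in closed form; the sketch below assumes (mean, variance) parameterizations and is illustrative only.

```python
import numpy as np

def gaussian_kl(mu_p, v_p, mu_q, v_q):
    # KL( N(mu_p, v_p) || N(mu_q, v_q) ) for scalar Gaussians
    return 0.5 * (np.log(v_q / v_p) + (v_p + (mu_p - mu_q) ** 2) / v_q - 1.0)

def marginal_kl(msg_pred, msg_oracle, msg_var_to_factor):
    """Marginal KL of Eq. 8: form the beliefs by multiplying each
    factor-to-variable message with the variable-to-factor message,
    then compare the beliefs.
    """
    def product(m1, m2):  # Gaussian product in natural parameters
        (mu1, v1), (mu2, v2) = m1, m2
        prec = 1.0 / v1 + 1.0 / v2
        return (mu1 / v1 + mu2 / v2) / prec, 1.0 / prec
    b = product(msg_pred, msg_var_to_factor)
    b_oracle = product(msg_oracle, msg_var_to_factor)
    return gaussian_kl(b[0], b[1], b_oracle[0], b_oracle[1])
```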
4 Random decision forests for JIT learning
We wish to learn a mapping from a set of incoming messages {m_{k→φ}}_{k∈ne(φ)} to the outgoing
message m_{φ→i}. Note that separate regressors are trained for each outgoing message. We require
that the regressor: 1) trains and predicts efficiently, 2) can model arbitrarily complex mappings,
3) can adapt dynamically, and 4) produces uncertainty estimates. Here we describe how decision
forests can be modified to satisfy these requirements. For a review of decision forests see [4].
In EP, each incoming and outgoing message can be represented using only a few numbers, e.g. a
Gaussian message can be represented by its natural parameters. We refer to the outgoing message by
m_out and to the set of incoming messages by m_in. Each set of incoming messages m_in is represented
in two ways: the first, a concatenation of the parameters of its constituent messages, which we call the
"regression parameterization" and denote by r_in; and the second, a vector of features computed on the
set, which we call the "tree parameterization" and denote by t_in. This tree parametrization typically
contains values for a larger number of properties of each constituent message (e.g. parameters and
moments), and also properties of the set as a whole (e.g. φ evaluated at the mode of m_in). We
represent the outgoing message m_out by a vector of real valued numbers r_out. Note that d_in and d_out,
the number of elements in r_in and r_out respectively, need not be equal.
Weak learner model. Data arriving at a split node j is separated into the node's two children
according to a binary weak learner h(t_in, θ_j) ∈ {0, 1}, where θ_j parameterizes the split criterion.
We use weak learners of the generic oriented hyperplane type throughout (see [4] for details).
Prediction model. Each leaf node is associated with a subset of the labelled training data. During
testing, a previously unseen set of incoming messages traverses the tree until it reaches a leaf which
by construction is likely to contain similar training examples. We therefore use the statistics of the
data gathered in that leaf to predict outgoing messages with a multivariate polynomial regression
model of the form: r_out^train = W · φ^n(r_in^train) + ε, where φ^n(·) is the n-th degree polynomial basis
function, and ε is the d_out-dimensional vector of normal error terms. We use the learned d_out × d_in-dimensional matrix of coefficients W at test time to make predictions r_out for each r_in. To recap, t_in
is used to traverse message sets down to leaves, and r_in is used by the linear regressor to predict r_out.
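A minimal sketch of the leaf model fit follows; the elementwise polynomial features and the small ridge term are assumptions added for numerical stability, not details taken from the text.

```python
import numpy as np

def fit_leaf_regressor(R_in, R_out, degree=2, ridge=1e-6):
    """Fit the leaf's polynomial model r_out = W phi^n(r_in) + eps.

    R_in:  (N, d_in) regression parameterizations of incoming message sets
    R_out: (N, d_out) parameterizations of oracle outgoing messages
    """
    # phi^n: [r^0, r^1, ..., r^n] applied elementwise, then concatenated
    Phi = np.hstack([R_in ** d for d in range(degree + 1)])
    A = Phi.T @ Phi + ridge * np.eye(Phi.shape[1])
    W = np.linalg.solve(A, Phi.T @ R_out)        # (d_feat, d_out)
    return W

def predict_leaf(W, r_in, degree=2):
    phi = np.concatenate([r_in ** d for d in range(degree + 1)])
    return phi @ W
```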
Training objective function. The optimization of the split functions proceeds in a greedy manner. At each node j, we learn the
function that "best" splits S_j into the training sets corresponding to each child, S_j^L and S_j^R, i.e.
θ_j = argmax_{θ∈T_j} I(S_j, θ). This optimization is performed as a search over a discrete set T_j of a
random sample of possible parameter settings. The number of elements in T_j is typically kept small,
introducing random variation in the different trees in the forest. The objective function I is:

I(S_j, θ) = −E(S_j^L, W^L) − E(S_j^R, W^R),   (9)

where W^L and W^R are the parameters of the polynomial regression models corresponding to the
left and right training sets S_j^L and S_j^R, and the "fit residual" E is:

E(S, W) = (1/2) Σ_{m_in ∈ S} [ D_KL^mar(m^W_{m_in} ‖ m^oracle_{m_in}) + D_KL^mar(m^oracle_{m_in} ‖ m^W_{m_in}) ].   (10)

Here m_in is a set of incoming messages in S, m^oracle_{m_in} is the oracle outgoing message, m^W_{m_in} is the
estimate produced by the regression model specified by W, and D_KL^mar is the marginal KL. In simple
terms, this objective function splits the training data at each node in a way that the relationship
between the incoming and outgoing messages is well captured by the polynomial regression in each
child, as measured by the symmetrized marginal KL.
Ensemble model. A key aspect of forests is that their trees are randomly different from each other.
This is due to the relatively small number of weak learner candidates considered in the optimization
of the weak learners. During testing, each test point m_in simultaneously traverses all trees from
their roots until it reaches their leaves. Combining the predictions into a single forest prediction
may be done by averaging the parameters r_out^t of the predicted outgoing messages m_out^t of each
tree t, however again this would be sensitive to the parameterizations of the output distribution
types. Instead, we compute the moment average m_out of the distributions {m_out^t} by averaging
the first few moments of each predicted distribution across trees, and solving for the distribution
parameters which match the averaged moments. Grosse et al. [6] study the characteristics of the
moment average in detail, and have shown that it can be interpreted as minimizing an objective
function m_out = argmin_m U({m_out^t}, m), where U({m_out^t}, m) = Σ_t D_KL(m_out^t ‖ m).
Intuitively, the level of agreement between the predictions of the different trees can be used as a
proxy for the forest's uncertainty about that prediction (we choose not to use uncertainty within
leaves in order to maintain high prediction speed). If all the trees in the forest predict the same output
distribution, it means that their knowledge about the function f is similar despite the randomness in
their structures. We therefore set u_out ≡ U({m_out^t}, m_out). A similar notion is used for classification
forests, where the entropy of the aggregate output histogram is used as a proxy for the classification's
uncertainty [4]. We illustrate how this idea extends to simple regression forests in Fig. 1b, and in
Sec. 6 we also show empirically that this uncertainty measure works well in practice.
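For the scalar Gaussian case, moment averaging and the disagreement-based uncertainty u_out ≡ U({m_out^t}, m_out) can be sketched as follows (illustrative only):

```python
import numpy as np

def forest_combine_gaussian(means, variances):
    """Moment-average per-tree Gaussian predictions and score the forest's
    uncertainty as the disagreement U = sum_t KL(m_t || m_bar).
    """
    means = np.asarray(means)
    variances = np.asarray(variances)
    m1 = means.mean()                        # average first moment
    m2 = (variances + means ** 2).mean()     # average second moment
    var = m2 - m1 ** 2                       # moment-matched Gaussian
    kl = 0.5 * (np.log(var / variances)
                + (variances + (means - m1) ** 2) / var - 1.0)
    return (m1, var), kl.sum()               # (m_out, u_out)
```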
Online training. During learning, the trees periodically obtain new information in the form of
(m_in, m_out^oracle) pairs. The forest makes use of this by pushing m_in down a portion 0 < ρ ≤ 1 of the
trees to their leaf nodes and retraining the regressors at those leaves. Typically ρ = 1, however we
use values smaller than 1 when the trees are shallow (due to the mapping function being captured
well by the regressors at the leaves) and the forest's randomness is too low to produce reliable
uncertainty estimates. If the regressor's fit residual E at a leaf (Eq. 10) is above a user-specified
threshold value E_leaf^max, a split is triggered on that node. Note that no depth limit is ever specified.
5 Related work
There are a number of works in the literature that consider using regressors to speed up general
purpose inference algorithms. For example, the Inverse MCMC algorithm [20] uses discriminative
estimates of local conditional distributions to make proposals for a Metropolis-Hastings sampler,
however these predictors are not aware of their own uncertainty. Therefore the decision of when the
sampler can start to rely on them needs to be made manually and the user has to explicitly separate
offline training and test-time inference computations.
A related line of work is that of inference machines [14, 15, 17, 13]. Here, message-passing is
performed by a sequence of predictions, where the sequence itself is defined by the graphical model.
The predictors are jointly trained to ensure that the system produces correct labellings, however the
resulting inference procedure no longer corresponds to the original (or perhaps to any) graphical
model and therefore the method is unsuitable if we care about querying the model's latent variables.
The closest work to ours is [7], in which Heess et al. use neural networks to learn to pass EP
messages. However, their method requires the user to anticipate the set of messages that will ever be
sent by the factor ahead of time (itself a highly non-trivial task), and it has no notion of confidence in
its predictions and therefore it will silently fail when it sees unfamiliar input messages. In contrast
the JIT learner trains in the context of a specific model thereby allocating resources more efficiently,
and because it knows what it knows, it buys generality without having to do extensive pre-training.
6 Experiments
We first analyze the behaviour of JIT learning with diagnostic experiments on two factors: logistic
and compound gamma, which were also considered by [7]. We then demonstrate its application to
a challenging model of US corn yield data. The experiments were performed using the extensible
factor API in Infer.NET [12]. Unless stated otherwise, we use default Infer.NET settings (e.g. for
message schedules and other factor implementations). We set the number of trees in each forest to
64 and use quadratic regressors. Message parameterizations and graphical models, experiments on
a product factor and a quantitative comparison with [7] can be found in the supplementary material.
Figure 2: Uncertainty aware regression. All plots for the Gaussian forest. (a) Histogram of
marginal KLs of outgoing messages, which are typically very small. (b) The forest's most inaccurate
predictions (black: m_oracle, red: m, dashed black: b_oracle, purple: b). (c) The regressor's uncertainty
increases in tandem with marginal KL, i.e. it does not make confident but inaccurate predictions.

Figure 3: Logistic JIT learning. (a) The factor consults the oracle for only a fraction of messages,
(b) leading to significant savings in time, (c) whilst maintaining (or even decreasing) inference error.
Logistic. We have access to a hand-crafted EP implementation of this factor, allowing us to perform
quantitative analysis of the JIT framework's performance. The logistic deterministically computes
x_out = σ(x_in) = 1 / (1 + exp{−x_in}). Sensible choices for the incoming and outgoing message types
are Gaussian and Beta respectively. We study the logistic factor in the context of Bayesian logistic
regression models, where the relationship between an input vector x and a binary output observation
y is modeled as p(y = 1) = σ(w^T x). We place zero-mean, unit-variance Gaussian priors on the
entries of the regression parameters w, and run EP inference for 10 iterations.
We first demonstrate that the forests described in Sec. 4 are fast and accurate uncertainty aware
regressors by applying them to five synthetic logistic regression "problems" as follows: for each
problem, we sample a groundtruth w and training x's from N(0, 1) and then sample their corresponding y's. We use a Bayesian logistic regression model to infer w using the training datasets
and make predictions on the test datasets, whilst recording the messages that the factor receives and
sends during both kinds of inference. We split the observed message sets into training (70%) and
hold out (30%), and train and evaluate the random forests using the two datasets. In Fig. 2 we show
that the regressor is accurate and that it is uncertain whenever it makes predictions with higher error.
One useful diagnostic for choosing the various parameters of the forests (including the choice of
parametrization for r_in and t_in, as well as the leaf tolerance E_leaf^max) is the average utilization of its leaves
during held out prediction, i.e. what fraction of leaves are visited at test time. In this experiment the
forests obtain an average utilization of 1, meaning that every leaf contributes to the predictions of the
30% held out data, thereby indicating that the forests have learned a highly compact representation
of the underlying function. As described in Sec. 3, we also use the data gathered in this experiment
to find an appropriate value of u_max for use in just-in-time learning.
Next we evaluate the uncertainty aware regressor in the context of JIT learning. We present several
related regression problems to a JIT logistic factor, i.e. we keep w fixed and generate multiple new
{(x, y)} sets. This is a natural setting since often in practice we observe multiple datasets which
we believe to have been generated by the same underlying process. For each problem, using the JIT
factor we infer the regression weights and make predictions on test inputs, comparing wall-clock
time and accuracy with non-JIT implementations of the factor. We consider two kinds of oracles:
those that consult Infer.NET's message operators and those that use importance sampling (Eq. 4).
As a baseline, we also implemented a K-nearest neighbour (KNN) uncertainty aware regressor.
Here, messages are represented using their natural parameters, the uncertainty associated with each
prediction is the mean distance from the K closest points in this space, and the outgoing message's
parameters are found by taking the average of the parameters of the K closest output messages. We
use the same procedure as the one described in Sec. 3 to choose u_max for KNN.
We observe that the JIT factor does indeed learn about the inference problem over time. Fig. 3a
shows that the rate at which the factor consults the oracle decreases over the course of the experiment, reaching zero at times (i.e. for these problems the factor relies entirely on its predictions). On
average, the factor sends 97.7% of its messages without consulting the sampling oracle (a higher rate
of 99.2% when using Infer.NET as the oracle, due to lack of sampling noise), which leads to several
orders of magnitude savings in inference time (from around 8 minutes for sampling to around 800
ms for sampling + JIT), even increasing the speed of our Infer.NET implementation (from around
1300 ms to around 800 ms on average, Fig. 3b). Note that the forests are not merely memorising a
mapping from input to output messages, as evidenced by the difference in the consultation rates of
JIT and KNN, and that KNN speed deteriorates as the database grows. Surprisingly, we observe that
the JIT regressors in fact decrease the KL between the results produced by importance sampling and
Infer.NET, thereby increasing overall inference accuracy (Fig. 3c, this could be due to the fact that
the regressors at the leaves of the forests smooth out the noise of the sampled messages). Reducing
the number of importance samples to reach speed parity with JIT drastically degrades the accuracy
of the outgoing messages, increasing overall log KL error from around −11 to around −4.
Compound gamma. The second factor we investigate is the compound gamma factor. The compound gamma construction is used as a heavy-tailed prior over the precisions of Gaussian random variables: first r_2 is drawn from a gamma with rate r_1 and shape s_1, and the precision of the
Gaussian is set to be a draw from a gamma with rate r_2 and shape s_2. Here, we have access to
closed-form implementations of the two gamma factors in the construction, however we use the JIT
framework to collapse the two into a single factor for increased speed.
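The forward sampler is all the JIT framework needs from the collapsed factor; a minimal sketch (parameter names illustrative):

```python
import numpy as np

def compound_gamma_sample(s1, r1, s2, n=1, rng=None):
    """Forward sampler for the collapsed compound gamma factor: draw
    r2 ~ Gamma(shape=s1, rate=r1), then a precision ~ Gamma(shape=s2,
    rate=r2). Note numpy parameterizes gamma by scale = 1/rate.
    """
    rng = np.random.default_rng() if rng is None else rng
    r2 = rng.gamma(shape=s1, scale=1.0 / r1, size=n)
    return rng.gamma(shape=s2, scale=1.0 / r2)
```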
We study the compound gamma factor in the context of Gaussian fitting, where we sample a random number of points from multiple Gaussians with a wide range of precisions, and then infer the
precision of the generating Gaussians via Bayesian inference using a compound gamma prior. The
number of samples varies between 10 and 100 and the precision varies between 10⁻⁴ and 10⁴ in
each problem. The compound factor learns the message mapping after around 20 problems (see
Fig. 4a). Note that only a single message is sent by the factor in each episode, hence the abrupt drop
in inference time. This increase in performance comes at negligible loss of accuracy (Figs. 4b, 4c).
Figure 4: Compound gamma JIT learning. (a) JIT reduces inference time for sampling from ≈11
seconds to ≈1 ms. (b) JIT's posteriors agree highly with Infer.NET. Using fewer samples to match
JIT speed leads to degradation of accuracy. (c) Increased speed comes at negligible loss of accuracy.

Yield. We also consider a more realistic application to scientific modelling. This is an example
of a scenario for which our framework is particularly suited: scientists often need to build large
models with factors that directly take knowledge about certain components of the problem into
account. We use JIT learning to implement a factor that relates agriculture yields to temperature in
the context of an ecological climate model. Ecologists have strong empirical beliefs about the form
of the relationship between temperature and yield (that yield increases gradually up to some optimal
temperature but drops sharply after that point; see Fig. 5a and [16, 10]) and it is imperative that this
relationship is modelled faithfully. Deriving closed-form message operators is a non-trivial task, and
therefore the current state of the art is sampling-based (e.g. [3]) and highly computationally intensive.
Figure 5: A probabilistic model of corn yield. (a) Ecologists believe that yield increases gradually
up to some optimal temperature but drops sharply after that point [16, 10], and they wish to incorporate this knowledge into their models faithfully. (b) Average consultation rate per 1,000 messages
over the course of inference on the three datasets. Notice the decrease within and across datasets. (c)
Significant savings in inference time (Table 1) come at a small cost in inference accuracy.
We obtain yield data for 10% of US counties for 2011–2013 from the USDA National Agricultural
Statistics Service [1] and corresponding temperature data using [18]. We first demonstrate that it
is possible to perform inference in a large-scale ecological model of this kind with EP (graphical
model shown in Fig. 1c; derived in collaboration with computational ecologists; see supplementary
material for a description), using importance sampling to compute messages for the yield factor
for which we lack message-passing operators. In addition to the difficulty of computing messages
for the multidimensional yield factor, inference in the model is challenging as it includes multiple
Gaussian processes, separate t_opt and y_max variables for each location, many copies of the yield
factor, and its graph is loopy. Results of inference are shown in the supplementary material.
We find that with around 100,000 samples the message for the yield factor can be computed accurately, making these by far the slowest computations in the inference procedure. We apply JIT
learning by regressing these messages instead. The high arity of the factor makes the task particularly challenging as it increases the complexity of the mapping function being learned. Despite this,
we find that when performing inference on the 2011 data the factor can learn to accurately send up
to 54% of messages without having to consult the oracle, resulting in a speedup of 195%.
A common scenario is one in which we collect more data and
wish to repeat inference. We use the forests learned at the
end of inference on 2011 data to perform inference on 2012
data, and the forests learned at the end of this to do inference
on 2013 data, and compare to JIT learning from scratch for
each dataset. The factor transfers its knowledge across the
problems, increasing inference speedup from 195% to 289%
and 317% in the latter two experiments respectively (Table 1),
whilst maintaining overall inference accuracy (Fig. 5c).
Table 1: FR is the fraction of regressions with no oracle consultation.

Year | IS Time | JIT fresh FR | Speedup | JIT continued FR | Speedup
2011 | 451s    | 54%          | 195%    | -                | -
2012 | 449s    | 54%          | 192%    | 60%              | 288%
2013 | 451s    | 54%          | 191%    | 64%              | 318%

7 Discussion
The success of JIT learning depends heavily on the accuracy of the regressor and its knowledge
about its uncertainty. Random forests have been shown to be adequate, however alternatives may exist,
and a more sophisticated estimate of uncertainty (e.g. using Gaussian processes) is likely to lead to
an increased rate of learning. A second critical ingredient is an appropriate choice of u_max, which
currently requires a certain amount of manual tuning.
In this paper we showed that it is possible to speed up inference by combining EP, importance
sampling and JIT learning, however it will be of interest to study other inference settings where JIT
ideas might be applicable. Surprisingly, our experiments also showed that JIT learning can increase
the accuracy of sampling or accelerate hand-coded message operators, suggesting that it will be
fruitful to use JIT to remove bottlenecks even in existing, optimized inference code.
Acknowledgments
Thanks to Tom Minka and Alex Spengler for valuable discussions, and to Silvia Caldararu and Drew
Purves for introducing us to the corn yield datasets and models.
References
[1] National Agricultural Statistics Service, 2013. United States Department of Agriculture.
http://quickstats.nass.usda.gov/.
[2] Simon Barthelmé and Nicolas Chopin. ABC-EP: Expectation Propagation for Likelihood-free Bayesian Computation. In Proceedings of the 28th International Conference on Machine
Learning, pages 289–296, 2011.
[3] Silvia Caldararu, Vassily Lyutsarev, Christopher McEwan, and Drew Purves. Filzbach,
2013. Microsoft Research Cambridge. Website URL: http://research.microsoft.com/en-us/projects/filzbach/.
[4] Antonio Criminisi and Jamie Shotton. Decision Forests for Computer Vision and Medical
Image Analysis. Springer Publishing Company, Incorporated, 2013.
[5] Noah D. Goodman, Vikash K. Mansinghka, Daniel Roy, Keith Bonawitz, and Joshua B. Tenenbaum. Church: a language for generative models. In Uncertainty in Artificial Intelligence,
2008.
[6] Roger B. Grosse, Chris J. Maddison, and Ruslan Salakhutdinov. Annealing between distributions by averaging moments. In Advances in Neural Information Processing Systems 26, pages
2769–2777, 2013.
[7] Nicolas Heess, Daniel Tarlow, and John Winn. Learning to Pass Expectation Propagation
Messages. In Advances in Neural Information Processing Systems 26, pages 3219–3227, 2013.
[8] David D. Lewis and William A. Gale. A Sequential Algorithm for Training Text Classifiers.
In Special Interest Group on Information Retrieval, pages 3–12. Springer London, 1994.
[9] Lihong Li, Michael L. Littman, and Thomas J. Walsh. Knows what it knows: a framework for
self-aware learning. In Proceedings of the 25th International Conference on Machine Learning,
pages 568–575, New York, NY, USA, 2008. ACM.
[10] David B. Lobell, Marianne Bänziger, Cosmos Magorokosho, and Bindiganavile Vivek. Nonlinear heat effects on African maize as evidenced by historical yield trials. Nature Climate
Change, 1:42–45, 2011.
[11] Thomas Minka. Expectation Propagation for approximate Bayesian inference. PhD thesis,
Massachusetts Institute of Technology, 2001.
[12] Thomas Minka, John Winn, John Guiver, and David Knowles. Infer.NET 2.5, 2012. Microsoft
Research Cambridge. Website URL: http://research.microsoft.com/infernet.
[13] Daniel Munoz. Inference Machines: Parsing Scenes via Iterated Predictions. PhD thesis, The
Robotics Institute, Carnegie Mellon University, June 2013.
[14] Daniel Munoz, J. Andrew Bagnell, and Martial Hebert. Stacked Hierarchical Labeling. In
European Conference on Computer Vision, 2010.
[15] Stephane Ross, Daniel Munoz, Martial Hebert, and J. Andrew Bagnell. Learning Message-Passing Inference Machines for Structured Prediction. In Conference on Computer Vision and
Pattern Recognition, 2011.
[16] Wolfram Schlenker and Michael J. Roberts. Nonlinear temperature effects indicate severe
damages to U.S. crop yields under climate change. Proceedings of the National Academy of
Sciences, 106(37):15594–15598, 2009.
[17] Roman Shapovalov, Dmitry Vetrov, and Pushmeet Kohli. Spatial Inference Machines. In
Conference on Computer Vision and Pattern Recognition, pages 2985–2992, 2013.
[18] Matthew J. Smith, Paul I. Palmer, Drew W. Purves, Mark C. Vanderwel, Vassily Lyutsarev,
Ben Calderhead, Lucas N. Joppa, Christopher M. Bishop, and Stephen Emmott. Changing
how Earth System Modelling is done to provide more useful information for decision making,
science and society. Bulletin of the American Meteorological Society, 2014.
[19] Stan Development Team. Stan: A C++ Library for Probability and Sampling, 2014.
[20] Andreas Stuhlmüller, Jessica Taylor, and Noah D. Goodman. Learning Stochastic Inverses. In
Advances in Neural Information Processing Systems 27, 2013.
9
Distributed Bayesian Posterior Sampling via Moment Sharing
Minjie Xu (1,*), Balaji Lakshminarayanan (2), Yee Whye Teh (3), Jun Zhu (1), and Bo Zhang (1)
(1) State Key Lab of Intelligent Technology and Systems, Tsinghua National TNList Lab, Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
(2) Gatsby Unit, University College London, 17 Queen Square, London WC1N 3AR, UK
(3) Department of Statistics, University of Oxford, 1 South Parks Road, Oxford OX1 3TG, UK
(*) This work was started and completed when the author was visiting the University of Oxford.
Abstract
We propose a distributed Markov chain Monte Carlo (MCMC) inference algorithm for large scale Bayesian posterior simulation. We assume that the dataset is
partitioned and stored across nodes of a cluster. Our procedure involves an independent MCMC posterior sampler at each node based on its local partition of the
data. Moment statistics of the local posteriors are collected from each sampler
and propagated across the cluster using expectation propagation message passing
with low communication costs. The moment sharing scheme improves posterior
estimation quality by enforcing agreement among the samplers. We demonstrate
the speed and inference quality of our method with empirical studies on Bayesian
logistic regression and sparse linear regression with a spike-and-slab prior.
1 Introduction
As we enter the age of "big data", datasets are growing to ever increasing sizes and there is an urgent need for scalable machine learning algorithms. In Bayesian learning, the central object of interest is the posterior distribution, and a variety of variational and Markov chain Monte Carlo (MCMC) methods have been developed for "big data" settings. The main difficulty with both approaches is that each iteration of these algorithms requires an impractical O(N) computation for a dataset of size N ≫ 1. There are two general solutions: either to use stochastic approximation techniques
based on small mini-batches of data [15, 4, 5, 20, 1, 14], or to distribute data as well as computation
across a parallel computing architecture, e.g. using MapReduce [3, 13, 16].
In this paper we consider methods for distributing MCMC sampling across a computer cluster where
a dataset has been partitioned and locally stored on the nodes. Recent years have seen a flurry
of research on this topic, with many papers based around "embarrassingly parallel" architectures
[16, 12, 19, 9]. The basic thesis is that because communication costs are so high, it is better for each
node to run a separate MCMC sampler based on its data stored locally, completely independently
from others, and then for a final combination stage to transform the local samples into samples for
the desired global posterior distribution given the whole dataset. [16] directly combines the samples
by weighted averages under an implicit Gaussian assumption; [12] approximates each local posterior with either a Gaussian or a Gaussian kernel density estimate (KDE) so that the combination
follows an explicit product of densities; [19] takes the KDE idea one step further by representing it
as a Weierstrass transform; [9] uses the "median posterior" in an RKHS embedding space as a combination technique that is robust in the presence of outliers. The main drawback of embarrassingly
parallel MCMC sampling is that if the local posteriors differ significantly, perhaps due to noise or
non-random partitioning of the dataset across the cluster, or if they do not satisfy the Gaussian assumptions made in a number of methods, the final combination stage can result in highly inaccurate global
posterior representations.
To encourage the local MCMC samplers to be roughly aware of, and hence agree with, one another so as to improve inference quality, we develop a method to enforce sharing of a small number of
moment statistics of the local posteriors, e.g. mean and covariance, across the samplers. We frame
our method as expectation propagation (EP) [8], where the exponential family is defined by the
shared moments and each node represents a factor to be approximated, with moment statistics to
be estimated by the corresponding sampler. Messages passed among the nodes encode differences
between the estimated moments, so that at convergence all nodes agree on these moments. As EP
tends to converge rapidly, these messages will be passed around only infrequently (relative to the
number of MCMC iterations). It can also be performed in an asynchronous fashion, hence incurring
low communication costs. As opposed to previous embarrassingly parallel schemes which require a
final combination stage, upon convergence each sample drawn at any single node with our method
can be directly treated as a sample from an approximate global posterior distribution. Our method
differs from standard EP as each factor to be approximated consists of a product of many likelihood
terms (rather than just one as in standard EP), and therefore suffers less approximation bias.
2 A Distributed Bayesian Posterior Sampling Algorithm
In this section we develop our method for distributed Bayesian posterior sampling. We assume that
we have a dataset D = {x_n}_{n=1}^N with N ≫ 1 which has already been partitioned onto m compute nodes. Let D_i denote the data on node i, for i = 1, ..., m, such that D = ∪_{i=1}^m D_i, and let D_{−i} = D \ D_i. We assume that the data are i.i.d. given a parameter vector θ ∈ Θ with prior distribution p_0(θ). The object of interest is the posterior distribution p(θ|D) ∝ p_0(θ) ∏_{i=1}^m p(D_i|θ), where p(D_i|θ) is a product of likelihood terms, one for each data item in D_i.
Recall that our general approach is to have an independent sampler running on each node targeting a "local posterior", and our aim is for the samplers to agree on the overall shape of the posteriors by enforcing that they share the same moment statistics; e.g., using the first two moments they will share the same mean and covariance. Let S(θ) be the sufficient statistics function such that μ_f := E_f[S(θ)] are the moments of interest for some density f(θ). Consider an exponential family of distributions with sufficient statistics S(θ) and let q(θ; η) be a density in the family with natural parameter η. We will assume for simplicity that the prior belongs to the exponential family, p_0(θ) = q(θ; η_0) for some natural parameter η_0. Let p̃_i(θ|D_i) denote the local posterior at node i. Rather than using the same prior, e.g. p_0(θ), at all nodes, we use a local prior which enforces the moments to be similar between local posteriors. More precisely, we consider the following target density,
$$\tilde{p}_i(\theta \mid D_i) \propto q(\theta; \eta_{-i})\, p(D_i \mid \theta),$$
where the effective local prior q(θ; η_{−i}) is determined by the (natural) parameter η_{−i}. We set η_{−i} such that E_{p̃_i(θ|D_i)}[S(θ)] = μ for all i, for some shared moment vector μ.
As an aside, note that the overall posterior distribution can be recovered via
$$p(\theta \mid D) \propto p(D \mid \theta)\, p_0(\theta) = p_0(\theta) \prod_{i=1}^{m} p(D_i \mid \theta) \propto q(\theta; \eta_0) \prod_{i=1}^{m} \frac{\tilde{p}_i(\theta \mid D_i)}{q(\theta; \eta_{-i})}, \qquad (1)$$
for any choice of the parameters η_{−i}, with a number of previous works corresponding to different choices. [16, 12, 19] use η_{−i} = η_0/m, so that the local prior is p_0(θ)^{1/m} and (1) reduces to p(θ|D) ∝ ∏_{i=1}^m p̃_i(θ|D_i). [2] set η_{−i} = η_0 for their distributed asynchronous streaming variational algorithm, but reported that setting η_{−i} such that q(θ; η_{−i}) approximates the posterior distribution given previously processed data achieves better performance. We say that such a choice of η_{−i} is context aware, as it contains contextual information from other local posteriors. Finally, in the ideal situation with exact equality, q(θ; η_{−i}) = p(θ|D_{−i}), each local posterior is precisely the true posterior p(θ|D). In the following subsections, we will describe how EP can be used to iteratively approximate η_{−i} so that q(θ; η_{−i}) matches p(θ|D_{−i}) as closely as possible in the sense of minimising the KL divergence. Since our algorithm performs distributed sampling by sharing messages containing moment information, we refer to it as SMS (short for sampling via moment sharing).
2.1 Expectation Propagation
In many typical scenarios the posterior is intractable to compute because the product of likelihoods
and the prior is not analytically tractable and approximation schemes, e.g. variational methods or
MCMC, are required to compute the posterior. EP is a variational message-passing scheme [8],
where each likelihood term is approximated by an exponential family density chosen iteratively to
minimise the KL divergence to a "local posterior".
Suppose we wish to approximate (up to normalisation) the likelihood p(D_i|θ) (as a function of θ), using the exponential family density q(θ; λ_i) for some suitably chosen natural parameter λ_i, and that the other parameters {λ_j}_{j≠i} are known such that each q(θ; λ_j) approximates the corresponding p(D_j|θ) well. Then the posterior distribution is well approximated by a local posterior in which all but one likelihood factor is approximated,
$$p(\theta \mid D) \approx \tilde{p}_i(\theta \mid D) \propto p_0(\theta)\, p(D_i \mid \theta) \prod_{j \neq i} q(\theta; \lambda_j) = p(D_i \mid \theta)\, \hat{p}_i(\theta \mid D_{-i}),$$
where p̂_i(θ|D_{−i}) = q(θ; η_{−i}), with η_{−i} = η_0 + Σ_{j≠i} λ_j, is a context-aware prior which incorporates information from the other data subsets and is an approximation to the conditional distribution p(θ|D_{−i}). Replacing p(D_i|θ) by q(θ; λ_i), the corresponding local posterior p̃_i(θ|D) would be approximated by q(θ; η_{−i} + λ_i). A natural choice for the parameter λ_i is the one that minimises KL(p̃_i(θ|D) ‖ q(θ; η_{−i} + λ_i)). This optimisation can be solved by calculating the moment parameter μ_i = E_{p̃_i(θ|D)}[S(θ)], transforming the moment parameter μ_i into its natural parameter, say η_i, and then updating λ_i ← η_i − η_{−i}.
EP proceeds iteratively, by updating each parameter given the current values of the others using the
above procedure until convergence. At convergence (which is not guaranteed), we have that
$$\eta_i = \eta := \eta_0 + \sum_{j=1}^{m} \lambda_j,$$
for all i, where the λ_j are the converged parameter values. Hence the natural parameters, as well as the moments of the local posteriors, agree at all nodes. When the prior p_0(θ) does not belong to the exponential family, we may simply treat it as p(D_0|θ) where D_0 = ∅ and approximate it with q(θ; λ_0) just as we approximate the likelihoods.
2.2 Distributed Sampling via Moment Sharing
In typical EP applications, the moment parameter μ_i = E_{p̃_i(θ|D)}[S(θ)] can be computed either analytically or using numerical quadrature. In our setting this is not possible, as each likelihood factor p(D_i|θ) is now a product of many likelihoods with generally no tractable analytic form.
Instead we can use MCMC sampling to estimate these moments.
The simplest algorithm involves synchronous EP updates: At each EP iteration, each node i receives from a master node η_{−i} (initialised to η_0 at the first iteration) calculated from the previous iteration, runs MCMC to obtain T samples from which the moments μ_i are estimated, converts these into natural parameters η_i, and returns λ_i = η_i − η_{−i} to the master node. (Note that the MCMC samplers are run in parallel; hence the moments are computed in parallel, unlike in standard EP.) An asynchronous version can be implemented as well: At each node i, after the MCMC samples are obtained and the new λ_i parameter computed, the node communicates asynchronously with the master to send λ_i and receive the new value of η_{−i} based on the current λ_{j≠i} from the other nodes. Finally, a decentralised scheme is also possible: Each node i stores a local copy of all the parameters λ_j for j = 1, ..., m; after the MCMC phase, when a new value of λ_i is computed it is broadcast to all nodes, the local copy is updated based on the messages the node received in the meantime, and a new η_{−i} is computed.
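
To make the protocol concrete, here is a minimal Python sketch of the synchronous scheme. The helpers node.sample_moments (standing in for an MCMC run, e.g. NUTS, targeting q(θ; η_{−i}) p(D_i|θ)) and to_natural (the moment-to-natural-parameter map of the chosen exponential family) are hypothetical names, and the damping of λ_i anticipates Section 2.4.

```python
import numpy as np

def sms_synchronous(nodes, eta0, to_natural, num_iters, T, damping=0.2):
    """Synchronous SMS (sketch): at every EP iteration each node i samples
    its local posterior under the context-aware prior q(theta; eta_minus_i),
    estimates the moments mu_i, converts them to natural parameters eta_i,
    and the master updates lambda_i = eta_i - eta_minus_i (with damping).
    Natural parameters are represented as flat numpy arrays."""
    m = len(nodes)
    lam = [np.zeros_like(eta0) for _ in range(m)]
    for _ in range(num_iters):
        # In a real deployment this loop body runs in parallel on the workers.
        for i, node in enumerate(nodes):
            eta_minus_i = eta0 + sum(lam[j] for j in range(m) if j != i)
            mu_i = node.sample_moments(eta_minus_i, T)  # MCMC estimate of E[S(theta)]
            eta_i = to_natural(mu_i)
            lam[i] = damping * lam[i] + (1 - damping) * (eta_i - eta_minus_i)
    return eta0 + sum(lam)  # natural parameters of the global approximation
```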
2.3 Multivariate Gaussian Exponential Family
For concreteness, we will describe the required computations of the moments and natural parameters in the special case of a multivariate Gaussian exponential family. In addition to being analytically tractable and popular, the use of the multivariate Gaussian distribution can also be motivated by
Bayesian asymptotics for large datasets. In particular, for parameters in ℝ^d and under regularity conditions, if the size of the subset D_i is large, the Bernstein-von Mises theorem shows that the local posterior distribution is well approximated by a multivariate Gaussian; hence the EP approximation by an exponential family density will be very good. Given T samples {θ_it}_{t=1}^T collected at node i, unbiased estimates of the moments (mean μ_i and covariance Σ_i) are given by
$$\mu_i \approx \frac{1}{T}\sum_{t=1}^{T}\theta_{it}, \qquad \Sigma_i \approx \frac{1}{T-1}\sum_{t=1}^{T}(\theta_{it}-\mu_i)(\theta_{it}-\mu_i)^\top, \qquad (2)$$
while the natural parameters can be computed as λ_i = (Λ_i μ_i, Λ_i), where
$$\Lambda_i = \frac{T-d-2}{T-1}\,\Sigma_i^{-1} \qquad (3)$$
is an unbiased estimate of the precision matrix [11]. Note that simply using Σ_i^{−1} leads to a biased estimate, which impacts upon the convergence of EP. Alternative estimators exist [18], but we use the above unbiased estimate for simplicity. We stress that our approach is not limited to the multivariate Gaussian, but is applicable to any exponential family distribution. In Section 3.2, we consider the case where the local posterior is approximated using a spike and slab distribution.
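
The Gaussian moment and natural-parameter computations of Eqs. (2)-(3) are short enough to sketch directly; the use of the pseudo-inverse follows the remark in Section 2.4 below. This is an illustrative implementation, not the authors' code.

```python
import numpy as np

def gaussian_natural_params(samples):
    """Given a (T, d) array of MCMC samples, return the natural parameters
    (Lambda @ mu, Lambda), using the sample mean/covariance of Eq. (2) and
    the unbiased precision estimate of Eq. (3)."""
    T, d = samples.shape
    mu = samples.mean(axis=0)
    Sigma = np.atleast_2d(np.cov(samples, rowvar=False, ddof=1))   # Eq. (2)
    Lam = (T - d - 2) / (T - 1) * np.linalg.pinv(Sigma)            # Eq. (3)
    return Lam @ mu, Lam
```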
2.4 Additional Comments
The collected samples can be used to form estimates for the global posterior p(θ|D) in two ways. Firstly, the samples can be combined using a combination technique [16, 12, 19, 9]. According to (1), each sample θ needs to be assigned a weight of q(θ; η_{−i})^{−1} before being combined. Alternatively, once EP has converged, the MCMC samples target the local posterior p̃_i(θ|D), which is already a good approximation to the global posterior, so the samples can be used directly as approximate samples of the global posterior without need for a combination stage. This has the advantage of producing mT samples if each of the m nodes produces T samples, while other combination techniques only produce T samples. We have found the second approach to perform well in practice.
In our experiments we have found damping to be essential for the convergence of the algorithm.
This is because, in addition to the typical convergence issues with EP, our mean parameters are also estimated using MCMC, which introduces additional stochasticity that can affect the convergence. There is little theory in the literature on the convergence of EP [17], and even less can be shown with the additional stochasticity introduced by the MCMC estimation of moments. Nevertheless, we have found that damping the natural parameters λ_i works well in practice.
In the case of multivariate Gaussians, additional consideration has to be given due to the possibility
that the oscillatory behaviour in EP can lead to covariance matrices that are not positive definite. If
the precision component of a local prior η_{−i} is not positive definite, the resulting local posterior will become unnormalisable and the MCMC sampling will diverge. We adopt a number of mitigating strategies that we have found to be effective: Whenever a new value of the precision matrix Λ_{−i}^{new} is not positive definite, we damp it towards its previous value as α Λ_{−i}^{old} + (1 − α) Λ_{−i}^{new}, with an α large enough that the linear combination is positive definite; we collect a large enough number of samples at each MCMC phase to reduce the variability of the estimators; and we use the pseudo-inverse instead of the actual matrix inverse in (3).
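
A minimal sketch of the damping strategy just described; the increment schedule for α is an illustrative assumption, as the paper only requires α to be large enough for positive definiteness.

```python
import numpy as np

def damp_to_positive_definite(L_old, L_new, step=0.1):
    """Return alpha * L_old + (1 - alpha) * L_new with the smallest alpha on
    a coarse grid that makes the combination positive definite (checked via
    Cholesky). Falls back to the previous, known-good precision matrix."""
    for alpha in np.arange(0.0, 1.0 + 1e-9, step):
        L = alpha * L_old + (1 - alpha) * L_new
        try:
            np.linalg.cholesky(L)
            return L
        except np.linalg.LinAlgError:
            continue
    return L_old
```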
3 Experiments
3.1 Bayesian Logistic Regression
We tested our sampling via moment sharing method (SMS) on Bayesian logistic regression with
simulated data. Given a dataset D = {(x_n, y_n)}_{n=1}^N where x_n ∈ ℝ^d and y_n = ±1, the conditional model of each y_n given x_n is
$$p(y_n \mid x_n, w) = \sigma(y_n w^\top x_n), \qquad (4)$$
where σ(x) = 1/(1 + e^{−x}) is the standard logistic (sigmoid) function and the weight vector w ∈ ℝ^d is our parameter of interest. For simplicity we did not include an intercept in the model. We used a standard Gaussian prior p_0(w) = N(w; 0_d, I_d) on w, and the aim is to draw samples from the posterior p(w|D).
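
For illustration, the unnormalized log posterior and its gradient for this model, in the form a gradient-based sampler such as NUTS could consume (our own sketch, not the experiment code):

```python
import numpy as np

def log_posterior(w, X, y):
    """log p(w|D) up to a constant: sum_n log sigma(y_n w^T x_n) - ||w||^2 / 2.
    np.logaddexp(0, -z) computes log(1 + exp(-z)) stably."""
    z = y * (X @ w)
    return -np.logaddexp(0.0, -z).sum() - 0.5 * w @ w

def grad_log_posterior(w, X, y):
    """Gradient: X^T (y * sigma(-z)) - w, with z_n = y_n w^T x_n."""
    z = y * (X @ w)
    return X.T @ (y / (1.0 + np.exp(z))) - w
```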
Our simulated dataset consists of N = 4000 data points, each with d = 20 dimensional covariates, generated using i.i.d. draws x_n ∼ N(μ_x, Σ_x), where Σ_x = PP^⊤, P ∈ [0, 1]^{d×d}, and each entry of μ_x and P is in turn generated i.i.d. from U(0, 1). We generate the "true" parameter vector w* from the prior N(0_d, I_d), with which the labels are sampled i.i.d. according to the model, i.e. p(y_n) = σ(y_n w*^⊤ x_n). The dataset is visualized in Fig. 1.

[Figure 1: Plot of covariate dimensions 1 and 20 of the simulated dataset for Bayesian logistic regression; classes y_n = +1 and y_n = −1 are shown together with the prior p_0(w).]

As the base MCMC sampler used across all methods, we used the No-U-Turn Sampler (NUTS) [6]. NUTS was also used to generate 100000 samples from the full posterior p(θ|D) for ground truth. Across all methods, the sampler was initialised at 0_d, used the first 20d samples for burn-in, and then thinned every other sample.
We compared our method SMS against consensus Monte Carlo (SCOT) [16], the embarrassingly
parallel MCMC sampler (NEIS) of [12] and the Weierstrass sampler (WANG) [19].
SMS: We tested both the synchronous (SMS(s)) and asynchronous (SMS(a)) versions of our method, using a multivariate Gaussian exponential family. The damping factor used was 0.2. At each EP iteration, SMS produced both the EP approximated Gaussian posterior q(θ; η_0 + Σ_{i=1}^m λ_i), as well as a collection of mT local posterior samples θ. We use K to denote the total number of EP iterations. For SMS(a), every m worker-master updates are counted as one EP iteration.
SCOT: Since each node in our algorithm effectively draws KT samples in total, we allowed each
node in SCOT to draw KT samples as well, using a single NUTS run. To compare against our algorithm at iteration k ≤ K, we used the first kT samples for combination and formed the approximate
posterior samples.
NEIS: As in SCOT, we drew KT samples at each node, and compared against ours at iteration k
using the first kT samples. We tested both the parametric (NEIS(p)) and non-parametric (NEIS(n))
combination methods. To combine the kernel density estimates in NEIS(n), we adopted the recursive pairwise combination strategy as suggested in [12, 19]. We retained 10mT samples during
intermediate stages of pair reduction and finally drew mT samples from the final reduction.
WANG: We test the sequential sampler in the first arXiv version, which can handle moderately high dimensional data and does not require a good initial approximation. The bandwidths h_l (l = 1, ..., d) were initialized to 0.01 and updated with √m σ_l (if smaller) as suggested by the authors, where σ_l is the estimated posterior standard deviation of dimension l. As a Gibbs sampling algorithm, WANG requires a larger number of iterations for convergence but does not need as many samples within each iteration. Hence we ran it for K′ = 700 ≫ K iterations, each time generating KT/K′ samples on every node. We then collected the T combined samples generated after each subsequent K′/K iterations for comparative purposes, leaving all previous samples as burn-in.
All methods were implemented and tested in Matlab. Experiments were conducted on a cluster with
as many as 24 nodes (Matlab workers), arranged in 4 servers, each being a multi-core server with 2
Intel(R) Xeon(R) E5645 CPUs (6 cores, 12 threads). We used the parfor command (synchronous)
and the parallel.FevalFuture object (asynchronous) in Matlab for parallel computations.
The underlying message passing is managed by the Matlab Distributed Computing Server.
Convergence of Shared Moments. Figure 2 demonstrates the convergence of the local posterior
means as the EP iteration progresses, on a smaller dataset generated likewise with N = 1000, d = 5
and 25000 samples as ground truth. It clearly illustrates that our algorithm achieves very good
approximation accuracy by quickly enforcing agreement across nodes on local posterior moments
(mean in this case). When m = 50, we used a larger number of samples for stable convergence.
Approximation Accuracies. We compare the approximation accuracy of the different methods on
our main simulated data (N = 4000, d = 20). We use a moderately large number of nodes m = 32,
and T = 10000. In this case, each subset consists of 125 data points. We considered three different
error measures for the approximation accuracies. Denote the ground truth posterior samples, mean and covariance by Θ*, μ* and Σ*, and correspondingly Θ̂, μ̂ and Σ̂ for the approximate samples collected using a distributed MCMC method. The first error measure is the mean squared error (MSE)
[Figure 2 plots: panels (a) m = 4, T = 1000; (b) m = 10, T = 1000; (c) m = 50, T = 10000; x-axis k·T·N/m × 10^3.]
Figure 2: Convergence of local posterior means on a smaller Bayesian logistic regression dataset (N = 1000, d = 5). The x-axis indicates the number of likelihood evaluations, with vertical lines denoting EP iteration numbers. The y-axis indicates the estimated posterior means (dimensions indicated by different colours). We show the ground truth with solid horizontal lines, the EP estimated mean with asterisks, and the local sample estimated means as dots connected with dashed lines.
[Figure 3 plots (log-scale): (a) MSE of posterior mean; (b) approximate KL-divergence; (c) MSE of conditional probabilities (5); curves for SMS(s), SMS(a), SCOT, NEIS(p), NEIS(n) and WANG against k·T·m.]
Figure 3: Errors (log-scale) against the cumulative number of samples drawn on all nodes (kTm). We tested two random splits of the dataset (hence two curves for each algorithm). Each complete EP iteration is highlighted by a vertical grid line. Note that for SCOT, NEIS(p) and NEIS(n), apart from the usual combinations that occur after every Tm/2 local samples are drawn on all nodes, we also deliberately looked into combinations at a much earlier stage, at (0.01, 0.02, 0.1, 0.5)Tm.
[Figure 4 plots: (a) approximate KL-divergence against kT; (b) approximate KL-divergence against kTN/m; (c) approximate KL-divergence per number of nodes m ∈ {8, 16, 32, 48, 64}; curves for SMS(s), SMS(a) (and their sample/EP estimates), SCOT and XING(p).]
Figure 4: Cross comparison with different numbers of nodes. Note that the x-axes have different meanings. In figure (a), it is the cumulative number of samples drawn locally on each node (kT). For the asynchronous SMS(a), we only plot every m iterations so as to mimic the behaviour of SMS(s) for a more direct comparison. In figure (b), however, it is the cumulative number of likelihood evaluations on each node (kTN/m), which more accurately reflects computation time.
between μ̂ and μ*: Σ_{l=1}^d (μ̂_l − μ*_l)² / d; the second is the KL-divergence between N(μ*, Σ*) and N(μ̂, Σ̂); and finally the MSE of the conditional probabilities:
$$\frac{1}{N}\sum_{x\in D}\Bigg[\frac{1}{|\hat{\Theta}|}\sum_{w\in\hat{\Theta}}\sigma(w^\top x)-\frac{1}{|\Theta^*|}\sum_{w\in\Theta^*}\sigma(w^\top x)\Bigg]^2. \qquad (5)$$
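
The last two error measures are easy to compute from the samples; a sketch is below (the Gaussian KL in closed form, and the MSE of conditional probabilities of Eq. (5)). Shapes are assumptions: X is N×d and each sample matrix is (number of samples)×d.

```python
import numpy as np

def gaussian_kl(mu0, S0, mu1, S1):
    """KL( N(mu0, S0) || N(mu1, S1) ) in closed form."""
    d = len(mu0)
    S1inv = np.linalg.inv(S1)
    dm = mu1 - mu0
    _, logdet0 = np.linalg.slogdet(S0)
    _, logdet1 = np.linalg.slogdet(S1)
    return 0.5 * (np.trace(S1inv @ S0) + dm @ S1inv @ dm - d + logdet1 - logdet0)

def mse_conditional_prob(X, W_hat, W_star):
    """Eq. (5): mean over data points of the squared difference between the
    posterior-averaged predictive probabilities under the two sample sets."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    p_hat = sigmoid(X @ W_hat.T).mean(axis=1)
    p_star = sigmoid(X @ W_star.T).mean(axis=1)
    return np.mean((p_hat - p_star) ** 2)
```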
Figure 3 shows the results for two separate runs of each method. We observe that both versions of
SMS converge rapidly, requiring few rounds of EP iterations. Further, they produce approximation
errors significantly below other methods. The synchronous SMS(s) does appear more stable and
converges faster than its asynchronous counterpart but ultimately both versions achieve the same
level of accuracy. SCOT and NEIS(p) are very closely related, with their MSE for posterior mean
overlapping. Both methods achieve reasonable accuracy early on, but fail to further improve with
the increasing number of samples available for combination due to their assumptions of Gaussianity.
NEIS(p) directly estimates μ̂ and Σ̂ without drawing samples Θ̂ and is thus missing from Figures 3b and 3c. Note that NEIS(n) is missing from Figure 3b because the posterior covariance estimated
from the combined samples is singular due to an insufficient number of distinct samples. Unsurprisingly, WANG requires a large number of iterations for convergence and does not achieve very good
approximation accuracy. It is also possible that the poor performances of NEIS(n) and WANG are
due to the kernel density estimation used, as its quality deteriorates very quickly with dimensionality.
Influence of the Number of Nodes. We also investigated how the methods behave with varying
numbers of partitions, m = 8, 16, 32, 48, 64. We tested the methods on three runs with three different random partitions of the dataset. We only tested m = 64 on our SMS methods.
In Figure 4a, we see the rapid convergence in terms of the number of EP iterations, and the insensitivity to the number of nodes. Also, the final accuracies of the SMS methods are better for smaller
values of m. This is not surprising since the approximation error of EP tends to increase when the
posterior is factorised into more factors. In the extreme case of m = 1, the methods will be exact. Note however that with larger m, each node contains a smaller subset of data, and computation
time is hence reduced. In Figure 4b we plotted the same curves against the number kT N/m of
likelihood evaluations on each node, which better reflects the computation times. We thus see an
accuracy-computation time trade-off, where with larger m computation time is reduced but accuracies get worse. In Figure 4c, we looked into the accuracy of the obtained approximate posterior
in terms of KL-divergence. Note that apart from a direct read-off of the mean and covariance from
the parametric EP estimate (SMS(s,e) & SMS(a,e)), we might also compute the estimators from the
posterior samples (SMS(s,s) & SMS(a,s)), and we compared both of these in the figure. As noted
above, the accuracies are better when we have less nodes. However, the errors of our methods still
increase much slower than SCOT and NEIS(p), for both of which the KL-divergence increases to
around 20 and 85 when m = 32 and 48 and is thus cropped from the figure.
3.2
Bayesian sparse linear regression with spike and slab prior
In this experiment, we apply SMS to a Bayesian sparse linear regression model with a spike and
slab prior over the weights. Our goal is to illustrate that our framework is applicable in scenarios
where the local posterior distribution is approximated by other exponential family distributions and
not just the multivariate Gaussian.
Given a feature vector x_n ∈ ℝ^d, we model the label as y_n ∼ N(w^⊤ x_n, σ_y²), where w is the parameter of interest. We use a spike and slab prior [10] over w, which is equivalent to setting w = w̃ ⊙ s, where s is a d-dimensional binary vector (1 corresponds to an active feature and 0 to an inactive one) whose elements are drawn independently from a Bernoulli distribution whose natural (log odds) parameter is η_0, and w̃_l | s_l ∼ N(0, σ_w²) i.i.d. for each l = 1, ..., d. [7] proposed the following variational approximation of the posterior: q(w̃, s) = ∏_{l=1}^d q(w̃_l, s_l), where each factor q(w̃_l, s_l) = q(s_l) q(w̃_l | s_l) is a spike and slab distribution. (We refer the reader to [7] for details.)

The spike and slab distribution over θ = (w̃, s) is an exponential family distribution with sufficient statistics {s_l, s_l w̃_l, s_l w̃_l²}_{l=1}^d, which we use for the EP approximation. The moments required consist of the probability of s_l = 1, and the mean and variance of w̃_l conditioned on s_l = 1, for each l = 1, ..., d. The conditional distribution of w̃_l given s_l = 0 is simply the prior N(0, σ_w²). The natural parameters consist of the log odds of s_l = 1, as well as those for w̃_l conditioned on s_l = 1 (Section 2.3).
[Figure 5 plots: panels (a) m = 2; (b) m = 4; x-axis k·T·N/m × 10^3.]
Figure 5: Results on the Boston housing dataset for the Bayesian sparse linear regression model with spike and slab prior. The x-axis plots the number of data points per node (which equals the number of likelihood evaluations per sample) times the cumulative number of samples drawn per node, which is a surrogate for the computation times of the methods. The y-axis plots the ground truth (solid), the local sample estimated means (dashed) and the EP estimated mean (asterisks) at every iteration.
We used the paired Gibbs sampler described in [7] as the underlying MCMC sampler, and a damping factor of 0.5.
We experimented with the Boston housing dataset, which consists of N = 455 training data points
in d = 13 dimensions. We fixed the hyperparameters to the values described in [7], and generated
ground truth samples by running a long chain of the paired Gibbs sampler and computed the posterior mean of w using these ground truth samples. Figure 5 illustrates the output of SMS(s) for
m = 2 and m = 4 (the number of nodes was kept small to ensure that each node contains at least
100 observations). Each color denotes a different dimension; to avoid clutter, we report results only
for dimensions 2, 5, 6, 7, 9, 10, and 13. The dashed lines denote the local sample estimated means at
each of the nodes; the solid lines denote the ground truth and the asterisks denote the EP estimated
mean at each iteration. Initially, the local estimated means are quite different since each node has
a different random data subset. As EP progresses, these local estimated means as well as the EP
estimated mean converge rapidly to the ground truth values.
4 Conclusion
We proposed an approach to performing distributed Bayesian posterior sampling where each compute node contains a different subset of data. We show that through very low-cost and rapidly
converging EP messages passed among the nodes, the local MCMC samplers can be made to share
a number of moment statistics like the mean and covariance. This in turn allows the local MCMC
samplers to converge to the same part of the parameter space, and allows each local sample produced to be interpreted as an approximate global sample without the need for a combination stage.
Through empirical studies, we showed that our methods are more accurate than previous methods
and also exhibits better scalability to the number of nodes. Interesting avenues of research include
using our SMS methods to adjust hyperparameters using either empirical or fully Bayesian learning,
implementation and evaluation of the decentralised version of SMS, and theoretical analysis of the
behaviour of EP under the stochastic perturbations caused by the MCMC estimation of moments.
Acknowledgements
We thank Willie Neiswanger for sharing his implementation of NEIS(n), and Michalis K Titsias for
sharing the code used in [7]. MX, JZ and BZ gratefully acknowledge funding from the National Basic Research Program of China (No. 2013CB329403) and National NSF of China (Nos. 61322308,
61332007). BL gratefully acknowledges generous funding from the Gatsby charitable foundation.
YWT gratefully acknowledges EPSRC for research funding through grant EP/K009362/1.
References
[1] Sungjin Ahn, Anoop Korattikara, and Max Welling. Bayesian posterior sampling via stochastic gradient Fisher scoring. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), 2012.
[2] Tamara Broderick, Nicholas Boyd, Andre Wibisono, Ashia C Wilson, and Michael Jordan. Streaming variational Bayes. In Advances in Neural Information Processing Systems, pages 1727–1735, 2013.
[3] Jeffrey Dean and Sanjay Ghemawat. MapReduce: simplified data processing on large clusters. Communications of the ACM, 51(1):107–113, 2008.
[4] Matthew D Hoffman, Francis R Bach, and David M Blei. Online learning for latent Dirichlet allocation. In Advances in Neural Information Processing Systems, pages 856–864, 2010.
[5] Matthew D Hoffman, David M Blei, Chong Wang, and John Paisley. Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303–1347, 2013.
[6] Matthew D Hoffman and Andrew Gelman. The No-U-Turn sampler: Adaptively setting path lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research, 15:1593–1623, 2014.
[7] Miguel Lázaro-Gredilla and Michalis K Titsias. Spike and slab variational inference for multitask and multiple kernel learning. In Advances in Neural Information Processing Systems, pages 2339–2347, 2011.
[8] Thomas P Minka. A family of algorithms for approximate Bayesian inference. PhD thesis, Massachusetts Institute of Technology, 2001.
[9] Stanislav Minsker, Sanvesh Srivastava, Lizhen Lin, and David Dunson. Scalable and robust Bayesian inference via the median posterior. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1656–1664, 2014.
[10] Toby J Mitchell and John J Beauchamp. Bayesian variable selection in linear regression. Journal of the American Statistical Association, 83(404):1023–1032, 1988.
[11] Robb J Muirhead. Aspects of multivariate statistical theory, volume 197. John Wiley & Sons, 2009.
[12] Willie Neiswanger, Chong Wang, and Eric Xing. Asymptotically exact, embarrassingly parallel MCMC. In Proceedings of the 30th International Conference on Uncertainty in Artificial Intelligence (UAI-14), pages 623–632, 2014.
[13] David Newman, Arthur Asuncion, Padhraic Smyth, and Max Welling. Distributed algorithms for topic models. The Journal of Machine Learning Research, 10:1801–1828, 2009.
[14] Sam Patterson and Yee Whye Teh. Stochastic gradient Riemannian Langevin dynamics on the probability simplex. In Advances in Neural Information Processing Systems, pages 3102–3110, 2013.
[15] Herbert Robbins and Sutton Monro. A stochastic approximation method. Annals of Mathematical Statistics, 22(3):400–407, 1951.
[16] Steven L Scott, Alexander W Blocker, Fernando V Bonassi, Hugh A Chipman, Edward I George, and Robert E McCulloch. Bayes and big data: The consensus Monte Carlo algorithm. EFaBBayes 250 conference, 16, 2013.
[17] Matthias W Seeger. Bayesian inference and optimal design for the sparse linear model. The Journal of Machine Learning Research, 9:759–813, 2008.
[18] Hisayuki Tsukuma and Yoshihiko Konno. On improved estimation of normal precision matrix and discriminant coefficients. Journal of Multivariate Analysis, 97(7):1477–1500, 2006.
[19] Xiangyu Wang and David B. Dunson. Parallel MCMC via Weierstrass sampler. arXiv preprint arXiv:1312.4605, 2013.
[20] Max Welling and Yee Whye Teh. Bayesian learning via stochastic gradient Langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 681–688, 2011.
Communication Efficient Distributed Machine Learning with the Parameter Server
Mu Li (1,2), David G. Andersen (1), Alexander Smola (1,3), and Kai Yu (2)
(1) Carnegie Mellon University  (2) Baidu  (3) Google
{muli, dga}@cs.cmu.edu, [email protected], [email protected]
Abstract
This paper describes a third-generation parameter server framework for distributed
machine learning. This framework offers two relaxations to balance system performance and algorithm efficiency. We propose a new algorithm that takes advantage of this framework to solve non-convex non-smooth problems with convergence guarantees. We present an in-depth analysis of two large scale machine
learning problems ranging from ℓ1-regularized logistic regression on CPUs to reconstruction ICA on GPUs, using 636TB of real data with hundreds of billions of
samples and dimensions. We demonstrate using these examples that the parameter server framework is an effective and straightforward way to scale machine
learning to larger problems and systems than have been previously achieved.
1 Introduction
In realistic industrial machine learning applications the datasets range from 1TB to 1PB. For example, a social network with 100 million users and 1KB data per user has 100TB. Problems in
online advertising and user-generated content analysis have complexities of a similar order of magnitude [12]. Such huge quantities of data allow learning powerful and complex models with 10^9 to 10^12 parameters [9], at which scale a single machine is often not powerful enough to complete these
tasks in time.
Distributed optimization is becoming a key tool for solving large scale machine learning problems
[1, 3, 10, 21, 19]. The workloads are partitioned into worker machines, which access the globally
shared model as they simultaneously perform local computations to refine the model. However, efficient implementations of the distributed optimization algorithms for machine learning applications
are not easy. A major challenge is the inter-machine data communication:
- Worker machines must frequently read and write the global shared parameters. This massive
data access requires an enormous amount of network bandwidth. However, bandwidth is one
of the scarcest resources in datacenters [6], often 10-100 times smaller than memory bandwidth
and shared among all running applications and machines. This leads to a huge communication
overhead and becomes a bottleneck for distributed optimization algorithms.
• Many optimization algorithms are sequential, requiring frequent synchronization among worker
machines. In each synchronization, all machines need to wait the slowest machine. However,
due to imperfect workload partition, network congestion, or interference by other running jobs,
slow machines are inevitable, which then becomes another bottleneck.
In this work, we build upon our prior work designing an open-source third generation parameter
server framework [4] to understand the scope of machine learning algorithms to which it can be
applied, and to what benefit. Figure 1 gives an overview of the scale of the largest machine learning
experiments performed on a number of state-of-the-art systems. We confirmed with the authors of
these systems whenever possible.
Compared to these systems, our parameter server is several orders of magnitude more scalable in terms of both parameters and nodes. The parameter server communicates data asynchronously to reduce the communication cost. The resulting data inconsistency is a trade-off between the system performance and the algorithm convergence rate. The system offers two relaxations to address data (in)consistency: First, rather than arguing for a specific consistency model [29, 7, 15], we support flexible consistency models. Second, the system allows user-specific filters for fine-grained consistency management. Besides, the system provides other features such as data replication, instantaneous failover, and elastic scalability.
[Figure 1: Comparison of the largest public machine learning experiments each system performed (# of shared parameters vs. # of cores): Parameter server (Sparse LR), Distbelief (DNN), Petuum (Lasso), Naiad (LR), Yahoo!LDA (LDA), VW (LR), Graphlab (LDA), MLbase (LR), REEF (LR). The results are current as of April 2014.]
Motivating Application. Consider the following general regularized optimization problem:
    minimize_w F(w),  where F(w) := f(w) + h(w) and w ∈ R^p.    (1)
We assume that the loss function f : R^p → R is continuously differentiable but not necessarily convex, and the regularizer h : R^p → R is convex, left side continuous, block separable, but possibly non-smooth.
The proposed algorithm solves this problem based on the proximal gradient method [23]. However,
it differs from the latter in four aspects to efficiently tackle very high-dimensional and sparse data:
• Only a subset (block) of coordinates is updated each time: (block) Gauss-Seidel updates are shown to be efficient on sparse data [36, 27].
• The model a worker maintains is only partially consistent with other machines, due to asynchronous data communication.
• The proximal operator uses coordinate-specific learning rates to adapt progress to the sparsity pattern inherent in the data.
• Only coordinates that would change the associated model weights are communicated to reduce network traffic.
We demonstrate the efficiency of the proposed algorithm by applying it to two challenging problems: (1) non-smooth ℓ1-regularized logistic regression on sparse text datasets with over 100 billion
examples and features; (2) a non-convex and non-smooth ICA reconstruction problem [18], extracting billions of sparse features from dense image data. We show that the combination of the proposed
algorithm and system effectively reduces both the communication cost and programming effort. In
particular, 300 lines of code suffice to implement ℓ1-regularized logistic regression with nearly no
communication overhead for industrial-scale problems.
Outline: We first provide background in Section 2. Next, we address the two relaxations in Section 3
and the proposed algorithm in Section 4. In Section 5 (and also Appendix B and C), we present the
applications with the experimental results. We conclude with a discussion in Section 6.
2
Background
Related Work. The parameter server framework [29] has proliferated both in academia and in
industry. Related systems have been implemented at Amazon, Baidu, Facebook, Google [10], Microsoft, and Yahoo [2]. There are also open source codes, such as YahooLDA [2] and Petuum [15].
As introduced in [29, 2], the first generation of the parameter servers lacked flexibility and performance. The second generation parameter servers were application specific, exemplified by Distbelief [10] and the synchronization mechanism in [20]. Petuum modified YahooLDA by imposing
bounded delay instead of eventual consistency and aimed for a general platform [15], but it placed
more constraints on the threading model of worker machines. Compared to previous work, our
third generation system greatly improves system performance, and also provides flexibility and fault
tolerance.
Beyond the parameter server, there exist many general-purpose distributed systems for machine
learning applications. Many mandate synchronous and iterative communication. For example, Mahout [5], based on Hadoop [13] and MLI [30], based on Spark [37], both adopt the iterative MapReduce framework [11]. On the other hand, Graphlab [21] supports global parameter synchronization
on a best effort basis. These systems scale well to few hundreds of nodes, primarily on dedicated
research clusters. However, at a larger scale the synchronization requirement creates performance
bottlenecks. The primary advantage over these systems is the flexibility of consistency models offered by the parameter server.
There is also a growing interest in asynchronous algorithms. Shotgun [7], as a part of Graphlab,
performs parallel coordinate descent for solving `1 optimization problems. Other methods partition
observations over several machines and update the model in a data parallel fashion [34, 17, 38, 3,
1, 19]. Lock-free variants were proposed in Hogwild [26]. Mixed variants which partition data and
parameters into non-overlapping components were introduced in [33], albeit at the price of having
to move or replicate data on several machines. Lastly, the NIPS framework [31] discusses general
non-convex approximate proximal methods.
The proposed algorithm differs from existing approaches mainly in two aspects. First, we focus on
solving large scale problems. Given the size of data and the limited network bandwidth, neither
the shared memory approach of Shotgun and Hogwild nor moving the entire data during training is
desirable. Second, we aim at solving general non-convex and non-smooth composite objective functions. In contrast to [31], we derive a convergence theorem with weaker assumptions, and furthermore we carry out experiments at a scale that is many orders of magnitude larger.
The Parameter Server Architecture. An instance of the parameter server [4] contains a server
group and several worker groups, in which a group has several machines. Each machine in the server
group maintains a portion of the global parameters, and all servers communicate with each other to
replicate and/or migrate parameters for reliability and scaling.
A worker stores only a portion of the training data and it computes the local gradients or other
statistics. Workers communicate only with the servers to retrieve and update the shared parameters.
In each worker group, there might be a scheduler machine, which assigns workloads to workers as
well as monitors their progress. When workers are added or removed from the group, the scheduler
can reschedule the unfinished workloads. Each worker group runs an application, thus allowing for
multi-tenancy. For example, an ad-serving system and an inference algorithm can run concurrently
in different worker groups.
The shared model parameters are represented as sorted (key,value) pairs. Alternatively we can view
this as a sparse vector or matrix that interacts with the training data through the built-in multithreaded linear algebra functions. Data exchange can be achieved via two operations: push and
pull. A worker can push all (key, value) pairs within a range to servers, or pull the corresponding
values from the servers.
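To make the data-exchange primitives concrete, the following is a minimal single-process Python sketch of range-based push and pull against a key-value parameter store. The class and method names here are illustrative assumptions for this paper's description, not the actual API of the system.

# Minimal sketch of range-based push/pull on a key-value parameter store.
# KVStore is a hypothetical name used for illustration only.
class KVStore:
    def __init__(self):
        self.table = {}  # key -> value

    def push(self, kv_pairs):
        # Servers aggregate pushed values, e.g., summing worker gradients.
        for k, v in kv_pairs:
            self.table[k] = self.table.get(k, 0.0) + v

    def pull(self, key_range):
        # Return the current value for every key in the requested range.
        return {k: self.table.get(k, 0.0) for k in key_range}

store = KVStore()
store.push([(3, 0.5), (7, -1.2)])        # a worker sends local updates
working_set = store.pull(range(0, 10))   # a worker fetches its working set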
Distributed Subgradient Descent. For the motivating example introduced in (1), we can implement a standard distributed subgradient descent algorithm [34] using the parameter server. As
illustrated in Figure 2 and Algorithm 1, training data is partitioned and distributed among all the
workers. The model w is learned iteratively. In each iteration, each worker computes the local gradients using its own training data, and the servers aggregate these gradients to update the globally
shared parameter w. Then the workers retrieve the updated weights from the servers.
A worker needs the model w to compute the gradients. However, for very high-dimensional training
data, the model may not fit in a worker. Fortunately, such data are often sparse, and a worker
typically only requires a subset of the model. To illustrate this point, we randomly assigned samples
in the dataset used in Section 5 to workers, and then counted the model parameters a worker needed
for computing gradients. We found that when using 100 workers, the average worker only needs
7.8% of the model. With 10,000 workers this reduces to 0.15%. Therefore, despite the large total
size of w, the working set of w needed by a particular worker can be cached trivially.
Algorithm 1 Distributed Subgradient Descent Solving (1) in the Parameter Server
Worker r = 1, . . . , m:
1: Load a part of training data {y_ik, x_ik}, k = 1, . . . , n_r
2: Pull the working set w_r^(0) from servers
3: for t = 1 to T do
4:   Gradient g_r^(t) ← Σ_{k=1}^{n_r} ∂ℓ(x_ik, y_ik, w_r^(t))
5:   Push g_r^(t) to servers
6:   Pull w_r^(t+1) from servers
7: end for
Servers:
1: for t = 1 to T do
2:   Aggregate g^(t) ← Σ_{r=1}^{m} g_r^(t)
3:   w^(t+1) ← w^(t) − η(g^(t) + ∂h(w^(t)))
4: end for
[Figure 2: One iteration of Algorithm 1. Each worker (1) computes its local gradient on its shard of the training data and (2) pushes it to the servers; the servers (3) update w, and workers (4) pull the new weights. Each worker only caches the working set of w.]
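For concreteness, here is a minimal single-process Python simulation of Algorithm 1, with m workers sharing one server-side model; the squared loss, synthetic data, and step size are illustrative assumptions standing in for ℓ and h(w) = λ‖w‖_1.

import numpy as np

def distributed_subgradient(X_parts, y_parts, T=100, eta=0.1, lam=0.01):
    # X_parts[r], y_parts[r] hold the data shard of worker r.
    p = X_parts[0].shape[1]
    w = np.zeros(p)                                  # model kept by the servers
    for t in range(T):
        grads = []
        for Xr, yr in zip(X_parts, y_parts):         # each worker: compute (step 4)
            g = Xr.T @ (Xr @ w - yr) / len(yr)       # squared-loss gradient
            grads.append(g)                          # push (step 5)
        g_total = np.sum(grads, axis=0)              # servers: aggregate
        w -= eta * (g_total + lam * np.sign(w))      # subgradient of the l1 term
        # each worker then pulls the new w (implicit in this single process)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
y = X @ np.array([1.0, 0.0, 0.0, -2.0, 0.0])
w_hat = distributed_subgradient(np.split(X, 3), np.split(y, 3))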
3
Two Relaxations of Data Consistency
We now introduce the two relaxations that are key to the proposed system. We encourage the reader
interested in systems details such as server key layout, elastic scalability, and continuous fault tolerance, to see our prior work [4].
3.1
Asynchronous Task Dependency
We decompose the workloads in the parameter server into tasks that are issued by a caller to a remote
callee. There is considerable flexibility in terms of what constitutes a task: for instance, a task can be
a push or a pull that a worker issues to servers, or a user-defined function that the scheduler issues
to any node, such as an iteration in the distributed subgradient algorithm. Tasks can also contains
subtasks. For example, a worker performs one push and one pull per iteration in Algorithm 1.
Tasks are executed asynchronously: the caller can perform further computation immediately after
issuing a task. The caller marks a task as finished only once it receives the callee?s reply. A reply
could be the function return of a user-defined function, the (key,value) pairs requested by the pull,
or an empty acknowledgement. The callee marks a task as finished only if the call of the task is
returned and all subtasks issued by this call are finished.
By default callees execute tasks in parallel for best
performance. A caller wishing to render task execution sequential can insert an execute-after-finished dependency between tasks.
[Diagram: three tasks; iter 10: gradient, push & pull; iter 11: gradient, push & pull; iter 12: gradient, ...]
The diagram illustrates the execution of three tasks. Tasks 10 and 11 are independent, but 12 depends on 11. The callee therefore begins task 11 immediately after the gradients are computed in task 10. Task 12, however, is postponed until after the pull of 11.
Task dependencies aid in implementing algorithm logic. For example, the aggregation logic at servers
in Algorithm 1 can be implemented by having the updating task depend on the push tasks of all
workers. In this way, the weight w is updated only after all worker gradients have been aggregated.
3.2
Flexible Consistency Models via Task Dependency Graphs
The dependency graph introduced above can be used to relax consistency requirements. Independent
tasks improve the system efficiency by parallelizing the usage of CPU, disk and network bandwidth.
However, this may lead to data inconsistency between nodes. In the diagram above, the worker r starts iteration 11 before the updated model w_r^(11) is pulled back; thus it uses the outdated model w_r^(10) and computes the same gradient as it did in iteration 10, namely g_r^(11) = g_r^(10). This inconsistency can potentially slow down the convergence speed of Algorithm 1. However, some algorithms
may be less sensitive to this inconsistency. For example, if only a block of w is updated in each
iteration of Algorithm 2, starting iteration 11 without waiting for 10 causes only a portion of w to
be inconsistent.
The trade-off between algorithm efficiency and system performance depends on various factors in
practice, such as feature correlation, hardware capacity, datacenter load, etc. Unlike other systems
that force the algorithm designer to adopt a specific consistency model that may be ill-suited to
the real situations, the parameter server can provide full flexibility for different consistency models
by creating task dependency graphs, which are directed acyclic graphs defined by tasks with their
dependencies. Consider the following three examples:
[Diagram: task dependency graphs for (a) Sequential, (b) Eventual, and (c) 1-Bounded delay consistency.]
Sequential Consistency requires all tasks to be executed one by one. The next task can be started
only if the previous one has finished. It produces results identical to the single-thread implementation. Bulk Synchronous Processing uses this approach.
Eventual Consistency to the contrary allows all tasks to be started simultaneously. [29] describe
such a system for LDA. This approach is only recommendable whenever the underlying algorithms are very robust with regard to delays.
Bounded Delay limits the staleness of parameters. When a maximal delay time τ is set, a new task will be blocked until all tasks issued more than τ iterations earlier have finished (τ = 0 yields sequential consistency and for τ = ∞ we recover eventual consistency). Algorithm 2 uses such a model.
Note that dependency graphs allow for more advanced consistency models. For example, the scheduler may increase or decrease the maximal delay according to the runtime progress to dynamically
balance the efficiency-convergence trade-off.
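As a minimal sketch, the bounded-delay rule reduces to a simple admission check over the set of finished iterations; the function below is an illustrative assumption, not part of the system's actual scheduler.

def may_start(t, finished, tau):
    # Iteration t may start only once every iteration before t - tau has
    # finished (tau = 0 gives sequential consistency; tau = infinity gives
    # eventual consistency).
    return all(s in finished for s in range(max(0, t - tau)))

finished = {0, 1, 2}
assert may_start(4, finished, tau=1)       # needs iterations 0..2 finished
assert not may_start(5, finished, tau=1)   # iteration 3 is still running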
3.3
Flexible Consistency Models via User-defined Filters
Task dependency graphs manage data consistency between tasks. User-defined filters allow for a
more fine-grained control of consistency (e.g. within a task). A filter can transform and selectively
synchronize the the (key,value) pairs communicated in a task. Several filters can be applied together
for better data compression. Some example filters are:
Significantly modified filter: it only pushes entries that have changed by more than a threshold since the last synchronization (a minimal sketch of this filter appears after this list).
Random skip filter: it subsamples entries before sending. They are skipped in calculations.
KKT filter: it takes advantage of the optimality condition when solving the proximal operator: a
worker only pushes gradients that are likely to affect the weights on the servers. We will discuss
it in more detail in section 5.
Key caching filter: a range of (key,value) pairs is communicated each time, because of the range-based push and pull. When the same range is chosen again, it is likely that only values
are modified while the keys are unchanged. If both the sender and receiver have cached these
keys, the sender then only needs to send the values with a signature of the keys. Therefore, we
effectively double the network bandwidth.
Compressing filter: The values communicated are often compressible numbers, such as zeros, small integers, and floating point numbers with more than enough precision. This filter reduces the data size by using lossless or lossy data compression algorithms.¹
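As referenced above, here is a minimal sketch of the significantly-modified filter: it drops entries whose change since the last synchronization falls below a threshold. The function and variable names are illustrative assumptions.

def significantly_modified(current, last_synced, threshold):
    # Keep only (key, value) pairs that changed by more than `threshold`
    # since the last synchronization; remember the synced copy of those keys.
    out = []
    for k, v in current.items():
        if abs(v - last_synced.get(k, 0.0)) > threshold:
            out.append((k, v))
            last_synced[k] = v
    return out

synced = {}
print(significantly_modified({1: 0.50, 2: 0.001}, synced, 0.01))  # [(1, 0.5)]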
4
Delayed Block Proximal Gradient Method
In this section, we propose an efficient algorithm taking advantage of the parameter server to solve
the previously defined nonconvex and nonsmooth optimization problem (1).
¹Both key caching and data compressing are presented as system-level optimizations in the prior work [4]; here we generalize them into user-defined filters.
Algorithm 2 Delayed Block Proximal Gradient Method Solving (1)
Scheduler:
1: Partition parameters into k blocks b_1, . . . , b_k
2: for t = 1 to T: pick a block b_{i_t} and issue the task to workers
Worker r at iteration t:
1: Wait until all iterations before t − τ are finished
2: Compute first-order gradient g_r^(t) and coordinate-specific learning rates u_r^(t) on block b_{i_t}
3: Push g_r^(t) and u_r^(t) to servers with user-defined filters, e.g., the random skip or the KKT filter
4: Pull w_r^(t+1) from servers with user-defined filters, e.g., the significantly modified filter
Servers at iteration t:
1: Aggregate g^(t) and u^(t)
2: Solve the generalized proximal operator (2): w^(t+1) ← Prox^U_{γ_t}(w^(t)) with U = diag(u^(t))
Proximal Gradient Methods. For a closed proper convex function h(x) : R^p → R ∪ {∞} define the generalized proximal operator
    Prox^U_γ(x) := argmin_{y ∈ R^p} h(y) + (1/(2γ)) ||x − y||_U^2,  where ||x||_U^2 := x^T U x.    (2)
The Mahalanobis norm ||x||_U is taken with respect to a positive semidefinite matrix U ⪰ 0. Many proximal algorithms choose U = 1. To minimize the composite objective function f(w) + h(w), proximal gradient algorithms update w in two steps: a forward step performing steepest gradient descent on f and a backward step carrying out projection using h. Given learning rate γ_t > 0 at iteration t these two steps can be written as
    w^(t+1) = Prox^U_{γ_t}[ w^(t) − γ_t ∇f(w^(t)) ]  for t = 1, 2, . . .    (3)
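For the ℓ1-regularized application of Section 5, where h(w) = λ||w||_1 and U = diag(u), the generalized proximal operator (2) reduces to coordinate-wise soft-shrinkage. The following minimal sketch assumes exactly this special case.

import numpy as np

def prox_l1_diag(x, gamma, lam, u):
    # argmin_y lam*||y||_1 + (1/(2*gamma)) * sum_k u_k * (x_k - y_k)^2
    # has the closed form: soft-threshold each coordinate at gamma*lam/u_k.
    thresh = gamma * lam / u
    return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)

x = np.array([0.8, -0.05, 0.3])
print(prox_l1_diag(x, gamma=1.0, lam=0.1, u=np.ones(3)))  # ~ [0.7, 0.0, 0.2]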
Algorithm. We relax the consistency model of the proximal gradient methods with a block scheme
to reduce the sensitivity to data inconsistency. The proposed algorithm is shown in Algorithm 2. It
differs from the standard method as well as Algorithm 1 in four substantial ways to take advantage
of the opportunities offered by the parameter server and to handle high-dimensional sparse data.
1. Only a block of parameters is updated per iteration.
2. The workers compute both gradients and coordinate-specific learning rates, e.g., the diagonal
part of the second derivative, on this block.
3. Iterations are asynchronous. We use a bounded-delay model over iterations.
4. We employ user-defined filters to suppress transmission of parts of data whose effect on the
model is likely to be negligible.
Convergence Analysis. To prove convergence we need to make a number of assumptions. As before, we decompose the loss f into blocks f_i associated with the training data stored by worker i, that is f = Σ_i f_i. Next we assume that block b_t is chosen at iteration t. A key assumption is that for given parameter changes the rate of change in the gradients of f is bounded. More specifically, we need to bound the change affecting the very block and the amount of "crosstalk" to other blocks.
Assumption 1 (Block Lipschitz Continuity) There exist positive constants L_var,i and L_cov,i such that for any iteration t and all x, y ∈ R^p with x_i = y_i for any i ∉ b_t we have
    ||∇_{b_t} f_i(x) − ∇_{b_t} f_i(y)|| ≤ L_var,i ||x − y||   for 1 ≤ i ≤ m,               (4a)
    ||∇_{b_s} f_i(x) − ∇_{b_s} f_i(y)|| ≤ L_cov,i ||x − y||   for 1 ≤ i ≤ m, t < s ≤ t + τ,  (4b)
where ∇_b f(x) is block b of ∇f(x). Further define L_var := Σ_{i=1}^m L_var,i and L_cov := Σ_{i=1}^m L_cov,i.
The following Theorem 2 indicates that this algorithm converges to a stationary point under the
relaxed consistency model, provided that a suitable learning rate is chosen. Note that since the
overall objective is nonconvex, no guarantees of optimality are possible in general.
Theorem 2 Assume that updates are performed with a delay bounded by τ; also assume that we apply a random skip filter on pushing gradients and a significantly-modified filter on pulling weights with threshold O(t^{-1}). Moreover assume that gradients of the loss are Lipschitz continuous as per Assumption 1. Denote by M_t the minimal coordinate-specific learning rate at time t. For any ε > 0, Algorithm 2 converges to a stationary point in expectation if the learning rate γ_t satisfies
    γ_t ≤ M_t / (L_var + τ L_cov + ε)  for all t > 0.    (5)
The proof is shown in Appendix A. Intuitively, the difference between w^(t−τ) and w^(t) will be small when reaching a stationary point. As a consequence, the change in gradients will also vanish. The inexact gradient obtained from the delayed and inexact model is therefore likely a good approximation of the true gradient, so the convergence results of proximal gradient methods can be applied.
Note that, when the delay increases, we should decrease the learning rate to guarantee convergence. However, a larger learning rate is possible when the block partition and ordering are chosen carefully. For example, if features in a block are less correlated then L_var decreases. If the block is less related to the previous blocks, then L_cov decreases, as also exploited in [26, 7].
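In code, the safe step size prescribed by (5) is a one-line computation; the constants below are illustrative placeholders rather than values measured for any real dataset.

def safe_learning_rate(M_t, L_var, L_cov, tau, eps=1e-3):
    # Largest step size satisfying (5): gamma_t <= M_t / (L_var + tau*L_cov + eps).
    # Note how the tolerated step size shrinks as the delay tau grows.
    return M_t / (L_var + tau * L_cov + eps)

print(safe_learning_rate(M_t=1.0, L_var=4.0, L_cov=0.5, tau=8))  # ~0.125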
5
Experiments
We now show how the general framework discussed above can be used to solve challenging machine
learning problems. Due to space constraints we only present experimental results for a 0.6PB dataset
below. Details on smaller datasets are relegated to Appendix B. Moreover, we discuss non-smooth
Reconstruction ICA in Appendix C.
Setup. We chose ℓ1-regularized logistic regression for evaluation because it is one of the
most popular algorithms used in industry for large scale risk minimization [9]. We collected an ad
click prediction dataset with 170 billion samples and 65 billion unique features. The uncompressed
dataset size is 636TB. We ran the parameter server on 1000 machines, each with 16 CPU cores,
192GB DRAM, and connected by 10 Gb Ethernet. 800 machines acted as workers, and 200 were
servers. The cluster was in concurrent use by other jobs during operation.
Algorithm. We adopted Algorithm 2 with upper bounds of the diagonal entries of the Hessian as
the coordinate-specific learning rates. Features were randomly split into 580 blocks according to the
feature group information. We chose a fixed learning rate by observing the convergence speed.
We designed a Karush-Kuhn-Tucker (KKT) filter to skip inactive coordinates. It is analogous to
the active-set selection strategies of SVM optimization [16] and active set selectors [22]. Assume
w_k = 0 for coordinate k and let g_k be the current gradient. According to the optimality condition of the proximal operator, also known as the soft-shrinkage operator, w_k will remain 0 if |g_k| ≤ λ. Therefore, it is not necessary for a worker to send g_k (as well as u_k). We use an old value g̃_k to approximate g_k to further avoid computing g_k. Thus, coordinate k will be skipped by the KKT filter if |g̃_k| ≤ λ − Δ, where Δ ∈ [0, λ] controls how aggressive the filtering is.
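A minimal sketch of this skip rule, in the notation above (λ is the ℓ1 penalty, Δ the aggressiveness knob, and the gradient estimate may be stale); the function name is an illustrative assumption.

def kkt_skip(w_k, g_stale_k, lam, delta):
    # Skip pushing coordinate k if w_k = 0 and the (possibly stale) gradient
    # estimate cannot move it off zero: |g~_k| <= lam - delta.
    return w_k == 0.0 and abs(g_stale_k) <= lam - delta

print(kkt_skip(0.0, 0.02, lam=0.1, delta=0.05))  # True: stays inactive, skip it
print(kkt_skip(0.0, 0.09, lam=0.1, delta=0.05))  # False: near activation, send it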
Implementation. To the best of our knowledge, no open source system can scale sparse logistic
regression to the scale described in this paper. Graphlab provides only a multi-threaded, single
machine implementation. We compared it with ours in Appendix B. Mlbase, Petuum and REEF do
not support sparse logistic regression (as confirmed with the authors in 4/2014). We compare the
parameter server with two special-purpose second general parameter servers, named System A and
B, developed by a large Internet company.
Both System A and B adopt the sequential consistency model, but the former uses a variant of L-BFGS while the latter runs a similar algorithm to ours. Notably, both systems consist of more than
10K lines of code. The parameter server only requires 300 lines of code for the same functionality
as System B (the latter was developed by an author of this paper). The parameter server successfully
moves most of the system complexity from the algorithmic implementation into reusable components.
[Figure 3: Convergence of sparse logistic regression on a 636TB dataset; objective value vs. time (hours) for System A, System B, and the Parameter Server.]
[Figure 4: Average time per worker spent on computation and waiting during optimization, for System A, System B, and the Parameter Server.]
[Figure 5: Time to reach the same convergence criteria under various allowed maximal delays (1, 2, 4, 8, 16), split into computing and waiting time.]
[Figure 6: The reduction of sent data size (relative network traffic, %) when stacking various filters together: key caching, compressing, and the KKT filter, for servers and workers.]
Experimental Results. We compare these systems by running them to reach the same convergence criteria. Figure 3 shows that System B outperforms System A due to its better algorithm. The parameter server, in turn, speeds up System B by a factor of 2 while using essentially the same algorithm. It achieves this because the consistency relaxations significantly reduce the waiting time (Figure 4). Figure 5 shows that increasing the allowed delay significantly decreases the waiting time, though it slightly slows convergence. The best trade-off is a delay of 8, which results in a 1.6x speedup compared to the sequential consistency model. As can be seen in Figure 6, key caching saves 50% of network traffic. Compression reduces the servers' traffic significantly due to the model sparsity, while it is less effective for workers because the gradients are often non-zero. But these gradients can be filtered efficiently by the KKT filter. In total, these filters give 40x and 12x compression rates for servers and workers, respectively.
6
Conclusion
This paper examined the application of a third-generation parameter server framework to modern
distributed machine learning algorithms. We show that it is possible to design algorithms well
suited to this framework; in this case, an asynchronous block proximal gradient method to solve
general non-convex and non-smooth problems, with provable convergence. This algorithm is a
good match to the relaxations available in the parameter server framework: controllable asynchrony
via task dependencies and user-definable filters to reduce data communication volumes. We showed
experiments for several challenging tasks on real datasets up to 0.6PB in size with hundreds of billions of
samples and features to demonstrate its efficiency. We believe that this third-generation parameter
server is an important and useful building block for scalable machine learning. Finally, the source
codes are available at http://parameterserver.org.
References
[1] A. Agarwal and J. C. Duchi. Distributed delayed stochastic optimization. In IEEE CDC, 2012.
[2] A. Ahmed, M. Aly, J. Gonzalez, S. Narayanamurthy, and A. J. Smola. Scalable inference in latent variable
models. In WSDM, 2012.
[3] A. Ahmed, N. Shervashidze, S. Narayanamurthy, V. Josifovski, and A. J. Smola. Distributed large-scale
natural graph factorization. In WWW, 2013.
[4] M. Li, D. G. Andersen, J. Park h, A. J. Smola, A. Amhed, V. Josifovski, J. Long, E. Shekita, and B. Y. Su.
Scaling Distributed Machine Learning with the Parameter Server. In OSDI, 2014
[5] Apache Foundation. Mahout project, 2012. http://mahout.apache.org.
[6] L. A. Barroso and H. Hölzle. The datacenter as a computer: An introduction to the design of warehouse-scale machines. Synthesis lectures on computer architecture, 4(1):1-108, 2009.
[7] J.K. Bradley, A. Kyrola, D. Bickson, and C. Guestrin. Parallel coordinate descent for L1-regularized loss
minimization. In ICML, 2011.
[8] J. Byers, J. Considine, and M. Mitzenmacher. Simple load balancing for distributed hash tables. In
Peer-to-peer systems II, pages 80?87. Springer, 2003.
[9] K. Canini. Sibyl: A system for large scale supervised machine learning. Technical Talk, 2012.
[10] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, Q. Le, M. Mao, M. Ranzato, A. Senior, P. Tucker,
K. Yang, and A. Ng. Large scale distributed deep networks. In NIPS, 2012.
[11] J. Dean and S. Ghemawat. MapReduce: simplified data processing on large clusters. CACM, 2008.
[12] Domo. Data Never Sleeps 2.0, 2014. http://www.domo.com/learn.
[13] The Apache Software Foundation. Apache hadoop, 2009. http://hadoop.apache.org/core/.
[14] S. H. Gunderson. Snappy https://code.google.com/p/snappy/.
[15] Q. Ho, J. Cipar, H. Cui, S. Lee, J. Kim, P. Gibbons, G. Gibson, G. Ganger, and E. Xing. More effective
distributed ml via a stale synchronous parallel parameter server. In NIPS, 2013.
[16] T. Joachims. Making large-scale SVM learning practical. Advances in Kernel Methods, 1999
[17] J. Langford, A. J. Smola, and M. Zinkevich. Slow learners are fast. In NIPS, 2009.
[18] Q.V. Le, A. Karpenko, J. Ngiam, and A.Y. Ng. ICA with reconstruction cost for efficient overcomplete
feature learning. NIPS, 2011.
[19] M. Li, D. G. Andersen, and A. J. Smola. Distributed delayed proximal gradient methods. In NIPS
Workshop on Optimization for Machine Learning, 2013.
[20] M. Li, L. Zhou, Z. Yang, A. Li, F. Xia, D.G. Andersen, and A. J. Smola. Parameter server for distributed
machine learning. In Big Learning NIPS Workshop, 2013.
[21] Y. Low, J. Gonzalez, A. Kyrola, D. Bickson, C. Guestrin, and J. M. Hellerstein. Distributed graphlab: A
framework for machine learning and data mining in the cloud. In PVLDB, 2012.
[22] S. Matsushima, S.V.N. Vishwanathan, and A.J. Smola. Linear support vector machines via dual cached
loops. In KDD, 2012.
[23] N. Parikh and S. Boyd. Proximal algorithms. In Foundations and Trends in Optimization, 2013.
[24] K. B. Petersen and M. S. Pedersen. The matrix cookbook, 2008. Version 20081110.
[25] A. Phanishayee, D. G. Andersen, H. Pucha, A. Povzner, and W. Belluomini. Flex-kv: Enabling highperformance and flexible KV systems. In Management of big data systems, 2012.
[26] B. Recht, C. Re, S.J. Wright, and F. Niu. Hogwild: A lock-free approach to parallelizing stochastic
gradient descent. NIPS, 2011.
[27] P. Richtárik and M. Takáč. Iteration complexity of randomized block-coordinate descent methods for
minimizing a composite function. Mathematical Programming, 2012.
[28] A. Rowstron and P. Druschel. Pastry: Scalable, decentralized object location and routing for large-scale
peer-to-peer systems. In Distributed Systems Platforms, 2001.
[29] A. J. Smola and S. Narayanamurthy. An architecture for parallel topic models. In VLDB, 2010.
[30] E. Sparks, A. Talwalkar, V. Smith, J. Kottalam, X. Pan, J. Gonzalez, M. J. Franklin, M. I. Jordan, and
T. Kraska. MLI: An API for distributed machine learning. 2013.
[31] S. Sra. Scalable nonconvex inexact proximal splitting. In NIPS, 2012.
[32] I. Stoica, R. Morris, D. Karger, M. F. Kaashoek, and H. Balakrishnan. Chord: A scalable peer-to-peer
lookup service for internet applications. SIGCOMM Computer Communication Review, 2001.
[33] C. Teflioudi, F. Makari, and R. Gemulla. Distributed matrix completion. In ICDM, 2012.
[34] C. H. Teo, S. V. N. Vishwanathan, A. J. Smola, and Q. V. Le. Bundle methods for regularized risk minimization. JMLR, January 2010.
[35] R. van Renesse and F. B. Schneider. Chain replication for supporting high throughput and availability. In
OSDI, 2004.
[36] G. X. Yuan, K. W. Chang, C. J. Hsieh, and C. J. Lin. A comparison of optimization methods and software
for large-scale l1-regularized linear classification. JMLR, 2010.
[37] M. Zaharia, M. Chowdhury, T. Das, A. Dave, J. Ma, M. Mccauley, M. J. Franklin, S. Shenker, and I.
Stoica. Fast and interactive analytics over hadoop data with spark. USENIX ;login:, August 2012.
[38] M. Zinkevich, A. J. Smola, M. Weimer, and L. Li. Parallelized stochastic gradient descent. In NIPS, 2010.
On Model Parallelization and Scheduling Strategies
for Distributed Machine Learning
†Seunghak Lee, †Jin Kyu Kim, †Xun Zheng, ‡Qirong Ho, †Garth A. Gibson, †Eric P. Xing
†School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213
‡Institute for Infocomm Research, A*STAR, Singapore 138632
seunghak@, jinkyuk@, xunzheng@, garth@, [email protected], [email protected]
Abstract
Distributed machine learning has typically been approached from a data parallel
perspective, where big data are partitioned to multiple workers and an algorithm
is executed concurrently over different data subsets under various synchronization schemes to ensure speed-up and/or correctness. A sibling problem that has
received relatively less attention is how to ensure efficient and correct model parallel execution of ML algorithms, where parameters of an ML program are partitioned to different workers and undergone concurrent iterative updates. We argue
that model and data parallelisms impose rather different challenges for system design, algorithmic adjustment, and theoretical analysis. In this paper, we develop a
system for model-parallelism, STRADS, that provides a programming abstraction
for scheduling parameter updates by discovering and leveraging changing structural properties of ML programs. STRADS enables a flexible tradeoff between
scheduling efficiency and fidelity to intrinsic dependencies within the models, and
improves memory efficiency of distributed ML. We demonstrate the efficacy of
model-parallel algorithms implemented on STRADS versus popular implementations for topic modeling, matrix factorization, and Lasso.
1
Introduction
Advancements in sensory technologies and digital storage media have led to a prevalence of "Big Data" collections that have inspired an avalanche of recent efforts on "scalable" machine learning
(ML). In particular, numerous data-parallel solutions from both algorithmic [28, 10] and system
[7, 25] angles have been proposed to speed up inference and learning on Big Data. The recently
emerged parameter server architecture [15, 18] has started to pave ways for a unified programming
interface for data parallel algorithms, based on various parallelization models such as stale synchronous parallelism (SSP) [15], eager SSP [5], and value-bound asynchronous parallelism [23],
etc. However, in addition to Big Data, modern large-scale ML problems have started to encounter
the so-called Big Model challenge [8, 1, 17], in which models with millions if not billions of parameters and/or variables (such as in deep networks [6] or large-scale topic models [20]) must be
estimated from big (or even modestly-sized) datasets. Such Big Model problems seem to have received less systematic investigation. In this paper, we propose a model-parallel framework for such
an investigation.
As is well known, a data-parallel algorithm computes in parallel a partial update of all model parameters (or latent model states in some cases) in each worker, based on only the subset of data
on that worker and a local copy of the model parameters stored on that worker, and then aggregates
these partial updates to obtain a global estimate of the model parameters [15]. In contrast, a model
parallel algorithm aims to update a subset of parameters in parallel on each worker (using either all data, or different subsets of the data [4]) in a way that preserves as much correctness as possible, by ensuring that the updates from each subset are highly compatible. Obviously, such a scheme
directly alleviates memory bottlenecks caused by massive parameter sizes in big models; but even
for small or mid-sized models, an effective model parallel scheme is still highly valuable because it
can speed up an algorithm by updating multiple parameters concurrently, using multiple machines.
While data-parallel algorithms such as stochastic gradient descent [27] can be advantageous over
their sequential counterparts (thanks to concurrent processing over data using various bounded-asynchronous schemes), they require every worker to have full access to all global parameters; furthermore, they leverage an assumption that different data subsets are i.i.d. given the shared global
parameters. For a model-parallel program however, in which model parameters are distributed to
different workers, one cannot blindly leverage such an i.i.d. assumption over arbitrary parameter
subsets, because doing so will cause incorrect estimates due to incompatibility of sub-results from
different workers (e.g., imagine trivially parallelizing a long, simplex-constrained vector across multiple workers: independent updates will break the simplex constraint). Therefore, existing data-parallel schemes and frameworks, which cannot support sophisticated constraint and/or consistency
satisfiability mechanisms across workers, are not easily adapted to model-parallel programs. On the
other hand, as explored in a number of recent works, explicit analysis of dependencies across model
parameters, coupled with the design of suitable parallel schemes accordingly, opens up new opportunities for big models. For example, as shown in [4], model-parallel coordinate descent allows us
to update multiple parameters in parallel, and our work in this paper furthers this approach by allowing some parameters to be prioritized over others. Furthermore, one can take advantage of model
structures to avoid interference and loss of correctness during concurrent parameter updates (e.g.,
nearly independent parameters can be grouped to be updated in parallel [21]), and in this paper,
we explore how to discover such structures in an efficient and scalable manner. To date, model-parallel algorithms are usually developed for a specific application such as matrix factorization [10] or Lasso [4]; thus, there is a need for developing programming abstractions and interfaces that can
tackle the common challenges of Big Model problems, while also exposing new opportunities such
as parameter prioritization to speed up convergence without compromising inference correctness.
Effectively and conveniently programming a model-parallel algorithm stands as another challenge,
as it requires mastery of detailed communication management in a cluster. Existing distributed
frameworks such as MapReduce [7], Spark [25], and GraphLab [19] have shown that a variety of
ML applications can be supported by a single, common programming interface (e.g. Map/Reduce
or Gather/Apply/Scatter). Crucially, these frameworks allow the user to specify a coarse order to
parameter updates, but automatically decide on the precise execution order: for example, MapReduce and Spark allow users to specify that parallel jobs should be executed in some topological order;
e.g. mappers are guaranteed to be followed by reducers, but the system will execute the mappers
in an arbitrary parallel or sequential order that it deems suitable. Similarly, GraphLab chooses the
next node to be updated based on its "chromatic engine" and the user's choice of graph consistency
model, but the user only has loose control over the update order (through the input graph structure).
While this coarse-grained, fully-automatic scheduling is certainly convenient, it does not offer the
fine-grained control needed to avoid parallelization of parameters with subtle interdependencies that
might not be present in the superficial problem or graph structure (which can then lead to algorithm
divergence, as in Lasso [4]). Moreover, most of these frameworks do not allow users to easily prioritize parameters based on new criteria, for more rapid convergence (though we note that GraphLab
allows node prioritization through a priority queue). It is true that data-parallel algorithms can be implemented efficiently on these frameworks, and in principle, one can also implement model-parallel
algorithms on top of them. Nevertheless, we argue that without fine-grained control over parameter
updates, we would miss many new opportunities for accelerating ML algorithm convergence.
To address these challenges, we develop STRADS (STRucture-Aware Dynamic Scheduler), a system that performs automatic scheduling and parameter prioritization for dynamic Big Model parallelism, and is designed to enable investigation of new ML-system opportunities for efficient management of memory and accelerated convergence of ML algorithms, while making a best-effort to
preserve existing convergence guarantees for model-parallel algorithms (e.g. convergence of Lasso
under parallel coordinate descent). STRADS provides a simple abstraction for users to program ML
algorithms, consisting of three ?conceptual? actions: schedule, push and pull. Schedule specifies
the next subset of model parameters to be updated in parallel, push specifies how individual workers
compute partial results on those parameters, and pull specifies how those partial results are aggregated to perform the full parameter update. A high-level view of STRADS is illustrated in Figure 1.
We stress that these actions only specify the abstraction for managed model-parallel ML programs;
they do not dictate the underlying implementation. A key-value store allows STRADS to handle a
large number of parameters in distributed fashion, accessible from all master and worker machines.
As a showcase for STRADS, we implement and provide schedule/push/pull pseudocode for three popular ML applications: topic modeling (LDA), matrix factorization (MF), and Lasso. It is our hope that: (1) the STRADS interface enables Big Model problems to be solved in distributed fashion with modest programming effort, and (2) the STRADS mechanism accelerates the convergence of Big ML algorithms through good scheduling (particularly through user-defined scheduling criteria). In our experiments, we present some evidence of STRADS's success: topic modeling with 3.9M docs, 10K topics, and 21.8M vocabulary (200B parameters), MF with rank-2K on a 480K-by-10K matrix (1B parameters), and Lasso with 100M features (100M parameters).
[Figure 1: High-level architecture of our STRADS system interface for dynamic model parallelism. Master machines issue Schedule, workers Push partial updates and Pull parameters, and a distributed key-value store holds the variables/parameters with read/write access.]
2
Scheduling for Big Model Parallelism with STRADS
"Model parallelism" refers to parallelization
of an ML algorithm over the space of shared
model parameters, rather than the space of
(usually i.i.d.) data samples. At a high level,
model parameters are the changing intermediate quantities that an ML algorithm iteratively
updates, until convergence is reached. A key
advantage of the model-parallel approach is
that it explicitly partitions the model parameters into subsets, allowing ML problems with
massive model spaces to be tackled on machines with limited memory (see supplement
for details of STRADS memory usage).
// Generic STRADS application
schedule() {
// Select U params x[j] to be sent
// to the workers for updating
...
return (x[j_1], ..., x[j_U])
}
push(worker = p, pars = (x[j_1],...,x[j_U])) {
// Compute partial update z for U params x[j]
// at worker p
...
return z
}
pull(workers = [p], pars = (x[j_1],...,x[j_U]),
updates = [z]) {
// Use partial updates z from workers p to
// update U params x[j]. sync() is automatic.
...
}
To enable users to systematically and programmatically exploit model parallelism, STRADS defines a programming interface, where the user writes three functions for an ML problem: schedule, push and pull (Figures 1, 2). STRADS repeatedly schedules and executes these functions in that order, thus creating an iterative model-parallel algorithm.
[Figure 2: STRADS interface: basic functional signatures of schedule, push, pull, using the pseudocode shown above.]
Below, we describe the three functions.
Schedule: This function selects U model parameters to be dispatched for updates (Figure 1).
Within the schedule function, the programmer may access all data D and all model parameters
x, in order to decide which U parameters to dispatch. A simple schedule is to select model parameters according to a fixed sequence, or to draw them uniformly at random. As we shall later see, schedule
also allows model parameters to be selected in a way that: (1) focuses on the fastest-converging parameters, while avoiding already-converged parameters; (2) avoids parallel dispatch of parameters
with inter-dependencies, which can lead to divergence or parallelization errors.
Push & Pull: These functions describe the flow of model parameters x from the scheduler to the workers performing the update equations, as in Fig. 1. Push dispatches a set of parameters {x_j1, . . . , x_jU} to each worker p, which then computes a partial update z for {x_j1, . . . , x_jU} (or a subset of it). When writing push, the user can take advantage of data partitioning: e.g., when only a fraction 1/P of the data samples are stored at each worker, the p-th worker should compute partial results z_jp = Σ_{D_i} f_{x_j}(D_i) by iterating over its 1/P data points D_i. Pull is used to collect the partial results {z_jp} from all workers, and commit them to the parameters {x_j1, . . . , x_jU}. Our STRADS LDA, MF, and Lasso applications partition the data samples uniformly over machines.
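To make the control flow concrete, here is a minimal single-process Python sketch of the driver loop that STRADS conceptually runs around the three user functions; all names are illustrative assumptions, not the actual STRADS API.

def strads_loop(schedule, push, pull, workers, iterations):
    # STRADS repeatedly: (1) schedule a parameter subset, (2) push it to every
    # worker for partial updates, (3) pull the partial results and commit them.
    for t in range(iterations):
        pars = schedule()
        updates = [push(worker=p, pars=pars) for p in workers]
        pull(workers=workers, pars=pars, updates=updates)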
3
Leveraging Model-Parallelism in ML Applications through STRADS
In this section, we explore how users can apply model-parallelism to their ML applications, using
STRADS. As case studies, we design and experiment on 3 ML applications (LDA, MF, and Lasso) in order to show that model-parallelism in STRADS can be simple to implement, yet also
powerful enough to expose new and interesting opportunities for speeding up distributed ML.
3.1
Latent Dirichlet Allocation (LDA)
// STRADS LDA
We introduce STRADS programming through
topic modeling via LDA [3]. Big LDA models provide a strong use case for modelparallelism: when thousands of topics and millions of words are used, the LDA model contains billions of global parameters, and dataparallel implementations face the challenge of
providing access to all these parameters; in contrast, model-parallellism explicitly divides up
the parameters, so that workers only need to access a fraction of parameters at a given time.
schedule() {
dispatch = [] // Empty list
for a=1..U
// Rotation scheduling
idx = ((a+C-1) mod U) + 1
dispatch.append( V[q_idx] )
return dispatch
}
push(worker = p, pars = [V_a, ..., V_U]) {
t = []
// Empty list
for (i,j) in W[q_p] // Fast Gibbs sampling
if w[i,j] in V_p
t.append( (i,j,f_1(i,j,D,B)) )
return t
}
Formally, LDA takes a corpus of N documents as input ? represented as word ?tokens? pull(workers = [p], pars = [V_a, ..., V_U],
updates = [t]) {
wij ? W , where i is the document index and
for all (i,j)
// Update sufficient stats
j is the word position index ? and outputs K
(D,B) = f_2([t])
topics as well as N K-dimensional topic vec- }
tors (soft assignments of topics to each docu- Figure 3: STRADS LDA pseudocode. Definitions for
ment). LDA is commonly reformulated as a f1 , f2 , qp are in the text. C is a global model parameter.
?collapsed? model [14], in which some of the
latent variables are integrated out for faster inference. Inference is performed using Gibbs sampling,
where each word-topic indicator (denoted zij ? Z) is sampled in turn according to its distribution
conditioned on all other parameters. To perform this computation without having to iterate over all
W , Z, sufficient statistics are kept in the form of a ?doc-topic? table D, and a ?word-topic? table
B. A full description of the LDA model is in the supplement.
2.5M vocab, 5K topics, 64 machines
s?error
STRADS implementation: In order to perform model2
STRADS
parallelism, we first identify the model parameters, and create a
1.5
schedule strategy over them. In LDA, the assignments zij are
1
the model parameters, while D, B are summary statistics over
0.5
zij that are used to speed up the sampler. Our schedule strategy
0
equally divides the V words into U subsets V1 , . . . , VU (where U
?0.5
is the number of workers). Each worker will only sample words
?1
0
100
200
300
Iteration
from one subset Va at a time (via push), and update the sufficient
statistics D, W via pull. Subsequent invocations of schedule will Figure 4: STRADS LDA: Par?rotate? subsets amongst workers, so that every worker touches all allelization error ?t at each iterU subsets every U invocations. For data partitioning, we divide ation, on the Wikipedia unigram
the document tokens wij ? W evenly across workers, and denote dataset with K = 5000 and 64
worker p?s set of tokens by Wqp , where qp is the index set for the machines.
p-th worker. Further details and analysis of the pseudocode, particularly how push-pull constitutes
a model-parallel execution of LDA, are in the supplement.
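As an aside, the rotation performed by schedule is just a cyclic shift of subset assignments over rounds; the following minimal Python sketch (our own illustration, matching the idx arithmetic in the pseudocode above) shows the assignment pattern:

    # Minimal sketch of rotation scheduling: at round c, worker a is assigned
    # word subset V[((a + c - 1) mod U) + 1] (1-indexed, as in the pseudocode).

    def rotation_schedule(U, c):
        """Return, for round c, the subset index assigned to each of the U workers."""
        return [((a + c - 1) % U) + 1 for a in range(1, U + 1)]

    U = 4
    for c in range(U):
        print(c, rotation_schedule(U, c))
    # Over U consecutive rounds, every worker touches every subset exactly once.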
Model parallelism results in low error: Parallel Gibbs sampling is not generally guaranteed to converge [12], unless the parameters being sampled for concurrent updates are conditionally independent of each other. STRADS model-parallel LDA assigns workers to disjoint words V and documents w_ij; thus, each worker's parameters z_ij are almost conditionally independent of other workers, resulting in very low sampling error.¹ As evidence, we define an error score δ_t that measures the divergence between the true word-topic distribution/table B and the local copy seen at each worker (a full mathematical explanation is in the supplement). δ_t ranges over [0, 2] (where 0 means no error). Figure 4 plots δ_t for the "Wikipedia unigram" dataset (see §5 for experimental details) with K = 5000 topics and 64 machines (128 processor cores total). δ_t is ≤ 0.002 throughout, confirming that STRADS LDA exhibits very small parallelization error.

¹ This sampling error arises because workers see different versions of B, which is unavoidable when parallelizing LDA inference, because the Gibbs sampler is inherently sequential.
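For intuition, a divergence score of this flavor can be computed by comparing each worker's local copy of the word-topic table against the global one. The sketch below (Python/NumPy) uses an average L1 distance between row-normalized tables, which lies in [0, 2]; the exact formula used in the paper is in its supplement, so this particular score is only an assumption for illustration.

    import numpy as np

    # Hypothetical error score: average L1 distance between each worker's local
    # copy of the (row-normalized) word-topic table and the global table.
    # Row-normalized distributions make each per-word L1 distance lie in [0, 2].

    def error_score(global_B, local_Bs):
        def normalize(B):
            return B / np.clip(B.sum(axis=1, keepdims=True), 1e-12, None)
        G = normalize(global_B)
        dists = [np.abs(normalize(B) - G).sum(axis=1).mean() for B in local_Bs]
        return float(np.mean(dists))  # in [0, 2]; 0 means perfect agreement

    rng = np.random.default_rng(0)
    B = rng.random((100, 5))                      # 100 words, 5 topics
    noisy = [B + 0.01 * rng.random(B.shape) for _ in range(4)]
    print(error_score(B, noisy))                  # small value close to 0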
3.2 Matrix Factorization (MF)

We now consider matrix factorization (collaborative filtering), which can be used to predict users' unknown preferences, given their known preferences and the preferences of others. Formally, MF takes an incomplete matrix A ∈ R^{N×M} as input, where N is the number of users, and M is the number of items. The idea is to discover rank-K matrices W ∈ R^{N×K} and H ∈ R^{K×M} such that WH ≈ A. Thus, the product WH can be used to predict the missing entries (user preferences). Let Ω be the set of indices of observed entries in A, let Ω_i be the set of observed column indices in the i-th row of A, and let Ω_j be the set of observed row indices in the j-th column of A. Then, the MF task is defined by an optimization problem:

    min_{W,H} Σ_{(i,j)∈Ω} (a_ij − w_i h_j)² + λ(‖W‖²_F + ‖H‖²_F).

We solve this objective using a parallel coordinate descent algorithm [24].

STRADS implementation: Our MF schedule strategy is to partition the rows of A into U disjoint index sets q_p, and the columns of A into U disjoint index sets r_p. We then dispatch the model parameters W, H in a round-robin fashion. To update the rows of W, each worker p uses push to compute partial summations on its assigned columns r_p of A and H; the columns of H are updated similarly with rows q_p of A and W. Finally, pull aggregates the partial summations, and then updates the entries in W and H. In Figure 5, we show the STRADS MF pseudocode, and further details are in the supplement.

    // STRADS Matrix Factorization
    schedule() {
      // Round-robin scheduling
      if counter <= U
        return W[q_counter]          // Do W
      else
        return H[r_(counter-U)]      // Do H
    }
    push(worker = p, pars = X[s]) {
      z = []                         // Empty list
      if counter <= U                // X is from W
        for row in s, k=1..K
          z.append( (f_1(row,k,p), f_2(row,k,p)) )
      else                           // X is from H
        for col in s, k=1..K
          z.append( (g_1(k,col,p), g_2(k,col,p)) )
      return z
    }
    pull(workers=[p], pars=X[s], updates=[z]) {
      if counter <= U                // X is from W
        for row in s, k=1..K
          W[row,k] = f_3(row,k,[z])
      else                           // X is from H
        for col in s, k=1..K
          H[k,col] = g_3(k,col,[z])
      counter = (counter mod 2*U) + 1
    }

Figure 5: STRADS MF pseudocode. Definitions for f_1, g_1, ... and q_p, r_p are in the text. counter is a global model variable.
3.3 Lasso

STRADS not only supports simple static schedules, but also dynamic, adaptive strategies that take the model state into consideration. Specifically, the STRADS Lasso implementation schedules parameter updates by (1) prioritizing coefficients that contribute the most to algorithm convergence, and (2) avoiding the simultaneous update of coefficients whose dimensions are highly inter-dependent. These properties complement each other in an algorithmically efficient way, as we shall see.
Formally, Lasso can be defined by an optimization problem: min_β ½‖y − Xβ‖² + λ Σ_j |β_j|, where λ is a regularization parameter that determines the sparsity of β. We solve Lasso using the coordinate descent (CD) update rule [9]: β_j^(t) ← S(x_j^T y − Σ_{k≠j} x_j^T x_k β_k^(t−1), λ), where S(g, λ) := sign(g)(|g| − λ)_+.
STRADS implementation: The Lasso schedule dynamically selects parameters to be updated with the following prioritization scheme: rapidly changing parameters are updated more frequently than others. First, we define a probability distribution c = [c_1, ..., c_J] over β; the purpose of c is to prioritize the β_j's during schedule, and thus speed up convergence. In particular, we observe that choosing β_j with probability c_j = f_1(j) :∝ (δβ_j^(t−1))² + η substantially speeds up the Lasso convergence rate, where η is a small positive constant, and δβ_j^(t−1) = β_j^(t−1) − β_j^(t−2).
To prevent non-convergence due to dimension inter-dependencies [4], we only schedule β_j and β_k for concurrent updates if x_j^T x_k ≈ 0. This is performed as follows: first, select L′ (> L) indices of coefficients from the probability distribution c to form a set C (|C| = L′). Next, choose a subset B ⊆ C of size L such that x_j^T x_k < ρ for all j, k ∈ B, where ρ ∈ (0, 1]; we represent this selection procedure by the function f_2(C). Note that this procedure is inexpensive: by selecting L′ candidate β_j's first, only L′² dependencies need to be checked, as opposed to J², where J is the total number of features. Here L′ and ρ are user-defined parameters.
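One simple way to realize f_2 is a greedy filter over the L′ candidates; the paper does not pin down the exact selection procedure, so the greedy choice below (Python/NumPy) is only one plausible instantiation:

    import numpy as np

    def filter_safe_subset(X, candidates, L, rho):
        """Greedily pick up to L candidate indices whose pairwise |x_j^T x_k| < rho."""
        safe = []
        for j in candidates:
            if all(abs(X[:, j] @ X[:, k]) < rho for k in safe):
                safe.append(j)
            if len(safe) == L:
                break
        return safe

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 6)); X /= np.linalg.norm(X, axis=0)
    X[:, 1] = X[:, 0]                        # make features 0 and 1 fully dependent
    print(filter_safe_subset(X, candidates=[0, 1, 2, 3], L=3, rho=0.5))  # skips 1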
We execute push and pull to update the coefficients indexed by B using U workers in parallel. The rows of the data matrix X are partitioned into U submatrices, and the p-th worker stores the submatrix X_{q_p} ∈ R^{|q_p|×J}; with X partitioned in this manner, we need to modify the CD update rule accordingly. Using U workers, push computes U partial summations for each selected β_j, j ∈ B, denoted by {z_{j,1}^(t), ..., z_{j,U}^(t)}, where z_{j,p}^(t) represents the partial summation for β_j in the p-th worker at the t-th iteration:

    z_{j,p}^(t) ← f_3(p, j) := Σ_{i∈q_p} { x_ij y_i − Σ_{k≠j} x_ij x_ik β_k^(t−1) }.

After all pushes have been completed, pull updates β_j via β_j^(t) = f_4(j, [z_{j,p}]) := S(Σ_{p=1}^U z_{j,p}^(t), λ).
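In code, this is a map of per-shard partial sums followed by one soft-threshold at aggregation time. A minimal sketch (Python/NumPy, sequentially simulating the U workers; names are illustrative):

    import numpy as np

    def soft_threshold(g, lam):
        return np.sign(g) * np.maximum(np.abs(g) - lam, 0.0)

    def push_partial(X_shard, y_shard, beta, j):
        """Partial sum z_{j,p}: worker p's contribution to the CD update of beta_j."""
        r = y_shard - X_shard @ beta + X_shard[:, j] * beta[j]
        return X_shard[:, j] @ r

    def pull_update(partials, lam):
        """Aggregate the U partial sums and soft-threshold."""
        return soft_threshold(sum(partials), lam)

    rng = np.random.default_rng(0)
    X = rng.standard_normal((40, 5)); X /= np.linalg.norm(X, axis=0)
    y = X @ np.array([1.0, 0, 0, 0, 0]) + 0.01 * rng.standard_normal(40)
    beta = np.zeros(5)
    shards = np.array_split(np.arange(40), 4)          # U = 4 workers
    z = [push_partial(X[s], y[s], beta, j=0) for s in shards]
    beta[0] = pull_update(z, lam=0.05)
    print(round(beta[0], 3))                           # close to 1.0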
Analysis of STRADS Lasso scheduling: We wish to highlight several notable aspects of the STRADS Lasso schedule mentioned above. In brief, the sampling distribution f_1(j) and the model dependency control scheme with threshold ρ allow STRADS to speed up the convergence rate of Lasso. To analyze this claim, let us rewrite the Lasso problem by duplicating the original features with opposite sign: F(β) := min_β ½‖y − Xβ‖² + λ Σ_{j=1}^{2J} β_j. Here, with an abuse of notation, X contains 2J features and β_j ≥ 0, for all j = 1, ..., 2J. Then, we have the following analysis of our scheduling scheme.

    // STRADS Lasso
    schedule() {
      // Priority-based scheduling
      for all j
        c_j = f_1(j)                   // Get new priorities
      for a=1..L'
        // Prioritize betas
        random draw s_a using [c_1, ..., c_J]
      // Get "safe" betas
      (j_1, ..., j_L) = f_2(s_1, ..., s_L')
      return (b[j_1], ..., b[j_L])
    }
    push(worker = p, pars = (b[j_1],...,b[j_L])) {
      z = []                           // Empty list
      for a=1..L
        // Compute partial sums
        z.append( f_3(p,j_a) )
      return z
    }
    pull(workers = [p], pars = (b[j_1],...,b[j_L]), updates = [z]) {
      for a=1..L
        // Aggregate partial sums
        b[j_a] = f_4(j_a,[z])
    }

Figure 6: STRADS Lasso pseudocode. Definitions for f_1, f_2, ... are given in the text.

Proposition 1 Suppose B is the set of indices of coefficients updated in parallel at the t-th iteration, and ρ is a sufficiently small constant such that ρ δβ_j^(t) δβ_k^(t) ≈ 0, for all j ≠ k ∈ B. Then, the sampling distribution p(j) ∝ (δβ_j^(t))² approximately maximizes a lower bound on E_B[ F(β^(t)) − F(β^(t) + δβ^(t)) ].

Proposition 1 (see supplement for proof) shows that our scheduling attempts to speed up the convergence of Lasso by decreasing the objective as much as possible at every iteration. However, in practice, we approximate p(j) ∝ (δβ_j^(t))² with f_1(j) ∝ (δβ_j^(t−1))² + η, because δβ_j^(t) is unavailable at the t-th iteration before computing β_j^(t); we add η to give all β_j's a non-zero probability of being updated, to account for the approximation.
4 STRADS System Architecture and Implementation

Our STRADS system implementation uses multiple master/scheduler machines, multiple worker machines, and a single "master" coordinator² machine that directs the activities of the schedulers and workers. The basic unit of STRADS execution is a "round", which consists of schedule-push-pull in that order. In more detail (Figure 1): (1) the masters execute schedule to pick U sets of model parameters x that can be safely updated in parallel (if the masters need to read parameters, they get them from the key-value stores); (2) jobs for push, which update the U sets of parameters, are dispatched via the coordinator to the workers (again, workers read parameters from the key-value stores), which then execute push to compute partial updates z for each parameter; (3) the key-value stores execute pull to aggregate the partial updates z, and keep the newly updated parameters.
To efficiently use multiple cores/machines in the scheduler pool, STRADS uses pipelined schedule computations, i.e., masters compute schedule and queue jobs in advance for future rounds. In other words, parameters to be updated are determined by the masters without waiting for workers' parameter updates; the jobs for parameter updates are dispatched to workers in turn by the coordinator. By pipelining schedule, the master machines do not become a bottleneck even with a large number of workers. Specifically, the pipelined strategy does not incur any parallelization errors if the parameters x for push can be ordered in a manner that does not depend on their actual values (e.g., the MF and LDA applications). For programs whose schedule outcome depends on the current values of x (e.g., Lasso), the strategy is equivalent to executing schedule based on stale values of x, similar to how parameter servers allow computations to be executed on stale model parameters [15, 1]. In the Lasso experiments in §5, this stale-value scheduling strategy still greatly improved the convergence rate.

² The coordinator sends jobs from the masters to the workers, which does not bottleneck at the 10- to 100-machine scale explored in this paper. Distributing the coordinator is left for future work.
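The pipelining amounts to a producer/consumer queue between the scheduler and the workers; here is a toy Python sketch of that pattern (purely illustrative, not the STRADS C++ internals):

    import queue, threading

    # Toy producer/consumer pipeline: the scheduler keeps a queue of future
    # rounds full, so workers never wait on schedule computation.

    job_queue = queue.Queue(maxsize=3)   # schedule up to 3 rounds ahead

    def scheduler(num_rounds):
        for t in range(num_rounds):
            job_queue.put(("round", t))  # blocks only if 3 rounds are queued
        job_queue.put(None)              # sentinel: no more rounds

    def worker():
        while True:
            job = job_queue.get()
            if job is None:
                break
            # ... execute push/pull for this round's parameters ...
            print("executing", job)

    threading.Thread(target=scheduler, args=(5,)).start()
    worker()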
STRADS does not have to perform push-pull communication between the masters and the workers
(which would bottleneck the masters). Instead, the model parameters x can be globally accessible
through a distributed, partitioned key-value store (represented by standard arrays in our pseudocode).
A variety of key-value store synchronization schemes exist, such as Bulk Synchronous Parallel
(BSP), Stale Synchronous Parallel (SSP) [15], and Asynchronous Parallel (AP). In this paper, we
use BSP synchronization; we leave the use of alternative schemes like SSP or AP as future work.
We implemented STRADS using C++ and the Boost libraries, and OpenMPI 1.4.5 was used for
asynchronous communication between the master schedulers, workers, and key-value stores.
5 Experiments
We now demonstrate that our STRADS implementations of LDA, MF, and Lasso can (1) reach larger model sizes than other baselines; (2) converge at least as fast as, if not faster than, other baselines; and (3) use less memory per machine as machines are added (efficient partitioning). As baselines, we used (a) a STRADS implementation of distributed Lasso with only a naive round-robin scheduler (Lasso-RR), (b) GraphLab's Alternating Least Squares (ALS) implementation of MF [19], and (c) YahooLDA for topic modeling [1]. Note that Lasso-RR imitates, on STRADS, the random scheduling scheme proposed by the Shotgun algorithm [4]. We chose GraphLab and YahooLDA, as they are popular choices for distributed MF and LDA.

We conducted experiments on two clusters [11] (with 2-core and 16-core machines respectively), to show the effectiveness of STRADS model-parallelism across different hardware. We used the 2-core cluster for LDA, and the 16-core cluster for Lasso and MF. The 2-core cluster contains 128 machines, each with two 2.6GHz AMD cores and 8GB RAM, connected via a 1Gbps network interface. The 16-core cluster contains 9 machines, each with 16 2.1GHz AMD cores and 64GB RAM, connected via a 40Gbps network interface. Both clusters exhibit a 4GB-per-core memory-to-CPU ratio, a setting commonly observed in the machine learning literature [22, 13], which closely matches the more cost-effective instances on Amazon EC2. All our experiments use a fixed data size, and we vary the number of machines and/or the model size (unless otherwise stated); furthermore, for Lasso we set λ = 0.001, and for MF we set λ = 0.05.
5.1 Datasets

Latent Dirichlet Allocation: We used 3.9M English Wikipedia abstracts, and conducted experiments using both unigram (1-word) tokens (V = 2.5M unique unigrams, 179M tokens) and bigram (2-word) tokens [16] (V = 21.8M unique bigrams, 79M tokens). We note that our bigram vocabulary (21.8M) is an order of magnitude larger than recently published results [1], demonstrating that STRADS scales to very large models. We set the number of topics to K = 5000 and 10000 (also larger than the recent literature [1]), which yields extremely large word-topic tables: 25B elements (unigram) and 218B elements (bigram).

Matrix Factorization: We used the Netflix dataset [2] for our MF experiments: 100M anonymized ratings from 480,189 users on 17,770 movies. We varied the rank of W, H from K = 20 to 2000, which exceeds the upper limit of previous MF papers [26, 10, 24].

Lasso: We used synthetic data with 50K samples and J = 10M to 100M features, where every feature x_j has only 25 non-zero samples. To simulate correlations between adjacent features (which exist in real-world data sets), we first generate x_1 ~ Unif(0, 1). Then, with probability 0.9 we make x_j ~ Unif(0, 1), and with probability 0.1 we set x_j ← 0.9 x_{j−1} + 0.1 Unif(0, 1), for j = 2, ..., J.
5.2 Speed and Model Sizes
[Figure 7: Convergence time versus model size for STRADS and baselines for (left) LDA, (center) MF, and (right) Lasso. We omit the bars if a method did not reach 98% of STRADS's convergence point (YahooLDA and GraphLab-MF failed at 2.5M-vocab/10K-topics and rank K ≥ 80, respectively). STRADS not only reaches larger model sizes than YahooLDA, GraphLab, and Lasso-RR, but also converges significantly faster.]

[Figure 8: Convergence trajectories of different methods for (left) LDA, (center) MF, and (right) Lasso.]

Figure 7 shows the time taken by each algorithm to reach a fixed objective value (over a range of model sizes), as well as the largest model size that each baseline was capable of running. For LDA and MF, STRADS handles much larger model sizes than either YahooLDA (could handle 5K topics
on the unigram dataset) or GraphLab (could handle rank < 80), while converging more quickly; we attribute STRADS's faster convergence to lower parallelization error (LDA only) and reduced synchronization requirements through careful model partitioning (LDA, MF). We observed that each YahooLDA worker stores a portion of the word-topic table, specifically those elements referenced by the words in the worker's data partition. Because our experiments feature very large vocabulary sizes, even a small fraction of the word-topic table can still be too large for a single machine's memory, which caused YahooLDA to fail on the larger experiments. For Lasso, STRADS converges more quickly than Lasso-RR because of our dynamic schedule strategy, which is graphically captured in the convergence trajectory seen in Figure 8: observe that STRADS's dynamic schedule causes the Lasso objective to plunge quickly toward the optimum at around 250 seconds. We also see that STRADS LDA and MF achieved better objective values than the other baselines, confirming that STRADS model-parallelism is fast without compromising convergence quality.
5.3 Scalability

In Figure 9, we show the convergence trajectories and time-to-convergence for STRADS LDA using different numbers of machines at a fixed model size (unigram with 2.5M vocab and 5K topics). The plots confirm that STRADS LDA exhibits faster convergence with more machines, and that the time to convergence almost halves with every doubling of machines (near-linear scaling).

[Figure 9: STRADS LDA scalability with increasing machines using a fixed model size. (Left) Convergence trajectories; (Right) Time taken to reach a log-likelihood of −2.6 × 10⁹.]
6 Conclusions
In this paper, we presented a programmable framework for dynamic Big Model-parallelism that
provides the following benefits: (1) scalability and efficient memory utilization, allowing larger
models to be run with additional machines; (2) the ability to invoke dynamic schedules that reduce
model parameter dependencies across workers, leading to lower parallelization error and thus faster,
correct convergence. An important direction for future research would be to reduce the communication costs of using STRADS. We also want to explore the use of STRADS for other popular ML
applications, such as support vector machines and logistic regression.
Acknowledgments
This work was done under support from NSF IIS1447676, CNS-1042543 (PRObE [11]), DARPA
FA87501220324, and support from Intel via the Intel Science and Technology Center for Cloud
Computing (ISTC-CC).
References
[1] A. Ahmed, M. Aly, J. Gonzalez, S. Narayanamurthy, and A. J. Smola. Scalable inference in latent variable models. In WSDM, 2012.
[2] J. Bennett and S. Lanning. The Netflix prize. In Proceedings of KDD Cup and Workshop, 2007.
[3] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. The Journal of Machine Learning Research, 3:993-1022, 2003.
[4] J. K. Bradley, A. Kyrola, D. Bickson, and C. Guestrin. Parallel coordinate descent for l1-regularized loss minimization. In ICML, 2011.
[5] W. Dai, A. Kumar, J. Wei, Q. Ho, G. Gibson, and E. P. Xing. High-performance distributed ML at scale through parameter server consistency models. In AAAI, 2014.
[6] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, Q. V. Le, M. Z. Mao, M. Ranzato, A. W. Senior, P. A. Tucker, et al. Large scale distributed deep networks. In NIPS, 2012.
[7] J. Dean and S. Ghemawat. MapReduce: simplified data processing on large clusters. Communications of the ACM, 51(1):107-113, 2008.
[8] J. Fan, R. Samworth, and Y. Wu. Ultrahigh dimensional feature selection: beyond the linear model. The Journal of Machine Learning Research, 10:2013-2038, 2009.
[9] J. Friedman, T. Hastie, H. Hofling, and R. Tibshirani. Pathwise coordinate optimization. Annals of Applied Statistics, 1(2):302-332, 2007.
[10] R. Gemulla, E. Nijkamp, P. J. Haas, and Y. Sismanis. Large-scale matrix factorization with distributed stochastic gradient descent. In SIGKDD, 2011.
[11] G. Gibson, G. Grider, A. Jacobson, and W. Lloyd. PRObE: A thousand-node experimental cluster for computer systems research. USENIX ;login:, 38, 2013.
[12] J. Gonzalez, Y. Low, A. Gretton, and C. Guestrin. Parallel Gibbs sampling: From colored fields to thin junction trees. In AISTATS, 2011.
[13] J. Gonzalez, Y. Low, H. Gu, D. Bickson, and C. Guestrin. PowerGraph: Distributed graph-parallel computation on natural graphs. In OSDI, 2012.
[14] T. L. Griffiths and M. Steyvers. Finding scientific topics. Proceedings of the National Academy of Sciences of the United States of America, 101(Suppl 1):5228-5235, 2004.
[15] Q. Ho, J. Cipar, H. Cui, J. Kim, S. Lee, P. B. Gibbons, G. Gibson, G. R. Ganger, and E. P. Xing. More effective distributed ML via a stale synchronous parallel parameter server. In NIPS, 2013.
[16] J. H. Lau, T. Baldwin, and D. Newman. On collocations and topic models. ACM Transactions on Speech and Language Processing (TSLP), 10(3):10, 2013.
[17] Q. V. Le, M. A. Ranzato, R. Monga, M. Devin, K. Chen, G. S. Corrado, J. Dean, and A. Y. Ng. Building high-level features using large scale unsupervised learning. In ICML, 2012.
[18] M. Li, D. G. Andersen, J. W. Park, A. J. Smola, A. Ahmed, V. Josifovski, J. Long, E. J. Shekita, and B. Su. Scaling distributed machine learning with the parameter server. In OSDI, 2014.
[19] Y. Low, J. Gonzalez, A. Kyrola, D. Bickson, C. Guestrin, and J. M. Hellerstein. Distributed GraphLab: A framework for machine learning and data mining in the cloud. In VLDB, 2012.
[20] D. Newman, A. Asuncion, P. Smyth, and M. Welling. Distributed algorithms for topic models. The Journal of Machine Learning Research, 10:1801-1828, 2009.
[21] C. Scherrer, A. Tewari, M. Halappanavar, and D. Haglin. Feature clustering for accelerating parallel coordinate descent. In NIPS, 2012.
[22] Y. Wang, X. Zhao, Z. Sun, H. Yan, L. Wang, Z. Jin, L. Wang, Y. Gao, J. Zeng, Q. Yang, et al. Towards topic modeling for big data. arXiv:1405.4402 [cs.IR], 2014.
[23] J. Wei, W. Dai, A. Kumar, X. Zheng, Q. Ho, and E. P. Xing. Consistent bounded-asynchronous parameter servers for distributed ML. arXiv:1312.7869 [stat.ML], 2013.
[24] H. Yu, C. Hsieh, S. Si, and I. Dhillon. Scalable coordinate descent approaches to parallel matrix factorization for recommender systems. In ICDM, 2012.
[25] M. Zaharia, M. Chowdhury, M. J. Franklin, S. Shenker, and I. Stoica. Spark: Cluster computing with working sets. In HotCloud, 2010.
[26] Y. Zhou, D. Wilkinson, R. Schreiber, and R. Pan. Large-scale parallel collaborative filtering for the Netflix prize. In AAIM, 2008.
[27] M. Zinkevich, J. Langford, and A. J. Smola. Slow learners are fast. In NIPS, 2009.
[28] M. Zinkevich, M. Weimer, L. Li, and A. J. Smola. Parallelized stochastic gradient descent. In NIPS, 2010.
Communication-Efficient
Distributed Dual Coordinate Ascent
Martin Jaggi* (ETH Zurich)    Virginia Smith* (UC Berkeley)    Martin Takáč (Lehigh University)
Jonathan Terhorst (UC Berkeley)    Sanjay Krishnan (UC Berkeley)    Thomas Hofmann (ETH Zurich)    Michael I. Jordan (UC Berkeley)
Abstract

Communication remains the most significant bottleneck in the performance of distributed optimization algorithms for large-scale machine learning. In this paper, we propose a communication-efficient framework, CoCoA, that uses local computation in a primal-dual setting to dramatically reduce the amount of necessary communication. We provide a strong convergence rate analysis for this class of algorithms, as well as experiments on real-world distributed datasets with implementations in Spark. In our experiments, we find that as compared to state-of-the-art mini-batch versions of SGD and SDCA algorithms, CoCoA converges to the same .001-accurate solution quality on average 25× as quickly.
1 Introduction
With the immense growth of available data, developing distributed algorithms for machine learning
is increasingly important, and yet remains a challenging topic both theoretically and in practice. On
typical real-world systems, communicating data between machines is vastly more expensive than
reading data from main memory, e.g. by a factor of several orders of magnitude when leveraging
commodity hardware.1 Yet, despite this reality, most existing distributed optimization methods for
machine learning require significant communication between workers, often equalling the amount of
local computation (or reading of local data). This includes for example popular mini-batch versions
of online methods, such as stochastic subgradient (SGD) and coordinate descent (SDCA).
In this work, we target this bottleneck. We propose a distributed optimization framework that allows
one to freely steer the trade-off between communication and local computation. In doing so, the
framework can be easily adapted to the diverse spectrum of available large-scale computing systems,
from high-latency commodity clusters to low-latency supercomputers or the multi-core setting.
Our new framework, CoCoA (Communication-efficient distributed dual Coordinate Ascent), supports objectives for linear regularized loss minimization, encompassing a broad class of machine learning models. By leveraging the primal-dual structure of these optimization problems, CoCoA effectively combines partial results from local computation while avoiding conflict with updates simultaneously computed on other machines. In each round, CoCoA employs steps of an arbitrary dual optimization method on the local data on each machine, in parallel. A single update vector is then communicated to the master node. For example, when choosing to perform H iterations (usually on the order of the data size n) of an online optimization method locally per round, our scheme saves a factor of H in terms of communication compared to the corresponding naive distributed update scheme (i.e., updating a single point before communication). When processing the same number of datapoints, this is clearly a dramatic savings.

* Both authors contributed equally.
¹ On typical computers, the latency for accessing data in main memory is on the order of 100 nanoseconds. In contrast, the latency for sending data over a standard network connection is around 250,000 nanoseconds.
Our theoretical analysis (Section 4) shows that this significant reduction in communication cost comes with only a very moderate increase in the amount of total computation required to reach the same optimization accuracy. We show that, in general, the distributed CoCoA framework will inherit the convergence rate of the internally-used local optimization method. When using SDCA (randomized dual coordinate ascent) as the local optimizer and assuming smooth losses, this convergence rate is geometric.
In practice, our experiments with the method implemented on the fault-tolerant Spark platform [1] confirm both the clock-time performance and the huge communication savings of the proposed method on a variety of distributed datasets. Our experiments consistently show order-of-magnitude gains over traditional mini-batch methods of both SGD and SDCA, and significant gains over the faster but theoretically less justified local SGD methods.
Related Work. As we discuss below (Section 5), our approach is distinguished from recent work
on parallel and distributed optimization [2, 3, 4, 5, 6, 7, 8, 9] in that we provide a general framework
for improving the communication efficiency of any dual optimization method. To the best of our
knowledge, our work is the first to analyze the convergence rate for an algorithm with this level
of communication efficiency, without making data-dependent assumptions. The presented analysis
covers the case of smooth losses, but should also be extendable to the non-smooth case. Existing
methods using mini-batches [4, 2, 10] are closely related, though our algorithm makes significant
improvements by immediately applying all updates locally while they are processed, a scheme that
is not considered in the classic mini-batch setting. This intuitive modification results in dramatically
improved empirical results and also strengthens our theoretical convergence rate. More precisely,
the convergence rate shown here only degrades with the number of workers K, instead of with the significantly larger mini-batch size (typically of order n) in the case of mini-batch methods.
Our method builds on a closely related recent line of work of [2, 3, 11, 12]. We generalize the algorithm of [2, 3] by allowing the use of arbitrary (dual) optimization methods as the local subroutine
within our framework. In the special case of using coordinate ascent as the local optimizer, the
resulting algorithm is very similar, though with a different computation of the coordinate updates.
Moreover, we provide the first theoretical convergence rate analysis for such methods, without making strong assumptions on the data.
The proposed CoCoA framework in its basic variant is entirely free of tuning parameters or learning rates, in contrast to SGD-based methods. The only choice to make is the selection of the internal local optimization procedure, steering the desired trade-off between communication and computation. When choosing a primal-dual optimizer as the internal procedure, the duality gap readily provides a fair stopping criterion and efficient accuracy certificates during optimization.
Paper Outline. The rest of the paper is organized as follows. In Section 2 we describe the problem setting of interest. Section 3 outlines the proposed framework, CoCoA, and the convergence analysis of this method is presented in Section 4. We discuss related work in Section 5, and compare against several other state-of-the-art methods empirically in Section 6.
2 Setup
A large class of methods in machine learning and signal processing can be posed as the minimization of a convex loss function of linear predictors with a convex regularization term:

    min_{w ∈ R^d}  P(w) := (λ/2) ‖w‖² + (1/n) Σ_{i=1}^n ℓ_i(w^T x_i).      (1)

Here the training data examples are real-valued vectors x_i ∈ R^d; the loss functions ℓ_i, i = 1, ..., n are convex and depend possibly on labels y_i ∈ R; and λ > 0 is the regularization parameter. Using the setup of [13], we assume the regularizer is the ℓ₂-norm for convenience. Examples of this class of problems include support vector machines, as well as regularized linear and logistic regression, ordinal regression, and others.
The most popular method to solve problems of the form (1) is the stochastic subgradient method (SGD) [14, 15, 16]. In this setting, SGD becomes an online method where every iteration only requires access to a single data example (x_i, y_i), and the convergence rate is well-understood.
variable per each example in the training set.
maxn
??R
h
n
i
?
1X ?
D(?) := ? kA?k2 ?
`i (??i ) ,
2
n i=1
(2)
where `?i is the conjugate (Fenchel dual) of the loss function `i , and the data matrix A ? Rd?n
1
xi in its columns. The duality comes with the
collects the (normalized) data examples Ai := ?n
convenient mapping from dual to primal variables w(?) := A? as given by the optimality conditions [13]. For any configuration of the dual variables ?, we have the duality gap defined as
P (w(?))?D(?). This gap is a computable certificate of the approximation quality to the unknown
true optimum P (w? ) = D(?? ), and therefore serves as a useful stopping criteria for algorithms.
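For concreteness, here is a small numerical sketch (Python/NumPy) of the primal-dual pair and the duality gap for the least-squares loss ℓ_i(a) = ½(a − y_i)², whose conjugate is ℓ*_i(u) = ½u² + u y_i; the loss choice is ours, for illustration only:

    import numpy as np

    # Duality gap sketch for least-squares loss l_i(a) = 0.5*(a - y_i)^2,
    # with conjugate l*_i(u) = 0.5*u^2 + u*y_i. A has columns A_i = x_i/(lam*n).

    rng = np.random.default_rng(0)
    n, d, lam = 100, 5, 0.1
    X = rng.standard_normal((d, n))          # columns are data points x_i
    y = rng.standard_normal(n)
    A = X / (lam * n)

    def primal(w):
        return 0.5 * lam * w @ w + np.mean(0.5 * (X.T @ w - y) ** 2)

    def dual(alpha):
        return -0.5 * lam * np.linalg.norm(A @ alpha) ** 2 \
               - np.mean(0.5 * alpha ** 2 - alpha * y)

    alpha = rng.standard_normal(n)           # any dual point
    w = A @ alpha                            # primal-dual mapping w(alpha) = A*alpha
    print(primal(w) - dual(alpha) >= 0)      # duality gap is always non-negative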
For problems of the form (2), coordinate descent methods have proven to be very efficient, and come with several benefits over primal methods. In randomized dual coordinate ascent (SDCA), updates are made to the dual objective (2) by solving for one coordinate completely while keeping all others fixed. This algorithm has been implemented in a number of software packages (e.g. LibLinear [17]), and has proven very suitable for use in large-scale problems, while giving stronger convergence results than the primal-only methods (such as SGD), at the same iteration cost [13]. In addition to superior performance, this method also benefits from requiring no stepsize, and from having a well-defined stopping criterion given by the duality gap.
3 Method Description
The CoCoA framework, as presented in Algorithm 1, assumes that the data {(x_i, y_i)}_{i=1}^n for a regularized loss minimization problem of the form (1) is distributed over K worker machines. We associate with the datapoints their corresponding dual variables {α_i}_{i=1}^n, partitioned between the workers in the same way. The core idea is to use the dual variables to efficiently merge the parallel updates from the different workers without much conflict, by exploiting the fact that they all work on disjoint sets of dual variables.
Algorithm 1: CoCoA: Communication-Efficient Distributed Dual Coordinate Ascent
    Input: T ≥ 1, scaling parameter 1 ≤ β_K ≤ K (default: β_K := 1).
    Data: {(x_i, y_i)}_{i=1}^n distributed over K machines
    Initialize: α_[k]^(0) ← 0 for all machines k, and w^(0) ← 0
    for t = 1, 2, ..., T
      for all machines k = 1, 2, ..., K in parallel
        (Δα_[k], Δw_k) ← LocalDualMethod(α_[k]^(t−1), w^(t−1))
        α_[k]^(t) ← α_[k]^(t−1) + (β_K / K) Δα_[k]
      end
      reduce: w^(t) ← w^(t−1) + (β_K / K) Σ_{k=1}^K Δw_k
    end
In each round, the K workers in parallel perform some steps of an arbitrary optimization method, applied to their local data. This internal procedure tries to maximize the dual formulation (2), only with respect to their own local dual variables. We call this local procedure LocalDualMethod, as specified in the template Procedure A. Our core observation is that the necessary information each worker requires about the state of the other dual variables can be very compactly represented by a single primal vector w ∈ R^d, without ever sending around data or dual variables between the machines. Allowing the subroutine to process more than one local data example per round dramatically reduces the amount of communication between the workers.
Procedure A: LocalDualMethod: Dual algorithm for prob. (2) on a single coordinate block k
    Input: Local α_[k] ∈ R^{n_k}, and w ∈ R^d consistent with the other coordinate blocks of α, s.t. w = Aα
    Data: Local {(x_i, y_i)}_{i=1}^{n_k}
    Output: Δα_[k] and Δw := A_[k] Δα_[k]
Procedure B: LocalSDCA: SDCA iterations for problem (2) on a single coordinate block k
    Input: H ≥ 1, α_[k] ∈ R^{n_k}, and w ∈ R^d consistent with the other coordinate blocks of α, s.t. w = Aα
    Data: Local {(x_i, y_i)}_{i=1}^{n_k}
    Initialize: w^(0) ← w, Δα_[k] ← 0 ∈ R^{n_k}
    for h = 1, 2, ..., H
      choose i ∈ {1, 2, ..., n_k} uniformly at random
      find Δα maximizing −(λn/2) ‖w^(h−1) + (1/(λn)) Δα x_i‖² − ℓ*_i(−(α_i^(h−1) + Δα))
      α_i^(h) ← α_i^(h−1) + Δα
      (Δα_[k])_i ← (Δα_[k])_i + Δα
      w^(h) ← w^(h−1) + (1/(λn)) Δα x_i
    end
    Output: Δα_[k] and Δw := A_[k] Δα_[k]
By definition, CoCoA in each outer iteration only requires communication of a single vector per worker, namely Δw_k ∈ R^d. Further, as we will show in Section 4, CoCoA inherits the convergence guarantee of any algorithm run locally on each node in the inner loop of Algorithm 1. We suggest using randomized dual coordinate ascent (SDCA) [13] as the internal optimizer in practice, as implemented in Procedure B and also used in our experiments.
Notation. In the same way the data is partitioned across the K worker machines, we write the dual variable vector as α = (α_[1], ..., α_[K]) ∈ R^n with the corresponding coordinate blocks α_[k] ∈ R^{n_k} such that Σ_k n_k = n. The submatrix A_[k] collects the columns of A (i.e. rescaled data examples) which are available locally on the k-th worker. The parameter T determines the number of outer iterations of the algorithm, while when using an online internal method such as LocalSDCA, the number of inner iterations H determines the computation-communication trade-off factor.
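To make Algorithm 1 concrete, the following single-process simulation (Python/NumPy) runs CoCoA with LocalSDCA on the least-squares loss from Section 2, a loss we picked for illustration; for this loss the inner maximization has the closed form used below, and β_K = 1 as in the default.

    import numpy as np

    # Single-process simulation of CoCoA (Algorithm 1) with LocalSDCA
    # (Procedure B) for least-squares loss l_i(a) = 0.5*(a - y_i)^2.
    # Closed form: delta = (y_i - alpha_i - w.x_i) / (1 + x_i.x_i / (lam*n)).

    rng = np.random.default_rng(0)
    n, d, K, H, T, lam = 200, 10, 4, 50, 30, 0.1
    X = rng.standard_normal((d, n))            # columns are data points x_i
    y = rng.standard_normal(n)
    blocks = np.array_split(np.arange(n), K)   # dual coordinates per machine
    alpha, w = np.zeros(n), np.zeros(d)

    def local_sdca(block, alpha, w):
        """H steps of SDCA on one block; returns (delta_alpha, delta_w)."""
        alpha, w = alpha.copy(), w.copy()
        d_alpha, w0 = np.zeros(len(block)), w.copy()
        for _ in range(H):
            t = int(rng.integers(len(block)))
            i = block[t]; x = X[:, i]
            delta = (y[i] - alpha[i] - w @ x) / (1.0 + x @ x / (lam * n))
            alpha[i] += delta
            d_alpha[t] += delta
            w += delta * x / (lam * n)
        return d_alpha, w - w0

    for _ in range(T):
        results = [local_sdca(b, alpha, w) for b in blocks]  # "in parallel"
        for b, (d_alpha, _) in zip(blocks, results):
            alpha[b] += d_alpha / K                          # (beta_K / K) scaling
        w = w + sum(dw for _, dw in results) / K             # reduce step

    P = 0.5 * lam * w @ w + np.mean(0.5 * (X.T @ w - y) ** 2)
    D = -0.5 * lam * np.linalg.norm(X @ alpha / (lam * n)) ** 2 \
        - np.mean(0.5 * alpha ** 2 - alpha * y)
    print(f"duality gap after {T} rounds: {P - D:.6f}")      # shrinks toward 0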
4 Convergence Analysis
Considering the dual problem (2), we define the local suboptimality on each coordinate block as:

    ε_{D,k}(α) := max_{α̂_[k] ∈ R^{n_k}} D((α_[1], ..., α̂_[k], ..., α_[K])) − D((α_[1], ..., α_[k], ..., α_[K])),      (3)

that is, how far we are from the optimum on block k with all other blocks fixed. Note that this differs from the global suboptimality max_{α̂} D(α̂) − D((α_[1], ..., α_[K])).
Assumption 1 (Local Geometric Improvement of LocalDualMethod). We assume that there exists Θ ∈ [0, 1) such that for any given α, LocalDualMethod, when run on block k alone, returns a (possibly random) update Δα_[k] such that

    E[ε_{D,k}((α_[1], ..., α_[k−1], α_[k] + Δα_[k], α_[k+1], ..., α_[K]))] ≤ Θ · ε_{D,k}(α).      (4)

Note that this assumption is satisfied for several available implementations of the inner procedure LocalDualMethod, in particular for LocalSDCA, as shown in the following proposition. From here on, we assume that the input data is scaled such that ‖x_i‖ ≤ 1 for all datapoints. Proofs of all statements are provided in the supplementary material.
Proposition 1. Assume the loss functions ℓ_i are (1/γ)-smooth. Then, when using LocalSDCA, Assumption 1 holds with

    Θ = (1 − (λnγ)/(1 + λnγ) · (1/ñ))^H,      (5)

where ñ := max_k n_k is the size of the largest block of coordinates.
Theorem 2. Assume that Algorithm 1 is run for T outer iterations on K worker machines, with the procedure LocalDualMethod having local geometric improvement Θ, and let β_K := 1. Further, assume the loss functions ℓ_i are (1/γ)-smooth. Then the following geometric convergence rate holds for the global (dual) objective:

    E[D(α*) − D(α^(T))] ≤ (1 − (1 − Θ) · (1/K) · (λnγ)/(σ + λnγ))^T · (D(α*) − D(α^(0))).      (6)

Here σ is any real number satisfying

    σ ≥ σ_min := max_{α ∈ R^n} λ²n² (Σ_{k=1}^K ‖A_[k] α_[k]‖² − ‖Aα‖²) / ‖α‖² ≥ 0.      (7)
Lemma 3. If K = 1 then σ_min = 0. For any K ≥ 1, when assuming ‖x_i‖ ≤ 1 for all i, we have 0 ≤ σ_min ≤ ñ. Moreover, if datapoints between different workers are orthogonal, i.e. (A^T A)_{i,j} = 0 for all i, j such that i and j do not belong to the same part, then σ_min = 0.

If we choose K = 1, then Theorem 2 together with Lemma 3 implies that

    E[D(α*) − D(α^(T))] ≤ Θ^T (D(α*) − D(α^(0))),

as expected, showing that the analysis is tight in the special case K = 1. More interestingly, we observe that for any K, in the extreme case when the subproblems are solved to optimality (i.e. letting H → ∞ in LocalSDCA), then the algorithm as well as the convergence rate match that of serial/parallel block-coordinate descent [18, 19].

Note: If choosing the starting point as α^(0) := 0 as in the main algorithm, then it is known that D(α*) − D(α^(0)) ≤ 1 (see e.g. Lemma 20 in [13]).
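To get a feel for the rate in (6), one can plug in concrete numbers. The short computation below (Python) evaluates Θ from (5) and the resulting per-round contraction factor, using the worst-case bound σ = ñ from Lemma 3; all parameter values are arbitrary, chosen only to illustrate the formulas:

    # Evaluate the local improvement Theta from Eq. (5) and the per-round
    # contraction factor in Eq. (6), using the worst-case sigma = n_tilde
    # (Lemma 3). All parameter values below are arbitrary illustrations.

    lam, n, gamma, K = 1e-4, 100_000, 1.0, 10
    n_tilde = n // K                     # equal-sized blocks
    H = n_tilde                          # one local pass per round

    theta = (1 - (lam * n * gamma) / (1 + lam * n * gamma) / n_tilde) ** H
    sigma = n_tilde
    rate = 1 - (1 - theta) / K * (lam * n * gamma) / (sigma + lam * n * gamma)

    print(f"Theta = {theta:.4f}")                 # local geometric improvement
    print(f"per-round contraction = {rate:.6f}")  # < 1, so geometric convergence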
5 Related Work
Distributed Primal-Dual Methods. Our approach is most closely related to recent work by [2, 3], which generalizes the distributed optimization method for linear SVMs as in [11] to the primal-dual setting considered here (which was introduced by [13]). The difference between our approach and the "practical" method of [2] is that our internal steps directly correspond to coordinate descent iterations on the global dual objective (2), for coordinates in the current block, while in [3, Equation 8] and [2], the inner iterations apply to a slightly different notion of the sub-dual problem defined on the local data. In terms of convergence results, the analysis of [2] only addresses the mini-batch case without local updates, while the more recent paper [3] shows a convergence rate for a variant of CoCoA with inner coordinate steps, but under the unrealistic assumption that the data is orthogonal between the different workers. In this case, the optimization problems become independent, so that an even simpler single-round communication scheme summing the individual resulting models w would give an exact solution. Instead, we show a linear convergence rate for the full problem class of smooth losses, without any assumptions on the data, in the same generality as the non-distributed setting of [13].
While the experimental results in all of the papers [11, 2, 3] are encouraging for this type of method, they do not yet provide a quantitative comparison of the gains in communication efficiency, or compare to the analogous SGD schemes that use the same distribution and communication patterns, which is the main goal of our experiments in Section 6. For the special case of linear SVMs, the first paper to propose the same algorithmic idea was [11], which used LibLinear in the inner iterations. However, the proposed algorithm [11] processes the blocks sequentially (not in the parallel or distributed setting). Also, it is assumed that the subproblems are solved to near optimality on each block before selecting the next, making the method essentially standard block-coordinate descent. While no convergence rate was given, the empirical results in the journal paper [12] suggest that running LibLinear for just one pass through the local data performs well in practice. Here, we prove this, quantify the communication efficiency, and show that fewer local steps can improve the overall performance. For the LASSO case, [7] has proposed a parallel coordinate descent method converging to the true optimum, which could potentially also be interpreted in our framework here.
Mini-Batches. Another closely related avenue of research includes methods that use mini-batches to distribute updates. In these methods, a mini-batch, or sample, of the data examples is selected for processing at each iteration. All updates within the mini-batch are computed based on the same fixed parameter vector w, and then these updates are either added or averaged in a reduce step and communicated back to the worker machines. This concept has been studied for both SGD and SDCA, see e.g. [4, 10] for the SVM case. The so-called naive variant of [2] is essentially identical to mini-batch dual coordinate descent, with a slight difference in defining the sub-problems.

As is shown in [2] and below in Section 6, the performance of these algorithms suffers when processing large batch sizes, as they do not take local updates immediately into account. Furthermore, they are very sensitive to the choice of the parameter β_b, which controls the magnitude of combining all updates, between β_b := 1 for (conservatively) averaging and β_b := b for (aggressively) adding (here we denote by b the size of the selected mini-batch, which can be of size up to n). This instability is illustrated by the fact that even the change to β_b := 2 instead of β_b := 1 can lead to divergence of coordinate descent (SDCA) in the simple case of just two coordinates [4]. In practice it can be very difficult to choose the correct data-dependent parameter β_b, especially for large mini-batch sizes b ≈ n, as the parameter range spans many orders of magnitude and directly controls the step size of the resulting algorithm, and therefore the convergence rate [20, 21]. For sparse data, the work of [20, 21] gives some data-dependent choices of β_b which are safe.

Known convergence rates for the mini-batch methods degrade linearly with the growing batch size b ∈ Θ(n). More precisely, the improvement in objective function per example processed degrades with a factor of β_b in [4, 20, 21]. In contrast, our convergence rate as shown in Theorem 2 only degrades with the much smaller number of worker machines K, which in practical applications is often several orders of magnitude smaller than the mini-batch size b.
Single Round of Communication. One extreme is to consider methods with only a single round
of communication (e.g. one map-reduce operation), as in [22, 6, 23]. The output of these methods is
the average of K individual models, trained only on the local data on each machine. In [22], the authors give conditions on the data and computing environment under which these one-communication
algorithms may be sufficient. In general, however, the true optimum of the original problem (1) is
not the average of these K models, no matter how accurately the subproblems are solved [24].
Naive Distributed Online Methods, Delayed Gradients, and Multi-Core. On the other extreme, a natural way to distribute updates is to let every machine send updates to the master node (sometimes called the "parameter server") as soon as they are performed. This is what we call naive distributed SGD / CD in our experiments. The amount of communication for such naive distributed online methods is the same as the number of data examples processed. In contrast to this, the number of communicated vectors in our method is divided by H, the number of inner local steps performed per outer iteration, which can be Θ(n).

The early work of [25] introduced the nice framework of gradient updates where the gradients come with some delays, i.e. are based on outdated iterates, and shows some robust convergence rates. In the machine learning setting, [26] and the later work of [27] have provided additional insights into these types of methods. However, these papers study the case of smooth objective functions of a sum structure, and so do not directly apply to the general case we consider here. In the same spirit, [5] implements SGD with communication-intense updates after each example processed, allowing asynchronous updates again with some delay. For coordinate descent, the analogous approach was studied in [28]. Both methods [5, 28] are H times less efficient in terms of communication when compared to CoCoA, and are designed for multi-core shared-memory machines (where communication is as fast as memory access). They require the same amount of communication as naive distributed SGD / CD, which we include in our experiments in Section 6, and a slightly larger number of iterations due to the asynchronicity. The 1/t convergence rate shown in [5] only holds under strong sparsity assumptions on the data. A more recent paper [29] deepens the understanding of such methods, but still only applies to very sparse data. For general data, [30] theoretically shows that 1/ε² communication rounds of single vectors are enough to obtain ε-quality for linear classifiers, with the rate growing with K² in the number of workers. Our new analysis here makes the dependence on 1/ε logarithmic.
6 Experiments
In this section, we compare C O C OA to traditional mini-batch versions of stochastic dual coordinate
ascent and stochastic gradient descent, as well as the locally-updating version of stochastic gradient
descent. We implement mini-batch SDCA (denoted mini-batch-CD) as described in [4, 2]. The
SGD-based methods are mini-batch and locally-updating versions of Pegasos [16], differing only in
whether the primal vector is updated locally on each inner iteration or not, and whether the resulting
combination/communication of the updates is by an average over the total size KH of the minibatch (mini-batch-SGD) or just over the number of machines K (local-SGD). For each algorithm,
we additionally study the effect of scaling the average by a parameter β_K, as first described in [4],
while noting that it is a benefit to avoid having to tune this data-dependent parameter.
We apply these algorithms to standard hinge-loss ℓ2-regularized support vector machines, using
implementations written in Spark on m1.large Amazon EC2 instances [1]. Though this non-smooth
case is not yet covered in our theoretical analysis, we still see remarkable empirical performance.
Our results indicate that CoCoA is able to converge to .001-accurate solutions nearly 25× as fast
compared to the other algorithms, when all use β_K = 1. The datasets used in these analyses are
summarized in Table 1, and were distributed among K = 4, 8, and 32 nodes, respectively. We use
the same regularization parameters as specified in [16, 17].
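For reference, the quantity plotted below ("log primal suboptimality") can be computed from the hinge-loss objective named above; the use of a precomputed high-precision reference optimum p_star is our assumption for this sketch.

```python
import numpy as np

def primal(w, X, y, lam):
    """P(w) = (lam/2)||w||^2 + (1/n) sum_i max(0, 1 - y_i * w.x_i)."""
    return 0.5 * lam * w.dot(w) + np.mean(np.maximum(0.0, 1.0 - y * (X @ w)))

# log primal suboptimality against a reference optimum p_star obtained by
# running any solver to high precision beforehand:
# log_subopt = np.log10(primal(w, X, y, lam) - p_star)
```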
Table 1: Datasets for Empirical Study

Dataset    Training (n)   Features (d)   Sparsity   λ      Workers (K)
cov        522,911        54             22.22%     1e-6   4
rcv1       677,399        47,236         0.16%      1e-6   8
imagenet   32,751         160,000        100%       1e-5   32
In comparing each algorithm and dataset, we analyze progress in primal objective value as a function
of both time (Figure 1) and communication (Figure 2). For all competing methods, we present the
result for the batch size (H) that yields the best performance in terms of reduction in objective
value over time. For the locally-updating methods (CoCoA and local-SGD), these tend to be larger
batch sizes corresponding to processing almost all of the local data at each outer step. For the
non-locally updating mini-batch methods (mini-batch SDCA [4] and mini-batch SGD [16]), these
typically correspond to smaller values of H, as averaging the solutions to guarantee safe convergence
becomes less of an impediment for smaller batch sizes.
[Figure 1 plots: three panels (Cov, RCV1, Imagenet) showing log primal suboptimality vs. time (s) for COCOA, mini-batch-CD, local-SGD, and mini-batch-SGD at their best batch sizes H.]
Figure 1: Primal Suboptimality vs. Time for Best Mini-Batch Sizes (H): For β_K = 1, CoCoA converges
more quickly than all other algorithms, even when accounting for different batch sizes.
[Figure 2 plots: three panels (Cov, RCV1, Imagenet) showing log primal suboptimality vs. number of communicated vectors for the same methods and batch sizes as in Figure 1.]
Figure 2: Primal Suboptimality vs. # of Communicated Vectors for Best Mini-Batch Sizes (H): A clear
correlation is evident between the number of communicated vectors and wall-time to convergence (Figure 1).
First, we note that there is a clear correlation between the wall-time spent processing each dataset
and the number of vectors communicated, indicating that communication has a significant effect on
convergence speed. We see clearly that CoCoA is able to converge to a more accurate solution in all
datasets much faster than the other methods. On average, CoCoA reaches a .001-accurate solution
for these datasets 25× faster than the best competitor. This is a testament to the algorithm's ability
to avoid communication while still making significant global progress by efficiently combining the
local updates of each iteration. The improvements are robust for both regimes n ≫ d and n ≪ d.
[Figure 3 plot: log primal suboptimality vs. time (s) on Cov for CoCoA with H = 1e5, 1e4, 1e3, 100, and 1.]

Figure 3: Effect of H on CoCoA.

[Figure 4 plots: log primal suboptimality vs. time (s) on Cov comparing CoCoA (β_K = 1), mini-batch-CD (β_K = 10 and 100), local-SGD (β_K = 1), and mini-batch-SGD (β_K = 10 and 1), for H = 1e5 and H = 100.]

Figure 4: Best β_K Scaling Values for H = 1e5 and H = 100.
In Figure 3 we explore the effect of H, the computation-communication trade-off factor, on the convergence of CoCoA for the Cov dataset on a cluster of 4 nodes. As described above, increasing H
decreases communication but also affects the convergence properties of the algorithm. In Figure 4,
we attempt to scale the averaging step of each algorithm by using various β_K values, for two different batch sizes on the Cov dataset (H = 1e5 and H = 100). We see that though β_K has a larger
impact on the smaller batch size, it is still not enough to improve the mini-batch algorithms beyond
what is achieved by CoCoA and local-SGD.
7 Conclusion
We have presented a communication-efficient framework for distributed dual coordinate ascent algorithms that can be used to solve large-scale regularized loss minimization problems. This is crucial
in settings where datasets must be distributed across multiple machines, and where communication
amongst nodes is costly. We have shown that the proposed algorithm performs competitively on
real-world, large-scale distributed datasets, and have presented the first theoretical analysis of this
algorithm that achieves competitive convergence rates without making additional assumptions on
the data itself.
It remains open to obtain improved convergence rates for more aggressive updates corresponding
to β_K > 1, which might be suitable for using the "safe" update techniques of [4] and the related
expected separable over-approximations of [18, 19], here applied to K instead of n blocks. Furthermore, it remains open to show convergence rates for local SGD in the same communication efficient
setting as described here.
Acknowledgments. We thank Shivaram Venkataraman, Ameet Talwalkar, and Peter Richtárik for
fruitful discussions. MJ acknowledges support by the Simons Institute for the Theory of Computing.
References
[1] Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Murphy McCauley, Michael J Franklin, Scott Shenker, and Ion Stoica. Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing. NSDI, 2012.
[2] Tianbao Yang. Trading Computation for Communication: Distributed Stochastic Dual Coordinate Ascent. NIPS, 2013.
[3] Tianbao Yang, Shenghuo Zhu, Rong Jin, and Yuanqing Lin. On Theoretical Analysis of Distributed Stochastic Dual Coordinate Ascent. arXiv:1312.1031, December 2013.
[4] Martin Takáč, Avleen Bijral, Peter Richtárik, and Nathan Srebro. Mini-Batch Primal and Dual Methods for SVMs. ICML, 2013.
[5] Feng Niu, Benjamin Recht, Christopher Ré, and Stephen J Wright. Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent. NIPS, 2011.
[6] Martin A Zinkevich, Markus Weimer, Alex J Smola, and Lihong Li. Parallelized Stochastic Gradient Descent. NIPS 23, 2010.
[7] Joseph K Bradley, Aapo Kyrola, Danny Bickson, and Carlos Guestrin. Parallel Coordinate Descent for L1-Regularized Loss Minimization. ICML, 2011.
[8] Jakub Mareček, Peter Richtárik, and Martin Takáč. Distributed Block Coordinate Descent for Minimizing Partially Separable Functions. arXiv:1408.2467, June 2014.
[9] Ion Necoara and Dragos Clipici. Efficient parallel coordinate descent algorithm for convex optimization problems with separable constraints: Application to distributed MPC. Journal of Process Control, 23(3):243-253, 2013.
[10] Martin Takáč, Peter Richtárik, and Nathan Srebro. Primal-Dual Parallel Coordinate Descent for Machine Learning Optimization. Manuscript, 2014.
[11] Hsiang-Fu Yu, Cho-Jui Hsieh, Kai-Wei Chang, and Chih-Jen Lin. Large linear classification when data cannot fit in memory. The 16th ACM SIGKDD International Conference, page 833, 2010.
[12] Hsiang-Fu Yu, Cho-Jui Hsieh, Kai-Wei Chang, and Chih-Jen Lin. Large Linear Classification When Data Cannot Fit in Memory. ACM Transactions on Knowledge Discovery from Data, 5(4):1-23, 2012.
[13] Shai Shalev-Shwartz and Tong Zhang. Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization. JMLR, 14:567-599, 2013.
[14] Herbert Robbins and Sutton Monro. A Stochastic Approximation Method. The Annals of Mathematical Statistics, 22(3):400-407, 1951.
[15] Léon Bottou. Large-Scale Machine Learning with Stochastic Gradient Descent. COMPSTAT'2010 Proceedings of the 19th International Conference on Computational Statistics, pages 177-187, 2010.
[16] Shai Shalev-Shwartz, Yoram Singer, Nathan Srebro, and Andrew Cotter. Pegasos: Primal Estimated Sub-Gradient Solver for SVM. Mathematical Programming, 127(1):3-30, 2010.
[17] Cho-Jui Hsieh, Kai-Wei Chang, Chih-Jen Lin, S Sathiya Keerthi, and S Sundararajan. A Dual Coordinate Descent Method for Large-scale Linear SVM. ICML, 2008.
[18] Peter Richtárik and Martin Takáč. Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Mathematical Programming, 144(1-2):1-38, April 2014.
[19] Peter Richtárik and Martin Takáč. Parallel Coordinate Descent Methods for Big Data Optimization. arXiv:1212.0873, 2012.
[20] Peter Richtárik and Martin Takáč. Distributed Coordinate Descent Method for Learning with Big Data. arXiv:1310.2059, 2013.
[21] Olivier Fercoq, Zheng Qu, Peter Richtárik, and Martin Takáč. Fast Distributed Coordinate Descent for Non-Strongly Convex Losses. IEEE Workshop on Machine Learning for Signal Processing, May 2014.
[22] Yuchen Zhang, John C Duchi, and Martin J Wainwright. Communication-Efficient Algorithms for Statistical Optimization. JMLR, 14:3321-3363, November 2013.
[23] Gideon Mann, Ryan McDonald, Mehryar Mohri, Nathan Silberman, and Daniel D Walker. Efficient Large-Scale Distributed Training of Conditional Maximum Entropy Models. NIPS, 1231-1239, 2009.
[24] Ohad Shamir, Nathan Srebro, and Tong Zhang. Communication-Efficient Distributed Optimization using an Approximate Newton-type Method. ICML, 32(1):1000-1008, 2014.
[25] John N Tsitsiklis, Dimitri P Bertsekas, and Michael Athans. Distributed asynchronous deterministic and stochastic gradient optimization algorithms. IEEE Trans. on Automatic Control, 31(9):803-812, 1986.
[26] Ofer Dekel, Ran Gilad-Bachrach, Ohad Shamir, and Lin Xiao. Optimal Distributed Online Prediction Using Mini-Batches. JMLR, 13:165-202, 2012.
[27] Alekh Agarwal and John C Duchi. Distributed Delayed Stochastic Optimization. NIPS, 873-881, 2011.
[28] Ji Liu, Stephen J Wright, Christopher Ré, Victor Bittorf, and Srikrishna Sridhar. An Asynchronous Parallel Stochastic Coordinate Descent Algorithm. ICML, 2014.
[29] John C Duchi, Michael I Jordan, and H Brendan McMahan. Estimation, Optimization, and Parallelism when Data is Sparse. NIPS, 2013.
[30] Maria-Florina Balcan, Avrim Blum, Shai Fine, and Yishay Mansour. Distributed Learning, Communication Complexity and Privacy. COLT, 23:26.1-26.22, 2012.
DISCOVERING STRUCTURE FROM MOTION IN
MONKEY, MAN AND MACHINE
Ralph M. Siegel*
The Salk Institute of Biology, La Jolla, Ca. 92037
ABSTRACT
The ability to obtain three-dimensional structure from visual motion is
important for survival of human and non-human primates. Using a parallel processing model, the current work explores how the biological visual system might solve
this problem and how the neurophysiologist might go about understanding the
solution.
INTRODUCTION
Psychophysical experiments have shown that monkey and man are equally adept at obtaining three-dimensional structure from motion1. In the present work,
adept at obtaining three dimensional structure from motion . In the present work,
much effort has been expended mimicking the visual system. This was done for one
main reason: the model was designed to help direct physiological experiments in the
primate. It was hoped that if an approach for understanding the model could be
developed, the approach could then be directed at the primate's visual system.
Early in this century, von Helmholtz2 described the problem of extracting
three-dimensional structure from motion:
Suppose, for instance, that a person is standing still in a thick woods,
where it is impossible for him to distinguish, except vaguely and roughly,
in the mass of foliage and branches all around him what belongs to one
tree and what to another, or how far apart the separate trees are, etc. But
the moment he begins to move forward, everything disentangles itself,
and immediately he gets an apperception of the material content of the
woods and their relation to each other in space, just as if he were looking
at a good stereoscopic view of it.
If the object moves, rather than the observer, the perception of three-dimensional structure from motion is still obtained. Object-centered structure from
motion is examined in this report. Lesion studies in monkey have demonstrated that
two extra-striate visual cortices called the middle temporal area (abbreviated MT
*Current address: Laboratory of Neurobiology, The Rockefeller University, 1230
York Avenue, New York, NY 10021
© American Institute of Physics 1988
or V5) and the medial superior temporal area (MST)3,4 are involved in obtaining
structure from motion. The present model is meant to mimic the V5-MST part of
the cortical circuitry involved in obtaining structure from motion. The model
attempts to determine if the visual image corresponds to a three-dimensional object.
THE STRUCTURE FROM MOTION STIMULUS
The problem that the model solved was the same as that posed in the studies
of monkey and man1. Structured and unstructured motion displays of a hollow,
orthographically projected cylinder were computed (Figure 1). The cylinder rotates
about its vertical axis. The unstructured stimulus was generated by shuffling the
velocity vectors randomly on the display screen. The overall velocity and spatial
distribution for the two displays are identical; only the spatial relationships have
been changed in the unstructured stimulus. Human subjects report that the points
are moving on the surface of a hollow cylinder when viewing the structured stimulus.
With the unstructured stimulus, most subjects report that they have no sense of
three-dimensional structure.
[Figure 1 diagram: A. Rotating Cylinder; B. Orthographic Projection; C. Unstructured Display (velocity-vector fields omitted).]
Figure 1. The structured and unstructured motion stimulus. A) "N" points are randomly placed on the surface of a cylinder. B) The points are orthographically projected. The motion gives a strong percept of a hollow cylinder. C) The unstructured stimulus was generated by shuffling the velocity vectors randomly on the screen.
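A minimal sketch of how such a pair of displays could be generated; the rotation rate, radius, and seeds are illustrative assumptions rather than the paper's values.

```python
import numpy as np

def structured_display(n_points=20, radius=1.0, omega_deg=8.0, seed=0):
    """Orthographic projection of points on a rotating cylinder: horizontal
    position x = r*sin(theta), horizontal velocity v = r*omega*cos(theta)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n_points)  # azimuth on the surface
    omega = np.deg2rad(omega_deg)                    # rotation rate (rad/s)
    return radius * np.sin(theta), radius * omega * np.cos(theta)

def unstructured_display(x, v, seed=1):
    """Shuffle the velocity vectors across screen positions: the velocity and
    spatial distributions are unchanged, only their pairing is destroyed."""
    return x, np.random.default_rng(seed).permutation(v)
```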
FUNCTIONAL ARCHITECTURE OF THE MODEL
As with the primate subjects, the model was required to only indicate whether
or not the display was structured. Subjects were not required to describe the shape,
velocity or size of the cylinder. Thus the output cell* of the model signaled "1" if
*By cell, I mean a processing unit of the model which may correspond to a single
neuron or group of neurons. The term neuron refers only to the actual wetware
in the brain.
structured and "0" if not structured. This output layer corresponds to the cortical
area MST of the macaque monkey, which appears to be sensitive to the global organization of the motion image5. It is not known if MST neurons will distinguish between
structured and unstructured images.
The input to the model was based on physiological studies in the macaque monkey. Neurons in area V5 have a retinotopic representation of visual space6,7. For each retinotopic location there is an encoding of a wide range of velocities8. Thus in the model's input representation, there were cells that represent different combinations of velocity and retinotopic spatial position. Furthermore, motion velocity neurons in V5 have a center-surround opponent organization9. The width of the receptive fields was taken from the data of Albright et al.8. A typical receptive field of the model is shown in Figure 2.

[Figure 2 plot: response as a function of retinal position (deg), from -3 to 3.]

Figure 2. The receptive field of an input layer cell. The optimal velocity is "vo".
It was possible to determine what the activity of the input cells would be for the rotating cylinder given this representation. The activation pattern of the set of input cells was computed by convolving the velocity points with the difference of Gaussians. The activity of the 100 input cells for an image of 20 points, with an angular velocity of 8°/sec, is presented in Figure 3.
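A sketch of that computation follows; the Gaussian widths and the surround weight are placeholders for the Albright et al. values used in the actual model.

```python
import numpy as np

def dog(d, sigma_c=0.5, sigma_s=1.5, k=0.5):
    """Difference of Gaussians: excitatory center minus inhibitory surround."""
    return np.exp(-d**2 / (2 * sigma_c**2)) - k * np.exp(-d**2 / (2 * sigma_s**2))

def input_activity(x, v, cell_pos, cell_vel):
    """Activity of one position- and velocity-tuned input cell: the joint DoG
    response summed over all moving points (x_i, v_i) in the display."""
    return float(np.sum(dog(x - cell_pos) * dog(v - cell_vel)))

# 100 input cells tiling position (deg) and speed (deg/sec):
positions = np.linspace(-3.0, 3.0, 10)
velocities = np.linspace(-30.0, 30.0, 10)
# activity = [[input_activity(x, v, p, s) for p in positions] for s in velocities]
```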
[Figure 3 plots: two retinotopic maps (velocity vs. position), one for the structured display (Structure = 1) and one for the unstructured display (Structure = 0).]
Figure 3. The input cell's activation pattern for a structured and unstructured stimulus. The circles correspond to the cells of the input layer. The contours were computed using a linear interpolation between the individual cells. The horizontal axis
corresponds to the position along the horizontal meridian. The vertical axis corresponds to the speed along the horizontal meridian. Thus activation of a cell in the upper right hand corner of the graph corresponds to a velocity of 30°/sec towards the right at a location of 3° to the right along the horizontal meridian.
Inspection of this input pattern suggested that the problem of detecting
three-dimensional structure from motion may be reduced to a pattern recognition
task. The problem was then: "Given a sparsely sampled input motion flow field, determine whether it corresponds best to a structured or unstructured object."
It was next necessary to determine the connections between the two input and
output layers such that the model will be able to correctly signal structure or no structure (1 or 0) over a wide range of cylinder radii and rotational velocities. A parallel
distributed network of the type used by Rosenberg and Sejnowski10 provided the
functional architecture (Figure 4).
[Figure 4 diagram: output unit (O) above middle layer (M) above input layer (I).]
Figure 4. The parallel architecture used to extract structure
from motion. The input layer (I), corresponding to area V5,
mapped the position and speed along the horizontal axis.
The output layer (O) corresponded to area MST that, it is
proposed, signals structure or not. The middle layer (M)
may exist in either V5 or MST.
The input layer of cells was fully connected to the middle layer of cells. The
middle layer of cells represented an intermediate stage of processing and may be in
either V5 or MST. All of the cells of the middle layer were then fully connected to
the output cell. The inputs from cells of the lower layer to the next higher level were
summed linearly and then "thresholded" using the Hill equation x³/(x³ + 0.5³).
The weights between the layers were initially chosen between ±0.1. The values of the
weights were then adjusted using back-propagation methods (steepest descent) so
that the network would "learn" to correctly predict the structure of the input image.
The model learned to correctly perform the task after about 10,000 iterations
(Figure 5).
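A sketch of the resulting forward pass; clamping negative summed inputs to zero before the Hill function is our assumption, since the text does not say how negative sums are handled.

```python
import numpy as np

def hill(x):
    """The thresholding nonlinearity x^3 / (x^3 + 0.5^3)."""
    x = np.maximum(x, 0.0)  # assumed handling of negative summed inputs
    return x**3 / (x**3 + 0.5**3)

def forward(inp, W_mid, W_out):
    """Fully connected input -> middle -> output pass with linear summation
    followed by Hill thresholding. W_mid: (n_middle, n_input), W_out:
    (1, n_middle); weights are fit by steepest-descent back-propagation."""
    middle = hill(W_mid @ inp)
    return hill(W_out @ middle)[0]  # near 1 = structured, near 0 = unstructured
```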
Figure 5. The "education" of the network to perform the structure from motion problem. The iteration number is plotted against the mean square error. The error is defined as the difference between the model's prediction and the known structure. The model was trained on a set of structured and unstructured cylinders with a wide range of radii, number of points, and rotational velocities. [Plot: mean square error vs. iteration number, 0 to 40000.]
PSYCHOPHYSICAL PERFORMANCE OF THE MODEL
The model's performance was comparable to that of monkey and man with
respect to fraction of structure and number of points in the display (Figure 6). The
model was indeed performing a global analysis as shown by allowing the model to
view only a portion of the image. Like man and monkey, the model's performance
suffers. Thus it appears that the model's performance was quite similar to known
monkey and human psychophysics.
[Figure 6 plots: model output vs. fraction of structure (0 to 1, left panel) and vs. number of points (32 to 128, right panel), with curves for monkey, man, and machine.]
Figure 6. Psychophysical performance of the model. A. The effect of varying the fraction of structure. As the fraction of structure increases, the model's performance improves. Thirty repetitions were averaged for each value of structure for the model. The fraction of structure is defined as (1-Rs/Rc), where Rs is the radius of shuffling of the motion vectors and Rc is the radius of the cylinder. The human and monkey data are taken from psychophysical studies1.
HOW IS IT DONE?
The model has similar performance to monkey and man. It was next possible
to examine this artificial network in order to obtain hints for studying the biological
system. Following the approach of an electrophysiologist, receptive field maps for
all the cells of the middle and output layers were made by activating individual input
cells. The receptive fields of some middle layer cells are shown in Figure 7. The layout of these maps is quite similar to that of Figure 4. However, now the activity of one
cell in the middle layer is plotted as a function of the location and speed of a motion
stimulus in the input layer. One could imagine that an electrode was placed in one
of the cells of the middle layer while the experimentalist moved a bar about the
horizontal meridian with different locations and speeds. The activity of the cell is
then plotted as a function of position and space.
[Figure 7 plots: two retinotopic maps (speed vs. horizontal position) for two middle-layer cells; dotted contours mark inhibition.]
Figure 7. The activity of two different cells in the middle layer. Activity is plotted
as a contour map as a function of horizontal position and speed. Dotted lines
indicate inhibition.
These middle layer receptive field maps were interesting because they
appear to be quite simple and symmetrical. In some, the inhibitory central regions
of the receptive field were surrounded by excitatory regions (Figure 7A). Complementary cells were also found. In others, there are inhibitory bands adjacent to
excitatory bands (Figure 7B). The above results suggest that neurons involved in
extracting structure from motion may have relatively simple receptive fields in the
spatial velocity domain. These receptive fields might be thought of as breaking the
image down into component parts (i.e. a basis set). Correct recombination of these
second order cells could then be used to detect the presence of a three-dimensional
structure.
The output cell also had a simple receptive field again with interesting
symmetries (Figure 8). However, the receptive field analysis is insufficient to
indicate the role of the cell. Therefore in order to properly understand the "meaning" of the cell's receptive field, it is necessary to use
stimuli that are "real world relevant" - in this case the
structure from motion stimuli. The output cell would
give its maximal response only when a cylinder stimulus
is presented.
Figure 8. The receptive field map of the output layer cell.
Nothing about this receptive field structure indicates the
cell is involved in obtaining structure from motion.
This work predicts that neurons in cortex involved in extracting structure
from motion will have relatively simple receptive fields. In order to test this
hypothesis, it will be necessary to make careful maps of these cells using small
patches of motion (Figure 9). Known qualitative results in areas V5 and MST are
consistent with, but do not prove, this hypothesis. As well, it will be necessary to use
"relevant" stimuli (e.g. three-dimensional objects). If such simple receptive fields
are indeed used in structure from motion, then support will be found for the idea that
a simple cortical circuit (e.g. center-surround) can be used for many different visual
analyses.
[Figure 9 diagram: motion patches consisting of random dots with variable velocity, arranged around a fixation point.]
Figure 9. It may be necessary to make careful
maps of these neurons using small patches of
motion, in order to observe the postulated simple
receptive field properties of cortical neurons involved in extracting structure from
motion. Such structures may not be apparent using hand moved bar stimuli.
DISCUSSION
In conclusion, it is possible to extract the three-dimensional structure of a
rotating cylinder using a parallel network based on a similar functional architecture
as found in primate cortex. The present model has similar psychophysics to monkey
and man. The receptive field structures that underlie the present model are simple
when viewed using a spatial-velocity representation. It is suggested that in order to
understand how the visual system extracts structure from motion, quantitative
spatial-velocity maps of cortical neurons involved need to be made. One also needs
to use stimuli derived from the "real world" in order to understand how they may
be used in visual field analysis. There are similarities between the shapes of the
receptive fields involved in analyzing structure from motion and receptive fields in
striate cortex 11. It may be that similar cortical mechanisms and connections are used
to perform different functions in different cortical areas. Lastly, this model demonstrates that the use of parallel architectures that are closely modeled on the cortical
representation is a computationally efficient means to solve problems in vision. Thus
as a final caveat, I would like to advise the creators of networks that solve
ethologically realistic problems to use solutions that evolution has provided.
REFERENCES
1. R.M. Siegel and R.A. Andersen, Nature (Lond.) (1988).
2. H. von Helmholtz, Treatise on Physiological Optics (Dover Publications, N.Y., 1910), p. 297.
3. R.M. Siegel and R.A. Andersen, Soc. Neurosci. Abstr., 12, p. 1183 (1986).
4. R.M. Siegel and R.A. Andersen, Localization of function in extra-striate cortex: the effect of ibotenic acid lesions on motion sensitivity in Rhesus monkey (in preparation).
5. K. Tanaka, K. Hikosaka, H. Saito, M. Yukie, Y. Fukada, and E. Iwai, J. Neurosci., 6, pp. 134-144 (1986).
6. S.M. Zeki, Brain Res., 35, pp. 528-532 (1971).
7. J.H.R. Maunsell and D.C. VanEssen, J. Neurophysiol., 49, pp. 1127-1147 (1983).
8. T.D. Albright, R. Desimone, and C.G. Gross, J. Neurophysiol., 51, pp. 16-31 (1984).
9. J. Allman, F. Miezin, and E. McGuinness, Ann. Rev. Neurosci., 8, pp. 407-430 (1985).
10. C.R. Rosenberg and T.J. Sejnowski, in: Reports of the Cognitive Neuropsychology Laboratory, Johns Hopkins University (1986).
11. D.H. Hubel and T.N. Wiesel, Proc. R. Soc. Lond. B., 198, pp. 1-59 (1977).
This work was supported by the Salk Institute for Biological Studies, The San Diego
Supercomputer Center, and PHS NS07457-02.
Induction of Finite-State Automata Using
Second-Order Recurrent Networks
Raymond L. Watrous
Siemens Corporate Research
755 College Road East, Princeton, NJ 08540
Gary M. Kuhn
Center for Communications Research, IDA
Thanet Road, Princeton, NJ 08540
Abstract
Second-order recurrent networks that recognize simple finite state languages over {0,1}* are induced from positive and negative examples. Using the complete gradient of the recurrent network and sufficient training
examples to constrain the definition of the language to be induced, solutions are obtained that correctly recognize strings of arbitrary length. A
method for extracting a finite state automaton corresponding to an optimized network is demonstrated.
1 Introduction
We address the problem of inducing languages from examples by considering a set of
finite state languages over {0,1}* that were selected for study by Tomita (Tomita,
1982):
L1. 1*
L2. (10)*
L3. no odd-length 0-string anywhere after an odd-length 1-string
L4. not more than 2 0's in a row
L5. bit pairs, #01's + #10's = 0 mod 2
L6. abs(#1's - #0's) = 0 mod 3
L7. 0*1*0*1*
Tomita also selected for each language a set of positive and negative examples
(summarized in Table 1) to be used as a training set. By a method of heuristic
search over the space of finite state automata with up to eight states, he was able
to induce a recognizer for each of these languages (Tomita, 1982).
Recognizers of finite-state languages have also been induced using first-order recurrent connectionist networks (Elman, 1990; Williams and Zipser, 1988; Cleeremans, Servan-Schreiber and McClelland, 1989). Generally speaking, these results
were obtained by training the network to predict the next symbol (Cleeremans,
Servan-Schreiber and McClelland, 1989; Williams and Zipser, 1988), rather than
by training the network to accept or reject strings of different lengths. Several
training algorithms used an approximation to the gradient (Elman, 1990; Cleeremans, Servan-Schreiber and McClelland, 1989) by truncating the computation of
the backward recurrence.
The problem of inducing languages from examples has also been approached using
second-order recurrent networks (Pollack, 1990; Giles et al., 1990). Using a truncated approximation to the gradient, and Tomita's training sets, Pollack reported
that "none of the ideal languages were induced" (Pollack, 1990). On the other hand,
a Tomita language has been induced using the complete gradient (Giles et al., 1991).
This paper reports the induction of several Tomita languages and the extraction of
the corresponding automata with certain differences in method from (Giles et al.,
1991).
2 Method

2.1 Architecture
The network model consists of one input unit, one threshold unit, N state units and
one output unit. The output unit and each state unit receive a first order connection
from the input unit and the threshold unit. In addition, each of the output and state
units receives a second-order connection for each pairing of the input and threshold
unit with each of the state units. For N = 3, the model is mathematically identical
to that used by Pollack (Pollack, 1990); it has 32 free parameters.
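A sketch of one time step of this architecture for N = 3 (four units, each with 2 first-order plus 2 x 3 second-order weights, giving the 32 free parameters noted above). The logistic squashing function and the choice not to feed the output unit back into the state are assumptions; the symbol values follow Section 2.2 below.

```python
import numpy as np

ZERO, ONE, END = 0o40 / 255, 0o370 / 255, 0.0  # symbol encoding (Section 2.2)

def step(s, x, W1, W2):
    """One time step. s: state vector (N,); x: scalar input value;
    u = (input, threshold unit fixed at 1.0). W1: (N+1, 2) first-order
    weights, W2: (N+1, 2, N) second-order weights; row N is the output."""
    u = np.array([x, 1.0])
    pre = W1 @ u + np.einsum('kij,i,j->k', W2, u, s)
    act = 1.0 / (1.0 + np.exp(-pre))  # squashing function assumed
    return act[:-1], act[-1]

def run(symbol_values, W1, W2, N=3):
    """Process one string; the initial state is all zeros, and two occurrences
    of the termination symbol are appended, as described in the text."""
    s, out = np.zeros(N), 0.0
    for x in list(symbol_values) + [END, END]:
        s, out = step(s, x, W1, W2)
    return out  # compared against the 0.9 / 0.1 targets after training
```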
2.2 Data Representation
The symbols of the language are represented by byte values, that are mapped into
real values between 0 and 1 by dividing by 255. Thus, the ZERO symbol is represented by octal 040 (0.1255). This value was chosen to be different from 0.0, which
is used as the initial condition for all units except the threshold unit, which is set to
1.0. The ONE symbol was chosen as octal 370 (0.97255). All strings are terminated
by two occurrences of a termination symbol that has the value 0.0.
          Grammatical Strings                 Ungrammatical Strings
          Length <= 10      Longer Strings    Length <= 10      Longer Strings
Language  Total  In Train.  in Training Set   Total  In Train.  in Training Set
1         11     9          0                 2036   8          0
2         6      5          1                 2041   10         0
3         652    11         2                 1395   11         1
4         1103   10         1                 944    7          2
5         683    9          0                 1364   11         1
6         683    10         0                 1364   11         1
7         561    11         2                 1486   6          2

Table 1: Number of grammatical and ungrammatical strings of length 10 or less for Tomita languages and number of those included in the Tomita training sets.
2.3 Training
The Tomita languages are characterized in Table 1 by the number of grammatical
strings of length 10 or less (out of a total of 2047 strings). The Tomita training
sets are also characterized by the number of grammatical strings of length 10 or
less included in the training data. For completeness, the Table also shows the
number of grammatical strings in the training set of length greater than 10. A
comparison of the number of grammatical strings with the number included in the
training set shows that while Languages 1 and 2 are very sparse, they are almost
completely covered by the training data, whereas Languages 3-7 are more dense, and
are sparsely covered by the training sets. Possible consequences of these differences
are considered in discussing the experimental results.
A mean-squared error measure was defined with target values of 0.9 and 0.1 for
accept and reject, respectively. The target function was weighted so that error was
injected only at the end of the string.
The complete gradient of this error measure for the recurrent network was computed
by a method of accumulating the weight dependencies backward in time (Watrous,
Ladendorf and Kuhn, 1990). This is in contrast to the truncated gradient used
by Pollack (Pollack, 1990) and to the forward-propagation algorithm used by Giles
(Giles et al., 1991).
The networks were optimized by gradient descent using the BFGS algorithm. A
termination criterion of 10^-10 was set; it was believed that such a strict tolerance
might lead to smaller loss of accuracy on very long strings. No constraints were set
on the number of iterations.
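The error measure itself can be transcribed directly (a sketch; the BFGS optimization over the complete gradient is not shown):

```python
import numpy as np

def string_mse(outputs, accept):
    """Weighted MSE with targets 0.9 (accept) and 0.1 (reject); the weighting
    zeroes the error at every time step except the end of the string."""
    target = 0.9 if accept else 0.1
    w = np.zeros(len(outputs))
    w[-1] = 1.0
    return float(np.sum(w * (np.asarray(outputs) - target) ** 2))
```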
Five networks with different sets of random initial weights were trained separately
on each of the seven languages described by Tomita using exactly his training sets
(Tomita, 1982), including the null string. The training set used by Pollack (Pollack,
1990) differs only in not including the null string.
2.4
Testing
The networks were tested on the complete set of strings up to length 10. Acceptance
of a string was defined as the network having a final output value of greater than
0.9 - T and rejection as a final value of less than 0.1 + T, where 0 < T < 0.4 is the tolerance.
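This acceptance criterion amounts to the following decision rule (sketch):

```python
def decide(final_output, tol=0.0):
    """Accept / reject / ambiguous at tolerance 0 <= tol < 0.4; the tables
    below report uncertainty at tol = 0.0."""
    if final_output > 0.9 - tol:
        return "accept"
    if final_output < 0.1 + tol:
        return "reject"
    return "ambiguous"
```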
3 Results
The results of the first experiment are summarized in Table 2. For each language,
each network is listed by the seed value used to initialize the random weights. For
each network, the iterations to termination are listed, followed by the minimum
MSE value reached. Also listed is the percentage of strings of length 10 or less that
were correctly recognized by the network, and the percentage of strings for which
the decision was uncertain at a tolerance of 0.0.
The number of iterations until termination varied widely, from 28 to 37909. There
is no obvious correlation between number of iterations and minimum MSE.
3.1 Language 1
It may be observed that Language 1 is recognized correctly by two of the networks
(seeds 72 and 987235) and nearly correctly by a third (seed 239). This latter network
failed on the strings 1^9 and 1^10, both of which were not in the training set.
The network of seed 72 was further tested on all strings of length 15 or less and
made no errors. This network was also tested on a string of 100 ones and showed no
diminution of output value over the length of the string. When tested on strings of
99 ones plus either an initial zero or a final zero, the network also made no errors.
Another network, seed 987235, made no errors on strings of length 15 or less but
failed on the string of 100 ones. The hidden units broke into oscillation after about
the 30th input symbol and the output fell into a low amplitude oscillation near zero.
3.2 Language 2
Similarly, Language 2 was recognized correctly by two networks (seeds 89340 and
987235) and nearly correctly by a third network (seed 104). The latter network
failed only on strings of the form (10)*010, none of which were included in the
training data.
The networks that performed perfectly on strings up to length 10 were tested further
on all strings up to length 15 and made no errors. These networks were also tested
on a string of 100 alternations of 1 and 0, and responded correctly. Changing the
first or final zero to a one caused both networks correctly to reject the string.
3.3 The Other Languages
For most of the other languages, at least one network converged to a very low
MSE value. However, networks that performed perfectly on the training set did
not generalize well to a definition of the language. For example, for Language 3,
the network with seed 104 reached a MSE of 8 x 10^-10 at termination, yet the
performance on the test set was only 78.31%. One interpretation of this outcome
is that the intended language was not sufficiently constrained by the training set.
Language  Seed    Iterations  MSE           Accuracy  Uncertainty
1         72      28          0.0012500000  100.00    0.00
          104     95          0.0215882357  78.07     20.76
          239     8707        0.0005882353  99.90     0.00
          89340   5345        0.0266176471  66.93     0.00
          987235  994         0.0000000001  100.00    0.00
2         72      5935        0.0005468750  93.36     4.93
          104     4081        0.0003906250  99.80     0.20
          239     807         0.0476171875  62.73     37.27
          89340   1084        0.0005468750  100.00    0.00
          987235  1("06       0.0001562500  100.00    0.00
3         72      442         0.0149000000  47.09     33.27
          104     37909       0.0000000008  78.31     0.15
          239     9264        0.0087000000  74.60     11.87
          89340   8250        0.0005000000  73.57     0.00
          987235  5769        0.0136136712  50.76     23.94
4         72      8630        0.0004375001  52.71     6.45
          104     60          0.0624326924  20.86     50.02
          239     2272        0.0005000004  55.40     9.38
          89340   10680       0.0003750001  60.92     15.53
          987235  324         0.0459375000  22.62     77.38
5         72      890         0.0526912920  34.39     63.80
          104     368         0.0464772727  45.92     41.62
          239     1422        0.0487500000  31.46     36.93
          89340   2775        0.0271525856  46.12     22.52
          987235  2481        0.0209090867  66.83     2.49
6         72      524         0.0788760972  0.05      99.95
          104     332         0.0789530751  0.05      99.95
          239     1355        0.0229551248  31.95     47.04
          89340   8171        0.0001733280  46.21     5.32
          987235  306         0.0577867426  37.71     24.87
7         72      373         0.0588385157  9.38      86.08
          104     8578        0.0104224185  55.74     17.00
          239     969         0.0211073814  52.76     26.58
          89340   4259        0.0007684520  54.42     0.49
          987235  666         0.0688690476  12.55     74.94

Table 2: Results of Training Three State-Unit Network from 5 Random Starts on Tomita Languages Using Tomita Training Data
In the case of Language 5, in no case was the MSE reduced below 0.02. We believe
that the model is sufficiently powerful to compute the language. It is possible,
however, that the power of the model is marginally sufficient, so that finding a
solution depends critically upon the initial conditions.
Seed    Iterations  MSE           Accuracy  Uncertainty
72      215         0.0000001022  100.00    0.00
104     665         0.0000000001  99.85     0.05
239     205         0.0000000001  99.90     0.10
89340   5244        0.0005731708  99.32     0.10
987235  2589        0.0004624581  92.13     6.55

Table 3: Results of Training Three State-Unit Network from 5 Random Starts on Tomita Language 4 Using Probabilistic Training Data (p = 0.1)
4 Further Experiments
The effect of additional training data was investigated by creating training sets in
which each string of length 10 or less is randomly included with a fixed probability p. Thus, for p = 0.1, approximately 10% of the 2047 strings are included in the training set. A flat random sampling of the lexicographic domain may not be the best approach, however, since grammaticality can vary non-uniformly.
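A sketch of this sampling scheme, using Language 4 (not more than two 0's in a row) as the membership test:

```python
import random

def l4_member(s):
    """Tomita language L4: not more than two 0's in a row."""
    return "000" not in s

def probabilistic_training_set(p=0.1, max_len=10, seed=0):
    """Include each of the 2047 strings of length <= max_len (counting the
    null string) with probability p, labeled by language membership."""
    rng = random.Random(seed)
    data = [("", l4_member(""))] if rng.random() < p else []
    for n in range(1, max_len + 1):
        for i in range(2 ** n):
            s = format(i, "0{}b".format(n))
            if rng.random() < p:
                data.append((s, l4_member(s)))
    return data
```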
The same networks as before were trained on the larger training set for Language
4, with the results listed in Table 3.
Under these conditions, a network solution was obtained that generalizes perfectly
to the test set (seed 72). This network also made no errors on strings up to length 15.
However, very low MSE values were again obtained for networks that do not perform
perfectly on the test data (seeds 104 and 239). Network 239 made two ambiguous
decisions that would have been correct at a tolerance value of 0.23. Network 104
incorrectly accepted the strings 000 and 1000 and would have correctly accepted
the string 0100 at a tolerance of 0.25. Both networks made no additional errors
on strings up to length 15. The training data may still be slightly indeterminate.
Moreover, the few errors made were on short strings, that are not included in the
training data.
Since this network model is continuous, and thus potentially infinite state, it is perhaps not surprising that the successful induction of a finite state language seems to
require more training data than was needed for Tomita's finite state model (Tomita,
1982).
The effect of more complex models was investigated for Language 5 using a network
with 11 state units; this increases the number of weights from 32 to 288. Networks
of this type were optimized from 5 random initial conditions on the original training
data. The results of this experiment are summarized in Table 4. By increasing the
complexity of the model, convergence to low MSE values was obtained in every case,
although none of these networks generalized to the desired language. Once again,
it is possible that more data is required to constrain the language sufficiently.
5 FSA Extraction
The following method for extracting a deterministic finite-state automaton corresponding to an optimized network was developed:
Seed    Iterations  MSE           Accuracy  Uncertainty
72      1327        0.0002840909  53.00     11.87
104     680         0.0001136364  39.47     16.32
239     357         0.0006818145  61.31     3.32
89340   122         0.0068189264  63.36     6.64
987235  4502        0.0001704545  48.41     16.95

Table 4: Results of Training Network with 11 State-Units from 5 Random Starts on Tomita Language 5 Using Tomita Training Data
1. Record the response of the network to a set of strings.
2. Compute a zero bin-width histogram for each hidden unit and partition each
histogram so that the intervals between adjacent peaks are bisected.
3. Initialize a state-transition table which is indexed by the current state and
input symbol; then, for each string:
(a) Starting from the NULL state, for each hidden unit activation vector:
i. Obtain the next state label from the concatenation of the histogram
interval number of each hidden unit value.
ii. Record the next state in the state-transition table. If a transition is
recorded from the same state on the same input symbol to two different
states, move or remove hidden unit histogram partitions so that the two
states are collapsed and go to 3; otherwise, update the current state.
(b) At the end of the string, mark the current state as accept, reject or uncertain according as the output unit is ≥ 0.9, ≤ 0.1 or otherwise. If the
current state has already received a different marking, move or insert histogram partitions so that the offending state is subdivided and go to 3.
If the recorded strings are processed successfully, then the resulting state-transition
table may be taken as an FSA interpretation of the optimized network. The FSA
may then be minimized by standard methods (Giles et al., 1991). If no histogram
partition can be found such that the process succeeds, the network may not have a
finite-state interpretation.
As an approximation to Step 3, the hidden unit vector was labeled by the index of
that vector in an initially empty set of reference vectors for which each component
value was within some global threshold (θ) of the hidden unit value. If no such
reference vector was found, the observed vector was added to the reference set. The threshold θ could be raised or lowered as states needed to be collapsed or subdivided.
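A sketch of this approximation; the conflict handling (raising or lowering θ and reprocessing) is omitted.

```python
import numpy as np

def quantize(h, refs, theta):
    """Index of the first stored reference vector whose components are all
    within theta of h; if none matches, h becomes a new reference/state."""
    for k, r in enumerate(refs):
        if np.all(np.abs(h - r) < theta):
            return k
    refs.append(np.asarray(h, dtype=float).copy())
    return len(refs) - 1

def extract_fsa(run_net, strings, theta=0.1):
    """Build a state-transition table from recorded hidden activations.
    run_net(s) is assumed to return (list of state vectors, accept flag).
    'NULL' is the start state; transition or label conflicts would trigger
    a change of theta in the full procedure."""
    refs, trans, labels = [], {}, {}
    for s in strings:
        states, accept = run_net(s)
        current = "NULL"
        for sym, h in zip(s, states):
            nxt = quantize(h, refs, theta)
            trans[(current, sym)] = nxt
            current = nxt
        labels[current] = accept
    return trans, labels
```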
Using the approximate method, for Language 1, the correct and minimal FSA was extracted from one network (seed 72, θ = 0.1). The correct FSA was also extracted from another network (seed 987235, θ = 0.06), although for no partition of the hidden unit activation values could the minimal FSA be extracted. Interestingly, the FSA extracted from the network with seed 239 corresponded to 1^n for n ≤ 8. Also, the FSA for another network (seed 89340, θ = 0.0003) was nearly correct, although the string accuracy was only 67%; one state was wrongly labeled "accept".

For Language 2, the correct and minimal FSA was extracted from one network (seed 987235, θ = 0.00001). A correct FSA was also extracted from another network (seed
89340, θ = 0.0022), although this FSA was not minimal.
For Language 4, a histogram partition was found for one network (seed 72) that
led to the correct and minimal FSA; for the zero-width histogram, the FSA was
correct, but not minimal.
Thus, a correct FSA was extracted from every optimized network that correctly
recognized strings of length 10 or less from the language for which it was trained.
However, in some cases, no histogram partition was found for which the extracted
FSA was minimal. It also appears that an almost-correct FSA can be extracted,
which might perhaps be corrected externally. And, finally, the extracted FSA may
be correct, even though the network might fail on very long strings.
6 Conclusions
We have succeeded in recognizing several simple finite-state languages using second-order recurrent networks and extracting corresponding finite-state automata. We
consider the computation of the complete gradient a key element in this result.
Acknowledgements
We thank Lee Giles for sharing with us their results (Giles et al., 1991).
References
Cleeremans, A., Servan-Schreiber, D., and McClelland, J. (1989). Finite state automata and simple recurrent networks. Neural Computation, 1(3):372-381.
Elman, J. L. (1990). Finding structure in time. Cognitive Science, 14:179-212.
Giles, C. L., Chen, D., Miller, C. B., Chen, H. H., Sun, G. Z., and Lee, Y. C. (1991). Second-order recurrent neural networks for grammatical inference. In Proceedings of the International Joint Conference on Neural Networks, volume II, pages 273-281.
Giles, C. L., Sun, G. Z., Chen, H. H., Lee, Y. C., and Chen, D. (1990). Higher order recurrent networks and grammatical inference. In Touretzky, D. S., editor, Advances in Neural Information Processing Systems 2, pages 380-387. Morgan Kaufmann.
Pollack, J. B. (1990). The induction of dynamical recognizers. Technical Report 90-JP-AUTOMATA, Ohio State University.
Tomita, M. (1982). Dynamic construction of finite automata from examples using hill-climbing. In Proceedings of the Fourth International Cognitive Science Conference, pages 105-108.
Watrous, R. L., Ladendorf, B., and Kuhn, G. M. (1990). Complete gradient optimization of a recurrent network applied to /b/, /d/, /g/ discrimination. Journal of the Acoustical Society of America, 87(3):1301-1309.
Williams, R. J. and Zipser, D. (1988). A learning algorithm for continually running fully recurrent neural networks. Technical Report ICS Report 8805, UCSD Institute for Cognitive Science.
| 560 |@word seems:1 termination:5 offending:1 initial:5 interestingly:1 current:4 ida:1 surprising:1 activation:2 yet:1 partition:7 remove:1 update:1 discrimination:1 selected:2 short:1 record:2 completeness:1 five:1 ladendorf:2 pairing:1 consists:1 elman:3 considering:1 increasing:1 moreover:1 null:3 watrous:7 string:47 developed:1 finding:2 nj:2 every:2 exactly:1 unit:26 continually:1 positive:2 before:1 consequence:1 approximately:1 might:3 plus:1 testing:1 differs:1 reject:4 indeterminate:1 road:2 induce:1 wrongly:1 collapsed:2 accumulating:1 deterministic:1 demonstrated:1 center:1 williams:3 go:2 starting:1 truncating:1 automaton:12 his:1 target:2 construction:1 secondorder:1 element:1 sparsely:1 labeled:2 observed:2 cleeremans:4 sun:2 complexity:1 dynamic:1 trained:3 upon:1 completely:1 joint:1 represented:2 america:1 approached:1 corresponded:1 outcome:1 heuristic:1 widely:1 larger:1 otherwise:3 final:4 fsa:17 inducing:2 convergence:1 empty:1 recurrent:15 odd:2 received:1 dividing:1 kuhn:7 correct:11 broke:1 bin:1 require:1 subdivided:2 mathematically:1 insert:1 sufficiently:3 considered:2 ic:1 seed:20 predict:1 vary:1 recognizer:1 label:1 schreiber:4 successfully:1 weighted:1 lexicographic:1 rather:1 contrast:1 inference:2 accept:4 initially:1 hidden:8 constrained:1 raised:1 initialize:2 once:1 extraction:2 having:1 sampling:1 identical:1 nearly:3 minimized:1 connectionist:1 report:4 few:1 randomly:1 recognize:2 intended:1 ab:1 acceptance:1 succeeded:1 indexed:1 desired:1 pollack:10 minimal:7 uncertain:2 giles:10 servan:4 recognizing:1 successful:1 reported:1 dependency:1 peak:1 international:2 l5:1 probabilistic:1 lee:3 squared:1 again:2 recorded:2 cognitive:3 creating:1 bfgs:1 summarized:3 caused:1 depends:1 grammaticality:1 performed:2 reached:2 start:3 accuracy:5 responded:1 kaufmann:1 miller:1 climbing:1 generalize:1 critically:1 none:3 marginally:1 converged:1 touretzky:1 sharing:1 definition:2 obvious:1 amplitude:1 appears:1 higher:1 response:1 though:1 anywhere:1 until:1 correlation:1 hand:1 receives:1 propagation:1 perhaps:2 believe:1 effect:2 adjacent:1 ll:1 width:2 recurrence:1 ambiguous:2 criterion:1 generalized:1 hill:1 complete:6 l1:1 ohio:1 jp:1 volume:1 he:1 interpretation:3 oflength:2 similarly:1 language:44 l3:1 lowered:1 recognizers:2 longer:2 showed:1 certain:1 discussing:1 alternation:1 morgan:1 minimum:2 greater:2 additional:2 recognized:4 ii:1 corporate:1 technical:2 characterized:2 believed:1 long:2 iteration:7 histogram:9 receive:1 addition:1 whereas:1 separately:1 interval:2 strict:1 fell:1 induced:5 mod:2 extracting:3 zipser:3 near:1 ideal:1 architecture:1 perfectly:4 speaking:1 generally:1 covered:2 listed:4 processed:1 mcclelland:4 reduced:1 percentage:2 correctly:10 key:1 threshold:6 changing:1 backward:2 injected:1 uncertainty:3 powerful:1 fourth:1 almost:2 oscillation:2 decision:3 bit:1 followed:1 constraint:1 constrain:2 flat:1 marking:1 according:1 smaller:1 slightly:1 taken:1 fail:1 needed:2 end:2 generalizes:1 eight:1 occurrence:1 original:1 tomita:21 running:1 l6:1 society:1 move:2 already:1 added:1 gradient:9 thank:1 mapped:1 concatenation:1 seven:1 acoustical:1 induction:7 length:19 index:1 potentially:1 negative:2 perform:1 finite:17 descent:1 truncated:2 incorrectly:1 communication:1 varied:1 ucsd:1 arbitrary:1 pair:1 required:1 optimized:6 connection:2 address:1 able:1 below:1 dynamical:1 including:2 power:1 raymond:1 byte:1 l2:1 acknowledgement:1 loss:1 fully:1 sufficient:2 editor:1 row:1 free:1 institute:1 sparse:1 tolerance:5 
grammatical:8 transition:4 forward:1 made:8 approximate:1 global:1 search:1 continuous:1 table:13 ungrammatical:2 mse:10 investigated:2 complex:1 domain:1 did:1 dense:1 terminated:1 third:2 externally:1 symbol:8 chen:4 rejection:1 led:1 failed:3 gary:1 extracted:10 included:7 infinite:1 except:1 uniformly:1 corrected:1 total:3 accepted:2 experimental:1 siemens:1 succeeds:1 east:1 l4:1 college:1 mark:1 latter:2 princeton:2 tested:6 |
5,082 | 5,600 | Projective dictionary pair learning for pattern
classification
Shuhang Gu^1, Lei Zhang^1, Wangmeng Zuo^2, Xiangchu Feng^3
^1 Dept. of Computing, The Hong Kong Polytechnic University, Hong Kong, China
^2 School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
^3 Dept. of Applied Mathematics, Xidian University, Xi'an, China
{cssgu, cslzhang}@comp.polyu.edu.hk
[email protected], [email protected]
Abstract
Discriminative dictionary learning (DL) has been widely studied in various pattern
classification problems. Most of the existing DL methods aim to learn a synthesis
dictionary to represent the input signal while enforcing the representation coefficients and/or representation residual to be discriminative. However, the ℓ0- or
ℓ1-norm sparsity constraint on the representation coefficients adopted in most DL
methods makes the training and testing phases time consuming. We propose a new
discriminative DL framework, namely projective dictionary pair learning (DPL),
which learns a synthesis dictionary and an analysis dictionary jointly to achieve
the goal of signal representation and discrimination. Compared with conventional DL methods, the proposed DPL method can not only greatly reduce the time
complexity in the training and testing phases, but also lead to very competitive
accuracies in a variety of visual classification tasks.
1 Introduction
Sparse representation represents a signal as the linear combination of a small number of atoms chosen out of a dictionary, and it has achieved a big success in various image processing and computer
vision applications [1, 2]. The dictionary plays an important role in the signal representation process
[3]. By using a predefined analytical dictionary (e.g., wavelet dictionary, Gabor dictionary) to represent a signal, the representation coefficients can be produced by simple inner product operations.
Such a fast and explicit coding makes analytical dictionary very attractive in image representation;
however, it is less effective to model the complex local structures of natural images.
Sparse representation with a synthesis dictionary has been widely studied in recent years [2, 4, 5].
With a synthesis dictionary, the representation coefficients of a signal are usually obtained via an ℓp-norm (p ≤ 1) sparse coding process, which is computationally more expensive than analytical
dictionary based representation. However, synthesis based sparse representation can better model
the complex image local structures and it has led to many state-of-the-art results in image restoration
[6]. Another important advantage lies in that the synthesis based sparse representation model allows
us to easily learn a desired dictionary from the training data. The seminal work of KSVD [1] tells
us that an over-complete dictionary can be learned from example natural images, and it can lead
to much better image reconstruction results than the analytically designed off-the-shelf dictionaries.
Inspired by KSVD, many dictionary learning (DL) methods have been proposed and achieved state-of-the-art performance in image restoration tasks.
The success of DL in image restoration problems triggers its applications in image classification
tasks. Different from image restoration, assigning the correct class label to the test sample is the
goal of classification problems; therefore, the discrimination capability of the learned dictionary is
of the major concern. To this end, supervised dictionary learning methods have been proposed to
promote the discriminative power of the learned dictionary [4, 5, 7, 8, 9]. By encoding the query
sample over the learned dictionary, both the coding coefficients and the coding residual can be used
for classification, depending on the employed DL model. Discriminative DL has led to many state-of-the-art results in pattern recognition problems.
One popular strategy of discriminative DL is to learn a shared dictionary for all classes while enforcing the coding coefficients to be discriminative [4, 5, 7]. A classifier on the coding coefficients
can be trained simultaneously to perform classification. Mairal et al. [7] proposed to learn a dictionary and a corresponding linear classifier in the coding vector space. In the label consistent KSVD
(LC-KSVD) method, Jiang et al. [5] introduced a binary class label sparse code matrix to encourage
samples from the same class to have similar sparse codes. In [4], Mairal et al. proposed a task driven dictionary learning (TDDL) framework, which minimizes different risk functions of the coding
coefficients for different tasks.
Another popular line of research in DL attempts to learn a structured dictionary to promote discrimination between classes [2, 8, 9, 10]. The atoms in the structured dictionary have class labels,
and the class-specific representation residual can be computed for classification. Ramirez et al. [8]
introduced an incoherence promotion term to encourage the sub-dictionaries of different classes
to be independent. Yang et al. [9] proposed a Fisher discrimination dictionary learning (FDDL)
method which applies the Fisher criterion to both representation residual and representation coefficient. Wang et al. [10] proposed a max-margin dictionary learning (MMDL) algorithm from the
large margin perspective.
In most of the existing DL methods, the ℓ0-norm or ℓ1-norm is used to regularize the representation coefficients, since sparser coefficients are more likely to produce better classification results. Hence a sparse coding step is generally involved in the iterative DL process. Although numerous algorithms have been proposed to improve the efficiency of sparse coding [11, 12], the use of ℓ0-norm or ℓ1-norm sparsity regularization is still a big computational burden and makes the training and testing inefficient.
It is interesting to investigate whether we can learn discriminative dictionaries without the costly ℓ0-norm or ℓ1-norm sparsity regularization. In particular, it would be very attractive if the representation coefficients could be obtained by linear projection instead of nonlinear sparse coding. To this
end, in this paper we propose a projective dictionary pair learning (DPL) framework to learn a synthesis dictionary and an analysis dictionary jointly for pattern classification. The analysis dictionary
is trained to generate discriminative codes by efficient linear projection, while the synthesis dictionary is trained to achieve class-specific discriminative reconstruction. The idea of using functions to
predict the representation coefficients is not new, and fast approximate sparse coding methods have
been proposed to train nonlinear functions to generate sparse codes [13, 14]. However, there are
clear difference between our DPL model and these methods. First, in DPL the synthesis dictionary
and analysis dictionary are trained jointly, which ensures that the representation coefficients can be
approximated by a simple linear projection function. Second, DPL utilizes class label information
and promotes discriminative power of the representation codes.
One related work to this paper is the analysis-based sparse representation prior learning [15, 16],
which represents a signal from a dual viewpoint of the commonly used synthesis model. Analysis prior learning tries to learn a group of analysis operators which have sparse responses to the
latent clean signal. Sprechmann et al. [17] proposed to train a group of analysis operators for classification; however, in the testing phase a costly sparsity-constrained optimization problem is still
required. Feng et al. [18] jointly trained a dimensionality reduction transform and a dictionary
for face recognition. The discriminative dictionary is trained in the transformed space, and sparse
coding is needed in both the training and testing phases.
The contribution of our work is two-fold. First, we introduce a new DL framework, which extends
the conventional discriminative synthesis dictionary learning to discriminative synthesis and analysis
dictionary pair learning (DPL). Second, the DPL utilizes an analytical coding mechanism and it
largely improves the efficiency in both the training and testing phases. Our experiments in various
visual classification datasets show that DPL achieves very competitive accuracy with state-of-the-art
DL algorithms, while it is significantly faster in both training and testing.
2 Projective Dictionary Pair Learning
2.1 Discriminative dictionary learning
Denote by X = [X_1, . . . , X_k, . . . , X_K] a set of p-dimensional training samples from K classes, where X_k ∈ ℝ^{p×n} is the training sample set of class k, and n is the number of samples of each class. Discriminative DL methods aim to learn an effective data representation model from X for classification tasks by exploiting the class label information of training data. Most of the state-of-the-art discriminative DL methods [5, 7, 9] can be formulated under the following framework:

min_{D,A} ‖X − DA‖²_F + λ‖A‖_p + Ψ(D, A, Y),   (1)

where λ ≥ 0 is a scalar constant, Y represents the class label matrix of samples in X, D is the synthesis dictionary to be learned, and A is the coding coefficient matrix of X over D. In the training model (1), the data fidelity term ‖X − DA‖²_F ensures the representation ability of D; ‖A‖_p is the ℓp-norm regularizer on A; and Ψ(D, A, Y) stands for some discrimination promotion function, which ensures the discrimination power of D and A.
As we introduced in Section 1, some DL methods [4, 5, 7] learn a shared dictionary for all classes
and a classifier on the coding coefficients simultaneously, while some DL methods [8, 9, 10] learn
a structured dictionary to promote discrimination between classes. However, they all employ an ℓ0- or ℓ1-norm sparsity regularizer on the coding coefficients, making the training stage and the consequent testing stage inefficient.
In this work, we extend the conventional DL model in (1), which learns a discriminative synthesis
dictionary, to a novel DPL model, which learns a pair of synthesis and analysis dictionaries. No
costly ℓ0- or ℓ1-norm sparsity regularizer is required in the proposed DPL model, and the coding
coefficients can be explicitly obtained by linear projection. Fortunately, DPL does not sacrifice the
classification accuracy while achieving significant improvement in the efficiency, as demonstrated
by our extensive experiments in Section 3.
2.2 The dictionary pair learning model
The conventional discriminative DL model in (1) aims to learn a synthesis dictionary D to sparsely represent the signal X, and a costly ℓ1-norm sparse coding process is needed to resolve the code A. Suppose that we can find an analysis dictionary, denoted by P ∈ ℝ^{mK×p}, such that the code A can be analytically obtained as A = PX; then the representation of X would become very efficient. Based on this idea, we propose to learn such an analysis dictionary P together with the synthesis dictionary D, leading to the following DPL model:

{P*, D*} = arg min_{P,D} ‖X − DPX‖²_F + Ψ(D, P, X, Y),   (2)

where Ψ(D, P, X, Y) is some discrimination function. D and P form a dictionary pair: the analysis dictionary P is used to analytically code X, and the synthesis dictionary D is used to reconstruct X. The discrimination power of the DPL model depends on the suitable design of Ψ(D, P, X, Y).
We propose to learn a structured synthesis dictionary D = [D_1, . . . , D_k, . . . , D_K] and a structured analysis dictionary P = [P_1; . . . ; P_k; . . . ; P_K], where {D_k ∈ ℝ^{p×m}, P_k ∈ ℝ^{m×p}} forms a sub-dictionary pair corresponding to class k. Recent studies on sparse subspace clustering [19] have proved that a sample can be represented by its corresponding dictionary if the signals satisfy certain incoherence conditions. With the structured analysis dictionary P, we want the sub-dictionary P_k to project the samples from class i, i ≠ k, to a nearly null space, i.e.,

P_k X_i ≈ 0,  ∀ k ≠ i.   (3)

Clearly, with (3) the coefficient matrix PX will be nearly block diagonal. On the other hand, with the structured synthesis dictionary D, we want the sub-dictionary D_k to well reconstruct the data matrix X_k from its projective code matrix P_k X_k; that is, the dictionary pair should minimize the reconstruction error:

min_{P,D} Σ_{k=1}^{K} ‖X_k − D_k P_k X_k‖²_F.   (4)

Based on the above analysis, we can readily have the following DPL model:

{P*, D*} = arg min_{P,D} Σ_{k=1}^{K} ‖X_k − D_k P_k X_k‖²_F + λ‖P_k X̄_k‖²_F,  s.t. ‖d_i‖²_2 ≤ 1,   (5)
Algorithm 1 Discriminative synthesis & analysis dictionary pair learning (DPL)
Input: Training samples for K classes X = [X_1, X_2, . . . , X_K], parameters λ, τ, m;
1: Initialize D^(0) and P^(0) as random matrices with unit Frobenius norm, t = 0;
2: while not converged do
3:   t ← t + 1;
4:   for k = 1 : K do
5:     Update A_k^(t) by (8);
6:     Update P_k^(t) by (10);
7:     Update D_k^(t) by (12);
8:   end for
9: end while
Output: Analysis dictionary P, synthesis dictionary D.
where X̄_k denotes the complementary data matrix of X_k in the whole training set X, λ > 0 is a scalar constant, and d_i denotes the i-th atom of the synthesis dictionary D. We constrain the energy of each atom d_i in order to avoid the trivial solution P_k = 0 and to make the DPL model more stable.
The DPL model in (5) is not a sparse representation model, although it enforces group sparsity on the code matrix PX (i.e., PX is nearly block diagonal). Actually, the role of sparse coding in classification is still an open problem, and some researchers have argued that sparse coding may not be crucial to classification tasks [20, 21]. Our findings in this work are supportive of this argument. The DPL model leads to classification performance that is very competitive with sparse coding based DL models, but it is much faster.
2.3 Optimization
The objective function in (5) is generally non-convex. We introduce a variable matrix A and relax (5) to the following problem:

{P*, A*, D*} = arg min_{P,A,D} Σ_{k=1}^{K} ‖X_k − D_k A_k‖²_F + τ‖P_k X_k − A_k‖²_F + λ‖P_k X̄_k‖²_F,  s.t. ‖d_i‖²_2 ≤ 1,   (6)

where τ is a scalar constant. All terms in the above objective function are characterized by the Frobenius norm, and (6) can be easily solved. We initialize the analysis dictionary P and synthesis dictionary D as random matrices with unit Frobenius norm, and then alternately update A and {D, P}. The minimization alternates between the following two steps.
(1) Fix D and P, update A:

A* = arg min_A Σ_{k=1}^{K} ‖X_k − D_k A_k‖²_F + τ‖P_k X_k − A_k‖²_F.   (7)
This is a standard least squares problem and we have the closed-form solution:

A*_k = (D_k^T D_k + τI)^{−1} (τ P_k X_k + D_k^T X_k).   (8)

(2) Fix A, update D and P:

P* = arg min_P Σ_{k=1}^{K} τ‖P_k X_k − A_k‖²_F + λ‖P_k X̄_k‖²_F;
D* = arg min_D Σ_{k=1}^{K} ‖X_k − D_k A_k‖²_F,  s.t. ‖d_i‖²_2 ≤ 1.   (9)
The closed-form solution of P can be obtained as:

P*_k = τ A_k X_k^T (τ X_k X_k^T + λ X̄_k X̄_k^T + γI)^{−1},   (10)

where γ = 10e−4 is a small number. The D problem can be optimized by introducing a variable S:

min_{D,S} Σ_{k=1}^{K} ‖X_k − D_k A_k‖²_F,  s.t. D = S, ‖s_i‖²_2 ≤ 1.   (11)
The optimal solution of (11) can be obtained by the ADMM algorithm:

D^(r+1) = arg min_D Σ_{k=1}^{K} ‖X_k − D_k A_k‖²_F + ρ‖D_k − S_k^(r) + T_k^(r)‖²_F,
S^(r+1) = arg min_S Σ_{k=1}^{K} ρ‖D_k^(r+1) − S_k + T_k^(r)‖²_F,  s.t. ‖s_i‖²_2 ≤ 1,
T^(r+1) = T^(r) + D_k^(r+1) − S_k^(r+1),  update ρ if appropriate.   (12)
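The three updates (8), (10) and (12) give a complete training loop. The NumPy sketch below is our own illustration of that loop under stated assumptions: the ADMM penalty ρ is held fixed, the iteration counts are arbitrary, and all names are ours rather than the authors' implementation.

```python
import numpy as np

def dpl_train(X_list, m, lam=3e-3, tau=0.05, gamma=1e-4, rho=1.0,
              n_iter=20, admm_iter=20, seed=0):
    """X_list[k]: p x n_k data matrix of class k; returns (D_list, P_list)."""
    p, K = X_list[0].shape[0], len(X_list)
    rng = np.random.default_rng(seed)
    D = [rng.standard_normal((p, m)) for _ in range(K)]
    P = [rng.standard_normal((m, p)) for _ in range(K)]
    # Pre-compute the fixed p x p inverses appearing in the P-update (eq. 10).
    inv = []
    for k in range(K):
        Xk = X_list[k]
        Xbar = np.hstack([X_list[j] for j in range(K) if j != k])
        inv.append(np.linalg.inv(tau * Xk @ Xk.T + lam * Xbar @ Xbar.T
                                 + gamma * np.eye(p)))
    for _ in range(n_iter):
        for k in range(K):
            Xk = X_list[k]
            # eq. (8): closed-form update of the coding matrix A_k.
            A = np.linalg.solve(D[k].T @ D[k] + tau * np.eye(m),
                                tau * P[k] @ Xk + D[k].T @ Xk)
            # eq. (10): closed-form update of the analysis sub-dictionary P_k.
            P[k] = tau * A @ Xk.T @ inv[k]
            # eq. (12): ADMM for D_k with the unit-norm atom constraint.
            S, T = D[k].copy(), np.zeros_like(D[k])
            XA, AA = Xk @ A.T, A @ A.T
            for _ in range(admm_iter):
                Dk = np.linalg.solve(AA + rho * np.eye(m),
                                     (XA + rho * (S - T)).T).T
                S = Dk + T
                S /= np.maximum(np.linalg.norm(S, axis=0), 1.0)  # ||s_i||_2 <= 1
                T = T + Dk - S
            D[k] = S
    return D, P
```

On convergence, the pair (D, P) can be fed directly to the classification rule described next.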
Figure 1: (a) The representation codes ‖P*_k y‖²_2 and (b) reconstruction error ‖y − D*_k P*_k y‖²_2 on the Extended YaleB dataset.
In each step of optimization, we have closed form solutions for variables A and P, and the ADMM
based optimization of D converges rapidly. The training of the proposed DPL model is much faster
than most of previous discriminative DL methods. The proposed DPL algorithm is summarized in
Algorithm 1. When the difference between the energy in two adjacent iterations is less than 0.01,
the iteration stops. The analysis dictionary P and the synthesis dictionary D are then output for
classification.
One can see that the first sub-objective function in (9) is a discriminative analysis dictionary learner,
focusing on promoting the discriminative power of P; the second sub-objective function in (9) is a
representative synthesis dictionary learner, aiming to minimize the reconstruction error of the input
signal with the coding coefficients generated by the analysis dictionary P. When the minimization
process converges, a balance between the discrimination and representation power of the model can
be achieved.
2.4 Classification scheme
In the DPL model, the analysis sub-dictionary P*_k is trained to produce small coefficients for samples from classes other than k, and it can only generate significant coding coefficients for samples from class k. Meanwhile, the synthesis sub-dictionary D*_k is trained to reconstruct the samples of class k from their projective coefficients P*_k X_k; that is, the residual ‖X_k − D*_k P*_k X_k‖²_F will be small. On the other hand, since P*_k X_i, i ≠ k, will be small and D*_k is not trained to reconstruct X_i, the residual ‖X_i − D*_k P*_k X_i‖²_F will be much larger than ‖X_k − D*_k P*_k X_k‖²_F.

In the testing phase, if the query sample y is from class k, its projective coding vector by P*_k (i.e., P*_k y) will be more likely to be significant, while its projective coding vectors by P*_i, i ≠ k, tend to be small. Consequently, the reconstruction residual ‖y − D*_k P*_k y‖²_2 tends to be much smaller than the residuals ‖y − D*_i P*_i y‖²_2, i ≠ k. Let us use the Extended YaleB face dataset [22] to illustrate this. (The detailed experimental setting can be found in Section 3.) Fig. 1(a) shows the ℓ2-norm of the coefficients P*_k y, where the horizontal axis refers to the index of y and the vertical axis refers to the index of P*_k. One can clearly see that ‖P*_k y‖²_2 has a nearly block diagonal structure, and the diagonal blocks are produced by the query samples which have the same class labels as P*_k. Fig. 1(b) shows the reconstruction residual ‖y − D*_k P*_k y‖²_2. One can see that ‖y − D*_k P*_k y‖²_2 also has a block diagonal structure, and only the diagonal blocks have small residuals. Clearly, the class-specific reconstruction residual can be used to identify the class label of y, and we can naturally have
the following classifier associated with the DPL model:

identity(y) = arg min_i ‖y − D_i P_i y‖_2.   (13)
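The rule in (13) is one line of code per class. The fragment below is our own illustration, assuming the D, P lists produced by a training routine such as the sketch above.

```python
import numpy as np

def dpl_classify(y, D, P):
    # eq. (13): choose the class with the smallest reconstruction residual.
    residuals = [np.linalg.norm(y - Dk @ (Pk @ y)) for Dk, Pk in zip(D, P)]
    return int(np.argmin(residuals))
```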
2.5 Complexity and Convergence
Complexity. In the training phase of DPL, A_k, P_k and D_k are updated alternately. In each iteration, the time complexities of updating A_k, P_k and D_k are O(mpn + m³ + m²n), O(mnp + p³ + mp²) and O(W(pmn + m³ + m²p + p²m)), respectively, where W is the iteration number in the ADMM algorithm for updating D. We experimentally found that in most cases W is less than 20. In many applications, the number of training samples and the number of dictionary atoms for each class are much smaller than the dimension p. Thus the major computational burden in the training phase of DPL is on updating P_k, which involves the inverse of a p × p matrix (τ X_k X_k^T + λ X̄_k X̄_k^T + γI).
Figure 2: The convergence curve of DPL on the AR database (energy vs. iteration number).
Fortunately, this matrix does not change across iterations, and thus its inverse can be pre-computed. This greatly accelerates the training process.

In the testing phase, our classification scheme is very efficient. The computation of the class-specific reconstruction error ‖y − D*_k P*_k y‖_2 has a complexity of only O(mp). Thus, the total complexity of our model to classify a query sample is O(Kmp).
Convergence. The objective function in (6) is a bi-convex problem for {(D, P), (A)}; e.g., by fixing
A the function is convex for D and P, and by fixing D and P the function is convex for A. The convergence of such a problem has already been intensively studied [23], and the proposed optimization
algorithm is actually an alternate convex search (ACS) algorithm. Since we have the optimal solutions of updating A, P and D, and our objective function has a general lower bound 0, our algorithm
is guaranteed to converge to a stationary point. A detailed convergence analysis can be found in our
supplementary file.
It is empirically found that the proposed DPL algorithm converges rapidly. Fig. 2 shows the convergence curve of our algorithm on the AR face dataset [24]. One can see that the energy drops quickly
and becomes very small after 10 iterations. In most of our experiments, our algorithm will converge
in less than 20 iterations.
3 Experimental Results
We evaluate the proposed DPL method on various visual classification datasets, including two face
databases (Extended YaleB [22] and AR [24]), one object categorization database (Caltech101)
[25], and one action recognition database (UCF 50 action [26]). These datasets are widely used in
previous works [5, 9] to evaluate the DL algorithms.
Besides the classification accuracy, we also report the training and testing time of competing algorithms in the experiments. All the competing algorithms are implemented in Matlab except for SVM
which is implemented in C. All experiments are run on a desktop PC with 3.5GHz Intel CPU and
8 GB memory. The testing time is calculated in terms of the average processing time to classify a
single query sample.
3.1 Parameter setting
There are three parameters, m, λ and τ, in the proposed DPL model. To achieve the best performance, in the face recognition and object recognition experiments, we set the number of dictionary atoms to its maximum (i.e., the number of training samples) for all competing DL algorithms, including the proposed DPL. In the action recognition experiment, since the number of samples per class is relatively big, we set the number of dictionary atoms of each class to 50 for all the DL algorithms. Parameter τ is an algorithm parameter, and the regularization parameter λ controls the discriminative property of P. In all the experiments, we choose λ and τ by 10-fold cross validation on each dataset. For all the competing methods, we tune their parameters for the best performance.
3.2 Competing methods
We compare the proposed DPL method with the following methods: the base-line nearest subspace
classifier (NSC) and linear support vector machine (SVM), sparse representation based classification
(SRC) [2] and collaborative representation based classification (CRC) [21], and the state-of-the-art
DL algorithms DLSI [8], FDDL [9] and LC-KSVD [5]. The original DLSI represents the test sample
by each class-specific sub-dictionary. The results in [9] have shown that by coding the test sample
collaboratively over the whole dictionary, the classification performance can be greatly improved.
Figure 3: Sample images in the (a) Extended YaleB and (b) AR databases.
Therefore, we follow the usage of DLSI in [9] and denote this method as DLSI(C). Of the two variants of LC-KSVD proposed in [5], we adopt LC-KSVD2 since it always produces better classification accuracy.
3.3 Face recognition
We first evaluate our algorithm on two widely used face datasets: Extended YaleB [22] and AR [24].
The Extended YaleB database has large variations in illumination and expressions, as illustrated in
Fig. 3(a). The AR database involves many variations such as illumination, expressions, and sunglasses and scarf occlusion, as illustrated in Fig. 3(b).
We follow the experimental settings in [5] for fair comparison with state-of-the-arts. A set of 2,414
face images of 38 persons are extracted from the Extended YaleB database. We randomly select half
of the images per subject for training and the other half for testing. For the AR database, a set of
2,600 images of 50 female and 50 male subjects are extracted. 20 images of each subject are used
for training and the remain 6 images are used for testing. We use the features provided by Jiang
et al. [5] to represent the face image. The feature dimension is 504 for Extended YaleB and 540
for AR. The parameter τ is set to 0.05 on both datasets and λ is set to 3e-3 and 5e-3 on the
Extended YaleB and AR datasets, respectively. In these two experiments, we also compare with the
max-margin dictionary learning (MMDL) [10] algorithm, whose recognition accuracy is cropped
from the original paper but the training/testing time is not available.
Table 1: Results on the Extended YaleB database.
Method     Accuracy (%)   Training time (s)   Testing time (s)
NSC        94.7           no need             1.41e-3
SVM        95.6           0.70                3.49e-5
CRC        97.0           no need             1.92e-3
SRC        96.5           no need             2.16e-2
DLSI(C)    97.0           567.47              4.30e-2
FDDL       96.7           6,574.6             1.43
LC-KSVD    96.7           412.58              4.22e-4
MMDL       97.3           n/a                 n/a
DPL        97.5           4.38                1.71e-4
Table 2: Results on the AR database.
Method     Accuracy (%)   Training time (s)   Testing time (s)
NSC        92.0           no need             3.29e-3
SVM        96.5           3.42                6.16e-5
CRC        98.0           no need             5.08e-3
SRC        97.5           no need             3.42e-2
DLSI(C)    97.5           2,470.5             0.16
FDDL       97.5           61,709              2.50
LC-KSVD    97.8           1,806.3             7.72e-4
MMDL       97.3           n/a                 n/a
DPL        98.3           11.30               3.93e-4
Extended YaleB database The recognition accuracies and training/testing time by different algorithms on the Extended YaleB database are summarized in Table 1. The proposed DPL algorithm
achieves the best accuracy, which is slightly higher than MMDL, DLSI(C), LC-KSVD and FDDL.
However, DPL has obvious advantage in efficiency over the other DL algorithms.
AR database The recognition accuracies and running time on the AR database are shown in Table 2.
DPL achieves the best results among all the competing algorithms. Compared with the experiment
on Extended YaleB, in this experiment there are more training samples and the feature dimension is
higher, and DPL's advantage in efficiency is much more obvious. In training, it is more than 159
times faster than DLSI and LC-KSVD, and 5,460 times faster than FDDL.
3.4 Object recognition
In this section we test DPL on object categorization by using the Caltech101 database [25]. The
Caltech101 database [25] includes 9,144 images from 102 classes (101 common object classes and
a background class). The number of samples in each category varies from 31 to 800. Following
the experimental settings in [5, 27], 30 samples per category are used for training and the rest are
Table 3: Recognition accuracy (%) & running time (s) on the Caltech101 database.
Method     Accuracy   Training time   Testing time
NSC        70.1       no need         1.79e-2
SVM        64.6       14.6            1.81e-4
CRC        68.2       no need         1.38e-2
SRC        70.7       no need         1.09
DLSI(C)    73.1       97,200          1.46
FDDL       73.2       104,000         12.86
LC-KSVD    73.6       12,700          4.17e-3
DPL        73.9       134.6           1.29e-3
used for testing. We use the standard bag-of-words (BOW) + spatial pyramid matching (SPM) framework [27] for feature extraction. Dense SIFT descriptors are extracted on three grids of sizes 1×1, 2×2, and 4×4 to calculate the SPM features. For a fair comparison with [5], we use the vector quantization based coding method to extract the mid-level features and use the standard max pooling approach to build up the high-dimensional pooled features. Finally, the original 21,504-dimensional data is reduced to 3,000 dimensions by PCA. The parameters τ and λ used in our algorithm are 0.05 and 1e-4, respectively.
The experimental results are listed in Table 3. Again, DPL achieves the best performance. Though
its classification accuracy is only slightly better than the DL methods, its advantage in terms of
training/testing time is huge.
3.5 Action recognition
Action recognition is an important yet very challenging task and it has been attracting great research
interests in recent years. We test our algorithm on the UCF 50 action database [26], which includes
50 categories of 6,680 human action videos from YouTube. We use the action bank features [28]
and five-fold data splitting to evaluate our algorithm. For all the comparison methods, the feature
dimension is reduced to 5,000 by PCA. The parameters τ and λ used in our algorithm are both 0.01.
The results by different methods are reported in Table 4. Our DPL algorithm achieves much higher
accuracy than its competitors. FDDL has the second highest accuracy; however, it is 1,666 times
slower than DPL in training and 83,317 times slower than DPL in testing.
Table 4: Recognition accuracy (%) & running time (s) on the UCF50 action database.
Method     Accuracy   Training time   Testing time
NSC        51.8       no need         6.11e-2
SVM        57.9       59.8            5.02e-4
CRC        60.3       no need         6.76e-2
SRC        59.6       no need         8.92
DLSI(C)    60.0       397,000         10.11
FDDL       61.1       415,000         89.15
LC-KSVD    53.6       9,272.4         0.12
DPL        62.9       249.0           1.07e-3

4 Conclusion
We proposed a novel projective dictionary pair learning (DPL) model for pattern classification tasks.
Different from conventional dictionary learning (DL) methods, which learn a single synthesis dictionary, DPL learns jointly a synthesis dictionary and an analysis dictionary. Such a pair of dictionaries
work together to perform representation and discrimination simultaneously. Compared with previous DL methods, DPL employs projective coding, which largely reduces the computational burden
in learning and testing. Performance evaluation was conducted on publicly accessible visual classification datasets. DPL exhibits classification accuracy that is highly competitive with state-of-the-art DL methods, while it shows significantly higher efficiency, e.g., hundreds to thousands of times faster than
LC-KSVD and FDDL in training and testing.
References
[1] Aharon, M., Elad, M., Bruckstein, A.: K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. on Signal Processing, 54(11) (2006) 4311-4322
[2] Wright, J., Yang, A.Y., Ganesh, A., Sastry, S.S., Ma, Y.: Robust face recognition via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence 31(2) (2009) 210-227
[3] Rubinstein, R., Bruckstein, A.M., Elad, M.: Dictionaries for sparse representation modeling. Proceedings of the IEEE 98(6) (2010) 1045-1057
[4] Mairal, J., Bach, F., Ponce, J.: Task-driven dictionary learning. IEEE Trans. Pattern Anal. Mach. Intelligence 34(4) (2012) 791-804
[5] Jiang, Z., Lin, Z., Davis, L.: Label consistent K-SVD: learning a discriminative dictionary for recognition. IEEE Trans. on Pattern Anal. Mach. Intelligence 35(11) (2013) 2651-2664
[6] Elad, M., Aharon, M.: Image denoising via sparse and redundant representations over learned dictionaries. IEEE Transactions on Image Processing 15(12) (2006) 3736-3745
[7] Mairal, J., Bach, F., Ponce, J., Sapiro, G., Zisserman, A., et al.: Supervised dictionary learning. In: NIPS. (2008)
[8] Ramirez, I., Sprechmann, P., Sapiro, G.: Classification and clustering via dictionary learning with structured incoherence and shared features. In: CVPR. (2010)
[9] Yang, M., Zhang, L., Feng, X., Zhang, D.: Fisher discrimination dictionary learning for sparse representation. In: ICCV. (2011)
[10] Wang, Z., Yang, J., Nasrabadi, N., Huang, T.: A max-margin perspective on sparse representation-based classification. In: ICCV. (2013)
[11] Lee, H., Battle, A., Raina, R., Ng, A.Y.: Efficient sparse coding algorithms. In: NIPS. (2007)
[12] Hale, E.T., Yin, W., Zhang, Y.: Fixed-point continuation for ℓ1-minimization: Methodology and convergence. SIAM Journal on Optimization 19(3) (2008) 1107-1130
[13] Gregor, K., LeCun, Y.: Learning fast approximations of sparse coding. In: ICML. (2010)
[14] Ranzato, M., Poultney, C., Chopra, S., LeCun, Y.: Efficient learning of sparse representations with an energy-based model. In: NIPS. (2006)
[15] Chen, Y., Pock, T., Bischof, H.: Learning ℓ1-based analysis and synthesis sparsity priors using bi-level optimization. NIPS workshop (2012)
[16] Elad, M., Milanfar, P., Rubinstein, R.: Analysis versus synthesis in signal priors. Inverse Problems 23(3) (2007) 947
[17] Sprechmann, P., Litman, R., Yakar, T.B., Bronstein, A., Sapiro, G.: Efficient supervised sparse analysis and synthesis operators. In: NIPS. (2013)
[18] Feng, Z., Yang, M., Zhang, L., Liu, Y., Zhang, D.: Joint discriminative dimensionality reduction and dictionary learning for face recognition. Pattern Recognition 46(8) (2013) 2134-2143
[19] Soltanolkotabi, M., Elhamifar, E., Candes, E.: Robust subspace clustering. arXiv preprint arXiv:1301.2603 (2013)
[20] Coates, A., Ng, A.Y.: The importance of encoding versus training with sparse coding and vector quantization. In: ICML. (2011)
[21] Zhang, L., Yang, M., Feng, X.: Sparse representation or collaborative representation: Which helps face recognition? In: ICCV. (2011)
[22] Georghiades, A., Belhumeur, P., Kriegman, D.: From few to many: Illumination cone models for face recognition under variable lighting and pose. IEEE Trans. Patt. Anal. Mach. Intel. 23(6) (2001) 643-660
[23] Gorski, J., Pfeuffer, F., Klamroth, K.: Biconvex sets and optimization with biconvex functions: a survey and extensions. Mathematical Methods of Operations Research 66(3) (2007) 373-407
[24] Martinez, A., Benavente, R.: The AR face database. CVC Technical Report (1998)
[25] Fei-Fei, L., Fergus, R., Perona, P.: Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories. Computer Vision and Image Understanding 106(1) (2007) 59-70
[26] Reddy, K.K., Shah, M.: Recognizing 50 human action categories of web videos. Machine Vision and Applications 24(5) (2013) 971-981
[27] Lazebnik, S., Schmid, C., Ponce, J.: Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In: CVPR. (2006)
[28] Sadanand, S., Corso, J.J.: Action bank: A high-level representation of activity in video. In: CVPR. (2012)
| 5600 |@word kong:2 dtk:2 norm:16 open:1 reduction:2 liu:1 existing:2 com:1 si:2 gmail:1 assigning:1 yet:1 readily:1 designed:1 drop:1 update:7 discrimination:12 stationary:1 half:2 intelligence:3 generative:1 desktop:1 xk:31 ith:1 zhang:6 five:1 mathematical:1 become:1 ksvd:13 introduce:2 sacrifice:1 p1:1 inspired:1 resolve:1 cpu:1 becomes:1 project:1 provided:1 null:1 minimizes:1 finding:1 sapiro:3 litman:1 classifier:5 rm:1 k2:4 control:1 unit:2 zhang1:1 local:2 tends:1 aiming:1 encoding:2 ak:12 mach:3 jiang:3 xidian:2 incoherence:3 pmn:1 china:3 studied:3 challenging:1 projective:10 bi:1 lecun:1 enforces:1 testing:25 block:6 yunjin:1 dpx:1 pyk:1 gabor:1 significantly:2 projection:4 matching:2 pre:1 word:1 refers:2 operator:3 risk:1 seminal:1 conventional:5 demonstrated:1 convex:5 survey:1 splitting:1 m2:2 d1:1 regularize:1 variation:2 updated:1 play:1 trigger:1 suppose:1 designing:1 ydy:1 expensive:1 recognition:21 approximated:1 updating:4 sparsely:1 database:21 role:2 preprint:1 wang:2 solved:1 calculate:1 thousand:1 ensures:3 mpn:1 gu1:1 ranzato:1 highest:1 src:5 pd:1 complexity:6 kriegman:1 trained:9 efficiency:6 learner:2 pfeuffer:1 easily:2 joint:1 georghiades:1 various:4 represented:1 regularizer:3 train:2 fast:3 effective:2 kp:2 query:5 rubinstein:2 tell:1 whose:1 widely:4 larger:1 supplementary:1 elad:4 relax:1 reconstruct:4 cvpr:3 ability:1 jointly:5 transform:1 advantage:4 analytical:4 propose:4 reconstruction:9 product:1 rapidly:2 bow:1 achieve:3 frobenius:2 exploiting:1 convergence:7 produce:3 categorization:2 incremental:1 converges:3 tk:2 object:6 depending:1 illustrate:1 ac:1 pose:1 fixing:2 help:1 wangmeng:1 nearest:1 school:1 p2:1 implemented:2 involves:2 correct:1 human:2 crc:5 argued:1 fix:2 extension:1 wright:1 great:1 predict:1 major:2 dictionary:100 achieves:5 collaboratively:1 adopt:1 bag:2 label:10 minimization:3 promotion:2 clearly:3 always:1 aim:3 avoid:1 shelf:1 ponce:3 improvement:1 hk:1 greatly:3 scarf:1 publically:1 sadanand:1 perona:1 transformed:1 arg:9 classification:32 dual:1 fidelity:1 stateof:2 denoted:1 among:1 art:8 constrained:1 initialize:2 spatial:2 extraction:1 ng:2 atom:7 represents:4 k2f:20 nearly:4 icml:2 promote:3 report:2 employ:2 few:2 randomly:1 simultaneously:3 phase:9 occlusion:1 classspecific:1 attempt:1 huge:1 interest:1 investigate:1 highly:1 evaluation:1 male:1 pc:1 predefined:1 encourage:2 desired:1 overcomplete:1 classify:2 modeling:1 rmk:1 ar:13 restoration:4 introducing:1 hundred:1 recognizing:2 conducted:1 reported:1 yakar:1 varies:1 person:1 siam:1 accessible:1 lee:1 off:1 synthesis:32 together:2 quickly:1 again:1 benavente:1 choose:1 huang:1 inefficient:2 leading:1 mp2:1 pooled:1 coding:32 summarized:2 includes:2 coefficient:24 satisfy:1 explicitly:1 mp:1 depends:1 try:1 closed:3 competitive:4 capability:1 candes:1 contribution:1 minimize:2 square:1 collaborative:2 accuracy:19 descriptor:1 largely:2 identify:1 bayesian:1 produced:2 comp:1 researcher:1 lighting:1 competitor:1 energy:5 corso:1 involved:1 obvious:2 naturally:1 associated:1 di:6 stop:1 proved:1 dataset:4 popular:2 intensively:1 dimensionality:2 improves:1 actually:2 focusing:1 higher:4 supervised:3 follow:2 methodology:1 response:1 improved:1 zisserman:1 klamroth:1 though:1 stage:2 hand:2 horizontal:1 web:1 ganesh:1 nonlinear:2 spm:2 lei:1 k22:10 yaleb:13 analytically:3 hence:1 regularization:3 illustrated:2 attractive:2 adjacent:1 davis:1 biconvex:2 hong:2 criterion:1 complete:1 l1:1 image:22 lazebnik:1 novel:2 common:1 empirically:1 extend:1 xi0:1 
significant:3 grid:1 mathematics:1 sastry:1 soltanolkotabi:1 stable:1 harbin:2 ucf:2 attracting:1 base:1 recent:3 female:1 perspective:2 driven:2 certain:1 binary:1 success:2 supportive:1 fortunately:2 employed:1 belhumeur:1 converge:3 redundant:1 nasrabadi:1 signal:13 reduces:1 gorski:1 technical:1 faster:6 characterized:1 cross:1 bach:2 lin:1 dept:2 promotes:1 variant:1 vision:3 kmp:1 iteration:8 represent:4 arxiv:2 pyramid:2 achieved:3 cropped:1 want:2 background:1 cvc:1 crucial:1 rest:1 file:1 subject:3 tend:1 pooling:1 chopra:1 yang:6 variety:1 competing:6 reduce:1 inner:1 cn:1 idea:2 whether:1 expression:2 pca:2 gb:1 milanfar:1 action:11 matlab:1 generally:2 clear:1 detailed:2 tune:1 listed:1 mid:1 category:6 reduced:2 generate:3 continuation:1 coates:1 per:3 patt:1 group:3 achieving:1 clean:1 year:2 cone:1 run:1 inverse:3 extends:1 frobenious:1 utilizes:2 p3:1 accelerates:1 bound:1 guaranteed:1 fold:3 activity:1 bilevel:1 constraint:1 constrain:1 fei:2 x2:1 scene:1 argument:1 min:7 px:4 xtk:3 relatively:1 structured:8 alternate:1 combination:1 battle:1 smaller:2 remain:1 slightly:2 cun:1 making:1 iccv:3 computationally:1 reddy:1 mechanism:1 needed:2 sprechmann:3 mind:3 end:4 adopted:1 available:1 operation:2 aharon:2 promoting:1 polytechnic:1 appropriate:1 shah:1 slower:2 rp:2 original:3 thomas:1 denotes:2 clustering:3 running:3 build:1 gregor:1 feng:4 objective:6 already:1 ucf50:1 strategy:1 costly:4 diagonal:6 exhibit:1 subspace:3 mail:1 trivial:1 enforcing:2 code:11 besides:1 index:2 mini:1 balance:1 design:1 anal:3 bronstein:1 perform:2 vertical:1 datasets:7 extended:13 mmdl:5 introduced:3 pair:13 namely:1 required:2 extensive:1 optimized:1 bischof:1 nsc:5 learned:6 nip:5 trans:4 beyond:1 usually:1 pattern:9 sparsity:8 poultney:1 max:4 including:2 memory:1 video:3 power:6 suitable:1 natural:3 residual:11 raina:1 scheme:2 improve:1 technology:2 sunglass:1 numerous:1 axis:2 extract:1 schmid:1 alternated:1 prior:4 understanding:1 kf:1 interesting:1 versus:2 validation:1 consistent:2 minp:1 viewpoint:1 bank:2 pi:1 caltech101:4 institute:1 face:14 sparse:34 ghz:1 curve:2 dimension:6 calculated:1 stand:1 commonly:1 dpl:47 transaction:2 approximate:1 bruckstein:2 mairal:4 consuming:1 discriminative:26 xi:6 alternatively:2 fergus:1 search:1 iterative:1 latent:1 sk:3 table:8 learn:15 robust:2 complex:2 meanwhile:1 da:2 pk:24 dense:1 big:3 whole:2 martinez:1 fair:2 complementary:1 x1:2 fig:5 representative:1 intel:2 lc:11 sub:8 explicit:1 lie:1 learns:4 wavelet:1 specific:4 sift:1 hale:1 dk:18 svm:6 consequent:1 concern:1 dl:32 burden:3 workshop:1 quantization:2 importance:1 illumination:3 elhamifar:1 margin:4 kx:1 sparser:1 led:2 yin:1 likely:2 ramirez:2 visual:5 scalar:3 applies:1 extracted:3 ma:1 goal:2 formulated:1 identity:1 consequently:1 mnp:1 shared:3 fisher:3 admm:3 experimentally:1 change:1 youtube:1 except:1 denoising:1 total:1 experimental:5 svd:2 m3:2 select:1 support:1 evaluate:4 tested:1 |
5,083 | 5,601 | Augmentative Message Passing for Traveling
Salesman Problem and Graph Partitioning
Reihaneh Rabbany
Department of Computing Science
University of Alberta
Edmonton, AB T6G 2E8
[email protected]
Siamak Ravanbakhsh
Department of Computing Science
University of Alberta
Edmonton, AB T6G 2E8
[email protected]
Russell Greiner
Department of Computing Science
University of Alberta
Edmonton, AB T6G 2E8
[email protected]
Abstract
The cutting plane method is an augmentative constrained optimization procedure
that is often used with continuous-domain optimization techniques such as linear
and convex programs. We investigate the viability of a similar idea within message
passing, for integral solutions, in the context of two combinatorial problems: 1)
For the Traveling Salesman Problem (TSP), we propose a factor-graph based on the Held-Karp formulation, with an exponential number of constraint factors, each of which
has an exponential but sparse tabular form. 2) For graph-partitioning (a.k.a. community mining) using modularity optimization, we introduce a binary variable
model with a large number of constraints that enforce formation of cliques. In
both cases we are able to derive simple message updates that lead to competitive
solutions on benchmark instances. In particular, for TSP we are able to find near-optimal solutions in time that empirically grows with N³, demonstrating that
augmentation is practical and efficient.
1 Introduction
Probabilistic Graphical Models (PGMs) provide a principled approach to approximate constraint optimization for NP-hard problems. This involves a message passing procedure (such as max-product
Belief Propagation; BP) to find an approximation to the maximum a posteriori (MAP) solution. Message passing methods are also attractive as they are easily parallelized on a massive scale. This has contributed to
their application in approximating many NP-hard problems, including constraint satisfaction [1, 2],
constrained optimization [3, 4], min-max optimization [5], and integration [6].
The applicability of PGMs to discrete optimization problems is limited by the size and number of
factors in the factor-graph. While many recent attempts have been made to reduce the complexity
of message passing over high-order factors [7, 8, 9], to our knowledge no published result addresses
the issues of dealing with large number of factors. We consider a scenario where a large number
of factors represent hard constraints and ask whether it is possible to find a feasible solution by
considering only a small fraction of these constraints.
The idea is to start from a PGM corresponding to a tractable subset of constraints, and after obtaining an approximate MAP solution using min-sum BP, augment the PGM with the set of constraints
that are violated in the current solution. This general idea has been extensively studied under the
term cutting plane methods in different settings. Dantzig et al. [10] first investigated this idea in the
context of TSP, and Gomory et al. [11] provided an elegant method to generate violated constraints
in the context of finding integral solutions to linear programs (LP). It has since been used to also
solve a variety of nonlinear optimization problems. In the context of PGMs, Sontag and Jaakkola
use the cutting plane method to iteratively tighten the marginal polytope (which enforces the local consistency of marginals) in order to improve the variational approximation [12]. This differs from our
approach, where the augmentation changes the factor-graph (i.e., the inference problem) rather than
improving the approximation of inference.
Recent studies show that message passing can be much faster than LP in finding approximate MAP
assignments for structured optimization problems [13]. This further motivates our inquiry regarding
the viability of augmentation for message passing. We present an affirmative answer to this question
in application to two combinatorial problems. Section 2 introduces our factor-graph formulations
for Traveling Salesman Problem (TSP) and graph-partitioning. Section 3 derives simple message
update equations for these factor-graphs and reviews our augmentation scheme. Finally, Section 4
presents experimental results for both applications.
2 Background and Representation
Let x = {x_1, . . . , x_D} ∈ X = X_1 × X_2 × . . . × X_D denote an instance of a tuple of discrete variables. Let x_I refer to a sub-tuple, where I ⊆ {1, . . . , D} indexes a subset of these variables. Define the energy function f(x) ≜ Σ_{I∈F} f_I(x_I), where F denotes the set of factors. Here the goal of inference is to find an assignment with minimum energy, x* = arg min_x f(x). This model can be conveniently represented using a bipartite graph, known as a factor-graph [14], where a factor node f_I(x_I) is connected to a variable node x_i iff i ∈ I.
2.1 Traveling Salesman Problem
A Traveling Salesman Problem (TSP) seeks the minimum length tour of N cities that visits each
city exactly once. TSP is NP-hard, and for general distances, no constant-factor approximation to this problem is possible [15]. The best known exact solver, due to Held et al. [16], uses dynamic programming to reduce the cost of enumerating all orderings from O(N!) to O(N² 2^N). The development of many (now) standard optimization techniques, such as simulated annealing, mixed
integer linear programming, dynamic programming, and ant colony optimization are closely linked
with advances in solving TSP. Since Dantzig et al.[10] manually applied the cutting plane method
to 49-city problem, a combination of more sophisticated cuts, used with branch-and-bound techniques [17], has produced the state-of-the-art TSP-solver, Concorde [18]. Other notable results on
very large instances have been reported by the Lin-Kernighan heuristic [19], which continuously improves
a solution by exchanging nodes in the tour. In a related work, Wang et al.[20] proposed a message
passing solution to TSP. However their method does not scale beyond small toy problems (authors
experimented with N = 5 cities). For a readable historical background of the state-of-the-art in TSP
and its various applications, see [21].
2.1.1 TSP Factor-Graph
Let G = (V, E) denote a graph, where V = {v_1, . . . , v_N} is the set of nodes and the set of edges E contains e_{i:j} iff v_i and v_j are connected. Let x = {x_{e_1}, . . . , x_{e_M}} ∈ X = {0, 1}^M be a set of binary variables, one for each edge in the graph (i.e., M = |E|), where we set x_{e_m} = 1 iff e_m is in the tour. For each node v_i, let ∂v_i = {e_{i:j} | e_{i:j} ∈ E} denote the edges adjacent to v_i. Given a distance function d : E → ℝ, define the local factors for each edge e ∈ E as f_e(x_e) = x_e d(e), so this is either d(e) or zero. Any valid tour satisfies the following necessary and sufficient constraints, a.k.a. the Held-Karp constraints [22]:

1. Degree constraints: Exactly two edges that are adjacent to each vertex should be in the tour. Define the factor f_{∂v_i}(x_{∂v_i}) : {0, 1}^{|∂v_i|} → {0, ∞} to enforce this constraint:

f_{∂v_i}(x_{∂v_i}) ≜ I_∞( Σ_{e∈∂v_i} x_e = 2 )   ∀ v_i ∈ V,

where I_∞(condition) ≜ 0 iff the condition is satisfied and +∞ otherwise.
2. Subtour constraints: Ensure that there are no short-circuits, i.e., there are no loops that contain strict subsets of nodes. To enforce this, for each S ⊂ V, define ∂(S) ≜ {e_{i:j} ∈ E | v_i ∈ S, v_j ∉ S} to be the set of edges with one end in S and the other end in V \ S. We need to have at least two edges leaving each subset S. The following set of factors enforces these constraints:

f_{∂(S)}(x_{∂(S)}) = I_∞( Σ_{e∈∂(S)} x_e ≥ 2 )   ∀ S ⊂ V, S ≠ ∅.

These three types of factors define a factor-graph whose minimum-energy configuration is the smallest tour for TSP.
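In an augmentative scheme, only the violated subtour constraints ever need to be added. A simple separation routine (our own sketch, not taken from the paper) collects the edges with x_e = 1 and, assuming the degree constraints already hold, returns every connected component that is a strict subset of V; for such a component S no selected edge crosses ∂(S), so its constraint is violated.

```python
def violated_subtours(n_nodes, edges, x):
    """edges: list of (i, j) pairs; x: dict mapping each edge to 0 or 1.
    Returns node sets S whose constraint sum_{e in d(S)} x_e >= 2 is
    violated by the current assignment."""
    parent = list(range(n_nodes))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    for i, j in edges:
        if x[(i, j)] == 1:
            parent[find(i)] = find(j)      # union the two endpoints
    comps = {}
    for v in range(n_nodes):
        comps.setdefault(find(v), set()).add(v)
    return [S for S in comps.values() if len(S) < n_nodes]
```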
2.2 Graph Partitioning
Graph partitioning, a.k.a. community mining, is an active field of research that has recently produced a variety of community detection methods (e.g., see [23] and its references), a notable one of which is Modularity maximization [24]. However, exact optimization of Modularity is NP-hard [25]. Modularity is closely related to fully connected Potts graphical models [26]. However, due to the full connectivity of the PGM, message passing is not able to find good solutions. Many have proposed various other heuristics for modularity optimization [27, 28, 26, 29, 30]. We introduce a factor-graph representation of this problem that has a large number of factors. We then discuss a stochastic but sparse variation of modularity that enables us to efficiently partition relatively large sparse graphs.
2.2.1 Clustering Factor-Graph
Let G = (V, E) be a graph with a weight function ω̃ : V × V → ℝ, where ω̃(v_i, v_j) ≠ 0 iff e_{i:j} ∈ E. Let Z = Σ_{v_1,v_2∈V} ω̃(v_1, v_2) and let ω(v_i, v_j) ≜ ω̃(v_i, v_j)/(2Z) be the normalized weights. Also let ω(∂v_i) ≜ Σ_{v_j} ω(v_i, v_j) denote the normalized degree of node v_i. Graph clustering using modularity optimization seeks a partitioning of the nodes into an unspecified number of clusters C = {C_1, . . . , C_K}, maximizing

q(C) = Σ_{C_i∈C} Σ_{v_i,v_j∈C_i} ( ω(v_i, v_j) − ω(∂v_i) ω(∂v_j) )   (1)

The first term of modularity is proportional to the within-cluster edge-weights. The second term is proportional to the expected number of within-cluster edge-weights for a null model with the same weighted node degrees for each node v_i.
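As a concrete reading of (1), the snippet below (our own; all names are assumptions) evaluates q(C) for a hard partition given the matrix of normalized weights ω.

```python
import numpy as np

def modularity(omega, clusters):
    """omega: N x N array of normalized pairwise weights; clusters: list of
    lists of node indices. Directly implements eq. (1)."""
    deg = omega.sum(axis=1)  # normalized degree omega(dv_i) of each node
    q = 0.0
    for C in clusters:
        idx = np.asarray(C)
        # sum of omega over pairs in C, minus the null-model expectation
        q += omega[np.ix_(idx, idx)].sum() - deg[idx].sum() ** 2
    return q
```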
Here the null model is a fully-connected graph. We generate a random sparse null model with M_null < αM weighted edges (E_null) by repeatedly sampling two nodes, each drawn independently with probability P(v_i) ∝ √ω(∂v_i), and connecting them with a weight proportional to ω̃_null(v_i, v_j) ∝ √( ω(∂v_i) ω(∂v_j) ). If they have already been connected, this weight is added to their current weight. We repeat this process αM times; however, since some of the edges are repeated, the total number of edges in the null model may be under αM. Finally, the normalized edge-weight in the sparse null model is ω_null(v_i, v_j) ≜ ω̃_null(v_i, v_j) / ( 2 Σ_{v_i,v_j} ω̃_null(v_i, v_j) ). It is easy to see that this generative process in expectation produces the fully connected null model.¹

¹ The choice of using the square root of weighted degrees for both sampling and weighting is to reduce the variance. One may also use pure importance sampling (i.e., use the product of weighted degrees for sampling and set the edge-weights in the null model uniformly), or uniform sampling of edges, where the edge-weights of the null model are set to the product of weighted degrees.
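A sketch of the generative process just described (our own; the dictionary-based accumulation and the treatment of self-loops are assumptions):

```python
import numpy as np

def sparse_null_model(deg, n_samples, seed=0):
    """deg[i] = omega(dv_i). Draws n_samples (= alpha*M) weighted edges;
    both endpoints are sampled independently with probability proportional
    to sqrt(deg), and repeated pairs accumulate weight."""
    rng = np.random.default_rng(seed)
    p = np.sqrt(deg) / np.sqrt(deg).sum()
    w = {}
    for _ in range(n_samples):
        i, j = rng.choice(len(deg), size=2, p=p)  # independent draws
        key = (min(i, j), max(i, j))              # self-loops not excluded here
        w[key] = w.get(key, 0.0) + np.sqrt(deg[i] * deg[j])
    Z_null = 2.0 * sum(w.values())
    return {e: v / Z_null for e, v in w.items()}  # normalized omega_null
```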
Here we use the following binary-valued factor-graph formulation. Let x = {x_{i_1:j_1}, . . . , x_{i_L:j_L}} ∈ {0, 1}^L be a set of binary variables, one for each edge e_{i:j} ∈ E ∪ E_null (i.e., |E ∪ E_null| = L). Define the local factor for each variable as f_{i:j}(x_{i:j}) = −x_{i:j} ( ω(v_i, v_j) − ω_null(v_i, v_j) ). The idea is to enforce the formation of cliques while minimizing the sum of local factors; by doing so, the negative sum of local factors evaluates to the modularity (eq 1). For every three edges e_{i:j}, e_{j:k}, e_{i:k} ∈ E ∪ E_null, i < j < k, that form a triangle, define a clique constraint as

f_{i:j,j:k,i:k}(x_{i:j}, x_{j:k}, x_{i:k}) ≜ I_∞( x_{i:j} + x_{j:k} + x_{i:k} ≠ 2 ).
These factors ensure the formation of cliques: if the weights of two edges that are adjacent to the same node are non-zero, the third edge in the triangle should also have non-zero weight. The computational challenge here is the large number of clique constraints. Brandes et al. [25] use a similar LP formulation. However, since they include all the constraints from the beginning and their null model is fully connected, their method has only been applied to small toy problems.
3
Message Passing
Min-sum belief propagation is an inference procedure in which a set of messages is exchanged between variables and factors. The factor-to-variable (μ_{I→e}) and variable-to-factor (μ_{e→I}) messages are defined as

    μ_{e→I}(x_e) ≜ Σ_{I′ ∋ e, I′ ≠ I} μ_{I′→e}(x_e)        (2)

    μ_{I→e}(x_e) ≜ min_{x_{I\e}} ( f_I(x_{I\e}, x_e) + Σ_{e′ ∈ I\e} μ_{e′→I}(x_{e′}) )        (3)

where I ∋ e indexes all factors that are adjacent to the variable x_e on the factor-graph. Starting from an initial set of messages, this recursive update is performed until convergence.
This procedure is exact on trees, on factor-graphs with a single cycle, and in some special settings [4]; however, it is found to produce good approximations on general loopy graphs. When BP is exact, the set of local beliefs μ_e(x_e) ≜ Σ_{I ∋ e} μ_{I→e}(x_e) indicates the minimum value that can be obtained for a particular assignment of x_e. When there are no ties, the joint assignment x*, obtained by minimizing the individual local beliefs, is optimal.
When BP is not exact or the marginal beliefs are tied, a decimation procedure can improve the quality of the final assignment. Decimation involves fixing a subset of variables to their most biased values and repeating the BP update; this process is repeated until all variables are fixed.
Another way to improve the performance of BP when applied to loopy graphs is to use damping, which often prevents oscillations: μ_{I→e}(x_e) = λ μ̃_{I→e}(x_e) + (1 − λ) μ_{I→e}(x_e), where μ̃_{I→e} is the new message as calculated by eq 3 and λ ∈ (0, 1] is the damping parameter. Damping can also be applied to the variable-to-factor messages.
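The following schematic round of min-sum updates with damping is our illustration only; factor_msg stands for whichever closed form applies to a factor (eqs 5, 6 and 8 below), and the edges/factors containers with .factors/.edges attributes are assumed data structures, not part of the paper.

def bp_round(edges, factors, nu_e2f, nu_f2e, factor_msg, damping=0.2):
    # factor-to-variable messages (eq 3), damped as described above
    for f in factors:
        for e in f.edges:
            new = factor_msg(f, e, nu_e2f)
            nu_f2e[(f, e)] = damping * new + (1.0 - damping) * nu_f2e[(f, e)]
    # variable-to-factor messages (eq 2): total incoming minus the recipient's own message
    for e in edges:
        total = sum(nu_f2e[(f, e)] for f in e.factors)
        for f in e.factors:
            nu_e2f[(e, f)] = total - nu_f2e[(f, e)]
    # normalized marginal beliefs (the biases used for decimation)
    return {e: sum(nu_f2e[(f, e)] for f in e.factors) for e in edges}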
When applying the BP equations (eqs 2, 3) to the TSP and clustering factor-graphs defined above, we face two computational challenges. (a) Degree constraints for the TSP can depend on N variables, resulting in O(2^N) time complexity for calculating factor-to-variable messages; for subtour constraints this is even more expensive, as f_S(x_{∂(S)}) depends on O(M) variables (recall M = |E|, which can be O(N²)). (b) The complete TSP factor-graph has O(2^N) subtour constraints. Similarly, the clustering factor-graph can contain a large number of clique constraints: for the fully connected null model we need O(N³) such factors, and even using the sparse null model (assuming a random edge probability, a.k.a. an Erdős–Rényi graph) there are O(L³/N³) triangles in the graph (recall that L = |E ∪ E_null|). In the next section, we derive the compact form of BP messages for both problems. In the case of the TSP, we show how to exploit the sparsity of degree and subtour constraints to calculate the factor-to-variable messages in O(N) and O(M) time, respectively.
3.1
Closed Form of Messages
For simplicity we work with the normalized message ν_{I→e} ≜ μ_{I→e}(1) − μ_{I→e}(0), which is equivalent to assuming μ_{I→e}(0) = 0 for all I, e. The same notation is used for the variable-to-factor message and the marginal belief; we refer to the normalized marginal belief ν_e = μ_e(1) − μ_e(0) as the bias.

Figure 1: (left) The message passing results after each augmentation step for the complete graph of the printing board instance from [31]. The blue lines in each figure show the selected edges at the end of message passing. The pale red lines show the edges whose bias, although negative (ν_e < 0), was close to zero. (middle) Clustering of the power network (N = 4941) by message passing. Different clusters have different colors and the nodes are scaled by their degree. (right) Clustering of the politician blogs network (N = 1490) by message passing and by meta-data, i.e., liberal or conservative.

Despite their exponentially large tabular form, both degree and subtour constraint factors for TSP are sparse. Similar forms of factors are studied in several previous works [7, 8, 9]. By calculating the closed form of these messages for the TSP factor-graph, we observe that they have a surprisingly simple form. Rewriting eq 3 for degree constraint factors, we get:
    μ_{∂vi→e}(1) = min{ ν_{e′→∂vi} }_{e′ ∈ ∂vi\e} ,    μ_{∂vi→e}(0) = min{ ν_{e′→∂vi} + ν_{e″→∂vi} }_{e′,e″ ∈ ∂vi\e}        (4)
where we have dropped the summation and the factor from eq 3. For x_e = 1, in order to have f_{∂vi}(x_{∂vi}) < ∞, exactly one other x_{e′} ∈ x_{∂vi} should be non-zero. On the other hand, we know that messages are normalized such that μ_{e→∂vi}(0) = 0 for all vi and e ∈ ∂vi, which means they can be ignored in the summation. For x_e = 0, in order to satisfy the constraint factor, two of the adjacent variables should have a non-zero value; therefore we seek the two such incoming messages with minimum values. Let min[k] A denote the k-th smallest value in the set A, i.e., min A ≡ min[1] A. We combine the updates above to get a normalized message ν_{∂vi→e}, which is simply the negative of the second smallest incoming message (excluding ν_{e→∂vi}) to the factor f_{∂vi}:

    ν_{∂vi→e} = μ_{∂vi→e}(1) − μ_{∂vi→e}(0) = − min[2]{ ν_{e′→∂vi} }_{e′ ∈ ∂vi\e}        (5)
Following a similar procedure, the factor-to-variable message for subtour constraints is given by

    ν_{∂(S)→e} = − max{ 0, min[2]{ ν_{e′→∂(S)} }_{e′ ∈ ∂(S)\e} }        (6)
Here, while searching for the minimum incoming messages, if we encounter two messages with negative or zero values we can safely conclude ν_{∂(S)→e} = 0 and stop the search; this results in a significant speedup in practice. Note that both eq 5 and eq 6 only need the second smallest message in the set {ν_{e′→∂(S)}}_{e′ ∈ ∂(S)\e}. In an asynchronous calculation of messages, this minimization has to be repeated for each outgoing message; in a synchronous update, by finding the three smallest incoming messages to each factor we can calculate all of its factor-to-variable messages at the same time.
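A sketch of this synchronous trick (ours, not the authors' code): compute the three smallest incoming messages once per degree factor, then read off eq 5 for every outgoing edge.

import heapq

def degree_factor_messages(incoming):
    # incoming: dict edge -> nu_{e -> dv}; returns dict edge -> nu_{dv -> e} (eq 5)
    smallest3 = heapq.nsmallest(3, incoming.items(), key=lambda kv: kv[1])
    out = {}
    for e in incoming:
        # exclude e itself, then take the second smallest of what remains
        cand = sorted(v for k, v in smallest3 if k != e)
        out[e] = -cand[1] if len(cand) >= 2 else 0.0  # guard for factors with < 3 edges
    return out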
For the clustering factor-graph, the clique factor is satisfied only if either zero, one, or all three of the variables in its domain are non-zero. The factor-to-variable messages are given by

    μ_{{i:j,j:k,i:k}→i:j}(0) = min{ 0, ν_{j:k→{i:j,j:k,i:k}}, ν_{i:k→{i:j,j:k,i:k}} }
    μ_{{i:j,j:k,i:k}→i:j}(1) = min{ 0, ν_{j:k→{i:j,j:k,i:k}} + ν_{i:k→{i:j,j:k,i:k}} }        (7)

For x_{i:j} = 0, the minimization is over three feasible cases: (a) x_{j:k} = x_{i:k} = 0, (b) x_{j:k} = 1, x_{i:k} = 0, and (c) x_{j:k} = 0, x_{i:k} = 1. For x_{i:j} = 1, there are two feasible cases: (a) x_{j:k} = x_{i:k} = 0 and (b) x_{j:k} = x_{i:k} = 1. Normalizing these messages, we have

    ν_{{i:j,j:k,i:k}→i:j} = min{ 0, ν_{j:k→{i:j,j:k,i:k}} + ν_{i:k→{i:j,j:k,i:k}} } − min{ 0, ν_{j:k→{i:j,j:k,i:k}}, ν_{i:k→{i:j,j:k,i:k}} }        (8)
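In code, eq 8 is a one-liner; this sketch (ours) computes the message a clique factor sends to the edge i:j from the two messages it receives on the other triangle edges.

def clique_factor_message(nu_jk, nu_ik):
    # eq 8: nu_{ {i:j,j:k,i:k} -> i:j } from incoming nu_{j:k -> f} and nu_{i:k -> f}
    return min(0.0, nu_jk + nu_ik) - min(0.0, nu_jk, nu_ik)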
3.2
Finding Violations
Due to the large number of factors, message passing on the full factor-graph is not practical in our applications. Our solution is to start with a minimal set of constraints: for the TSP we start with no subtour constraints, and for clustering we start with no clique constraints. We then use message passing to find the marginal beliefs ν_e and select the edges with positive bias, ν_e > 0.
Figure 2: Results of message passing for TSP on different benchmark problems. From left to right, the plots show: (a) running time, (b) optimality ratio (compared to Concorde), (c) iterations of augmentation, and (d) number of subtour constraints, all as a function of the number of nodes. Optimality is relative to the result reported by Concorde. Note that all plots except optimality are log-log plots, where a linear trend shows a monomial relation (y = ax^m) between the values on the x and y axes, with the slope giving the power m.
We then find the constraints that are violated. For the TSP, this is achieved by finding the connected components C = {Si ⊆ V} of the solution in O(N) time and defining a new subtour constraint for each Si ∈ C (see Figure 1(left)). For graph partitioning, we simply look at the pairs of positively fixed edges around each node; if the third edge of the triangle is not positively fixed, we add the corresponding clique factor to the factor-graph (see Appendix A for more details).
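A sketch of the TSP augmentation step (ours; the union-find representation is an assumption): collect the edges fixed to one, find the connected components, and emit one new subtour constraint per component.

def violated_subtours(nodes, selected_edges):
    parent = {v: v for v in nodes}            # union-find over the nodes
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]     # path halving
            v = parent[v]
        return v
    for i, j in selected_edges:
        parent[find(i)] = find(j)
    components = {}
    for v in nodes:
        components.setdefault(find(v), set()).add(v)
    components = list(components.values())
    # a single component means no subtour constraint is violated
    return components if len(components) > 1 else []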
4
Experiments
4.1 TSP
Here we evaluate our method over five benchmark datasets: (I) TSPLIB, which contains a variety
of real-world benchmark instances, the majority of which are 2D or 3D Euclidean or geographic distances.²
Table 1: Comparison of different modularity optimization methods. For each method and each data-set we report the modularity of the communities found and the time (in seconds). For message passing, L = |E ∪ E_null| and Cost is the percentage of all constraints considered by the augmentation.

Problem         Weighted?  Nodes  Edges  | MP (full): L   Cost    Mod.   Time  | MP (sparse): L  Cost    Mod.   Time
polbooks        n          105    441    | 5461           5.68%   0.511  .07   | 3624            13.55%  0.506  .04
football        n          115    615    | 6554           27.85%  0.591  0.41  | 5635            17.12%  0.594  0.14
wkarate         y          34     78     | 562            12.34%  0.431  0     | 431             15.14%  0.401  0
netscience      y          1589   2742   | NA             NA      NA     NA    | 53027           .0004%  0.941  2.01
dolphins        n          62     159    | 1892           14.02%  0.508  0.01  | 1269            6.50%   0.521  0.01
lesmis          y          77     254    | 2927           5.14%   0.531  0     | 1601            1.7%    0.534  0.01
celegansneural  y          297    2359   | 43957          16.70%  0.391  10.89 | 21380           3.16%   0.404  2.82
polblogs        n          1490   19090  | NA             NA      NA     NA    | 156753          .14%    0.411  32.75
karate          n          34     78     | 562            14.32%  0.355  0     | 423             17.54%  0.390  0

Problem         | Spin-glass: Mod.  Time   | L-Eigenvector: Mod.  Time   | FastGreedy: Mod.  Time   | Louvain: Mod.  Time
polbooks        | 0.525             1.648  | 0.467                0.179  | 0.501             0.643  | 0.489          0.03
football        | 0.601             0.87   | 0.487                0.151  | 0.548             0.08   | 0.602          0.019
wkarate         | 0.444             0.557  | 0.421                0.095  | 0.410             0.085  | 0.443          0.027
netscience      | 0.907             8.459  | 0.889                0.303  | 0.926             0.154  | 0.948          0.218
dolphins        | 0.523             0.728  | 0.491                0.109  | 0.495             0.107  | 0.517          0.011
lesmis          | 0.529             1.31   | 0.483                0.081  | 0.472             0.073  | 0.566          0.011
celegansneural  | 0.406             5.849  | 0.278                0.188  | 0.367             0.12   | 0.435          0.031
polblogs        | 0.427             67.674 | 0.425                0.33   | 0.427             0.305  | 0.426          0.099
karate          | 0.417             0.531  | 0.393                0.086  | 0.380             0.079  | 0.395          0.009
(II) Euclidean distance between random points in 2D. (III) Random (symmetric) distance matrices. (IV) Hamming distance between random binary vectors with fixed length (20 bits); this appears in applications such as data compression [32] and radiation hybrid mapping in genomics [33]. (V) Correlation distance between random vectors with 5 random features (e.g., using TSP for gene co-clustering [34]). In producing random points and features, as well as random distances in (III), we used the uniform distribution over [0, 1].
For each of these cases, we report the (a) run-time, (b) optimality, (c) number of iterations of augmentation and (d) number of subtour factors at the final iteration. In all of the experiments, we use Concorde [18] with its default settings to obtain the optimal solution.³ Since there is a very large number of TSP solvers, comparison with any particular method is pointless; instead we evaluate the quality of message passing against the "optimal" solution. The results in Figure 2 (2nd column from left) report the optimality ratio, i.e., the ratio of the tour found by message passing to the optimal tour. To demonstrate the non-triviality of these instances, we also report the optimality ratio for two heuristics that have optimality guarantees for metric instances [35]: (a) the nearest neighbour heuristic (O(N²)), which incrementally adds, to either end of the current path, the closest city that does not form a loop; (b) the greedy algorithm (O(N² log(N))), which incrementally adds the lowest-cost edge to the current edge-set, while avoiding subtours.
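For reference, a sketch (ours) of the nearest-neighbour baseline as described, growing the path from either end:

def nearest_neighbour_tour(dist, start=0):
    # dist: symmetric matrix (list of lists) of pairwise distances
    n = len(dist)
    path, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        head, tail = path[0], path[-1]
        best_head = min(unvisited, key=lambda c: dist[head][c])
        best_tail = min(unvisited, key=lambda c: dist[tail][c])
        if dist[head][best_head] < dist[tail][best_tail]:
            path.insert(0, best_head); unvisited.remove(best_head)
        else:
            path.append(best_tail); unvisited.remove(best_tail)
    return path  # the tour closes by returning from path[-1] to path[0]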
In all experiments, we used the full graph G = (V, E), which means each iteration of message passing is O(N²τ), where τ is the number of subtour factors. All experiments use Tmax = 200 iterations, ϵmax = median{d(e)}_{e∈E} and damping with λ = 0.2. We used decimation, fixing 10% of the remaining variables (out of N) per iteration of decimation.⁴ This increases the cost of message passing by an O(log(N)) multiplicative factor; however, it often produces better results. All the plots in Figure 2, except for the second column, are in log-log format. When using a log-log plot, a linear trend shows a monomial relation between the x and y axes, i.e., y = ax^m; here m indicates the slope of the line in the plot and the intercept corresponds to log(a). By studying the slope of the linear trend in the run-time (left column) of Figure 2, we observe that, for almost all instances, message passing seems to grow as N³ (i.e., a slope of ≈ 3). Exceptions are the TSPLIB instances, which seem to pose a greater challenge, and random distance matrices, which seem to be easier for message passing. A similar trend is suggested by the number of subtour constraints and the iterations of augmentation, which have a slope of ≈ 1, suggesting a linear dependence on N. Again the exceptions are TSPLIB instances, which grow faster than N, and random distance matrices, which seem to grow sub-linearly.⁵ Finally, the results in the second column suggest that message passing is able to find near-optimal (on average ≈ 1.1-optimal) solutions for almost all instances, and the quality of tours does not degrade with an increasing number of nodes.
² Geographic distance is the distance on the surface of the earth as a large sphere.
³ For many larger instances, Concorde (with default setting and using CPLEX as LP solver) was not able to find the optimal solution. Nevertheless we used the upper-bound on the optimal produced by Concorde in evaluating our method.
⁴ Note that here we are only fixing the top N variables with positive bias. The remaining M − N variables are automatically clamped to zero.
⁵ Since we measured the time in milliseconds, the first column does not show the instances that had a running time of less than a millisecond.
4.2
Graph Partitioning
For graph partitioning, we experimented with a set of classic benchmarks.⁶ Since the optimization criterion is modularity, we compared our method only against the best known modularity optimization heuristics: (a) FastModularity [27], (b) Louvain [30], (c) Spin-glass [26] and (d) Leading eigenvector [28]. For message passing, we use λ = 0.1, ϵmax = median{|ω(e) − ω_null(e)|}_{e ∈ E∪E_null} and Tmax = 10. Here we do not perform any decimation, and directly fix the variables based on their bias: ν_e > 0 ⇒ x_e = 1.
Table 1 summarizes our results (see also Figure 1(middle, right)). For each method and each data-set, we report the time (in seconds) and the modularity of the communities found. The table includes the results of message passing for both full and sparse null models, where we used a constant γ = 20 to generate the stochastic sparse null model. For message passing, we also include L = |E ∪ E_null| and the saving in cost obtained by augmentation: this column shows the percentage of all the constraints that were actually considered by the augmentation. For example, the cost of .14% for the polblogs data-set means that augmentation with the sparse null model used only .0014 times as many clique-factors as the full factor-graph. Overall, the results suggest that our method is comparable to the state-of-the-art in terms of both time and quality of clustering. More importantly, it shows that augmentative message passing is able to find feasible solutions using a small portion of the constraints.
5
Conclusion
We investigate the possibility of using cutting-plane-like augmentation procedures with message passing, and we use this procedure to solve two combinatorial problems: the TSP and modularity optimization. In particular, our polynomial-time message passing solution to the TSP often finds near-optimal solutions to a variety of benchmark instances.
Despite losing the guarantees that make the cutting plane method very powerful, our approach has several advantages. First, message passing is more efficient than LP for structured optimization [13] and it is highly parallelizable. Moreover, by directly obtaining integral solutions, it is much easier to find violated constraints. (Note that the cutting plane method for combinatorial problems operates on fractional solutions, whose rounding may eliminate its guarantees.) For example, for TSPs, our method simply adds violated constraints by finding connected components; due to non-integral assignments, cutting plane methods require sophisticated tricks to find violations [21]. Although powerful branch-and-cut methods, such as Concorde, are able to exactly solve instances with a few thousand variables, their general run-time on benchmark instances remains exponential [18, p. 495], while our approximation appears to be O(N³). Overall, our studies indicate that augmentative message passing is an efficient procedure for constraint optimization with a large number of constraints.
References
[1] M. Mezard, G. Parisi, and R. Zecchina, ?Analytic and algorithmic solution of random satisfiability problems,? Science, 2002.
[2] S. Ravanbakhsh and R. Greiner, ?Perturbed message passing for constraint satisfaction problems,? arXiv
preprint arXiv:1401.6686, 2014.
[3] B. Frey and D. Dueck, ?Clustering by passing messages between data points,? Science, 2007.
[4] M. Bayati, D. Shah, and M. Sharma, ?Maximum weight matching via max-product belief propagation,?
in ISIT, 2005.
[5] S. Ravanbakhsh, C. Srinivasa, B. Frey, and R. Greiner, ?Min-max problems on factor-graphs,? ICML,
2014.
[6] B. Huang and T. Jebara, ?Approximating the permanent with belief propagation,? arXiv preprint
arXiv:0908.1769, 2009.
⁶ Obtained from Mark Newman's website: http://www-personal.umich.edu/~mejn/netdata/
[7] B. Potetz and T. S. Lee, ?Efficient belief propagation for higher-order cliques using linear constraint
nodes,? Computer Vision and Image Understanding, vol. 112, no. 1, pp. 39?54, 2008.
[8] R. Gupta, A. A. Diwan, and S. Sarawagi, ?Efficient inference with cardinality-based clique potentials,? in
ICML, 2007.
[9] D. Tarlow, I. E. Givoni, and R. S. Zemel, ?Hop-map: Efficient message passing with high order potentials,? in International Conference on Artificial Intelligence and Statistics, pp. 812?819, 2010.
[10] G. Dantzig, R. Fulkerson, and S. Johnson, "Solution of a large-scale traveling-salesman problem," Journal of the Operations Research Society of America, 1954.
[11] R. E. Gomory et al., ?Outline of an algorithm for integer solutions to linear programs,? Bulletin of the
American Mathematical Society, vol. 64, no. 5, pp. 275?278, 1958.
[12] D. Sontag and T. S. Jaakkola, ?New outer bounds on the marginal polytope,? in Advances in Neural
Information Processing Systems, pp. 1393?1400, 2007.
[13] C. Yanover, T. Meltzer, and Y. Weiss, ?Linear programming relaxations and belief propagation?an empirical study,? JMLR, 2006.
[14] F. Kschischang and B. Frey, ?Factor graphs and the sum-product algorithm,? Information Theory, IEEE,
2001.
[15] C. H. Papadimitriou, "The Euclidean travelling salesman problem is NP-complete," Theoretical Computer Science, vol. 4, no. 3, pp. 237-244, 1977.
[16] M. Held and R. M. Karp, "A dynamic programming approach to sequencing problems," Journal of the Society for Industrial & Applied Mathematics, vol. 10, no. 1, pp. 196-210, 1962.
[17] M. Padberg and G. Rinaldi, ?A branch-and-cut algorithm for the resolution of large-scale symmetric
traveling salesman problems,? SIAM review, vol. 33, no. 1, pp. 60?100, 1991.
[18] D. Applegate, R. Bixby, V. Chvatal, and W. Cook, ?Concorde TSP solver,? 2006.
[19] K. Helsgaun, ?General k-opt submoves for the lin?kernighan tsp heuristic,? Mathematical Programming
Computation, 2009.
[20] C. Wang, J. Lai, and W. Zheng, ?Message-passing for the traveling salesman problem,?
[21] D. Applegate, The traveling salesman problem: a computational study. Princeton, 2006.
[22] M. Held and R. Karp, ?The traveling-salesman problem and minimum spanning trees,? Operations Research, 1970.
[23] J. Leskovec, K. Lang, and M. Mahoney, ?Empirical comparison of algorithms for network community
detection,? in WWW, 2010.
[24] M. Newman and M. Girvan, ?Finding and evaluating community structure in networks,? Physical Review
E, 2004.
[25] U. Brandes, D. Delling, et al., "On modularity clustering," IEEE Transactions on Knowledge and Data Engineering, 2008.
[26] J. Reichardt and S. Bornholdt, "Detecting fuzzy community structures in complex networks with a Potts model," Physical Review Letters, vol. 93, no. 21, p. 218701, 2004.
[27] A. Clauset, ?Finding local community structure in networks,? Physical Review E, 2005.
[28] M. Newman, ?Finding community structure in networks using the eigenvectors of matrices,? Physical
review E, 2006.
[29] P. Ronhovde and Z. Nussinov, ?Local resolution-limit-free potts model for community detection,? Physical Review E, vol. 81, no. 4, p. 046114, 2010.
[30] V. Blondel, J. Guillaume, et al., ?Fast unfolding of communities in large networks,? J Statistical Mechanics, 2008.
[31] G. Reinelt, "TSPLIB - a traveling salesman problem library," ORSA Journal on Computing, vol. 3, no. 4, pp. 376-384, 1991.
[32] D. Johnson, S. Krishnan, J. Chhugani, S. Kumar, and S. Venkatasubramanian, ?Compressing large
Boolean matrices using reordering techniques,? in VLDB, 2004.
[33] A. Ben-Dor and B. Chor, ?On constructing radiation hybrid maps,? J Computational Biology, 1997.
[34] S. Climer and W. Zhang, ?Take a walk and cluster genes: A TSP-based approach to optimal rearrangement
clustering,? in ICML, 2004.
[35] D. Johnson and L. McGeoch, ?The traveling salesman problem: A case study in local optimization,?
Local search in combinatorial optimization, 1997.
Witness Protection Program
Ricardo Silva
Department of Statistical Science and CSML
University College London
[email protected]
Robin Evans
Department of Statistics
University of Oxford
[email protected]
Abstract
One of the most fundamental problems in causal inference is the estimation of a
causal effect when variables are confounded. This is difficult in an observational
study because one has no direct evidence that all confounders have been adjusted
for. We introduce a novel approach for estimating causal effects that exploits
observational conditional independencies to suggest "weak" paths in an unknown causal graph. The widely used faithfulness condition of Spirtes et al. is relaxed to allow for varying degrees of "path cancellations" that will imply conditional independencies but do not rule out the existence of confounding causal paths. The
outcome is a posterior distribution over bounds on the average causal effect via
a linear programming approach and Bayesian inference. We claim this approach
should be used in regular practice to complement other default tools in observational studies.
1
Contribution
We provide a new methodology to bound the average causal effect (ACE) of a variable X on a
variable Y . For binary variables, the ACE is defined as
    E[Y | do(X = 1)] − E[Y | do(X = 0)] = P(Y = 1 | do(X = 1)) − P(Y = 1 | do(X = 0)),        (1)

where do(·) is the operator of Pearl [14], denoting distributions where a set of variables has been intervened upon by an external agent. In the interest of space, we assume the reader is familiar with the concept of causal graphs, the basics of the do operator, and the basics of causal discovery algorithms such as the PC algorithm of Spirtes et al. [22]; we provide a short summary for context in Section 2.
The ACE is in general not identifiable from observational data. We obtain upper and lower bounds
on the ACE by exploiting a set of (binary) covariates, which we also assume are not effects of X or
Y (justified by temporal ordering or other background assumptions). Such covariate sets are often
found in real-world problems, and form the basis of most observational studies done in practice [21].
However, it is not obvious how to obtain the ACE as a function of the covariates. Our contribution
modifies the results of Entner et al. [6], who exploit conditional independence constraints to obtain
point estimates of the ACE, but give point estimates relying on assumptions that might be unstable
in practice. Our modification provides a different interpretation of their search procedure, which we
use to generate candidate instrumental variables [11]. The linear programming approach of Dawid
[5] and Ramsahai [16] is then modified to generate bounds on the ACE by introducing constraints on
some causal paths, motivated as relaxations of [6]. The new setup can be computationally expensive,
so we introduce further relaxations to the linear program to generate novel symbolic bounds, and a
fast algorithm that sidesteps the full linear programming optimization with some simple, message
passing-like, steps.
1
Figure 1: (a) A generic causal graph where X and Y are confounded by some U. (b) The same system as in (a), where X is intervened upon by an external agent. (c) A system where W and Y are independent given X. (d) A system where it is possible to use faithfulness to discover that U is sufficient to block all back-door paths between X and Y. (e) Here, U itself is not sufficient.
Section 2 introduces the background of the problem and Section 3 our methodology. Section 4 discusses an analytical approximation of the main results, and a way by which this provides scaling-up possibilities for the approach. Section 5 contains experiments with synthetic and real data.
2
Background: Instrumental Variables, Witnesses and Admissible Sets
Assuming X is a potential cause of Y , but not the opposite, a cartoon of the causal system containing
X and Y is shown in Figure 1(a). U represents the universe of common causes of X and Y . In
control and policy-making problems, we would like to know what happens to the system when the
distribution of X is overridden by some external agent (e.g., a doctor, a robot or an economist).
The resulting modified system is depicted in Figure 1(b), and represents the family of distributions
indexed by do(X = x): the graph in (a) has undergone a ?surgery? that wipes out edges, as originally
discussed by [22] in the context of graphical models. Notice that if U is observed in the dataset, then we can obtain the distribution P(Y = y | do(X = x)) by simply calculating Σ_u P(Y = y | X = x, U = u) P(U = u) [22]. This was popularized by [14] as the back-door adjustment. In general, P(Y = y | do(X = x)) can be vastly different from P(Y = y | X = x).
The ACE is simple to estimate in a randomized trial: this follows from estimating the conditional
distribution of Y given X under data generated as in Figure 1(b). In contrast, in an observational
study [21] we obtain data generated by the system in Figure 1(a). If one believes all relevant confounders U have been recorded in the data then back-door adjustment can be used, though such
completeness is uncommon. By postulating knowledge of the causal graph relating components
of U , one can infer whether a measured subset of the causes of X and Y is enough [14, 23, 15].
Without knowledge of the causal graph, assumptions such as faithfulness [22] are used to infer it.
The faithfulness assumption states that a conditional independence constraint in the observed distribution exists if and only if a corresponding structural independence exists in the underlying causal graph. For instance, observing the independence W ⊥⊥ Y | X, and assuming faithfulness and the causal order, we can infer the causal graph of Figure 1(c); in none of the other graphs is this conditional independence implied. We deduce that no unmeasured confounders between X and Y exist.
This simple procedure for identifying chains W ? X ? Y is useful in exploratory data analysis
[4], where a large number of possible causal relations X ? Y are unquantified but can be screened
using observational data before experiments are performed. The idea of using faithfulness is to be
able to sometimes identify such quantities.
Entner et al. [6] generalize the discovery of chain models to situations where a non-empty set of
covariates is necessary to block all back-doors. Suppose W is a set of covariates which are known
not to be effects of either X or Y , and we want to find an admissible set contained in W: a set
of observed variables which we can use for back-door adjustment to get P (Y = y | do(X = x)).
Entner's "Rule 1" states the following:

Rule 1: If there exists a variable W ∈ W and a set Z ⊆ W\{W} such that (i) W ⊥/⊥ Y | Z and (ii) W ⊥⊥ Y | Z ∪ {X}, then infer that Z is an admissible set.
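As a schematic of how Rule 1 can be operationalized (our sketch; dep stands for whatever conditional dependence decision the reader prefers, e.g. the Bayesian model selection used later in the paper):

from itertools import chain, combinations

def rule1_pairs(W, dep):
    # W: list of covariate names; dep(a, b, S) -> True iff a and b are judged dependent given S
    for w in W:
        rest = [v for v in W if v != w]
        all_subsets = chain.from_iterable(combinations(rest, k) for k in range(len(rest) + 1))
        for Z in all_subsets:
            if dep(w, 'Y', Z) and not dep(w, 'Y', Z + ('X',)):
                yield w, Z   # w is a witness for the admissible set Z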
A point estimate of the ACE can then be found using Z. Given that (W, Z) satisfies Rule 1,¹ we call W a witness for the admissible set Z. The model in Figure 1(c) can be identified with Rule 1, where W is the witness and Z = ∅. In this case, the so-called Naïve Estimator² P(Y = 1 | X = 1) − P(Y = 1 | X = 0) will provide the correct ACE. If U is observable in Figure 1(d), then it can be identified as an admissible set for witness W. Notice that in Figure 1(a), taking U as a scalar, it is not possible to find a witness since there are no remaining variables. Also, if in Figure 1(e) our covariate set W is {W, U}, then no witness can be found since U′ cannot be blocked. Hence, it is possible for a procedure based on Rule 1 to answer "I don't know whether an admissible set exists" even when a back-door adjustment would be possible if one knew the causal graph. However, using the faithfulness assumption alone one cannot do better: Rule 1 is complete for non-zero effects without more information [6].
Despite its appeal, the faithfulness assumption is not without difficulties. Even if unfaithful distributions can be ruled out as pathological under seemingly reasonable conditions [13], distributions
which lie close to (but not on) a simpler model may in practice be indistinguishable from distributions within that simpler model at finite sample sizes. To appreciate these complications, consider
the structure in Figure 1(d) with U unobservable. Here W is randomized but X is not, and we would
like to know the ACE of X on Y.³ W is sometimes known as an instrumental variable (IV), and we call Figure 1(d) the standard IV structure; if this structure is known, optimal bounds L_IV ≤ ACE ≤ U_IV can be obtained without further assumptions, using only observational data over the binary variables W, X and Y [1]. There exist distributions faithful to the IV structure but which at finite sample sizes may appear to satisfy the Markov property for the structure W → X → Y; in practice this can occur at any finite sample size [20]. The true average causal effect may lie anywhere in the interval [L_IV, U_IV] (which can be rather wide), and may differ considerably from the naïve estimate appropriate for the simpler structure. While we emphasize that this is a "worst-case scenario" analysis and by itself should not rule out faithfulness as a useful assumption, it is desirable to provide a method that gives greater control over violations of faithfulness.
3
Methodology: the Witness Protection Program
The core of our idea is (i) to invert the usage of Entner's Rule 1, so that pairs (W, Z) provide an instrumental variable bounding method instead of a back-door adjustment; (ii) to express violations of faithfulness as bounded violations of local independence; (iii) to find bounds on the ACE using a linear programming formulation.

Let (W, Z) be any pair found by a search procedure that decides when Rule 1 holds. W will play the role of an instrumental variable, instead of being discarded. A standard IV bounding procedure such as [1] can be used conditional on each individual value z of Z, then averaged over P(Z). The lack of an edge W → Y given Z can be justified by faithfulness (as W ⊥⊥ Y | {X, Z}). For the same reason, there might be no (conditional) dependence between W and a possible unmeasured common parent of X and Y. However, assuming faithfulness itself is not interesting, as a back-door adjustment could be directly obtained. Allowing unconstrained dependencies induced by edges W → Y and (W, U) (in any direction) is also a non-starter, as all bounds will be vacuous [16].
Consider instead the (partial) parameterization in Table 1 of the joint distribution of {W, X, Y, U}, where U is latent and not necessarily a scalar. For simplicity of presentation, assume we are conditioning everywhere on a particular value z of Z, which we suppress from our notation as it will not be crucial to the developments in this section. Under this notation, the ACE is given by

    η₁₁ P(W = 1) + η₁₀ P(W = 0) − η₀₁ P(W = 1) − η₀₀ P(W = 0).        (2)

¹ The work in [6] also aims at identifying zero effects with a "Rule 2". For simplicity we assume that the effect of interest was already identified as non-zero.
² Sometimes we use the word "estimator" to mean a functional of the probability distribution instead of a statistical estimator that is a function of samples of this distribution. Context should make it clear when we refer to an actual statistic or a functional.
³ A classical example is non-compliance: suppose W is the assignment of a patient to either drug or placebo, X is whether the patient actually took the medicine or not, and Y is a measure of health status. The doctor controls W but not X. This problem is discussed by [14] and [5].
    ζ*_{yx.w} ≜ P(Y = y, X = x | W = w, U)
    ζ_{yx.w} ≜ Σ_U P(Y = y, X = x | W = w, U) P(U | W = w) = P(Y = y, X = x | W = w)

    η*_{xw} ≜ P(Y = 1 | X = x, W = w, U)
    η_{xw} ≜ Σ_U P(Y = 1 | X = x, W = w, U) P(U | W = w) = P(Y = 1 | do(X = x), W = w)

    δ*_w ≜ P(X = 1 | W = w, U)
    δ_w ≜ Σ_U P(X = 1 | W = w, U) P(U | W = w) = P(X = 1 | W = w)

Table 1: A partial parameterization of a causal DAG model over some {U, W, X, Y}. Notice that such parameters cannot be functionally independent, and this is precisely what we will exploit.
We now introduce the following assumptions:

    |η*_{x1} − η*_{x0}| ≤ ε_w        (3)
    |η*_{xw} − P(Y = 1 | X = x, W = w)| ≤ ε_y        (4)
    |δ*_w − P(X = 1 | W = w)| ≤ ε_x        (5)
    β P(U) ≤ P(U | W = w) ≤ β̄ P(U)        (6)

Setting ε_w = 0 and β = β̄ = 1 recovers the standard IV structure. Further assuming ε_y = ε_x = 0 recovers the chain structure W → X → Y. Deviation from these values corresponds to a violation of faithfulness, as the premises of Rule 1 can only be satisfied by enforcing functional relationships among the conditional probability tables of each vertex. Using this parameterization in the case ε_y = ε_x = 1, β = β̄ = 1, Ramsahai [16], extending [5], used the following linear programming approach to obtain bounds on the ACE (for now, assume that ζ_{yx.w} and P(W = w) are known constants):
1. There is a 4-dimensional polytope in which the parameters {η*_{xw}} can take values: for ε_w = ε_y = 1, this is the unit hypercube [0, 1]⁴. Find the extreme points of this polytope (up to 12 points for the case where ε_w > 0). Do the same for {δ*_w}.

2. Find the extreme points of the joint space {ζ*_{yx.w}} by mapping them from the points in {δ*_w} × {η*_{xw}}, since ζ*_{1x.w} = (δ*_w)^x (1 − δ*_w)^{1−x} η*_{xw}.

3. Using the extreme points of the 12-dimensional joint space {ζ*_{yx.w}} × {η*_{xw}}, find the dual polytope of this space in terms of linear inequalities. Points in this polytope are convex combinations of {ζ*_{yx.w}} × {η*_{xw}}, shown by [5] to correspond to the marginalization over some arbitrary P(U). This results in constraints over {ζ_{yx.w}} × {η_{xw}}.

4. Maximize/minimize (2) with respect to {η_{xw}} subject to the constraints found in Step 3 to obtain upper/lower bounds on the ACE.
Allowing for the case ε_x < 1 or ε_y < 1 is just a matter of changing the first step, where box constraints are set on each individual parameter as a function of the known P(Y = y, X = x | W = w), prior to the mapping in Step 2. The resulting constraints are now implicitly non-linear in P(Y = y, X = x | W = w), but at this stage this does not matter, as they are treated as constants. To allow for the case β < 1 < β̄, use exactly the same procedure, but substitute every occurrence of ζ*_{yx.w} in the constraints by ζ̃*_{yx.w} ≜ Σ_U ζ*_{yx.w} P(U); notice the difference between ζ̃_{yx.w} and ζ_{yx.w}. Likewise, substitute every occurrence of η*_{xw} in the constraints by η̃*_{xw} ≜ Σ_U η*_{xw} P(U). Instead of plugging in constants for the values of ζ_{yx.w} and turning the crank of a linear programming solver, we first treat {ζ̃_{yx.w}} (and {η̃_{xw}}) as unknowns, linking them to the observables and to η_{xw} by the constraints ζ_{yx.w}/β̄ ≤ ζ̃_{yx.w} ≤ ζ_{yx.w}/β, Σ_{yx} ζ̃_{yx.w} = 1, and η_{xw}/β̄ ≤ η̃_{xw} ≤ η_{xw}/β. Finally, the method can be easily implemented using a package such as Polymake (http://www.polymake.org) or SCDD for R. More details are given in the Supplemental Material.
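Once Step 3 has produced linear constraints over the four unknowns η_xw, Step 4 is an ordinary linear program. A sketch with scipy (ours; A_ub and b_ub stand for whatever inequalities the dual polytope produced):

import numpy as np
from scipy.optimize import linprog

def ace_bounds(A_ub, b_ub, p_w1):
    # unknowns ordered as [eta_00, eta_01, eta_10, eta_11], eta_xw = P(Y=1 | do(X=x), W=w)
    c = np.array([-(1.0 - p_w1), -p_w1, 1.0 - p_w1, p_w1])  # the ACE of eq 2 as c . eta
    lower = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 4).fun
    upper = -linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 4).fun
    return lower, upper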
In this paper, we will not discuss in detail how to choose the free parameters of the relaxation. Any choice of ε_w ≥ 0, ε_y ≥ 0, ε_x ≥ 0, 0 < β ≤ 1 ≤ β̄ is guaranteed to provide bounds that are at least as conservative as the back-door adjusted point estimator of [6], which is always covered by the bounds.
input : Binary data matrix D; set of relaxation parameters θ; covariate index set W; cause-effect indices X and Y
output: A list of pairs (witness, admissible set) contained in W
L ← ∅
for each W ∈ W do
    for every admissible set Z ⊆ W\{W} identified by W and θ given D do
        B ← posterior over upper/lower bounds on the ACE as given by (W, Z, X, Y, D, θ)
        if there is no evidence in B to falsify the (W, Z, θ) model then
            L ← L ∪ {B}
        end
    end
end
return L

Algorithm 1: The outline of the Witness Protection Program algorithm.
Background knowledge, after a user is suggested a witness and admissible set, can be used here. In Section 5 we experiment with a few choices of default parameters. To keep focus, in what follows we discuss only computational aspects; we develop a framework for choosing relaxation parameters in the Supplemental Material, and expect to extend it in follow-up publications.
As the approach provides the witness a degree of protection against faithfulness violations, using a linear program, we call this framework the Witness Protection Program (WPP).
3.1
Bayesian Learning
The previous section treated ζ_{yx.w} and P(W = w) as known. A common practice is to replace them by plug-in estimators (and in the case of a non-empty admissible set Z, an estimate of P(Z) is also necessary). Such models can also be falsified, as the constraints generated are typically only supported by a strict subset of the probability simplex. In principle, one could fit parameters without constraints, and test the model by a direct check of satisfiability of the inequalities using the plug-in values. However, this does not take into account the uncertainty in the estimation. For the standard IV model, [17] discusses a proper way of testing such models in a frequentist sense.

Our models can be considerably more complicated. Recall that constraints will depend on the extreme points of the {ζ*_{yx.w}} parameters. As implied by (4) and (5), extreme points will be functions of ζ_{yx.w}. Writing the constraints fully in terms of the observed distribution will reveal non-linear relationships. We approach the problem in a Bayesian way. We will assume first that the dimensionality of Z is modest (say, 10 or less), as this is the case in most applications of faithfulness to causal discovery. We parameterize P(Y, X, W | Z) as a full 2 × 2 × 2 contingency table.⁴

Given that the dimensionality of the problem is modest, we assign to each three-variate distribution P(Y, X, W | Z = z) an independent Dirichlet prior for every possible assignment of Z, constrained by the inequalities implied by the corresponding polytopes. The posterior is also an 8-dimensional constrained Dirichlet distribution, where we use rejection sampling to obtain a posterior sample by proposing from the unconstrained Dirichlet. A Dirichlet prior can also be assigned to P(Z). Using a sample from the posterior of P(Z) and a sample (for each possible value z) from the posterior of P(Y, X, W | Z = z), we obtain a sample upper and lower bound for the ACE.
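A sketch of the rejection sampler (ours; the feasible callback encapsulates the polytope check, e.g. via the back-substitution test of Section 4):

import numpy as np

def constrained_dirichlet_samples(alpha, feasible, n_samples, max_tries=100000, seed=0):
    # alpha: posterior Dirichlet parameters for the 8 cells of P(Y, X, W | Z = z)
    rng = np.random.default_rng(seed)
    kept = []
    for _ in range(max_tries):
        theta = rng.dirichlet(alpha)       # propose from the unconstrained Dirichlet
        if feasible(theta):
            kept.append(theta)
            if len(kept) == n_samples:
                break
    # a very high rejection rate (the paper's test uses 95%) suggests a poorly fitting model
    return np.array(kept)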
The full algorithm is shown in Algorithm 1. The search procedure is left unspecified, as different existing approaches can be plugged into this step; see [6] for a discussion. In Section 5 we deal with small dimensional problems only, using the brute-force approach of performing an exhaustive search for Z. In practice, brute-force can still be valuable by using a method such as discrete PCA [3] to reduce W\{W} to a small set of binary variables. To decide whether the premises in Rule 1 hold, we merely perform Bayesian model selection with the BDeu score [2] between the full graph {W → X, W → Y, X → Y} (conditional on Z) and the graph with the edge W → Y removed.

⁴ That is, we allow for dependence between W and Y given {X, Z}, interpreting the decision of independence used in Rule 1 as being only an indicator of approximate independence.
    η̃_{xw} ≥ ζ̃_{1x.w} + L^{YU}_{xw} (ζ̃_{0x′.w} + ζ̃_{1x′.w})        (7)

    η̃_{xw} ≤ 1 − (ζ̃_{0x.w′} − ε_w (ζ̃_{0x′.w′} + ζ̃_{1x′.w′})) / U^{XU}_{xw′}        (8)

    η̃_{xw} ≤ η̃_{xw′} U^{XU}_{x′w} + ζ̃_{1x.w} − ζ̃_{1x.w′} + ε_w (ζ̃_{0x′.w} + ζ̃_{1x′.w})        (9)

    η̃_{xw} + η̃_{x′w} − η̃_{x′w′} ≤ ζ̃_{1x′.w} + ζ̃_{1x.w} − ζ̃_{1x′.w′} + ζ̃_{1x.w′} − η̃_{xw′} (Ū + L̄ + 2ε_w) + L̄        (10)

Table 2: Some of the algebraic bounds found by symbolic manipulation of linear inequalities. Notation: x, w ∈ {0, 1}; x′ = 1 − x and w′ = 1 − w are the complementary values; L^{YU}_{xw} ≜ max(0, P(Y = 1 | X = x, W = w) − ε_y); U^{YU}_{xw} ≜ min(1, P(Y = 1 | X = x, W = w) + ε_y); L^{XU}_{xw} ≜ max(0, P(X = x | W = w) − ε_x), with U^{XU}_{xw} defined accordingly. Finally, Ū ≜ max{U^{YU}_{xw}}, L̄ ≜ min{L^{YU}_{xw}} and ζ̄_{xw} ≜ ζ̃_{1x.w} + ζ̃_{0x.w}. The full set of bounds with proofs can be found in the Supplementary Material.
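The caption's box quantities are straightforward to tabulate; a sketch (ours), where p_y1[x][w] = P(Y = 1 | X = x, W = w) and p_x[x][w] = P(X = x | W = w) are plug-in or posterior-sampled probabilities:

def box_bounds(p_y1, p_x, eps_y, eps_x):
    pairs = [(x, w) for x in (0, 1) for w in (0, 1)]
    L_YU = {(x, w): max(0.0, p_y1[x][w] - eps_y) for x, w in pairs}
    U_YU = {(x, w): min(1.0, p_y1[x][w] + eps_y) for x, w in pairs}
    L_XU = {(x, w): max(0.0, p_x[x][w] - eps_x) for x, w in pairs}
    U_XU = {(x, w): min(1.0, p_x[x][w] + eps_x) for x, w in pairs}
    return L_YU, U_YU, L_XU, U_XU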
Our "falsification test" in Step 5 is a simple and pragmatic one: our initial trial of rejection sampling proposes M samples, and if more than 95% of them are rejected, we take this as an indication that the proposed model provides a bad fit. The final result is a set of posterior distributions over bounds, possibly contradictory, which should be summarized as appropriate; Section 5 provides an example.
4
Algebraic Bounds and the Back-substitution Algorithm
Posterior sampling is expensive within the context of Bayesian WPP: constructing the dual polytope for possibly millions of instantiations of the problem is time consuming, even if each problem is small. Moreover, the numerical procedure described in Section 3 does not provide any insight on how the different free parameters {ε_w, ε_y, ε_x, β, β̄} interact to produce bounds, unlike the analytical bounds available in the standard IV case. [16] derives analytical bounds under (3) given a fixed, numerical value of ε_w. We know of no previous analytical bounds as an algebraic function of ε_w.

In the Supplementary Material, we provide a series of algebraic bounds as a function of our free parameters. Due to limited space, we show only some of the bounds in Table 2. They illustrate qualitative aspects of our free parameters. For instance, if ε_y = 1 and β = β̄ = 1, then L^{YU}_{xw} = 0 and (7) collapses to η̃_{xw} ≥ ζ̃_{1x.w}, one of the original relations found by [1] for the standard IV model. Decreasing ε_y will linearly increase L^{YU}_{xw}, tightening the corresponding lower bound in (7). If also ε_w = 0 and ε_x = 1, from (8) it follows that η̃_{xw} ≤ 1 − ζ̃_{0x.w′}. Equation (3) implies η̃_{x′w} − η̃_{x′w′} ≤ ε_w, and as such by setting ε_w = 0 we have that (10) implies η̃_{xw} ≤ ζ̃_{1x.w} + ζ̃_{1x.w′} − ζ̃_{1x′.w′} − ζ̃_{0x.w′}, one of the most complex relationships in [1]. Further geometric intuition about the structure of the binary standard IV model is given by [19].
These bounds are not tight, in the sense that we opted not to fully exploit all possible algebraic combinations for some results, such as (10): there we use L̄ ≤ η̃*_{xw} ≤ Ū and 0 ≤ δ̃*_w ≤ 1 instead of all possible combinations resulting from (4) and (5). The proof idea in the Supplementary Material can be further refined, at the expense of clarity. Because our derivation is a further relaxation, the implied bounds are more conservative (i.e., wider).
Besides providing insight on the structure of the problem, this gives a very efficient way of checking whether a proposed parameter vector {ζ̃*_{yx.w}} is valid, as well as of finding the bounds: use back-substitution on the symbolic set of constraints to find box constraints L_{xw} ≤ η̃_{xw} ≤ U_{xw}. The proposed parameter will be rejected whenever an upper bound is smaller than a lower bound, and (2) can be trivially optimized conditioning only on the box constraints; this is yet another relaxation, added on top of the ones used to generate the algebraic inequalities. We initialize by intersecting all algebraic box constraints (of which (7) and (8) are examples); next we refine these by scanning relations ±η̃_{xw} ∓ a η̃_{xw′} ≤ c such as (9) in lexicographical order, tightening the bounds on η̃_{xw} using the current upper and lower bounds on η̃_{xw′} where possible. We then identify constraints L_{xww′} ≤ η̃_{xw} − η̃_{xw′} ≤ U_{xww′}, starting from −ε_w ≤ η̃_{xw} − η̃_{xw′} ≤ ε_w and the existing bounds, and plug them into relations ±η̃_{xw} + η̃_{x′w} − η̃_{x′w′} ≤ c (as exemplified by (10)) to get refined bounds on η̃_{xw} as functions of (L_{x′ww′}, U_{x′ww′}). We iterate this until convergence, which is guaranteed since bounds never widen at any iteration. This back-substitution of inequalities follows the spirit of message-passing and is an order of magnitude more efficient than the fully numerical solution, while not increasing the width of the bounds by too much. In the Supplementary Material we provide evidence for this claim. In our experiments in Section 5, the back-substitution method was used in the testing stage of WPP. After collecting posterior samples, we calculated the posterior expected value of the contingency tables and ran the numerical procedure to obtain the final tight bound.⁵
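A schematic of the back-substitution pass (ours; encoding each symbolic inequality as a (key, other, a, c) tuple meaning eta[key] <= a * eta[other] + c is an assumption of the sketch):

def back_substitute(intervals, pair_rules, tol=1e-9):
    # intervals: dict key -> [lo, hi]; tightened in place until no interval moves
    changed = True
    while changed:
        changed = False
        for key, other, a, c in pair_rules:
            o_lo, o_hi = intervals[other]
            new_hi = c + (a * o_hi if a >= 0 else a * o_lo)   # worst case over the other interval
            if new_hi < intervals[key][1] - tol:
                intervals[key][1] = new_hi
                changed = True
            if intervals[key][0] > intervals[key][1] + tol:
                return None    # empty box: reject the proposed parameter vector
    return intervals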
5
Experiments
We describe a set of synthetic studies, followed by one study with the influenza data discussed by [9, 18]. In the synthetic study setup, we compare our method against NE1 and NE2, two naïve point estimators defined by back-door adjustment on the whole of W and on the empty set, respectively. The former is widely used in practice, even when there is no causal basis for doing so [15]. The point estimator of [6], based solely on the faithfulness assumption, is also assessed.

We generate problems where conditioning on the whole set W is guaranteed to give incorrect estimates.⁶ Here, |W| = 8. We analyze two variations: one where it is guaranteed that at least one valid witness-admissible set pair exists; in the other, latent variables in the graph are common parents also of X and Y, so no valid witness exists. We divide each variation into two subcases: in the first, "hard" subcase, parameters are chosen (by rejection sampling) so that NE1 has a bias of at least 0.1 in the population; in the second, no such selection exists, and as such our exchangeable parameter sampling scheme makes the problem relatively easy. We summarize each WPP bound by the posterior expected value of the lower and upper bounds. In general WPP returns more than one bound: we select the upper/lower bound corresponding to the (W, Z) pair where the sum of BDeu scores for W ⊥/⊥ Y | Z and W ⊥⊥ Y | Z ∪ {X} is highest.
Our main evaluation metric for an estimate is the Euclidean distance (henceforth, "error") between the true ACE and the closest point in the given estimate, whether the estimate is a point or an interval. For methods that provide point estimates (NE1, NE2, and faithfulness), this means just the absolute value of the difference between the true ACE and the estimated ACE. For WPP, the error of the interval [L, U] is zero if the true ACE lies in this interval. We report the error average and the error tail mass at 0.1, the latter meaning the proportion of cases where the error exceeds 0.1. The comparison is not straightforward, since the trivial interval [−1, 1] will always have zero bias according to this definition. This is a trade-off, to be set according to an agreed level of information loss, measured by the width of the resulting intervals; this is discussed in the Supplemental Material. We run simulations at two levels of parameters: β = 0.9, β̄ = 1.1, and the same configuration except for β = β̄ = 1. The former gives somewhat wide intervals. As Manski emphasizes [11], this is the price for making fewer assumptions. For the cases where no witness exists, Entner's Rule 1 should theoretically report no solution. In [6], stringent thresholds for accepting the two conditions of Rule 1 are adopted. Instead we take a more relaxed approach, using a uniform prior on the hypothesis of independence, and a BDeu prior with an effective sample size of 10. As such, due to the nature of our parameter randomization, almost always (typically > 90% of the time) the method will propose at least one witness. Given this theoretical failure, for the problems where no exact solution exists, we assess how sensitive the methods are given conclusions taken from "approximate independencies" instead of exact ones.

We simulate 100 datasets for each one of the four cases (hard case/easy case, with theoretical solution/without theoretical solution), 5000 points per dataset, 1000 Monte Carlo samples per decision. Results are summarized in Table 3 for the case ε_w = ε_x = ε_y = 0.2, β = 0.9, β̄ = 1.1. Notice
5
Sometimes, however, the expected contingency table given by the back-substitution method would fall
outside the feasible region of the fully specified linear program ? this is expected to happen from time to time,
as the analytical bounds are looser. In such a situation, we report the bounds given by the back-substitution
samples.
6. In detail: we generate graphs where W = {Z1, Z2, ..., Z8}. Four independent latent variables
L1, ..., L4 are added as parents of each of {Z5, ..., Z8}; L1 is also a parent of X, and L2 a parent of Y.
L3 and L4 are each randomly assigned to be a parent of either X or Y, but not both. {Z5, ..., Z8} have no
other parents. The graph over Z1, ..., Z4 is chosen by adding edges uniformly at random according to the
lexicographic order. As a consequence, using the full set W for back-door adjustment is always incorrect, as at
least four paths X ← L1 → Zi ← L2 → Y are active for i = 5, 6, 7, 8. The conditional probabilities of a vertex
given its parents are generated by a logistic regression model with pairwise interactions, where parameters are
sampled according to a zero-mean Gaussian with standard deviation 10 / number of parents. Parameter values
are truncated so that all conditional probabilities are between 0.025 and 0.975.
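A sketch of this generative procedure, under our reading of the footnote (the edge probability 0.5 for the Z1-Z4 subgraph and the exact interaction encoding are our assumptions, not stated in the text):

    import numpy as np

    rng = np.random.default_rng(0)

    def random_graph():
        """Build the synthetic DAG of footnote 6 as a dict: node -> list of parents."""
        parents = {f"Z{i}": [] for i in range(1, 9)}
        # Four independent latents; L1..L4 are parents of each of Z5..Z8.
        for i in range(5, 9):
            parents[f"Z{i}"] = ["L1", "L2", "L3", "L4"]
        parents["X"] = ["L1"]
        parents["Y"] = ["L2"]
        for L in ("L3", "L4"):  # each goes to X or Y, but not both
            parents[rng.choice(["X", "Y"])].append(L)
        # Random edges among Z1..Z4, respecting the lexicographic order.
        for j in range(2, 5):
            for i in range(1, j):
                if rng.random() < 0.5:  # assumed edge probability
                    parents[f"Z{j}"].append(f"Z{i}")
        return parents

    def logistic_cpd(n_parents):
        """Coefficients for a logistic CPD with pairwise interactions; the induced
        probabilities are later clipped to [0.025, 0.975]."""
        scale = 10.0 / max(n_parents, 1)
        n_terms = 1 + n_parents + n_parents * (n_parents - 1) // 2
        return rng.normal(0.0, scale, size=n_terms)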
Case (β = 1, β̄ = 1)    NE1          NE2          Faith.       WPP          Width
Hard/Solvable           0.12  1.00   0.02  0.03   0.05  0.05   0.01  0.01   0.24
Easy/Solvable           0.01  0.01   0.07  0.24   0.02  0.01   0.00  0.00   0.24
Hard/Unsolvable         0.16  1.00   0.20  0.88   0.19  0.95   0.07  0.25   0.24
Easy/Unsolvable         0.09  0.32   0.14  0.56   0.12  0.53   0.03  0.08   0.23
Table 3: Summary of the outcome of the synthetic studies. Each entry for a particular method is a pair
(bias average, bias tail mass at 0.1) for the respective method, as explained in the main text. The last
column is the median width of the WPP interval. In a similar experiment with β = 0.9, β̄ = 1.1,
WPP achieves nearly zero error, with interval widths around 0.50. A much more detailed table for
many other cases is provided in the Supplementary Material.
Notice that WPP is quite stable, while the other methods have strengths and weaknesses depending on the
setup. For the unsolvable cases, we average over the approximately 99% of cases where some solution was reported; in theory, no conditional independences hold and no solution should be reported,
but WPP shows empirical robustness for the true ACE in these cases.
Our empirical study concerns the effect of influenza vaccination on a patient being hospitalized later
on with chest problems. X = 1 means the patient got a flu shot, Y = 1 indicates the patient
was hospitalized. A negative ACE therefore suggests a desirable vaccine. The study was originally
discussed by [12]. Shots were not randomized, but doctors were randomly assigned to receive a
reminder letter to encourage their patients to be inoculated, recorded as GRP. This suggests the
standard IV model in Figure 1(d), with W = GRP and U unobservable. Using the bounds of [1] and
observed frequencies gives an interval of [−0.23, 0.64] for the ACE. WPP could not validate GRP
as a witness, instead returning as the highest-scoring pair the witness DM (patient had a history of
diabetes prior to vaccination) with admissible set composed of AGE (dichotomized at 60 years) and
SEX. Here, we excluded GRP as a possible member of an admissible set, under the assumption that
it cannot be a common cause of X and Y. Choosing ε_w = ε_y = ε_x = 0.2 and β = 0.9, β̄ = 1.1, we
obtain the posterior expected interval [−0.10, 0.17]. This does not mean the vaccine is more likely
to be bad (positive ACE) than good: the posterior distribution is over bounds, not over points, and is
completely agnostic about the distribution within the bounds. Notice that even though we allow for
full dependence between all of our variables, the bounds are considerably stricter than in the standard
IV model, due to the weakening of hidden confounder effects postulated by observing conditional
independences. Posterior plots and sensitivity analysis are included in the Supplementary Material;
for further discussion see [18, 9].
6 Conclusion
Our model provides a novel compromise between point estimators given by the faithfulness assumptions and bounds based on instrumental variables. We believe such an approach should become
a standard item in the toolbox of anyone who needs to perform an observational study. R code
is available at http://www.homepages.ucl.ac.uk/~ucgtrbd/wpp. Unlike risky Bayesian
approaches that put priors directly on the parameters of the unidentifiable latent variable model
P(Y, X, W, U | Z), the constrained Dirichlet prior does not suffer from massive sensitivity to the
choice of hyperparameters, as discussed at length by [18] and in the Supplementary Material. By focusing on bounds, WPP keeps inference more honest, providing a compromise between a method
purely based on faithfulness and purely theory-driven analyses that overlook competing models
suggested by independence constraints. As future work, we will look at a generalization of the
procedure beyond relaxations of chain structures W → X → Y. Much of the machinery developed
here, including Entner's Rules, can be adapted to the case where the causal ordering is unknown:
the search for "Y-structures" [10] generalizes the chain structure search to this case. We will also
look into ways of suggesting plausible values for the relaxation parameters, already touched upon in
the Supplementary Material. Finally, the techniques used to derive the symbolic bounds in Section 4
may prove useful in a more general context, and complement other methods for finding subsets of useful
constraints, such as the information-theoretic approach of [8] and the graphical approach of [7].
Acknowledgements. We thank McDonald, Hiu and Tierney for their flu vaccine data, and the
anonymous reviewers for their valuable feedback.
References
[1] A. Balke and J. Pearl. Bounds on treatment effects from studies with imperfect compliance. Journal of the American Statistical Association, pages 1171-1176, 1997.
[2] W. Buntine. Theory refinement on Bayesian networks. Proceedings of the 7th Conference on Uncertainty in Artificial Intelligence (UAI 1991), pages 52-60, 1991.
[3] W. Buntine and A. Jakulin. Applying discrete PCA in data analysis. Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence (UAI 2004), pages 59-66, 2004.
[4] L. Chen, F. Emmert-Streib, and J. D. Storey. Harnessing naturally randomized transcription to infer regulatory relationships among genes. Genome Biology, 8:R219, 2007.
[5] A. P. Dawid. Causal inference using influence diagrams: the problem of partial compliance. In P. J. Green, N. L. Hjort, and S. Richardson, editors, Highly Structured Stochastic Systems, pages 45-65. Oxford University Press, 2003.
[6] D. Entner, P. Hoyer, and P. Spirtes. Data-driven covariate selection for nonparametric estimation of causal effects. JMLR W&CP: AISTATS 2013, 31:256-264, 2013.
[7] R. Evans. Graphical methods for inequality constraints in marginalized DAGs. Proceedings of the 22nd Workshop on Machine Learning and Signal Processing, 2012.
[8] P. Geiger, D. Janzing, and B. Schölkopf. Estimating causal effects by bounding confounding. Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence, pages 240-249, 2014.
[9] K. Hirano, G. Imbens, D. Rubin, and X.-H. Zhou. Assessing the effect of an influenza vaccine in an encouragement design. Biostatistics, 1:69-88, 2000.
[10] S. Mani, G. Cooper, and P. Spirtes. A theoretical study of Y structures for causal discovery. Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence (UAI 2006), pages 314-323, 2006.
[11] C. Manski. Identification for Prediction and Decision. Harvard University Press, 2007.
[12] C. McDonald, S. Hiu, and W. Tierney. Effects of computer reminders for influenza vaccination on morbidity during influenza epidemics. MD Computing, 9:304-312, 1992.
[13] C. Meek. Strong completeness and faithfulness in Bayesian networks. Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence (UAI 1995), pages 411-418, 1995.
[14] J. Pearl. Causality: Models, Reasoning and Inference. Cambridge University Press, 2000.
[15] J. Pearl. Myth, confusion, and science in causal analysis. UCLA Cognitive Systems Laboratory, Technical Report (R-348), 2009.
[16] R. Ramsahai. Causal bounds and observable constraints for non-deterministic models. Journal of Machine Learning Research, pages 829-848, 2012.
[17] R. Ramsahai and S. Lauritzen. Likelihood analysis of the binary instrumental variable model. Biometrika, 98:987-994, 2011.
[18] T. Richardson, R. Evans, and J. Robins. Transparent parameterizations of models for potential outcomes. In J. Bernardo, M. Bayarri, J. Berger, A. Dawid, D. Heckerman, A. Smith, and M. West, editors, Bayesian Statistics 9, pages 569-610. Oxford University Press, 2011.
[19] T. Richardson and J. Robins. Analysis of the binary instrumental variable model. In R. Dechter, H. Geffner, and J. Y. Halpern, editors, Heuristics, Probability and Causality: A Tribute to Judea Pearl, pages 415-444. College Publications, 2010.
[20] J. Robins, R. Scheines, P. Spirtes, and L. Wasserman. Uniform consistency in causal inference. Biometrika, 90:491-515, 2003.
[21] P. Rosenbaum. Observational Studies. Springer-Verlag, 2002.
[22] P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction and Search. Cambridge University Press, 2000.
[23] T. VanderWeele and I. Shpitser. A new criterion for confounder selection. Biometrics, 64:1406-1413, 2011.
5,085 | 5,603 | Biclustering Using Message Passing
Bioinformatics and Integrative Genomics
Harvard University
Cambridge, MA 02138
[email protected]
Soheil Feizi
Electrical Engineering and Computer Science
Massachusetts Institute of Technology
Cambridge, MA 02139
[email protected]
Abstract
Biclustering is the analog of clustering on a bipartite graph. Existing methods infer
biclusters through local search strategies that find one cluster at a time; a common
technique is to update the row memberships based on the current column memberships, and vice versa. We propose a biclustering algorithm that maximizes a global
objective function using message passing. Our objective function closely approximates a general likelihood function, separating a cluster size penalty term into
row- and column-count penalties. Because we use a global optimization framework, our approach excels at resolving the overlaps between biclusters, which are
important features of biclusters in practice. Moreover, Expectation-Maximization
can be used to learn the model parameters if they are unknown. In simulations, we
find that our method outperforms two of the best existing biclustering algorithms,
ISA and LAS, when the planted clusters overlap. Applied to three gene expression datasets, our method finds coregulated gene clusters that have high quality in
terms of cluster size and density.
1 Introduction
The term biclustering has been used to describe several distinct problem variants. In this paper,
we consider the problem of biclustering as a bipartite analogue of clustering: given an
N × M matrix, a bicluster is a subset of rows that are heavily connected to a subset of columns. In
this framework, biclustering methods are data mining techniques allowing simultaneous clustering
of the rows and columns of a matrix. We suppose there are two possible distributions for edge
weights in the bipartite graph: a within-cluster distribution and a background distribution. Unlike in
the traditional clustering problem, in our setup, biclusters may overlap, and a node may not belong
to any cluster. We emphasize the distinction between biclustering and the bipartite analog of graph
partitioning, which might be called bipartitioning.
Biclustering has several noteworthy applications. It has been used to find modules of coregulated
genes using microarray gene expression data [1] and to predict tumor phenotypes from their genotypes [2]. It has been used for document classification, clustering both documents and related words
simultaneously [3]. In all of these applications, biclusters are expected to overlap with each other,
and these overlaps themselves are often of interest (e.g., if one wishes to explore the relationships
between document topics).
The biclustering problem is NP-hard (see Proposition 1). However, owing to its practical importance, several heuristic methods using local search strategies have been developed. A popular approach is to search for one bicluster at a time by iteratively assigning rows to a bicluster based on
the columns, and vice versa. Two algorithms based on this approach are ISA [4] and LAS [5]. Another approach is an exhaustive search for complete bicliques used by Bimax [6]. This approach
fragments large noisy clusters into small complete ones. SAMBA [7] uses a heuristic combinatorial
search for locally optimal biclusters, motivated by an exhaustive search algorithm that is exponential
in the maximum degree of the nodes. For more details about existent biclustering algorithms, and
performance comparisons, see references [6] and [8]. Existent biclustering methods have two major
shortcomings: first, they apply a local optimality criterion to each bicluster individually. Because a
collection of locally optimal biclusters might not be globally optimal, these local methods struggle
to resolve overlapping clusters, which arise frequently in many applications. Second, the lack of
a well-defined global objective function precludes an analytical characterization of their expected
results.
Global optimization methods have been developed for problems closely related to biclustering, including clustering. Unlike most biclustering problem formulations, these are mostly partitioning
problems: each node is assigned to one cluster or category. Major recent progress has been made in
the development of spectral clustering methods (see references [9] and [10]) and message-passing
algorithms (see [11], [12] and [13]). In particular, Affinity Propagation [12] maximizes the sum of
similarities to one central exemplar instead of overall cluster density. Reference [14] uses variational
expectation-maximization to fit the latent block model, which is a binary model in which each row
or column is assigned to a row or column cluster, and the probability of an edge is dictated by the
respective cluster memberships. Row and column clusters are not paired to form biclusters.
In this paper, we propose a message-passing algorithm that searches for a globally optimal collection of possibly overlapping biclusters. Our method maximizes a likelihood function using an
approximation that separates a cluster-size penalty term into a row-count penalty and a columncount penalty. This decoupling enables the messages of the max-sum algorithm to be computed
efficiently, effectively breaking an intractable optimization into a pair of tractable ones that can be
solved in nearly linear time. When the underlying model parameters are unknown, they can be
learned using an expectation-maximization approach.
Our approach has several advantages over existing biclustering algorithms: the objective function
of our biclustering method has the flexibility to handle diverse statistical models; the max-sum algorithm is a more robust optimization strategy than commonly used iterative approaches; and in
particular, our global optimization technique excels at resolving overlapping biclusters. In simulations, our method outperforms two of the best existing biclustering algorithms, ISA and LAS, when
the planted clusters overlap. Applied to three gene expression datasets, our method found biclusters
of high quality in terms of cluster size and density.
2 Methods
2.1 Problem statement
Let G = (V, W, E) be a weighted bipartite graph, with vertices V = (1, ..., N) and W = (1, ..., M),
connected by edges with non-negative weights E : V × W → [0, ∞). Let V_1, ..., V_K ⊆ V and
W_1, ..., W_K ⊆ W. Let (V_k, W_k) = {(i, j) : i ∈ V_k, j ∈ W_k} be a bicluster: graph edge weights
e_ij are drawn independently from either a within-cluster distribution or a background distribution,
depending on whether, for some k, i ∈ V_k and j ∈ W_k. In this paper, we assume that the within-cluster and background distributions are homogeneous. However, our formulation can be extended
to a general case in which the distributions are row- or column-dependent.
Let c^k_ij be the indicator for i ∈ V_k and j ∈ W_k. Let c_ij ≜ min(1, Σ_k c^k_ij) and let c ≜ (c^k_ij).
Definition 1 (Biclustering Problem). Let G = (V, W, E) be a bipartite graph with biclusters
(V_1, W_1), ..., (V_K, W_K), within-cluster distribution f1 and background distribution f0. The problem
is to find the maximum likelihood cluster assignments (up to reordering):

    ĉ = arg max_c Σ_{(i,j)} c_ij log( f1(e_ij) / f0(e_ij) ),    (1)
    subject to  c^k_ij = c^k_rs = 1 ⇒ c^k_is = c^k_rj = 1,   ∀ i, r ∈ V, ∀ j, s ∈ W.
Figure 1 demonstrates the problem qualitatively for an unweighted bipartite graph. In general, the
combinatorial nature of a biclustering problem makes it computationally challenging.
Proposition 1. The clique problem can be reduced to the maximum likelihood problem of Definition
(1). Thus, the biclustering problem is NP-hard.
Figure 1: Biclustering is the analogue of clustering on a bipartite graph. (a) Biclustering allows
nodes to be reordered in a manner that reveals modular structures in the bipartite graph. (b) The
rows and columns of an adjacency matrix are similarly biclustered and reordered.
Proof. The proof is provided in Supplementary Note 1.
2.2 BCMP objective function
In this section, we introduce the global objective function considered in the proposed biclustering
algorithm, called Biclustering using Message Passing (BCMP). This objective function approximates
the likelihood function of Definition 1. Let l_ij = log( f1(e_ij) / f0(e_ij) ) be the log-likelihood ratio score of
tuple (i, j). Thus, the likelihood function of Definition 1 can be written as Σ_{(i,j)} c_ij l_ij. If there were
no consistency constraints in Optimization (1), an optimal maximum likelihood biclustering
solution would be to set c_ij = 1 for all tuples with positive l_ij. Our key idea is to enforce the
consistency constraints by introducing a cluster-size penalty function and shifting the log-likelihood
ratios l_ij to recoup this penalty. Let N_k and M_k be the number of rows and columns, respectively,
assigned to cluster k. We have,
    Σ_{(i,j)} c_ij l_ij
      ≈(a)  Σ_{(i,j)} c_ij max(0, l_ij + δ) − δ Σ_{(i,j)} c_ij
      =(b)  Σ_{(i,j)} c_ij max(0, l_ij + δ) + δ Σ_{(i,j)} max(0, −1 + Σ_k c^k_ij) − δ Σ_k N_k M_k
      ≈(c)  Σ_{(i,j)} c_ij max(0, l_ij + δ) + δ Σ_{(i,j)} max(0, −1 + Σ_k c^k_ij) − (δ/2) Σ_k ( r_k N_k^2 + r_k^{-1} M_k^2 ).    (2)
The approximation (a) holds when δ is large enough that thresholding l_ij at −δ has little effect on
the resulting objective function. In equality (b), we have expressed the second term of (a) in terms
of a cluster-size penalty −δ N_k M_k, and we have added back a term corresponding to the overlap
between clusters. Because a cluster-size penalty function of the form N_k M_k leads to an intractable
optimization in the max-sum framework, we approximate it using the decoupling approximation (c),
where r_k is a cluster-shape parameter:

    2 N_k M_k ≈ r_k N_k^2 + r_k^{-1} M_k^2,    (3)

when r_k ≈ M_k / N_k. The cluster-shape parameter can be iteratively tuned to fit the estimated biclusters.
Following equation (2), the BCMP objective function can be separated into three terms as follows:
    F(c) = Σ_{i,j} Φ_ij + Σ_k Ψ_k + Σ_k Ω_k,    (4)

    Φ_ij = ℓ_ij min(1, Σ_k c^k_ij) + δ max(0, Σ_k c^k_ij − 1),    ∀ (i, j) ∈ V × W,
    Ψ_k = −(δ/2) r_k N_k^2,        ∀ 1 ≤ k ≤ K,
    Ω_k = −(δ/2) r_k^{-1} M_k^2,   ∀ 1 ≤ k ≤ K.    (5)
Here Φ_ij, the tuple function, encourages heavier edges of the bipartite graph to be clustered. Its
second term compensates for the fact that, when biclusters overlap, the cluster-size penalty functions
double-count the overlapping regions. ℓ_ij ≜ max(0, l_ij + δ) is the shifted log-likelihood ratio for
observed edge weight e_ij. Ψ_k and Ω_k penalize the number of rows and columns of cluster k, N_k
and M_k, respectively. Note that by introducing a penalty for each nonempty cluster, the number of
clusters can be learned, and finding weak, spurious clusters can be avoided (see Supplementary Note
3.3).
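To make equations (4)-(5) concrete, the following sketch (our notation, not the released implementation) evaluates F(c) for a binary assignment tensor c of shape (K, N, M):

    import numpy as np

    def bcmp_objective(c, ell, delta, r):
        """F(c) of eq. (4)-(5). c: (K, N, M) binary array, ell: (N, M) shifted
        log-likelihood ratios, r: (K,) cluster-shape parameters."""
        tot = c.sum(axis=0)                                  # sum_k c^k_ij
        phi = ell * np.minimum(1, tot) + delta * np.maximum(0, tot - 1)
        N_k = (c.sum(axis=2) > 0).sum(axis=1)                # rows used by each cluster
        M_k = (c.sum(axis=1) > 0).sum(axis=1)                # columns used by each cluster
        psi = -(delta / 2) * r * N_k**2
        omega = -(delta / 2) * M_k**2 / r
        return phi.sum() + psi.sum() + omega.sum()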
Now, we analyze BCMP under the following model for a binary (unweighted) bipartite graph:
Definition 2. The binary biclustering model is a generative model for an N × M bipartite graph
(V, W, E) with K biclusters placed by uniform sampling with replacement, allowing for overlapping clusters. Within a bicluster, edges are drawn independently with probability p, and outside
of a bicluster, they are drawn independently with probability q < p.
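A generator matching Definition 2 might look as follows; the bicluster dimensions are supplied as an input, and the uniform placement of rows and columns is our reading of the definition:

    import numpy as np

    def sample_binary_model(N, M, sizes, p, q, rng=None):
        """sizes: list of (n_k, m_k) bicluster dimensions; returns (E, c)."""
        rng = np.random.default_rng() if rng is None else rng
        c = np.zeros((len(sizes), N, M), dtype=int)
        for k, (n_k, m_k) in enumerate(sizes):
            rows = rng.choice(N, size=n_k, replace=False)
            cols = rng.choice(M, size=m_k, replace=False)
            c[np.ix_([k], rows, cols)] = 1
        inside = c.any(axis=0)
        # Edge probability p inside any bicluster, q elsewhere.
        E = np.where(inside, rng.random((N, M)) < p, rng.random((N, M)) < q)
        return E.astype(int), c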
In the following, we assume that p, q, and K are given. We discuss the case where the model parameters are unknown in Section 2.4. The following proposition shows that optimizing the BCMP
objective function solves the problem of Definition 1 in the case of the binary model:
Proposition 2. Let (e_ij) be a matrix generated by the binary model described in Definition 2.
Suppose p, q and K are given. Suppose the maximum likelihood assignment of edges to biclusters,
arg max_c P(data | c), is unique up to reordering. Let r_k = M_k^0 / N_k^0 be the cluster-shape ratio for
the k-th maximum likelihood cluster. Then, by using these values of r_k and setting ℓ_ij = e_ij for all
(i, j), with cluster-size penalty
    δ/2 = −log[ (1 − p) / (1 − q) ] / ( 2 log[ p(1 − q) / (q(1 − p)) ] ),    (6)

we have

    arg max_c P(data | c) = arg max_c F(c).    (7)
Proof. The proof follows the derivation of equation (2). It is presented in Supplementary Note 2.
Remark 1. In the special case when q = 1 − p ∈ (0, 1/2), equation (6) gives δ/2 = 1/4. This is
suggested as a reasonable initial value to choose when the true values of p and q
are unknown; see Section 2.4 for a discussion of learning the model parameters.
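For reference, a direct transcription of equation (6), consistent with Remark 1 (e.g. delta_from_pq(0.7, 0.3) returns 0.5, i.e. δ/2 = 1/4):

    import math

    def delta_from_pq(p, q):
        """Cluster-size penalty delta of eq. (6); returns delta (not delta/2)."""
        assert 0 < q < p < 1
        half = -math.log((1 - p) / (1 - q)) / (2 * math.log(p * (1 - q) / (q * (1 - p))))
        return 2 * half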
The assumption that r_k = M_k^0 / N_k^0 may seem rather strong. However, it is essential, as it justifies the
decoupling equation (3) that enables a linear-time algorithm. In practice, if the initial choice of r_k
is close enough to the actual ratio that a cluster is detected corresponding to the real cluster, r_k can
be tuned to find the true value by iteratively updating it to fit the estimated bicluster. This iterative
strategy works well in our simulations. For more details about automatically tuning the parameter
r_k, see Supplementary Note 3.1.
In a more general statistical setting, log-likelihood ratios l_ij may be unbounded below, and the first
step (a) of derivation (2) is an approximation; setting δ arbitrarily large will eventually lead to
instability in the message updates.
2.3 Biclustering Using Message Passing
In this section, we use the max-sum algorithm to optimize the objective function of equation (4).
For a review of the max-sum message update rules, see Supplementary Note 4. There are N M
function nodes for the functions Φ_ij, K function nodes for the functions Ψ_k, and K function nodes
for the functions Ω_k. There are N M K binary variables, each attached to three function nodes: c^k_ij
is attached to Φ_ij, Ψ_k, and Ω_k (see Supplementary Figure 1). The incoming messages from these
function nodes are named t^k_ij, n^k_ij, and m^k_ij, respectively. In the following, we describe the messages for
c^k_ij = c^1_12; other messages can be computed similarly.
First, we compute t^1_12:

    t^1_12(x) =(a) max_{c^2_12, ..., c^K_12} [ Φ_12(x, c^2_12, ..., c^K_12) + Σ_{k≠1} m^k_12(c^k_12) + n^k_12(c^k_12) ]    (8)
              =(b) max_{c^2_12, ..., c^K_12} [ ℓ_12 min(1, Σ_k c^k_12) + δ max(0, Σ_k c^k_12 − 1) + Σ_{k≠1} c^k_12 (m^k_12 + n^k_12) ] + d_1

where d_1 = Σ_{k≠1} m^k_12(0) + n^k_12(0) is a constant. Equality (a) comes from the definition of messages
according to equation (6) in the Supplement. Equality (b) uses the definition of Φ_12 from equation (5)
and the definition of the scalar message from equation (8) in the Supplement. We can further simplify
t^1_12 as follows:
    (c)  t^1_12(1) − d_1 = ℓ_12 + Σ_{k≠1} max(0, δ + m^k_12 + n^k_12),
    (d)  t^1_12(0) − d_1 = ℓ_12 − δ + Σ_{k≠1} max(0, δ + m^k_12 + n^k_12),   if ∃k: n^k_12 + m^k_12 + δ > 0,    (9)
    (e)  t^1_12(0) − d_1 = max(0, ℓ_12 + max_{k≠1}(m^k_12 + n^k_12)),        otherwise.
If c^1_12 = 1, we have min(1, Σ_k c^k_12) = 1 and max(0, Σ_k c^k_12 − 1) = Σ_{k≠1} c^k_12. These lead to
equality (c). A similar argument can be made if c^1_12 = 0 but there exists a k such that n^k_12 + m^k_12 + δ > 0.
This leads to equality (d). If c^1_12 = 0 and there is no k such that n^k_12 + m^k_12 + δ > 0, we compare
the increase obtained by letting c^k_12 = 1 (i.e., ℓ_12) with the penalty (i.e., m^k_12 + n^k_12), for the best k.
This leads to equality (e).
Remark 2. Computation of t^1_ij, ..., t^K_ij using equality (d) costs O(K), and not O(K^2), as the summation need only be computed once.
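A sketch of the O(K) computation of equalities (c)-(e); m and n hold the K scalar messages for one tuple (i, j), and the top-2 bookkeeping for case (e) is our implementation detail (assumes K ≥ 2):

    import numpy as np

    def t_messages(ell_ij, m, n, delta):
        """Scalar messages t^k_ij(1) - t^k_ij(0), k = 1..K, following eq. (c)-(e)."""
        mn = m + n
        s = np.maximum(0.0, delta + mn)
        total = s.sum()                      # computed once (Remark 2)
        active = mn + delta > 0
        n_active = active.sum()
        order = np.argsort(mn)               # top-2 of m + n, for case (e)
        best = order[-1]
        second = order[-2]
        t = np.empty_like(mn)
        for k in range(len(mn)):
            t1 = ell_ij + (total - s[k])                     # eq. (c)
            if n_active - active[k] > 0:                     # eq. (d)
                t0 = ell_ij - delta + (total - s[k])
            else:                                            # eq. (e)
                rest = mn[second] if k == best else mn[best]
                t0 = max(0.0, ell_ij + rest)
            t[k] = t1 - t0
        return t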
Messages m^1_12 and n^1_12 are computed as follows:

    m^1_12(x) = max_{c^1 : c^1_12 = x} [ Ω_1(c^1) + Σ_{(i,j)≠(1,2)} t^1_ij(c^1_ij) + n^1_ij(c^1_ij) ],
    n^1_12(x) = max_{c^1 : c^1_12 = x} [ Ψ_1(c^1) + Σ_{(i,j)≠(1,2)} t^1_ij(c^1_ij) + m^1_ij(c^1_ij) ],    (10)

where c^1 = {c^1_ij : i ∈ V, j ∈ W}. To compute n^1_12 in constant time, we perform a preliminary
optimization, ignoring the effect of edge (1, 2):

    arg max_{c^1}  −(δ/2) N_1^2 + Σ_{(i,j)} t^1_ij(c^1_ij) + m^1_ij(c^1_ij).    (11)
Let s_i = Σ_{j=1}^{M} max(0, m^1_ij + t^1_ij) be the sum of positive incoming messages of row i. The function
Ψ_1 penalizes the number of rows containing some nonzero c^1_ij: if any message along that row is
included, there is no additional penalty for including every positive message along that row. Thus,
optimization (11) is computed by deciding which rows to include. This can be done efficiently
through sorting: we sort the row sums s_(1), ..., s_(N) at a cost of O(N log N). Then we proceed from
largest to smallest, including row (N + 1 − i) if the marginal penalty (δ/2)(i^2 − (i − 1)^2) = (δ/2)(2i − 1)
is less than s_(N+1−i). After solving optimization (11), the messages n^1_{1,2}, ..., n^1_{N,2} can be computed
in linear time, as we explain in Supplementary Note 5.
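The row-selection step of optimization (11) can be sketched as follows (variable names are ours):

    import numpy as np

    def select_rows(s, delta):
        """Solve optimization (11): choose which rows to include, given row sums
        s_i of positive incoming messages. Returns a boolean mask over rows."""
        order = np.argsort(s)[::-1]          # largest row sum first
        include = np.zeros(len(s), dtype=bool)
        for i, row in enumerate(order, start=1):
            if s[row] > (delta / 2) * (2 * i - 1):   # marginal penalty of the i-th row
                include[row] = True
            else:
                break                        # penalties grow while row sums shrink
        return include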
Remark 3. Computation of the n^k_ij through sorting costs O(N log N).
Proposition 3 (Computational Complexity of BCMP). The computational complexity of BCMP
over a bipartite graph with N rows, M columns, and K clusters is O(K(N + log M)(M + log N)).
Proof. For each iteration, there are N M messages t_ij to be computed, at cost O(K) each. Before
computing the (n^k_ij), there are K sorting steps at a cost of O(N log N) each, after which each message may
be computed in constant time. Likewise, there are K sorting steps at a cost of O(M log M) each
before computing the (m^k_ij).
We provide an empirical runtime example of the algorithm in Supplementary Figure 3.
2.4 Parameter learning using Expectation-Maximization
In the BCMP objective function described in Section 2.2, the parameters of the generative model
were used to compute the log-likelihood ratios (l_ij). In practice, however, these parameters may
be unknown. Expectation-Maximization (EM) can be used to estimate them. The use
of EM in this setting is slightly unorthodox, as we estimate the hidden labels (cluster assignments)
in the M step instead of the E step. However, the distinction between parameters and labels is not
intrinsic in the definition of EM [15], and the true ML solution is still guaranteed to be a fixed point
of the iterative process. Note that the EM iterative procedure may converge to a locally
optimal solution; it is therefore recommended to use several random re-initializations of the
method.
The EM algorithm has three steps:
- Initialization: We choose initial values for the underlying model parameters θ and compute
  the log-likelihood ratios (l_ij) based on these values, denoting by F_0 the initial objective
  function.
- M step: We run BCMP to maximize the objective F_i(c). We denote the estimated cluster
  assignments by ĉ_i.
- E step: We compute the expected log-likelihood function as follows:

    F_{i+1}(c) = E_θ[ log P((e_ij) | θ) | c = ĉ_i ] = Σ_{(i,j)} E_θ[ log P(e_ij | θ) | c = ĉ_i ].    (12)
Conveniently, the expected-likelihood function takes the same form as the original likelihood function, with an input matrix of expected log-likelihood ratios. These can be computed efficiently if
conjugate priors are available for the parameters. Therefore, BCMP can be used to maximize F_{i+1}.
The algorithm terminates upon failure to improve the estimated likelihood F_i(ĉ_i).
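For the binary model, a minimal EM loop might look like this. We assume a `bcmp` routine implementing the M step and reuse `delta_from_pq` from above (both names are ours); as a simplification, the E step below plugs in posterior-mean estimates of p and q under Beta(1, 1) priors rather than the full expected log-likelihood ratios of equation (12):

    import numpy as np

    def em_bcmp(E, K, n_iters=20):
        """Alternate BCMP (M step) with posterior updates of p and q (E step)."""
        p, q = 0.75, 0.25                       # initial guesses
        for _ in range(n_iters):
            delta = delta_from_pq(p, q)         # eq. (6)
            c = bcmp(E, K, ell=E.astype(float), delta=delta)   # hypothetical M step
            inside = c.any(axis=0)
            # Posterior means of p and q under uniform Beta(1, 1) priors.
            p = (E[inside].sum() + 1) / (inside.sum() + 2)
            q = (E[~inside].sum() + 1) / ((~inside).sum() + 2)
        return c, p, q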
For a discussion of the application of EM to the binary and Gaussian models, see Supplementary
Note 6. In the case of the binary model, we use uniform Beta distributions as conjugate priors
for p and q, and in the case of the Gaussian model, we use inverse-gamma-normal distributions as
the priors for the variances and means. Even when convenient priors are not available, EM is still
tractable as long as one can sample from the posterior distributions.
3 Evaluation results
We compared the performance of our biclustering algorithm with two methods, ISA and LAS, in
simulations and in real gene expression datasets (Supplementary Note 8). ISA was chosen because
it performed well in comparison studies [6] [8], and LAS was chosen because it outperformed ISA
in preliminary simulations. Both ISA and LAS search for biclusters using iterative refinement. ISA
assigns rows iteratively to clusters fractionally in proportion to the sum of their entries over columns.
It repeats the same for column-cluster assignments, and this process is iterated until convergence.
LAS uses a similar greedy iterative search without fractional memberships, and it masks already-detected clusters by mean subtraction.
In our simulations, we generate bipartite graphs of size 100 × 100. We planted (possibly
overlapping) biclusters as full blocks with two noise models:
- Bernoulli noise: we drew edges according to the binary model of Definition 2, with varying
  noise level q = 1 − p.
[Figure 2 shows six panels: columns for Bernoulli noise (a1-a3) and Gaussian noise (b1-b3), and rows for non-overlapping biclusters, overlapping biclusters with fixed overlap, and overlapping biclusters with variable overlap; each panel plots the average number of misclassified tuples for BCMP, LAS and ISA, and the panels note that the total number of clustered tuples is 850 in the non-overlapping setup and 900 in the fixed-overlap setup.]
Figure 2: Performance comparison of the proposed method (BCMP) with ISA and LAS, for
Bernoulli and Gaussian models, and for overlapping and non-overlapping biclusters. On the y axis
is the total number of misclassified row-column pairs. Either the noise level or the amount of overlap
is on the x axis.
- Gaussian noise: we drew edge weights within and outside of biclusters from normal distributions N(1, σ^2) and N(0, σ^2), respectively, for different values of σ.
For each of these cases, we ran simulations on three setups (see Figure 2):
- Non-overlapping clusters: three non-overlapping biclusters were planted in a 100 × 100
  matrix, with sizes 20 × 20, 15 × 20, and 15 × 10. We varied the noise level.
- Overlapping clusters with fixed overlap: three overlapping biclusters with fixed overlaps
  were planted in a 100 × 100 matrix, with sizes 20 × 20, 20 × 10, and 10 × 30. We varied
  the noise level.
- Overlapping clusters with variable overlap: we planted two 30 × 30 biclusters in a 100 ×
  100 matrix with a variable amount of overlap between them, where the amount of overlap
  is defined as the fraction of rows and columns shared between the two clusters. We used
  Bernoulli noise level q = 1 − p = 0.15 and Gaussian noise level σ = 0.7.
The methods used have some parameters to set. Pseudocode for BCMP is presented in Supplementary Note 10. Here are the parameters that we used to run each method:
- BCMP method with underlying parameters given: We computed the input matrix of shifted
  log-likelihood ratios following the discussion in Section 2.2. The number of biclusters
  K was given. We initialized the cluster-shape parameters r_k at 1 and updated them as
  discussed in Supplementary Note 3.1. In the case of Bernoulli noise, following Proposition
  2 and Remark 1, we set ℓ_ij = e_ij and δ/2 = 1/4. In the case of Gaussian noise, we chose a
  threshold δ to maximize the unthresholded likelihood (see Supplementary Note 3.2).
- BCMP-EM method: Instead of taking the underlying model parameters as given, we
  estimated them using the procedure described in Section 2.4 and Supplementary Note 6.
  We used identical, uninformative priors on the parameters of the within-cluster and null
  distributions.
- ISA method: We used the same threshold ranges for both rows and columns, attempting
  to find the best-performing threshold values for each noise level. These values were mostly
  around 1.5 for both noise types and for all three dataset types. We found positive biclusters,
  and used 20 reinitializations. Out of these 20 runs, we selected the best-performing run.
- LAS method: There were no parameters to set. Since K was given, we selected the first K
  biclusters discovered by LAS, which marginally increased its performance.
Evaluation results for both noise models and for non-overlapping and overlapping biclusters are shown
in Figure 2. In the non-overlapping case, BCMP and LAS performed similarly well, better than
ISA. Both of these methods made few or no errors up until noise levels q = 0.2 and σ = 0.6 in the
Bernoulli and Gaussian cases, respectively. When the parameters had to be estimated using EM,
BCMP performed worse for higher levels of Gaussian noise, but well otherwise. ISA outperformed
BCMP and LAS at very high levels of Bernoulli noise; at such a high noise level, however, the
results of all three algorithms are comparable to a random guess.
In the presence of overlap between biclusters, BCMP outperformed both ISA and LAS except at very
high noise levels. Whereas LAS and ISA struggled to resolve these clusters even in the absence of
noise, BCMP made few or no errors up until noise levels q = 0.2 and σ = 0.6 in the Bernoulli and Gaussian cases, respectively. Notably, the overlapping clusters were more asymmetrical, demonstrating
the robustness of the strategy of iteratively tuning r_k in our method. In simulations with variable
overlap between biclusters, for both noise models, BCMP outperformed LAS significantly, while
the results for the ISA method were very poor (data not shown). These results demonstrate that
BCMP excels at inferring overlapping biclusters.
4 Discussion and future directions
In this paper, we have proposed a new biclustering technique called Biclustering Using Message
Passing that, unlike existing methods, infers a globally optimal collection of biclusters rather than a
collection of locally optimal ones. This distinction is especially relevant in the presence of overlapping clusters, which are common in most applications. Such overlaps can be of importance if one is
interested in the relationships among biclusters. We showed through simulations that our proposed
method outperforms two popular existing methods, ISA and LAS, in both Bernoulli and Gaussian
noise models, when the planted biclusters were overlapping. We also found that BCMP performed
well when applied to gene expression datasets.
Biclustering is a problem that arises naturally in many applications. Often, a natural statistical model
for the data is available; for example, a Poisson model can be used for document classification (see
Supplementary Note 9). Even when no such statistical model will be available, BCMP can be used
to maximize a heuristic objective function such as the modularity function [17]. This heuristic is
preferable to clustering the original adjacency matrix when the degrees of the nodes vary widely;
see Supplementary Note 7.
The same optimization strategy used in this paper for biclustering can also be applied to perform
clustering, generalizing the graph-partitioning problem by allowing nodes to be in zero or several
clusters. We believe that the flexibility of our framework to fit various statistical and heuristic models
will allow BCMP to be used in diverse clustering and biclustering applications.
Acknowledgments
We would like to thank Professor Manolis Kellis and Professor Muriel Médard for their advice
and support. We would like to thank the Harvard Division of Medical Sciences for supporting this
project.
References
[1] Cheng, Yizong, and George M. Church. "Biclustering of expression data." Ismb. Vol. 8. 2000.
[2] Dao, Phuong, et al. "Inferring cancer subnetwork markers using density-constrained biclustering." Bioinformatics 26.18 (2010): i625-i631.
[3] Bisson, Gilles, and Fawad Hussain. "Chi-sim: A new similarity measure for the co-clustering
task." Machine Learning and Applications, 2008. ICMLA'08. Seventh International Conference
on. IEEE, 2008.
[4] Bergmann, Sven, Jan Ihmels, and Naama Barkai. "Iterative signature algorithm for the analysis
of large-scale gene expression data." Physical review E 67.3 (2003): 031902.
[5] Shabalin, Andrey A., et al. "Finding large average submatrices in high dimensional data." The
Annals of Applied Statistics (2009): 985-1012.
[6] Prelic, Amela, et al. "A systematic comparison and evaluation of biclustering methods for gene
expression data." Bioinformatics 22.9 (2006): 1122-1129.
[7] Tanay, Amos, Roded Sharan, and Ron Shamir. "Discovering statistically significant biclusters
in gene expression data." Bioinformatics 18.suppl 1 (2002): S136-S144.
[8] Li, Li, et al. "A comparison and evaluation of five biclustering algorithms by quantifying goodness of biclusters for gene expression data." BioData mining 5.1 (2012): 1-10.
[9] Nadakuditi, Raj Rao, and Mark EJ Newman. "Graph spectra and the detectability of community
structure in networks." Physical review letters 108.18 (2012): 188701.
[10] Krzakala, Florent, et al. "Spectral redemption in clustering sparse networks." Proceedings of
the National Academy of Sciences 110.52 (2013): 20935-20940.
[11] Decelle, Aurelien, et al. "Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications." Physical Review E 84.6 (2011): 066106.
[12] Frey, Brendan J., and Delbert Dueck. "Clustering by passing messages between data points."
Science 315.5814 (2007): 972-976.
[13] Dueck, Delbert, et al. "Constructing treatment portfolios using affinity propagation." Research
in Computational Molecular Biology. Springer Berlin Heidelberg, 2008.
[14] Govaert, G. and Nadif, M. "Block clustering with bernoulli mixture models: Comparison of
different approaches." Computational Statistics and Data Analysis, 52 (2008): 3233-3245.
[15] Dempster, Arthur P., Nan M. Laird, and Donald B. Rubin. "Maximum likelihood from incomplete data via the EM algorithm." Journal of the Royal Statistical Society. Series B (Methodological) (1977): 1-38.
[16] Marbach, Daniel, et al. "Wisdom of crowds for robust gene network inference." Nature methods 9.8 (2012): 796-804.
[17] Newman, Mark EJ. "Modularity and community structure in networks." Proceedings of the
National Academy of Sciences 103.23 (2006): 8577-8582.
[18] Yedidia, Jonathan S., William T. Freeman, and Yair Weiss. "Constructing free-energy approximations and generalized belief propagation algorithms." Information Theory, IEEE Transactions
on 51.7 (2005): 2282-2312.
[19] Caldas, Jos?, and Samuel Kaski. "Bayesian biclustering with the plaid model." Machine Learning for Signal Processing, 2008. MLSP 2008. IEEE Workshop on. IEEE, 2008.
i2:1 encourages:1 samuel:1 criterion:1 generalized:1 yizong:1 complete:2 demonstrate:1 bergmann:1 variational:1 fi:4 common:2 pseudocode:1 physical:3 attached:2 analog:2 belong:1 approximates:2 discussed:1 significant:1 connor:1 cambridge:2 versa:2 tuning:2 consistency:2 pm:1 similarly:3 marbach:1 ckij:8 had:1 portfolio:1 f0:3 similarity:2 posterior:1 recent:1 dictated:1 showed:1 optimizing:1 raj:1 feizi:1 binary:9 arbitrarily:1 additional:1 george:1 subtraction:1 maximize:4 recommended:1 signal:1 resolving:2 full:1 isa:20 infer:1 long:1 molecular:1 paired:1 a1:1 variant:1 expectation:5 poisson:1 iteration:1 suppl:1 penalize:1 c1:4 background:4 uninformative:1 whereas:1 microarray:1 unlike:3 unorthodox:1 seem:1 presence:2 enough:2 fit:4 hussain:1 florent:1 bisson:1 idea:1 whether:1 expression:10 motivated:1 heavier:1 penalty:15 passing:8 proceed:1 remark:4 tij:2 amount:3 locally:4 category:1 struggled:1 reduced:1 generate:1 shifted:2 estimated:6 diverse:2 detectability:1 vol:1 key:1 fractionally:1 threshold:3 demonstrating:1 drawn:3 v1:2 graph:16 fraction:1 sum:9 run:4 inverse:1 letter:1 named:1 mk2:3 reasonable:1 comparable:1 guaranteed:1 nan:1 cheng:1 constraint:2 aurelien:1 argument:1 optimality:1 min:4 attempting:1 performing:2 according:3 poor:1 conjugate:2 terminates:1 slightly:1 em:10 computationally:1 equation:9 discus:1 count:3 nonempty:1 eventually:1 letting:1 tractable:2 available:4 yedidia:1 apply:1 spectral:2 enforce:1 robustness:1 yair:1 original:2 clustering:15 include:1 biclusters:38 especially:1 society:1 kellis:1 objective:15 added:1 strategy:6 planted:7 bicluster:8 traditional:1 govaert:1 subnetwork:1 affinity:2 excels:3 separate:1 thank:2 separating:1 simulated:1 berlin:1 topic:1 relationship:2 ratio:10 setup:2 mostly:2 cij:12 statement:1 negative:1 unknown:5 perform:2 allowing:3 gilles:1 m12:1 datasets:4 supporting:1 extended:1 discovered:1 varied:2 community:2 coregulated:2 pair:2 distinction:3 learned:2 suggested:1 below:1 including:3 max:25 royal:1 belief:1 analogue:2 shifting:1 overlap:21 natural:1 indicator:1 improve:1 technology:1 axis:2 church:1 lij:13 genomics:1 review:4 prior:5 asymptotic:1 reordering:2 degree:2 rubin:1 thresholding:1 row:30 cancer:1 repeat:1 free:1 allow:1 institute:1 taking:1 unthresholded:1 naama:1 sparse:1 distributed:1 unweighted:2 collection:4 made:4 commonly:1 qualitatively:1 avoided:1 refinement:1 dard:1 transaction:1 approximate:1 emphasize:1 gene:12 clique:1 ml:1 global:6 reveals:1 incoming:2 b1:1 tuples:11 spectrum:1 search:9 latent:1 iterative:7 modularity:2 learn:1 nature:2 robust:2 maxk6:1 decoupling:3 ignoring:1 heidelberg:1 constructing:2 bcmp:30 noise:29 arise:1 advice:1 tanay:1 delbert:2 inferring:2 wish:1 exponential:1 breaking:1 rk:16 a3:1 workshop:1 intractable:2 essential:1 exists:1 intrinsic:1 effectively:1 importance:2 drew:2 supplement:2 justifies:1 nk:7 sorting:4 phenotype:1 generalizing:1 eij:10 explore:1 conveniently:1 expressed:1 scalar:1 biclustering:40 springer:1 ma:2 quantifying:1 shared:1 absence:1 professor:2 hard:2 biclustered:1 included:1 except:1 tumor:1 called:3 total:5 la:22 support:1 mark:2 arises:1 jonathan:1 bioinformatics:4 nk0:2 d1:5 |
5,086 | 5,604 | PAC-Bayesian AUC classification and scoring
James Ridgway*
CREST and CEREMADE University Dauphine
[email protected]
Nicolas Chopin
CREST (ENSAE) and HEC Paris
[email protected]
Pierre Alquier
CREST (ENSAE)
[email protected]
Feng Liang
University of Illinois at Urbana-Champaign
[email protected]
Abstract
We develop a scoring and classification procedure based on the PAC-Bayesian approach and the AUC (Area Under Curve) criterion. We focus initially on the class
of linear score functions. We derive PAC-Bayesian non-asymptotic bounds for
two types of prior for the score parameters: a Gaussian prior, and a spike-and-slab
prior; the latter makes it possible to perform feature selection. One important advantage of our approach is that it is amenable to powerful Bayesian computational
tools. We derive in particular a Sequential Monte Carlo algorithm, as an efficient
method which may be used as a gold standard, and an Expectation-Propagation
algorithm, as a much faster but approximate method. We also extend our method
to a class of non-linear score functions, essentially leading to a nonparametric
procedure, by considering a Gaussian process prior.
1
Introduction
Bipartite ranking (scoring) amounts to rank (score) data from binary labels. An important problem
in its own right, bipartite ranking is also an elegant way to formalise classification: once a score
function has been estimated from the data, classification reduces to choosing a particular threshold,
which determines to which class each data-point is assigned, according to whether its score is above
or below that threshold. It is convenient to choose that threshold only once the score has been estimated, so as to get finer control of the false negative and false positive rates; this is easily achieved
by plotting the ROC (Receiver operating characteristic) curve.
A standard optimality criterion for scoring is AUC (Area Under Curve), which measures the area
under the ROC curve. AUC is appealing for at least two reasons. First, maximising AUC is equivalent to minimising the L1 distance between the estimated score and the optimal score. Second,
under mild conditions, Cortes and Mohri [2003] show that AUC for a score s equals the probability that s(X⁻) < s(X⁺) for X⁻ (resp. X⁺) a random draw from the negative (resp. positive) class. Yan et al. [2003] observed that AUC-based classification handles skewed classes (say, a positive class much larger than the other) much better than standard classifiers, because it enforces a small score for all members of the negative class (again assuming the negative class is the smaller one).
One practical issue with AUC maximisation is that the empirical version of AUC is not a continuous function. One way to address this problem is to "convexify" this function, and study the properties of the so-obtained estimators [Clémençon et al., 2008a]. We follow instead the PAC-Bayesian approach in
this paper, which consists of using a random estimator sampled from a pseudo-posterior distribution
that penalises exponentially the (in our case) AUC risk. It is well known [see e.g. the monograph
of Catoni, 2007] that the PAC-Bayesian approach comes with a set of powerful technical tools to
* http://www.crest.fr/pagesperso.php?user=3328
establish non-asymptotic bounds; the first part of the paper derives such bounds. A second advantage of this approach, however, as we show in the second part of the paper, is that it is amenable to powerful Bayesian computational tools, such as Sequential Monte Carlo and Expectation Propagation.
2 Theoretical bounds from the PAC-Bayesian Approach
2.1 Notations
The data D consist in the realisation of n IID (independent and identically distributed) pairs (X_i, Y_i) with distribution P, taking values in R^d × {−1, 1}. Let n₊ = Σ_{i=1}^n 1{Y_i = +1} and n₋ = n − n₊. For a score function s : R^d → R, the AUC risk and its empirical counterpart may be defined as:

R(s) = \mathbb{P}_{(X,Y),(X',Y')\sim P}\left[\{s(X) - s(X')\}(Y - Y') < 0\right],

R_n(s) = \frac{1}{n(n-1)}\sum_{i\neq j}\mathbf{1}\left[\{s(X_i) - s(X_j)\}(Y_i - Y_j) < 0\right].

Let η(x) = E(Y|X = x), R̄ = R(η) and R̄_n = R_n(η). It is well known that η is the score that minimises R(s), i.e. R(s) ≥ R̄ = R(η) for any score s.

The results of this section apply to the class of linear scores, s_θ(x) = ⟨θ, x⟩, where ⟨θ, x⟩ = θ^T x denotes the inner product. Abusing notations, let R(θ) = R(s_θ) and R_n(θ) = R_n(s_θ), and, for a given prior density π_ξ(θ) that may depend on some hyperparameter ξ ∈ Ξ, define the Gibbs posterior density (or pseudo-posterior) as

\pi_{\xi,\lambda}(\theta|D) := \frac{\pi_\xi(\theta)\exp\{-\lambda R_n(\theta)\}}{Z_{\xi,\lambda}(D)}, \qquad Z_{\xi,\lambda}(D) = \int_{\mathbb{R}^d}\exp\{-\lambda R_n(\theta)\}\,\pi_\xi(\theta)\,d\theta,

for λ > 0. Both the prior and posterior densities are defined with respect to the Lebesgue measure over R^d.
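For concreteness, both quantities are easy to compute; the following NumPy sketch (ours, not part of the paper) evaluates R_n for a linear score and the corresponding unnormalised log pseudo-posterior:

```python
import numpy as np

def empirical_auc_risk(scores, y):
    """R_n(s): fraction of discordant ordered pairs i != j.

    scores: array of s(X_i); y: labels in {-1, +1}."""
    n = len(y)
    ds = scores[:, None] - scores[None, :]   # s(X_i) - s(X_j)
    dy = y[:, None] - y[None, :]             # Y_i - Y_j
    return ((ds * dy) < 0).sum() / (n * (n - 1))

def log_gibbs(theta, X, y, lam, log_prior):
    """Unnormalised log pseudo-posterior: log pi(theta) - lam * R_n(theta)."""
    return log_prior(theta) - lam * empirical_auc_risk(X @ theta, y)
```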
2.2 Assumptions and general results
Our general results require the following assumptions.

Definition 2.1 We say that Assumption Dens(c) is satisfied for c > 0 if

\mathbb{P}(\langle X_1 - X_2, \theta\rangle \geq 0,\ \langle X_1 - X_2, \theta'\rangle \leq 0) \leq c\,\|\theta - \theta'\|

for any θ and θ′ ∈ R^d such that ‖θ‖ = ‖θ′‖ = 1.

This is a mild assumption, which holds for instance as soon as (X₁ − X₂)/‖X₁ − X₂‖ admits a bounded probability density; see the supplement.
Definition 2.2 (Mammen & Tsybakov margin assumption) We say that Assumption MA(κ, C) is satisfied for κ ∈ [1, +∞] and C ≥ 1 if

\mathbb{E}\left[(q_{1,2}^\theta)^2\right] \leq C\left[R(\theta) - \bar{R}\right]^{1/\kappa},

where q_{i,j}^\theta = \mathbf{1}\{\langle\theta, X_i - X_j\rangle(Y_i - Y_j) < 0\} - \mathbf{1}\{[\eta(X_i) - \eta(X_j)](Y_i - Y_j) < 0\} - R(\theta) + \bar{R}.

This assumption was introduced for classification by Mammen and Tsybakov [1999], and used for ranking by Clémençon et al. [2008b] and Robbiano [2013] (see also a nice discussion in Lecué [2007]). The larger κ, the less restrictive MA(κ, C) is. In fact, MA(∞, C) is always satisfied for C = 4. For a noiseless classification task (i.e. η(X_i)Y_i ≥ 0 almost surely), R̄ = 0,

\mathbb{E}[(q_{1,2}^\theta)^2] = \mathrm{Var}(q_{1,2}^\theta) = \mathbb{E}\left[\mathbf{1}\{\langle\theta, X_1 - X_2\rangle(Y_1 - Y_2) < 0\}\right] = R(\theta) - \bar{R},

and MA(1, 1) holds. More generally, MA(1, C) is satisfied as soon as the noise is small; see the discussion in Robbiano [2013] (Proposition 5, p. 1256) for a formal statement. From now on, we focus on either MA(1, C) or MA(∞, C), C ≥ 1. It is possible to prove convergence under MA(κ, 1) for general κ ≥ 1, but at the price of complications regarding the choice of λ; see Catoni [2007], Alquier [2008] and Robbiano [2013].
We use the classical PAC-Bayesian methodology initiated by McAllester [1998] and Shawe-Taylor and Williamson [1997] (see Alquier [2008] and Catoni [2007] for a complete survey and more recent advances) to get the following results. Proofs of these and forthcoming results may be found in the supplement. Let K(ρ, π) denote the Kullback–Leibler divergence,

K(\rho, \pi) = \int \rho(d\theta)\log\left\{\frac{d\rho}{d\pi}(\theta)\right\}\ \text{if}\ \rho \ll \pi,\ \ +\infty\ \text{otherwise},

and let \mathcal{M}_+^1 denote the set of probability distributions ρ(dθ).
Lemma 2.1 Assume that MA(1, C) holds with C ≥ 1. For any fixed λ with 0 < λ ≤ (n−1)/(8C) and any ε > 0, with probability at least 1 − ε on the drawing of the data D,

\int R(\theta)\,\pi_{\xi,\lambda}(\theta|D)\,d\theta - \bar{R} \;\leq\; 2\inf_{\rho\in\mathcal{M}_+^1}\left\{\int R(\theta)\,\rho(d\theta) - \bar{R} + 2\,\frac{K(\rho,\pi_\xi) + \log(4/\varepsilon)}{\lambda}\right\}.
Lemma 2.2 Assume MA(∞, C) with C ≥ 1. For any fixed λ with 0 < λ ≤ (n−1)/8 and any ε > 0, with probability at least 1 − ε on the drawing of D,

\int R(\theta)\,\pi_{\xi,\lambda}(\theta|D)\,d\theta - \bar{R} \;\leq\; \inf_{\rho\in\mathcal{M}_+^1}\left\{\int R(\theta)\,\rho(d\theta) - \bar{R} + 2\,\frac{K(\rho,\pi_\xi) + \log(2/\varepsilon)}{\lambda}\right\} + \frac{16\lambda}{n-1}.

Both lemmas bound the expected risk excess, for a random estimator of θ generated from π_{ξ,λ}(θ|D).
2.3 Independent Gaussian Prior
We now specialise these results to the prior density \pi_\vartheta(\theta) = \prod_{i=1}^d \varphi(\theta_i; 0, \vartheta), i.e. a product of independent Gaussian distributions N(0, ϑ); here ξ = ϑ.
Theorem 2.3 Assume MA(1, C), C ≥ 1, and Dens(c), c > 0, and take ϑ = d²(1 + n^{1/(2d)}), λ = (n−1)/(8C). Then there exists a constant α = α(c, C, d) such that for any ε > 0, with probability at least 1 − ε,

\int R(\theta)\,\pi_\vartheta(\theta|D)\,d\theta - \bar{R} \;\leq\; 2\inf_{\theta_0}\left(R(\theta_0) - \bar{R}\right) + \alpha\,\frac{d\log(n) + \log(4/\varepsilon)}{n-1}.
Theorem 2.4 Assume MA(∞, C), C ≥ 1, and Dens(c), c > 0, and take ϑ = d²(1 + n^{1/(2d)}), λ = \sqrt{C\,dn\log(n)}. Then there exists a constant α = α(c, C, d) such that for any ε > 0, with probability at least 1 − ε,

\int R(\theta)\,\pi_\vartheta(\theta|D)\,d\theta - \bar{R} \;\leq\; \inf_{\theta_0}\left(R(\theta_0) - \bar{R}\right) + \alpha\,\sqrt{\frac{d\log(n) + \log(2/\varepsilon)}{n}}.

The proofs of these results are provided in the supplementary material. It is known that, under MA(κ, C), the rate (d/n)^{κ/(2κ−1)} is minimax-optimal for classification problems; see Lecué [2007]. Following Robbiano [2013], we conjecture that this rate is also optimal for ranking problems.
2.4 Spike and slab prior for feature selection
The independent Gaussian prior considered in the previous section is a natural choice, but it does not accommodate sparsity, that is, the possibility that only a small subset of the components of X_i actually determines the membership to either class. For sparse scenarios, one may use the spike and slab prior of Mitchell and Beauchamp [1988], George and McCulloch [1993],

\pi_\xi(\theta) = \prod_{i=1}^d\left[p\,\varphi(\theta_i; 0, v_1) + (1-p)\,\varphi(\theta_i; 0, v_0)\right]

with ξ = (p, v₀, v₁) ∈ [0, 1] × (R⁺)² and v₀ ≪ v₁, for which we obtain the following result. Note that ‖θ‖₀ denotes the number of non-zero coordinates of θ ∈ R^d.
Theorem 2.5 Assume MA(1, C) holds with C ≥ 1, Dens(c) holds with c > 0, and take p = 1 − exp(−1/d), v₀ ≤ 1/(2nd log(d)) and λ = (n−1)/(8C). Then there is a constant α = α(C, v₁, c) such that for any ε > 0, with probability at least 1 − ε on the drawing of the data D,

\int R(\theta)\,\pi_\xi(d\theta|D) - \bar{R} \;\leq\; 2\inf_{\theta_0}\left\{R(\theta_0) - \bar{R} + \alpha\,\frac{\|\theta_0\|_0\log(nd) + \log(4/\varepsilon)}{2(n-1)}\right\}.

Compared to Theorem 2.3, the bound above increases logarithmically rather than linearly in d, and depends explicitly on ‖θ₀‖₀, the sparsity of θ₀. This suggests that the spike and slab prior should lead to better performance than the Gaussian prior in sparse scenarios. The rate ‖θ₀‖₀ log(d)/n is the same as the one obtained in sparse regression, see e.g. Bühlmann and van de Geer [2011].
Finally, note that if v₀ → 0, we recover the more standard prior which assigns a point mass at zero for every component. However, this leads to a pseudo-posterior which is a mixture of 2^d components that mix Dirac masses and continuous distributions, and thus which is more difficult to approximate (although see the related remark in Section 3.4 for Expectation-Propagation).
3 Practical implementation of the PAC-Bayesian approach
3.1 Choice of hyper-parameters
Theorems 2.3, 2.4, and 2.5 propose specific values for the hyper-parameters λ and ϑ, but these values depend on some unknown constant C. Two data-driven ways to choose λ and ϑ are (i) cross-validation (which we will use for λ), and (ii) (pseudo-)evidence maximisation (which we will use for ϑ).
The latter may be justified from intermediate results of our proofs in the supplement, which provide an empirical bound on the expected risk:

\int R(\theta)\,\pi_{\xi,\lambda}(\theta|D)\,d\theta - \bar{R} \;\leq\; \Psi_{\lambda,n}\inf_{\rho\in\mathcal{M}_+^1}\left\{\int R_n(\theta)\,\rho(d\theta) - \bar{R}_n + 2\,\frac{K(\rho,\pi_\xi) + \log(2/\varepsilon)}{\lambda}\right\}

with Ψ_{λ,n} ≤ 2. The right-hand side is minimised at ρ(dθ) = π_{ξ,λ}(θ|D) dθ, and the so-obtained bound is −Ψ_{λ,n} log(Z_{ξ,λ}(D))/λ plus constants. Minimising the upper bound with respect to hyperparameter ϑ is therefore equivalent to maximising log Z_{ξ,λ}(D) with respect to ϑ. This is of course akin to the empirical Bayes approach that is commonly used in probabilistic machine learning. Regarding λ, the minimisation is more cumbersome because of the dependence on the log(2/ε) term and Ψ_{λ,n}, which is why we recommend cross-validation instead.
It seems noteworthy that, besides Alquier and Biau [2013], very few papers discuss the practical implementation of PAC-Bayes beyond a brief mention of MCMC (Markov chain Monte Carlo). However, estimating the normalising constant of a target density simulated with MCMC is notoriously difficult. In addition, even if one decides to fix the hyperparameters to some arbitrary value, MCMC may become slow and difficult to calibrate if the dimension of the sampling space becomes large. This is particularly true if the target does not (as in our case) have some specific structure that makes it possible to implement Gibbs sampling. The next two sections discuss two efficient approaches that make it possible to approximate both the pseudo-posterior π_{ξ,λ}(θ|D) and its normalising constant, and also to perform cross-validation with little overhead.
3.2 Sequential Monte Carlo
Given the particular structure of the pseudo-posterior π_{ξ,λ}(θ|D), a natural approach to simulate from it is to use tempering SMC [Sequential Monte Carlo; Del Moral et al., 2006]: define a certain sequence λ₀ = 0 < λ₁ < ... < λ_T, start by sampling from the prior π_ξ(θ), then apply successive importance sampling steps, from π_{ξ,λ_{t−1}}(θ|D) to π_{ξ,λ_t}(θ|D), leading to importance weights proportional to

\frac{\pi_{\xi,\lambda_t}(\theta|D)}{\pi_{\xi,\lambda_{t-1}}(\theta|D)} \propto \exp\{-(\lambda_t - \lambda_{t-1})R_n(\theta)\}.

When the importance weights become too skewed, one rejuvenates the particles through a resampling step (draw particles randomly with replacement, with probability proportional to the weights) and a move step (move particles according to a certain MCMC kernel).
One big advantage of SMC is that it is very easy to make it fully adaptive. For the choice of the successive λ_t, we follow Jasra et al. [2007] in solving (1) numerically in order to impose that the effective sample size (ESS) keeps a fixed value. This ensures that the degeneracy of the weights always remains under a certain threshold. For the MCMC kernel, we use a Gaussian random walk Metropolis step, calibrated on the covariance matrix of the resampled particles. See Algorithm 1 for a summary.

Algorithm 1 Tempering SMC
Input: N (number of particles), τ ∈ (0, 1) (ESS threshold), κ > 0 (random walk tuning parameter).
Init: Sample θ_0^i ∼ π_ξ(θ) for i = 1 to N; set t ← 1, λ_0 = 0, Z_0 = 1.
Loop:
a. Solve in λ_t the equation

\frac{\{\sum_{i=1}^N w_t(\theta_{t-1}^i)\}^2}{\sum_{i=1}^N \{w_t(\theta_{t-1}^i)\}^2} = \tau N, \qquad w_t(\theta) = \exp[-(\lambda_t - \lambda_{t-1})R_n(\theta)],   (1)

using bisection search. If λ_t ≥ λ_T, set Z_T = Z_{t-1} \times \{N^{-1}\sum_{i=1}^N w_t(\theta_{t-1}^i)\} and stop.
b. Resample: for i = 1 to N, draw A_t^i in 1, ..., N so that P(A_t^i = j) = w_t(\theta_{t-1}^j)/\sum_{k=1}^N w_t(\theta_{t-1}^k); see Algorithm 1 in the supplement.
c. Sample θ_t^i ∼ M_t(θ_{t-1}^{A_t^i}, dθ) for i = 1 to N, where M_t is an MCMC kernel that leaves π_t invariant; see Algorithm 2 in the supplement for an instance of such a kernel, which takes as input S = κΣ̂, where Σ̂ is the covariance matrix of the θ_{t-1}^{A_t^i}.
d. Set Z_t = Z_{t-1} \times \{N^{-1}\sum_{i=1}^N w_t(\theta_{t-1}^i)\}.
In our context, tempering SMC brings two extra advantages: it makes it possible to obtain samples from π_{ξ,λ}(θ|D) for a whole range of values of λ, rather than a single value, and it provides an approximation of Z_{ξ,λ}(D) for the same range of λ values, through the quantity Z_t defined in Algorithm 1.
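For illustration, here is a compact NumPy sketch of Algorithm 1 (our own simplification, not the authors' code; the prior interface, the fixed 50 bisection iterations, and the single Metropolis move per step are our choices):

```python
import numpy as np

def tempering_smc(sample_prior, log_prior, Rn, lam_T, N=1000, tau=0.5, kappa=0.5, seed=0):
    """Tempering SMC for pi_lam(theta) prop. to prior(theta) * exp(-lam * Rn(theta)).

    sample_prior(N) -> (N, d) prior draws; log_prior(theta) -> float;
    Rn(theta) -> empirical AUC risk in [0, 1].  Returns the final particles
    and an estimate of log Z_{lam_T}."""
    rng = np.random.default_rng(seed)
    theta = sample_prior(N)
    risks = np.apply_along_axis(Rn, 1, theta)
    lam, log_Z = 0.0, 0.0
    while lam < lam_T:
        # Step a: bisection on the next temperature so that ESS = tau * N.
        lo, hi = lam, lam_T
        for _ in range(50):
            mid = 0.5 * (lo + hi)
            w = np.exp(-(mid - lam) * (risks - risks.min()))
            ess = w.sum() ** 2 / (w ** 2).sum()
            lo, hi = (lo, mid) if ess < tau * N else (mid, hi)
        delta = hi - lam
        w = np.exp(-delta * (risks - risks.min()))
        log_Z += np.log(w.mean()) - delta * risks.min()    # step d: update log Z_t
        # Step b: multinomial resampling.
        idx = rng.choice(N, size=N, p=w / w.sum())
        theta, risks = theta[idx].copy(), risks[idx].copy()
        # Step c: Gaussian random-walk Metropolis, scaled by the particle covariance.
        C = kappa * np.cov(theta.T) + 1e-10 * np.eye(theta.shape[1])
        prop = theta + rng.standard_normal(theta.shape) @ np.linalg.cholesky(C).T
        prop_risks = np.apply_along_axis(Rn, 1, prop)
        log_acc = (np.array([log_prior(p) for p in prop])
                   - np.array([log_prior(t) for t in theta])
                   - (lam + delta) * (prop_risks - risks))
        accept = np.log(rng.uniform(size=N)) < log_acc
        theta[accept], risks[accept] = prop[accept], prop_risks[accept]
        lam = lam + delta
    return theta, log_Z
```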
3.3 Expectation-Propagation (Gaussian prior)
The SMC sampler outlined in the previous section works fairly well, and we will use it as a gold standard in our simulations. However, like any other Monte Carlo method, it may be too slow for large datasets. We now turn our attention to EP [Expectation-Propagation; Minka, 2001], a general framework to derive fast approximations to target distributions (and their normalising constants).
First note that the pseudo-posterior may be rewritten as:

\pi_{\xi,\lambda}(\theta|D) = \frac{1}{Z_{\xi,\lambda}(D)}\,\pi_\xi(\theta)\prod_{i,j} f_{ij}(\theta), \qquad f_{ij}(\theta) = \exp\left[-\lambda'\,\mathbf{1}\{\langle\theta, X_i - X_j\rangle < 0\}\right],

where λ′ = λ/(n₊n₋), and the product is over all (i, j) such that Y_i = 1, Y_j = −1. EP generates an approximation of this target distribution based on the same factorisation:

q(\theta) \propto q_0(\theta)\prod_{i,j} q_{ij}(\theta), \qquad q_{ij}(\theta) = \exp\left\{-\tfrac{1}{2}\theta^T Q_{ij}\theta + r_{ij}^T\theta\right\}.
We consider in this section the case where the prior is Gaussian, as in Section 2.3. Then one may set q₀(θ) = π_ϑ(θ). The approximating factors are un-normalised Gaussian densities (under a natural parametrisation), leading to an overall approximation that is also Gaussian, but other types of exponential family parametrisations may be considered; see the next section and Seeger [2005]. EP updates iteratively each site q_{ij} (that is, it updates the parameters Q_{ij} and r_{ij}), conditional on all the other sites, by matching the moments of q with those of the hybrid distribution

h_{ij}(\theta) \propto q(\theta)\,\frac{f_{ij}(\theta)}{q_{ij}(\theta)} \propto q_0(\theta)\,f_{ij}(\theta)\prod_{(k,l)\neq(i,j)} q_{kl}(\theta),

where again the product is over all (k, l) such that Y_k = 1, Y_l = −1, and (k, l) ≠ (i, j).
We refer to the supplement for a precise algorithmic description of our EP implementation. We highlight the following points. First, the site update is particularly simple in our case:

h_{ij}(\theta) \propto \exp\left\{\theta^T r_{ij}^h - \tfrac{1}{2}\theta^T Q_{ij}^h\theta\right\}\exp\left[-\lambda'\,\mathbf{1}\{\langle\theta, X_i - X_j\rangle < 0\}\right],

with r_{ij}^h = \sum_{(k,l)\neq(i,j)} r_{kl} and Q_{ij}^h = \sum_{(k,l)\neq(i,j)} Q_{kl}, which may be interpreted as: θ conditional on T(θ) = ⟨θ, X_i − X_j⟩ has a (d−1)-dimensional Gaussian distribution, and the distribution of T(θ) is that of a one-dimensional Gaussian penalised by a step function. The two first moments of this particular hybrid may therefore be computed exactly, and in O(d²) time, as explained in the supplement. The updates can be performed efficiently using the fact that the linear combination ⟨X_i − X_j, θ⟩ is a one-dimensional Gaussian. For our numerical experiments we used a parallel version of EP [Van Gerven et al., 2010]. The complexity of our EP implementation is O(n₊n₋d² + d³).
Second, EP offers at no extra cost an approximation of the normalising constant Z_{ξ,λ}(D) of the target π_{ξ,λ}(θ|D); in fact, one may even obtain derivatives of this approximated quantity with respect to hyper-parameters. See again the supplement for more details.
Third, in the EP framework, cross-validation may be interpreted as dropping all the factors qij that
depend on a given data-point Xi in the global approximation q. This makes it possible to implement
cross-validation at little extra cost [Opper and Winther, 2000].
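The one-dimensional computation underlying the site update of the first point above reduces to the moments of a Gaussian tilted by a step function, which are available in closed form through the Gaussian CDF. The sketch below is our own derivation of that scalar step (mapping these moments back to rank-one updates of (Q_ij, r_ij) is omitted):

```python
import numpy as np
from scipy.stats import norm

def tilted_gaussian_moments(m, s, lam_prime):
    """Moments of h(z) prop. to N(z; m, s^2) * exp(-lam_prime * 1{z < 0}).

    In each EP site update, the cavity marginal of T(theta) = <theta, X_i - X_j>
    is N(m, s^2) and the site multiplies it by a step function."""
    a = m / s
    pos, neg = norm.cdf(a), norm.cdf(-a)       # P(z >= 0), P(z < 0) under the cavity
    pdf = norm.pdf(a)
    t = np.exp(-lam_prime)
    Z = pos + t * neg                          # normalising constant of the hybrid
    m1 = (m * pos + s * pdf) + t * (m * neg - s * pdf)
    m2 = ((m**2 + s**2) * pos + m * s * pdf) + t * ((m**2 + s**2) * neg - m * s * pdf)
    mean = m1 / Z
    var = m2 / Z - mean**2
    return Z, mean, var
```

As a sanity check, lam_prime = 0 returns (1, m, s²), and lam_prime → ∞ recovers the moments of the Gaussian truncated to z ≥ 0.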
3.4 Expectation-Propagation (spike and slab prior)
To adapt our EP algorithm to the spike and slab prior of Section 2.4, we introduce latent variables Z_k ∈ {0, 1} which "choose" for each component θ_k whether it comes from the slab or from the spike, and we consider the joint target

\pi_{\xi,\lambda}(\theta, z|D) \propto \left\{\prod_{k=1}^d \mathcal{B}(z_k; p)\,\mathcal{N}(\theta_k; 0, v_{z_k})\right\}\exp\left[-\frac{\lambda}{n_+ n_-}\sum_{ij}\mathbf{1}\{\langle\theta, X_i - X_j\rangle > 0\}\right].

On top of the n₊n₋ Gaussian sites defined in the previous section, we add a product of d sites to approximate the prior. Following Hernandez-Lobato et al. [2013], we use

q_k(\theta_k, z_k) = \exp\left\{z_k\log\frac{p_k}{1 - p_k} - \tfrac{1}{2}\theta_k^2 u_k + v_k\theta_k\right\},

that is, an (un-normalised) product of an independent Bernoulli distribution for z_k, times a Gaussian distribution for θ_k. Again, the site update is fairly straightforward, and may be implemented in O(d²) time. See the supplement for more details. Another advantage of this formulation is that we obtain a Bernoulli approximation of the marginal pseudo-posterior π_{ξ,λ}(z_i = 1|D) to use in feature selection. Interestingly, taking v₀ to be exactly zero also yields stable results, corresponding to the case where the spike is a Dirac mass.
4 Extension to non-linear scores
To extend our methodology to non-linear score functions, we consider the pseudo-posterior

\pi_{\xi,\lambda}(ds|D) \propto \pi_\xi(ds)\exp\left[-\frac{\lambda}{n_+ n_-}\sum_{i\in D_+,\, j\in D_-}\mathbf{1}\{s(X_i) - s(X_j) > 0\}\right],

where π_ξ(ds) is some prior probability measure over an infinite-dimensional functional class. Let s_i = s(X_i) and s_{1:n} = (s_1, ..., s_n) ∈ R^n, and assume that π_ξ(ds) is a GP (Gaussian process) prior associated to some kernel k_ξ(x, x′). Then, using a standard trick in the GP literature [Rasmussen and Williams, 2006], one may derive the marginal (posterior) density (with respect to the n-dimensional Lebesgue measure) of s_{1:n} as

\pi_{\xi,\lambda}(s_{1:n}|D) \propto \mathcal{N}_d(s_{1:n}; 0, K_\xi)\exp\left[-\frac{\lambda}{n_+ n_-}\sum_{i\in D_+,\, j\in D_-}\mathbf{1}\{s_i - s_j > 0\}\right],

where N_d(s_{1:n}; 0, K_ξ) denotes the probability density of the N(0, K_ξ) distribution, and K_ξ is the n × n matrix (k_ξ(X_i, X_j))_{i,j=1}^n.
This marginal pseudo-posterior retains essentially the structure of the pseudo-posterior π_{ξ,λ}(θ|D) for linear scores, except that the "parameter" s_{1:n} is now of dimension n. We can apply straightforwardly the SMC sampler of Section 3.2, and the EP algorithm of Section 3.3, to this new target distribution. In fact, for the EP implementation, the particularly simple structure of a single site,

\exp\left[-\lambda'\,\mathbf{1}\{s_i - s_j > 0\}\right],

makes it possible to implement a site update in O(1) time, leading to an overall complexity of O(n₊n₋ + n³) for the EP algorithm.
Theoretical results for this approach could be obtained by applying lemmas from e.g. van der Vaart and van Zanten [2009], but we leave this for future study.
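As an illustration of this construction, the following sketch (ours; the squared-exponential kernel matches the one used in Section 5, but the length-scale handling is our assumption) evaluates the unnormalised log density of s_{1:n}:

```python
import numpy as np

def se_kernel(X, ell=1.0, sig=1.0):
    """Squared-exponential kernel: k(x, x') = sig^2 exp(-|x - x'|^2 / (2 ell^2))."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return sig**2 * np.exp(-0.5 * sq / ell**2)

def log_gp_pseudo_posterior(s, X, y, lam):
    """Unnormalised log density of s_{1:n} under the GP pseudo-posterior."""
    K = se_kernel(X) + 1e-8 * np.eye(len(X))     # jitter for numerical stability
    pos, neg = s[y == 1], s[y == -1]
    n_pairs = len(pos) * len(neg)
    # empirical AUC risk: fraction of misranked positive/negative pairs
    risk = (pos[:, None] < neg[None, :]).mean() if n_pairs else 0.0
    quad = s @ np.linalg.solve(K, s)
    return -0.5 * quad - lam * risk
```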
5 Numerical Illustration
Figure 1 compares the EP approximation with the output of our SMC sampler, on the well-known Pima Indians dataset and a Gaussian prior. Marginal first and second order moments essentially match; see the supplement for further details. The subsequent results are obtained with EP.
[Figure 1: EP approximation (green), compared to SMC (blue), of the marginal posterior of the first three coefficients θ₁, θ₂, θ₃, for the Pima dataset (see the supplement for additional analysis).]
We now compare our PAC-Bayesian approach (computed with EP) with Bayesian logistic regression
(to deal with non-identifiable cases), and with the rankboost algorithm [Freund et al., 2003] on different datasets¹; note that Cortes and Mohri [2003] showed that the function optimised by rankboost is AUC.
As mentioned in Section 3, we set the prior hyperparameters by maximizing the evidence, and we
use cross-validation to choose λ. To ensure convergence of EP, when dealing with difficult sites,
we use damping [Seeger, 2005]. The GP version of the algorithm is based on a squared exponential
kernel. Table 1 summarises the results; balance refers to the size of the smaller class in the data
(recall that the AUC criterion is particularly relevant for unbalanced classification tasks), EP-AUC
(resp. GPEP-AUC) refers to the EP approximation of the pseudo-posterior based on our Gaussian
prior (resp. Gaussian process prior). See also Figure 2 for ROC curve comparisons, and Table 1 in
the supplement for a CPU time comparison.
Note how the GP approach performs better for the colon data, where the number of covariates
(2000) is very large, but the number of observations is only 40. It seems also that EP gives a better
approximation in this case because of the lower dimensionality of the pseudo-posterior (Figure 2b).
¹ All available at http://archive.ics.uci.edu/ml/
Table 1: Comparison of AUC. The Glass dataset originally has more than two classes; we compare the "silicon" class against all the others.

Dataset  Covariates  Balance  EP-AUC  GPEP-AUC  Logit   Rankboost
Pima     7           34%      0.8617  0.8557    0.8646  0.8224
Credit   60          28%      0.7952  0.7922    0.7561  0.788
DNA      180         22%      0.9814  0.9812    0.9696  0.9814
SPECTF   22          50%      0.8684  0.8545    0.8715  0.8684
Colon    2000        40%      0.7034  0.75      0.73    0.5935
Glass    10          1%       0.9843  0.9629    0.9029  0.9436
[Figure 2: Some ROC curves associated to the examples described in a more systematic manner in Table 1; the PAC version is always in black. (a) Rankboost vs EP-AUC on Pima; (b) Rankboost vs GPEP-AUC on Colon; (c) Logistic vs EP-AUC on Glass.]
Finally, we also investigate feature selection for the DNA dataset (180 covariates) using a spike and slab prior. The regularization plot (Figure 3a) shows how certain coefficients shrink to zero as the spike's variance v₀ goes to zero, allowing for some sparsity. The aim of a positive variance in the spike is to absorb negligible effects into it [Ročková and George, 2013]. We observe this effect in Figure 3a, where one of the covariates becomes positive when v₀ decreases.
[Figure 3: (a) Regularization plot; (b) Estimate. Regularization plot for v₀ ∈ [10⁻⁶, 0.1] and estimation for v₀ = 10⁻⁶ for the DNA dataset; blue circles denote posterior probabilities ≥ 0.5.]
6 Conclusion
The combination of the PAC-Bayesian theory and Expectation-Propagation leads to fast and efficient
AUC classification algorithms, as observed on a variety of datasets, some of them very unbalanced.
Future work may include extending our approach to more general ranking problems (e.g. multiclass), establishing non-asymptotic bounds in the nonparametric case, and reducing the CPU time
by considering only a subset of all the pairs of datapoints.
Bibliography
P. Alquier. PAC-Bayesian bounds for randomized empirical risk minimizers. Mathematical Methods of Statistics, 17(4):279–304, 2008.
P. Alquier and G. Biau. Sparse single-index model. J. Mach. Learn. Res., 14(1):243–280, 2013.
P. Bühlmann and S. van de Geer. Statistics for High-Dimensional Data. Springer, 2011.
O. Catoni. PAC-Bayesian Supervised Classification, volume 56. IMS Lecture Notes & Monograph Series, 2007.
S. Clémençon, G. Lugosi, and N. Vayatis. Ranking and empirical minimization of U-statistics. Ann. Stat., 36(2):844–874, 2008a.
S. Clémençon, V. C. Tran, and H. De Arazoza. A stochastic SIR model with contact-tracing: large population limits and statistical inference. Journal of Biological Dynamics, 2(4):392–414, 2008b.
C. Cortes and M. Mohri. AUC optimization vs. error rate minimization. In NIPS, volume 9, 2003.
P. Del Moral, A. Doucet, and A. Jasra. Sequential Monte Carlo samplers. J. R. Statist. Soc. B, 68(3):411–436, 2006. ISSN 1467-9868.
Y. Freund, R. Iyer, R. E. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. J. Mach. Learn. Res., 4:933–969, 2003.
E. I. George and R. E. McCulloch. Variable selection via Gibbs sampling. J. Am. Statist. Assoc., 88(423):881–889, 1993.
D. Hernandez-Lobato, J. Hernandez-Lobato, and P. Dupont. Generalized spike-and-slab priors for Bayesian group feature selection using Expectation Propagation. J. Mach. Learn. Res., 14:1891–1945, 2013.
A. Jasra, D. Stephens, and C. Holmes. On population-based simulation for static inference. Statist. Comput., 17(3):263–279, 2007.
G. Lecué. Méthodes d'agrégation: optimalité et vitesses rapides. Ph.D. thesis, Université Paris 6, 2007.
E. Mammen and A. Tsybakov. Smooth discrimination analysis. Ann. Stat., 27(6):1808–1829, 1999.
D. A. McAllester. Some PAC-Bayesian theorems. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, pages 230–234. ACM, 1998.
T. Minka. Expectation Propagation for approximate Bayesian inference. In Proc. 17th Conf. Uncertainty in Artificial Intelligence, UAI '01, pages 362–369. Morgan Kaufmann Publishers Inc., 2001.
T. J. Mitchell and J. Beauchamp. Bayesian variable selection in linear regression. J. Am. Statist. Assoc., 83(404):1023–1032, 1988.
M. Opper and O. Winther. Gaussian processes for classification: mean-field algorithms. Neural Computation, 12(11):2655–2684, November 2000.
C. Rasmussen and C. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
S. Robbiano. Upper bounds and aggregation in bipartite ranking. Elec. J. of Stat., 7:1249–1271, 2013.
V. Ročková and E. George. EMVS: the EM approach to Bayesian variable selection. J. Am. Statist. Assoc., 2013.
M. Seeger. Expectation propagation for exponential families. Technical report, U. of California, 2005.
J. Shawe-Taylor and R. C. Williamson. A PAC analysis of a Bayesian estimator. In Proc. Conf. Computational Learning Theory, pages 2–9. ACM, 1997.
A. W. van der Vaart and J. H. van Zanten. Adaptive Bayesian estimation using a Gaussian random field with inverse Gamma bandwidth. Ann. Stat., pages 2655–2675, 2009.
M. A. J. Van Gerven, B. Cseke, F. P. de Lange, and T. Heskes. Efficient Bayesian multivariate fMRI analysis using a sparsifying spatio-temporal prior. NeuroImage, 50:150–161, 2010.
L. Yan, R. Dodier, M. Mozer, and R. Wolniewicz. Optimizing classifier performance via an approximation to the Wilcoxon–Mann–Whitney statistic. In Proc. 20th Int. Conf. Mach. Learn., pages 848–855, 2003.
5,087 | 5,605 | Partition-wise Linear Models
Hidekazu Oiwa*
Graduate School of Information Science and Technology
The University of Tokyo
[email protected]
Ryohei Fujimaki
NEC Laboratories America
[email protected]
Abstract
Region-specific linear models are widely used in practical applications because
of their non-linear but highly interpretable model representations. One of the key
challenges in their use is non-convexity in simultaneous optimization of regions
and region-specific models. This paper proposes novel convex region-specific linear models, which we refer to as partition-wise linear models. Our key ideas
are 1) assigning linear models not to regions but to partitions (region-specifiers)
and representing region-specific linear models by linear combinations of partition-specific models, and 2) optimizing regions via partition selection from a large
number of given partition candidates by means of convex structured regularizations. In addition to providing initialization-free globally-optimal solutions, our
convex formulation makes it possible to derive a generalization bound and to use
such advanced optimization techniques as proximal methods and decomposition
of the proximal maps for sparsity-inducing regularizations. Experimental results
demonstrate that our partition-wise linear models perform better than or are at
least competitive with state-of-the-art region-specific or locally linear models.
1 Introduction
Among pre-processing methods, data partitioning is one of the most fundamental. In it, an input
space is divided into several sub-spaces (regions), and a simple model is assigned to each region. In
addition to better predictive performance resulting from the non-linear nature that arises from multiple partitions, the regional structure provides a better understanding of data (i.e., interpretability).
Region-specific linear models learn both partitioning structures and predictors in each region.
Such models vary, from traditional decision/regression trees [1] to more advanced models [2, 3, 4], depending on their region-specifiers (how they characterize regions), region-specific prediction models, and the objective functions to be optimized. One important challenge that remains in learning these models is the non-convexity that arises from the inter-dependency of optimizing regions
and prediction models in individual regions. Most previous work suffers from disadvantages arising
from non-convexity, including initialization-dependency (bad local minima) and lack of generalization error analysis.
We propose convex region-specific linear models, which are referred to as partition-wise linear models. Our models have two distinguishing characteristics that help avoid the non-convexity problem.
Partition-wise Modeling We propose partition-wise linear models as a novel class of region-specific linear models. Our models divide an input space by means of a small set of partitions.¹ Each partition possesses one weight vector, and this weight vector is only applied to one side of the divided space. It is trained to represent the local relationship between input vectors and output values. Region-specific predictors are constructed by linear combinations of these weight vectors. Our partition-wise parameterization enables us to construct convex objective functions.
* The work reported here was conducted when the first author was a visiting researcher at NEC Laboratories America.
¹ In our paper, a region is a sub-space in an input space. Multiple regions do not intersect each other, and, in their entirety, they cover the whole input space. A partition is an indicator function that divides an input space into two parts.
Convex Optimization via Sparse Partition Selection We optimize regions by selecting effective partitions from a large number of given candidates, using convex sparsity-inducing structured
regularizations. In other words, we trade continuous region optimization for convexity. We allow partitions to be located only at given discrete candidate positions, which lets us derive convex optimization problems. We have developed an efficient algorithm to solve structured-sparse optimization
problems, and in it we adopt a proximal method [5, 6] and the decomposition of proximal maps [7].
As a reliable partition-wise linear model, we have developed a global and local residual model that
combines one global linear model and a set of partition-wise linear ones. Further, our theoretical
analysis gives a generalization bound for this model to evaluate the risk of over-fitting. Our generalization bound analysis indicates that we can increase the number of partition candidates by less than
an exponential order with respect to the sample size, which is large enough to achieve good predictive performance in practice. Experimental results have demonstrated that our models perform
better than or are at least competitive with state-of-the-art region-specific or locally linear models.
1.1 Related Work
Region-specific linear models and locally linear models are the most closely related models to our
own. The former category, to which our models belong, assumes one predictor in a specific region
and has an advantage in clear model interpretability, while the latter assigns one predictor to every
single datum and has an advantage in higher model flexibility. Interpretable models are able to
indicate clearly where and how the relationships between inputs and outputs change.
Well-known precursors to region-specific linear models are decision/regression trees [1], which use
rule-based region-specifiers and constant-valued predictors. Another traditional framework is a hierarchical mixture of experts [8], which is a probabilistic tree-based region-specific model framework.
Recently, Local Supervised Learning through Space Partitioning (LSL-SP) has been proposed [3].
LSL-SP utilizes a linear-chain of linear region-specifiers as well as region-specific linear predictors.
The highly important advantage of LSL-SP is the upper bound of generalization error analysis via
the VC dimension. Additionally, a Cost-Sensitive Tree of Classifiers (CSTC) algorithm has also
been developed [4]. It utilizes a tree-based linear localizer and linear predictors. This algorithm?s
uniqueness among other region-specific linear models is in its taking ?feature utilization cost? into
account for test time speed-up. Although the developers? formulation with sparsity-inducing structured regularization is, in a way, related to ours, their model representations and, more importantly,
their motivation (test time speed-up) is different from ours.
Fast Local Kernel Support Vector Machines (FaLK-SVMs) represent state-of-the-art locally linear
models. FaLK-SVMs produce test-point-specific weight vectors by learning local predictive models
from the neighborhoods of individual test points [9]. It aims to reduce prediction time cost by preprocessing for nearest-neighbor calculations and local model sharing, at the cost of initializationindependency. Another advanced locally linear model is that of Locally Linear Support Vector
Machines (LLSVMs) [10]. LLSVMs assign linear SVMs to multiple anchor points produced by
manifold learning [11, 12] and construct test-point-specific linear predictors according to the weights
of anchor points with respect to individual test points. When the manifold learning procedure is
initialization-independent, LLSVMs become initial-value-independent because of the convexity of
the optimization problem. Similarly, clustered SVMs (CSVMs) [13] assume given data clusters
and learn multiple SVMs for individual clusters simultaneously. Although CSVMs are convex and
generalization bound analysis has been provided, they cannot optimize regions (clusters).
Jose et al. have proposed Local Deep Kernel Learning (LDKL) [2], which adopts an intermediate
approach with respect to region-specific and locally linear models. LDKL is a tree-based local kernel classifier in which the kernel defines regions and can be seen as performing region-specification.
One main difference from common region-specific linear models is that LDKL changes kernel combination weights for individual test points, and the predictors are locally determined in every single
region. Its aim is to speed up kernel SVMs' prediction while maintaining their non-linear ability.
Table 1 summarizes the above described state-of-the-art models in contrast with ours from a number
of significant perspectives. Our proposed model uniquely exhibits three properties: joint optimization of regions and region-specific predictors, initialization-independent optimization, and meaningful generalization bound.
Table 1: Comparison of region-specific and locally linear models.

                            Ours             LSL-SP  CSTC    LDKL    FaLK-SVM      LLSVM
Region Optimization         ✓                ✓       ✓       ✓       -             -
Initialization-independent  ✓                -       -       -       ✓             ✓
Generalization Bound        ✓                ✓       -       -       -             -
Region Specifiers           Rule (Sec. 2.2)  Linear  Linear  Linear  Non-Regional  Non-Regional

1.2 Notations
Scalars and vectors are denoted by lower-case x; matrices by upper-case X. The n-th training sample and label are denoted by x_n ∈ R^D and y_n, respectively.
2 Partition-wise Linear Models
This section explains partition-wise linear models under the assumption that effective partitioning is
already fixed. We discuss how to optimize partitions and region-specific linear models in Section 3.
2.1 Framework
Figure 1 illustrates the concept of partition-wise linear models. Suppose we have P partitions (red dashed lines in Figure 1) which essentially specify 2^P regions. Partition-wise linear models are defined as follows. First, we assign a linear weight vector a_p to the p-th partition. This partition has an activeness function, f_p, which indicates whether the attached weight vector a_p is applied to individual data points or not. For example, in Figure 1, we set the weight vector a_1 to be applied to the right-hand side of partition p_1. In this case, the corresponding activeness function is defined as f_1(x) = 1 when x is in the right-hand side of p_1. Second, region-specific predictors (squared regions surrounded by partitions in Figure 1) are defined by a linear combination of the active partition-wise weight vectors, which are also linear models.

Let us formally define the partition-wise linear models. We have a set of given activeness functions, f_1, ..., f_P, which is denoted in vector form as f(·) = (f_1(·), ..., f_P(·))^T. The p-th element f_p(x) ∈ {0, 1} indicates whether the attached weight vector a_p is applied to x or not. The activeness function f(·) can represent at most 2^P regions, and f(x) specifies to which region x belongs. A linear model of an individual region is then represented as \sum_{p=1}^P f_p(·)\,a_p. It is worth noting that partition-wise linear models use P linear weight vectors to represent 2^P regions and so restrict the number of parameters.

[Figure 1: Concept of Partition-wise Linear Models. Four partitions p_1, ..., p_4 with attached weight vectors a_1, ..., a_4; each region's predictor is the sum of its active weight vectors (e.g., a_1 + a_3 + a_4).]

The overall predictor g(·) can be denoted as follows:

g(x) = \sum_p f_p(x)\sum_d a_{dp}x_d.   (1)
Let us define A as A = (a1 , . . . , aP ). The partition-wise linear model (1) simply acts as a linear
model w.r.t. A while it captures the non-linear nature of data (individual regions use different linear
models). Such non-linearity originates from the activeness functions fp s, which are fundamentally
important components in our models.
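For concreteness, the model (1) can be evaluated with a few lines of NumPy; the sketch below (ours, using axis-parallel activeness functions of the kind introduced in Section 2.2) computes g(x) for a batch of inputs:

```python
import numpy as np

def predict(X, A, partitions):
    """Evaluate g(x) = sum_p f_p(x) <a_p, x> for a batch of inputs.

    X: (n, D) inputs; A: (P, D) partition-wise weights;
    partitions: list of P activeness functions mapping (n, D) -> {0,1}^n."""
    F = np.stack([f(X) for f in partitions], axis=1)   # (n, P) activeness matrix
    return np.einsum('np,pd,nd->n', F, A, X)           # sum_p f_p(x) <a_p, x>

def axis_partition(d, c):
    """Axis-parallel activeness function f(x) = 1{x_d > c}."""
    return lambda X: (X[:, d] > c).astype(float)
```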
By introducing a convex loss function ℓ(·,·) (e.g., squared loss for regression, squared hinge or logistic loss for classification), we can represent an objective function of the partition-wise linear models as a convex loss minimization problem:

\min_A \sum_n \ell(y_n, g(x_n)) = \min_A \sum_n \ell\Big(y_n,\ \sum_p f_p(x_n)\sum_d a_{dp}x_{nd}\Big).   (2)

Here we have given a convex formulation of region-specific linear models under the assumption that a set of partitions is given. In Section 3, we propose a convex optimization algorithm for partitions and regions as a partition selection problem, using sparsity-inducing structured regularization.
2.2 Partition Activeness Functions
A partition activeness function fp divides the input space into two regions, and a set of activeness
functions defines the entire region-structure. Although any function is applicable in principle to
being used as a partition activeness function, we prefer as simple a region representation as possible
because of our practical motivation of utilizing region-specific linear models (i.e., interpretability is
a priority). This paper restricts them to being parallel to the coordinates, e.g., fp (x) = 1 (xi > 2.5)
and fp(x) = 0 (otherwise) with respect to the i-th coordinate. Although this "rule-representation"
is simpler than others [2, 3] which use dense linear hyperplanes as region-specifiers, our empirical
evaluation (Section 5) indicates that our models perform competitively with or even better than those
others by appropriately optimizing the simple region-specifiers (partition activeness functions).
2.3 Global and Local Residual Model
As a special instance of partition-wise linear models, we here propose a model which we refer to
as a global and local residual model. It employs a global linear weight vector a0 in addition to
partition-wise linear weights. The predictor model (1) can be rewritten as:
g(x) = a_0^T x + \sum_p f_p(x)\sum_d a_{dp}x_d.   (3)

The global weight vector is active for all data. The integration of the global weight vector enables the model to determine how features affect outputs not only locally but also globally. Let us consider a new partition activeness function f_0(x) that always returns 1 regardless of x. Then, by setting f(·) = (f_0(·), f_1(·), ..., f_P(·))^T and A = (a_0, a_1, ..., a_P), the global and local residual model can be represented using the same notation as in Section 2.1. Although a_0 and a_p have no fundamental difference here, they are different in terms of how we regularize them (Section 3.1).
3 Convex Optimization of Regions and Predictors
In Section 2, we presented a convex formulation of partition-wise linear models in (2) under the assumption that a set of partition activeness functions was given. This section relaxes this assumption
and proposes a convex partition optimization algorithm.
3.1 Region Optimization as Sparse Partition Selection
Let us assume that we have been given P + 1 partition activeness functions, f0 , f1 , . . . , fP , and their
attached linear weight vectors, a0 , a1 , . . . , aP , where f0 and a0 are the global activeness function
and weight vector, respectively. We formulate the region optimization problem here as partition
selection by setting most of the a_p s to zero, since a_p = 0 corresponds to the situation in which
the p-th partition does not exist.
Formally, we formulate our optimization problem with respect to regions and weight vectors by introducing two types of sparsity-inducing constraints to (2) as follows:

\min_A \sum_n \ell(y_n, g(x_n)) \quad \text{s.t.} \quad \sum_{p\in\{1,\dots,P\}} \mathbf{1}_{\{a_p \neq 0\}} \leq \eta_P,\ \ \|a_p\|_0 \leq \eta_0\ \ \forall p.   (4)

The former constraint restricts the number of effective partitions to at most η_P. Note that we do not enforce this sparse partition constraint on the global model a_0, so as to be able to determine local trends as residuals from a global trend. The latter constraint restricts the number of effective features of each a_p to at most η_0. We add this constraint because 1) it is natural to assume that only a small number of features are locally effective in practical applications, and 2) a sparser model is typically preferred for our purposes because of its better interpretability.
3.2 Convex Optimization via Decomposition of Proximal map
3.2.1 The Tightest Convex Envelope
The constraints in (4) are non-convex, and it is very hard to find the global optimum due to the
indicator functions and L0 penalties. This makes optimization over a non-convex region a very
complicated task, and we therefore apply a convex relaxation. One standard approach to convex
relaxation would be a combination of group L1 (the first constraint) and L1 (the second constraint)
penalties. Here, however, we consider the tightest convex relaxation of (4) as follows:
\min_A \sum_n \ell(y_n, g(x_n)) \quad \text{s.t.} \quad \sum_{p=1}^P \|a_p\|_\infty \leq \eta_P,\ \ \sum_{d=1}^D |a_{dp}| \leq \eta_0\ \ \forall p.   (5)

The tightness of (5) is shown in the full version [14]. Through such a convex envelope of the constraints, the feasible region becomes convex. Therefore, we can reformulate (5) into the following equivalent problem:
\min_A \sum_n \ell(y_n, g(x_n)) + \Omega(A), \quad \text{where}\ \ \Omega(A) = \lambda_P\sum_{p=1}^P \|a_p\|_\infty + \lambda_0\sum_{p=0}^P\sum_{d=1}^D |a_{dp}|,   (6)

where λ_P and λ_0 are regularization weights corresponding to η_P and η_0, respectively. We derive an efficient optimization algorithm using a proximal method and the decomposition of proximal maps.
3.2.2 Proximal method and FISTA
The proximal method is a standard efficient tool for solving convex optimization problems with
non-differentiable regularizers. It iteratively applies gradient steps and proximal steps to update parameters. This achieves O(1/t) convergence [5] under Lipschitz-continuity of the loss gradient, or even O(1/t²) convergence if an acceleration technique, such as the fast iterative shrinkage-thresholding algorithm (FISTA) [6, 15], is incorporated.
Let us define A^{(t)} as the weight matrix at the t-th iteration. In the gradient step, the weight vectors are updated to decrease the empirical loss through a first-order approximation of the loss functions:

A^{(t+\frac{1}{2})} = A^{(t)} - \eta^{(t)}\sum_n \nabla_{A^{(t)}}\ell(y_n, g(x_n)),   (7)

where η^{(t)} is a step size and ∇_{A^{(t)}}ℓ(·,·) is the gradient of the loss evaluated at A^{(t)}. In the proximal step, we apply the regularization to the current solution A^{(t+1/2)}:

A^{(t+1)} = M_0(A^{(t+\frac{1}{2})}), \quad \text{where}\ \ M_0(B) = \mathop{\mathrm{argmin}}_A\left\{\frac{1}{2}\|A - B\|_F^2 + \eta^{(t)}\Omega(A)\right\},   (8)

where ‖·‖_F is the Frobenius norm. Furthermore, we employed FISTA [6] to achieve a faster convergence rate for this weakly convex problem, and adopted a backtracking rule [6] to avoid the difficulty of calculating appropriate step widths beforehand. Through empirical evaluations as well as theoretical arguments, we have confirmed that it significantly improves convergence in learning partition-wise linear models. The details are given in the full version [14].
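A minimal sketch of the resulting FISTA loop is given below (ours; it uses a fixed step size for brevity, whereas the paper selects the step size by backtracking):

```python
import numpy as np

def fista(grad_loss, prox, A0, eta=1e-2, n_iter=500):
    """FISTA: accelerated proximal gradient for  min_A  loss(A) + Omega(A).

    grad_loss(A) -> gradient of the smooth loss; prox(B, eta) -> the proximal
    step M0 applied to B with step size eta."""
    A = A0.copy()
    V = A0.copy()          # extrapolated point
    t = 1.0
    for _ in range(n_iter):
        A_next = prox(V - eta * grad_loss(V), eta)          # gradient + proximal step
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        V = A_next + ((t - 1.0) / t_next) * (A_next - A)    # momentum extrapolation
        A, t = A_next, t_next
    return A
```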
3.2.3 Decomposition of Proximal Map
The computational cost of the proximal method depends strongly on the efficiency of solving the
proximal step (8). A number of approaches have been developed for improving efficiency, including the minimum-norm-point approach [16] and the network-flow approach [17, 18]. Their computational efficiencies depend strongly on feature and partition size², however, which makes them
inappropriate for our formulation because of potentially large feature and partition sizes.
Alternatively, this paper employs the decomposition of proximal maps [7]. The key idea here is
to decompose the proximal step into a sequence of sub-problems that are easily solvable. We first
introduce two easily-solvable proximal maps as follows:
M_1(B) = \mathop{\mathrm{argmin}}_A\ \frac{1}{2}\|A - B\|_F^2 + \eta^{(t)}\lambda_P\sum_{p=1}^P \|a_p\|_\infty,   (9)

M_2(B) = \mathop{\mathrm{argmin}}_A\ \frac{1}{2}\|A - B\|_F^2 + \eta^{(t)}\lambda_0\sum_{p=0}^P\sum_{d=1}^D |a_{dp}|.   (10)

The theorem below guarantees that the decomposition of the proximal map (8) can be performed. The proof is provided in the full version.
Theorem 1 The original problem (8) can be decomposed into a sequence of two easily solvable proximal map problems as follows:

A^{(t+1)} = M_0(A^{(t+\frac{1}{2})}) = M_2(M_1(A^{(t+\frac{1}{2})})).   (11)

² For example, the fastest algorithm for the network-flow approach has O(M(B+1) log(M²/(B+1))) time complexity, where B is the number of breakpoints determined by the structure of the graph (B ≤ D(P+1) = O(DP)) and M is the number of nodes, that is, P + D(P+1) = O(DP) [17]. Therefore, the worst-case computational complexity is O(D²P² log DP).
The first proximal map (9) is the proximal operator with respect to the L_{1,∞}-regularization. This problem can be decomposed into group-wise sub-problems. Each group-wise proximal operator can be computed through a projection onto an L1-norm ball (derived from the Moreau decomposition [16]), that is,

a_p = b_p - \mathop{\mathrm{argmin}}_{c:\ \|c\|_1 \leq \eta^{(t)}\lambda_P} \|c - b_p\|_2.

This projection problem can be efficiently solved [19].
The second proximal map (10) is the well-known proximal operator with respect to L1-regularization. This problem can be decomposed into element-wise ones, and its solution is given in closed form by a_{dp} = \mathrm{sgn}(b_{dp})\max(0, |b_{dp}| - \eta^{(t)}\lambda_0). These two sub-problems are easily solved; therefore, we can easily obtain the solution of the original proximal map (8).
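The decomposed proximal step therefore amounts to one L1-ball projection per partition followed by global soft-thresholding; a sketch (ours, using the projection algorithm of [19]):

```python
import numpy as np

def project_l1_ball(v, z):
    """Euclidean projection of v onto {c : ||c||_1 <= z} (Duchi et al. [19])."""
    if np.abs(v).sum() <= z:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    cssv = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (cssv - z))[0][-1]
    theta = (cssv[rho] - z) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_M0(B, eta, lam_P, lam_0):
    """M0(B) = M2(M1(B)): L_{1,inf} prox via Moreau decomposition, then soft-thresholding.

    B: (P+1, D) matrix with row 0 the global weight a_0, which is exempt from
    the L_inf penalty (the sum in Omega starts at p = 1)."""
    A = B.copy()
    for p in range(1, A.shape[0]):        # M1: prox of eta*lam_P*||a_p||_inf per partition
        A[p] = A[p] - project_l1_ball(A[p], eta * lam_P)
    # M2: element-wise L1 prox (soft-thresholding), applied to all rows p = 0..P
    return np.sign(A) * np.maximum(np.abs(A) - eta * lam_0, 0.0)
```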
O(NP + P̂D + PD log D) is the computational complexity of partition-wise linear models, where P̂ is the number of active partitions. The procedure for deriving the computational complexity, the implementation that speeds up the optimization through warm starts, and the summary of the iterative update procedure are given in the full version.
4 Generalization Bound Analysis
This section presents the derivation of a generalization error bound for partition-wise linear models
and discusses how we can increase the number of partition candidates P over the number of samples
N . Our bound analysis is related to that of [20], which gives bounds for general overlapping group
Lasso cases, while ours is specifically designed for partition-wise linear models.
Let us first derive an empirical Rademacher complexity [21] for a feasible weight space conditioned on (6). We can derive the Rademacher complexity for our model using the lemma below. Its proof is shown in the full version, and this result is used to analyze the expected loss bound.

Lemma 1 If Ω(A) ≤ 1 is satisfied and if almost surely ‖x‖_∞ ≤ 1 with respect to x ∈ X, the empirical Rademacher complexity for partition-wise linear models can be bounded as:

\Re_A(X) \leq \frac{2^{3/2}}{\sqrt{N}}\sqrt{2 + \ln(P + D(P+1))}.   (12)

The next theorem shows the generalization bound of the global and local residual model. This bound is straightforwardly derived from Lemma 1 and the discussion of [21]. In [21], it has been shown that a uniform bound on the estimation error can be obtained through the upper bound of the Rademacher complexity derived in Lemma 1. By using the uniform bound, the generalization bound of the global and local residual model defined in formula (4) can be derived.

Theorem 2 Let us define the set of weights that satisfy Ω_group(A) ≤ 1 as A, where Ω_group(A) is as defined in Section 2.5 of [20]. Let each datum (x_n, y_n) be i.i.d. sampled from a data distribution D, and assume the loss functions ℓ(·,·) are L-Lipschitz with respect to a norm ‖·‖, with range within [0, 1]. Then, for any constant δ ∈ (0, 1) and any A ∈ A, the following inequality holds with probability at least 1 − δ:

\mathbb{E}_{(x,y)\sim D}[\ell(y, g(x))] \leq \frac{1}{N}\sum_{n=1}^N \ell(y_n, g(x_n)) + \Re_A(X) + \sqrt{\frac{\ln(1/\delta)}{2N}}.   (13)

This theorem implies how we can increase the number of partition candidates. The third term of the right-hand side is obviously small if N is large. The second term converges to zero as N → ∞ if the value of P is smaller than o(e^N), which is sufficiently large in practice. In summary, we expect to be able to handle a sufficient number of partition candidates for learning with little risk of over-fitting.
5 Experiments
We conducted two types of experiments: 1) evaluation of how partition-wise linear models perform,
on the basis of a simple synthetic dataset and 2) comparisons with state-of-the-art region-specific
and locally linear models on the basis of standard classification and regression benchmark datasets.
5.1 Demonstration using Synthetic Dataset
We generated a synthetic binary classification dataset as follows. The x_n were uniformly sampled from
a 20-dimensional input space in which each dimension had values between [−1, 1]. The target variables were determined using the XOR rule over the first and second features (the other 18 features
were added as noise for prediction purposes), i.e., if the signs of the first and second
feature values are the same, y = 1; otherwise y = −1. This is well known as a case in which linear models do not work. For example, L1-regularized logistic regression produced nearly random
outputs, with an error rate of 0.421.
We generated one partition for each feature except the first. Each partition became active
if the corresponding feature value was greater than 0.0. Therefore, the number of candidate partitions
was 19. We used the logistic regression function for the loss functions. Hyper-parameters^3 were set as
λ₀ = 0.01 and λ_P = 0.001. The algorithm was run for 1,000 iterations.
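A minimal generator for this synthetic dataset could look as follows (our own sketch; the paper does not include data-generation code, and the function name and seed are arbitrary):

import numpy as np

def make_xor_data(n_samples, dim=20, seed=0):
    rng = np.random.RandomState(seed)
    X = rng.uniform(-1.0, 1.0, size=(n_samples, dim))  # inputs in [-1, 1]^20
    # XOR rule on the first two features; the remaining 18 are noise
    y = np.where(np.sign(X[:, 0]) == np.sign(X[:, 1]), 1, -1)
    return X, y

X, y = make_xor_data(1000)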
Figure 2 illustrates results produced by the global and local residual model. The left-hand figure
illustrates a learned effective partition (red line) to which the weight vector a₁ = (10.96, 0.0, · · ·) was
assigned. This weight a₁ was only applied to the region above the red line. By combining a₁ and the
global weight a₀, we obtained the piece-wise linear representation shown in the right-hand figure.
While it is yet difficult for existing piece-wise linear methods to capture global structures^4, our convex
formulation makes it possible for the global and local residual model to easily capture the global
XOR structure.
[Figure 2 weight annotations recovered from the image: a₀ = (−4.37, 0.0, · · ·), a₁ = (10.96, 0.0, · · ·);
the combined weight above the partition is a₀ + a₁ = (6.59, · · ·).]
Figure 2: How the global and local residual model classifies XOR data. The red line indicates the effective partition;
green lines indicate local predictors; red circles indicate samples with y = −1; blue circles indicate samples with y = 1. This model classified XOR data precisely.
5.2 Comparisons using Benchmark Datasets
We next used benchmark datasets to compare our models with other state-of-the-art region-specific
ones. In these experiments, we simply generated partition candidates (activeness functions) as follows. For continuous-valued features, we calculated all 5-quantiles for each feature and generated partitions at each quantile point. Partitions became active if a feature value was greater than the corresponding
quantile value. For binary categorical features, we generated two partitions, in which one became active
when the feature was 1 (yes) and the other became active only when the feature value was 0 (no).
We utilized several standard benchmark datasets from UCI datasets (skin, winequality, census income, twitter, internet ad, energy heat, energy cool, communities), libsvm datasets (a1a, breast cancer), and LIACC datasets (abalone,
kinematics, puma8NH, bank8FM). Table 2 summarizes specifications for each dataset.

Table 2: Classification and regression datasets. N is the size of data. D is the number of dimensions. P is the number of partitions. CL/RG denotes the type of dataset (CL: Classification / RG: Regression).

dataset         N        D      P      CL/RG
skin            245,057  3      12     CL
winequality     6,497    11     44     CL
census income   45,222   105    99     CL
twitter         140,707  11     44     CL
a1a             1,605    113    452    CL
breast-cancer   683      10     40     CL
internet ad     2,359    1,559  1,558  CL
energy heat     768      8      32     RG
energy cool     768      8      32     RG
abalone         4,177    10     40     RG
kinematics      8,192    8      32     RG
puma8NH         8,192    8      32     RG
bank8FM         8,192    8      32     RG
communities     1,994    101    404    RG
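The partition-candidate construction above can be sketched as follows (our own illustration, with hypothetical function names; for a continuous feature, the four inner 5-quantile points give P = 4D candidates, matching, e.g., skin with D = 3 and P = 12):

import numpy as np

def quantile_partitions(X):
    # one activeness function per (feature, inner 5-quantile point)
    thresholds = []
    for d in range(X.shape[1]):
        for q in np.quantile(X[:, d], [0.2, 0.4, 0.6, 0.8]):
            thresholds.append((d, q))
    return thresholds

def activeness(X, thresholds):
    # binary matrix: sample n lies in partition p iff x_{n,d} > q_p
    return np.column_stack([X[:, d] > q for d, q in thresholds]).astype(int)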
5.2.1 Classification
For classification, we compared the global and local residual model (Global/Local) with L1-regularized
logistic regression (Linear), LSL-SP with linear discriminant analysis^5, LDKL supported by L2-regularized hinge loss^6, FaLK-SVM with linear kernels^7, and C-SVM with an RBF kernel^8. Note that
C-SVM is neither a region-specific nor a locally linear classification model; it is, rather, non-linear.
We compared it with ours as a reference with respect to a common non-linear classification model.
^3 We conducted several experiments on other hyper-parameter settings and confirmed that variations in hyper-parameter settings did not significantly affect results.
^4 For example, a decision tree cannot be used to find a "true" XOR structure, since the marginal distributions on the first and second features cannot discriminate between positive and negative classes.
^5 The source code is provided by the author of [3].
^6 https://research.microsoft.com/en-us/um/people/manik/code/LDKL/download.html
^7 http://disi.unitn.it/~segata/FaLKM-lib/
^8 We used a libsvm package. http://www.csie.ntu.edu.tw/~cjlin/libsvm/
Table 3: Classification results: error rate (standard deviation). The best performance figure in each
dataset is denoted in bold typeface and the second best is denoted in bold italic.

               Linear          Global/Local    LSL-SP          LDKL             FaLK-SVM        RBF-SVM
skin           8.900 (0.174)   0.249 (0.048)   12.481 (8.729)  1.858 (1.012)    0.040 (0.016)   0.229 (0.029)
winequality    33.667 (1.988)  23.713 (1.202)  30.878 (1.783)  36.795 (3.198)   28.706 (1.298)  23.898 (1.744)
census income  43.972 (0.404)  35.697 (0.453)  35.405 (1.179)  47.229 (2.053)   –               45.843 (0.772)
twitter        6.964 (0.164)   4.231 (0.090)   8.370 (0.245)   15.557 (11.393)  4.135 (0.149)   9.109 (0.160)
a1a            16.563 (2.916)  16.250 (2.219)  20.438 (2.717)  17.063 (1.855)   18.125 (1.398)  16.500 (1.346)
breast-cancer  35.000 (4.402)  3.529 (1.883)   3.677 (2.110)   35.000 (4.402)   –               33.824 (4.313)
internet ad    7.319 (1.302)   2.638 (1.003)   6.383 (1.118)   13.064 (3.601)   3.362 (0.997)   3.447 (0.772)
Table 4: Regression results: root mean squared loss (standard deviation). The best performance
figure in each dataset is denoted in bold typeface and the second best is denoted in bold italic.

             Linear         Global/Local   RegTree        RBF-SVR
energy heat  0.480 (0.047)  0.101 (0.014)  0.050 (0.005)  0.219 (0.017)
energy cool  0.501 (0.044)  0.175 (0.018)  0.200 (0.018)  0.221 (0.026)
abalone      0.687 (0.024)  0.659 (0.023)  0.727 (0.028)  0.713 (0.025)
kinematics   0.766 (0.019)  0.634 (0.022)  0.732 (0.031)  0.347 (0.010)
puma8NH      0.793 (0.023)  0.601 (0.017)  0.612 (0.024)  0.571 (0.020)
bank8FM      0.255 (0.012)  0.218 (0.009)  0.254 (0.008)  0.202 (0.007)
communities  0.586 (0.049)  0.578 (0.040)  0.653 (0.060)  0.618 (0.053)
For our models, we used logistic functions for the loss functions. The maximum iteration number was set to
1000, and the algorithm stopped early when the gap in the empirical loss from the previous iteration
became lower than 10^{-9} in 10 consecutive iterations. Hyper-parameters^9 were optimized through
10-fold cross validation. We fixed the number of regions to 10 in LSL-SP, the tree depth to 3 in LDKL,
and the neighborhood size to 100 in FaLK-SVM.
Table 3 summarizes the classification errors. We observed that 1) Global/Local consistently performed
well and achieved the best error rates for four datasets out of seven. 2) LSL-SP performed well for
census income and breast-cancer, but did significantly worse than Linear for skin, twitter, and a1a.
Similarly, LDKL performed worse than Linear for census income, twitter, a1a and internet ad. This
arose partly because of overfitting and partly because of bad local minima. Particularly noteworthy
is that the standard deviations for LDKL were much larger than for the others, so the initialization
issue would seem to become significant in practice. 3) FaLK-SVM performed well in most cases,
but its computational cost was significantly higher than that of the others, and it was unable to obtain
results for census income and internet ad (we stopped the algorithm after 24 hours of running).
5.2.2 Regression
For regression, we compared Global/Local with Linear, a regression tree^10 built by CART (RegTree) [1],
and epsilon-SVR with an RBF kernel^11. Target variables were standardized so that their mean was
0 and their variance was 1. Performance was evaluated using the root mean squared loss on the
test data. The tree depth of RegTree and the hyper-parameters of RBF-SVR were determined by means of 10-fold cross
validation. Other experimental settings were the same as those used in the classification tasks.
Table 4 summarizes the RMSE values. As in the classification tasks, Global/Local consistently performed
well. For kinematics, RBF-SVR performed much better than Global/Local, but Global/Local
was better than Linear and RegTree on many other datasets.
6 Conclusion
We have proposed here a novel convex formulation of region-specific linear models that we refer
to as partition-wise linear models. Our approach simultaneously optimizes regions and predictors
using sparsity-inducing structured penalties. For the purpose of efficiently solving the optimization
problem, we have derived an efficient algorithm based on the decomposition of proximal maps.
Thanks to its convexity, our method is free from initialization dependency, and a generalization
error bound can be derived. Empirical results demonstrate the superiority of partition-wise linear
models over other region-specific and locally linear models.
Acknowledgments
The majority of the work was done during the internship of the first author at the NEC central
research laboratories.
^9 λ₁, λ₂,p in Global/Local, λ₁ in Linear, λ_W, λ_θ, λ_θ′, σ in LDKL, C in FaLK-SVM, and C, γ in RBF-SVM.
^10 We used a scikit-learn package. http://scikit-learn.org/
^11 We used a libsvm package.
References
[1] Leo Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone. Classification and Regression Trees. Wadsworth, 1984.
[2] Cijo Jose, Prasoon Goyal, Parv Aggrwal, and Manik Varma. Local deep kernel learning for efficient non-linear svm prediction. In ICML, pages 486–494, 2013.
[3] Joseph Wang and Venkatesh Saligrama. Local supervised learning through space partitioning. In NIPS, pages 91–99, 2012.
[4] Zhixiang Xu, Matt Kusner, Minmin Chen, and Kilian Q. Weinberger. Cost-Sensitive Tree of Classifiers. In ICML, pages 133–141, 2013.
[5] Paul Tseng. Approximation accuracy, gradient methods, and error bound for structured convex optimization. Mathematical Programming, 125(2):263–295, 2010.
[6] Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[7] Yaoliang Yu. On decomposing the proximal map. In NIPS, pages 91–99, 2013.
[8] Michael I. Jordan and Robert A. Jacobs. Hierarchical mixtures of experts and the EM algorithm. Neural Computation, 6(2):181–214, 1994.
[9] Nicola Segata and Enrico Blanzieri. Fast and scalable local kernel machines. Journal of Machine Learning Research, 11:1883–1926, 2010.
[10] Lubor Ladicky and Philip H.S. Torr. Locally Linear Support Vector Machines. In ICML, pages 985–992, 2011.
[11] Kai Yu, Tong Zhang, and Yihong Gong. Nonlinear learning using local coordinate coding. In NIPS, pages 2223–2231, 2009.
[12] Ziming Zhang, Lubor Ladicky, Philip H.S. Torr, and Amir Saffari. Learning anchor planes for classification. In NIPS, pages 1611–1619, 2011.
[13] Quanquan Gu and Jiawei Han. Clustered support vector machines. In AISTATS, pages 307–315, 2013.
[14] Hidekazu Oiwa and Ryohei Fujimaki. Partition-wise linear models. CoRR, 2014.
[15] Yurii Nesterov. Gradient methods for minimizing composite objective function. CORE discussion papers, 2007.
[16] Francis R. Bach. Structured sparsity-inducing norms through submodular functions. In NIPS, pages 118–126, 2010.
[17] Giorgio Gallo, Michael D. Grigoriadis, and Robert E. Tarjan. A fast parametric maximum flow algorithm and applications. SIAM Journal on Computing, 18(1):30–55, 1989.
[18] Kiyohito Nagano and Yoshinobu Kawahara. Structured convex optimization under submodular constraints. In UAI, 2013.
[19] John Duchi and Yoram Singer. Efficient online and batch learning using forward backward splitting. Journal of Machine Learning Research, 10:2899–2934, 2009.
[20] Andreas Maurer and Massimiliano Pontil. Structured sparsity and generalization. Journal of Machine Learning Research, 13:671–690, 2012.
[21] Peter L. Bartlett and Shahar Mendelson. Rademacher and gaussian complexities: risk bounds and structural results. Journal of Machine Learning Research, 3:463–482, 2002.
| 5605 |@word version:5 norm:5 d2:1 decomposition:9 jacob:1 initial:1 selecting:1 ours:6 existing:1 current:1 com:3 ka:3 gmail:1 assigning:1 written:2 yet:1 john:1 partition:76 enables:2 minmin:1 designed:1 interpretable:2 update:2 discrimination:1 parameterization:1 amir:2 plane:1 core:1 provides:1 node:1 hyperplanes:1 org:1 simpler:1 zhang:2 mathematical:1 constructed:1 ryohei:2 become:2 differential:1 combine:1 fitting:3 introduce:1 inter:1 expected:1 p1:3 nor:1 globally:2 decomposed:3 little:1 precursor:1 inappropriate:1 lib:1 becomes:1 provided:3 classifies:1 notation:2 linearity:1 bounded:1 argmin:4 developer:1 developed:4 guarantee:1 disi:1 every:2 act:1 xd:2 um:1 classifier:3 k2:1 partitioning:5 utilization:1 originates:1 yn:9 superiority:1 positive:1 giorgio:1 local:34 specifier:7 ap:13 noteworthy:1 initialization:7 fastest:1 graduate:1 range:1 practical:3 acknowledgment:1 practice:3 goyal:1 procedure:3 pontil:1 intersect:1 empirical:7 llsvm:1 significantly:4 composite:1 projection:2 pre:1 word:1 svr:4 cannot:3 selection:5 operator:3 risk:3 optimize:3 equivalent:1 map:14 demonstrated:1 www:1 regardless:1 convex:31 formulate:2 splitting:1 assigns:1 m2:3 rule:4 utilizing:1 importantly:1 regularize:1 varma:1 handle:1 coordinate:3 variation:1 updated:1 target:2 lsl:8 suppose:1 programming:1 distinguishing:1 element:2 trend:2 particularly:1 utilized:1 xnd:1 observed:1 csie:1 solved:2 capture:3 worst:1 wang:1 region:75 kilian:1 cstc:2 trade:1 decrease:1 convexity:7 complexity:9 constrains:1 nesterov:1 trained:1 weakly:1 solving:3 depend:1 predictive:3 efficiency:3 basis:2 liacc:1 gu:1 easily:6 joint:1 k0:1 represented:2 america:2 derivation:1 leo:1 heat:3 fast:5 effective:7 massimiliano:1 hyper:3 neighborhood:2 kawahara:1 widely:1 solve:1 valued:1 larger:1 tightness:1 otherwise:2 kai:1 ability:1 online:1 obviously:1 advantage:3 sequence:2 propose:4 p4:1 saligrama:1 uci:1 combining:1 nagano:1 flexibility:1 achieve:2 inducing:7 frobenius:1 convergence:4 cluster:3 optimum:1 rademacher:5 produce:1 converges:1 help:1 derive:6 depending:1 tions:1 gong:1 nearest:1 school:1 p2:1 entirety:1 cool:3 indicate:4 implies:1 closely:1 tokyo:1 vc:1 sgn:1 saffari:1 explains:1 assign:2 f1:5 generalization:14 clustered:2 decompose:1 ntu:1 a1a:5 hold:1 sufficiently:1 m0:3 vary:1 adopt:1 a2:2 achieves:1 early:1 purpose:3 uniqueness:1 estimation:1 consecutive:1 applicable:1 label:1 sensitive:2 quanquan:1 tool:1 minimization:1 clearly:1 gaussian:1 always:1 aim:2 rather:1 arose:1 avoid:2 lubor:2 shrinkage:2 breiman:1 l0:1 derived:6 consistently:2 indicates:5 contrast:1 twitter:5 entire:1 typically:1 a0:7 yaoliang:1 jiawei:1 kc:1 issue:1 overall:1 among:2 classification:15 html:1 denoted:9 winequality:3 proposes:2 art:6 special:1 integration:1 wadsworth:1 marginal:1 construct:2 prasoon:1 rfujimaki:1 nearly:1 icml:3 yu:2 others:4 t2:1 fundamentally:1 employ:2 activeness:15 simultaneously:2 falk:8 individual:8 beck:1 microsoft:1 friedman:1 highly:2 evaluation:3 fujimaki:2 mixture:2 regularizers:1 chain:1 beforehand:1 tree:11 divide:3 maurer:1 bk2f:2 circle:2 theoretical:2 stopped:2 instance:1 modeling:1 teboulle:1 cover:1 disadvantage:1 cost:7 introducing:2 deviation:3 size2:1 predictor:16 uniform:2 conducted:3 characterize:1 reported:1 straightforwardly:1 dependency:3 proximal:27 synthetic:3 thanks:1 fundamental:2 siam:2 probabilistic:1 michael:2 squared:5 central:1 satisfied:1 priority:1 worse:2 expert:2 return:1 account:1 sec:1 bold:4 coding:1 depends:1 ad:5 piece:2 performed:7 manik:2 root:2 lab:1 closed:1 
analyze:1 francis:1 red:5 competitive:2 start:1 parallel:1 complicated:1 rmse:1 accuracy:1 xor:5 became:5 characteristic:1 efficiently:2 variance:1 yes:1 produced:3 worth:1 researcher:1 confirmed:2 classified:1 simultaneous:1 suffers:1 sharing:1 energy:6 internship:1 pp:1 proof:2 sampled:2 dataset:7 improves:1 higher:2 supervised:2 specify:1 formulation:7 evaluated:2 done:1 strongly:2 typeface:2 furthermore:1 zhixiang:1 hand:5 nonlinear:1 scikit:2 lack:1 overlapping:1 continuity:1 defines:2 logistic:5 matt:1 concept:2 true:1 former:2 regularization:9 assigned:2 laboratory:3 iteratively:1 during:1 width:1 uniquely:1 abalone:3 stone:1 demonstrate:2 duchi:1 l1:7 wise:34 novel:3 recently:1 common:2 at0:1 attached:3 belong:1 adp:5 m1:2 refer:3 significant:2 rd:1 similarly:2 submodular:2 had:1 specification:2 f0:4 han:1 add:1 own:1 perspective:1 optimizing:3 belongs:1 optimizes:1 gallo:1 inequality:1 binary:2 shahar:1 seen:1 minimum:3 greater:2 employed:1 surely:1 determine:2 dashed:1 multiple:4 full:5 faster:1 calculation:1 cross:2 bach:1 divided:2 a1:11 prediction:6 scalable:1 regression:14 breast:4 essentially:1 iteration:5 represent:5 kernel:8 achieved:1 addition:3 background:1 enrico:1 source:1 appropriately:1 envelope:2 regional:3 posse:1 cart:1 flow:1 seem:1 jordan:1 structural:1 noting:1 intermediate:1 enough:1 relaxes:1 affect:2 restrict:1 lasso:1 reduce:1 idea:2 andreas:1 yihong:1 whether:2 bartlett:1 sus:1 penalty:3 peter:1 deep:2 clear:1 locally:15 svms:6 category:1 http:4 specifies:1 exist:1 restricts:3 kap:4 sign:1 arising:1 puma8nh:3 blue:1 discrete:1 group:6 key:3 four:1 neither:1 libsvm:4 backward:1 imaging:1 graph:1 relaxation:3 run:1 package:3 jose:1 inverse:1 almost:1 analysis5:1 utilizes:2 p3:1 decision:3 summarizes:4 prefer:1 bound:21 internet:5 breakpoints:1 datum:2 fold:2 constraint:9 precisely:1 ladicky:2 bp:2 grigoriadis:1 speed:4 min:5 performing:1 structured:10 according:1 combination:5 ball:1 smaller:1 em:1 kusner:1 tw:1 joseph:1 census:5 ln:2 remains:1 discus:2 kinematics:4 cjlin:1 singer:1 yurii:1 adopted:1 decomposing:1 tightest:2 rewritten:1 competitively:1 apply:2 hierarchical:2 enforce:1 appropriate:1 batch:1 weinberger:1 original:2 assumes:1 denotes:1 running:1 standardized:1 a4:2 maintaining:1 hinge:2 calculating:1 yoram:1 quantile:2 epsilon:1 nicola:1 objective:4 skin:4 already:1 added:1 parametric:1 traditional:2 italic:2 visiting:1 exhibit:1 gradient:6 dp:3 unable:1 majority:1 philip:2 seven:1 manifold:2 tseng:1 code:2 relationship:2 reformulate:1 providing:1 demonstration:1 minimizing:1 difficult:1 olshen:1 robert:2 potentially:1 negative:1 implementation:1 perform:4 upper:3 datasets:10 benchmark:4 situation:1 incorporated:1 locate:1 tarjan:1 community:2 download:1 venkatesh:1 optimized:2 learned:1 hour:1 nip:5 able:3 below:2 fp:15 sparsity:8 challenge:2 interpretability:4 including:2 reliable:1 max:2 green:1 natural:1 difficulty:1 warm:1 regularized:2 indicator:2 solvable:3 residual:11 advanced:3 representing:1 technology:1 categorical:1 understanding:1 l2:1 kf:1 loss:15 expect:1 ziming:1 validation:2 sufficient:1 principle:1 thresholding:2 bdp:2 surrounded:1 cancer:4 summary:2 supported:1 free:2 side:4 allow:1 ber:1 neighbor:1 taking:1 sparse:4 moreau:1 dimension:3 xn:10 kck1:1 calculated:1 depth:2 author:3 adopts:1 forward:1 preprocessing:1 income:6 preferred:1 global:30 active:7 uai:1 anchor:3 xi:1 alternatively:1 continuous:2 iterative:3 table:8 additionally:1 kiyohito:1 nature:2 learn:4 yoshinobu:1 improving:1 cl:10 marc:1 sp:8 did:2 main:1 dense:1 
aistats:1 whole:1 motivation:2 noise:1 paul:1 xu:1 referred:1 en:2 quantiles:1 tong:1 localizer:1 sub:5 position:1 bank8fm:3 exponential:1 candidate:9 third:1 theorem:5 formula:1 bad:2 specific:32 svm:11 a3:4 mendelson:1 ldkl:11 corr:1 nec:4 illustrates:3 conditioned:1 sparser:1 gap:1 chen:1 rg:10 backtracking:1 simply:2 kxk:1 scalar:1 applies:1 corresponds:1 satisfies:1 parameters3:1 acceleration:1 rbf:7 lipschitz:2 feasible:2 change:2 hard:1 fista:3 determined:4 specifically:1 uniformly:1 except:1 torr:2 lemma:4 discriminate:1 partly:2 experimental:3 meaningful:1 formally:2 support:4 people:1 latter:2 arises:2 unitn:1 evaluate:1 bkf:1 |
5,088 | 5,606 | Learning Shuffle Ideals
Under Restricted Distributions
Dongqu Chen
Department of Computer Science
Yale University
[email protected]
Abstract
The class of shuffle ideals is a fundamental sub-family of regular languages. The
shuffle ideal generated by a string set U is the collection of all strings containing
some string u ? U as a (not necessarily contiguous) subsequence. In spite of
its apparent simplicity, the problem of learning a shuffle ideal from given data is
known to be computationally intractable. In this paper, we study the PAC learnability of shuffle ideals and present positive results on this learning problem under
element-wise independent and identical distributions and Markovian distributions
in the statistical query model. A constrained generalization to learning shuffle
ideals under product distributions is also provided. In the empirical direction, we
propose a heuristic algorithm for learning shuffle ideals from given labeled strings
under general unrestricted distributions. Experiments demonstrate the advantage
for both efficiency and accuracy of our algorithm.
1 Introduction
The learnability of regular languages is a classic topic in computational learning theory. The applications of this learning problem include natural language processing (speech recognition, morphological analysis), computational linguistics, robotics and control systems, computational biology
(phylogeny, structural pattern recognition), data mining, time series and music ([7, 14–18, 20, 21]).
Exploring the learnability of the family of formal languages is significant to both theoretical and
applied realms.
Valiant?s PAC learning model introduces a clean and elegant framework for mathematical analysis
of machine learning and is one of the most widely-studied theoretical learning models ([22]). In the
PAC learning model, unfortunately, the class of regular languages, or equivalently the concept class
of deterministic finite automata (DFA), is known to be inherently unpredictable ([1, 9, 19]). In a
modified version of Valiant?s model which allows the learner to make membership queries, Angluin
[2] has shown that the concept class of regular languages is PAC learnable.
Throughout this paper we study the PAC learnability of a subclass of regular languages, the class of
(extended) shuffle ideals. The shuffle ideal generated by an augmented string U is the collection of all
strings containing some u ? U as a (not necessarily contiguous) subsequence, where an augmented
string is a finite concatenation of symbol sets (see Figure 1 for an illustration). The special class
of shuffle ideals generated by a single string is called the principal shuffle ideals. Unfortunately,
even such a simple class is not PAC learnable, unless RP=NP ([3]). However, in most application
scenarios, the strings are drawn from some particular distribution we are interested in. Angluin
et al. [3] prove under the uniform string distribution, principal shuffle ideals are PAC learnable.
Nevertheless, the requirement of complete knowledge of the distribution, the dependence on the
symmetry of the uniform distribution and the restriction of principal shuffle ideals lead to the lack
of generality of the algorithm. Our main contribution in this paper is to present positive results
Figure 1: The DFA accepting precisely the shuffle ideal of U = (a|b|d)a(b|c) over Σ = {a, b, c, d}.
on learning the class of shuffle ideals under element-wise independent and identical distributions
and Markovian distributions. Extensions of our main results include a constrained generalization
to learning shuffle ideals under product distributions and a heuristic method for learning principal
shuffle ideals under general unrestricted distributions.
After introducing the preliminaries in Section 2, we present our main result in Section 3: the extended class of shuffle ideals is PAC learnable from element-wise i.i.d. strings. That is, the distributions of the symbols in a string are identical and independent of each other. A constrained
generalization to learning shuffle ideals under product distributions is also provided. In Section 4,
we further show the PAC learnability of principal shuffle ideals when the example strings drawn
from Σ^{≤n} are generated by a Markov chain with some lower bound assumptions on the transition
matrix. In Section 5, we propose a greedy algorithm for learning principal shuffle ideals under
general unrestricted distributions. Experiments demonstrate the advantage for both efficiency and
accuracy of our heuristic algorithm.
2 Preliminaries
We consider strings over a fixed finite alphabet Σ. The empty string is λ. Let Σ* be the Kleene
star of Σ and 2^Σ be the collection of all subsets of Σ. As strings are concatenations of symbols, we
similarly define augmented strings as concatenations of unions of symbols.
Definition 1 (Alphabet, simple string and augmented string) Let Σ be a non-empty finite set of
symbols, called the alphabet. A simple string over Σ is any finite sequence of symbols from Σ, and
Σ* is the collection of all simple strings. An augmented string over Σ is any finite concatenation of
symbol sets from 2^Σ, and (2^Σ)* is the collection of all augmented strings.
Denote by s the cardinality of Σ. Because an augmented string only contains strings of the same
length, the length of an augmented string U, denoted by |U|, is the length of any u ∈ U. We use
exponential notation for repeated concatenation of a string with itself; that is, v^k is the concatenation
of k copies of string v. Starting from index 1, we denote by v_i the i-th symbol in string v and use the
notation v[i, j] = v_i . . . v_j for 1 ≤ i ≤ j ≤ |v|. Define the binary relation ⊑ on ⟨(2^Σ)*, Σ*⟩ as
follows. For a simple string w, w ⊑ v holds if and only if there is a witness ~i = (i₁ < i₂ < . . . <
i_{|w|}) such that v_{i_j} = w_j for all integers 1 ≤ j ≤ |w|. For an augmented string W, W ⊑ v if and only
if there exists some w ∈ W such that w ⊑ v. When there are several witnesses for W ⊑ v, we may
order them coordinate-wise, referring to the unique minimal element as the leftmost embedding. We
will write I_{W⊑v} to denote the position of the last symbol of W in its leftmost embedding in v (if the
latter exists; otherwise, I_{W⊑v} = ∞).
Definition 2 (Extended/Principal Shuffle Ideal) The (extended) shuffle ideal of an augmented
string U ∈ (2^Σ)^L is a regular language defined as X(U) = {v ∈ Σ* | ∃u ∈ U, u ⊑ v} =
Σ* U₁ Σ* U₂ Σ* . . . Σ* U_L Σ*. A shuffle ideal is principal if it is generated by a simple string.
A shuffle ideal is an ideal in order theory and was originally defined for lattices. Denote by Ш the
class of principal shuffle ideals and by X the class of extended shuffle ideals. Unless otherwise
stated, in this paper shuffle ideal refers to the extended ideal. An example is given in Figure 1. The
feasibility of determining whether a string is in the class X(U) is obvious.
Lemma 1 Evaluating the relation U ⊑ x and meanwhile determining I_{U⊑x} is feasible in time O(|x|).
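The lemma follows from a single left-to-right scan. A sketch of ours for a principal pattern u (for an augmented pattern, replace the equality test by set membership):

def leftmost_embedding(u, x):
    # returns I_{u sqsubseteq x}: 1-based position in x of the last symbol
    # of u in its leftmost embedding, or None if u is not a subsequence of x
    j = 0
    for i, symbol in enumerate(x, start=1):
        if j < len(u) and symbol == u[j]:
            j += 1
            if j == len(u):
                return i
    return None  # one pass over x: O(|x|) time

assert leftmost_embedding("ab", "acb") == 3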
In a computational learning model, an algorithm is usually given access to an oracle providing
information about the sample. In Valiant's work [22], the example oracle EX(c, D) was defined,
where c is the target concept and D is a distribution over the instance space. On each call, EX(c, D)
draws an input x independently at random from the instance space I under the distribution D, and
returns the labeled example ⟨x, c(x)⟩.
Definition 3 (PAC Learnability: [22]) Let C be a concept class over the instance space I. We
say C is probably approximately correctly (PAC) learnable if there exists an algorithm A with the
following property: for every concept c ∈ C, for every distribution D on I, and for all 0 < ε < 1/2
and 0 < δ < 1/2, if A is given access to EX(c, D) on I and inputs ε and δ, then with probability
at least 1 − δ, A outputs a hypothesis h ∈ H satisfying Pr_{x∼D}[c(x) ≠ h(x)] ≤ ε. If A runs in time
polynomial in 1/ε, 1/δ and the representation size of c, we say that C is efficiently PAC learnable.
We refer to ε as the error parameter and δ as the confidence parameter. If the error parameter
is set to 0, the learning is exact ([6]). Kearns [11] extended Valiant's model and introduced the
statistical query oracle STAT(c, D). Kearns' oracle takes as input a statistical query of the form
(χ, τ). Here χ is any mapping of a labeled example to {0, 1} and τ ∈ [0, 1] is called the noise
tolerance. STAT(c, D) returns an estimate for the expectation E[χ], that is, the probability that χ = 1
when the labeled example is drawn according to D. A statistical query can have a condition, so E[χ]
can be a conditional probability. This estimate is accurate within additive error τ.
Definition 4 (Legitimacy and Feasibility: [11]) A statistical query χ is legitimate and feasible if
and only if, with respect to 1/ε, 1/δ and the representation size of c:
1. Query χ maps a labeled example ⟨x, c(x)⟩ to {0, 1};
2. Query χ can be efficiently evaluated in polynomial time;
3. The condition of χ, if any, can be efficiently evaluated in polynomial time;
4. The probability of the condition of χ, if any, should be at least polynomially large.
Throughout this paper, the learnability of shuffle ideals is studied in the statistical query model.
Kearns [11] proves that oracle STAT(c, D) is weaker than oracle EX(c, D). In words, if a concept
class is PAC learnable from STAT(c, D), then it is PAC learnable from EX(c, D), but not necessarily
vice versa.
3 Learning shuffle ideals from element-wise i.i.d. strings
Although learning the class of shuffle ideals has been proved hard, in most scenarios the string
distribution is restricted or even known. A very usual situation in practice is that we have some prior
knowledge of the unknown distribution. One common example is the string distributions where each
symbol in a string is generated independently and identically from an unknown distribution. It is
element-wise i.i.d. because we view a string as a vector of symbols. This case is general enough to
cover some popular distributions in applications such as the uniform distribution and the multinomial
distribution. In this section, we present as our main result a statistical query algorithm for learning
the concept class of extended shuffle ideals from element-wise i.i.d. strings and provide theoretical
guarantees of its computational efficiency and accuracy in the statistical query model. The instance
space is Σⁿ. Denote by U the augmented pattern string that generates the target shuffle ideal and by
L = |U| the length of U.
3.1 Statistical query algorithm
Before presenting the algorithm, we define the function σ_{V,a}(·) and the query χ_{V,a}(·, ·) for any augmented
string V ∈ (2^Σ)^{≤n} and any symbol a ∈ Σ as follows:

    σ_{V,a}(x) = a                    if V ⋢ x[1, n − 1]
    σ_{V,a}(x) = x_{I_{V⊑x}+1}        if V ⊑ x[1, n − 1]

    χ_{V,a}(x, y) = (1/2)(y + 1)      given σ_{V,a}(x) = a
where y = c(x) is the label of example string x. More precisely, y = +1 if x ∈ X(U) and
y = −1 otherwise. Our learning algorithm uses statistical queries to recover the string U ∈ (2^Σ)^L one
element at a time. It starts with the empty string V = λ. Having recovered V = U[1, ℓ] where
0 ≤ ℓ < L, we infer U_{ℓ+1} as follows. For each a ∈ Σ, the statistical query oracle is called with
the query χ_{V,a} at the error tolerance τ claimed in Theorem 1. Our key technical observation is
that the value of E[χ_{V,a}] effectively selects U_{ℓ+1}. The query results of χ_{V,a} will form two separate
clusters such that the maximum difference (variance) inside one cluster is smaller than the minimum
difference (gap) between the two clusters, making them distinguishable. The set of symbols in the
cluster with the larger query results is proved to be U_{ℓ+1}. Notice that this statistical query only works
for 0 ≤ ℓ < L. To complete the algorithm, the algorithm addresses the trivial case ℓ = L with the query
Pr[y = +1 | V ⊑ x] and halts if the query answer is close to 1.
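In outline, the recovery loop can be rendered as below (our own sketch, with hypothetical names: stat_query(V, a) stands for the oracle answer to χ_{V,a} within tolerance τ, stat_query(V, None) for the answer to Pr[y = +1 | V ⊑ x], and the two clusters are split at the midpoint between the extreme answers, which the paper's analysis guarantees to be safe):

def recover_pattern(sigma, n, stat_query, tau):
    V = []  # recovered prefix U[1, l], one symbol set per position
    for _ in range(n):
        if stat_query(V, None) >= 1.0 - 2.0 * tau:
            break  # trivial case l = L: the whole pattern is recovered
        answers = {a: stat_query(V, a) for a in sigma}
        split = (max(answers.values()) + min(answers.values())) / 2.0
        V.append({a for a, v in answers.items() if v > split})
    return V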
3.2 PAC learnability of ideal X
We show that the algorithm described above learns the class of shuffle ideals from element-wise i.i.d.
strings in the statistical query learning model.
Theorem 1 Under element-wise independent and identical distributions over instance space I =
Σⁿ, concept class X is approximately identifiable with O(sn) conditional statistical queries from
STAT(X, D) at tolerance

    τ = ε² / (40sn² + 4ε)

or with O(sn) statistical queries from STAT(X, D) at tolerance

    τ′ = (1 − ε²/(20sn² + 2ε)) · ε⁴ / (16sn(10sn² + ε))
We provide the main idea of the proofs in this section and defer the details and algebra to Appendix
A. The proof starts from the legitimacy and feasibility of the algorithm. Since χ_{V,a} computes a
binary mapping from labeled examples to {0, 1}, the legitimacy is trivial. But χ_{V,a} is not feasible
for symbols in Σ of small occurrence probabilities. We avoid the problematic cases by reducing the
original learning problem to the same problem with a polynomial lower bound assumption Pr[x_i =
a] ≥ ε/(2sn) − ε²/(20sn² + 2ε) for any a ∈ Σ, and thereby achieve feasibility.
The correctness of the algorithm is based on the intuition that the query result E[χ_{V,a⁺}] of a symbol
a⁺ ∈ U_{ℓ+1} should be greater than that of a symbol a⁻ ∉ U_{ℓ+1}, and the difference is large enough
to tolerate the noise from the oracle. To prove this, we first consider the exact learning case. Define
an infinite string U′ = U[1, ℓ] U[ℓ+2, L] U_{ℓ+1}^∞ and let x′ = xξ be the extension of x obtained by
padding it on the right with an infinite string ξ generated from the same distribution as x. Let Q(j, i)
be the probability that the largest g such that U′[1, g] ⊑ x′[1, i] is j, or formally

    Q(j, i) = Pr[U′[1, j] ⊑ x′[1, i] ∧ U′[1, j + 1] ⋢ x′[1, i]]
By taking the difference between E[χ_{V,a⁺}] and E[χ_{V,a⁻}] in terms of Q(j, i), we get the query tolerance
for exact learning.
Lemma 2 Under element-wise independent and identical distributions over instance space I =
Σⁿ, concept class X is exactly identifiable with O(sn) conditional statistical queries from
STAT(X, D) at tolerance

    τ₀ = (1/5) Q(L − 1, n − 1)
Lemma 2 indicates that bounding the quantity Q(L − 1, n − 1) is the key to the tolerance for PAC
learning. Unfortunately, the distribution {Q(j, i)} doesn't seem to have any strong properties we
know of that provide a polynomial lower bound. Instead we introduce the new quantity

    R(j, i) = Pr[U′[1, j] ⊑ x′[1, i] ∧ U′[1, j] ⋢ x′[1, i − 1]]

being the probability that the smallest g such that U′[1, j] ⊑ x′[1, g] is i. An important property of
the distribution {R(j, i)} is its strong unimodality, as defined below.
Definition 5 (Unimodality: [8]) A distribution {P(i)} with all support on the lattice of integers is
unimodal if and only if there exists at least one integer K such that P(i) ≥ P(i − 1) for all i ≤ K
and P(i + 1) ≤ P(i) for all i ≥ K. We say K is a mode of the distribution {P(i)}.
Throughout this paper, when referring to the mode of a distribution, we mean the one with the largest
index, if the distribution has multiple modes with equal probabilities.
Definition 6 (Strong Unimodality: [10]) A distribution {H(i)} is strongly unimodal if and only if
the convolution of {H(i)} with any unimodal distribution {P(i)} is unimodal.
Since a distribution with all mass at zero is unimodal, a strongly unimodal distribution is also unimodal. In this paper, we only consider distributions with all support on the lattice of integers. So the
convolution of {H(i)} and {P(i)} is

    {H ∗ P}(i) = Σ_{j=−∞}^{∞} H(j) P(i − j) = Σ_{j=−∞}^{∞} H(i − j) P(j)
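For intuition, one can verify numerically that convolving a log-concave (hence strongly unimodal) distribution with a unimodal one yields a unimodal result (our own illustration with a binomial pmf; the specific parameters are arbitrary):

import numpy as np
from math import comb

H = np.array([comb(10, k) * 0.3**k * 0.7**(10 - k) for k in range(11)])  # log-concave
P = np.array([0.1, 0.2, 0.4, 0.2, 0.1])                                 # unimodal
C = np.convolve(H, P)
mode = int(np.argmax(C))
diffs = np.diff(C)
# nondecreasing up to the mode, nonincreasing afterwards
assert (diffs[:mode] >= -1e-12).all() and (diffs[mode:] <= 1e-12).all()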
We prove the strong unimodality of {R(j, i)} with respect to i by showing, via induction, that it is the
convolution of two log-concave distributions. We make an initial statistical query to estimate Pr[y = +1]
to handle the two marginal cases Pr[y = +1] ≤ ε/2 and Pr[y = +1] ≥ 1 − ε/2. After that, an additional
query Pr[y = +1 | V ⊑ x] is made to tell whether ℓ = L. If the algorithm doesn't halt, it means
ℓ < L and both Pr[y = +1] and Pr[y = −1] are at least ε/2 − 2τ. By upper bounding Pr[y = +1]
and Pr[y = −1] using linear sums of R(j, i), the strong unimodality of {R(j, i)} gives a lower
bound for R(L, n), which further implies one for Q(L − 1, n − 1) and completes the proof.
3.3 A generalization to instance space Σ^{≤n}
We have proved that the extended class of shuffle ideals is PAC learnable from element-wise i.i.d. fixed-length strings. Nevertheless, in many real-world applications such as natural language processing
and computational linguistics, it is more natural to have strings of varying lengths. Let n be the
maximum length of the sample strings; as a consequence, the instance space for learning is Σ^{≤n}.
Here we show how to generalize the statistical query algorithm in Section 3.1 to the more general
instance space Σ^{≤n}.
Let A_i be the algorithm in Section 3.1 for learning shuffle ideals from element-wise i.i.d. strings of
fixed length i. Because the instance space Σ^{≤n} = ∪_{i≤n} Σ^i, we divide the sample S into n subsets {S_i}
where S_i = {x | |x| = i}. An initial statistical query is then made to estimate the probability Pr[|x| = i]
for each i ≤ n at tolerance ε/(8n). We discard all subsets S_i with query answer ≤ 3ε/(8n) in the
learning procedure, because we then know Pr[|x| = i] ≤ ε/(2n). There are at most (n − 1) such
S_i of low occurrence probabilities, so the total probability that an instance comes from one of these
negligible sets is at most ε/2. Otherwise, Pr[|x| = i] ≥ ε/(4n) and we apply algorithm A_i, with error
parameter ε/2, on each S_i with query answer ≥ 3ε/(8n). Because the probability of the condition
is polynomially large, the algorithm is feasible. Finally, the total error over the whole instance space
is bounded by ε, and concept class X is PAC learnable from element-wise i.i.d. strings over
instance space Σ^{≤n}.
Corollary 1 Under element-wise independent and identical distributions over instance space I =
Σ^{≤n}, concept class X is approximately identifiable with O(sn²) conditional statistical queries from
STAT(X, D) at tolerance

    τ = ε² / (160sn² + 8ε)

or with O(sn²) statistical queries from STAT(X, D) at tolerance

    τ′ = (1 − ε²/(40sn² + 2ε)) · ε⁴ / (512sn²(20sn² + ε))
3.4 A constrained generalization to product distributions
A direct generalization of element-wise independent and identical distributions is product distributions. A random string, or a random vector of symbols, under a product distribution has
element-wise independence between its elements. That is, Pr[X = x] = Π_{i=1}^{|x|} Pr[X_i = x_i]. Although strings under product distributions share many independence properties with element-wise
i.i.d. strings, the algorithm in Section 3.1 is not directly applicable to this case, as the distribution
{R(j, i)} defined above is not unimodal with respect to i in general. However, the intuition that,
given I_{V⊑x} = h, the strings with x_{h+1} ∈ U_{ℓ+1} have a higher probability of positivity than the
strings with x_{h+1} ∉ U_{ℓ+1} is still true under product distributions. Thus we generalize the query χ_{V,a}
and define, for any V ∈ (2^Σ)^{≤n}, a ∈ Σ and h ∈ [0, n − 1],

    χ̃_{V,a,h}(x, y) = (1/2)(y + 1)    given I_{V⊑x} = h and x_{h+1} = a

where y = c(x) is the label of example string x. To ensure the legitimacy and feasibility of the
algorithm, we have to attach a lower bound assumption that Pr[x_i = a] ≥ t > 0 for all 1 ≤ i ≤ n and
all a ∈ Σ. Appendix C provides a constrained algorithm based on this intuition. Let P(+|a, h) denote
E[χ̃_{V,a,h}]. If the difference P(+|a⁺, h) − P(+|a⁻, h) is large enough for some h with non-negligible
Pr[I_{V⊑x} = h], then we are able to learn the next element in U. Otherwise, the difference is very
small and we will show that there is an interval starting from index (h + 1) which we can skip
with little risk. The algorithm is able to classify any string whose classification process skips O(1)
intervals. Details of this constrained generalization are deferred to Appendix C.
4 Learning principal shuffle ideals from Markovian strings
Markovian strings are widely studied in natural language processing and biological sequence modeling. Formally, a random string x is Markovian if the distribution of x_{i+1} only depends on the
value of x_i: Pr[x_{i+1} | x₁ . . . x_i] = Pr[x_{i+1} | x_i] for any i ≥ 1. If we denote by π₀ the distribution
of x₁ and define the s × s stochastic matrix M by M(a₁, a₂) = Pr[x_{i+1} = a₁ | x_i = a₂], then a
random string can be viewed as a Markov chain with initial distribution π₀ and transition matrix
M. We choose Σ^{≤n} as the instance space in this section and assume independence between the
string length and the symbols in the string. We assume Pr[|x| = k] ≥ t for all 1 ≤ k ≤ n and
min{M(·, ·), π₀(·)} ≥ c for some positive t and c. We will prove the PAC learnability of the class Ш
under this lower bound assumption. Denote by u the target pattern string and let L = |u|.
4.1 Statistical query algorithm
Starting with the empty string v = λ, the pattern string u is recovered one symbol at a time. Having
recovered v = u[1, ℓ], we infer u_{ℓ+1} by χ_{v,a} = Σ_{k=h+1}^{n} E[χ_{v,a,k}(x, y)], where

    χ_{v,a,k}(x, y) = (1/2)(y + 1)    given I_{v⊑x} = h, x_{h+1} = a and |x| = k,

0 ≤ ℓ < L, and h is chosen from [0, n − 1] such that the probability Pr[I_{v⊑x} = h] is polynomially
large. The statistical queries χ_{v,a,k} are made at the tolerance τ claimed in Theorem 2, and the symbol
with the largest query result of χ_{v,a} is proved to be u_{ℓ+1}. Again, the case where ℓ = L is addressed
by the query Pr[y = +1 | v ⊑ x]. The learning procedure is completed if the query result is close to 1.
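For concreteness, Markovian example strings of this kind can be generated as below (our own sketch; note that rows of M here are conditioned on the current symbol, i.e., the transpose of the paper's M(a₁, a₂) convention, and the sample parameters are arbitrary):

import numpy as np

def sample_markov_string(pi0, M, length, rng):
    # x_1 ~ pi0, then x_{i+1} ~ M[x_i, :]
    x = [rng.choice(pi0.size, p=pi0)]
    for _ in range(length - 1):
        x.append(rng.choice(M.shape[1], p=M[x[-1]]))
    return x

rng = np.random.default_rng(0)
pi0 = np.array([0.5, 0.5])
M = np.array([[0.7, 0.3], [0.4, 0.6]])
x = sample_markov_string(pi0, M, 10, rng)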
4.2 PAC learnability of principal ideal Ш
With the query χ_{v,a}, we are able to recover the pattern string u approximately from STAT(Ш(u), D) at a
proper tolerance, as stated in Theorem 2:
Theorem 2 Under Markovian string distributions over instance space I = Σ^{≤n}, given Pr[|x| =
k] ≥ t > 0 for all 1 ≤ k ≤ n and min{M(·, ·), π₀(·)} ≥ c > 0, concept class Ш is approximately
identifiable with O(sn²) conditional statistical queries from STAT(Ш, D) at tolerance

    τ = ε² / (3n² + 2n + 2ε)

or with O(sn²) statistical queries from STAT(Ш, D) at tolerance

    τ′ = 3ctn²ε² / (3n² + 2n + 2ε)²
Please refer to Appendix B for a complete proof of Theorem 2. Due to the probability lower bound
assumptions, the legitimacy and feasibility are obvious. To calculate the tolerance for PAC learning,
we first consider the exact learning tolerance. Let x′ be an infinite string generated by the Markov
chain defined above. For any 0 ≤ ℓ ≤ L − j, we define the quantity R_ℓ(j, i) by

    R_ℓ(j, i) = Pr[u[ℓ+1, ℓ+j] ⊑ x′[m+1, m+i] ∧ u[ℓ+1, ℓ+j] ⋢ x′[m+1, m+i−1] | x′_m = u_ℓ]
Lemma 3 Under Markovian string distributions over instance space I = ??n , given Pr[|x| =
k] ? t > 0 for ?1 ? k ? n and min{M (?, ?), ?0 (?)} ? c > 0, the concept class is exactly
identifiable with O(sn2 ) conditional statistical queries from STAT( , D) at tolerance
)
(
n
X
1
R`+1 (L ? ` ? 1, k ? h ? 1)
? 0 = min
0?`<L
3(n ? h)
k=h+1
The algorithm first deals with the marginal case where Pr[y = +1] ≤ ε, through the query Pr[y = +1].
If it doesn't halt, we know Pr[y = +1] is at least ε(3n² + 2n)/(3n² + 2n + 2ε). We then make a
statistical query χ′_h(x, y) = (1/2)(y + 1) · 1{I_{v⊑x} = h} for each h from ℓ to n − 1. It can be shown that
at least one h will give an answer ≥ ε(3n + 1)/(3n² + 2n + 2ε). This implies lower bounds for
Pr[I_{v⊑x} = h] and Pr[y = +1 | I_{v⊑x} = h]. The former guarantees the feasibility, while the latter
can serve as a lower bound for the sum in Lemma 3 after some algebra, which completes the proof.
The assumption on M and π₀ can be weakened to M(u_{ℓ+1}, u_ℓ) = Pr[x₂ = u_{ℓ+1} | x₁ = u_ℓ] ≥ c
and π₀(u₁) ≥ c for all 1 ≤ ℓ ≤ L − 1. We first make a statistical query to estimate M(a, u_ℓ)
for ℓ ≥ 1, or π₀(a) for ℓ = 0, for each symbol a ∈ Σ at tolerance c/3. If the result is ≤ 2c/3,
then M(a, u_ℓ) ≤ c or π₀(a) ≤ c and we won't consider symbol a at this position. Otherwise,
M(a, u_ℓ) ≥ c/3 or π₀(a) ≥ c/3 and the queries in the algorithm are feasible.
Corollary 2 Under Markovian string distributions over instance space I = Σ^{≤n}, given Pr[|x| =
k] ≥ t > 0 for all 1 ≤ k ≤ n, π₀(u₁) ≥ c and M(u_{ℓ+1}, u_ℓ) ≥ c > 0 for all 1 ≤ ℓ ≤ L − 1, concept
class Ш is approximately identifiable with O(sn²) conditional statistical queries from STAT(Ш, D)
at tolerance

    τ = min{ ε² / (3n² + 2n + 2ε), c/3 }

or with O(sn²) statistical queries from STAT(Ш, D) at tolerance

    τ′ = min{ ctn²ε² / (3n² + 2n + 2ε)², tnc²ε² / (3(3n² + 2n + 2ε)) }
5 Learning shuffle ideals under general distributions
Although the string distribution is restricted or even known in most application scenarios, one might
be interested in learning shuffle ideals under general unrestricted and unknown distributions, without
any prior knowledge. Unfortunately, under standard complexity assumptions, the answer is negative.
Angluin et al. [3] have shown that a polynomial time PAC learning algorithm for principal shuffle
ideals would imply the existence of polynomial time algorithms to break the RSA cryptosystem,
factor Blum integers, and test quadratic residuosity.
Theorem 3 ([3]) For any alphabet of size at least 2, given two disjoint sets of strings S, T ⊆ Σ^{≤n},
the problem of determining whether there exists a string u such that u ⊑ x for each x ∈ S and
u ⋢ x for each x ∈ T is NP-complete.
As the class Ш is a subclass of the class X, we know learning X is only harder. Is the problem easier
over instance space Σⁿ? The answer is again no.
Lemma 4 Under general unrestricted string distributions, a concept class is PAC learnable over
instance space Σ^{≤n} if and only if it is PAC learnable over instance space Σⁿ.
The proof of Lemma 4 is presented in Appendix D using the same idea as our generalization in
Section 3.3. Note that Lemma 4 holds under general string distributions. It is not necessarily true
when we have assumptions on the marginal distribution of string length.
Despite the infeasibility of PAC learning a shuffle ideal in theory, it is worth exploring the possibilities of solving the classification problem without theoretical guarantees, since most applications care
more about the empirical performance than about theoretical results. For this purpose we propose a
heuristic greedy algorithm for learning principal shuffle ideals based on the following reward strategy.
Upon having recovered v = û[1, ℓ], for a symbol a ∈ Σ and a string x of length n, we say a consumes k elements in x if min{I_{va⊑x}, n + 1} − I_{v⊑x} = k. The reward strategy depends on the ratio
r₊/r₋: the algorithm receives r₋ reward for each element it consumes in a negative example, or
r₊ penalty for each symbol it consumes in a positive string. A symbol is chosen as û_{ℓ+1} if it
brings us the most reward. The algorithm will halt once û exhausts any positive example and makes a
false negative error, which means we have gone too far. Finally, the ideal Ш(û[1, ℓ − 1]) is returned
as the hypothesis. The performance of this greedy algorithm depends a great deal on the selection of
the parameter r₊/r₋. A clever choice is r₊/r₋ = #(−)/#(+), where #(+) is the number of positive
examples x such that û ⊑ x and #(−) is the number of negative examples x such that û ⊑ x.
A more recommended but more complex strategy to determine the parameter r₊/r₋ in practice is
cross validation.
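A compact rendering of the heuristic (our own sketch: examples are (x, y) pairs with y in {+1, −1}, ties in the reward are broken arbitrarily, and the loop is capped at the longest example length):

def greedy_shuffle_ideal(examples, sigma, r_pos, r_neg):
    def last_pos(pattern, x):
        # min{I_{pattern sqsubseteq x}, n + 1} for the leftmost embedding
        i = 0
        for symbol in pattern:
            i = x.find(symbol, i) + 1
            if i == 0:               # symbol missing: pattern exhausts x
                return len(x) + 1
        return i

    u_hat = ""
    for _ in range(max(len(x) for x, _ in examples)):
        def reward(a):
            total = 0.0
            for x, y in examples:
                k = last_pos(u_hat + a, x) - last_pos(u_hat, x)  # consumed
                total += r_neg * k if y < 0 else -r_pos * k
            return total
        best = max(sigma, key=reward)
        if any(y > 0 and last_pos(u_hat + best, x) > len(x) for x, y in examples):
            return u_hat             # gone too far: return u_hat[1, l - 1]
        u_hat += best
    return u_hat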
A better studied approach to learning regular languages, especially the piecewise-testable ones, in
recent works is kernel machines ([12, 13]). An obvious advantage of kernel machines over our
greedy method is their broad applicability to general classification learning problems. Nevertheless,
the time complexity of the kernel machine is O(N³ + n²N²) on a training sample set of size N
([5]), while our greedy method only takes O(snN) time due to its great simplicity. Because N
is usually huge for the demand of accuracy, kernel machines suffer from low efficiency and long
running times in practice. To compare the greedy method and kernel machines in terms of empirical
performance, we conducted a series of experiments on a real-world dataset [4] with
the string length n as a variable. The experiment results demonstrate the empirical advantage in both
efficiency and accuracy of the greedy algorithm over the kernel method, in spite of its simplicity.
As this is a theoretical paper, we defer the details of the experiments to Appendix D, including the
experiment setup and figures of detailed experiment results.
6 Discussion
We have shown positive results for learning shuffle ideals in the statistical query model under
element-wise independent and identical distributions and Markovian distributions, as well as a constrained generalization to product distributions. It is still open to explore the possibilities of learning
shuffle ideals under less restricted distributions with weaker assumptions. Also a lot more work
needs to be done on approximately learning shuffle ideals in applications with pragmatic approaches.
In the negative direction, even a family of regular languages as simple as the shuffle ideals is not
efficiently properly PAC learnable under general unrestricted distributions unless RP=NP. Thus, the
search for a nontrivial properly PAC learnable family of regular languages continues. Another theoretical question that remains is how hard the problem of learning shuffle ideals is, or whether PAC
learning a shuffle ideal is as hard as PAC learning a deterministic finite automaton.
Acknowledgments
We give our sincere gratitude to Professor Dana Angluin of Yale University for valuable discussions
and comments on the learning problem and the proofs. Our thanks are also due to Professor Joseph
Chang of Yale University for suggesting supportive references on strong unimodality of probability
distributions and to the anonymous reviewers for their helpful feedback.
References
[1] D. Angluin. On the complexity of minimum inference of regular sets. Information and Control, 39(3):337–350, 1978.
[2] D. Angluin. Learning regular sets from queries and counterexamples. Information and Computation, 75(2):87–106, Nov. 1987.
[3] D. Angluin, J. Aspnes, S. Eisenstat, and A. Kontorovich. On the learnability of shuffle ideals. Journal of Machine Learning Research, 14:1513–1531, 2013.
[4] K. Bache and M. Lichman. NSF research award abstracts 1990–2003 data set. UCI Machine Learning Repository, 2013.
[5] L. Bottou and C.-J. Lin. Support vector machine solvers. Large scale kernel machines, pages 301–320, 2007.
[6] N. H. Bshouty. Exact learning of formulas in parallel. Machine Learning, 26(1):25–41, Jan. 1997.
[7] C. de la Higuera. A bibliographical study of grammatical inference. Pattern Recognition, 38(9):1332–1348, Sept. 2005.
[8] B. Gnedenko and A. N. Kolmogorov. Limit distributions for sums of independent random variables. Addison-Wesley series in statistics, 1949.
[9] E. M. Gold. Complexity of automaton identification from given data. Information and Control, 37(3):302–320, 1978.
[10] I. Ibragimov. On the composition of unimodal distributions. Theory of Probability and Its Applications, 1(2):255–260, 1956.
[11] M. Kearns. Efficient noise-tolerant learning from statistical queries. Journal of the ACM (JACM), 45(6):983–1006, Nov. 1998.
[12] L. A. Kontorovich, C. Cortes, and M. Mohri. Kernel methods for learning languages. Theoretical Computer Science, 405(3):223–236, Oct. 2008.
[13] L. A. Kontorovich and B. Nadler. Universal kernel-based learning with applications to regular languages. The Journal of Machine Learning Research, 10:1095–1129, June 2009.
[14] K. Koskenniemi. Two-level model for morphological analysis. Proceedings of the Eighth International Joint Conference on Artificial Intelligence – Volume 2, pages 683–685, 1983.
[15] M. Mohri. On some applications of finite-state automata theory to natural language processing. Journal of Natural Language Engineering, 2(1):61–80, Mar. 1996.
[16] M. Mohri. Finite-state transducers in language and speech processing. Computational Linguistics, 23(2):269–311, June 1997.
[17] M. Mohri, P. J. Moreno, and E. Weinstein. Efficient and robust music identification with weighted finite-state transducers. IEEE Transactions on Audio, Speech, and Language Processing, 18(1):197–207, Jan. 2010.
[18] M. Mohri, F. Pereira, and M. Riley. Weighted finite-state transducers in speech recognition. Computer Speech and Language, 16(1):69–88, 2002.
[19] L. Pitt and M. K. Warmuth. The minimum consistent DFA problem cannot be approximated within any polynomial. Journal of the ACM (JACM), 40(1):95–142, Jan. 1993.
[20] O. Rambow, S. Bangalore, T. Butt, A. Nasr, and R. Sproat. Creating a finite-state parser with application semantics. Proceedings of the 19th International Conference on Computational Linguistics – Volume 2, pages 1–5, 2002.
[21] R. Sproat, W. Gale, C. Shih, and N. Chang. A stochastic finite-state word-segmentation algorithm for Chinese. Computational Linguistics, 22(3):377–404, Sept. 1996.
[22] L. G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134–1142, Nov. 1984.
| 5606 |@word repository:1 version:1 polynomial:8 open:1 harder:1 initial:3 series:3 contains:1 lichman:1 recovered:4 si:5 ctn:2 additive:1 moreno:1 greedy:7 intelligence:1 warmuth:1 accepting:1 provides:1 mathematical:1 c2:1 direct:1 transducer:3 prove:4 weinstein:1 inside:1 introduce:1 x0:11 little:1 snn:1 unpredictable:1 cardinality:1 solver:1 provided:2 notation:2 bounded:1 mass:1 string:83 kleene:1 guarantee:3 every:2 subclass:2 concave:1 exactly:2 nonnegligible:1 x0m:2 control:3 positive:7 before:1 negligible:1 engineering:1 limit:1 consequence:1 despite:1 approximately:7 might:1 studied:4 weakened:1 gone:1 unique:1 acknowledgment:1 union:1 practice:3 rsa:1 procedure:2 jan:3 universal:1 empirical:4 confidence:1 word:2 regular:12 refers:1 spite:2 get:1 cannot:1 close:2 selection:1 clever:1 risk:1 restriction:1 deterministic:2 map:1 reviewer:1 starting:3 independently:2 automaton:4 simplicity:3 eisenstat:1 classic:1 embedding:2 handle:1 coordinate:1 target:3 parser:1 exact:6 gnedenko:1 us:1 hypothesis:2 element:23 recognition:4 satisfying:1 approximated:1 continues:1 bache:1 labeled:6 calculate:1 wj:1 morphological:2 shuffle:49 consumes:3 valuable:1 intuition:3 complexity:4 reward:4 algebra:2 serve:1 upon:1 efficiency:5 learner:1 joint:1 unimodality:6 kolmogorov:1 alphabet:4 query:54 artificial:1 tell:1 apparent:1 heuristic:4 widely:2 larger:1 whose:1 say:4 otherwise:6 statistic:1 itself:1 legitimacy:5 advantage:4 sequence:2 propose:3 product:9 uci:1 achieve:1 gold:1 empty:4 requirement:1 cluster:4 stat:15 bshouty:1 strong:6 skip:2 implies:2 come:1 direction:2 stochastic:2 vx:5 hx:2 generalization:9 preliminary:2 anonymous:1 biological:1 exploring:2 extension:2 hold:2 great:2 nadler:1 mapping:2 pitt:1 smallest:2 a2:2 purpose:1 applicable:1 label:2 iw:2 largest:3 vice:1 correctness:1 weighted:2 modified:1 avoid:1 pn:1 varying:1 corollary:2 june:2 properly:2 indicates:1 helpful:1 inference:2 membership:1 relation:2 interested:2 i1:1 selects:1 iu:1 semantics:1 classification:3 denoted:1 constrained:7 special:1 marginal:3 equal:1 once:1 having:3 identical:8 biology:1 broad:1 np:3 sincere:1 piecewise:1 bangalore:1 huge:1 mining:1 possibility:2 deferred:1 introduces:1 chain:3 accurate:1 unless:3 iv:4 divide:1 theoretical:8 minimal:1 instance:21 classify:1 modeling:1 markovian:9 contiguous:2 cover:1 lattice:3 riley:1 applicability:1 introducing:1 subset:3 uniform:3 conducted:1 too:1 learnability:10 answer:6 referring:2 thanks:1 fundamental:1 international:2 ie:8 kontorovich:3 again:2 containing:2 choose:1 gale:1 positivity:1 creating:1 return:2 suggesting:1 de:1 star:1 exhaust:1 vi:2 depends:3 view:1 break:1 lot:1 start:2 recover:2 parallel:1 defer:2 contribution:1 accuracy:5 variance:1 efficiently:4 generalize:2 identification:2 worth:1 definition:6 obvious:3 proof:7 proved:4 dataset:1 popular:1 realm:1 knowledge:3 segmentation:1 wesley:1 originally:1 tolerate:1 higher:1 evaluated:2 done:1 strongly:2 generality:1 mar:1 receives:1 lack:1 fixedlength:1 mode:3 brings:1 concept:15 true:2 former:1 i2:1 deal:2 please:1 higuera:1 won:1 leftmost:2 presenting:1 complete:4 demonstrate:3 tn:1 wise:18 common:1 multinomial:1 volume:2 significant:1 refer:2 composition:1 versa:1 counterexample:1 ai:2 similarly:1 language:20 access:2 recent:1 discard:1 scenario:3 claimed:2 binary:2 supportive:1 minimum:3 unrestricted:6 greater:1 additional:1 care:1 determine:1 recommended:1 multiple:1 unimodal:9 infer:2 technical:1 cross:1 long:1 lin:1 cryptosystem:1 award:1 halt:4 feasibility:7 a1:2 expectation:1 kernel:9 
robotics:1 interval:2 addressed:1 completes:2 probably:1 comment:1 elegant:1 seem:1 call:1 integer:5 structural:1 ideal:57 sproat:2 identically:1 enough:3 independence:3 idea:2 whether:4 ul:1 padding:1 penalty:1 suffer:1 returned:1 speech:5 dfa:3 detailed:1 sn2:14 ibragimov:1 angluin:7 problematic:1 nsf:1 notice:1 disjoint:1 correctly:1 write:1 key:2 shih:1 nevertheless:3 blum:1 drawn:3 clean:1 sum:3 run:1 family:4 throughout:3 draw:1 appendix:6 bound:9 yale:4 quadratic:1 oracle:8 identifiable:6 nontrivial:1 precisely:2 x2:1 generates:1 u1:3 min:7 department:1 according:1 smaller:1 joseph:1 making:1 intuitively:1 restricted:4 pr:34 computationally:1 remains:1 know:4 addison:1 apply:1 occurrence:2 rp:2 existence:1 original:1 running:1 include:2 linguistics:5 ensure:1 completed:1 music:2 testable:1 prof:1 especially:1 chinese:1 question:1 quantity:3 strategy:3 dependence:1 usual:1 separate:1 concatenation:6 topic:1 trivial:2 induction:1 length:11 index:3 illustration:1 providing:2 ratio:1 equivalently:1 setup:1 unfortunately:4 stated:2 negative:5 proper:1 unknown:3 upper:1 observation:1 convolution:3 markov:3 finite:13 situation:1 extended:9 witness:2 communication:1 introduced:1 gratitude:1 address:1 able:3 usually:2 pattern:6 below:1 eighth:1 including:1 natural:6 attach:1 imply:1 sept:2 sn:6 prior:2 determining:3 dana:1 validation:1 consistent:1 vij:1 share:1 mohri:5 last:1 copy:1 formal:1 weaker:2 vv:2 taking:1 tolerance:21 grammatical:1 feedback:1 evaluating:1 transition:2 world:2 computes:1 doesn:3 collection:5 made:3 far:1 polynomially:3 transaction:1 nov:3 butt:1 tolerant:1 xi:12 subsequence:2 search:1 infeasibility:1 learn:1 robust:1 inherently:1 symmetry:1 bottou:1 necessarily:4 meanwhile:1 complex:1 vj:1 main:5 bounding:2 noise:3 whole:1 n2:8 prx:1 repeated:1 x1:3 augmented:12 sub:1 position:2 pereira:1 xh:4 exponential:1 learns:1 theorem:7 formula:1 pac:29 showing:1 learnable:15 symbol:25 cortes:1 intractable:1 exists:5 false:1 valiant:5 effectively:1 demand:1 chen:2 gap:1 easier:1 distinguishable:1 explore:1 jacm:2 u2:1 chang:2 acm:3 oct:1 conditional:7 viewed:1 professor:2 feasible:5 hard:3 infinite:3 reducing:1 principal:13 lemma:8 called:4 kearns:4 total:2 la:1 formally:2 phylogeny:1 pragmatic:1 support:3 latter:2 audio:1 ex:5 |
5,089 | 5,607 | Extracting Certainty from Uncertainty: Transductive
Pairwise Classification from Pairwise Similarities
Tianbao Yang†, Rong Jin‡♮
† The University of Iowa, Iowa City, IA 52242
‡ Michigan State University, East Lansing, MI 48824
♮ Alibaba Group, Hangzhou 311121, China
[email protected], [email protected]
Abstract
In this work, we study the problem of transductive pairwise classification from
pairwise similarities¹. The goal of transductive pairwise classification from pairwise similarities is to infer the pairwise class relationships, to which we refer as
pairwise labels, between all examples given a subset of class relationships for a
small set of examples, to which we refer as labeled examples. We propose a very
simple yet effective algorithm that consists of two simple steps: the first step is
to complete the sub-matrix corresponding to the labeled examples and the second step is to reconstruct the label matrix from the completed sub-matrix and the
provided similarity matrix. Our analysis shows that, under several mild preconditions, we can recover the label matrix with a small error if the top eigen-space
that corresponds to the largest eigenvalues of the similarity matrix covers well the
column space of the label matrix and has a low coherence, and the number of
observed pairwise labels is sufficiently large. We demonstrate the effectiveness
of the proposed algorithm by several experiments.
1 Introduction
Pairwise classification aims to determine if two examples belong to the same class. It has been
studied in several different contexts, depending on what prior information is provided. In this paper,
we tackle the pairwise classification problem provided with a pairwise similarity matrix and a small
set of true pairwise labels. We refer to the problem as transductive pairwise classification from
pairwise similarities. The problem has many applications in real world situations. For example, in
network science [17], an interesting task is to predict whether a link between two nodes is likely to
occur given a snapshot of a network and certain similarities between the nodes. In computational
biology [16], an important problem is to predict whether two protein sequences belong to the same
family based on their sequence similarities, with some partial knowledge about protein families
available. In computer vision, a good application can be found in face verification [5], which aims
to verify whether two face images belong to the same identity given some pairs of training images.
The challenge in solving the problem arises from the uncertainty of the given pairwise similarities in
reflecting the pairwise labels. Therefore, the naive approach of binarizing the similarity values with
a threshold would suffer from poor performance. One common approach towards the problem is to
cast the problem into a clustering problem and derive the pairwise labels from the clustering results.
Many algorithms have been proposed to cluster the data using the pairwise similarities and a subset
of pairwise labels. However, the success of these algorithms usually depends on how many pairwise
labels are provided and how well the pairwise similarities reflect the true pairwise labels as well.
¹ The pairwise similarities are usually derived from some side information instead of the underlying class labels.
In this paper, we focus on the theoretical analysis of the problem. Essentially, we answer the question
of what property the similarity matrix should satisfy and how many pre-determined pairwise labels
are sufficient in order to recover the true pairwise labels between all examples. We base our analysis
on a very simple scheme which is composed of two steps: (i) the first step recovers the sub-matrix
of the label matrix from the pre-determined entries by matrix completion, which has been studied
extensively and can be solved efficiently; (ii) the second step estimates the full label matrix by
simple matrix products based on the top eigen-space of the similarity matrix and the completed
sub-matrix. Our empirical studies demonstrate that the proposed algorithm can be more effective than
spectral clustering and the kernel alignment approach in exploiting the pre-determined labels and the
provided similarities.
To summarize our theoretical results: under some appropriate pre-conditions, namely the distribution of data over the underlying classes in hindsight is well balanced, the labeled data are uniformly
sampled from all data and the pre-determined pairwise labels are uniformly sampled from all pairs
between the labeled examples, we can recover the label matrix with a small error if (i) the top eigenspace that corresponds to the s largest eigen-values of the similarity matrix covers well the column
space of the label matrix and has a low coherence, and (ii) the number of pre-determined pairwise
labels N on m labeled examples satisfies N = Ω(m log² m) and m = Ω(μ_s s log s), where μ_s is a
coherence measure of the top eigen-space of the similarity matrix.
2 Related Work
The transductive pairwise classification problem is closely related to semi-supervised clustering,
where a set of pairwise labels are provided with pairwise similarities or feature vectors to cluster a
set of data points. We focus our attention on the works where the pairwise similarities instead of the
feature vectors are served as inputs.
Spectral clustering [19] and kernel k-means [7] are probably the most widely applied clustering
algorithms given a similarity matrix or a kernel matrix. In spectral clustering, one first computes
the top eigen-vectors of a similarity matrix (or bottom eigen-vectors of a Laplacian matrix), and
then cluster the eigen-matrix into a pre-defined number of clusters. Kernel k-means is a variant
of k-means that computes the distances using the kernel similarities. One can easily derive the
pairwise labels from the clustering results by assuming that if two data points assigned to the same
cluster belong to the same class and vice versa. To utilize some pre-determined pairwise labels, one
can normalize the similarities and replace the entries corresponding to the observed pairs with the
provided labels.
There also exist some works that try to learn a parametric or non-parametric kernel from the predetermined pairwise labels and the pairwise similarities. Hoi et al. [13] proposed to learn a parametric kernel that is characterized by a combination of the top eigen-vectors of a (kernel) similarity
matrix by maximizing a kernel alignment measure over the combination weights. Other works [2, 6]
that exploit the pairwise labels for clustering are conducted using feature vector representations of
data points. However, all of these works lack an analysis of the algorithms, which is important
from a theoretical point of view. There also exists a large body of research on preference learning and ranking in the semi-supervised or transductive setting [1, 14]. We did not compare with these methods because
the ground-truth that we analyze for a pair of data, denoted by h(u, v), is a symmetric function, i.e.,
h(u, v) = h(v, u), while in preference learning the function h(u, v) is an asymmetric function.
Our theoretical analysis is built on several previous studies on matrix completion and matrix reconstruction by random sampling. Candès and Recht [3] developed a theory of matrix completion from partial observations that provides a theoretical guarantee of perfect recovery of a low rank matrix under
appropriate conditions on the matrix and the number of observations. Several works [23, 10, 15, 28]
analyzed the approximation error of the Nyström method that approximates a kernel matrix by sampling a small number of columns. All of these analyses exploit an important measure of an orthogonal matrix, i.e., matrix incoherence, which also plays an important role in our analysis.
It has been brought to our attention that two recent works [29, 26] are closely related to the present
work but with remarkable differences. Both works present a matrix completion theory with side
information. Yi et al. [29] aim to complete the pairwise label matrix given partially observed entries for semi-supervised clustering. Under the assumption that the column space of the symmetric
pairwise label matrix to be completed is spanned by the top left singular vectors of the data matrix,
they show that their algorithm can perfectly recover the pairwise label matrix with a high probability. In [26], the authors assume that the column and row space of the matrix to be completed is
given a priori and show that the number of observations required to perfectly complete the
matrix can be reduced substantially. There are two remarkable differences between [29, 26] and
our work: (i) we target a transductive setting, in which the observed partial entries are not uniformly sampled from the whole matrix; therefore their algorithms are not applicable; (ii) we prove
a small reconstruction error when the assumption that the column space of the pairwise label matrix
is spanned by the top eigen-vectors of the pairwise similarity matrix fails.
3 The Problem and A Simple Algorithm
We first describe the problem of transductive pairwise classification from pairwise similarities, and
then present a simple algorithm.
3.1 Problem Definition
Let D_n = {o_1, . . . , o_n} be a set of n examples. We are given a pairwise similarity matrix denoted
by S ∈ R^{n×n} with each entry S_ij measuring the similarity between o_i and o_j, a set of m random
samples denoted by D̂_m = {ô_1, . . . , ô_m} ⊆ D_n, and a subset of pre-determined pairwise labels,
each being either 1 or 0, that are randomly sampled from all pairs between the examples in D̂_m. The
problem is to recover the pairwise labels of all remaining pairs between examples in D_n. Note that
the key difference between our problem and previous matrix completion problems is that the partially
observed entries are only randomly distributed over D̂_m × D̂_m instead of D_n × D_n.
We are interested in the case that the pairwise labels indicate the pairwise class relationships, i.e., the pairwise
label between two examples being equal to 1 indicates they belong to the same class, and being equal
to 0 indicates that they belong to different classes. We denote by r the number of underlying classes.
We introduce a label matrix Z ∈ {0, 1}^{n×n} to represent the pairwise labels between all examples,
and similarly denote by Ẑ ∈ {0, 1}^{m×m} the pairwise labels between any two labeled examples² in
D̂_m. To capture the subset of pre-determined pairwise labels for the labeled data, we introduce a set
Ω ⊆ [m] × [m] to indicate the subset of observed entries in Ẑ, i.e., the pairwise label Ẑ_{i,j}, (i, j) ∈ Ω,
is observed if and only if the pairwise label between ô_i and ô_j is pre-determined. We denote by Ẑ_Ω
the partially observed label matrix, i.e.

    [\hat{Z}_\Omega]_{i,j} = \begin{cases} \hat{Z}_{i,j} & (i, j) \in \Omega \\ \text{N/A} & (i, j) \notin \Omega \end{cases}

The goal of transductive pairwise classification from pairwise similarities is to estimate the pairwise label matrix Z ∈ {0, 1}^{n×n} for all examples in D_n using (i) the pairwise similarities in S and
(ii) the partially observed label matrix Ẑ_Ω.
3.2 A Simple Algorithm
In order to estimate the label matrix Z, the proposed algorithm consists of two steps. The first step is
to recover the sub-matrix Ẑ, and the second step is to estimate the label matrix Z using the recovered
Ẑ and the provided similarity matrix S.
Recover the sub-matrix Ẑ First, we note that the label matrix Z and the sub-matrix Ẑ are of
low rank by assuming that the number of hidden classes r is small. To see this, we let g_k ∈
{1, 0}^n, ĝ_k ∈ {1, 0}^m denote the class assignments to the k-th hidden class of all data and the
labeled data, respectively. It is straightforward to show that

    Z = \sum_{k=1}^{r} g_k g_k^\top, \qquad \hat{Z} = \sum_{k=1}^{r} \hat{g}_k \hat{g}_k^\top    (1)

² The labeled examples refer to examples in D̂_m that serve as the bed for the pre-determined pairwise labels.
Algorithm 1 A Simple Algorithm for Transductive Pairwise Classification by Matrix Completion
1: Input:
   • S: a pairwise similarity matrix between all examples in D_n
   • Ẑ_Ω: the subset of observed pairwise labels for labeled examples in D̂_m
   • s < m: the number of eigenvectors used for estimating Z
2: Compute the first s eigen-vectors of the similarity matrix S   // Preparation
3: Estimate Ẑ by solving the optimization problem in (2)   // Step 1: recover the sub-matrix Ẑ
4: Estimate the label matrix Z using (5)   // Step 2: estimate the label matrix Z
5: Output: Z
which clearly indicates that both Z and Ẑ are of low rank if r is significantly smaller than m. As a
result, we can apply the matrix completion algorithm [20] to recover Ẑ by solving the following
optimization problem:

    \min_{M \in \mathbb{R}^{m \times m}} \|M\|_{\mathrm{tr}} \quad \text{s.t.} \quad M_{i,j} = \hat{Z}_{i,j} \;\; \forall (i, j) \in \Omega    (2)

where ‖M‖_tr denotes the nuclear norm of a matrix.
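As an illustration, a minimal sketch of this completion step, assuming the cvxpy package is available; the function name and the encoding of Ω as a list of index pairs are our own illustrative choices, not from the paper:

    # Sketch of step 1 (Equation 2): nuclear-norm matrix completion.
    # Omega is a list of observed (i, j) pairs; z_obs holds the 0/1 labels.
    import cvxpy as cp

    def complete_submatrix(m, Omega, z_obs):
        M = cp.Variable((m, m))
        constraints = [M[i, j] == z for (i, j), z in zip(Omega, z_obs)]
        problem = cp.Problem(cp.Minimize(cp.norm(M, "nuc")), constraints)
        problem.solve()
        return M.value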
Estimate the label matrix Z The second step is to estimate the remaining entries in the label
matrix Z. In the sequel, for the ease of analysis, we will attain an estimate of the full matrix Z, from
which one can obtain the pairwise labels between all remaining pairs.
We first describe the motivation of the second step and then present the details of computation.
Assuming that there exists an orthogonal matrix U_s = (u_1, . . . , u_s) ∈ R^{n×s} whose column space
subsumes the column space of the label matrix Z, where s ≥ r, then there exist a_k ∈ R^s, k =
1, . . . , r, such that

    g_k = U_s a_k, \quad k = 1, . . . , r.    (3)

Considering the formulation of Z and Ẑ in (1), the second step works as follows: we first compute
an estimate of \sum_{k=1}^{r} a_k a_k^\top from the completed sub-matrix Ẑ, then compute an estimate of Z based
on the estimate of \sum_{k=1}^{r} a_k a_k^\top. To this end, we construct the following optimization problems for
k = 1, . . . , r:

    \hat{a}_k = \arg\min_{a} \|\hat{g}_k - \hat{U}_s a\|_2^2 = (\hat{U}_s^\top \hat{U}_s)^{\dagger} \hat{U}_s^\top \hat{g}_k    (4)

where Û_s ∈ R^{m×s} is a sub-matrix of U_s ∈ R^{n×s} with the row indices corresponding to the global
indices of the labeled examples in D̂_m with respect to D_n. Then we can estimate \sum_{k=1}^{r} a_k a_k^\top and Z by

    \sum_{k=1}^{r} a_k a_k^\top = (\hat{U}_s^\top \hat{U}_s)^{\dagger} \hat{U}_s^\top \Big( \sum_{k=1}^{r} \hat{g}_k \hat{g}_k^\top \Big) \hat{U}_s (\hat{U}_s^\top \hat{U}_s)^{\dagger} = (\hat{U}_s^\top \hat{U}_s)^{\dagger} \hat{U}_s^\top \hat{Z} \hat{U}_s (\hat{U}_s^\top \hat{U}_s)^{\dagger}

    Z' = \sum_{k=1}^{r} g_k g_k^\top = U_s \Big( \sum_{k=1}^{r} a_k a_k^\top \Big) U_s^\top = U_s (\hat{U}_s^\top \hat{U}_s)^{\dagger} \hat{U}_s^\top \hat{Z} \hat{U}_s (\hat{U}_s^\top \hat{U}_s)^{\dagger} U_s^\top    (5)
In order to complete the algorithm, we need to answer how to construct the orthogonal matrix U_s =
(u_1, . . . , u_s). Inspired by previous studies on spectral clustering [18, 19], we can construct U_s
as the first s eigen-vectors that correspond to the s largest eigen-values of the provided similarity
matrix. A justification of this practice is that if the similarity graph induced by a similarity matrix
has r connected components, then the eigen-space of the similarity matrix corresponding to the r
largest eigen-values is spanned by the indicator vectors of the components. Ideally, if the similarity
graph is equivalent to the label matrix Z, then the indicator vectors of the connected components are
exactly g_1, . . . , g_r. Finally, we present the detailed steps of the proposed algorithm in Algorithm 1.
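A minimal end-to-end sketch of Algorithm 1, assuming S is a dense numpy array and reusing the completion routine sketched after Equation (2); all names here are illustrative:

    # Sketch of Algorithm 1: eigen-decomposition, completion, then Equation 5.
    import numpy as np

    def tpcmc(S, labeled_idx, Omega, z_obs, s):
        # Preparation: top-s eigenvectors of S (largest eigenvalues).
        eigvals, eigvecs = np.linalg.eigh(S)
        U_s = eigvecs[:, -s:]                # columns for the s largest eigenvalues
        # Step 1: recover the sub-matrix (Equation 2).
        Z_hat = complete_submatrix(len(labeled_idx), Omega, z_obs)
        # Step 2: estimate Z via Equation 5.
        U_hat = U_s[labeled_idx, :]          # rows of U_s for the labeled examples
        P = np.linalg.pinv(U_hat.T @ U_hat)  # (U-hat^T U-hat)^dagger
        return U_s @ (P @ U_hat.T @ Z_hat @ U_hat @ P) @ U_s.T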
Remarks on the Algorithm The performance of the proposed algorithm relies on two factors.
First, how accurately the sub-matrix Ẑ is recovered by matrix completion. According to our later
analysis, as long as the number of observed entries is sufficiently large (e.g., |Ω| = Ω(m log² m)),
one can exactly recover the sub-matrix Ẑ. Second, how well the top eigen-space of S covers the
column space of the label matrix Z. As shown in Section 4, if they are close enough, the estimated
matrix of Z has a small error provided the number of labeled examples m is sufficiently large (e.g.,
m = Ω(μ_s s log s)), where μ_s is a coherence measure of the top eigen-space of S.
It is interesting to compare the proposed algorithm to the spectral clustering algorithm [19] and
the spectral kernel learning algorithm [13], since all three algorithms exploit the top eigen-vectors
of a similarity matrix. The spectral clustering algorithm employs a k-means algorithm to cluster
the top eigen-vector matrix. The spectral kernel learning algorithm optimizes a diagonal matrix
Λ = diag(λ_1, . . . , λ_s) to learn a kernel matrix K = U_s Λ U_s^⊤ by maximizing the kernel alignment
with the pre-determined labels. In contrast, we estimate the pairwise label matrix by Z' = U_s M U_s^⊤,
where the matrix M is learned from the recovered sub-matrix Ẑ and the provided similarity matrix
S. The recovered sub-matrix Ẑ serves as supervised information and the similarity matrix S serves
as the input data for estimating the label matrix Z (cf. Equation 4). It is by the first step, which explores the
low rank structure of Ẑ, that we are able to gain more useful information for the estimation in the second
step. In our experiments, we observe improved performance of the proposed algorithm compared
with the spectral clustering and the spectral kernel learning algorithm.
4 Theoretical Results
In this section, we present theoretical results regarding the reconstruction error of the proposed algorithm, which essentially answer the question of what property the similarity matrix should satisfy,
how many labeled data and how many pre-determined pairwise labels are required for a good or
perfect recovery of the label matrix Z.
Before stating the theoretical results, we first introduce some notation. Let p_i denote the percentage
of all examples in D_n that belong to the i-th class. To facilitate our presentation and analysis, we
also introduce a coherence measure μ_s of the orthogonal matrix U_s = (u_1, . . . , u_s) ∈ R^{n×s},
defined by

    \mu_s = \frac{n}{s} \max_{1 \le i \le n} \sum_{j=1}^{s} U_{ij}^2    (6)

The coherence measure has been exploited in many studies of matrix completion [29, 26] and matrix
reconstruction [23, 10]. It is notable that [4] defined a coherence measure of a complete orthogonal
matrix U = (u_1, . . . , u_n) ∈ R^{n×n} by μ = √n max_{1≤i≤n, 1≤j≤n} |U_{ij}|. It is not difficult to see
that μ_s ≤ μ² ≤ n. The coherence measure in (6) is also known as the largest statistical leverage
score. Drineas et al. [8] proposed a fast approximation algorithm to compute the coherence of an
arbitrary matrix. Intuitively, the coherence measures the degree to which the eigenvectors in U_s
or U are correlated with the canonical bases. The purpose of introducing the coherence measure
is to quantify how large the number of sampled labeled examples m must be in order to guarantee that the sub-matrix
Û_s ∈ R^{m×s} has full column rank. We defer the detailed statement to the supplementary material.
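For concreteness, the coherence in (6) can be computed directly, assuming U_s is stored as an n × s numpy array with orthonormal columns:

    # Coherence of Equation 6: the largest statistical leverage score of U_s.
    import numpy as np

    def coherence(U_s):
        n, s = U_s.shape
        return (n / s) * np.max(np.sum(U_s ** 2, axis=1))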
We begin with the recovery of the sub-matrix Ẑ. The theorem below states that if the distribution of
the data over the r hidden classes is not skewed, then an Ω(r² m log² m) number of pairwise labels
between the labeled examples is enough for a perfect recovery of the sub-matrix Ẑ.
Theorem 1. Suppose the entries at (i, j) ∈ Ω are sampled uniformly at random from [m] × [m],
and the examples in D̂_m are sampled uniformly at random from D_n. Then with a probability at least
$1 - \sum_{i=1}^{r} \exp(-m p_i / 8) - 2m^{-2}$, Ẑ is the unique solution to (2) if $|\Omega| \ge \frac{512}{\min_{1 \le i \le r} p_i^2} m \log^2(2m)$.
Next, we present a theorem stating that if the column space of Z is spanned by the orthogonal
vectors u_1, . . . , u_s and m = Ω(μ_s s ln(m² s)), the estimated matrix Z' is equal to the underlying
true matrix Z.
Theorem 2. Suppose the entries at (i, j) ∈ Ω are sampled uniformly at random from [m] × [m], and
the objects in D̂_m are sampled uniformly at random from D_n. If the column space of Z is spanned
by u_1, . . . , u_s, $m \ge 8 \mu_s s \log(m^2 s)$, and $|\Omega| \ge \frac{512}{\min_{1 \le i \le r} p_i^2} m \log^2(2m)$, then with a probability at
least $1 - \sum_{i=1}^{r} \exp(-m p_i / 8) - 3m^{-2}$, we have Z' = Z, where Z' is computed by (5).
Similar to other matrix reconstruction algorithms [4, 29, 26, 23, 10], the theorem above indicates that
a low coherence measure μ_s plays a pivotal role in the success of the proposed algorithm. Actually,
several previous works [23, 11], as well as our experiments, have studied the coherence measure of
real data sets and demonstrated that it is not rare to have an incoherent similarity matrix, i.e., one with
a small coherence measure. We now consider a more realistic scenario where some of the column
vectors of Z do not lie in the subspace spanned by the top s eigen-vectors of the similarity matrix. To
quantify the gap between the column space of Z and the top eigen-space of the pairwise similarity
matrix, we define the following quantity: $\epsilon = \sum_{k=1}^{r} \|g_k - P_{U_s} g_k\|_2^2$, where $P_{U_s} = U_s U_s^\top$ is the
projection matrix that projects a vector onto the space spanned by the columns of U_s. The following
theorem shows that if ε is small, so is the error of the solution Z' given in (5).
Theorem 3. Suppose the entries at (i, j) ∈ Ω are sampled uniformly at random from [m] × [m],
and the objects in D̂_m are sampled uniformly at random from D_n. If the conditions on m and |Ω| in
Theorem 2 are satisfied, then, with a probability at least $1 - \sum_{i=1}^{r} \exp(-m p_i) - 3m^{-2}$, we have
    \|Z' - Z\|_F^2 \le \epsilon \left(1 + \sqrt{\frac{2n}{m}}\right)^2 \le O\left(\epsilon + \epsilon\sqrt{\frac{2n}{m}} + \frac{2 \epsilon n}{m}\right)
Sketch of Proofs Before ending this section, we present a sketch of the proofs. The details are deferred to the supplementary material. The proof of Theorem 1 relies on a matrix completion theory
by Recht [20], which can guarantee the perfect recovery of the low rank matrix Ẑ provided the number of observed entries is sufficiently large. The key to the proof is to show that the coherence
measure of the sub-matrix Ẑ is bounded, using a concentration inequality. To prove Theorem 2, we
resort to convex optimization theory and Lemma 1 in [10], which shows that the sub-sampled matrix
Û_s ∈ R^{m×s} has full column rank if m = Ω(μ_s s log(s)). Since $\hat{Z} = \hat{U}_s \big( \sum_{k=1}^{r} a_k a_k^\top \big) \hat{U}_s^\top$ and
$Z' = U_s \big( \sum_{k=1}^{r} \hat{a}_k \hat{a}_k^\top \big) U_s^\top$, proving Z' = Z is equivalent to showing that â_k = a_k, k ∈ [r], i.e.,
that a_k, k ∈ [r], are the unique minimizers of the problems in (4). It is sufficient to show that the optimization
problems in (4) are strictly convex, which follows immediately from the fact that $\hat{U}_s^\top \hat{U}_s$ is a full rank PSD
matrix with a high probability. The proof of Theorem 3 is more involved. The crux of the proof is
to consider $g_k = g_k^{\perp} + g_k^{\parallel}$, where $g_k^{\parallel} = P_{U_s} g_k$ is the orthogonal projection of g_k onto the subspace
spanned by u_1, . . . , u_s and $g_k^{\perp} = g_k - g_k^{\parallel}$, and then bound $\|Z - Z'\|_F \le \|Z - Z_{\parallel}\|_F + \|Z' - Z_{\parallel}\|_F$,
where $Z_{\parallel} = \sum_k g_k^{\parallel} (g_k^{\parallel})^\top$.
5 Experimental Results
In this section, we present an empirical evaluation of our proposed simple algorithm for Transductive
Pairwise Classification by Matrix Completion (TPCMC for short) on one synthetic data set and three
real-world data sets.
5.1 Synthetic Data
We first generate a synthetic data set of 1000 examples evenly distributed over 4 classes, each of
which contains 250 data points. Then we generate a pairwise similarity matrix S by first constructing
a pairwise label matrix Z ∈ {0, 1}^{1000×1000}, and then adding a noise term δ_ij to Z_ij, where δ_ij ∈
(0, 0.5) follows a uniform distribution. We use S as the input pairwise similarity matrix of our
proposed algorithm. The coherence measure of the top eigen-vectors of S is a small value, as shown
in Figure 1. According to random perturbation matrix theory [22], the top eigen-space of S is
close to the column space of the label matrix Z. We choose s = 20, which yields roughly μ_s = 2.
We randomly select m = 4sμ_s = 160 data points to form D̂_m, out of which |Ω| = 2mr² = 5120 entries
of the 160 × 160 sub-matrix are fed into the algorithm. In other words, roughly 0.5% of the entries
of the whole pairwise label matrix Z ∈ {0, 1}^{1000×1000} are observed. We show the ground-truth
pairwise label matrix, the similarity matrix and the estimated label matrix in Figure 1, which clearly
demonstrates that the recovered label matrix is more accurate than the perturbed similarities.
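A sketch of this synthetic construction; the random seed and the symmetrization of S are our own choices, not specified above:

    # Synthetic data: 1000 points, 4 balanced classes, uniform noise on Z.
    import numpy as np

    rng = np.random.default_rng(0)
    labels = np.repeat(np.arange(4), 250)
    Z = (labels[:, None] == labels[None, :]).astype(float)
    S = Z + rng.uniform(0.0, 0.5, size=Z.shape)  # delta_ij ~ Uniform(0, 0.5)
    S = (S + S.T) / 2                            # assumption: keep S symmetric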
Figure 1: from left to right: μ_s vs s, the true pairwise label matrix, the perturbed similarity matrix,
and the recovered pairwise label matrix. The error of the estimated matrix is reduced by a factor of two:
‖Z − Z'‖_F / ‖Z − S‖_F = 0.5.
5.2 Real Data
We further evaluate the performance of our algorithm on three real-world data sets: splice [24]³,
gisette [12]⁴ and citeseer [21]⁵. The splice data set is a DNA sequence data set for recognizing splice
junctions. The gisette data set is a perturbed image data set for handwritten digit recognition, originally
constructed for feature selection. The citeseer data set is a paper citation data set, which has been used for link
prediction. We emphasize that we do not intend these data sets to be comprehensive, but instead to be
illustrative case studies that are representative of a much wider range of applications. The statistics
of the three data sets are summarized in Table 1. Given a data set of size n, we randomly choose
m = 20%n, 30%n, . . . , 90%n examples, where 10% of the entries of the m × m label matrix are observed.
We design the experiments in this way since, according to Theorem 1, the number of observed entries
|Ω| increases as m increases. For each given m, we repeat the experiments ten times with random
selections and report the performance scores averaged over the ten trials. We construct a similarity
matrix S with each entry being equal to the cosine similarity of two examples based on their feature
vectors. We set s = 50 in our algorithm and in the other algorithms as well. The corresponding coherence
measures μ_s of the three data sets are shown in the last column of Table 1.
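A sketch of this cosine-similarity construction, assuming the feature vectors are the rows of a numpy array X; the guard against zero-norm rows is our own addition:

    # Pairwise cosine similarity between rows of X.
    import numpy as np

    def cosine_similarity_matrix(X):
        norms = np.linalg.norm(X, axis=1, keepdims=True)
        Xn = X / np.maximum(norms, 1e-12)
        return Xn @ Xn.T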
We compare with two state-of-the-art algorithms that utilize the pre-determined pairwise labels and
the provided similarity matrix in different ways (cf. the discussion at the end of Section 3), i.e.,
Spectral Clustering (SC) [19] and Spectral Kernel Learning (SKL) [13] for the task of clustering. To
attain a clustering from the proposed algorithm, we apply a similarity-based clustering algorithm to
group the data into clusters based on the estimated label matrix. Here we use spectral clustering [19]
for simplicity and fair comparison. For SC, to utilize the pre-determined pairwise labels we substitute the entries corresponding to the observed pairs by 1 if the two examples are known to be in the
same class and 0 if the two examples are determined to belong to different classes. For SKL, we
also apply the spectral clustering algorithm to cluster the data based on the learned kernel matrix.
The comparison to SC and SKL can verify the effectiveness of the proposed algorithm for exploring
the pre-determined labels and the provided similarities. After obtaining the clusters, we calculate
three well-known metrics, namely normalized mutual information [9], pairwise F-measure [27] and
accuracy [25], that measure the degree to which the obtained clusters match the ground truth.
Figures 2-4 show the performance of the different algorithms on the three data sets, respectively. First,
the performance of all the three algorithms generally improves as the ratio of m/n increases, which
is consistent with our theoretical result in Theorem 3. Second, our proposed TPCMC performs the
best on all the cases measured by all the three evaluation metrics, verifying its reliable performance.
SKL generally performs better than SC, indicating that simply using the observed pairwise labels
to directly modify the similarity matrix cannot fully utilize the label information. TPCMC is better
than SKL, meaning that the proposed algorithm is more effective in mining the knowledge from the
pre-determined labels and the similarity matrix.
6 Conclusions
In this paper, we have presented a simple algorithm for transductive pairwise classification from
pairwise similarities based on matrix completion and matrix products. The algorithm consists of two

³ http://www.cs.toronto.edu/~delve/data/datasets.html
⁴ http://www.nipsfsc.ecs.soton.ac.uk/datasets/
⁵ http://www.cs.umd.edu/projects/linqs/projects/lbc/
Table 1: Statistics of the data sets

name       # examples   # classes   coherence (μ_50)
splice     3175         2           1.97
gisette    7000         2           4.17
citeseer   3312         6           2.22

[Figure 2: three panels (Normalized Mutual Information, Pairwise F-measure, Accuracy) plotted
against m/n × 100% for SKL, SC, and TPCMC.]
Figure 2: Performance on the splice data set.
[Figure 3: three panels (Normalized Mutual Information, Pairwise F-measure, Accuracy) plotted
against m/n × 100% for SKL, SC, and TPCMC.]
Figure 3: Performance on the gisette data set.
[Figure 4: three panels (Normalized Mutual Information, Pairwise F-measure, Accuracy) plotted
against m/n × 100% for SKL, SC, and TPCMC.]
Figure 4: Performance on the citeseer data set.
simple steps: recovering the sub-matrix of pairwise labels given partially pre-determined pairwise
labels and estimating the full label matrix from the recovered sub-matrix and the provided pairwise
similarities. The theoretical analysis establishes the conditions on the similarity matrix, the number
of labeled examples and the number of pre-determined pairwise labels under which the pairwise
label matrix estimated by the proposed algorithm recovers the true one exactly or with a small error
with an overwhelming probability. Preliminary empirical evaluations have verified the potential of
the proposed algorithm.
Acknowledgement
The work of Rong Jin was supported in part by National Science Foundation (IIS-1251031) and
Office of Naval Research (N000141210431).
References
[1] N. Ailon. An active learning algorithm for ranking from pairwise preferences with an almost optimal query complexity. JMLR, 13:137-164, 2012.
[2] S. Basu, M. Bilenko, and R. J. Mooney. A probabilistic framework for semi-supervised clustering. In Proceedings of SIGKDD, pages 59-68, 2004.
[3] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717-772, 2009.
[4] E. J. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Trans. Inf. Theor., 56:2053-2080, 2010.
[5] S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively, with application to face verification. In Proceedings of CVPR, pages 539-546, 2005.
[6] J. V. Davis, B. Kulis, P. Jain, S. Sra, and I. S. Dhillon. Information-theoretic metric learning. In Proceedings of ICML, pages 209-216, 2007.
[7] I. S. Dhillon, Y. Guan, and B. Kulis. Kernel k-means: spectral clustering and normalized cuts. In Proceedings of SIGKDD, pages 551-556, 2004.
[8] P. Drineas, M. Magdon-Ismail, M. W. Mahoney, and D. P. Woodruff. Fast approximation of matrix coherence and statistical leverage. In Proceedings of ICML, 2012.
[9] A. Fred and A. Jain. Robust data clustering. In Proceedings of IEEE CVPR, volume 2, 2003.
[10] A. Gittens. The spectral norm errors of the naive Nystrom extension. CoRR, abs/1110.5305, 2011.
[11] A. Gittens and M. W. Mahoney. Revisiting the Nystrom method for improved large-scale machine learning. CoRR, abs/1303.1849, 2013.
[12] I. Guyon, S. R. Gunn, A. Ben-Hur, and G. Dror. Result analysis of the NIPS 2003 feature selection challenge. In NIPS, 2004.
[13] S. C. H. Hoi, M. R. Lyu, and E. Y. Chang. Learning the unified kernel machines for classification. In Proceedings of SIGKDD, pages 187-196, 2006.
[14] E. Hüllermeier and J. Fürnkranz. Learning from label preferences. In Proceedings of ALT, page 38, 2011.
[15] R. Jin, T. Yang, M. Mahdavi, Y.-F. Li, and Z.-H. Zhou. Improved bounds for the Nyström method with application to kernel classification. IEEE Transactions on Information Theory, 59(10):6939-6949, 2013.
[16] A. Kelil, S. Wang, R. Brzezinski, and A. Fleury. Cluss: Clustering of protein sequences based on a new similarity measure. BMC Bioinformatics, 8, 2007.
[17] D. Liben-Nowell and J. Kleinberg. The link-prediction problem for social networks. J. Am. Soc. Inf. Sci. Technol., 58:1019-1031, 2007.
[18] U. Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17:395-416, 2007.
[19] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In NIPS, pages 849-856, 2001.
[20] B. Recht. A simpler approach to matrix completion. JMLR, 12:3413-3430, 2011.
[21] P. Sen, G. M. Namata, M. Bilgic, L. Getoor, B. Gallagher, and T. Eliassi-Rad. Collective classification in network data. AI Magazine, 29(3):93-106, 2008.
[22] G. W. Stewart and J.-G. Sun. Matrix Perturbation Theory. Academic Press, 1990.
[23] A. Talwalkar and A. Rostamizadeh. Matrix coherence and the Nystrom method. In Proceedings of UAI, pages 572-579, 2010.
[24] G. G. Towell and J. W. Shavlik. Interpretation of artificial neural networks: Mapping knowledge-based neural networks into rules. In NIPS, pages 977-984, 1991.
[25] E. P. Xing, A. Y. Ng, M. I. Jordan, and S. Russell. Distance metric learning, with application to clustering with side-information. In NIPS, volume 15, pages 505-512, 2002.
[26] M. Xu, R. Jin, and Z.-H. Zhou. Speedup matrix completion with side information: Application to multi-label learning. In NIPS, pages 2301-2309, 2013.
[27] T. Yang, R. Jin, Y. Chi, and S. Zhu. Combining link and content for community detection: a discriminative approach. In Proceedings of SIGKDD, pages 927-936, 2009.
[28] T. Yang, Y. Li, M. Mahdavi, R. Jin, and Z. Zhou. Nyström method vs random Fourier features: A theoretical and empirical comparison. In NIPS, pages 485-493, 2012.
[29] J. Yi, L. Zhang, R. Jin, Q. Qian, and A. K. Jain. Semi-supervised clustering by input pattern assisted pairwise similarity matrix completion. In Proceedings of ICML, pages 1400-1408, 2013.
| 5607 | [vw_text: bag-of-words feature vector omitted] |
5,090 | 5,608 | Incremental Clustering: The Case for Extra Clusters
Margareta Ackerman
Florida State University
600 W College Ave, Tallahassee, FL 32306
[email protected]
Sanjoy Dasgupta
UC San Diego
9500 Gilman Dr, La Jolla, CA 92093
[email protected]
Abstract
The explosion in the amount of data available for analysis often necessitates a
transition from batch to incremental clustering methods, which process one element at a time and typically store only a small subset of the data. In this paper,
we initiate the formal analysis of incremental clustering methods focusing on the
types of cluster structure that they are able to detect. We find that the incremental
setting is strictly weaker than the batch model, proving that a fundamental class of
cluster structures that can readily be detected in the batch setting is impossible to
identify using any incremental method. Furthermore, we show how the limitations
of incremental clustering can be overcome by allowing additional clusters.
1 Introduction
Clustering is a fundamental form of data analysis that is applied in a wide variety of domains, from
astronomy to zoology. With the radical increase in the amount of data collected in recent years,
the use of clustering has expanded even further, to applications such as personalization and targeted
advertising. Clustering is now a core component of interactive systems that collect information on
millions of users on a daily basis. It is becoming impractical to store all relevant information in
memory at the same time, often necessitating the transition to incremental methods.
Incremental methods receive data elements one at a time and typically use much less space than is
needed to store the complete data set. This presents a particularly interesting challenge for unsupervised learning, which unlike its supervised counterpart, also suffers from an absence of a unique
target truth. Observe that not all data possesses a meaningful clustering, and when an inherent
structure exists, it need not be unique (see Figure 1 for an example). As such, different users may
be interested in very different partitions. Consequently, different clustering methods detect distinct
types of structure, often yielding radically different results on the same data. Until now, differences
in the input-output behaviour of clustering methods have only been studied in the batch setting
[12, 13, 8, 4, 3, 5, 2, 19]. In this work, we take a first look at the types of cluster structures that can
be discovered by incremental clustering methods.
To qualify the type of cluster structure present in data, a number of notions of clusterability have
been proposed (for a detailed discussion, see [1] and [8]). These notions capture the structure of
the target clustering: the clustering desired by the user for a specific application. As such, notions of
clusterability facilitate the analysis of clustering methods by making it possible to formally ascertain
whether an algorithm correctly recovers the desired partition.
One elegant notion of clusterability, introduced by Balcan et al. [8], requires that every element be
closer to data in its own cluster than to other points. For simplicity, we will refer to clusterings that
adhere to this requirement as nice. It was shown by [8] that such clusterings are readily detected
offline by classical batch algorithms. On the other hand, we prove (Theorem 3.8) that no incremental method can discover these partitions. Thus, batch algorithms are significantly stronger than
incremental methods in their ability to detect cluster structure.
Figure 1: An example of different cluster structures in the same data. The clustering on the left
finds inherent structure in the data by identifying well-separated partitions, while the clustering on
the right discovers structure in the data by focusing on the dense region. The correct partitioning
depends on the application at hand.
In an effort to identify types of cluster structure that incremental methods can recover, we turn
to stricter notions of clusterability. A notion used by Epter et al. [9] requires that the minimum
separation between clusters be larger than the maximum cluster diameter. We call such clusterings
perfect, and we present an incremental method that is able to recover them (Theorem 4.3).
Yet, this result alone is unsatisfactory. If, indeed, it were necessary to resort to such strict notions
of clusterability, then incremental methods would have limited utility. Is there some other way to
circumvent the limitations of incremental techniques?
It turns out that incremental methods become a lot more powerful when we slightly alter the clustering problem: if, instead of asking for exactly the target partition, we are satisfied with a refinement,
that is, a partition each of whose clusters is contained within some target cluster. Indeed, in many
applications, it is reasonable to allow additional clusters.
Incremental methods benefit from additional clusters in several ways. First, we exhibit an algorithm
that is able to capture nice k-clusterings if it is allowed to return a refinement with 2^{k-1} clusters
(Theorem 5.3), which could be reasonable for small k. We also show that this exponential dependence on k is unavoidable in general (Theorem 5.4). As such, allowing additional clusters enables
incremental techniques to overcome their inability to detect nice partitions.
A similar phenomenon is observed in the analysis of the sequential k-means algorithm, one of
the most popular methods of incremental clustering. We show that it is unable to detect perfect
clusterings (Theorem 4.4), but that if each cluster contains a significant fraction of the data, then it
can recover a refinement of (a slight variant of) nice clusterings (Theorem 5.6).
Lastly, we demonstrate the power of additional clusters by relaxing the niceness condition, requiring
only that clusters have a significant core (defined in Section 5.3). Under this milder requirement, we
show that a randomized incremental method is able to discover a refinement of the target partition
(Theorem 5.10).
Due to space limitations, many proofs appear in the supplementary material.
2 Definitions
We consider a space X equipped with a symmetric distance function d : X × X → R⁺ satisfying
d(x, x) = 0. An example is X = R^p with d(x, x') = ‖x − x'‖₂. It is assumed that a clustering
algorithm can invoke d(·, ·) on any pair x, x' ∈ X.
A clustering (or, partition) of X is a set of clusters C = {C_1, . . . , C_k} such that C_i ∩ C_j = ∅ for all
i ≠ j, and X = ∪_{i=1}^{k} C_i. A k-clustering is a clustering with k clusters.
Write x ∼_C y if x, y are both in some cluster C_j; and x ≁_C y otherwise. This is an equivalence
relation.
Definition 2.1. An incremental clustering algorithm has the following structure:
    for n = 1, . . . , N:
        See data point x_n ∈ X
        Select model M_n ∈ M
where N might be ∞, and M is a collection of clusterings of X. We require the algorithm to
have bounded memory, typically a function of the number of clusters. As a result, an incremental
algorithm cannot store all data points.
Notice that the ordering of the points is unspecified. In our results, we consider two types of ordering: arbitrary ordering, which is the standard setting in online learning and allows points to be
ordered by an adversary, and random ordering, which is standard in statistical learning theory. In
exemplar-based clustering, M = X^k: each model is a list of k "centers" (t_1, . . . , t_k) that induce
a clustering of X, where every x ∈ X is assigned to the cluster C_i for which d(x, t_i) is smallest
(breaking ties by picking the smallest i). All the clusterings we will consider in this paper will be
specified in this manner.
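A minimal sketch of this assignment rule; the function name is ours:

    # Assign x to the closest center, breaking ties toward the smallest index.
    def assign(x, centers, d):
        dists = [d(x, t) for t in centers]
        return dists.index(min(dists))  # index() returns the first minimum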
We also note that the incremental clustering model is closely related to streaming clustering [6, 10],
the primary difference being that in the latter framework multiple passes of the data are allowed.
2.1 Examples of incremental clustering algorithms
The most well-known incremental clustering algorithm is probably sequential k-means, which is
meant for data in Euclidean space. It is an incremental variant of Lloyd's algorithm [16, 17]:
Algorithm 2.2. Sequential k-means.
Set T = (t1 , . . . , tk ) to the first k data points
Initialize the counts n1 , n2 , ..., nk to 1
Repeat:
Acquire the next example, x
If ti is the closest center to x:
Increment ni
Replace t_i by t_i + (1/n_i)(x − t_i)
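A minimal Python rendering of this update rule, assuming points arrive as an iterable of numeric vectors; the function name is ours:

    # Sequential k-means (Algorithm 2.2): online mean updates per center.
    import numpy as np

    def sequential_kmeans(stream, k):
        stream = iter(stream)
        centers = [np.array(next(stream), dtype=float) for _ in range(k)]
        counts = [1] * k
        for x in stream:
            x = np.asarray(x, dtype=float)
            i = min(range(k), key=lambda j: np.linalg.norm(x - centers[j]))
            counts[i] += 1
            centers[i] += (x - centers[i]) / counts[i]  # t_i <- t_i + (1/n_i)(x - t_i)
        return centers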
This method, and many variants of it, have been studied intensively in the literature on self-organizing maps [15]. It attempts to find centers T that optimize the k-means cost function:

    \mathrm{cost}(T) = \sum_{\text{data } x} \; \min_{t \in T} \|x - t\|^2.
It is not hard to see that the solution obtained by sequential k-means at any given time can have
cost far from optimal; we will see an even stronger lower bound in Theorem 4.4. Nonetheless, we
will also see that if additional centers are allowed, this algorithm is able to correctly capture some
fundamental types of cluster structure.
Another family of clustering algorithms with incremental variants are agglomerative procedures [12]
like single-linkage [11]. Given n data points in batch mode, these algorithms produce a hierarchical
clustering on all n points. But the hierarchy can be truncated at the intermediate k-clustering, yielding a tree with k leaves. Moreover, there is a natural scheme for updating these leaves incrementally:
Algorithm 2.3. Sequential agglomerative clustering.
Set T to the first k data points
Repeat:
Get the next point x and add it to T
Select t, t' ∈ T for which dist(t, t') is smallest
Replace t, t' by the single center merge(t, t')
Here the two functions dist and merge can be varied to optimize different clustering criteria,
and often require storing additional sufficient statistics, such as counts of individual clusters. For
instance, Ward's method of average linkage [18] is geared towards the k-means cost function. We
will consider the variant obtained by setting dist(t, t') = d(t, t') and merge(t, t') to either t or t':
Algorithm 2.4. Sequential nearest-neighbour clustering.
Set T to the first k data points
Repeat:
Get the next point x and add it to T
Let t, t' be the two closest points in T
Replace t, t' by either of these two points
We will see that this algorithm is effective at picking out a large class of cluster structures.
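A minimal rendering of this procedure for a generic distance function d; keeping the first of the two closest points is one of the two options the algorithm allows:

    # Sequential nearest-neighbour clustering (Algorithm 2.4).
    from itertools import combinations

    def sequential_nn_clustering(stream, k, d):
        stream = iter(stream)
        T = [next(stream) for _ in range(k)]
        for x in stream:
            T.append(x)
            # Find the two closest stored points; keep one, discard the other.
            a, b = min(combinations(range(len(T)), 2),
                       key=lambda pair: d(T[pair[0]], T[pair[1]]))
            del T[b]
        return T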
2.2 The target clustering
Unlike supervised learning tasks, which are typically endowed with a unique correct classification,
clustering is ambiguous. One approach to disambiguating clustering is identifying an objective
function such as k-means, and then defining the clustering task as finding the partition with minimum cost. Although there are situations to which this approach is well-suited, many clustering
applications do not inherently lend themselves to any specific objective function. As such, while
objective functions play an essential role in deriving clustering methods, they do not circumvent the
ambiguous nature of clustering.
The term target clustering denotes the partition that a specific user is looking for in a data set.
This notion was used by Balcan et al. [8] to study what constraints on cluster structure make them
efficiently identifiable in a batch setting. In this paper, we consider families of target clusterings that
satisfy different properties, and ask whether incremental algorithms can identify such clusterings.
The target clustering C is defined on a possibly infinite space X , from which the learner receives a
sequence of points. At any time n, the learner has seen n data points and has some clustering that
ideally agrees with C on these points. The methods we consider are exemplar-based: they all specify
a list of points T in X that induce a clustering of X (recall the discussion just before Section 2.1).
We consider two requirements:
• (Strong) T induces the target clustering C.
• (Weaker) T induces a refinement of the target clustering C: that is, each cluster induced by
  T is part of some cluster of C.
If the learning algorithm is run on a finite data set, then we require these conditions to hold once
all points have been seen. In our positive results, we will also consider infinite streams of data, and
show that these conditions hold at every time n, taking the target clustering restricted to the points
seen so far.
3 A basic limitation of incremental clustering
We begin by studying limitations of incremental clustering compared with the batch setting.
One of the most fundamental types of cluster structure is what we shall call nice clusterings for the
sake of brevity. Originally introduced by Balcan et al. [8] under the name "strict separation," this
notion has since been applied in [2], [1], and [7], to name a few.
Definition 3.1 (Nice clustering). A clustering C of (X, d) is nice if for all x, y, z ∈ X, d(y, x) <
d(z, x) whenever x ∼_C y and x ≁_C z.
See Figure 2 for an example.
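A brute-force check of Definition 3.1 for a finite data set, assuming integer cluster labels and a distance function d; the quadratic scan is for illustration only:

    # Niceness check: every point must be strictly closer to its own cluster.
    def is_nice(points, labels, d):
        n = len(points)
        for x in range(n):
            within = [d(points[x], points[y]) for y in range(n)
                      if y != x and labels[y] == labels[x]]
            across = [d(points[x], points[z]) for z in range(n)
                      if labels[z] != labels[x]]
            if within and across and max(within) >= min(across):
                return False
        return True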
Observation 3.2. If we select one point from every cluster of a nice clustering C, the resulting set
induces C. (Moreover, niceness is the minimal property under which this holds.)
A nice k-clustering is not, in general, unique. For example, consider X = {1, 2, 4, 5} on the real
line under the usual distance metric; then both {{1}, {2}, {4, 5}} and {{1, 2}, {4}, {5}} are nice
3-clusterings of X . Thus we start by considering data with a unique nice k-clustering.
Since niceness is a strong requirement, we might expect that it is easy to detect. Indeed, in the batch
setting, a unique nice k-clustering can be recovered by single-linkage [8]. However, we show that
nice partitions cannot be detected in the incremental setting, even if they are unique.
Figure 2: A nice clustering may include clusters with very different diameters, as long as the distance
between any two clusters scales as the larger diameter of the two.
We start by formalizing the ordering of the data. An ordering function O takes a finite set X and
returns an ordering of the points in this set. An ordered distance space is denoted by (O[X ], d).
Definition 3.3. An incremental clustering algorithm A is nice-detecting if, given a positive integer
k and (X , d) that has a unique nice k-clustering C, the procedure A(O[X ], d, k) outputs C for any
ordering function O.
In this section, we show (Theorem 3.8) that no deterministic memory-bounded incremental method
is nice-detecting, even for points in Euclidean space under the ℓ₂ metric.
We start with the intuition behind the proof. Fix any incremental clustering algorithm and set the
number of clusters to 3. We will specify a data set D with a unique nice 3-clustering that this
algorithm cannot detect. The data set has two subsets, D1 and D2 , that are far away from each
other but are otherwise nearly isomorphic. The target 3-clustering is either: (D1 , together with a
2-clustering of D2 ) or (D2 , together with a 2-clustering of D1 ).
The central piece of the construction is the configuration of D1 (and likewise, D2 ). The first point
presented to the learner is x_o. This is followed by a clique of points x_i that are equidistant from each
other and have the same, slightly larger, distance to x_o. For instance, we could set distances within
the clique d(x_i, x_j) to 1, and distances d(x_i, x_o) to 2. Finally there is a point x' that is either exactly
like one of the x_i's (same distances), or differs from them in just one specific distance d(x', x_j),
which is set to 2. In the former case, there is a nice 2-clustering of D_1, in which one cluster is
x_o and the other cluster is everything else. In the latter case, there is no nice 2-clustering, just the
1-clustering consisting of all of D_1.
D2 is like D1 , but is rigged so that if D1 has a nice 2-clustering, then D2 does not; and vice versa.
The two possibilities for D1 are almost identical, and it would seem that the only way an algorithm
can distinguish between them is by remembering all the points it has seen. A memory-bounded
incremental learner does not have this luxury. Formalizing this argument requires some care; we
cannot, for instance, assume that the learner is using its memory to store individual points.
In order to specify D1 , we start with a larger collection of points that we call an M -configuration,
and that is independent of any algorithm. We then pick two possibilities for D1 (one with a nice
2-clustering and one without) from this collection, based on the specific learner.
Definition 3.4. In any metric space (X, d), for any integer M > 0, define an M-configuration to be a collection of 2M + 1 points x_o, x_1, . . . , x_M, x′_1, . . . , x′_M ∈ X such that
• All interpoint distances are in the range [1, 2].
• d(x_o, x_i), d(x_o, x′_i) ∈ (3/2, 2] for all i ≥ 1.
• d(x_i, x_j), d(x′_i, x′_j), d(x_i, x′_j) ∈ [1, 3/2] for all i ≠ j ≥ 1.
• d(x_i, x′_i) > d(x_o, x_i).
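Since the four conditions above are purely pairwise, they are straightforward to verify numerically. The following is a minimal sketch of such a checker, assuming Euclidean points given as numpy vectors; the function name and interface are illustrative, not part of the paper.

    import numpy as np

    def is_m_configuration(xo, xs, xps, lo=1.0, hi=2.0):
        """Check the four conditions of Definition 3.4 for xo, x_1..x_M, x'_1..x'_M."""
        d = lambda u, v: float(np.linalg.norm(np.asarray(u) - np.asarray(v)))
        M = len(xs)
        pts = [xo] + list(xs) + list(xps)
        # all interpoint distances in [1, 2]
        if any(not lo <= d(pts[a], pts[b]) <= hi
               for a in range(len(pts)) for b in range(a + 1, len(pts))):
            return False
        # d(xo, xi), d(xo, x'i) in (3/2, 2]
        if any(not 1.5 < d(xo, q) <= hi for q in list(xs) + list(xps)):
            return False
        # d(xi, xj), d(x'i, x'j), d(xi, x'j) in [1, 3/2] for i != j
        for i in range(M):
            for j in range(M):
                if i != j and not (d(xs[i], xs[j]) <= 1.5
                                   and d(xps[i], xps[j]) <= 1.5
                                   and d(xs[i], xps[j]) <= 1.5):
                    return False
        # d(xi, x'i) > d(xo, xi)
        return all(d(xs[i], xps[i]) > d(xo, xs[i]) for i in range(M))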
The significance of this point configuration is as follows.
Lemma 3.5. Let x_o, x_1, . . . , x_M, x′_1, . . . , x′_M be any M-configuration in (X, d). Pick any index 1 ≤ j ≤ M and any subset S ⊆ [M] with |S| > 1. Then the set A = {x_o, x′_j} ∪ {x_i : i ∈ S} has a nice 2-clustering if and only if j ∉ S.
Proof. Suppose A has a nice 2-clustering {C1, C2}, where C1 is the cluster that contains x_o.
We first show that C1 is a singleton cluster. If C1 also contains some x_ℓ, then it must contain all the points {x_i : i ∈ S} by niceness, since d(x_ℓ, x_i) ≤ 3/2 < d(x_ℓ, x_o). Since |S| > 1, these points include some x_i with i ≠ j. Whereupon C1 must also contain x′_j, since d(x_i, x′_j) ≤ 3/2 < d(x_i, x_o). But this means C2 is empty.
Likewise, if C1 contains x′_j, then it also contains all {x_i : i ∈ S, i ≠ j}, since d(x_i, x′_j) < d(x_o, x′_j). There is at least one such x_i, and we revert to the previous case.
Therefore C1 = {x_o} and, as a result, C2 = {x_i : i ∈ S} ∪ {x′_j}. This 2-clustering is nice if and only if d(x_o, x′_j) > d(x_i, x′_j) and d(x_o, x_i) > d(x′_j, x_i) for all i ∈ S, which in turn is true if and only if j ∉ S.
By putting together two M -configurations, we obtain:
Theorem 3.6. Let (X, d) be any metric space that contains two M-configurations separated by a distance of at least 4. Then there is no deterministic incremental algorithm with ≤ M/2 bits of storage that is guaranteed to recover nice 3-clusterings of data sets drawn from X, even when limited to instances in which such clusterings are unique.
Proof. Suppose the deterministic incremental learner has a memory capacity of b bits. We will refer to the memory contents of the learner as its state, σ ∈ {0, 1}^b.
Call the two M-configurations x_o, x_1, . . . , x_M, x′_1, . . . , x′_M and z_o, z_1, . . . , z_M, z′_1, . . . , z′_M. We feed the following points to the learner:
Batch 1: x_o and z_o
Batch 2: b distinct points from x_1, . . . , x_M
Batch 3: b distinct points from z_1, . . . , z_M
Batch 4: two final points x′_{j1} and z′_{j2}
The learner's state after seeing batch 2 can be described by a function f : {x_1, . . . , x_M}^b → {0, 1}^b. The number of distinct sets of b points in batch 2 is (M choose b) > (M/b)^b. If M ≥ 2b, this is > 2^b, which means that two different sets of points must lead to the same state, call it σ ∈ {0, 1}^b. Let the indices of these sets be S_1, S_2 ⊂ [M] (so |S_1| = |S_2| = b), and pick any j_1 ∈ S_1 \ S_2.
Next, suppose the learner is in state σ and is then given batch 3. We can capture its state at the end of this batch by a function g : {z_1, . . . , z_M}^b → {0, 1}^b, and once again there must be distinct sets T_1, T_2 ⊂ [M] that yield the same state σ′. Pick any j_2 ∈ T_1 \ T_2.
It follows that the sequences of inputs x_o, z_o, (x_i : i ∈ S_1), (z_i : i ∈ T_2), x′_{j1}, z′_{j2} and x_o, z_o, (x_i : i ∈ S_2), (z_i : i ∈ T_1), x′_{j1}, z′_{j2} produce the same final state and thus the same answer. But in the first case, by Lemma 3.5, the unique nice 3-clustering keeps the x's together and splits the z's, whereas in the second case, it splits the x's and keeps the z's together.
An M -configuration can be realized in Euclidean space:
Lemma 3.7. There is an absolute constant c_o such that for any dimension p, the Euclidean space R^p, with ℓ2 norm, contains M-configurations for all M < 2^{c_o p}.
The overall conclusions are the following.
Theorem 3.8. There is no memory-bounded deterministic nice-detecting incremental clustering algorithm that works in arbitrary metric spaces. For data in R^p under the ℓ2 metric, there is no deterministic nice-detecting incremental clustering algorithm using less than 2^{c_o p − 1} bits of memory.
4 A more restricted class of clusterings
The discovery that nice clusterings cannot be detected using any incremental method, even though
they are readily detected in a batch setting, speaks to the substantial limitations of incremental
algorithms. We next ask whether there is a well-behaved subclass of nice clusterings that can be
detected using incremental methods. Following [9, 2, 5, 1], among others, we consider clusterings
in which the maximum cluster diameter is smaller than the minimum inter-cluster separation.
Definition 4.1 (Perfect clustering). A clustering C of (X, d) is perfect if d(x, y) < d(w, z) whenever x ∼_C y and w ≁_C z.
Any perfect clustering is nice. But unlike nice clusterings, perfect clusterings are unique:
Lemma 4.2. For any (X , d) and k, there is at most one perfect k-clustering of (X , d).
Whenever an algorithm can detect perfect clusterings, we call it perfect-detecting. Formally, an
incremental clustering algorithm A is perfect-detecting if, given a positive integer k and (X , d) that
has a perfect k-clustering, A(O[X ], d, k) outputs that clustering for any ordering function O.
We start with an example of a simple perfect-detecting algorithm.
Theorem 4.3. Sequential nearest-neighbour clustering (Algorithm 2.4) is perfect-detecting.
We next turn to sequential k-means (Algorithm 2.2), one of the most popular methods for incremental clustering. Interestingly, it is unable to detect perfect clusterings.
It is not hard to see that a perfect k-clustering is a local optimum of k-means. We will now see an
example in which the perfect k-clustering is the global optimum of the k-means cost function, and
yet sequential k-means fails to detect it.
Theorem 4.4. There is a set of four points in R3 with a perfect 2-clustering that is also the global
optimum of the k-means cost function (for k = 2). However, there is no ordering of these points that
will enable this clustering to be detected by sequential k-means.
5 Incremental clustering with extra clusters
Returning to the basic lower bound of Theorem 3.8, it turns out that a slight shift in perspective
greatly improves the capabilities of incremental methods. Instead of aiming to exactly discover the
target partition, it is sufficient in some applications to merely uncover a refinement of it. Formally, a
clustering C of X is a refinement of clustering C 0 of X , if x ?C y implies x ?C 0 y for all x, y ? X .
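Checking this relation is a one-liner; the sketch below is illustrative only and represents clusterings as collections of point-id sets.

    def is_refinement(C, Cp):
        """True if clustering C refines C': every cluster of C lies inside some cluster of C'."""
        return all(any(set(c) <= set(cp) for cp in Cp) for c in C)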
We start by showing that although incremental algorithms cannot detect nice k-clusterings, they can find a refinement of such a clustering if allowed 2^{k−1} centers. We also show that this is tight.
Next, we explore the utility of additional clusters for sequential k-means. We show that for a random
ordering of the data, and with extra centers, this algorithm can recover (a slight variant of) nice
clusterings. We also show that the random ordering is necessary for such a result.
Finally, we prove that additional clusters extend the utility of incremental methods beyond nice
clusterings. We introduce a weaker constraint on cluster structure, requiring only that each cluster
possess a significant ?core?, and we present a scheme that works under this weaker requirement.
5.1 An incremental algorithm can find nice k-clusterings if allowed 2^{k−1} centers
Earlier work [8] has shown that any nice clustering corresponds to a pruning of the tree obtained by single linkage on the points. With this insight, we develop an incremental algorithm that maintains 2^{k−1} centers that are guaranteed to induce a refinement of any nice k-clustering.
The following subroutine takes any finite S ⊆ X and returns at most 2^{k−1} distinct points:
CANDIDATES(S)
    Run single linkage on S to get a tree
    Assign each leaf node the corresponding data point
    Moving bottom-up, assign each internal node the data point in one of its children
    Return all points at distance < k from the root
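The subroutine is a direct tree computation; here is a minimal sketch for Euclidean data, assuming SciPy's single-linkage implementation (the function name and use of SciPy are our own choices, not the paper's).

    import numpy as np
    from scipy.cluster.hierarchy import linkage, to_tree
    from scipy.spatial.distance import pdist

    def candidates(points, k):
        """CANDIDATES(S): indices of at most 2^(k-1) representative points."""
        points = np.asarray(points, dtype=float)
        if len(points) == 1:
            return [0]
        tree = to_tree(linkage(pdist(points), method="single"))
        rep = {}  # node id -> index of its representative data point

        def assign(node):  # bottom-up: internal node inherits from one child
            if node.is_leaf():
                rep[node.id] = node.id
            else:
                assign(node.left)
                assign(node.right)
                rep[node.id] = rep[node.left.id]

        assign(tree)
        keep = set()

        def collect(node, depth):  # keep nodes at distance < k from the root
            if depth >= k:
                return
            keep.add(rep[node.id])
            if not node.is_leaf():
                collect(node.left, depth + 1)
                collect(node.right, depth + 1)

        collect(tree, 0)
        return sorted(keep)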
7
Lemma 5.1. Suppose S has a nice ℓ-clustering, for ℓ ≤ k. Then the points returned by CANDIDATES(S) include at least one representative from each of these clusters.
Here is an incremental algorithm that uses 2^{k−1} centers to detect a nice k-clustering.
Algorithm 5.2. Incremental clustering with extra centers.
    T_0 = ∅
    For t = 1, 2, . . .:
        Receive x_t and set T_t = T_{t−1} ∪ {x_t}
        If |T_t| > 2^{k−1}: T_t ← CANDIDATES(T_t)
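Using the candidates sketch above, the whole algorithm is a short loop (again a hypothetical transcription, yielding the current center set after each point):

    def incremental_extra_centers(stream, k):
        """Sketch of Algorithm 5.2: maintain at most 2^(k-1) candidate centers."""
        T = []
        for x in stream:
            T.append(x)
            if len(T) > 2 ** (k - 1):
                T = [T[i] for i in candidates(T, k)]
            yield list(T)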
Theorem 5.3. Suppose there is a nice k-clustering C of X. Then for each t, the set T_t has at most 2^{k−1} points, including at least one representative from each C_i for which C_i ∩ {x_1, . . . , x_t} ≠ ∅.
It is not possible in general to use fewer centers.
Theorem 5.4. Pick any incremental clustering algorithm that maintains a list of ℓ centers that are guaranteed to be consistent with a target nice k-clustering. Then ℓ ≥ 2^{k−1}.
5.2 Sequential k-means with extra clusters
Theorem 4.4 above shows severe limitations of sequential k-means. The good news is that additional
clusters allow this algorithm to find a variant of nice partitionings.
The following condition imposes structure on the convex hull of the partitions in the target clustering.
Definition 5.5. A clustering C = {C_1, . . . , C_k} is convex-nice if for any i ≠ j, any points x, y in the convex hull of C_i, and any point z in the convex hull of C_j, we have d(y, x) < d(z, x).
Theorem 5.6. Fix a data set (X, d) with a convex-nice clustering C = {C_1, . . . , C_k} and let α = min_i |C_i|/|X|. If the points are ordered uniformly at random, then for any ℓ ≥ k, sequential ℓ-means will return a refinement of C with probability at least 1 − k e^{−αℓ}.
The probability of failure is small when the refinement contains ℓ = Ω((log k)/α) centers. We can also show that this positive result no longer holds when the data is adversarially ordered.
Theorem 5.7. Pick any k ≥ 3. Consider any data set X in R (under the usual metric) that has a convex-nice k-clustering C = {C_1, . . . , C_k}. Then there exists an ordering of X under which sequential ℓ-means with ℓ ≤ min_i |C_i| centers fails to return a refinement of C.
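For concreteness, here is a minimal sketch of sequential ℓ-means in the form it is usually stated (Algorithm 2.2 is specified earlier in the paper; the running-mean update below is the standard MacQueen variant and is our assumption about its form):

    import numpy as np

    def sequential_kmeans(stream, l):
        """Sequential l-means sketch: first l points seed the centers; each
        later point moves its nearest center toward it by a running mean."""
        centers, counts = [], []
        for x in stream:
            x = np.asarray(x, dtype=float)
            if len(centers) < l:
                centers.append(x.copy()); counts.append(1)
            else:
                j = int(np.argmin([np.linalg.norm(x - c) for c in centers]))
                counts[j] += 1
                centers[j] += (x - centers[j]) / counts[j]
        return centers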
5.3 A broader class of clusterings
We conclude by considering a substantial generalization of niceness that can be detected by incremental methods when extra centers are allowed.
Definition 5.8 (Core). For any clustering C = {C_1, . . . , C_k} of (X, d), the core of cluster C_i is the maximal subset C_i^o ⊆ C_i such that d(x, z) < d(x, y) for all x ∈ C_i, z ∈ C_i^o, and y ∉ C_i.
In a nice clustering, the core of any cluster is the entire cluster. We now require only that each core
contain a significant fraction of points, and we show that the following simple sampling routine will
find a refinement of the target clustering, even if the points are ordered adversarially.
Algorithm 5.9. Algorithm subsample.
    Set T to the first ℓ elements
    For t = ℓ + 1, ℓ + 2, . . .:
        Get a new point x_t
        With probability ℓ/t:
            Remove an element from T uniformly at random and add x_t to T
It is well known (see, for instance, [14]) that at any time t, the set T consists of ℓ elements chosen at random without replacement from {x_1, . . . , x_t}.
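This is classical reservoir sampling, and a direct transcription is only a few lines (illustrative sketch):

    import random

    def subsample(stream, l):
        """Algorithm 5.9: reservoir sampling with reservoir size l."""
        T = []
        for t, x in enumerate(stream, start=1):
            if t <= l:
                T.append(x)
            elif random.random() < l / t:
                T[random.randrange(l)] = x  # evict a uniformly random element
        return T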
Theorem 5.10. Consider any clustering C = {C_1, . . . , C_k} of (X, d), with cores {C_1^o, . . . , C_k^o}. Let α = min_i |C_i^o|/|X|. Fix any ℓ ≥ k. Then, given any ordering of X, Algorithm 5.9 detects a refinement of C with probability 1 − k e^{−αℓ}.
References
[1] M. Ackerman and S. Ben-David. Clusterability: A theoretical study. Proceedings of AISTATS-09, JMLR: W&CP, 5(1-8):53, 2009.
[2] M. Ackerman, S. Ben-David, S. Branzei, and D. Loker. Weighted clustering. Proc. 26th AAAI Conference on Artificial Intelligence, 2012.
[3] M. Ackerman, S. Ben-David, and D. Loker. Characterization of linkage-based clustering. COLT, 2010.
[4] M. Ackerman, S. Ben-David, and D. Loker. Towards property-based classification of clustering paradigms. NIPS, 2010.
[5] M. Ackerman, S. Ben-David, D. Loker, and S. Sabato. Clustering oligarchies. Proceedings of AISTATS-13, JMLR: W&CP, 31:66–74, 2013.
[6] Charu C. Aggarwal. A survey of stream clustering algorithms. 2013.
[7] M.-F. Balcan and P. Gupta. Robust hierarchical clustering. In COLT, pages 282–294, 2010.
[8] M.F. Balcan, A. Blum, and S. Vempala. A discriminative framework for clustering via similarity functions. In Proceedings of the 40th Annual ACM Symposium on Theory of Computing, pages 671–680. ACM, 2008.
[9] S. Epter, M. Krishnamoorthy, and M. Zaki. Clusterability detection and initial seed selection in large datasets. In The International Conference on Knowledge Discovery in Databases, volume 7, 1999.
[10] Sudipto Guha, Nina Mishra, Rajeev Motwani, and Liadan O'Callaghan. Clustering data streams. In Foundations of Computer Science, 2000. Proceedings. 41st Annual Symposium on, pages 359–366. IEEE, 2000.
[11] J.A. Hartigan. Consistency of single linkage for high-density clusters. Journal of the American Statistical Association, 76(374):388–394, 1981.
[12] N. Jardine and R. Sibson. Mathematical Taxonomy. London, 1971.
[13] J. Kleinberg. An impossibility theorem for clustering. Proceedings of International Conferences on Advances in Neural Information Processing Systems, pages 463–470, 2003.
[14] D.E. Knuth. The Art of Computer Programming: Seminumerical Algorithms, volume 2. 1981.
[15] T. Kohonen. Self-Organizing Maps. Springer, 2001.
[16] S.P. Lloyd. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129–137, 1982.
[17] J.B. MacQueen. Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, volume 1, pages 281–297. University of California Press, 1967.
[18] J.H. Ward. Hierarchical grouping to optimize an objective function. Journal of the American Statistical Association, 58:236–244, 1963.
[19] R.B. Zadeh and S. Ben-David. A uniqueness theorem for clustering. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 639–646. AUAI Press, 2009.
Parallel Successive Convex Approximation for
Nonsmooth Nonconvex Optimization
Meisam Razaviyayn*
[email protected]
Mingyi Hong†
[email protected]
Zhi-Quan Luo‡
[email protected]
Jong-Shi Pang§
[email protected]
Abstract
Consider the problem of minimizing the sum of a smooth (possibly non-convex)
and a convex (possibly nonsmooth) function involving a large number of variables.
A popular approach to solve this problem is the block coordinate descent (BCD)
method whereby at each iteration only one variable block is updated while the remaining variables are held fixed. With the recent advances in the developments of
the multi-core parallel processing technology, it is desirable to parallelize the BCD
method by allowing multiple blocks to be updated simultaneously at each iteration of the algorithm. In this work, we propose an inexact parallel BCD approach
where at each iteration, a subset of the variables is updated in parallel by minimizing convex approximations of the original objective function. We investigate
the convergence of this parallel BCD method for both randomized and cyclic variable selection rules. We analyze the asymptotic and non-asymptotic convergence
behavior of the algorithm for both convex and non-convex objective functions.
The numerical experiments suggest that for a special case of Lasso minimization
problem, the cyclic block selection rule can outperform the randomized rule.
1 Introduction
Consider the following optimization problem
    min_x  h(x) ≜ f(x_1, . . . , x_n) + Σ_{i=1}^n g_i(x_i)   s.t.  x_i ∈ X_i,  i = 1, 2, . . . , n,    (1)
Qn
where Xi ? Rmi is a closed convex set; the function f : i=1 Xi ? R is a smooth function (posPn
sibly non-convex); and g(x) , i=1 gi (xi ) is a separable convex function (possibly nonsmooth).
The above optimization problem appears in various fields such as machine learning, signal processing, wireless communication, image processing, social networks, and bioinformatics, to name just a
few. These optimization problems are typically of huge size and should be solved expeditiously.
A popular approach for solving the above multi-block optimization problem is the block coordinate
descent (BCD) approach, where at each iteration of BCD, only one of the block variables is updated,
while the remaining blocks are held fixed. Since only one block is updated at each iteration, the per-iteration storage and computational demand of the algorithm is low, which is desirable in huge-size problems. Furthermore, as observed in [1–3], these methods perform particularly well in practice.
* Electrical Engineering Department, Stanford University
† Industrial and Manufacturing Systems Engineering, Iowa State University
‡ Department of Electrical and Computer Engineering, University of Minnesota
§ Department of Industrial and Systems Engineering, University of Southern California
The availability of high performance multi-core computing platforms makes it increasingly desirable to develop parallel optimization methods. One category of such parallelizable methods is the
(proximal) gradient methods. These methods are parallelizable in nature [4–8]; however, they are
equivalent to successive minimization of a quadratic approximation of the objective function which
may not be tight; and hence suffer from low convergence speed in some practical applications [9].
To take advantage of the BCD method and parallel multi-core technology, different parallel BCD algorithms have been recently proposed in the literature. In particular, the references [10–12] propose parallel coordinate descent minimization methods for ℓ1-regularized convex optimization problems. Using the greedy (Gauss-Southwell) update rule, the recent works [9, 13] propose parallel BCD type methods for general composite optimization problems. In contrast, references [2, 14–20] suggest
randomized block selection rule, which is more amenable to big data optimization problems, in
order to parallelize the BCD method.
Motivated by [1, 9, 15, 21], we propose a parallel inexact BCD method where at each iteration of the
algorithm, a subset of the blocks is updated by minimizing locally tight approximations of the objective function. Asymptotic and non-asymptotic convergence analysis of the algorithm is presented in
both convex and non-convex cases for different variable block selection rules. The proposed parallel
algorithm is synchronous, which is different than the existing lock-free methods in [22, 23].
The contributions of this work are as follows:
? A parallel block coordinate descent method is proposed for non-convex nonsmooth problems. To the best of our knowledge, reference [9] is the only paper in the literature that
focuses on parallelizing BCD for non-convex nonsmooth problems. This reference utilizes
greedy block selection rule which requires search among all blocks as well as communication among processing nodes in order to find the best blocks to update. This requirement
can be demanding in practical scenarios where the communication among nodes are costly
or when the number of blocks is huge. In fact, this high computational cost motivated the
authors of [9] to develop further inexact update strategies to efficiently alleviating the high
computational cost of the greedy block selection rule.
? The proposed parallel BCD algorithm allows both cyclic and randomized block variable
selection rules. The deterministic (cyclic) update rule is different than the existing parallel
randomized or greedy BCD methods in the literature; see, e.g., [2, 9, 13–20]. Based on our
numerical experiments, this update rule is beneficial in solving the Lasso problem.
? The proposed method not only works with the constant step-size selection rule, but also
with the diminishing step-sizes which is desirable when the Lipschitz constant of the objective function is not known.
? Unlike many existing algorithms in the literature, e.g. [13?15], our parallel BCD algorithm utilizes the general approximation of the original function which includes the linear/proximal approximation of the objective as a special case. The use of general approximation instead of the linear/proximal approximation offers more flexibility and results in
efficient algorithms for particular practical problems; see [21, 24] for specific examples.
? We present an iteration complexity analysis of the algorithm for both convex and nonconvex scenarios. Unlike the existing non-convex parallel methods in the literature such
as [9] which only guarantee the asymptotic behavior of the algorithm, we provide nonasymptotic guarantees on the convergence of the algorithm as well.
2 Parallel Successive Convex Approximation
As stated in the introduction section, a popular approach for solving (1) is the BCD method where at
iteration r + 1 of the algorithm, the block variable x_i is updated by solving the following subproblem
    x_i^{r+1} = argmin_{x_i ∈ X_i}  h(x_1^r, . . . , x_{i−1}^r, x_i, x_{i+1}^r, . . . , x_n^r).    (2)
In many practical problems, the update rule (2) is not in closed form and hence not computationally cheap. One popular approach is to replace the function h(?) with a well-chosen local convex
2
approximation h̃_i(x_i, x^r) in (2). That is, at iteration r + 1, the block variable x_i is updated by
    x_i^{r+1} = argmin_{x_i ∈ X_i}  h̃_i(x_i, x^r),    (3)
where h̃_i(x_i, x^r) is a convex (possibly upper-bound) approximation of the function h(·) with respect to the i-th block around the current iteration x^r. This approach, also known as block successive convex approximation or block successive upper-bound minimization [21], has been widely used in different applications; see [21, 24] for more details and different useful approximation functions. In this work, we assume that the approximation function h̃_i(·, ·) is of the following form:
    h̃_i(x_i, y) = f̃_i(x_i, y) + g_i(x_i).    (4)
Here f̃_i(·, y) is an approximation of the function f(·) around the point y with respect to the i-th block. We further assume that f̃_i(x_i, y) : X_i × X → R satisfies the following assumptions:
• f̃_i(·, y) is continuously differentiable and uniformly strongly convex with parameter τ, i.e.,
    f̃_i(x_i, y) ≥ f̃_i(x′_i, y) + ⟨∇_{x_i} f̃_i(x′_i, y), x_i − x′_i⟩ + (τ/2)‖x_i − x′_i‖², ∀x_i, x′_i ∈ X_i, ∀y ∈ X
• Gradient consistency assumption: ∇_{x_i} f̃_i(x_i, x) = ∇_{x_i} f(x), ∀x ∈ X
• ∇_{x_i} f̃_i(x_i, ·) is Lipschitz continuous on X for all x_i ∈ X_i with constant L̃, i.e.,
    ‖∇_{x_i} f̃_i(x_i, y) − ∇_{x_i} f̃_i(x_i, z)‖ ≤ L̃‖y − z‖, ∀y, z ∈ X, ∀x_i ∈ X_i, ∀i.
For instance, the following traditional proximal/quadratic approximations of f(·) satisfy the above assumptions when the feasible set is compact and f(·) is twice continuously differentiable:
• f̃(x_i, y) = ⟨∇_{y_i} f(y), x_i − y_i⟩ + (τ/2)‖x_i − y_i‖².
• f̃(x_i, y) = f(x_i, y_{−i}) + (τ/2)‖x_i − y_i‖², for τ large enough.
For other practical useful approximations of f(·) and the stochastic/incremental counterparts, see [21, 25, 26].
With the recent advances in the development of parallel processing machines, it is desirable to take
the advantage of multi-core machines by updating multiple blocks simultaneously in (3). Unfortunately, naively updating multiple blocks simultaneously using the approach (3) does not result in a
convergent algorithm. Hence, we suggest to modify the update rule by using a well-chosen step-size.
More precisely, we propose Algorithm 1 for solving the optimization problem (1).
Algorithm 1 Parallel Successive Convex Approximation (PSCA) Algorithm
    find a feasible point x^0 ∈ X and set r = 0
    for r = 0, 1, 2, . . . do
        choose a subset S^r ⊆ {1, . . . , n}
        calculate x̂_i^r = argmin_{x_i ∈ X_i} h̃_i(x_i, x^r), ∀i ∈ S^r
        set x_i^{r+1} = x_i^r + γ^r (x̂_i^r − x_i^r), ∀i ∈ S^r, and set x_i^{r+1} = x_i^r, ∀i ∉ S^r
    end for
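To make the iteration concrete, the following is a minimal sketch of Algorithm 1 for the Lasso with scalar blocks, using the linear/proximal surrogate f̃_i (the first example above), for which the block subproblem has a soft-thresholding solution. The function name and the parameters tau, gamma, p are illustrative choices, not prescriptions from the paper.

    import numpy as np

    def psca_lasso(A, b, lam, tau, gamma, p, iters, rng=np.random.default_rng(0)):
        """PSCA sketch for min_x 0.5||Ax-b||^2 + lam||x||_1 (randomized rule)."""
        n = A.shape[1]
        x = np.zeros(n)
        soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
        for _ in range(iters):
            S = rng.random(n) < p                 # randomized block selection
            grad = A.T @ (A @ x - b)              # gradient of the smooth part
            xhat = soft(x[S] - grad[S] / tau, lam / tau)  # block best responses
            x[S] = x[S] + gamma * (xhat - x[S])           # damped update
        return x

The same loop gives the cyclic variant by replacing the mask S with a deterministic sweep over a fixed partition of {1, . . . , n}.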
The procedure of selecting the subset S^r is intentionally left unspecified in Algorithm 1. This selection could be based on different rules. Reference [9] suggests the greedy variable selection rule where, at each iteration of the algorithm in [9], the best responses of all the variables are calculated and, at the end, only the block variables with the largest amount of improvement are updated. A
drawback of this approach is the overhead caused by the calculation of all of the best responses at
each iteration; this overhead is especially computationally demanding when the size of the problem
is huge. In contrast to [9], we suggest the following randomized or cyclic variable selection rules:
• Cyclic: Given the partition {T_0, . . . , T_{m−1}} of the set {1, 2, . . . , n} with T_i ∩ T_j = ∅, ∀i ≠ j, and ∪_{ℓ=0}^{m−1} T_ℓ = {1, 2, . . . , n}, we say the choice of the variable selection is cyclic if
    S^{mr+ℓ} = T_ℓ, ∀ℓ = 0, 1, . . . , m − 1 and ∀r.
• Randomized: The variable selection rule is called randomized if at each iteration the variables are chosen randomly from the previous iterations so that
    Pr(j ∈ S^r | x^r, x^{r−1}, . . . , x^0) = p_j^r ≥ p_min > 0, ∀j = 1, 2, . . . , n, ∀r.
3 Convergence Analysis: Asymptotic Behavior
We first make a standard assumption that ∇f(·) is Lipschitz continuous with constant L_{∇f}, i.e.,
    ‖∇f(x) − ∇f(y)‖ ≤ L_{∇f}‖x − y‖,
and assume that −∞ < inf_{x∈X} h(x). Let us also define x̄ to be a stationary point of (1) if ∃ d ∈ ∂g(x̄) such that ⟨∇f(x̄) + d, x − x̄⟩ ≥ 0, ∀x ∈ X, i.e., the first order optimality condition is satisfied at the point x̄. The following lemma will help us to study the asymptotic convergence of the PSCA algorithm.
Lemma 1 [9, Lemma 2] Define the mapping x̂(·) : X → X as x̂(y) = (x̂_i(y))_{i=1}^n with x̂_i(y) = argmin_{x_i ∈ X_i} h̃_i(x_i, y). Then the mapping x̂(·) is Lipschitz continuous with constant L̂ = √n L̃/τ, i.e.,
    ‖x̂(y) − x̂(z)‖ ≤ L̂‖y − z‖, ∀y, z ∈ X.
Having derived the above result, we are now ready to state our first result which studies the limiting
behavior of the PSCA algorithm. This result is based on the sufficient decrease of the objective
function which has been also exploited in [9] for greedy variable selection rule.
Theorem 1 Assume γ^r ∈ (0, 1], Σ_{r=1}^∞ γ^r = +∞, and that lim sup_{r→∞} γ^r < γ̄ ≜ min{ τ/L_{∇f}, τ/(τ + L̃√n) }. Suppose either the cyclic or the randomized block selection rule is employed. For the cyclic update rule, assume further that {γ^r}_{r=1}^∞ is a monotonically decreasing sequence. Then every limit point of the iterates is a stationary point of (1) — deterministically for the cyclic update rule and almost surely for the randomized block selection rule.
Proof Using the standard sufficient decrease argument (see the supplementary materials), one can show that
    h(x^{r+1}) ≤ h(x^r) + (γ^r(−τ + γ^r L_{∇f})/2) ‖x̂^r − x^r‖²_{S^r}.    (5)
Since lim sup_{r→∞} γ^r < γ̄, for sufficiently large r, there exists β > 0 such that
    h(x^{r+1}) ≤ h(x^r) − β γ^r ‖x̂^r − x^r‖²_{S^r}.    (6)
Taking the conditional expectation from both sides implies
    E[h(x^{r+1}) | x^r] ≤ h(x^r) − β γ^r E[ Σ_{i=1}^n R_i^r ‖x̂_i^r − x_i^r‖² | x^r ],    (7)
where R_i^r is a Bernoulli random variable which is one if i ∈ S^r and zero otherwise. Clearly, E[R_i^r | x^r] = p_i^r and therefore,
    E[h(x^{r+1}) | x^r] ≤ h(x^r) − β γ^r p_min ‖x̂^r − x^r‖², ∀r.    (8)
Thus {h(x^r)} is a supermartingale with respect to the natural history; and by the supermartingale convergence theorem [27, Proposition 4.2], h(x^r) converges and we have
    Σ_{r=1}^∞ γ^r ‖x̂^r − x^r‖² < ∞,  almost surely.    (9)
Let us now restrict our analysis to the set of probability one for which h(x^r) converges and Σ_{r=1}^∞ γ^r ‖x̂^r − x^r‖² < ∞. Fix a realization in that set. The equation (9) simply implies that, for the fixed realization, lim inf_{r→∞} ‖x̂^r − x^r‖ = 0, since Σ_r γ^r = ∞. Next we strengthen this result by proving that lim_{r→∞} ‖x̂^r − x^r‖ = 0. Suppose the contrary that there exists δ > 0 such
xr ? xr k = 0. Suppose the contrary that there exists ? > 0 such
4
that ?r , kb
xr ? xr k ? 2? infinitely often. Since lim inf r?? ?r = 0, there exists a subset of
indices K and {ir } such that for any r ? K,
?r < ?,
2? < ?ir ,
and ? ? ?j ? 2?, ?j = r + 1, . . . , ir ? 1.
(10)
Clearly,
(i)
(ii)
? ? ?r ? ?r+1 ? ?r = kb
xr+1 ? xr+1 k ? kb
xr ? xr k ? kb
xr+1 ? x
br k + kxr+1 ? xr k
(iii)
(iv)
b r+1 ? xr k = (1 + L)?
b r kb
b r ?,
? (1 + L)kx
xr ? xr k ? (1 + L)?
(11)
where (i) and (ii) are due to (10) and the triangle inequality, respectively. The inequality (iii)
is the result of Lemma 1; and (iv) is followed from the algorithm iteration update rule. Since
lim supr?? ? r < 1+1 Lb , the above inequality implies that there exists an ? > 0 such that
?r > ?,
(12)
for all r large enough. Furthermore, since the chosen realization satisfies (9), we have that
Pir ?1 t t 2
limr?? t=r
? (? ) = 0; which combined with (10) and (12), implies
lim
r??
iX
r ?1
? t = 0.
(13)
t=r
On the other hand, using the similar reasoning as in above, one can write
? < ?ir ? ?r = kb
xir ? xir k ? kb
xr ? xr k ? kb
x ir ? x
br k + kxir ? xr k
b
? (1 + L)
iX
r ?1
b
? t kb
xt ? xt k ? 2?(1 + L)
t=r
iX
r ?1
?t,
t=r
Pir ?1
and hence lim inf r?? t=r ? t > 0, which contradicts (13). Therefore the contrary assumption
does not hold and we must have limr?? kb
xr ? xr k = 0, almost surely. Now consider a limit
converging
to x
?. Using the definition of x
brj , we have
point x
? with the subsequence {xrj }?
j=1
r
limj?? e
hi (xi , xrj ), ?xi ? Xi , ?i. Therefore, by letting j ? ? and using
hi (b
xi j , xrj ) ? e
r
the fact that limr?? kb
x ? xr k = 0, almost surely, we obtain e
hi (?
xi , x
?) ? e
hi (xi , x
?), ?xi ?
Xi , ?i, almost surely; which in turn, using the gradient consistency assumption, implies
h?f (?
x) + d, x ? x
?i ? 0, ?x ? X , almost surely,
for some d ? ?g(?
x), which completes the proof for the randomized block selection rule.
Now consider the cyclic update rule with a limit point x
?. Due to the sufficient decrease bound
(6), we have limr?? h(xr ) = h(?
x). Furthermore, by taking the summation over (6), we obtain
P
?
r
xr ? xr k2S r < ?. Consider a fixedP
block i and define {rk }?
of
k=1 to be the
r=1 ? kb
Psubsequence
?
?
xri k ? xri k k2 < ? and k=1 ? rk = ?,
iterations that block i is updated in. Clearly, k=1 ? rk kb
since {? r } is monotonically decreasing. Therefore, lim inf k?? kb
xri k ? xri k k = 0. Repeating the
above argument with some slight modifications, which are omitted due to lack of space, we can
show that limk?? kb
xri k ? xri k k = 0 implying that the limit point x
? is a stationary point of (1).
Remark 1 Theorem 1 covers both diminishing and constant step-size selection rule; or the combination of the two, i.e., decreasing the step-size until it is less than the constant ?? . It is also worth
noting that the diminishing step-size rule is especially useful when the knowledge of the problem?s
? and ? is not available.
constants L, L,
4 Convergence Analysis: Iteration Complexity
In this section, we present iteration complexity analysis of the algorithm for both convex and nonconvex cases.
4.1 Convex Case
When the function f(·) is convex, the overall objective function will become convex; and as a
result of Theorem 1, if a limit point exists, it is a global minimizer of (1). In this scenario, it
is desirable to derive the iteration complexity bounds of the algorithm. Note that our algorithm
employs linear combination of the two consecutive points at each iteration and hence it is different
than the existing algorithms in [2, 14?20]. Therefore, not only in the cyclic case, but also in the
randomized scenario, the iteration complexity analysis of PSCA is different than the existing results
and should be investigated. Let us make the following assumptions for our iteration complexity
analysis:
• The step-size is constant with γ^r = γ < τ/L_{∇f}, ∀r.
• The level set {x | h(x) ≤ h(x^0)} is compact, and the next two assumptions hold in this set.
• The nonsmooth function g(·) is Lipschitz continuous, i.e., |g(x) − g(y)| ≤ L_g‖x − y‖, ∀x, y ∈ X. This assumption is satisfied in many practical problems such as (group) Lasso.
• The gradient of the approximation function f̃_i(·, y) is uniformly Lipschitz with constant L_i, i.e., ‖∇_{x_i} f̃_i(x_i, y) − ∇_{x_i} f̃_i(x′_i, y)‖ ≤ L_i‖x_i − x′_i‖, ∀x_i, x′_i ∈ X_i.
Lemma 2 (Sufficient Descent) There exist β̂, β̃ > 0 such that for all r ≥ 1, we have
• For the randomized rule: E[h(x^{r+1}) | x^r] ≤ h(x^r) − β̂ ‖x̂^r − x^r‖².
• For the cyclic rule: h(x^{m(r+1)}) ≤ h(x^{mr}) − β̃ ‖x^{m(r+1)} − x^{mr}‖².
Proof The above result is an immediate consequence of (6) with β̂ ≜ βγ p_min and β̃ ≜ βγ.
Due to the bounded level set assumption, there must exist constants Q, Q̂, R > 0 such that
    ‖∇f(x^r)‖ ≤ Q,   ‖∇_{x_i} f̃_i(x̂^r, x^r)‖ ≤ Q̂,   ‖x^r − x*‖ ≤ R,    (14)
for all x^r. Next we use the constants Q, Q̂ and R to bound the cost-to-go in the algorithm.
Lemma 3 (Cost-to-go Estimate) For all r ≥ 1, we have
• For the randomized rule: E[h(x^{r+1}) | x^r] − h(x*) ≤ 2((Q + L_g)² + nL²R²) ‖x̂^r − x^r‖²
• For the cyclic rule: h(x^{m(r+1)}) − h(x*) ≤ (3n²θ(1−γ)²/γ²) ‖x^{m(r+1)} − x^{mr}‖²
for any optimal point x*, where L ≜ max_i {L_i} and θ ≜ L_g² + Q̂² + 2nR²L̃²/(1−γ)² + 2R²L².
Proof Please see the supplementary materials for the proof.
Lemma 2 and Lemma 3 lead to the iteration complexity bound in the following theorem. The proof
steps of this result are similar to the ones in [28] and therefore omitted here for space reasons.
Theorem 2 Define Δ ≜ (τ − γL_{∇f})γ p_min / (4((Q + L_g)² + nL²R²)) and Δ̃ ≜ (τ − γL_{∇f})γ / (6nθ(1−γ)²). Then
• For the randomized update rule: E[h(x^r)] − h(x*) ≤ (max{4Δ − 2, h(x^0) − h(x*), 2}/Δ) · (1/r).
• For the cyclic update rule: h(x^{mr}) − h(x*) ≤ (max{4Δ̃ − 2, h(x^0) − h(x*), 2}/Δ̃) · (1/r).
4.2 Non-convex Case
In this subsection we study the iteration complexity of the proposed randomized algorithm for the
general nonconvex function f (?) assuming constant step-size selection rule. This analysis is only
for the randomized block selection rule. Since in the nonconvex scenario, the iterates may not
converge to the global optimum point, the closeness to the optimal solution cannot be considered
for the iteration complexity analysis. Instead, inspired by [29] where the size of the gradient of the
objective function is used as a measure of optimality, we consider the size of the objective proximal
gradient as a measure of optimality. More precisely, we define
    ∇̃h(x) = x − argmin_{y∈X} { ⟨∇f(x), y − x⟩ + g(y) + ½‖y − x‖² }.
Clearly, ∇̃h(x) = 0 when x is a stationary point. Moreover, ∇̃h(·) coincides with the gradient of the objective if g ≡ 0 and X = R^n. The following theorem, which studies the decrease rate of ‖∇̃h(x)‖, could be viewed as an iteration complexity analysis of the randomized PSCA.
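Computationally, the residual reduces to a proximal map; the following sketch assumes the constraint set X has been folded into g as an indicator (illustrative helper names):

    def prox_grad_residual(x, grad_f, prox_g):
        """~grad h(x) = x - prox_g(x - grad f(x)), since the inner argmin of
        <grad f(x), y-x> + g(y) + 0.5||y-x||^2 is exactly that proximal map."""
        return x - prox_g(x - grad_f(x))

For g = λ‖·‖₁, for instance, prox_g is coordinate-wise soft-thresholding at level λ.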
Theorem 3 Consider the randomized block selection rule. Define T to be the first time that E[‖∇̃h(x^r)‖²] ≤ ε. Then T ≤ σ̄/ε, where σ̄ ≜ 2(L² + 2L + 2)(h(x^0) − h*)/β̂ and h* = min_{x∈X} h(x).
?
Proof To simplify the presentation of the proof, let us define yeir , arg minyi ?Xi h?xi f (xr ), yi ?
n
r
e
xri i + gi (yi ) + 21 kyi ? xri k2 . Clearly, ?h(x
) = (xri ? yeir )i=1 . The first order optimality condition
of the above optimization problem implies
h?xi f (xr ) + yeir ? xri , xi ? yeir i + gi (xi ) ? gi (e
yir ) ? 0, ?xi ? Xi .
(15)
r
Furthermore, based on the definition of x
bi , we have
r
r
e
h?x fi (b
x , x ), xi ? x
br i + gi (xi ) ? gi (b
xr ) ? 0, ?xi ? Xi .
(16)
i
i
i
i
Plugging in the points x
bri and yeir in (15) and (16); and summing up the two equations will yield to
h?x fei (b
xr , xr ) ? ?x f (xr ) + xr ? yer , yer ? x
br i ? 0.
i
i
i
i
i
i
i
Using the gradient consistency assumption, we can write
h?x fei (b
xr , xr ) ? ?x fei (xr , xr ) + xr ? x
br + x
br ? yer , yer ? x
br i ? 0,
i
i
i
i
i
i
i
i
i
i
or equivalently, h?xi fei (b
xri , xr ) ? ?xi fei (xri , xr ) + xri ? x
bri , yeir ? x
bri i ? kb
xri ? yeir k2 . Applying
Cauchy-Schwarz and the triangle inequality will yield to
k?xi fei (b
xri , xr ) ? ?xi fei (xri , xr )k + kxri ? x
bri k ke
yir ? x
bri k ? kb
xri ? yeir k2 .
Since the function fei (?, x) is Lipschitz, we must have
kb
xri ? yeir k ? (1 + Li )kxri ? x
bri k
(17)
Using the inequality (17), the norm of the proximal gradient of the objective can be bounded by
n
n
X
X
r 2
r
r 2
e
kxri ? x
bri k2 + kb
xri ? yeir k2
k?h(x )k =
kxi ? yei k ? 2
i=1
n
X
?2
i=1
kxri ? x
bri k2 + (1 + Li )2 kxri ? x
bri k2 ? 2(2 + 2L + L2 )kb
xr ? xr k2 .
i=1
Combining the above inequality with the sufficient decrease bound in (7), one can write
T
T
h
i X
X
r
r 2
e
E k?h(x
)k ?
2(2 + 2L + L2 )E kb
x ? xr k2
r=0
r=1
T
X
2(2 + 2L + L2 )
2(2 + 2L + L2 )
?
E h(xr ) ? h(xr+1 ) ?
E h(x0 ) ? h(xT +1 )
b
b
?
?
r=0
?
2(2 + 2L + L2 )
h(x0 ) ? h? = ?,
b
?
which implies that T ? ? .
5 Numerical Experiments
In this short section, we compare the numerical performance of the proposed algorithm with the
classical serial BCD methods. The algorithms are evaluated over the following Lasso problem:
    min_x  ½‖Ax − b‖²₂ + λ‖x‖₁,
sparsity level in x? . The approximation functions are chosen similar to the numerical experiments
in [9], i.e., block size is set to one (mi = 1, ?i) and the approximation function
?
fe(xi , y) = f (xi , y?i ) + kxi ? yi k2
2
1
2
is considered, where f (x) = 2 kAx ? bk is the smooth part of the objective function. We choose
constant step-size ? and proximal coefficient ?. In general, careful selection of the algorithm parameters results in better numerical convergence rate. The smaller values of step-size ? will result
in less zigzag behavior for the convergence path of the algorithm; however, too small step sizes will
clearly slow down the convergence speed. Furthermore, in order to make the approximation function sufficiently strongly convex, we need to choose ? large enough. However, choosing too large ?
values enforces the next iterates to stay close to the current iterate and results in slower convergence
speed; see the supplementary materials for related examples.
min
x
Figure 1 and Figure 2 illustrate the behavior of the cyclic and randomized parallel BCD methods as compared with their serial counterparts. The serial methods "Cyclic BCD" and "Randomized BCD" are based on the update rule in (2) with the cyclic and randomized block selection rules, respectively. The variable q shows the number of processors, and on each processor we update 40 scalar variables in parallel. As can be seen in Figure 1 and Figure 2, parallelization of the BCD algorithm results in a more efficient algorithm. However, the computational gain does not grow linearly with the number of processors. In fact, we can see that after some point, increasing the number of processors leads to slower convergence. This fact is due to the communication overhead among the processing nodes, which dominates the computation time; see the supplementary materials for more numerical experiments on this issue.
[Figure residue omitted: two relative-error-versus-time plots comparing the serial Cyclic/Randomized BCD baselines with Cyclic/Randomized PSCA for q = 4, 8, 16, 32 processors.]
Figure 1: Lasso Problem: A ∈ R^{2,000×10,000}
Figure 2: Lasso Problem: A ∈ R^{1,000×100,000}
Acknowledgments: The authors are grateful to the University of Minnesota Graduate School Doctoral Dissertation Fellowship and AFOSR, grant number FA9550-12-1-0340 for the support during
this research.
References
[1] Y. Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM Journal on Optimization, 22(2):341–362, 2012.
[2] P. Richtárik and M. Takáč. Efficient serial and parallel coordinate descent methods for huge-scale truss topology design. In Operations Research Proceedings, pages 27–32. Springer, 2012.
[3] Y. T. Lee and A. Sidford. Efficient accelerated coordinate descent methods and faster algorithms for solving linear systems. In 54th Annual Symposium on Foundations of Computer Science (FOCS), pages 147–156. IEEE, 2013.
[4] I. Necoara and D. Clipici. Efficient parallel coordinate descent algorithm for convex optimization problems with separable constraints: application to distributed MPC. Journal of Process Control, 23(3):243–253, 2013.
[5] Y. Nesterov. Gradient methods for minimizing composite functions. Mathematical Programming, 140(1):125–161, 2013.
[6] P. Tseng and S. Yun. A coordinate gradient descent method for nonsmooth separable minimization. Mathematical Programming, 117(1-2):387–423, 2009.
[7] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[8] S. J. Wright, R. D. Nowak, and M. Figueiredo. Sparse reconstruction by separable approximation. IEEE Transactions on Signal Processing, 57(7):2479–2493, 2009.
[9] F. Facchinei, S. Sagratella, and G. Scutari. Flexible parallel algorithms for big data optimization. arXiv preprint arXiv:1311.2444, 2013.
[10] J. K. Bradley, A. Kyrola, D. Bickson, and C. Guestrin. Parallel coordinate descent for ℓ1-regularized loss minimization. arXiv preprint arXiv:1105.5379, 2011.
[11] C. Scherrer, A. Tewari, M. Halappanavar, and D. Haglin. Feature clustering for accelerating parallel coordinate descent. In NIPS, pages 28–36, 2012.
[12] C. Scherrer, M. Halappanavar, A. Tewari, and D. Haglin. Scaling up coordinate descent algorithms for large ℓ1 regularization problems. arXiv preprint arXiv:1206.6409, 2012.
[13] Z. Peng, M. Yan, and W. Yin. Parallel and distributed sparse optimization. Preprint, 2013.
[14] I. Necoara and D. Clipici. Distributed coordinate descent methods for composite minimization. arXiv preprint arXiv:1312.5302, 2013.
[15] P. Richtárik and M. Takáč. Parallel coordinate descent methods for big data optimization. arXiv preprint arXiv:1212.0873, 2012.
[16] P. Richtárik and M. Takáč. On optimal probabilities in stochastic coordinate descent methods. arXiv preprint arXiv:1310.3438, 2013.
[17] O. Fercoq and P. Richtárik. Accelerated, parallel and proximal coordinate descent. arXiv preprint arXiv:1312.5799, 2013.
[18] O. Fercoq, Z. Qu, P. Richtárik, and M. Takáč. Fast distributed coordinate descent for non-strongly convex losses. arXiv preprint arXiv:1405.5300, 2014.
[19] O. Fercoq and P. Richtárik. Smooth minimization of nonsmooth functions with parallel coordinate descent methods. arXiv preprint arXiv:1309.5885, 2013.
[20] A. Patrascu and I. Necoara. A random coordinate descent algorithm for large-scale sparse nonconvex optimization. In European Control Conference (ECC), pages 2789–2794. IEEE, 2013.
[21] M. Razaviyayn, M. Hong, and Z.-Q. Luo. A unified convergence analysis of block successive minimization methods for nonsmooth optimization. SIAM Journal on Optimization, 23(2):1126–1153, 2013.
[22] F. Niu, B. Recht, C. Ré, and S. J. Wright. Hogwild!: A lock-free approach to parallelizing stochastic gradient descent. Advances in Neural Information Processing Systems, 24:693–701, 2011.
[23] J. Liu, S. J. Wright, C. Ré, and V. Bittorf. An asynchronous parallel stochastic coordinate descent algorithm. arXiv preprint arXiv:1311.1873, 2013.
[24] J. Mairal. Optimization with first-order surrogate functions. arXiv preprint arXiv:1305.3120, 2013.
[25] J. Mairal. Incremental majorization-minimization optimization with application to large-scale machine learning. arXiv preprint arXiv:1402.4419, 2014.
[26] M. Razaviyayn, M. Sanjabi, and Z.-Q. Luo. A stochastic successive minimization method for nonsmooth nonconvex optimization with applications to transceiver design in wireless communication networks. arXiv preprint arXiv:1307.4457, 2013.
[27] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. 1996.
[28] M. Hong, X. Wang, M. Razaviyayn, and Z.-Q. Luo. Iteration complexity analysis of block coordinate descent methods. arXiv preprint arXiv:1310.6957, 2013.
[29] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course, volume 87. Springer, 2004.
A Neural Network for Motion Detection of
Drift-Balanced Stimuli
Hilary Tunley*
School of Cognitive and Computer Sciences
Sussex University
Brighton, England.
Abstract
This paper briefly describes an artificial neural network for preattentive visual processing. The network is capable of determining image motion in a type of stimulus which defeats most popular methods of motion detection — a subset of second-order visual motion stimuli known as drift-balanced stimuli (DBS). The processing stages of the network described in this paper are integratable into a model capable of simultaneous motion extraction, edge detection, and the determination of occlusion.
1
INTRODUCTION
Previous methods of motion detection have generally been based on one of two underlying approaches: correlation; and gradient-filter. Probably the best known example of the correlation approach is the Reichardt movement detector [Reichardt 1961]. The gradient-filter (GF) approach underlies the work of Adelson and Bergen [Adelson 1985], and Heeger [Heeger 1988], amongst others.
These motion-detecting methods cannot track DBS, because DBS lack essential
components of information needed by such methods. Both the correlation and
GF approaches impose constraints on the input stimuli. Throughout the image
sequence, correlation methods require information that is spatiotemporally correlatable, and GF motion detectors assume temporally constant spatial gradients.
"Current address:
University.
714
Experimental Psychology, School of Biological Sciences, Sussex
A Neural Network for Motion Detection of Drift-Balanced Stimuli
The network discussed here does not impose such constraints. Instead, it extracts
motion energy and exploits the spatial coherence of movement (defined more formally in the Gestalt theory of common fate [Koffka 1935]) to achieve tracking.
The remainder of this paper discusses DBS image sequences, then correlation methods, then GF methods in more detail, followed by a qualitative description of this
network which can process DBS.
2
SECOND-ORDER AND DRIFT-BALANCED STIMULI
There has been a lot of recent interest in second-order visual stimuli, and DBS in
particular ([Chubb 1989, Landy 1991]). DBS are stimuli which give a clear percept
of directional motion, yet Fourier analysis reveals a lack of coherent motion energy,
or energy present in a direction opposing that of the displacement (hence the term
'drift-balanced '). Examples of DBS include image sequences in which the contrast
polarity of edges present reverses between frames.
A subset of DBS, which are also processed by the network, are known as microbalanced stimuli (MBS). MBS contain no correlatable features and are drift-balanced at all scales. The MBS image sequences used for this work were created
from a random-dot image in which an area is successively shifted by a constant
displacement between each frame and simultaneously re-randomised.
3
3.1
EXISTING METHODS OF MOTION DETECTION
CORRELATION METHODS
Correlation methods perform a local cross-correlation in image space: the matching
of features in local neighbourhoods (depending upon displacement/speed) between
image frames underlies the motion detection. Examples of this method include
[Van Santen 1985]. Most correlation models suffer from noise degradation in that
any noise features extracted by the edge detection are available for spurious correlation.
There has been much recent debate questioning the validity of correlation methods
for modelling human motion detection abilities. In addition to DBS, there is also
increasing psychophysical evidence ([Landy 1991, Mather 1991]) which correlation
methods cannot account for.
These factors suggest that correlation techniques are not suitable for low-level motion processing where no information is available concerning what is moving (as
with MBS). However, correlation is a more plausible method when working with
higher level constructs such as tracking in model-based vision (e.g. [Bray 1990]).
3.2
GRADIENT-FILTER (GF) METHODS
GF methods use a combination of spatial filtering to determine edge positions and
temporal filtering to determine whether such edges are moving. A common assumption used by G F methods is that spatial gradients are constant. A recent method by
Verri [Verri 1990], for example, argu es that flow det.ection is based upon the notion
715
716
Tunley
-
??
?
??
?
? ? ?? ??
~
.
??? ?
??
?
?
T ??
Model
R:
Receptor UnIts - Detect temporal
changes In IMage intensit~
(polarIty-independent)
M:
Motion Units - Detect
distribution of change
iniorMtlon
0:
OcclusIon Units - Detect
changes In .otlon
dIstribution
E:
Edge Units - Detect edges
dlrectl~ from occluslon
Figure 1: The Network (Schematic)
of tracking spatial gradient magnitude and/or direction, and that any variation in
the spatial gradient is due to some form of motion deformation - i.e. rotation,
expansion or shear. Whilst for scenes containing smooth surfaces this is a valid
approximation, it is not the case for second-order stimuli such as DBS.
4
THE NETWORK
A simplified diagram illustrating the basic structure of the network (based upon
earlier work [Tunley 1990, Tunley 1991a, Tunley 1991b]) is shown in Figure 1;
the edge detection stage is discussed elsewhere [Tunley 1990, Tunley 1991b,
Tunley 1992].
4.1
INPUT RECEPTOR UNITS
The units in the input layer respond to rectified local changes in image intensity
over time. Each unit has a variable adaption rate, resulting in temporal sensitivity
- a fast adaption rate gives a high temporal filtering rate. The main advantages for
this temporal averaging processing are:
• Averaging removes the D.C. component of image intensity. This eliminates problematic gain for motion in high brightness areas of the image [Heeger 1988].
• The random nature of DBS/MBS generation cannot guarantee that each pixel change is due to local image motion. Local temporal averaging smooths the moving regions, thus creating a more coherently structured input for the motion units.
The input units have a pointwise rectifying response governed by an autoregressive filter of the following form:

(1)

where α ∈ [0, 1] is a variable which controls the degree of temporal filtering of the change in input intensity, n and n − 1 index successive image frames, and R_n and I_n are the filter output and input, respectively.
The receptor unit responses for two different α values are shown in Figure 2. α can thus be used to alter the amount of motion blur produced for a particular frame rate, effectively producing a unit with differing velocity sensitivity.

Figure 2: Receptor Unit Response: (a) α = 0.3; (b) α = 0.7.
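As a concrete illustration, here is a minimal sketch of such a receptor unit. Since the explicit form of Eq. (1) is not reproduced above, the recurrence below is an assumption: a standard leaky integrator over rectified frame differences, with α playing the adaption-rate role described in the text; the function name is our own.

```python
import numpy as np

def receptor_response(frames, alpha):
    # frames: sequence of 2-D intensity arrays; alpha in [0, 1] is the adaption rate.
    # Assumed recurrence (the explicit Eq. (1) is not shown above): a leaky
    # integrator over polarity-independent (rectified) frame-to-frame changes.
    R = np.zeros_like(frames[0], dtype=float)
    responses = []
    for n in range(1, len(frames)):
        change = np.abs(frames[n].astype(float) - frames[n - 1])
        R = alpha * R + (1.0 - alpha) * change
        responses.append(R.copy())
    return np.stack(responses)
```

A smaller α follows the input changes more quickly (less motion blur), while a larger α integrates over more frames, matching the two responses shown in Figure 2.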
4.2
MOTION UNITS
These units determine the coherence of image changes indicated by corresponding
receptor units. First-order motion produces highly-tuned motion activity - i.e. a
strong response in a particular direction - whilst second-order motion results in less
coherent output.
The operation of a basic motion detector can be described by:

(2)

where M is the detector, (i′, j′) is a point in frame n at a distance d from (i, j), a point in frame n − 1, in the direction k. Therefore, for coherent motion (i.e. first-order) in direction k at a speed of d units/frame, as n → ∞:

(3)
The convergence of motion activity can be seen using an example. The stimulus
sequence used consists of a bar of re-randomising texture moving to the right in
front of a leftward moving background with the same texture (i.e. random dots).
The bar motion is second-order as it contains no correlatable features, whilst the
background consists of a simple first-order shifting of dots between frames. Figures 3, 4 and 5 show two-dimensional images of the leftward motion activity for the
stimulus after 3, 4 and 6 frames respectively. The background, which has coherent leftward movement (at speed d units/frame), is gradually reducing to zero whilst the microbalanced rightwards-moving bar remains active. The fact that a non-zero response is obtained for second-order motion suggests, according to the definition of Chubb and Sperling [Chubb 1989] (under which first-order detectors produce no response to MBS), that this detector is second-order with regard to motion detection.
Figure 3: Leftward Motion Response to Third Frame in Sequence.

Figure 4: Leftward Motion Response to Fourth Frame.

Figure 5: Leftward Motion Response to Sixth Frame.
The motion units in this model are arranged on a hexagonal grid. This grid is
known as a flow web as it allows information to flow, both laterally between units
of the same type, and between the different units in the model (motion, occlusion
or edge). Each flow web unit is represented by three variables - a position (a, b)
and a direction k, which is evenly spaced between 0 and 360 degrees. In this model
each k is an integer between 1 and kmax - the value of kmax can be varied to vary
the sensitivity of the units.
A way of using first-order techniques to discriminate between first- and second-order motions is through the concept of coherence. At any point in the motion-processed images in Figures 3-5, a measure of the overall variation in motion activity can be used to distinguish between the motion of the micro-balanced bar and its background. The motion energy for a detector with displacement d and orientation k, at position (a, b), can be represented by E_{abkd}. For each motion unit, responding over distance d, in each cluster the energy present can be defined as:

E_{abkd,n} = min_k(M_{abkd}) / M_{abkd}    (4)
where min_k(x_k) is the minimum value of x found searching over k values. If motion is coherent, and of approximately the correct speed for the detector M, then as n → ∞:

(5)

where k_m is in the actual direction of the motion. In reality n need only approach around 5 for convergence to occur. Also, more importantly, under the same convergence conditions:

(6)

This is due to the fact that the minimum activation value in a group of first-order detectors at point (a, b) will be the same as the actual value in the direction k_m.
By similar reasoning, for non-coherent motion as n → ∞:

E_{abkd,n} → 1  ∀k    (7)

in other words, there is no peak of activity in a given direction. The motion energy is ambiguous at a large number of points in most images, except at discontinuities and on well-textured surfaces.
A measure of motion coherence used for the motion units can now be defined as:

Mc(abkd) = E_{abkd} / Σ_{k=1}^{kmax} E_{abkd}    (8)

For coherent motion in direction k_m, as n → ∞:

(9)

whilst for second-order motion, also as n → ∞:

(10)
Using this approach the total Mc activity at each position, regardless of coherence or lack of it, is unity. Motion energy is the same in all moving regions; the difference is in the distribution, or tuning, of that energy.
Figures 6, 7 and 8 show how motion coherence allows the flow web structure to
reveal the presence of motion in microbalanced areas whilst not affecting the easily
detected background motion for the stimulus.
Figure 6: Motion Coherence Response to Third Frame
Figure 7: Motion Coherence Response to Fourth Frame
Figure 8: Motion Coherence Response to Sixth Frame
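To make Eqs. (4) and (8) concrete, the sketch below computes the energy and coherence maps from a stack of direction-tuned detector responses; the small ε guard against division by zero is our own implementation detail, and the function name is hypothetical.

```python
import numpy as np

def motion_coherence(M, eps=1e-8):
    # M: detector outputs of shape (kmax, H, W), one map per direction k,
    # at a fixed displacement d over the position grid (a, b).
    E = M.min(axis=0, keepdims=True) / (M + eps)     # Eq. (4)
    Mc = E / (E.sum(axis=0, keepdims=True) + eps)    # Eq. (8)
    return E, Mc
```

For coherent motion, Mc is sharply peaked at the true direction; for microbalanced motion the Mc values remain close to 1/kmax across directions, which is exactly the distinction visible in Figures 6-8.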
4.3
OCCLUSION UNITS
These units identify discontinuities in second-order motion, which are vitally important when computing the direction of that motion. They determine spatial and temporal changes in motion coherence and can process single or multiple motions at each image point. Established and newly-activated occlusion units work, through
a gating process, to enhance continuously-displacing surfaces, utilising the concept
of visual inertia.
The implementation details of the occlusion stage of this model are discussed elsewhere [Tunley 1992], but some output from the occlusion units for the above second-order stimulus is shown in Figures 9 and 10. The figures show how the edges of
the bar can be determined.
Figure 9: Occluding Motion Information: Occlusion activity produced by an increase in motion coherence activity.

Figure 10: Occluding Motion Information: Occlusion activity produced by a decrease in motion activity at a point. Some spurious activity is produced due to the random nature of the second-order motion information.

References

[Adelson 1985] E.H. Adelson and J.R. Bergen. Spatiotemporal energy models for the perception of motion. J. Opt. Soc. Am. 2, 1985.

[Bray 1990] A.J. Bray. Tracking objects using image disparities. Image and Vision Computing, 8, 1990.

[Chubb 1989] C. Chubb and G. Sperling. Second-order motion perception: Space/time separable mechanisms. In Proc. Workshop on Visual Motion, Irvine, CA, USA, 1989.

[Heeger 1988] D.J. Heeger. Optical flow using spatiotemporal filters. Int. J. Comp. Vision, 1, 1988.

[Koffka 1935] K. Koffka. Principles of Gestalt Psychology. Harcourt Brace, 1935.

[Landy 1991] M.S. Landy, B.A. Dosher, G. Sperling and M.E. Perkins. The kinetic depth effect and optic flow II: First- and second-order motion. Vis. Res. 31, 1991.

[Mather 1991] G. Mather. Personal communication.

[Reichardt 1961] W. Reichardt. Autocorrelation, a principle for the evaluation of sensory information by the central nervous system. In W. Rosenblith, editor, Sensory Communications. Wiley NY, 1961.

[Van Santen 1985] J.P.H. Van Santen and G. Sperling. Elaborated Reichardt detectors. J. Opt. Soc. Am. 2, 1985.

[Tunley 1990] H. Tunley. Segmenting moving images. In Proc. Int. Neural Network Conf. (INNC90), Paris, France, 1990.

[Tunley 1991a] H. Tunley. Distributed dynamic processing for edge detection. In Proc. British Machine Vision Conf. (BMVC91), Glasgow, Scotland, 1991.

[Tunley 1991b] H. Tunley. Dynamic segmentation and optic flow extraction. In Proc. Int. Joint Conf. Neural Networks (IJCNN91), Seattle, USA, 1991.

[Tunley 1992] H. Tunley. Second-order motion processing: A distributed approach. CSRP 211, School of Cognitive and Computing Sciences, University of Sussex (forthcoming).

[Verri 1990] A. Verri, F. Girosi and V. Torre. Differential techniques for optic flow. J. Opt. Soc. Am. 7, 1990.
5,093 | 5,610 | Stochastic Proximal Gradient Descent with
Acceleration Techniques
Atsushi Nitanda
NTT DATA Mathematical Systems Inc.
1F Shinanomachi Rengakan, 35,
Shinanomachi, Shinjuku-ku, Tokyo,
160-0016, Japan
[email protected]
Abstract
Proximal gradient descent (PGD) and stochastic proximal gradient descent
(SPGD) are popular methods for solving regularized risk minimization problems
in machine learning and statistics. In this paper, we propose and analyze an accelerated variant of these methods in the mini-batch setting. This method incorporates two acceleration techniques: one is Nesterov?s acceleration method, and
the other is a variance reduction for the stochastic gradient. Accelerated proximal gradient descent (APG) and proximal stochastic variance reduction gradient
(Prox-SVRG) are in a trade-off relationship. We show that our method, with the
appropriate mini-batch size, achieves lower overall complexity than both APG and
Prox-SVRG.
1
Introduction
This paper considers the following optimization problem:

minimize_{x ∈ R^d}  f(x) := g(x) + h(x),    (1)

where g is the average of the smooth convex functions g_1, ..., g_n from R^d to R, i.e., g(x) = (1/n) Σ_{i=1}^{n} g_i(x), and h : R^d → R is a relatively simple convex function that can be non-differentiable.
In machine learning, we often encounter optimization problems of this form. For example, given a sequence of training examples (a_1, b_1), ..., (a_n, b_n), where a_i ∈ R^d and b_i ∈ R, if we set g_i(x) = (1/2)(a_i^T x − b_i)², then we obtain ridge regression by setting h(x) = (λ/2)‖x‖², or we obtain Lasso by setting h(x) = λ‖x‖_1. If we set g_i(x) = log(1 + exp(−b_i x^T a_i)), then we obtain regularized logistic regression.
To solve the optimization problem (1), one popular method is proximal gradient descent (PGD),
which can be described by the following update rule for k = 1, 2, . . .:
x_{k+1} = prox_{η_k h}(x_k − η_k ∇g(x_k)),

where prox is the proximity operator,

prox_{ηh}(y) = argmin_{x ∈ R^d} { (1/2)‖x − y‖² + η h(x) }.
A stochastic variant of PGD is stochastic proximal gradient descent (SPGD), where at each iteration
k = 1, 2, . . ., we pick ik randomly from {1, 2, . . . , n}, and take the following update:
x_{k+1} = prox_{η_k h}(x_k − η_k ∇g_{i_k}(x_k)).
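As a concrete instance, the following sketch performs one SPGD step for the Lasso case mentioned above, where prox_{ηh} with h(x) = λ‖x‖_1 is soft-thresholding; the function names are our own.

```python
import numpy as np

def prox_l1(y, eta, lam):
    # prox_{eta * h}(y) for h(x) = lam * ||x||_1: soft-thresholding.
    return np.sign(y) * np.maximum(np.abs(y) - eta * lam, 0.0)

def spgd_step(x, a_i, b_i, eta, lam):
    # One SPGD step with g_i(x) = 0.5 * (a_i^T x - b_i)^2, as in the Lasso example.
    grad_i = (a_i @ x - b_i) * a_i
    return prox_l1(x - eta * grad_i, eta, lam)
```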
The advantage of SPGD over PGD is that at each iteration, SPGD only requires the computation
of a single gradient ∇g_{i_k}(x_k). In contrast, each iteration of PGD evaluates all n gradients. Thus
the computational cost of SPGD per iteration is 1/n that of the PGD. However, due to the variance
introduced by random sampling, SPGD obtains a slower convergence rate than PGD. In this paper
we consider problem (1) under the following assumptions.
Assumption 1. Each convex function g_i(x) is L-Lipschitz smooth; i.e., there exists L > 0 such that for all x, y ∈ R^d,

‖∇g_i(x) − ∇g_i(y)‖ ≤ L‖x − y‖.    (2)

From (2), one can derive the following inequality:

g_i(x) ≤ g_i(y) + (∇g_i(y), x − y) + (L/2)‖x − y‖².    (3)

Assumption 2. g(x) is μ-strongly convex; i.e., there exists μ > 0 such that for all x, y ∈ R^d,

g(x) ≥ g(y) + (∇g(y), x − y) + (μ/2)‖x − y‖².    (4)

Note that it is obvious that L ≥ μ.
Assumption 3. The regularization function h(x) is a lower semi-continuous proper convex function;
however, it can be non-differentiable or non-continuous.
Under Assumptions 1 and 2, and with h(x) ≡ 0, PGD (which is equivalent to gradient descent in this case) with a constant learning rate η_k = 1/L achieves a linear convergence rate. On the other hand, for stochastic (proximal) gradient descent, because of the variance introduced by random sampling, we need to choose a diminishing learning rate η_k = O(1/k), and thus the stochastic (proximal) gradient
descent converges at a sub-linear rate.
To improve the stochastic (proximal) gradient descent, we need a variance reduction technique,
which allows us to take a larger learning rate. Recently, several papers proposed such variance reduction methods for the various special cases of (1). In the case where gi (x) is Lipschitz smooth
and h(x) is strongly convex, Shalev-Shwartz and Zhang [1, 2] proposed a proximal stochastic dual
coordinate ascent (Prox-SDCA); the same authors developed accelerated variants of SDCA [3, 4].
Le Roux et al. [5] proposed a stochastic average gradient (SAG) for the case where gi (x) is Lipschitz smooth, g(x) is strongly convex, and h(x) ? 0. These methods achieve a linear convergence
rate. However, SDCA and SAG need to store all gradients (or dual variables), so that O(nd) storage is required in general problems. Although this can be reduced to O(n) for linear prediction
problems, these methods may be unsuitable for more complex and large-scale problems. More recently, Johnson and Zhang [6] proposed stochastic variance reduction gradients (SVRG) for the case
where g_i(x) is L-Lipschitz smooth, g(x) is μ-strongly convex, and h(x) ≡ 0. SVRG achieves the following overall complexity (total number of component gradient evaluations to find an ε-accurate solution),

O((n + κ) log(1/ε)),    (5)

where κ is the condition number L/μ. Furthermore, this method need not store all gradients. Xiao
and Zhang [7] proposed a proximal variant of SVRG, called Prox-SVRG which also achieves the
same complexity.
Another effective method for solving (1) is accelerated proximal gradient descent (APG), proposed
by Nesterov [8, 9]. APG [8] is an accelerated variant of deterministic gradient descent and achieves
the following overall complexity to find an ε-accurate solution,

O(n √κ log(1/ε)).    (6)

Complexities (5) and (6) are in a trade-off relationship. For example, if κ = n, then the complexity (5) is less than (6). On the other hand, the complexity of APG has a better dependence on the condition number κ.
In this paper, we propose and analyze a new method called the Accelerated Mini-Batch Prox-SVRG
(Acc-Prox-SVRG) for solving (1). Acc-Prox-SVRG incorporates two acceleration techniques in
the mini-batch setting: (1) Nesterov's acceleration method of APG and (2) a variance reduction
technique of SVRG. We show that the overall complexity of this method, with an appropriate minibatch size, is more efficient than both Prox-SVRG and APG; even when mini-batch size is not
appropriate, our method is still comparable to APG or Prox-SVRG.
2
Accelerated Mini-Batch Prox-SVRG
As mentioned above, to ensure convergence of SPGD, the learning rate η_k has to decay to zero to reduce the variance effect of the stochastic gradient. This slows down the convergence. As a remedy to this issue, we use the variance reduction technique of SVRG [6] (see also [7]), which allows us to take a larger learning rate. Acc-Prox-SVRG is a multi-stage scheme. During each stage, this method performs m APG-like iterations and employs the following direction with a mini-batch instead of the gradient,

v_k = ∇g_{I_k}(y_k) − ∇g_{I_k}(x̄) + ∇g(x̄),    (7)

where I_k = {i_1, ..., i_b} is a randomly chosen size-b subset of {1, 2, ..., n} and g_{I_k} = (1/b) Σ_{j=1}^{b} g_{i_j}. At the beginning of each stage, the initial point x_1 is set to be x̄, and at the end of the stage, x̄ is updated. Conditioned on y_k, we can take the expectation with respect to I_k and obtain E_{I_k}[v_k] = ∇g(y_k), so that v_k is an unbiased estimator. As described in the next section, the conditional variance E_{I_k}‖v_k − ∇g(y_k)‖² can be much smaller than E_i‖∇g_i(y_k) − ∇g(y_k)‖² near the optimal solution. The pseudocode of our Acc-Prox-SVRG is given in Figure 1.
Parameters: update frequency m, learning rate η, mini-batch size b, and non-negative sequence β_1, ..., β_m
Initialize x̄_1
Iterate: for s = 1, 2, ...
    x̄ = x̄_s
    v̄ = (1/n) Σ_{i=1}^{n} ∇g_i(x̄)
    x_1 = y_1 = x̄
    Iterate: for k = 1, 2, ..., m
        Randomly pick subset I_k ⊂ {1, 2, ..., n} of size b
        v_k = ∇g_{I_k}(y_k) − ∇g_{I_k}(x̄) + v̄
        x_{k+1} = prox_{ηh}(y_k − η v_k)
        y_{k+1} = x_{k+1} + β_k(x_{k+1} − x_k)
    end
    set x̄_{s+1} = x_{m+1}
end

Figure 1: Acc-Prox-SVRG
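For concreteness, a minimal Python sketch of the procedure in Figure 1 follows; the gradient and prox oracles are passed in as callables, and their names and signatures are our own choices rather than anything specified in the paper.

```python
import numpy as np

def acc_prox_svrg(grad_avg, prox, x0, n, eta, beta, m, n_stages, b, rng):
    # grad_avg(x, idx): average of component gradients grad g_i(x) over i in idx.
    # prox(y, eta): proximity operator prox_{eta * h}(y).
    # beta: constant momentum coefficient beta_k (see the text below).
    xbar = x0.copy()
    for _ in range(n_stages):
        vbar = grad_avg(xbar, np.arange(n))        # full gradient at the snapshot
        x = xbar.copy()
        y = xbar.copy()
        for _ in range(m):
            idx = rng.choice(n, size=b, replace=False)
            v = grad_avg(y, idx) - grad_avg(xbar, idx) + vbar  # direction (7)
            x_new = prox(y - eta * v, eta)
            y = x_new + beta * (x_new - x)         # Nesterov-style extrapolation
            x = x_new
        xbar = x                                   # set the next snapshot
    return xbar
```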
In our analysis, we focus on a basic variant of the algorithm (Figure 1) with β_k = (1 − √(μη)) / (1 + √(μη)).

3
Analysis
In this section, we present our analysis of the convergence rates of Acc-Prox-SVRG described in Figure 1 under Assumptions 1, 2 and 3, and provide some notation and definitions. Note that we may omit the outer index s for notational simplicity. By the definition of a proximity operator, there exists a subgradient ξ_k ∈ ∂h(x_{k+1}) such that

x_{k+1} = y_k − η(v_k + ξ_k).
We define the estimate sequence Φ_k(x) (k = 1, 2, ..., m + 1) by

Φ_1(x) = f(x_1) + (μ/2)‖x − x_1‖², and

Φ_{k+1}(x) = (1 − √(μη)) Φ_k(x) + √(μη) ( g_{I_k}(y_k) + (v_k, x − y_k) + (μ/2)‖x − y_k‖² + h(x_{k+1}) + (ξ_k, x − x_{k+1}) ),  for k ≥ 1.

We set

Φ*_k = min_{x ∈ R^d} Φ_k(x)  and  z_k = argmin_{x ∈ R^d} Φ_k(x).
Since ∇²Φ_k(x) = μ I_n, it follows that for all x ∈ R^d,

Φ_k(x) = (μ/2)‖x − z_k‖² + Φ*_k.    (8)
Lemma 1. Consider Acc-Prox-SVRG in Figure 1 under Assumptions 1, 2, and 3. If ? ?
for k ? 1 we have
?
k?1
E [?k (x)] ? f (x) + (1 ? ??)
(?1 ? f )(x) and
"
E [f (xk )] ? E ??k +
k?1
X
l=1
(8)
1
2L ,
then
(9)
#
?
1
?
??
?
(1 ? ??)k?1?l ? ?
, (10)
kxl ? yl k2 + ?k?g(yl ) ? vl k2
2
??
where the expectation is taken with respect to the history of random variables I1 , . . . , Ik?1 .
Note that if the conditional variance of vl is equal to zero, we immediately obtain a linear convergence rate from (9) and (10). Before we can prove Lemma 1, additional lemmas are required, whose
proofs may be found in the Supplementary Material.
Lemma 2. If ? < ?1 , then for k ? 1 we have
r
?
?
?
(11)
(vk + ?k ) and
zk+1 = (1 ? ??)zk + ??yk ?
?
1
zk ? yk = ? (yk ? xk ).
(12)
??
Lemma 3. For k ? 1, we have
1
k?g(yk ) + ?k k2 + kvk + ?k k2 ? k?g(yk ) ? vk k2 , (13)
2
kvk + ?k k2 ? 2 k?g(yk ) + ?k k2 + k?g(yk ) ? vk k2 , and
(14)
2
2
2
k?g(yk ) + ?k k ? 2 kvk + ?k k + k?g(yk ) ? vk k .
(15)
(?g(yk ) + ?k , vk + ?k ) =
Proof of Lemma 1. Using induction, it is easy to show (9). The proof is in Supplementary Material.
Now we prove (10) by induction. From the definition of Φ_1, Φ*_1 = f(x_1). We assume (10) is true
for k. Using Eq. (11), we have
2
r
?
?
(v
+
?
)
kyk ? zk+1 k2 =
??)(y
?
z
)
+
(1
?
k
k
k
k
?
r
?
?
?
?
2
2
(1 ? ??)(yk ? zk , vk + ?k ) + kvk + ?k k2 .
= (1 ? ??) kyk ? zk k + 2
?
?
From above equation and (8) with x = yk , we get
r
?
?
?
?
?k+1 (yk ) = ??k+1 +
(1 ? ??)2 kyk ? zk k2 + 2
(1 ? ??)(yk ? zk , vk + ?k )
2
?
?
2
+ kvk + ?k k .
?
On the other hand, from the definition of the estimate sequence and (8),
?
?
?
?k+1 (yk ) = (1 ? ??) ??k + kyk ? zk k2 + ??(gIk (yk ) + h(xk+1 ) + (?k , yk ? xk+1 )).
2
Therefore, from these two equations, we have
?
?
?
?
?
??k+1 = (1 ? ??)??k + (1 ? ??) ??kyk ? zk k2 + ??(gIk (yk ) + h(xk+1 )
2
?
?
?
+(?k , yk ? xk+1 )) ? (1 ? ??) ??(yk ? zk , vk + ?k ) ? kvk + ?k k2 . (16)
2
4
Since g is Lipschitz smooth, we bound f (xk+1 ) as follows:
f (xk+1 ) ? g(yk ) + (?g(yk ), xk+1 ? yk ) +
L
2 kxk+1
? yk k2 + h(xk+1 ).
(17)
Using (16), (17), (12), and xk+1 ? yk = ??(vk + ?k ) we have
EIk f (xk+1 ) ? ??k+1
(18)
h
?
?
?
EIk (1 ? ??)(??k + g(yk ) + h(xk+1 )) + (?g(yk ), xk+1 ? yk )
(16),(17)
L
?
?
?
?
+ ??(?k , xk+1 ? yk ) + kxk+1 ? yk k2 ? (1 ? ??) ??kyk ? zk k2
2
2
i
?
?
?
+(1 ? ??) ??(yk ? zk , vk + ?k ) + kvk + ?k k2
2
h
?
?
= EIk (1 ? ??)(??k + g(yk ) + h(xk+1 ) + (xk ? yk , vk + ?k )) ? ?(?g(yk ), vk + ?k )
(12)
?
i
? 1 ? ??
?
?
?? ??(?k , vk + ?k ) ?
(19)
kyk ? xk k2 + (L? + 1)kvk + ?k k2 ,
?
2
??
2
where for the first inequality we used EIk [gIk (yk )] = g(yk ). Here, we give the following
EIk [g(yk ) + h(xk+1 ) + (xk ? yk , vk + ?k )]
= EIk [g(yk ) + (vk , xk ? yk ) + h(xk+1 ) + (?k , xk ? xk+1 ) + (?k , xk+1 ? yk )]
i
h
?
? EIk g(xk ) ? kxk ? yk k2 + h(xk ) ? ?(?k , vk + ?k ) ,
2
(20)
where for the first inequality we used EIk [vk ] = ?g(yk ) and convexity of g and h. Thus we have
EIk f (xk+1 ) ? ??k+1
h
? 1 ? ??
?
kxk ? yk k2
? EIk (1 ? ??)(f (xk ) ? ??k ) ?
?
2
??
(19),(20)
i
?
??(?g(yk ) + ?k , vk + ?k ) + (1 + L?)kvk + ?k k2
2
? 1 ? ??
?
kxk ? yk k2
? EIk (1 ? ??)(f (xk ) ? ??k ) ?
?
2
??
(13)
?
L? 2
?
2
2
2
? k?g(yk ) + ?k k +
kvk + ?k k + kvk ? ?g(yk )k
2
2
2
? 1 ? ??
?
2
2
?
kxk ? yk k + ?kvk ? ?g(yk )k .
? EIk (1 ? ??)(f (xk ) ? ?k ) ?
?
1
2
??
(14),?? 2L
By taking expectation with respect to the history of random variables I1 , . . . , Ik?1 , the induction
hypothesis finishes the proof of (10).
Our bound on the variance of vk is given in the following lemma, whose proof is in the Supplementary Material.
Lemma 4. Suppose Assumption 1 holds, and let x* = argmin_{x ∈ R^d} f(x). Conditioned on y_k, we have that

E_{I_k}‖v_k − ∇g(y_k)‖² ≤ (1/b) ((n − b)/(n − 1)) ( 2L² ‖y_k − x_k‖² + 8L( f(x_k) − f(x*) + f(x̄) − f(x*) ) ).    (21)

From (10), (21), and (9) with x = x*, it follows that
hP
?
?
k?1
??)k?1?l
E [f (xk ) ? f (x? )] ? (1 ? ??)k?1 (?1 ? f )(x? ) + E
l=1 (1 ?
oi
n
2
n?b 2L ?
n?b 8L?
?
x) ? f (x? )) .
kxl ? yl k2 + n?1
? ? ?2 1???
?? + n?1 b
b (f (xl ) ? f (x? ) + f (?
5
If ? ? min
(pb)2
64
Indeed, using
n?1
n?b
2
?
1
L2 , 2L
(pb)2
??
64
n?1
n?b
, then the coefficients of kxl ? yl k2 are non-positive for p ? 2.
2
?
n ? b L?
p?
?
?
??,
2
L
n?1 b
8
f or p > 0,
(22)
we get
=
?1
2 ??
?
? ?2 1???
?? +
2
n?b 2L ?
n?1 b
?? + ?2 ? + ?L?
?
? ? ?2 1???
?? +
?
??L
?1
2 ??
L?
2 ??
(?? + 2?L?) ? 0.
1
?? 2L
Thus, using (22) again with p ? 1, we have
?
E [f (xk ) ? f (x? )] ? (1 ? ??)k?1 (?1 ? f )(x? )
#
"k?1
X
?
k?1?l ?
p ??(f (xl ) ? f (x? ) + f (?
x) ? f (x? ))
+E
(1 ? ??)
l=1
?
? (1 ? ??)k?1 (?1 ? f )(x? ) + p(f (?
x) ? f (x? ))
#
"k?1
X
?
k?1?l ?
p ??(f (xl ) ? f (x? )) ,
+E
(1 ? ??)
(23)
l=1
where for the last inequality we used Σ_{l=1}^{k−1} (1 − √(μη))^{k−1−l} ≤ Σ_{t=0}^{∞} (1 − √(μη))^t = 1/√(μη).

Theorem 1. Suppose Assumptions 1, 2, and 3. Let η ≤ min{ ((pb)²/64)((n−1)/(n−b))²(μ/L²), 1/(2L) } and 0 < p < 1. Then we have

E[f(x̄_{s+1}) − f(x*)] ≤ ( (1 − (1 − p)√(μη))^m + p/(1 − p) ) (2 + p) ( f(x̄_s) − f(x*) ).    (24)

Moreover, if m ≥ (1/((1 − p)√(μη))) log((1 − p)/p), then it follows that

E[f(x̄_{s+1}) − f(x*)] ≤ ( 2p(2 + p)/(1 − p) ) ( f(x̄_s) − f(x*) ).    (25)
From Theorem 1, we can see that for small 0 < p, the overall complexity of Acc-Prox-SVRG (total number of component gradient evaluations to find an ε-accurate solution) is

O( (n + b/√(μη)) log(1/ε) ).

Thus, we have the following corollary:
Corollary 1. Suppose Assumptions 1, 2, and 3. Let p be sufficiently small, as stated above, and η = min{ ((pb)²/64)((n−1)/(n−b))²(μ/L²), 1/(2L) }. If the mini-batch size b is smaller than

⌈ 8√κ n / (2p(n−1) + 8√κ) ⌉,

then the learning rate η equals ((pb)²/64)((n−1)/(n−b))²(μ/L²) and the overall complexity is

O( (n + ((n−b)/(n−1)) κ) log(1/ε) ).    (26)

Otherwise, η = 1/(2L) and the complexity becomes

O( (n + b√κ) log(1/ε) ).    (27)
Table 1: Comparison of overall complexity, where b₀ = 8√κ n / (2p(n−1) + 8√κ).

Prox-SVRG:                    O((n + κ) log(1/ε))
Acc-Prox-SVRG, b < ⌈b₀⌉:      O((n + ((n−b)/(n−1)) κ) log(1/ε))
Acc-Prox-SVRG, b ≥ ⌈b₀⌉:      O((n + b√κ) log(1/ε))
APG [8]:                      O(n√κ log(1/ε))
Table 1 lists the overall complexities of the algorithms that achieve linear convergence. As seen from Table 1, the complexity of Acc-Prox-SVRG monotonically decreases with respect to b for b < ⌈b₀⌉, where b₀ = 8√κ n / (2p(n−1) + 8√κ), and monotonically increases when b ≥ ⌈b₀⌉. Moreover, if b = 1, then Acc-Prox-SVRG has the same complexity as that of Prox-SVRG, while if b = n then the complexity of this method is equal to that of APG. Therefore, with an appropriate mini-batch size, Acc-Prox-SVRG may outperform both Prox-SVRG and APG; even if the mini-batch is not appropriate, Acc-Prox-SVRG is still comparable to Prox-SVRG or APG. The following overall complexity is the best possible rate of Acc-Prox-SVRG:

O( (n + n√κ/(n + √κ)) log(1/ε) ).
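A small helper makes the batch-size and step-size rules of Corollary 1 concrete; the function names are ours, κ = L/μ, and the formulas follow the reconstruction above (assumes b < n).

```python
import math

def best_minibatch(n, kappa, p):
    # b0 = 8 * sqrt(kappa) * n / (2p(n - 1) + 8 * sqrt(kappa)); take the ceiling.
    return math.ceil(8 * math.sqrt(kappa) * n / (2 * p * (n - 1) + 8 * math.sqrt(kappa)))

def learning_rate(n, b, L, mu, p):
    # eta = min{ (pb)^2/64 * ((n-1)/(n-b))^2 * mu/L^2, 1/(2L) }, per Corollary 1.
    cap = (p * b) ** 2 / 64.0 * ((n - 1) / (n - b)) ** 2 * mu / L ** 2
    return min(cap, 1.0 / (2.0 * L))
```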
Now we give the proof of Theorem 1.
Proof of Theorem 1. We denote E[f(x_k) − f(x*)] by V_k, and we use W_k to denote the last expression in (23). Thus, for k ≥ 1, V_k ≤ W_k. For k ≥ 2, we have

W_k = (1 − √(μη)) { (1 − √(μη))^{k−2} (Φ_1 − f)(x*) + p V_1 + Σ_{l=1}^{k−2} (1 − √(μη))^{k−2−l} p√(μη) V_l } + p√(μη) V_{k−1} + p√(μη) V_1 ≤ (1 − √(μη)(1 − p)) W_{k−1} + p√(μη) W_1.

Since 0 < √(μη)(1 − p) < 1, the above inequality leads to

W_k ≤ ( (1 − (1 − p)√(μη))^{k−1} + p/(1 − p) ) W_1.    (28)

From the strong convexity of g (and f), we can see

W_1 = (1 + p)(f(x̄) − f(x*)) + (μ/2)‖x̄ − x*‖² ≤ (2 + p)(f(x̄) − f(x*)).

Thus, for k ≥ 2, we have

V_k ≤ W_k ≤ ( (1 − (1 − p)√(μη))^{k−1} + p/(1 − p) ) (2 + p)(f(x̄) − f(x*)),

and that is exactly (24). Using log(1 − x) ≤ −x and m ≥ (1/((1 − p)√(μη))) log((1 − p)/p), we have

m log(1 − (1 − p)√(μη)) ≤ −m(1 − p)√(μη) ≤ −log((1 − p)/p),

so that

(1 − (1 − p)√(μη))^m ≤ p/(1 − p).

This finishes the proof of Theorem 1.

4
Numerical Experiments
In this section, we compare Acc-Prox-SVRG with Prox-SVRG and APG on L1-regularized multiclass logistic regression with the regularization parameter λ. Table 2 provides details of the datasets
Figure 2: Comparison of Acc-Prox-SVRG with Prox-SVRG and APG on mnist, covtype.scale, and rcv1.binary. Top: Objective gap of L1-regularized multi-class logistic regression. Bottom: Test error rates.
and regularization parameters utilized in our experiments. These datasets can be found on the LIBSVM website¹. The best choice of mini-batch size is b = ⌈b₀⌉, which allows us to take a large learning rate, η = 1/(2L). Therefore, we have m = O(√κ) and β_k = (√(2κ) − 1)/(√(2κ) + 1). When the number of components n is very large compared with √κ, we see that b₀ = O(√κ); for this, we set m = γb (γ ∈ {0.1, 1.0, 10}) and β_k = (b − 2)/(b + 2), varying b in the set {100, 500, 1000}. We ran Acc-Prox-SVRG using values of η from the range {0.01, 0.05, 0.1, 0.5, 1.0, 5.0, 10.0}, and we chose the best η in each mini-batch setting.
of single-component gradient evaluations. For Acc-Prox-SVRG, each iteration computes the 2b
gradients, and at the beginning of each stage, the n component gradients are evaluated. For ProxSVRG, each iteration computes the two gradients, and at the beginning of each stage, the n gradients
are evaluated. For APG, each iteration evaluates n gradients.
Table 2: Details of data sets and regularization parameters.

Dataset        classes  Training size  Testing size  Features  λ
mnist          10       60,000         10,000        780       10^-5
covtype.scale  7        522,910        58,102        54        10^-6
rcv1.binary    2        20,242         677,399       47,236    10^-5
As can be seen from Figure 2, Acc-Prox-SVRG with good values of b performs better than or is
comparable to Prox-SVRG and is much better than results for APG. On the other hand, for relatively
large b, Acc-Prox-SVRG may perform worse because of an overestimation of b₀, and hence worse estimates of m and β_k.
5
Conclusion
We have introduced a method incorporating Nesterov's acceleration method and the variance reduction technique of SVRG in the mini-batch setting. We prove that the overall complexity of our
method, with an appropriate mini-batch size, is more efficient than both Prox-SVRG and APG; even
when mini-batch size is not appropriate, our method is still comparable to APG or Prox-SVRG. In
addition, the gradient evaluations for each mini-batch can be parallelized [3, 10, 11] when using our
method; hence, it performs much faster in a distributed framework.
1 http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/
References
[1] S. Shalev-Shwartz and T. Zhang. Proximal stochastic dual coordinate ascent. arXiv:1211.2717,
2012.
[2] S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss
minimization. Journal of Machine Learning Research 14, pages 567-599, 2013.
[3] S. Shalev-Shwartz and T. Zhang. Accelerated mini-batch stochastic dual coordinate ascent. Advances in Neural Information Processing System 26, pages 378-385, 2013.
[4] S. Shalev-Shwartz and T. Zhang. Accelerated Proximal Stochastic Dual Coordinate Ascent for
Regularized Loss Minimization. Proceedings of the 31th International Conference on Machine
Learning, pages 64-72, 2014.
[5] N. Le Roux, M. Schmidt, and F. Bach. A Stochastic Gradient Method with an Exponential
Convergence Rate for Finite Training Sets. Advances in Neural Information Processing System
25, pages 2672-2680, 2012.
[6] R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance
reduction. Advances in Neural Information Processing System 26, pages 315-323, 2013.
[7] L. Xiao and T. Zhang. A Proximal Stochastic Gradient Method with Progressive Variance Reduction. arXiv:1403.4699, 2014.
[8] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Kluwer, Boston,
2004.
[9] Y. Nesterov. Gradient methods for minimizing composite objective function. CORE Discussion
Papers, 2007.
[10] O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao. Optimal distributed online prediction
using mini-batches. Journal of Machine Learning Research 13, pages 165-202, 2012.
[11] A. Agarwal and J. Duchi. Distributed delayed stochastic optimization. Advances in Neural
Information Processing System 24, pages 873-881, 2011.
5,094 | 5,611 | Beyond the Birkhoff Polytope:
Convex Relaxations for Vector Permutation Problems
Stephen J. Wright
Department of Computer Sciences
University of Wisconsin - Madison
Madison, WI 53706
[email protected]
Cong Han Lim
Department of Computer Sciences
University of Wisconsin - Madison
Madison, WI 53706
[email protected]
Abstract
The Birkhoff polytope (the convex hull of the set of permutation matrices), which
is represented using Θ(n²) variables and constraints, is frequently invoked in formulating relaxations of optimization problems over permutations. Using a recent
construction of Goemans [1], we show that when optimizing over the convex hull
of the permutation vectors (the permutahedron), we can reduce the number of
variables and constraints to Θ(n log n) in theory and Θ(n log² n) in practice. We
modify the recent convex formulation of the 2-SUM problem introduced by Fogel
et al. [2] to use this polytope, and demonstrate how we can attain results of similar
quality in significantly less computational time for large n. To our knowledge, this
is the first usage of Goemans? compact formulation of the permutahedron in a convex optimization problem. We also introduce a simpler regularization scheme for
this convex formulation of the 2-SUM problem that yields good empirical results.
1
Introduction
A typical workflow for converting a discrete optimization problem over the set of permutations of n
objects into a continuous relaxation is as follows: (1) use permutation matrices to represent permutations; (2) relax to the convex hull of the set of permutation matrices, the Birkhoff polytope; (3)
relax other constraints to ensure convexity/continuity. Instances of this procedure appear in [3, 2].
Representation of the Birkhoff polytope requires Θ(n²) variables, significantly more than the n
variables required to represent the permutation directly. The increase in dimension is unappealing,
especially if we are only interested in optimizing over permutation vectors, as opposed to permutations of a more complex object, such as a graph. The obvious alternative of using a relaxation based
on the convex hull of the set of permutations (the permutahedron) is computationally infeasible,
because the permutahedron has exponentially many facets (whereas the Birkhoff polytope has only
n2 facets). We can achieve a better trade-off between the number of variables and facets by using
sorting networks to construct polytopes that can be linearly projected to recover the permutahedron.
This construction, introduced by Goemans [1], can have as few as ?(n log n) facets, which is optimal up to constant factors. In this paper, we use a relaxation based on these polytopes, which we
call ?sorting network polytopes.?
We apply the sorting network polytope to the noisy seriation problem, defined as follows. Given
a noisy similarity matrix A, recover a symmetric row/column ordering of A for which the entries
generally decrease with distance from the diagonal. Fogel et al. [2] introduced a convex relaxation
of the 2-SUM problem to solve the noisy seriation problem. They proved that the solution to the 2-SUM problem recovers the exact solution of the seriation problem in the "noiseless" case (in which
an ordering exists that ensures monotonic decrease of similarity measures with distance from the
diagonal). They further show that the formulation allows side information about the ordering to be
incorporated, and is more robust to noise than a spectral formulation of the 2-SUM problem de1
scribed by Atkins et al. [4]. The formulation in [2] makes use of the Birkhoff polytope. We propose
instead a formulation based on the sorting network polytope. Performing convex optimization over
the sorting network polytope requires different techniques from those described in [2]. In addition,
we describe a new regularization scheme, applicable both to our formulation and that of [2], that is
more natural for the 2-SUM problem and has good practical performance.
The paper is organized as follows. We begin by describing polytopes for representing permutations
in Section 2. In Section 3, we introduce the seriation problem and the 2-SUM problem, describe two
continuous relaxations for the latter, (one of which uses the sorting network polytope) and introduce
our regularization scheme for strengthening the relaxations. Issues that arise in using the sorting
network polytope are discussed in Section 4. In Section 5, we provide experimental results showing
the effectiveness of our approach. The extended version of this paper [5] includes some additional
computational results, along with several proofs. It also describes an efficient algorithm for taking a
conditional gradient step for the convex formulation, for the case in which the formulation contains
no side information.
2
Permutahedron, Birkhoff Polytope, and Sorting Networks
We use n throughout the paper to refer to the length of the permutation vectors. π_I^n = (1, 2, ..., n)^T denotes the identity permutation. (When the size n can be inferred from the context, we write the identity permutation as π_I.) P^n denotes the set of all permutation vectors of length n. We use π ∈ P^n to denote a generic permutation, and denote its components by π(i), i = 1, 2, ..., n. We use 1 to denote the vector of length n whose components are all 1.
Definition 2.1. The permutahedron PH^n, the convex hull of P^n, is defined as follows:

PH^n := { x ∈ R^n :  Σ_{i=1}^{n} x_i = n(n+1)/2,  Σ_{i∈S} x_i ≤ Σ_{i=1}^{|S|} (n + 1 − i)  for all S ⊂ [n] }.
The permutahedron PH^n has 2^n − 2 facets, which prevents us from using it in optimization problems directly. (We should note, however, that the permutahedron is a submodular polyhedron and hence admits efficient algorithms for certain optimization problems.) Relaxations are commonly derived from the set of permutation matrices (the set of n × n matrices containing zeros and ones, with a single one in each row and column) and its convex hull instead.
Definition 2.2. The convex hull of the set of n × n permutation matrices is the Birkhoff polytope B^n, which is the set of all doubly-stochastic n × n matrices {X ∈ R^{n×n} | X ≥ 0, X1 = 1, X^T 1 = 1}.
The Birkhoff polytope has been widely used in the machine learning and computer vision communities for various permutation problems (see for example [2], [3]). The permutahedron can be represented as the projection of the Birkhoff polytope from R^{n×n} to R^n by x_i = Σ_{j=1}^{n} j · X_{ij}. The Birkhoff polytope is sometimes said to be an extended formulation of the permutahedron.
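For illustration, this linear projection is one line of code; the helper name below is our own.

```python
import numpy as np

def birkhoff_to_permutahedron(X):
    # Map a doubly stochastic matrix X to a permutahedron point: x_i = sum_j j * X_ij.
    n = X.shape[0]
    return X @ np.arange(1, n + 1)
```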
A natural question to ask is whether a more compact extended formulation exists for the permutahedron. Goemans [1] answered this question in the affirmative by constructing one with Θ(n log n)
constraints and variables, which is optimal up to constant factors. His construction is based on sorting networks, a collection of wires and binary comparators that sorts a list of numbers. Figure 1
displays a sorting network on 4 variables. (See [6] for further information on sorting networks.)
Given a sorting network on n inputs with m comparators (we will subsequently always use m to
refer to the number of comparators), an extended formulation for the permutahedron with O(m)
variables and constraints can be constructed as follows [1]. Referring to the notation in the right
subfigure in Figure 1, we introduce a set of constraints for each comparator k = 1, 2, . . . , m to
indicate the relationships between the two inputs and the two outputs of each comparator:
x_{k(in,top)} + x_{k(in,bot)} = x_{k(out,top)} + x_{k(out,bot)},  x_{k(out,top)} ≤ x_{k(in,top)},  and  x_{k(out,top)} ≤ x_{k(in,bot)}.    (1)

Note that these constraints require the sum of the two inputs to be the same as the sum of the two outputs, but the inputs can be closer together than the outputs. Let x_i^in and x_i^out, i = 1, 2, ..., n, denote the x variables corresponding to the ith input and ith output of the entire sorting network, respectively. We introduce the additional constraints

x_i^out = i, for i ∈ [n].    (2)
Figure 1: A bitonic sorting network on 4 variables (left) and the k-th comparator (right). The input to the sorting network is on the left and the output is on the right. At each comparator, we take the two input values and sort them such that the smaller value is the one at the top in the output. Sorting takes place progressively as we move from left to right through the network, sorting pairs of values as we encounter comparators.
The details of this construction depend on the particular choice of sorting network (see Section 4), but we will refer to it generically as the sorting network polytope SN^n. Each element in this polytope can be viewed as a concatenation of two vectors: the subvector associated with the network inputs x^in = (x_1^in, x_2^in, ..., x_n^in), and the rest of the coordinates x^rest, which includes all the internal variables as well as the outputs. The following theorem attests to the fact that any input vector x^in that is part of a feasible vector for the entire network is a point in the permutahedron:

Theorem 2.3 (Goemans [1]). The set {x^in | (x^in, x^rest) ∈ SN^n} is the permutahedron PH^n.
3
Convex Relaxations of 2-SUM via Sorting Network Polytope
In this section we will briefly describe the seriation problem, and some of the continuous relaxations
of the combinatorial 2-SUM problem that can be used to solve this problem.
The Noiseless Seriation Problem. The term seriation generally refers to data analysis techniques
that arrange objects in a linear ordering in a way that fits available information and thus reveals
underlying structure of the system [7]. We adopt here the definition of the seriation problem from
[4]. Suppose we have n objects arranged along a line, and a similarity function that increases with
distance between objects in the line. The similarity matrix is the symmetric n ? n matrix whose
(i, j) entry is the similarity measure between the ith and jth objects in the linear arrangement. This
similarity matrix is a R-matrix, according to the following definition.
Definition 3.1. A symmetric matrix A is a Robinson matrix (R-matrix) if for all points (i, j) where i > j, we have A_{ij} ≤ min(A_{(i−1),j}, A_{i,(j+1)}). A symmetric matrix A is a pre-R matrix if Π^T A Π is an R-matrix for some permutation matrix Π.
In other words, a symmetric matrix is a R-matrix if the entries are nonincreasing as we move away
from the diagonal in either the horizontal or vertical direction. The goal of the noiseless seriation
problem is to recover the ordering of the variables along the line from the pairwise similarity data,
which is equivalent to finding the permutation that recovers an R-matrix from a pre-R-matrix.
The seriation problem was introduced in the archaeology literature [8], and has applications across
a wide range of areas including clustering [9], shotgun DNA sequencing [2], and taxonomy [10].
R-matrices are useful in part because of their relation to the consecutive-ones property in a matrix
of zeros and ones, where the ones in each column form a contiguous block. A matrix M with the
consecutive-ones property gives rise to a R-matrix M M T .
Noisy Seriation, 2-SUM and Continuous Relaxations. Given a binary symmetric matrix A, the 2-SUM problem can be expressed as follows:

min_{π ∈ P^n} Σ_{i=1}^{n} Σ_{j=1}^{n} A_{ij} (π(i) − π(j))².    (3)

A slightly simpler but equivalent formulation, defined via the Laplacian L_A = diag(A1) − A, is

min_{π ∈ P^n} π^T L_A π.    (4)
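The sketch below evaluates objective (4); since π^T L_A π = (1/2) Σ_{i,j} A_{ij}(π(i) − π(j))², it agrees with (3) up to a constant factor, which is why the two formulations are equivalent for optimization. The function name is ours.

```python
import numpy as np

def two_sum_objective(A, perm):
    # perm: permutation vector with entries 1, ..., n; A: symmetric similarity matrix.
    L = np.diag(A.sum(axis=1)) - A      # Laplacian L_A = diag(A1) - A
    return perm @ L @ perm              # objective (4)
```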
The seriation problem is closely related to the combinatorial 2-SUM problem, and Fogel et al. [2]
proved that if A is a pre-R-matrix such that each row/column has unique entries, then the solution to
the 2-SUM problem also solves the noiseless seriation problem. In another relaxation of the 2-SUM
problem, Atkins et al. [4] demonstrate that finding the second smallest eigenvalue, also known as the
Fiedler value, solves the noiseless seriation problem. Hence, the 2-SUM problem provides a good
model for the noisy seriation problem, where the similarity matrices are close to, but not exactly,
pre-R matrices.
The 2-SUM problem is known to be N P -hard [11], so we seek efficient relaxations. We describe
below two continuous relaxations that are computationally practical. (Other relaxations of these
problems require solution of semidefinite programs and are intractable in practice for large n.)
The spectral formulation of [4] seeks the Fiedler value by searching over the space orthogonal to the
vector 1, which is the eigenvector that corresponds to the zero eigenvalue. The Fiedler value is the
optimal objective value of the following problem:
min_{y ∈ R^n} y^T L_A y  such that  y^T 1 = 0, ‖y‖₂ = 1.    (5)

This problem is non-convex, but its solution can be found efficiently from an eigenvalue decomposition of L_A. With the Fiedler vector y, one can obtain a candidate solution to the 2-SUM problem by picking the permutation π ∈ P^n to have the same ordering as the elements of y. The spectral formulation (5) is a continuous relaxation of the 2-SUM problem (4).
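A minimal sketch of this spectral recipe, using a dense eigendecomposition for clarity (a sparse eigensolver would be the practical choice at large n); the function name is ours.

```python
import numpy as np

def spectral_ordering(A):
    # Solve (5) via an eigendecomposition of the Laplacian and read off an ordering.
    L = np.diag(A.sum(axis=1)) - A
    eigvals, eigvecs = np.linalg.eigh(L)
    fiedler = eigvecs[:, 1]             # eigenvector for the Fiedler value
    return np.argsort(fiedler)          # candidate ordering for the 2-SUM problem
```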
The second relaxation of (4), described by Fogel et al. [2], makes use of the Birkhoff polytope B^n. The basic version of the formulation is

min_{Π ∈ B^n} π_I^T Π^T L_A Π π_I,    (6)

(recall that π_I is the identity permutation (1, 2, ..., n)^T), which is a convex quadratic program over the n² components of Π. Fogel et al. augment and enhance this formulation as follows.

• Introduce a "tiebreaking" constraint e_1^T Π π_I + 1 ≤ e_n^T Π π_I to resolve ambiguity about the direction of the ordering, where e_k = (0, ..., 0, 1, 0, ..., 0)^T with the 1 in position k.
• Average over several perturbations of π_I to improve robustness of the solution.
• Add a penalty to maximize the Frobenius norm of the matrix Π, which pushes the solution closer to a vertex of the Birkhoff polytope.
• Incorporate additional ordering constraints of the form x_i − x_j ≥ δ_k, to exploit prior knowledge about the ordering.
With these modifications, the problem to be solved is

min_{Π ∈ B^n} (1/p) Trace(Y^T Π^T L_A Π Y) − (μ/p) ‖P Π‖_F²  such that  D Π π_I ≥ δ,    (7)

where each column of Y ∈ R^{n×p} is a slightly perturbed version of a permutation,¹ μ is the regularization coefficient, the constraint D Π π_I ≥ δ contains the ordering information and tiebreaking constraints, and the operator P = I − (1/n)11^T is the projection of Π onto elements orthogonal to the all-ones matrix. The penalization is applied to ‖PΠ‖_F² rather than to ‖Π‖_F² directly, thus ensuring that the program remains convex if the regularization factor is sufficiently small (for which a sufficient condition is μ < λ₂(L_A) λ₁(YY^T)). We will refer to this regularization scheme as the matrix-based regularization, and to the formulation (7) as the matrix-regularized Birkhoff-based convex formulation.
Figure 2 illustrates the permutahedron in the case of n = 3, and compares minimization of the objective y^T L_A y over the permutahedron (as attempted by the convex formulation) with minimization of the same objective over the constraints in the spectral formulation (5). The spectral method returns good solutions when the noise is low, and it is computationally efficient since there are many fast algorithms and software for obtaining selected eigenvectors. However, the Birkhoff-based convex formulation can return a solution that is significantly better in situations with high noise or significant additional ordering information. For the rest of this section, we will focus on the convex formulation.

¹In [2], each column of Y is said to contain a perturbation of π_I, but in a response to referees of their paper, the authors say that they used sorted uniform random vectors instead in the revised version.
Figure 2: A geometric interpretation of spectral and convex formulation solutions on the 3-permutahedron. The left image shows the 3-permutahedron in 3D space (axes x, y, z) and the dashed line shows the eigenvector 1 corresponding to the zero eigenvalue. The right image shows the projection of the 3-permutahedron along the trivial eigenvector together with the elliptical level curves of the objective function y^T L_A y. Points on the circumscribed circle have an ℓ_2-norm equal to that of a permutation, and the objective is minimized over this circle at the point denoted by a cross. The vertical line in the right figure enforces the tiebreaking constraint that 1 must appear before 3 in the ordering; the red dot indicates the minimizer of the objective over the resulting triangular feasible region. Without the tiebreaking constraint, the minimizer is at the center of the permutahedron.
A Compact Convex Relaxation via the Permutahedron/Sorting Network Polytope and a New Regularization Scheme. We consider now a different relaxation for the 2-SUM problem (4). Taking the convex hull of P^n directly, we obtain

min_{x ∈ PH^n}  x^T L_A x.    (8)
This is essentially a permutahedron-based version of (6). In fact, the two problems are equivalent, except that formulation (8) is more compact when we enforce x ∈ PH^n via the sorting network constraints

x ∈ {x_in | (x_in, x_rest) ∈ SN^n},

where SN^n incorporates the comparator constraints (1) and the output constraints (2). This formulation can be enhanced and augmented in a similar fashion to (6). The tiebreaking constraint for this formulation can be expressed simply as x_1 + 1 ≤ x_n, since x_in consists of the subvector (x_1, x_2, ..., x_n). (In both (8) and (6), having at least one additional constraint is necessary to remove the trivial solution given by the center of the permutahedron or Birkhoff polytope; see Figure 2.) This constraint is the strongest inequality that will not eliminate any permutation (assuming that a permutation and its reverse are equivalent); we include a proof of this fact in [5].
It is also helpful to introduce a penalty to force the solution x to be closer to a permutation, that is, a vertex of the permutahedron. To this end, we introduce a vector-based regularization scheme. The following statement is an immediate consequence of strict convexity of norms.

Proposition 3.2. Let v ∈ R^n, and let X be the convex hull of all permutations of v. Then, the points in X with the highest ℓ_p norm, for 1 < p < ∞, are precisely the permutations of v.
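A quick numerical sanity check of this statement on a three-dimensional example (our illustration, not part of the proof in [5]):

import numpy as np
from itertools import permutations

v = np.array([1.0, 2.0, 3.0])
perms = np.array(list(permutations(v)))
vertex_norm = np.linalg.norm(perms[0])   # identical for every permutation of v
interior = perms.mean(axis=0)            # a strict convex combination of vertices
assert np.linalg.norm(interior) < vertex_norm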
It follows that adding a penalty to encourage ‖x‖_2 to be large might improve solution quality. However, directly penalizing the negative of the 2-norm of x would destroy convexity, since L_A has a zero eigenvalue. Instead we penalize Px, where P = I − (1/n)11^T projects onto the subspace orthogonal to the trivial eigenvector 1. (Note that this projection of the permutahedron still satisfies the assumptions of Proposition 3.2.) When we include a penalty on ‖Px‖_2^2 in the formulation (8) along with side constraints Dx ≥ δ on the ordering, we obtain the objective x^T L_A x − μ‖Px‖_2^2, which leads to

min_{x ∈ PH^n}  x^T (L_A − μP) x   such that   Dx ≥ δ.    (9)

This objective is convex when μ ≤ λ_2(L_A), a looser condition on μ than is the case in matrix-based regularization. We will refer to (9) as the regularized permutahedron-based convex formulation.
Vector-based regularization can also be incorporated into the Birkhoff-based convex formulation. Instead of maximizing the ‖PΠ‖_F^2 term in formulation (7) to force the solution to be closer to a permutation, we could maximize ‖PΠY‖_F^2. The vector-regularized version of (6) with side constraints can be written as follows:

min_{Π ∈ B^n}  (1/p) Trace(Y^T Π^T (L_A − μP) Π Y)   such that   DΠπ_I ≥ δ.    (10)

We refer to this formulation as the vector-regularized Birkhoff-based convex formulation. Vector-based regularization is in some sense more natural than the regularization in (7). It acts directly on the set that we are optimizing over, rather than an expanded set. The looser condition μ ≤ λ_2(L_A) allows for stronger regularization. Experiments reported in [5] show that the vector-based regularization produces permutations that are consistently better than those obtained from the Birkhoff-based regularization.
The regularized permutahedron-based convex formulation is a convex QP with O(m) variables and constraints, where m is the number of comparators in its sorting network, while the Birkhoff-based one is a convex QP with O(n^2) variables. The one feature in the Birkhoff-based formulations that the permutahedron-based formulations do not have is the ability to average the solution over multiple vectors by choosing p > 1 columns in the matrix Y ∈ R^{n×p}. However, our experiments suggested that the best solutions were obtained for p = 1, so this consideration was not important in practice.
4 Key Implementation Issues
Choice of Sorting Network. There are numerous possible choices of the sorting network, from which the constraints in formulation (9) are derived. The asymptotically most compact option is the AKS sorting network, which contains Θ(n log n) comparators. This network was introduced in [12] and subsequently improved by others, but is impractical because of its difficulty of construction and the large constant factor in the complexity expression. We opt instead for more elegant networks with slightly worse asymptotic complexity. Batcher [13] introduced two sorting networks with Θ(n log^2 n) size (the odd-even sorting network and the bitonic sorting network) that are popular and practical. The sorting network polytope based on these can be generated with a simple recursive algorithm in Θ(n log^2 n) time.
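For concreteness, the comparator pairs of Batcher's odd-even mergesort can be enumerated with the standard iterative recipe; the sketch below is ours and assumes n is a power of two. Each returned pair (i, j) contributes one block of comparator constraints (1) to the sorting network polytope.

def batcher_comparators(n):
    # Standard iterative form of Batcher's odd-even mergesort network;
    # produces Theta(n log^2 n) comparator pairs (i, j) with i < j.
    comps = []
    p = 1
    while p < n:
        k = p
        while k >= 1:
            for j in range(k % p, n - k, 2 * k):
                for i in range(0, min(k, n - j - k)):
                    if (i + j) // (2 * p) == (i + j + k) // (2 * p):
                        comps.append((i + j, i + j + k))
            k //= 2
        p *= 2
    return comps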
Obtaining Permutations from a Point in the Permutahedron. Solution of the permutation-based relaxation yields a point x in the permutahedron, but we need techniques to convert this point into a valid permutation, which is a candidate solution for the 2-SUM problem (3). The most obvious recovery technique is to choose this permutation π to have the same ordering as the elements of x, that is, x_i < x_j implies π(i) < π(j), for all i, j ∈ {1, 2, ..., n}. We could also sample multiple permutations, by applying Gaussian noise to the components of x prior to taking the ordering to produce π. (We used i.i.d. noise with variance 0.5.) The 2-SUM objective (3) can be evaluated for each permutation so obtained, with the best one being reported as the overall solution. This inexpensive randomized recovery procedure can be repeated many times, and it yields significantly better results than the single "obvious" ordering.
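A minimal sketch of this randomized recovery (ours; L denotes the Laplacian of the similarity matrix, and the noise variance of 0.5 follows the text):

import numpy as np

def recover_permutation(x, L, n_samples=100, rng=None):
    # Evaluate the ordering induced by x itself, then by noisy copies of x,
    # keeping the permutation with the smallest 2-SUM value pi^T L pi.
    rng = np.random.default_rng(rng)
    sigma = 0.5 ** 0.5                      # standard deviation for variance 0.5
    best_pi, best_val = None, np.inf
    for k in range(n_samples + 1):
        noise = 0.0 if k == 0 else rng.normal(0.0, sigma, size=x.shape)
        pi = np.argsort(np.argsort(x + noise)) + 1.0   # ranks of x + noise
        val = pi @ L @ pi
        if val < best_val:
            best_pi, best_val = pi, val
    return best_pi, best_val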
Solving the Convex Formulation. On our test machine using the Gurobi interior point solver,
we were able to solve instances of the permutahedron-based convex formulation (9) of size up to
around n = 10000. As in [2], first-order methods can be employed when the scale is larger. In [5],
we provide an optimal O(n log n) algorithm for step (1), in the case in which only the tiebreaking
constraint is present, with no additional ordering constraints.
5 Experiments
We compare the run time and solution quality of algorithms on the two classes of convex formulations (Birkhoff-based and permutahedron-based) with various parameters. Summary results are presented in this section. Additional results, including more extensive experiments comparing the effects of different parameters on the solution quality, appear in [5].
Experimental Setup. The experiments were run on an Intel Xeon X5650 (24 cores @ 2.66 GHz) server with 128 GB of RAM in MATLAB 7.13, CVX 2.0 ([14], [15]), and Gurobi 5.5 [16]. We tested four formulation-algorithm-implementation variants, as follows: (i) the spectral method using the MATLAB eigs function; (ii) MATLAB/Gurobi on the permutahedron-based convex formulation; (iii) MATLAB/Gurobi on the Birkhoff-based convex formulation with p = 1 (that is, formulation (7) with Y = π_I); and (iv) experimental MATLAB code provided to us by the authors of [2] implementing FISTA, for solving the matrix-regularized Birkhoff-based convex formulation (7), with projection steps solved using block coordinate ascent on the dual problem. This is the current state-of-the-art algorithm for large instances of the Birkhoff-based convex formulation; we refer to it as RQPS (for "Regularized QP for Seriation"). We report run time data using wall clock time reported by Gurobi, and MATLAB timings for RQPS, excluding all preprocessing time. We used the bitonic sorting network by Batcher [13] for experiments with the permutahedron-based formulation.
Linear Markov Chain. The Markov chain reordering problem [2] involves recovering the ordering of a simple Markov chain with Gaussian noise from disordered samples. The Markov chain consists of random variables X_1, X_2, ..., X_n such that X_i = bX_{i−1} + ε_i, where b is a positive constant and ε_i ∼ N(0, σ^2). A sample covariance matrix taken over multiple independent samples of the Markov chain with permuted labels is used as the similarity matrix in the 2-SUM problem.

We use this problem for two different comparisons. First, we compare the solution quality and running time of the RQPS algorithm of [2] with the Gurobi interior-point solver on the regularized permutahedron-based convex formulation, to demonstrate the performance of the formulation and algorithm introduced in this paper compared with the prior state of the art. Second, we apply Gurobi to both the permutahedron-based and Birkhoff-based formulations with p = 1, with the goal of discovering which formulation is more efficient in practice.

For both sets of experiments, we fixed b = 0.999 and σ = 0.5 and generate 50 chains to form a sample covariance matrix. We chose n ∈ {500, 2000, 5000} to see how algorithm performance scales with n. For each n, we perform 10 independent runs, each based on a different set of samples of the Markov chain (and hence a different sample covariance matrix). We added n ordering constraints for each run. Each ordering constraint is of the form x_i + π(j) − π(i) ≤ x_j, where π is the (known) permutation that recovers the original matrix, and i, j ∈ [n] is a pair randomly chosen but satisfying π(j) − π(i) > 0. We used a regularization parameter of μ = 0.9 λ_2(L_A) on all formulations.
RQPS and the Permutahedron-Based Formulation. We compare the RQPS code for the matrix-regularized Birkhoff-based convex formulation (7) to the regularized permutahedron-based convex
formulation, solved with Gurobi. We fixed a time limit for each value of n, and ran the RQPS
algorithm until the limit was reached. At fixed time intervals, we query the current solution and
sample permutations from that point.
Figure 3: Plot of 2-SUM objective over time (in seconds) for n ∈ {500, 2000, 5000}. We choose the
run (out of ten) that shows the best results for RQPS relative to the interior-point algorithm for the
regularized permutahedron-based formulation. We test four different variants of RQPS. The curves
represent performance of the RQPS code for varying values of p (1 for red/green and n for blue/cyan)
and the cap on the maximum number of iterations for the projection step (10 for red/blue and 100 for
green/cyan). The white square represents the spectral solution, and the magenta diamond represents
the solution returned by Gurobi for the permutahedron-based formulation. The horizontal axis in
each graph is positioned at the 2-SUM objective corresponding to the permutation that recovers the
original labels for the sample covariance matrix.
For RQPS, with a cap of 10 iterations within each projection step, the objective tends to descend
rapidly to a certain level, then fluctuates around that level (or gets slightly worse) for the rest of the
running time. For a limit of 100 iterations, there is less fluctuation in 2-SUM value, but it takes some
time to produce a solution as good as the previous case. In contrast to experience reported in [2],
values of p greater than 1 do not seem to help; our runs for p = n plateaued at higher values of the
2-SUM objective than those with p = 1.
In most cases, the regularized permutahedron-based formulation gives a better solution value than
the RQPS method, but there are occasional exceptions to this trend. For example, in the third run for
n = 500 (the left plot in Figure 3), one variant of RQPS converges to a solution that is significantly
better. Despite its very fast runtimes, the spectral method does not yield solutions of competitive
quality, due to not being able to make use of the side constraint information.
Direct Comparison of Birkhoff and Permutahedron Formulations. For the second set of experiments, we compare the convergence rate of the objective value in the Gurobi interior-point solver applied to two equivalent formulations: the vector-regularized Birkhoff-based convex formulation (10) with p = 1 and the regularized permutahedron-based convex formulation (9). For each choice of input matrix and sampled ordering information, we ran the Gurobi interior-point method on both formulations. In Figure 4, we plot at each iteration the difference between the primal objective and the baseline objective value v.
Figure 4: Plot of the difference of the 2-SUM objective from the baseline objective over time (in seconds) for n ∈ {2000, 5000}. The red curve represents performance of the permutahedron-based
formulation; the blue curve represents the performance of the Birkhoff-based formulation. We display the best run (out of ten) for the Birkhoff-based formulation for each n. When n = 2000, the
permutahedron-based formulation converges slightly faster in most cases. However, once we scale
up to n = 5000, the permutahedron-based formulation converges significantly faster in all tests.
Our comparisons show that the permutahedron-based formulation tends to yield better solutions in
faster times than Birkhoff-based formulations, regardless of the algorithm used to solve the latter.
The advantage of the permutahedron-based formulation is more pronounced when n is large.
6 Future Work and Acknowledgements
We hope that this paper spurs further interest in using sorting networks in the context of other more general classes of permutation problems, such as graph matching or ranking. A direct adaptation of this approach is inadequate, since the permutahedron does not uniquely describe a convex combination of permutations, which is how the Birkhoff polytope is used in many such problems. However, when the permutation problem has a solution in the Birkhoff polytope that is close to an actual permutation, we should expect the loss of information when projecting this point in the Birkhoff polytope to the permutahedron to be insignificant.
We thank Okan Akalin and Taedong Kim for helpful comments and suggestions for the experiments. We thank the anonymous referees for feedback that improved the paper's presentation. We also thank the authors of [2] for sharing their experimental code, and Fajwal Fogel for helpful discussions. Lim's work on this project was supported in part by NSF Awards DMS-0914524 and DMS-1216318, and a grant from ExxonMobil. Wright's work was supported in part by NSF Award DMS-1216318, ONR Award N00014-13-1-0129, DOE Award DE-SC0002283, AFOSR Award FA9550-13-1-0138, and Subcontract 3F-30222 from Argonne National Laboratory.
References
[1] M. Goemans, "Smallest compact formulation for the permutahedron," Working Paper, 2010.
[2] F. Fogel, R. Jenatton, F. Bach, and A. d'Aspremont, "Convex Relaxations for Permutation Problems," in Advances in Neural Information Processing Systems, 2013, pp. 1016-1024.
[3] M. Fiori, P. Sprechmann, J. Vogelstein, P. Muse, and G. Sapiro, "Robust Multimodal Graph Matching: Sparse Coding Meets Graph Matching," in Advances in Neural Information Processing Systems, 2013, pp. 127-135.
[4] J. E. Atkins, E. G. Boman, and B. Hendrickson, "A Spectral Algorithm for Seriation and the Consecutive Ones Problem," SIAM Journal on Computing, vol. 28, no. 1, pp. 297-310, Jan. 1998.
[5] C. H. Lim and S. J. Wright, "Beyond the Birkhoff Polytope: Convex Relaxations for Vector Permutation Problems," arXiv:1407.6609, 2014.
[6] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms, 2nd ed. McGraw-Hill Higher Education, 2001.
[7] I. Liiv, "Seriation and matrix reordering methods: An historical overview," Statistical Analysis and Data Mining, vol. 3, no. 2, pp. 70-91, 2010.
[8] W. S. Robinson, "A Method for Chronologically Ordering Archaeological Deposits," American Antiquity, vol. 16, no. 4, p. 293, Apr. 1951.
[9] C. Ding and X. He, "Linearized cluster assignment via spectral ordering," in Proceedings of the Twenty-First International Conference on Machine Learning (ICML '04). New York, NY, USA: ACM Press, Jul. 2004, p. 30.
[10] R. Sokal and P. H. A. Sneath, Principles of Numerical Taxonomy. London: W. H. Freeman, 1963.
[11] A. George and A. Pothen, "An Analysis of Spectral Envelope Reduction via Quadratic Assignment Problems," SIAM Journal on Matrix Analysis and Applications, vol. 18, no. 3, pp. 706-732, Jul. 1997.
[12] M. Ajtai, J. Komlós, and E. Szemerédi, "An O(n log n) sorting network," in Proceedings of the Fifteenth Annual ACM Symposium on Theory of Computing (STOC '83). New York, NY, USA: ACM Press, Dec. 1983, pp. 1-9.
[13] K. E. Batcher, "Sorting networks and their applications," in Proceedings of the April 30-May 2, 1968, Spring Joint Computer Conference (AFIPS '68 Spring). New York, NY, USA: ACM Press, Apr. 1968, p. 307.
[14] M. Grant and S. Boyd, "CVX: Matlab software for disciplined convex programming, version 2.0," http://cvxr.com/cvx, Aug. 2012.
[15] M. Grant and S. Boyd, "Graph implementations for nonsmooth convex programs," in Recent Advances in Learning and Control, ser. Lecture Notes in Control and Information Sciences, V. Blondel, S. Boyd, and H. Kimura, Eds. Springer-Verlag Limited, 2008, pp. 95-110, http://stanford.edu/~boyd/graph_dcp.html.
[16] Gurobi Optimizer Reference Manual, Gurobi Optimization, Inc., 2014. [Online]. Available: http://www.gurobi.com
Bregman Alternating Direction Method of Multipliers
Huahua Wang, Arindam Banerjee
Dept of Computer Science & Engg, University of Minnesota, Twin Cities
{huwang,banerjee}@cs.umn.edu
Abstract
The mirror descent algorithm (MDA) generalizes gradient descent by using a
Bregman divergence to replace squared Euclidean distance. In this paper, we
similarly generalize the alternating direction method of multipliers (ADMM) to
Bregman ADMM (BADMM), which allows the choice of different Bregman divergences to exploit the structure of problems. BADMM provides a unified framework for ADMM and its variants, including generalized ADMM, inexact ADMM
and Bethe ADMM. We establish the global convergence and the O(1/T ) iteration
complexity for BADMM. In some cases, BADMM can be faster than ADMM by
a factor of O(n/ ln n) where n is the dimensionality. In solving the linear program of mass transportation problem, BADMM leads to massive parallelism and
can easily run on GPU. BADMM is several times faster than highly optimized
commercial software Gurobi.
1 Introduction
In recent years, the alternating direction method of multipliers (ADMM) [4] has been successfully
used in a broad spectrum of applications, ranging from image processing [11, 14] to applied statistics and machine learning [26, 25, 12]. ADMM considers the problem of minimizing composite
objective functions subject to an equality constraint:
min_{x ∈ X, z ∈ Z}  f(x) + g(z)   s.t.   Ax + Bz = c,    (1)
where f and g are convex functions, A ∈ R^{m×n_1}, B ∈ R^{m×n_2}, c ∈ R^{m×1}, x ∈ X ⊆ R^{n_1×1}, z ∈ Z ⊆ R^{n_2×1}, and X ⊆ R^{n_1} and Z ⊆ R^{n_2} are nonempty closed convex sets. f and g can be non-smooth functions, including indicator functions of convex sets. For further understanding of ADMM, we refer the readers to the comprehensive review by [4] and references therein. Many machine learning problems can be cast into the framework of minimizing a composite objective [22, 10], where f is a loss function such as the hinge or logistic loss, and g is a regularizer, e.g., the ℓ_1 norm, ℓ_2 norm, nuclear norm or total variation. The functions and constraints usually have different structures. Therefore, it is useful and sometimes necessary to split and solve them separately, which is exactly the forte of ADMM.
In each iteration, ADMM updates the splitting variables separately and alternately by solving the partial augmented Lagrangian of (1), where only the equality constraint is considered:

L_ρ(x, z, y) = f(x) + g(z) + ⟨y, Ax + Bz − c⟩ + (ρ/2)‖Ax + Bz − c‖_2^2,    (2)
where y ∈ R^m is the dual variable, ρ > 0 is the penalty parameter, and the quadratic penalty term is to penalize the violation of the equality constraint. ADMM consists of the following three updates:
x_{t+1} = argmin_{x ∈ X}  f(x) + ⟨y_t, Ax + Bz_t − c⟩ + (ρ/2)‖Ax + Bz_t − c‖_2^2,    (3)
z_{t+1} = argmin_{z ∈ Z}  g(z) + ⟨y_t, Ax_{t+1} + Bz − c⟩ + (ρ/2)‖Ax_{t+1} + Bz − c‖_2^2,    (4)
y_{t+1} = y_t + ρ(Ax_{t+1} + Bz_{t+1} − c).    (5)
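As a minimal sketch of how (3)-(5) translate to code (ours): prox_f and prox_g below are hypothetical oracles returning argmin_x f(x) + (ρ/2)‖Ax − v‖_2^2 and the analogous map for g and B, which is what (3)-(4) reduce to after completing the square.

import numpy as np

def admm(prox_f, prox_g, A, B, c, rho, n_iter=100):
    # Generic skeleton for updates (3)-(5); prox_f/prox_g solve the two
    # proximal minimization subproblems exactly.
    z = np.zeros(B.shape[1])
    y = np.zeros(c.shape[0])
    for _ in range(n_iter):
        x = prox_f(c - B @ z - y / rho, rho)   # x update (3)
        z = prox_g(c - A @ x - y / rho, rho)   # z update (4)
        y = y + rho * (A @ x + B @ z - c)      # dual update (5)
    return x, z, y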
Since the computational complexity of the y update (5) is trivial, the computational complexity of
ADMM is determined by the x and z updates (3)-(4) which amount to solving proximal minimization problems using the quadratic penalty term. Inexact ADMM [26, 4] and generalized ADMM [8]
have been proposed to solve the updates inexactly by linearizing the functions and adding additional
quadratic terms. Recently, online ADMM [25] and Bethe-ADMM [12] add an additional Bregman
divergence on the x update by keeping or linearizing the quadratic penalty term ‖Ax + Bz − c‖_2^2.
As far as we know, all existing ADMMs use quadratic penalty terms.
A large amount of literature shows that replacing the quadratic term by a Bregman divergence in gradient-type methods can greatly boost their performance in solving constrained optimization problems. First, the use of Bregman divergence could effectively exploit the structure of problems [6, 2, 10], e.g., in computerized tomography [3], clustering problems and exponential family distributions [1]. Second, in some cases, the gradient descent method with Kullback-Leibler (KL) divergence can outperform the method with the quadratic term by a factor of O(√(n/ln n)), where n is the dimensionality of the problem [2, 3]. The mirror descent algorithm (MDA) and composite objective mirror descent (COMID) [10] use Bregman divergence to replace the quadratic term in gradient descent or proximal gradient [7]. The proximal point method with D-functions (PMD) [6, 5] and Bregman proximal minimization (BPM) [20] generalize the proximal point method by using generalized Bregman divergence to replace the quadratic term.
For ADMM, although the convergence of ADMM is well understood, it is still unknown whether the quadratic penalty term in ADMM can be replaced by a Bregman divergence. The proof of global convergence of ADMM can be found in [13, 4]. Recently, it has been shown that ADMM converges at a rate of O(1/T) [25, 17], where T is the number of iterations. For strongly convex functions, the dual objective of an accelerated version of ADMM can converge at a rate of O(1/T^2) [15]. Under suitable assumptions like strongly convex functions or a sufficiently small step size for the dual variable update, ADMM can achieve a linear convergence rate [8, 19]. However, as pointed out by [4], "There is currently no proof of convergence known for ADMM with nonquadratic penalty terms."
In this paper, we propose Bregman ADMM (BADMM) which uses Bregman divergences to replace
the quadratic penalty term in ADMM, answering the question raised in [4]. More specifically, the
quadratic penalty term in the x and z updates (3)-(4) will be replaced by a Bregman divergence in
BADMM. We also introduce a generalized version of BADMM where two additional Bregman divergences are added to the x and z updates. The generalized BADMM (BADMM for short) provides
a unified framework for solving (1), which allows one to choose suitable Bregman divergence so that
the x and z updates can be solved efficiently. BADMM includes ADMM and its variants as special
cases. In particular, BADMM replaces all quadratic terms in generalized ADMM [8] with Bregman
divergences. By choosing a proper Bregman divergence, we also show that inexact ADMM [26] and
Bethe ADMM [12] can be considered as special cases of BADMM. BADMM generalizes ADMM
similar to how MDA generalizes gradient descent and how PMD generalizes proximal methods. In
BADMM, the x and z updates can take the form of MDA or PMD. We establish the global convergence and the O(1/T ) iteration complexity for BADMM. In some cases, we show that BADMM can
outperform ADMM by a factor O(n/ ln n). We evaluate the performance of BADMM in solving
the linear program problem of mass transportation [18]. Since BADMM takes use of the structure
of the problem, it leads to closed-form solutions which amounts to elementwise operations and can
be done in parallel. BADMM is faster than ADMM and can even be orders of magnitude faster than
highly optimized commercial software Gurobi when implemented on GPU.
The rest of the paper is organized as follows. In Section 2, we propose Bregman ADMM and
discuss several special cases of BADMM. In Section 3, we establish the convergence of BADMM.
In Section 4, we consider illustrative applications of BADMM, and conclude in Section 5.
2 Bregman Alternating Direction Method of Multipliers
Let φ : Ω → R be a continuously differentiable and strictly convex function on the relative interior of a convex set Ω. Denote ∇φ(y) as the gradient of φ at y. We define the Bregman divergence B_φ : Ω × ri(Ω) → R_+ induced by φ as

B_φ(x, y) = φ(x) − φ(y) − ⟨∇φ(y), x − y⟩.
Since φ is strictly convex, B_φ(x, y) ≥ 0, where the equality holds if and only if x = y. More details about Bregman divergence can be found in [6, 1]. Note that the definition of Bregman divergence has been generalized for nondifferentiable functions [20, 23]. In this paper, our discussion uses the definition of the classical Bregman divergence. Two of the most commonly used examples are the squared Euclidean distance B_φ(x, y) = (1/2)‖x − y‖_2^2 and the KL divergence B_φ(x, y) = Σ_{i=1}^n x_i log(x_i/y_i).
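Both examples are one-liners in code; the sketch below (ours) uses the generalized KL form, whose linear correction terms vanish on the unit simplex:

import numpy as np

def bregman_sqeuclid(x, y):
    # B_phi for phi(u) = (1/2)||u||_2^2: the squared Euclidean distance (halved).
    return 0.5 * np.sum((x - y) ** 2)

def bregman_kl(x, y):
    # B_phi for phi(u) = sum_i u_i log u_i (negative entropy); the linear
    # terms -sum(x) + sum(y) cancel when x and y both lie in the unit simplex.
    return np.sum(x * np.log(x / y)) - np.sum(x) + np.sum(y)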
Assuming B_φ(c − Ax, Bz) is well defined, we replace the quadratic penalty term in the partial augmented Lagrangian (2) by a Bregman divergence as follows:

L_ρ^φ(x, z, y) = f(x) + g(z) + ⟨y, Ax + Bz − c⟩ + ρ B_φ(c − Ax, Bz).    (6)
Unfortunately, we cannot derive Bregman ADMM (BADMM) updates by simply solving L_ρ^φ(x, z, y) alternatingly as ADMM does, because Bregman divergences are not necessarily convex in the second argument. More specifically, given (z_t, y_t), x_{t+1} can be obtained by solving min_{x ∈ X} L_ρ^φ(x, z_t, y_t), where the quadratic penalty term (1/2)‖Ax + Bz_t − c‖_2^2 for ADMM in (3) is replaced with B_φ(c − Ax, Bz_t) in the x update of BADMM. However, given (x_{t+1}, y_t), we cannot obtain z_{t+1} by solving min_{z ∈ Z} L_ρ^φ(x_{t+1}, z, y_t), since the term B_φ(c − Ax_{t+1}, Bz) need not be convex in z. The observation motivates a closer look at the role of the quadratic term in ADMM.

In standard ADMM, the quadratic augmentation term added to the Lagrangian is just a penalty term to ensure the new updates do not violate the equality constraint significantly. Staying with these goals, we propose the z update augmentation term of BADMM to be B_φ(Bz, c − Ax_{t+1}), instead of the quadratic penalty term (1/2)‖Ax_{t+1} + Bz − c‖_2^2 in (4). Then, we get the following updates for BADMM:
x_{t+1} = argmin_{x ∈ X}  f(x) + ⟨y_t, Ax + Bz_t − c⟩ + ρ B_φ(c − Ax, Bz_t),    (7)
z_{t+1} = argmin_{z ∈ Z}  g(z) + ⟨y_t, Ax_{t+1} + Bz − c⟩ + ρ B_φ(Bz, c − Ax_{t+1}),    (8)
y_{t+1} = y_t + ρ(Ax_{t+1} + Bz_{t+1} − c).    (9)
Compared to ADMM (3)-(5), BADMM simply uses a Bregman divergence to replace the quadratic penalty term in the x and z updates. It is worth noting that the same Bregman divergence B_φ is used in the x and z updates.

We consider a special case when A = −I, B = I, c = 0. (7) is reduced to

x_{t+1} = argmin_{x ∈ X}  f(x) + ⟨y_t, −x + z_t⟩ + ρ B_φ(x, z_t).    (10)
If φ is a quadratic function, the constrained problem (10) requires the projection onto the constraint set X. However, in some cases, by choosing a proper Bregman divergence, (10) can be solved efficiently or has a closed-form solution. For example, assuming f is a linear function and X is the unit simplex, choosing B_φ to be the KL divergence leads to the exponentiated gradient [2, 3, 21]. Interestingly, if the z update is also an exponentiated gradient, we have alternating exponentiated gradients. In Section 4, we will show the mass transportation problem can be cast into this scenario.

While the updates (7)-(8) use the same Bregman divergences, efficiently solving the x and z updates may not be feasible, especially when the structure of the original functions f, g, the function φ used for augmentation, and the constraint sets X, Z are rather different. For example, if f(x) is a logistic function in (10), it will not have a closed-form solution even if B_φ is the KL divergence and X is the unit simplex. To address such concerns, we propose a generalized version of BADMM.
2.1 Generalized BADMM
To allow the use of different Bregman divergences in the x and z updates (7)-(8) of BADMM, the
generalized BADMM simply introduces an additional Bregman divergence for each update. The
generalized BADMM has the following updates:
x_{t+1} = argmin_{x ∈ X}  f(x) + ⟨y_t, Ax + Bz_t − c⟩ + ρ B_φ(c − Ax, Bz_t) + ρ_x B_{φ_x}(x, x_t),    (11)
z_{t+1} = argmin_{z ∈ Z}  g(z) + ⟨y_t, Ax_{t+1} + Bz − c⟩ + ρ B_φ(Bz, c − Ax_{t+1}) + ρ_z B_{φ_z}(z, z_t),    (12)
y_{t+1} = y_t + τ(Ax_{t+1} + Bz_{t+1} − c),    (13)

where ρ > 0, τ > 0, ρ_x ≥ 0, ρ_z ≥ 0. Note that we allow the use of a different step size τ in the dual variable update [8, 19]. There are three Bregman divergences in the generalized BADMM. While
the Bregman divergence B_φ is shared by the x and z updates, the x update has its own Bregman divergence B_{φ_x} and the z update has its own Bregman divergence B_{φ_z}. The two additional Bregman divergences in generalized BADMM are variable specific, and can be chosen to make sure that the x_{t+1}, z_{t+1} updates are efficient. If all three Bregman divergences are quadratic functions, the generalized BADMM reduces to the generalized ADMM [8]. We prove convergence of generalized BADMM in Section 3, which yields the convergence of BADMM with ρ_x = ρ_z = 0.
In the following, we illustrate how to choose a proper Bregman divergence B_{φ_x} so that the x update can be solved efficiently, e.g., via a closed-form solution, noting that the same arguments apply to the z update. Consider the first three terms in (11) as s(x) + h(x), where s(x) denotes a simple term and h(x) is the problematic term which needs to be linearized for an efficient x update. We illustrate the idea with several examples later in the section. Now, we have

x_{t+1} = argmin_{x ∈ X}  s(x) + h(x) + ρ_x B_{φ_x}(x, x_t),    (14)
where efficient updates are difficult due to the mismatch in structure between h and X. The goal is to "linearize" the function h by using the fact that the Bregman divergence B_h(x, x_t) captures all the higher-order (beyond linear) terms in h(x), so that

h(x) − B_h(x, x_t) = h(x_t) + ⟨x − x_t, ∇h(x_t)⟩    (15)
is a linear function of x. Let ψ be another convex function such that one can efficiently solve min_{x ∈ X} s(x) + ψ(x) + ⟨x, b⟩ for any constant b. Assuming φ_x(x) = ψ(x) − (1/ρ_x) h(x) is continuously differentiable and strictly convex, we construct a Bregman divergence based proximal term to the original problem so that

argmin_{x ∈ X} s(x) + h(x) + ρ_x B_{φ_x}(x, x_t) = argmin_{x ∈ X} s(x) + ⟨∇h(x_t), x − x_t⟩ + ρ_x B_ψ(x, x_t),    (16)

where the latter problem can be solved efficiently, by our assumption. To ensure φ_x is continuously differentiable and strictly convex, we need the following condition:
Proposition 1 If h is smooth and has Lipschitz continuous gradients with constant ν under a p-norm, then φ_x is ν/ρ_x-strongly convex w.r.t. the p-norm.
This condition has been widely used in gradient-type methods, including MDA and COMID. Note that the convergence analysis of generalized BADMM in Section 3 holds for any additional Bregman divergence based proximal terms, and does not rely on such specific choices. Using the above idea, one can "linearize" different parts of the x update to yield an efficient update.

We consider three special cases, respectively focusing on linearizing the function f(x), linearizing the Bregman divergence based augmentation term B_φ(c − Ax, Bz_t), and linearizing both terms, along with examples for each case.
Case 1: Linearization of the smooth function f: Letting h(x) = f(x) in (16), we have

x_{t+1} = argmin_{x ∈ X}  ⟨∇f(x_t), x − x_t⟩ + ⟨y_t, Ax⟩ + ρ B_φ(c − Ax, Bz_t) + ρ_x B_ψ(x, x_t),    (17)

where ∇f(x_t) is the gradient of f(x) at x_t.
Example 1 Consider the following ADMM form for the sparse logistic regression problem [16, 4]:

min_x  h(x) + λ‖z‖_1   s.t.   x = z,    (18)

where h(x) is the logistic function. If we use ADMM to solve (18), the x update is as follows [4]:

x_{t+1} = argmin_x  h(x) + ⟨y_t, x − z_t⟩ + (ρ/2)‖x − z_t‖_2^2,    (19)

which is a ridge-regularized logistic regression problem, and one needs an iterative algorithm like L-BFGS to solve it. Instead, if we linearize h(x) at x_t and set B_ψ to be a quadratic function, then
x_{t+1} = argmin_x  ⟨∇h(x_t), x − x_t⟩ + ⟨y_t, x − z_t⟩ + (ρ/2)‖x − z_t‖_2^2 + (ρ_x/2)‖x − x_t‖_2^2,    (20)

and the x update has a simple closed-form solution.
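Setting the gradient of the quadratic surrogate in (20) to zero gives x_{t+1} = (ρ z_t + ρ_x x_t − ∇h(x_t) − y_t)/(ρ + ρ_x). A small sketch (ours; F is an assumed feature matrix and labels an assumed ±1 label vector):

import numpy as np

def linearized_x_update(x_t, z_t, y_t, F, labels, rho, rho_x):
    # Closed form of (20) for the logistic loss h(x) = sum_i log(1 + exp(-b_i f_i^T x)).
    margins = labels * (F @ x_t)                          # b_i * f_i^T x_t
    grad_h = -F.T @ (labels / (1.0 + np.exp(margins)))    # gradient of h at x_t
    return (rho * z_t + rho_x * x_t - grad_h - y_t) / (rho + rho_x)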
Case 2: Linearization of the quadratic penalty term: In ADMM, B_φ(c − Ax, Bz_t) = (1/2)‖Ax + Bz_t − c‖_2^2. Let h(x) = (ρ/2)‖Ax + Bz_t − c‖_2^2. Then ∇h(x_t) = ρA^T(Ax_t + Bz_t − c), and we have

x_{t+1} = argmin_{x ∈ X}  f(x) + ⟨y_t + ρ(Ax_t + Bz_t − c), Ax⟩ + ρ_x B_ψ(x, x_t).    (21)

This case mainly addresses the difficulty caused by the ‖Ax‖_2^2 term, which makes the x update nonseparable, whereas the linearized version can be solved with separable (parallel) updates. Several problems have benefited from the linearization of the quadratic term [8], e.g., when f is the ℓ_1 loss function [16], and projection onto the unit simplex or ℓ_1 ball [9].
Case 3: Mirror descent: In some settings, we want to linearize both the function f and the quadratic augmentation term B_φ(c − Ax, Bz_t) = (1/2)‖Ax + Bz_t − c‖_2^2. Letting h(x) = f(x) + ⟨y_t, Ax⟩ + (ρ/2)‖Ax + Bz_t − c‖_2^2, we have

x_{t+1} = argmin_{x ∈ X}  ⟨∇h(x_t), x⟩ + ρ_x B_ψ(x, x_t).    (22)

Note that (22) is an MDA-type update. Further, one can do a similar exercise with a general Bregman divergence based augmentation term B_φ(c − Ax, Bz_t), although there has to be a good motivation for going down this route.
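On the unit simplex with the KL divergence as B_ψ, the MDA-type update (22) has the familiar exponentiated-gradient closed form; a short sketch (ours):

import numpy as np

def mda_simplex_step(x_t, grad, rho_x):
    # argmin over the simplex of <grad, x> + rho_x * KL(x, x_t).
    w = x_t * np.exp(-grad / rho_x)
    return w / w.sum()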
Example 2 [Bethe-ADMM [12]] Consider an undirected graph G = (V, E), where V is the vertex set and E is the edge set. Assume a random discrete variable X_i associated with node i ∈ V can take K values. In a pairwise MRF, the joint distribution of a set of discrete random variables X = {X_1, ..., X_n} (n is the number of nodes in the graph) is defined in terms of nodes and cliques [24]. Consider solving the following graph-structured linear program (LP):

min_μ  l(μ)   s.t.   μ ∈ L(G),    (23)
where l(μ) is a linear function of μ and L(G) is the so-called local polytope [24] determined by the marginalization and normalization (MN) constraints for each node and edge in the graph G:

L(G) = {μ ≥ 0,  Σ_{x_i} μ_i(x_i) = 1,  Σ_{x_j} μ_ij(x_i, x_j) = μ_i(x_i)},    (24)
where μ_i, μ_ij are pseudo-marginal distributions of node i and edge ij, respectively. The LP in (23) contains O(nK + |E|K^2) variables and that order of constraints. In particular, (23) serves as an LP relaxation of the MAP inference problem in a pairwise MRF if l(μ) is defined as follows:

l(μ) = Σ_i Σ_{x_i} μ_i(x_i) θ_i(x_i) + Σ_{ij ∈ E} Σ_{x_i, x_j} μ_ij(x_i, x_j) θ_ij(x_i, x_j),    (25)

where θ_i, θ_ij are the potential functions of node i and edge ij, respectively.
For a grid graph (e.g., an image) of size 1000 × 1000, (23) contains millions of variables and constraints, posing a challenge to LP solvers. An efficient way is to decompose the graph into trees such that

min_{μ_τ}  Σ_τ c_τ l_τ(μ_τ)   s.t.   μ_τ ∈ T_τ,  μ_τ = m_τ,    (26)

where T_τ denotes the MN constraints (24) in the tree τ. μ_τ is a vector of pseudo-marginals of nodes and edges in the tree τ. m is a global variable which contains all trees, and m_τ corresponds to the tree τ in the global variable. c_τ is the weight for sharing variables. The augmented Lagrangian is

L_ρ(μ_τ, m, λ_τ) = Σ_τ ( c_τ l_τ(μ_τ) + ⟨λ_τ, μ_τ − m_τ⟩ + (ρ/2)‖μ_τ − m_τ‖_2^2 ),    (27)
which leads to the following update for μ_τ^{t+1} in ADMM:

μ_τ^{t+1} = argmin_{μ_τ ∈ T_τ}  c_τ l_τ(μ_τ) + ⟨λ_τ^t, μ_τ⟩ + (ρ/2)‖μ_τ − m_τ^t‖_2^2.    (28)

(28) is difficult to solve due to the MN constraints in the tree. Let h(μ_τ) be the objective of (28). Linearizing h(μ_τ) and adding a Bregman divergence in (28), we have

μ_τ^{t+1} = argmin_{μ_τ ∈ T_τ}  ⟨∇h(μ_τ^t), μ_τ⟩ + ρ_x B_ψ(μ_τ, μ_τ^t)
          = argmin_{μ_τ ∈ T_τ}  ⟨∇h(μ_τ^t) − ρ_x ∇ψ(μ_τ^t), μ_τ⟩ + ρ_x ψ(μ_τ).

If ψ(μ_τ) is the negative Bethe entropy of μ_τ, the update of μ_τ^{t+1} becomes the Bethe entropy problem [24] and can be solved exactly using the sum-product algorithm in linear time for any tree.
3 Convergence Analysis of BADMM
We need the following assumption in establishing the convergence of BADMM:

Assumption 1 (a) f : R^{n_1} → R ∪ {+∞} and g : R^{n_2} → R ∪ {+∞} are closed, proper and convex. (b) An optimal solution exists. (c) The Bregman divergence B_φ is defined on an α-strongly convex function φ with respect to a p-norm ‖·‖_p, i.e., B_φ(u, v) ≥ (α/2)‖u − v‖_p^2, where α > 0.

Assume that {x*, z*, y*} satisfies the KKT conditions of the Lagrangian of (1) (ρ = 0 in (2)), i.e.,

−A^T y* ∈ ∂f(x*),  −B^T y* ∈ ∂g(z*),  Ax* + Bz* − c = 0,    (29)
and x* ∈ X, z* ∈ Z. Note that X and Z are always satisfied in (11) and (12). Let f′(x_{t+1}) ∈ ∂f(x_{t+1}) and g′(z_{t+1}) ∈ ∂g(z_{t+1}). For x* ∈ X, z* ∈ Z, the optimality conditions of (11) and (12) are

⟨f′(x_{t+1}) + A^T{y_t + ρ(−∇φ(c − Ax_{t+1}) + ∇φ(Bz_t))} + ρ_x(∇φ_x(x_{t+1}) − ∇φ_x(x_t)),  x_{t+1} − x*⟩ ≤ 0,
⟨g′(z_{t+1}) + B^T{y_t + ρ(∇φ(Bz_{t+1}) − ∇φ(c − Ax_{t+1}))} + ρ_z(∇φ_z(z_{t+1}) − ∇φ_z(z_t)),  z_{t+1} − z*⟩ ≤ 0.
If Ax_{t+1} + Bz_{t+1} = c, then y_{t+1} = y_t. Further, if B_{φ_x}(x_{t+1}, x_t) = 0 and B_{φ_z}(z_{t+1}, z_t) = 0, then the KKT conditions in (29) will be satisfied. Therefore, we have the following sufficient conditions for the KKT conditions:

B_{φ_x}(x_{t+1}, x_t) = 0,  B_{φ_z}(z_{t+1}, z_t) = 0,    (30a)
Ax_{t+1} + Bz_t − c = 0,  Ax_{t+1} + Bz_{t+1} − c = 0.    (30b)
For the exact BADMM (ρ_x = ρ_z = 0 in (11) and (12)), the optimality conditions are (30b), which is equivalent to the optimality conditions of ADMM [4], i.e., Bz_{t+1} − Bz_t = 0, Ax_{t+1} + Bz_{t+1} − c = 0. Define the residual of the optimality conditions (30) at (t + 1) as:

R(t+1) = (ρ_x/ρ) B_{φ_x}(x_{t+1}, x_t) + (ρ_z/ρ) B_{φ_z}(z_{t+1}, z_t) + B_φ(c − Ax_{t+1}, Bz_t) + γ‖Ax_{t+1} + Bz_{t+1} − c‖_2^2,    (31)

where γ > 0. If R(t + 1) = 0, the optimality conditions (30a) and (30b) are satisfied. It is sufficient to show the convergence of BADMM by showing that R(t+1) converges to zero. The following theorem establishes the global convergence of BADMM.
Theorem 1 Let the sequence {x_t, z_t, y_t} be generated by BADMM (11)-(13), and let {x*, z*, y*} satisfy (29) with x* ∈ X, z* ∈ Z. Let Assumption 1 hold and τ ≤ (αβ − 2γ)ρ, where β = min{1, m^{2/p − 1}} and 0 < γ < αβ/2. Then R(t + 1) converges to zero and {x_t, z_t, y_t} converges to a KKT point {x*, z*, y*}.
Remark 1 (a) If 0 < p ≤ 2, then β = 1 and τ ≤ (α − 2γ)ρ. The case 0 < p ≤ 2 includes two widely used Bregman divergences, i.e., the Euclidean distance and the KL divergence. For the KL divergence on the unit simplex, we have α = 1, p = 1 in Assumption 1(c), i.e., KL(u, v) ≥ (1/2)‖u − v‖_1^2 [2].
(b) Since we often set B_ψ to be a quadratic function (p = 2), the three special cases in Section 2.1 could choose step size τ = (α − 2γ)ρ.
(c) If p > 2, β will be small, leading to a small step size τ which may not be necessary in practice. It would be interesting to see whether a large step size can be used for any p > 0.
The following theorem establishes an O(1/T) iteration complexity for the objective and the residual of constraints in an ergodic sense.

Theorem 2 Let the sequences {x_t, z_t, y_t} be generated by BADMM (11)-(13). Set τ ≤ (αβ − 2γ)ρ, where β = min{1, m^{2/p − 1}} and 0 < γ < αβ/2. Let x̄_T = (1/T)Σ_{t=1}^T x_t, z̄_T = (1/T)Σ_{t=1}^T z_t and y_0 = 0. For any x* ∈ X, z* ∈ Z and (x*, z*, y*) satisfying the KKT conditions (29), we have
f(x̄_T) + g(z̄_T) − (f(x*) + g(z*)) ≤ D_1 / T,    (32)
‖Ax̄_T + Bz̄_T − c‖_2^2 ≤ D(w*, w_0) / (γT),    (33)

where D_1 = ρ B_φ(Bz*, Bz_0) + ρ_x B_{φ_x}(x*, x_0) + ρ_z B_{φ_z}(z*, z_0) and D(w*, w_0) = (1/(2τρ))‖y* − y_0‖_2^2 + B_φ(Bz*, Bz_0) + (ρ_x/ρ) B_{φ_x}(x*, x_0) + (ρ_z/ρ) B_{φ_z}(z*, z_0).
We consider one special case of BADMM where B = I and X, Z are the unit simplex. Let B_φ be the KL divergence. For z* ∈ Z ⊆ R^{n_2×1}, choosing z_0 = e/n_2, we have B_φ(z*, z_0) = Σ_{i=1}^{n_2} z_i* ln(z_i*/z_{i,0}) = Σ_{i=1}^{n_2} z_i* ln z_i* + ln n_2 ≤ ln n_2. Similarly, if ρ_x > 0, by choosing x_0 = e/n_1, B_{φ_x}(x*, x_0) ≤ ln n_1. Setting α = 1, β = 1 and γ = 1/4 in Theorem 2 yields the following result:

Corollary 1 Let the sequences {x_t, z_t, y_t} be generated by Bregman ADMM (11)-(13) and y_0 = 0. Assume B = I, and X and Z are the unit simplex. Let B_φ, B_{φ_x}, B_{φ_z} be the KL divergence. Let x̄_T = (1/T)Σ_{t=1}^T x_t, z̄_T = (1/T)Σ_{t=1}^T z_t. Set τ = 3ρ/4. For any x* ∈ X, z* ∈ Z and (x*, z*, y*) satisfying the KKT conditions (29), we have
f(x̄_T) + g(z̄_T) − (f(x*) + g(z*)) ≤ (ρ ln n_2 + ρ_x ln n_1 + ρ_z ln n_2) / T,    (34)

‖Ax̄_T + Bz̄_T − c‖_2^2 ≤ ( (2/(τρ))‖y* − y_0‖_2^2 + 4 ln n_2 + (4ρ_x/ρ) ln n_1 + (4ρ_z/ρ) ln n_2 ) / T.    (35)
Remark 2 (a) [2] shows that MDA yields a similar O(ln n) bound, where n is the dimensionality of the problem. If the diminishing step size of MDA is proportional to √(ln n), the bound is O(√(ln n)). Therefore, MDA is faster than the gradient descent method by a factor of O((n/ln n)^{1/2}).
(b) In ADMM, B_φ(z*, z_0) = (1/2)‖z* − z_0‖_1^2 = (1/2)(Σ_{i=1}^n |z_i* − z_{i,0}|)^2 ≤ (n/2) Σ_{i=1}^n |z_i* − z_{i,0}|^2 ≤ n. Therefore, BADMM is faster than ADMM by a factor of O(n/ln n) in an ergodic sense.
4 Experimental Results
In this section, we use BADMM to solve the mass transportation problem [18]:

min  ⟨C, X⟩   s.t.   Xe = a,  X^T e = b,  X ≥ 0,    (36)

where ⟨C, X⟩ denotes Tr(C^T X), C ∈ R^{m×n} is a cost matrix, X ∈ R^{m×n}, a ∈ R^{m×1}, b ∈ R^{n×1}, and e is a column vector of ones. The mass transportation problem (36) is a linear program and thus can be solved by the simplex method.
We now show that (36) can be solved by ADMM and BADMM. We first introduce a variable Z to split the constraints into two simplex-type sets Δ_x = {X | X ≥ 0, Xe = a} and Δ_z = {Z | Z ≥ 0, Z^T e = b}. (36) can be rewritten in the following ADMM form:

min  ⟨C, X⟩   s.t.   X ∈ Δ_x,  Z ∈ Δ_z,  X = Z.    (37)
(37) can be solved by ADMM, which requires the Euclidean projection onto the simplices Δ_x and Δ_z, although the projection can be done efficiently [9]. We use BADMM to solve (37):

X^{t+1} = argmin_{X ∈ Δ_x}  ⟨C, X⟩ + ⟨Y^t, X⟩ + ρ KL(X, Z^t),    (38)
Z^{t+1} = argmin_{Z ∈ Δ_z}  ⟨Y^t, −Z⟩ + ρ KL(Z, X^{t+1}),    (39)
Y^{t+1} = Y^t + ρ(X^{t+1} − Z^{t+1}).    (40)
Both (38) and (39) have closed-form solutions, i.e.,

X_{ij}^{t+1} = a_i · Z_{ij}^t exp(−(C_{ij} + Y_{ij}^t)/ρ) / Σ_{j=1}^n Z_{ij}^t exp(−(C_{ij} + Y_{ij}^t)/ρ),
Z_{ij}^{t+1} = b_j · X_{ij}^{t+1} exp(Y_{ij}^t/ρ) / Σ_{i=1}^m X_{ij}^{t+1} exp(Y_{ij}^t/ρ),    (41)
which are exponentiated gradient updates and can be done in O(mn). Besides the sum operation
(O(ln n) or O(ln m)), (41) amounts to elementwise operation and thus can be done in parallel.
According to Corollary 1, BADMM can be faster than ADMM by a factor of O(n/ ln n).
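The updates (38)-(41) are a few lines of NumPy; the sketch below is ours (the authors' implementation is the GPU code linked in the footnote), and for small ρ a robust version would work in the log domain to avoid overflow:

import numpy as np

def badmm_transport(C, a, b, rho=0.001, n_iter=2000):
    # Alternating exponentiated-gradient updates (41) for problem (36)-(37).
    m, n = C.shape
    Z = np.full((m, n), 1.0 / m)
    Y = np.zeros((m, n))
    for _ in range(n_iter):
        W = Z * np.exp(-(C + Y) / rho)         # X update (38): row-normalized
        X = a[:, None] * W / W.sum(axis=1, keepdims=True)
        V = X * np.exp(Y / rho)                # Z update (39): column-normalized
        Z = b[None, :] * V / V.sum(axis=0, keepdims=True)
        Y = Y + rho * (X - Z)                  # dual update (40)
    return X, Z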
We compare BADMM with ADMM and a commercial LP solver, Gurobi, on the mass transportation problem (36) with m = n and a = b = e. C is randomly generated from the uniform distribution. We run the experiments 5 times and the average is reported. We choose the 'best' parameter for BADMM (ρ = 0.001) and ADMM (ρ = 0.001). The stopping condition is either when the number of iterations exceeds 2000 or when the primal-dual residual is less than 10^{-4}.

BADMM vs ADMM: Figure 1 compares BADMM and ADMM with different dimensions n ∈ {1000, 2000, 4000} running on a single CPU. Figure 1(a) plots the primal and dual residual against the runtime when n = 1000, and Figure 1(b) plots the convergence of the primal and dual residual over iterations when n = 2000. BADMM converges faster than ADMM. Figure 1(c) plots the convergence of the objective value against the runtime when n = 4000. BADMM converges faster than ADMM even when the initial point is further from the optimum.
Figure 1: Comparison of BADMM and ADMM. BADMM converges faster than ADMM. (a): the primal and dual residual against the runtime (m = n = 1000). (b): the primal and dual residual over iterations (m = n = 2000). (c): the convergence of the objective value against the runtime (m = n = 4000).
Table 1: Comparison of BADMM (GPU) with Gurobi in solving the mass transportation problem. A '-' indicates that the algorithm did not terminate.

m × n (number of variables)   | Gurobi (Laptop) time (s) / objective | Gurobi (Server) time (s) / objective | BADMM (GPU) time (s) / objective
(2^10)^2 (> 1 million)        | 4.22 / 1.69                          | 2.66 / 1.69                          | 0.54 / 1.69
(5 × 2^10)^2 (> 25 million)   | 377.14 / 1.61                        | 92.89 / 1.61                         | 22.15 / 1.61
(10 × 2^10)^2 (> 0.1 billion) | -                                    | 1235.34 / 1.65                       | 117.75 / 1.65
(15 × 2^10)^2 (> 0.2 billion) | -                                    | -                                    | 303.54 / 1.63
BADMM vs Gurobi: Gurobi (http://www.gurobi.com/) is highly optimized commercial software in which linear programming solvers have been efficiently implemented. We run Gurobi in two settings: a Mac laptop with 8 GB memory and a server with 86 GB memory, respectively. For comparison, BADMM is run in parallel on a Tesla M2070 GPU with 5 GB memory and 448 cores.¹ We experiment with large scale problems and use m = n = {1, 5, 10, 15} × 2^10. Table 1 shows the runtime and the objective values of BADMM and Gurobi, where a '-' indicates the algorithm did not terminate. In spite of Gurobi being one of the most optimized LP solvers, BADMM running in parallel is several times faster than Gurobi. In fact, for larger values of n, Gurobi did not terminate even on the 86 GB server, whereas BADMM was efficient even with just 5 GB memory! The memory consumption of Gurobi increases rapidly with the increase of n, especially at the scales we consider. When n = 5 × 2^10, the memory required by Gurobi surpassed the memory in the laptop, leading to the rapid increase of time. A similar situation was also observed on the server with 86 GB when n = 10 × 2^10. In contrast, the memory required by BADMM is O(n^2): even when n = 15 × 2^10 (more than 0.2 billion parameters), BADMM can still run on a single GPU with only 5 GB memory. The results clearly illustrate the promise of BADMM. With more careful implementation and code optimization, BADMM has the potential to solve large scale problems efficiently in parallel with a small memory footprint.
5 Conclusions
In this paper, we generalized the alternating direction method of multipliers (ADMM) to Bregman
ADMM, similar to how mirror descent generalizes gradient descent. BADMM defines a unified
framework for ADMM, generalized ADMM, inexact ADMM and Bethe ADMM. The global convergence and the O(1/T ) iteration complexity of BADMM are also established. In some cases,
BADMM is faster than ADMM by a factor of O(n/ ln n). BADMM is also faster than highly optimized commercial software in solving the linear program of mass transportation problem.
Acknowledgment
The research was supported by NSF grants IIS-1447566, IIS-1422557, CCF-1451986, CNS-1314560, IIS-0953274, IIS-1029711, IIS-0916750, and by NASA grant NNX12AQ39A. H.W. and A.B. acknowledge the
technical support from the University of Minnesota Supercomputing Institute. H.W. acknowledges the support
of DDF (2013-2014) from the University of Minnesota. A.B. acknowledges support from IBM and Yahoo.
¹GPU code is available at https://github.com/anteagle/GPU_BADMM_MT
References
[1] A. Banerjee, S. Merugu, I. Dhillon, and J. Ghosh. Clustering with Bregman divergences. JMLR, 6:1705-1749, 2005.
[2] A. Beck and M. Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31:167-175, 2003.
[3] A. Ben-Tal, T. Margalit, and A. Nemirovski. The ordered subsets mirror descent optimization method with applications to tomography. SIAM Journal on Optimization, 12:79-108, 2001.
[4] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1-122, 2011.
[5] Y. Censor and S. Zenios. Parallel Optimization: Theory, Algorithms, and Applications. Oxford University Press, 1998.
[6] G. Chen and M. Teboulle. Convergence analysis of a proximal-like minimization algorithm using Bregman functions. SIAM Journal on Optimization, 3:538-543, 1993.
[7] P. Combettes and J. Pesquet. Proximal splitting methods in signal processing. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering, Springer (Ed.), pages 185-212, 2011.
[8] W. Deng and W. Yin. On the global and linear convergence of the generalized alternating direction method of multipliers. ArXiv, 2012.
[9] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the ℓ_1-ball for learning in high dimensions. In ICML, pages 272-279, 2008.
[10] J. Duchi, S. Shalev-Shwartz, Y. Singer, and A. Tewari. Composite objective mirror descent. In COLT, 2010.
[11] M. A. T. Figueiredo and J. M. Bioucas-Dias. Restoration of Poissonian images using alternating direction optimization. IEEE Transactions on Image Processing, 19:3133-3145, 2010.
[12] Q. Fu, H. Wang, and A. Banerjee. Bethe-ADMM for tree decomposition based parallel MAP inference. In UAI, 2013.
[13] D. Gabay. Applications of the method of multipliers to variational inequalities. In Augmented Lagrangian Methods: Applications to the Solution of Boundary-Value Problems. M. Fortin and R. Glowinski, eds., North-Holland: Amsterdam, 1983.
[14] T. Goldstein, X. Bresson, and S. Osher. Geometric applications of the split Bregman method: segmentation and surface reconstruction. Journal of Scientific Computing, 45(1):272-293, 2010.
[15] T. Goldstein, B. O'Donoghue, and S. Setzer. Fast alternating direction optimization methods. CAM report 12-35, UCLA, 2012.
[16] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, 2009.
[17] B. He and X. Yuan. On the O(1/n) convergence rate of the Douglas-Rachford alternating direction method. SIAM Journal on Numerical Analysis, 50:700-709, 2012.
[18] F. L. Hitchcock. The distribution of a product from several sources to numerous localities. Journal of Mathematical Physics, 20:224-230, 1941.
[19] M. Hong and Z. Luo. On the linear convergence of the alternating direction method of multipliers. ArXiv, 2012.
[20] K. C. Kiwiel. Proximal minimization methods with generalized Bregman functions. SIAM Journal on Control and Optimization, 35:1142-1168, 1995.
[21] A. Nemirovski and D. Yudin. Problem Complexity and Method Efficiency in Optimization. Wiley, 1983.
[22] Y. Nesterov. Gradient methods for minimizing composite objective function. Technical Report 76, Center for Operations Research and Econometrics (CORE), Catholic University of Louvain (UCL), 2007.
[23] M. Telgarsky and S. Dasgupta. Agglomerative Bregman clustering. In ICML, 2012.
[24] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1:1-305, 2008.
[25] H. Wang and A. Banerjee. Online alternating direction method. In ICML, 2012.
[26] J. Yang and Y. Zhang. Alternating direction algorithms for L1-problems in compressive sensing. ArXiv, 2009.
9
| 5612 |@word version:4 norm:5 linearized:2 decomposition:1 tr:1 initial:1 contains:3 zij:3 interestingly:1 existing:1 ka:2 com:2 luo:1 chu:1 gpu:7 numerical:1 engg:1 plot:3 update:43 v:2 short:1 core:1 provides:2 node:7 zhang:1 mathematical:1 along:1 yuan:1 consists:1 prove:1 kiwiel:1 introduce:2 x0:4 pairwise:2 rapid:1 nonseparable:1 cpu:1 solver:4 becomes:1 xx:2 mass:8 laptop:3 argmin:3 kaxk22:1 compressive:1 unified:3 ghosh:1 pseudo:2 runtime:7 exactly:2 axt:19 rm:8 k2:1 control:1 unit:6 grant:2 t1:3 bioucas:1 understood:1 local:1 engineering:1 xyii:1 oxford:1 establishing:1 therein:1 nemirovski:2 bi:1 acknowledgment:1 practice:1 significantly:1 composite:5 projection:5 boyd:1 spite:1 get:1 cannot:1 interior:1 onto:4 bh:2 www:1 equivalent:1 map:2 lagrangian:6 transportation:8 yt:18 pn2:2 center:1 economics:1 convex:17 ergodic:2 splitting:2 nuclear:1 variation:1 pt:1 commercial:5 massive:1 exact:1 programming:1 ck22:13 us:3 trend:2 element:1 satisfying:2 observed:1 role:1 wang:3 solved:9 capture:1 complexity:7 nesterov:1 cam:1 solving:13 efficiency:1 easily:1 joint:1 regularizer:1 fast:1 hitchcock:1 choosing:5 shalev:2 widely:2 solve:9 larger:1 statistic:1 online:2 sequence:3 differentiable:3 ucl:1 propose:4 reconstruction:1 product:2 rapidly:1 achieve:1 ky:2 billion:3 convergence:22 optimum:1 badmm:78 telgarsky:1 converges:7 staying:1 ben:1 derive:1 illustrate:3 linearize:4 ij:10 solves:1 implemented:2 c:1 huwang:1 direction:13 foot:1 hx:2 decompose:1 proposition:1 yij:3 strictly:4 hold:3 sufficiently:1 considered:2 exp:4 bj:1 currently:1 city:1 successfully:1 establishes:2 minimization:4 clearly:1 always:1 rather:1 pn:3 corollary:2 ax:17 indicates:1 mainly:1 greatly:1 contrast:1 comid:2 sense:2 censor:1 inference:4 stopping:1 bt:2 diminishing:1 margalit:1 bpm:1 going:1 dual:11 colt:1 yahoo:1 constrained:2 raised:1 special:7 marginal:1 construct:1 broad:1 look:1 icml:3 simplex:9 report:2 randomly:1 divergence:49 comprehensive:1 beck:1 replaced:3 argminx:12 cns:1 n1:5 friedman:1 highly:4 mining:1 umn:1 violation:1 introduces:1 primal:7 hg:1 bregman:54 hyt:13 closer:1 partial:2 necessary:2 edge:5 fu:1 tree:8 euclidean:4 column:1 teboulle:2 bresson:1 restoration:1 cost:1 mac:1 vertex:1 subset:1 uniform:1 reported:1 proximal:11 siam:4 physic:1 continuously:3 squared:2 augmentation:6 satisfied:3 rn1:3 choose:4 leading:2 potential:2 bfgs:1 rn2:4 twin:1 includes:2 north:1 ddf:1 satisfy:1 later:1 closed:8 hf:1 parallel:8 merugu:1 efficiently:9 yield:4 generalize:2 computerized:1 worth:1 alternatingly:1 sharing:1 ed:2 definition:2 inexact:4 against:3 proof:2 associated:1 dimensionality:3 pmd:3 organized:1 segmentation:1 goldstein:2 nasa:1 focusing:1 higher:1 done:4 strongly:4 just:2 replacing:1 nonlinear:1 banerjee:5 defines:1 logistic:5 scientific:1 k22:9 multiplier:9 ccf:1 equality:5 alternating:14 leibler:1 dhillon:1 forte:1 illustrative:1 linearizing:6 generalized:21 hong:1 ridge:1 tt:3 duchi:2 l1:2 ranging:1 image:4 variational:2 arindam:1 recently:2 parikh:1 mt:1 smilar:1 million:3 rachford:1 he:1 elementwise:2 marginals:1 refer:1 ai:1 grid:1 pm:1 similarly:2 pointed:1 minnesota:3 surface:1 add:1 own:2 recent:1 scenario:1 route:1 server:4 inequality:1 xe:2 yi:1 processsing:1 additional:6 deng:1 converge:1 signal:1 ii:4 violate:1 reduces:1 smooth:3 exceeds:1 faster:13 technical:2 kax:8 variant:2 regression:2 mrf:2 prediction:1 chandra:1 surpassed:1 bz:19 arxiv:3 iteration:10 sometimes:1 normalization:1 penalize:1 whereas:2 want:1 separately:2 bz0:2 source:1 rest:1 sure:1 subject:1 
induced:1 undirected:1 jordan:1 noting:2 yk22:1 yang:1 split:3 marginalization:1 xj:4 zi:8 pesquet:1 hastie:1 zenios:1 idea:2 donoghue:1 whether:2 setzer:1 nonquadratic:1 penalty:15 remark:2 useful:1 tewari:1 amount:4 tomography:2 argminz:4 reduced:1 http:2 outperform:2 xij:4 problematic:1 nsf:1 tibshirani:1 discrete:2 promise:1 dasgupta:1 douglas:1 graph:6 relaxation:1 subgradient:1 year:1 sum:2 run:5 prob:1 letter:1 inverse:1 catholic:1 family:2 reader:1 bound:2 ct:1 quadratic:26 replaces:1 mda:9 constraint:14 ri:1 software:4 hy:3 tal:1 ucla:1 argument:2 min:7 optimality:5 separable:1 structured:1 according:1 ball:2 y0:4 lp:6 lem:1 osher:1 ln:23 discus:1 nnx12aq39a:1 nonempty:1 singer:2 know:1 serf:1 dia:1 generalizes:5 operation:5 rewritten:1 available:1 apply:1 pnorm:1 original:2 denotes:3 clustering:3 ensure:2 running:2 graphical:1 hinge:1 exploit:2 especially:2 establish:3 classical:1 objective:15 question:1 added:2 print:1 gradient:16 admms:1 minx:4 distance:3 nondifferentiable:1 w0:2 consumption:1 polytope:1 agglomerative:1 considers:1 evaluate:1 trivial:1 assuming:3 besides:1 code:2 minimizing:3 difficult:2 unfortunately:1 cij:1 kzk1:1 negative:1 implementation:1 zt:38 proper:4 unknown:1 motivates:1 observation:1 acknowledge:1 descent:14 kaxt:3 situation:1 glowinski:1 peleato:1 cast:2 required:2 gurobi:16 kl:11 optimized:5 eckstein:1 huahua:1 louvain:1 established:1 boost:1 address:1 beyond:1 poissonian:1 parallelism:1 usually:1 mismatch:1 challenge:1 program:5 including:3 memory:10 wainwright:1 suitable:2 rely:1 regularized:1 indicator:1 residual:9 mn:4 github:1 fortin:1 numerous:1 acknowledges:2 review:1 understanding:1 literature:1 geometric:1 relative:1 loss:3 interesting:1 foundation:2 sufficient:2 ibm:1 supported:1 keeping:1 figueiredo:1 exponentiated:4 allow:2 institute:1 sparse:1 distributed:1 boundary:1 axi:3 xn:1 dimension:2 yudin:1 kz:2 commonly:1 projected:1 supercomputing:1 far:1 transaction:1 kullback:1 clique:1 global:8 kkt:6 uai:1 conclude:1 xi:17 shwartz:2 alternatively:1 spectrum:1 continuous:1 iterative:1 table:2 bethe:8 ku:2 bzt:32 terminate:2 posing:1 hc:4 necessarily:1 did:2 motivation:1 n2:10 gabay:1 tesla:1 x1:1 augmented:4 benefited:1 k2p:1 wiley:1 combettes:1 exponential:2 exercise:1 answering:1 jmlr:1 minz:1 theorem:5 z0:6 xt:63 specific:2 showing:1 sensing:1 concern:1 exists:1 adding:2 effectively:1 ci:8 mirror:8 magnitude:1 linearization:3 kx:4 nk:1 chen:1 locality:1 entropy:2 yin:1 simply:3 amsterdam:1 ordered:1 holland:1 springer:2 corresponds:1 inexactly:1 satisfies:1 goal:2 careful:1 replace:6 admm:76 feasible:1 shared:1 lipschitz:1 determined:2 specifically:2 total:1 called:1 experimental:1 support:3 latter:1 accelerated:1 dept:1 d1:2 |
Multi-Step Stochastic ADMM in High Dimensions:
Applications to Sparse Optimization
and Matrix Decomposition
Hanie Sedghi
Univ. of Southern California
Los Angeles, CA 90089
[email protected]
Anima Anandkumar
University of California
Irvine, CA 92697
[email protected]
Edmond Jonckheere
Univ. of Southern California
Los Angeles, CA 90089
[email protected]
Abstract
In this paper, we consider a multi-step version of the stochastic ADMM method
with efficient guarantees for high-dimensional problems. We first analyze the
simple setting, where the optimization problem consists of a loss function and
a single regularizer (e.g. sparse optimization), and then extend to the multi-block
setting with multiple regularizers and multiple variables (e.g. matrix decomposition into sparse and low rank components). For the sparse optimization problem,
our method achieves the minimax rate of O(s log d/T ) for s-sparse problems in
d dimensions in T steps, and is thus, unimprovable by any method up to constant
factors. For the matrix decomposition problem with a general loss function, we
analyze the multi-step ADMM with multiple blocks. We establish O(1/T ) rate
and efficient scaling as the size of matrix grows. For natural noise models (e.g.
independent noise), our convergence rate is minimax-optimal. Thus, we establish
tight convergence guarantees for multi-block ADMM in high dimensions. Experiments show that for both sparse optimization and matrix decomposition problems,
our algorithm outperforms the state-of-the-art methods.
1 Introduction
Stochastic optimization techniques have been extensively employed for online machine learning
on data which is uncertain, noisy or missing. Typically it involves performing a large number of
inexpensive iterative updates, making it scalable for large-scale learning. In contrast, traditional
batch-based techniques involve far more expensive operations for each update step. Stochastic optimization has been analyzed in a number of recent works.
The alternating direction method of multipliers (ADMM) is a popular method for online and distributed optimization on a large scale [1], and is employed in many applications. It can be viewed as
a decomposition procedure where solutions to sub-problems are found locally, and coordinated via
constraints to find the global solution. Specifically, it is a form of augmented Lagrangian method
which applies partial updates to the dual variables. ADMM is often applied to solve regularized
problems, where the function optimization and regularization can be carried out locally, and then
coordinated globally via constraints. Regularized optimization problems are especially relevant in
the high dimensional regime since regularization is a natural mechanism to overcome ill-posedness
and to encourage parsimony in the optimal solution, e.g., sparsity and low rank. Due to the efficiency
of ADMM in solving regularized problems, we employ it in this paper.
We consider a simple modification to the (inexact) stochastic ADMM method [2] by incorporating
multiple steps or epochs, which can be viewed as a form of annealing. We establish that this simple
modification has huge implications in achieving tight bounds on convergence rate as the dimensions
of the problem instances scale. In each iteration, we employ projections onto certain norm balls of appropriate radii, and we decrease the radii in epochs over time. For instance, for the sparse optimization problem, we constrain the optimal solution at each step to be within an ℓ1-norm ball around the initial estimate, obtained at the beginning of each epoch. At the end of the epoch, an average is computed and passed on to the next epoch as its initial estimate. Note that the ℓ1 projection can be solved efficiently in linear time, and can also be parallelized easily [3]. For matrix decomposition with a general loss function, the ADMM method requires multiple blocks for updating the low-rank and sparse components. We apply the same principle and project the sparse and low-rank estimates onto ℓ1 and nuclear-norm balls, and these projections can be computed efficiently.
Theoretical implications: The above simple modifications to ADMM have huge implications for high-dimensional problems. For sparse optimization, our convergence rate is O(s log d/T) for s-sparse problems in d dimensions in T steps. Our bound has the best of both worlds: efficient high-dimensional scaling (as log d) and efficient convergence rate (as 1/T). This also matches the minimax rate for the linear model and square loss function [4], which implies that our guarantee is unimprovable by any (batch or online) algorithm (up to constant factors). For matrix decomposition, our convergence rate is O((s + r)β²(p) log p/T) + O(α² max{s + r, p}/p²) for a p × p input matrix in T steps, where the sparse part has s non-zero entries and the low-rank part has rank r. For many natural noise models (e.g., independent noise, linear Bayesian networks), β²(p) = p, and the resulting
convergence rate is minimax-optimal. Note that our bound is not only on the reconstruction error,
but also on the error in recovering the sparse and low rank components. These are the first convergence guarantees for online matrix decomposition in high dimensions. Moreover, our convergence
rate holds with high probability when noisy samples are input, in contrast to expected convergence
rate, typically analyzed in the literature. See Table 1, 2 for comparison of this work with related
frameworks. Proof of all results and implementation details can be found in the longer version [5].
Practical implications: The proposed algorithms provide significantly faster convergence in high
dimension and better robustness to noise. For sparse optimization, our method has significantly
better accuracy compared to the stochastic ADMM method and better performance than RADAR,
based on multi-step dual averaging [6]. For matrix decomposition, we compare our method with the
state-of-the-art inexact ALM [7] method. While both methods have similar reconstruction performance,
our method has significantly better accuracy in recovering the sparse and low rank components.
Related Work: ADMM: Existing online ADMM-based methods lack high-dimensional guarantees. They scale poorly with the data dimension (as O(d²)), and also have slow convergence for general problems (as O(1/√T)). Under strong convexity, the convergence rate can be improved to O(1/T), but only in expectation: such analyses ignore the per-sample error and consider only the expected convergence rate (see Table 1). In contrast, our bounds hold with high probability. Some
stochastic ADMM methods, Goldstein et al. [8], Deng [9] and Luo [10] provide faster rates for
stochastic ADMM, than the rate noted in Table 1. However, they require strong conditions which
are not satisfied for the optimization problems considered here, e.g., Goldstein et al. [8] require both
the loss function and the regularizer to be strongly convex.
Related Work: Sparse Optimization: For the sparse optimization problem, ℓ1 regularization is
employed and the underlying true parameter is assumed to be sparse. This is a well-studied problem
in a number of works (for details, refer to [6]). Agarwal et al. [6] propose an efficient online method
based on dual averaging, which achieves the same optimal rates as the ones derived in this paper. The
main difference is that our ADMM method is capable of solving the problem for multiple random
variables and multiple conditions while their method cannot incorporate these extensions.
Related Work: Matrix Decomposition: To the best of our knowledge, online guarantees for high-dimensional matrix decomposition have not been provided before. Wang et al. [12] propose a multi-block ADMM method for the matrix decomposition problem, but only provide convergence rate analysis in expectation, and it has poor high-dimensional scaling (as O(p⁴) for a p × p matrix) without further modifications. Note that they only provide a convergence rate on the difference between the loss function and the optimal loss, whereas we provide convergence rates on the individual errors of the sparse and low-rank components, ‖Ŝ(T) − S*‖_F² and ‖L̂(T) − L*‖_F². See Table 2 for a comparison of guarantees for the matrix decomposition problem.
Notation: In the sequel, we use lower-case letters for vectors and upper-case letters for matrices, with X ∈ R^{p×p}. ‖x‖₁ and ‖x‖₂ refer to the ℓ1 and ℓ2 vector norms respectively. The term ‖X‖_* stands for the nuclear norm of X. In addition, ‖X‖₂ and ‖X‖_F denote the spectral and Frobenius norms respectively. We use vectorized ℓ1 and ℓ∞ norms for matrices, i.e., ‖X‖₁ = Σ_{i,j} |X_{ij}| and ‖X‖∞ = max_{i,j} |X_{ij}|.

Method                 | Assumptions           | Convergence rate
ST-ADMM [2]            | L, convexity          | O(d²/√T)
ST-ADMM [2]            | SC, E                 | O(d² log T / T)
BADMM [11]             | convexity, E          | O(d²/√T)
RADAR [6]              | LSC, LL               | O(s log d / T)
REASON 1 (this paper)  | LSC, LL               | O(s log d / T)
Minimax bound [4]      | Eigenvalue conditions | O(s log d / T)

Table 1: Comparison of online sparse optimization methods under sparsity level s for the optimal parameter, d-dimensional space, and T iterations. SC = Strong Convexity, LSC = Local Strong Convexity, LL = Local Lipschitz, L = Lipschitz property, E = in Expectation. The last row provides the minimax-optimal rate for any method. The results hold with high probability.

Method                 | Assumptions  | Convergence rate
Multi-block-ADMM [12]  | L, SC, E     | O(p⁴/T)
Batch method [13]      | LL, LSC, DF  | O((s log p + rp)/T) + O(sα²/p²)
REASON 2 (this paper)  | LSC, LL, DF  | O((s + r)β²(p) log p / T) + O(α² max{s + r, p}/p²)
Minimax bound [13]     | ℓ2, IN, DF   | O((s log p + rp)/T) + O(sα²/p²)

Table 2: Comparison of optimization methods for sparse + low-rank matrix decomposition of a p × p matrix under sparsity level s and rank r, where T is the number of samples. Abbreviations are as in Table 1; IN = Independent noise model, DF = diffuse low-rank matrix under the optimal parameter. β(p) ranges from Ω(√p) to O(p), and its value depends on the model. The last row provides the minimax-optimal rate for any method under the independent noise model. The results hold with high probability unless otherwise mentioned. For Multi-block-ADMM [12] the convergence rate is on the difference of the loss function from the optimal loss; for the remaining rows the convergence rate is on the individual estimates of the sparse and low-rank components, ‖Ŝ(T) − S*‖_F² + ‖L̂(T) − L*‖_F².
2 ℓ1-Regularized Stochastic Optimization
We consider the optimization problem θ* ∈ argmin_{θ∈Ω} E[f(θ, x)], where θ* is a sparse vector. The loss function f(θ, x_k) is a function of a parameter θ ∈ R^d and samples x_k. In the stochastic setting, we do not have access to E[f(θ, x)] nor to its subgradients. In each iteration we have access to one noisy sample. In order to impose sparsity, we use regularization. Thus, we solve a sequence

θ̂_k ∈ argmin_{θ∈Ω'} f(θ, x_k) + λ‖θ‖₁,   Ω' ⊆ Ω,   (1)

where the regularization parameter λ > 0 and the constraint sets Ω' change from epoch to epoch.
2.1 Epoch-based Stochastic ADMM Algorithm
We now describe the modified inexact ADMM algorithm for the sparse optimization problem in (1), and refer to it as REASON 1 (Algorithm 1). We consider an epoch length T0, and in each epoch i we project the optimal solution onto an ℓ1 ball with radius R_i centered around θ̃_i, which is the initial estimate of θ* at the start of the epoch. The θ-update is given by

θ_{k+1} = argmin_{‖θ − θ̃_i‖₁ ≤ R_i} { ⟨∇f(θ_k), θ − θ_k⟩ − ⟨z_k, θ − y_k⟩ + (ρ/2)‖θ − y_k‖₂² + (ρ_x/2)‖θ − θ_k‖₂² }.   (2)

Note that this is an inexact update, since we employ the gradient ∇f(θ) rather than optimize directly on the loss function f(θ), which is expensive. The above program can be solved efficiently since it is a projection onto the ℓ1 ball, whose complexity is linear in the sparsity level of the gradient when performed serially, and O(log d) when performed in parallel using d processors [3]. For the regularizer, we introduce the variable y, and the y-update is y_{k+1} = argmin_y { λ_i ‖y‖₁ − ⟨z_k, θ_{k+1} − y⟩ + (ρ/2)‖θ_{k+1} − y‖₂² }. This update can be simplified to the form given in REASON 1, where Shrink_κ(·) is the soft-thresholding or shrinkage function [1]. Thus, each step in the update is extremely simple to implement. When an epoch is complete, we carry over the average θ(T_i) as the next epoch center and reset the other variables.

Algorithm 1: Regularized Epoch-based ADMM for Stochastic Optimization in high dimensions 1 (REASON 1)
Input: ρ, ρ_x, epoch length T0, initial prox center θ̃_1, initial radius R_1, regularization parameters {λ_i}_{i=1}^{k_T}.
Define Shrink_κ(a) = (a − κ)_+ − (−a − κ)_+.
for each epoch i = 1, 2, ..., k_T do
    Initialize θ_0 = y_0 = θ̃_i
    for each iteration k = 0, 1, ..., T0 − 1 do
        θ_{k+1} = argmin_{‖θ − θ̃_i‖₁ ≤ R_i} { ⟨∇f(θ_k), θ − θ_k⟩ − ⟨z_k, θ − y_k⟩ + (ρ/2)‖θ − y_k‖₂² + (ρ_x/2)‖θ − θ_k‖₂² }
        y_{k+1} = Shrink_{λ_i/ρ}( θ_{k+1} − z_k/ρ )
        z_{k+1} = z_k − τ (θ_{k+1} − y_{k+1})
    end for
    Return θ(T_i) := (1/T0) Σ_{k=0}^{T0−1} θ_k for epoch i, and set θ̃_{i+1} = θ(T_i).
    Update R_{i+1}² = R_i²/2.
end for
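To make the epoch structure concrete, here is a minimal Python sketch of REASON 1 for the square loss. This is our illustration rather than the authors' code: the constants lam, rho, rho_x, and tau stand in for the theoretical schedule in (3), and samples() is a hypothetical callable returning one noisy pair (x_k, y_k) per call.

import numpy as np

def project_l1(v, radius):
    # Euclidean projection onto the l1 ball of the given radius [3]
    if np.abs(v).sum() <= radius:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, v.size + 1) > css - radius)[0][-1]
    t = (css[k] - radius) / (k + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def shrink(a, nu):
    # Shrink_nu(a) = (a - nu)_+ - (-a - nu)_+, i.e. soft-thresholding
    return np.sign(a) * np.maximum(np.abs(a) - nu, 0.0)

def reason1(samples, d, R1, lam, rho, rho_x, tau, T0, n_epochs):
    center, R = np.zeros(d), R1               # epoch prox center and radius
    for _ in range(n_epochs):
        theta, y, z = center.copy(), center.copy(), np.zeros(d)
        avg = np.zeros(d)
        for _ in range(T0):
            x, yk = samples()                 # one stochastic sample per step
            grad = (x @ theta - yk) * x       # gradient of the square loss
            # theta-update (2): the surrogate is (rho+rho_x)/2 ||theta - v||^2,
            # so the constrained minimizer is a projection of v onto the ball
            v = (rho * y + rho_x * theta + z - grad) / (rho + rho_x)
            theta = center + project_l1(v - center, R)
            y = shrink(theta - z / rho, lam / rho)      # y-update
            z = z - tau * (theta - y)                   # dual update
            avg += theta / T0
        center, R = avg, R / np.sqrt(2.0)     # recenter; R_{i+1}^2 = R_i^2 / 2
    return center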
2.2 High-dimensional Guarantees
We now provide convergence guarantees for the proposed method under the following assumptions.

Assumption A1: Local strong convexity (LSC): The function f : S → R satisfies an R-local form of strong convexity (LSC) if there is a non-negative constant γ = γ(R) such that, for any θ₁, θ₂ ∈ S with ‖θ₁‖₁ ≤ R and ‖θ₂‖₁ ≤ R, f(θ₁) ≥ f(θ₂) + ⟨∇f(θ₂), θ₁ − θ₂⟩ + (γ/2)‖θ₂ − θ₁‖₂².

Note that the notion of strong convexity leads to faster convergence rates in general. Intuitively, strong convexity is a measure of curvature of the loss function, which relates the reduction in the loss function to closeness in the variable domain. Assuming that the function f is twice continuously differentiable, it is strongly convex if and only if its Hessian is positive semi-definite for all feasible θ. However, in the high-dimensional regime, where there are fewer samples than the data dimension, the Hessian matrix is often singular and we do not have global strong convexity. A solution is to impose local strong convexity, which allows us to provide guarantees for high-dimensional problems. This notion has been exploited before in a number of works on high-dimensional analysis, e.g., [14, 13, 6]. It holds for various loss functions, such as the square loss.

Assumption A2: Sub-Gaussian stochastic gradients: Let e_k(θ) := ∇f(θ, x_k) − E[∇f(θ, x_k)]. There is a constant σ = σ(R) such that, for all k > 0, E[exp(‖e_k(θ)‖∞² / σ²)] ≤ exp(1), for all θ such that ‖θ − θ*‖₁ ≤ R.

Remark: The bound holds with σ = O(√(log d)) whenever each component of the error vector has sub-Gaussian tails [6].

Assumption A3: Local Lipschitz condition: For each R > 0, there is a constant G = G(R) such that |f(θ₁) − f(θ₂)| ≤ G ‖θ₁ − θ₂‖₁ for all θ₁, θ₂ ∈ S such that ‖θ₂ − θ*‖₁ ≤ R and ‖θ₁ − θ*‖₁ ≤ R.
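The following small numerical illustration (ours, not from the paper) shows why the local form of Assumption A1 is the right notion in high dimensions: for the square loss, the full Hessian is singular when the sample size is below the dimension, yet its restriction to a sparse support is well conditioned.

import numpy as np

rng = np.random.default_rng(0)
n, d, s = 50, 200, 5
X = rng.standard_normal((n, d))
H = X.T @ X / n                              # Hessian of the square loss

print("smallest Hessian eigenvalue:", np.linalg.eigvalsh(H).min())       # ~ 0
support = rng.choice(d, size=s, replace=False)
Hs = H[np.ix_(support, support)]             # curvature on a sparse support
print("smallest restricted eigenvalue:", np.linalg.eigvalsh(Hs).min())   # > 0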
The design parameters are as below, where λ_i is the regularization of the ℓ1 term in epoch i, ρ and ρ_x are the penalties in the θ-update (2), and τ is the step size for the dual update:

λ_i = ( R_i / (s √T0) ) · √( log d + G²(ρ + ρ_x)² / (R_i² T0) + σ_i² log(3/δ_i) ),   ρ ∝ √(T0 log d) / R_i,   ρ_x > 0,   τ ∝ √T0 / R_i.   (3)
Theorem 1. Under Assumptions A1–A3, with λ_i as in (3) and fixed epoch lengths T0 = T log d / k_T, where T is the total number of iterations,

k_T = log₂( γ² R₁² T / ( s² ( log d + √s · G + 12 σ² log(6/δ) ) ) ),

and T0 satisfies T0 = O(log d), for any θ* with sparsity s, with probability at least 1 − δ we have

‖θ̄_T − θ*‖₂² = O( (s k_T / T) ( log d + √s · G + ( log(1/δ) + log(k_T / log d) ) σ² log d ) ),

where θ̄_T is the average over the last epoch, for a total of T iterations.

Improvement of the log d factor: The above theorem covers the practical case where the epoch length T0 is fixed. We can improve the above result using varying epoch lengths (which depend on the problem parameters), such that ‖θ̄_T − θ*‖₂² = O(s log d / T). The details can be found in the longer version [5]. This convergence rate of O(s log d / T) matches the minimax lower bounds for sparse estimation [4]. This implies that our guarantees are unimprovable up to constant factors.
3 Extension to Doubly Regularized Stochastic Optimization
We consider the optimization problem M* ∈ argmin E[f(M, X)], where we want to decompose M into a sparse matrix S ∈ R^{p×p} and a low-rank matrix L ∈ R^{p×p}. Here f(M, X_k) is a function of a parameter M and samples X_k, where X_k can be a matrix (e.g., the independent noise model) or a vector (e.g., a Gaussian graphical model). In the stochastic setting, we do not have access to E[f(M, X)] nor to its subgradients. In each iteration, we have access to one noisy sample and update our estimate based on it. We impose the desired properties with regularization. Thus, we solve a sequence

M̂_k := argmin_M { f̂(M, X_k) + λ_n ‖S‖₁ + μ_n ‖L‖_* }   s.t.   M = S + L,   ‖L‖∞ ≤ α/p.   (4)
We propose an online program based on a multi-block ADMM algorithm. In addition to tailoring the projection ideas employed for the sparse case, we impose an ℓ∞ constraint of α/p on each entry of L. This constraint is also imposed for the batch version of problem (4) in [13], and we assume that the true matrix L* satisfies it. Intuitively, the ℓ∞ constraint controls the 'spikiness' of L*. If α ≈ 1, then the entries of L are O(1/p), i.e., they are 'diffuse' or 'non-spiky', and no entry is too large. When the low-rank matrix L* has diffuse entries, it cannot be a sparse matrix, and thus can be separated from the sparse S* efficiently. In fact, the ℓ∞ constraint is a weaker form of the incoherence-type assumptions needed to guarantee identifiability [15] for sparse + low-rank decomposition. For more discussion, see Section 3.2.
3.1 Epoch-based Multi-Block ADMM Algorithm
We now extend the ADMM method proposed in REASON 1 to multi-block ADMM. The details are in Algorithm 2, which we refer to as REASON 2. Recall that the matrix decomposition setting assumes that the true matrix M* = S* + L* is a combination of a sparse matrix S* and a low-rank matrix L*. In REASON 2, the updates for the matrices M, S, L are done independently at each step. The updates follow the definition of ADMM and the ideas presented in Section 2. We consider epochs of length T0. We do not need to project the update of the matrix M. The update rules for S and L result from an inexact proximal update that treats them as a single block, which can then be decoupled. We impose an ℓ1-norm projection for the sparse estimate S around the epoch initialization S̃_i. For the low-rank estimate L, we impose a nuclear-norm projection around the epoch initialization L̃_i. Intuitively, the nuclear-norm projection, which is an ℓ1 projection on the singular values, encourages sparsity in the spectral domain, leading to low-rank estimates. We also require an ℓ∞ constraint on L. Thus, the update rule for L has two projections, i.e., infinity- and nuclear-norm projections. We decouple it into ADMM updates for L and Y, with a dual variable U corresponding to this decomposition.
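For intuition, the three projections used in each REASON 2 iteration all have (nearly) closed forms. The Python sketch below is our illustration: soft-thresholding for the sparse block, singular-value shrinkage for the nuclear norm, and entrywise clipping for the ℓ∞ constraint. For simplicity, the nuclear-norm-ball step here is centered at zero and composed heuristically with the shrinkage, whereas REASON 2 centers the ball at the epoch initialization L̃_i.

import numpy as np

def shrink(a, nu):                            # entrywise soft-thresholding
    return np.sign(a) * np.maximum(np.abs(a) - nu, 0.0)

def project_l1(v, radius):                    # l1-ball projection, as before
    if np.abs(v).sum() <= radius:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, v.size + 1) > css - radius)[0][-1]
    t = (css[k] - radius) / (k + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_nuclear(A, nu, radius=None):
    # shrink singular values; optionally re-project the spectrum (an l1
    # projection on singular values) onto a zero-centered nuclear-norm ball
    U, sv, Vt = np.linalg.svd(A, full_matrices=False)
    sv = np.maximum(sv - nu, 0.0)
    if radius is not None and sv.sum() > radius:
        sv = project_l1(sv, radius)
    return (U * sv) @ Vt

def clip_inf(Y, bound):                       # projection onto {||Y||_inf <= bound}
    return np.clip(Y, -bound, bound)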
3.2 High-dimensional Guarantees
We now prove that REASON 2 recovers both the sparse and low-rank estimates in high dimensions efficiently. We need the following assumptions, in addition to Assumptions A2 and A3.

Assumption A4: Spectral bound on the gradient error: Let E_k(M, X_k) := ∇f(M, X_k) − E[∇f(M, X_k)], and assume ‖E_k‖₂ ≤ β(p) σ, where σ := ‖E_k‖∞. Recall from Assumption A2 that σ = O(log p) under sub-Gaussianity. Here, we require a spectral bound in addition to the ‖·‖∞ bound in A2.

Assumption A5: Bound on the spikiness of the low-rank matrix: ‖L*‖∞ ≤ α/p, as discussed before.

Assumption A6: Local strong convexity (LSC): The function f : R^{d₁×d₂} → R satisfies an R-local form of strong convexity (LSC) if there is a non-negative constant γ = γ(R) such that f(B₁) ≥ f(B₂) + Tr(∇f(B₂)ᵀ(B₁ − B₂)) + (γ/2)‖B₂ − B₁‖_F² for any ‖B₁‖ ≤ R and ‖B₂‖ ≤ R, which is essentially the matrix version of Assumption A1.
We choose the algorithm parameters as below, where λ_i and μ_i are the regularization parameters for the ℓ1 and nuclear norms respectively, ρ and ρ_x correspond to the penalty terms in the M-update, and τ is the dual-update step size:

λ_i² = ( √(R_i² + R̃_i²) / ((s + r) √T0) ) · √( log p + G²(ρ + ρ_x)² / ((R_i² + R̃_i²) T0²) + β²(p) σ_i² log(3/δ_i) + α²/p ),
μ_i² = c_μ λ_i²,   ρ ∝ √(T0 log p) / √(R_i² + R̃_i²),   ρ_x > 0,   τ ∝ √T0 / √(R_i² + R̃_i²).   (5)
Theorem 2. Under Assumptions A2–A6 and the parameter settings (5), let T denote the total number of iterations and T0 = T log p / k_T, where

k_T ≍ log( γ² R₁² T / ( (s + r)² [ log p + G/√(s + r) + β²(p) σ² ( (1 + G)(log(6/δ) + log k_T) + log p ) ] ) ),

and T0 satisfies T0 = O(log p). Then, with probability at least 1 − δ, we have

‖S̄(T) − S*‖_F² + ‖L̄(T) − L*‖_F² = O( (s + r)(k_T/T) [ log p + G + β²(p) σ² (1 + G)( log(6/δ) + log(k_T / log p) ) + log p ] + (α²/p)( 1 + (s + r)/p ) ).

Improvement of the log p factor: The above result can be improved by a log p factor by considering varying epoch lengths (which depend on the problem parameters). The resulting convergence rate is O((s + r) p log p / T + α²/p). The details can be found in the longer version [5].

Scaling of β(p): We have the bounds Ω(√p) ≤ β(p) ≤ O(p). This implies that the convergence rate (with varying epoch lengths) is O((s + r) p log p / T + α²/p) when β(p) = Θ(√p), and O((s + r) p² log p / T + α²/p) when β(p) = Θ(p). The upper bound on β(p) arises trivially by converting the max-norm bound ‖E_k‖∞ ≤ σ into a bound on the spectral norm ‖E_k‖₂. In many interesting scenarios, the lower bound on β(p) is achieved, as outlined below in Section 3.2.1.

Comparison with the batch result: Agarwal et al. [13] consider the batch version of the same problem (4), and provide a convergence rate of O((s log p + rp)/T + sα²/p²). This is also the minimax lower bound under the independent noise model. With respect to the convergence rate, we match their results with respect to the scaling of s and r, and also obtain a 1/T rate. We match the scaling with respect to p (up to a log factor) when β(p) = Θ(√p) attains the lower bound, and we discuss a few such instances below. Otherwise, we are worse by a factor of p compared to the batch version. Intuitively, this is because we require different bounds on the error terms E_k in the online and batch settings. The batch setting considers an empirical estimate, and hence operates on the averaged error, whereas in the online setting we suffer the per-sample error. Efficient concentration bounds exist for the batch case [16], while for the online case no such bounds exist in general. Hence, we conjecture that our bounds in Theorem 2 are unimprovable in the online setting.

Approximation error: Note that the optimal decomposition M* = S* + L* is not identifiable in general without incoherence-style conditions [15, 17]. In this paper, we provide efficient guarantees without assuming such strong incoherence constraints. This implies that there is an approximation error which is incurred even in the noiseless setting, due to model non-identifiability.

Algorithm 2: Regularized Epoch-based ADMM for Stochastic Optimization in high dimensions 2 (REASON 2)
Input: ρ, ρ_x, epoch length T0, regularization parameters {λ_i, μ_i}_{i=1}^{k_T}, initial prox centers S̃_1, L̃_1, initial radii R_1, R̃_1.
Define Shrink_κ(a) as in REASON 1, and G_{M_k} = M_{k+1} − S_k − L_k − Z_k/ρ.
for each epoch i = 1, 2, ..., k_T do
    Initialize S_0 = S̃_i, L_0 = L̃_i, M_0 = S_0 + L_0.
    for each iteration k = 0, 1, ..., T0 − 1 do
        M_{k+1} = ( −∇f(M_k) + Z_k + ρ(S_k + L_k) + ρ_x M_k ) / (ρ + ρ_x)
        S_{k+1} = argmin_{‖S − S̃_i‖₁ ≤ R_i} { λ_i ‖S‖₁ + (ρ/2τ_k) ‖S − (S_k + τ_k G_{M_k})‖_F² }
        L_{k+1} = argmin_{‖L − L̃_i‖_* ≤ R̃_i} { μ_i ‖L‖_* + (ρ/2) ‖L − Y_k − U_k/ρ‖_F² }
        Y_{k+1} = argmin_{‖Y‖∞ ≤ α/p} { (ρ/2τ_k) ‖Y − (L_k + τ_k G_{M_k})‖_F² + (ρ/2) ‖L_{k+1} − Y − U_k/ρ‖_F² }
        Z_{k+1} = Z_k − τ (M_{k+1} − (S_{k+1} + L_{k+1}))
        U_{k+1} = U_k − τ (L_{k+1} − Y_{k+1})
    end for
    Set S̃_{i+1} = (1/T0) Σ_{k=0}^{T0−1} S_k and L̃_{i+1} := (1/T0) Σ_{k=0}^{T0−1} L_k.
    If R_i² > 2( s + r + (s + r)² α² / (p σ²) ) σ²/p, then update R_{i+1}² = R_i²/2 and R̃_{i+1}² = R̃_i²/2; else STOP.
end for

Dimension        | Method   | Error at 0.02T | Error at 0.2T | Error at T
d=20000, T=50s   | ST-ADMM  | 1.022          | 1.002         | 0.996
                 | RADAR    | 0.116          | 2.10e-03      | 6.26e-05
                 | REASON   | 1.5e-03        | 2.20e-04      | 1.07e-08
d=2000, T=5s     | ST-ADMM  | 0.794          | 0.380         | 0.348
                 | RADAR    | 0.103          | 4.80e-03      | 1.53e-04
                 | REASON   | 0.001          | 2.26e-04      | 1.58e-08
d=20, T=0.2s     | ST-ADMM  | 0.212          | 0.092         | 0.033
                 | RADAR    | 0.531          | 4.70e-03      | 4.91e-04
                 | REASON   | 0.100          | 2.02e-04      | 1.09e-08

Table 3: Least-squares regression problem, epoch size T_i = 2000; Error = ‖θ̂ − θ*‖₂/‖θ*‖₂.
Agarwal et al. [13] achieve an approximation error of sα²/p² for their batch algorithm. Our online algorithm has an approximation error of max{s + r, p}α²/p², which decays with p. It is not clear whether this bound can be improved by any other online algorithm.
3.2.1 Optimal Guarantees for Various Statistical Models
We now list some statistical models under which we achieve the batch-optimal rate for sparse+low
rank decomposition.
1) Independent Noise Model: Assume we sample i.i.d. matrices X_k = S* + L* + N_k, where the noise N_k has independent bounded sub-Gaussian entries with max_{i,j} Var(N_k(i,j)) = σ². We consider the square loss function ‖X_k − S − L‖_F². Hence E_k = X_k − S − L = N_k. From [Thm. 1.1][18], we have w.h.p. ‖N_k‖₂ = O(σ√p). We match the batch bound of [13] in this setting. Moreover, Agarwal et al. [13] provide a minimax lower bound for this model, and we match it as well. Thus, we achieve the optimal convergence rate for online matrix decomposition under this model.
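A quick simulation (ours) confirms the scaling used here: the spectral norm of an i.i.d. noise matrix with entrywise standard deviation σ grows like σ√p, so β(p) = Θ(√p) for this model.

import numpy as np

rng = np.random.default_rng(1)
sigma = 0.5
for p in (100, 400, 1600):
    N = sigma * rng.standard_normal((p, p))
    ratio = np.linalg.norm(N, 2) / (sigma * np.sqrt(p))
    print(f"p = {p}: ||N||_2 / (sigma sqrt(p)) = {ratio:.2f}")   # ~2 for Gaussians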
2) Linear Bayesian Network: Consider a p-dimensional vector y = Ah + n, where h ∈ R^r with r ≪ p, and n ∈ R^p. The variable h is hidden, and y is the observed variable. We assume that the vectors h and n are each zero-mean sub-Gaussian vectors with i.i.d. entries, and are independent of one another. Let σ_h² and σ_n² be the variances of the entries of h and n respectively. Without loss of generality, we assume that the columns of A are normalized, as we can always rescale A and σ_h appropriately to obtain the same model. Let Σ*_{y,y} be the true covariance matrix of y. From the independence assumptions, we have Σ*_{y,y} = S* + L*, where S* = σ_n² I is a diagonal matrix and L* = σ_h² A Aᵀ has rank at most r.

In each step k, we obtain a sample y_k from the Bayesian network. For the square loss function f, the error is E_k = y_k y_kᵀ − Σ*_{y,y}. Applying [Cor. 5.50][19], we have, with high probability, ‖n_k n_kᵀ − σ_n² I‖₂ = O(√p σ_n²) and ‖h_k h_kᵀ − σ_h² I‖₂ = O(√p σ_h²). We thus have, with probability 1 − T e^{−cp}, ‖E_k‖₂ ≤ O( √p ( ‖A‖₂² σ_h² + σ_n² ) ) for all k ≤ T. When ‖A‖₂ is bounded, we obtain the optimal bound in Theorem 2, which matches the batch bound. If the entries of A are generically drawn (e.g., from a Gaussian distribution), we have ‖A‖₂ = O(1 + √(r/p)). Moreover, such generic matrices A are also 'diffuse', and thus the low-rank matrix L* satisfies Assumption A5 with α ≤ polylog(p). Intuitively, when A is generically drawn, there are diffuse connections from hidden to observed variables, and we have efficient guarantees under this setting.

Run Time = 50 sec:
Method    | ‖M*−S−L‖_F/‖M*‖_F | ‖S−S*‖_F/‖S*‖_F | ‖L*−L‖_F/‖L*‖_F
REASON 2  | 2.20e-03          | 0.004           | 0.01
IALM      | 5.11e-05          | 0.12            | 0.27

Run Time = 150 sec:
Method    | ‖M*−S−L‖_F/‖M*‖_F | ‖S−S*‖_F/‖S*‖_F | ‖L*−L‖_F/‖L*‖_F
REASON 2  | 5.55e-05          | 1.50e-04        | 3.25e-04
IALM      | 8.76e-09          | 0.12            | 0.27

Table 4: REASON 2 and inexact ALM (IALM) on the matrix decomposition problem; p = 2000, σ² = 0.01.
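The decomposition above is easy to verify numerically. The sketch below (our illustration) draws one sample from the model, forms the population covariance σ_n² I + σ_h² A Aᵀ, and checks that the per-sample error E_k = y_k y_kᵀ − Σ*_{y,y} has spectral norm on the √p scale.

import numpy as np

rng = np.random.default_rng(2)
p, r = 400, 5
sigma_h, sigma_n = 1.0, 0.5
A = rng.standard_normal((p, r))
A /= np.linalg.norm(A, axis=0)               # normalize the columns of A

Sigma = sigma_n ** 2 * np.eye(p) + sigma_h ** 2 * (A @ A.T)
print("rank of the low-rank part:", np.linalg.matrix_rank(A @ A.T))  # r

h = sigma_h * rng.standard_normal(r)
n = sigma_n * rng.standard_normal(p)
y = A @ h + n                                # one observation from the network
E = np.outer(y, y) - Sigma                   # per-sample gradient error
print("||E_k||_2 / sqrt(p) =", np.linalg.norm(E, 2) / np.sqrt(p))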
4 Experiments
REASON 1: For the sparse optimization problem, we compare REASON 1 with RADAR and ST-ADMM under the least-squares regression setting. Samples (x_t, y_t) are generated such that x_t ∼ Unif[−B, B] and y_t = ⟨θ*, x_t⟩ + n_t, where θ* is s-sparse with s = ⌈log d⌉ and n_t ∼ N(0, σ²), with σ² = 0.5 in all cases. We consider d = 20, 2000, 20000 and s = 1, 3, 5 respectively. The experiments are performed on a 2.5 GHz Intel Core i5 laptop with 8 GB RAM.
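For reproducibility, the data-generating process just described can be sketched as follows (our reading of the setup; the bound B is not specified in the text, so we expose it as a parameter, and s is passed explicitly to match the (d, s) pairs above):

import numpy as np

def make_problem(d, s, sigma2=0.5, B=1.0, seed=0):
    rng = np.random.default_rng(seed)
    theta_star = np.zeros(d)
    support = rng.choice(d, size=s, replace=False)
    theta_star[support] = rng.standard_normal(s)    # s-sparse ground truth
    def sample():
        x = rng.uniform(-B, B, size=d)
        y = x @ theta_star + rng.normal(0.0, np.sqrt(sigma2))
        return x, y
    return theta_star, sample

theta_star, sample = make_problem(d=2000, s=3)      # one of the settings above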
See Table 3 for the experimental results. It should be noted that RADAR is provided with information about θ* for epoch design and recentering. In addition, both RADAR and REASON 1 have the same initial radius. Nevertheless, REASON 1 reaches better accuracy within the same run time, even for small time frames. In addition, we compare the relative error ‖θ − θ*‖₂/‖θ*‖₂ of REASON 1 and ST-ADMM in the first epoch. We observe that in higher dimensions the error fluctuations of ADMM increase noticeably (see Figure 1). Therefore, the projections of REASON 1 play an important role in denoising and in obtaining good accuracy.

Figure 1: Least-squares regression, Error = ‖θ − θ*‖₂/‖θ*‖₂ vs. iteration number, for d₁ = 20 and d₂ = 20000.
REASON 2: We compare REASON 2 with the state-of-the-art inexact ALM method for the matrix decomposition problem (the ALM code was downloaded from [20]). Table 4 shows that, with equal time, inexact ALM reaches a smaller ‖M* − S − L‖_F/‖M*‖_F error, while in fact this does not provide a good decomposition. Further, REASON 2 reaches useful individual errors. Experiments with σ² ∈ [0.01, 1] show similar results. Similar experiments with exact ALM show worse performance than inexact ALM.
Acknowledgment
We acknowledge detailed discussions with Majid Janzamin and thank him for valuable comments on
sparse and low rank recovery. The authors thank Alekh Agarwal for detailed discussions of his work
and the minimax bounds. A. Anandkumar is supported in part by Microsoft Faculty Fellowship, NSF
Career award CCF-1254106, NSF Award CCF-1219234, and ARO YIP Award W911NF-13-1-0084.
References
[1] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
[2] H. Ouyang, N. He, L. Tran, and A. G Gray. Stochastic alternating direction method of multipliers. In Proceedings of the 30th International Conference on Machine Learning (ICML-13),
pages 80?88, 2013.
[3] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the `1 ball for learning in high dimensions. In Proceedings of the 25th international conference on
Machine learning, pages 272?279. ACM, 2008.
[4] G. Raskutti, M. J. Wainwright, and B. Yu. Minimax rates of estimation for high-dimensional
linear regression over `q -balls. IEEE Trans. Information Theory, 57(10):6976?6994, October
2011.
[5] Hanie Sedghi, Anima Anandkumar, and Edmond Jonckheere. Guarantees for multi-step
stochastic ADMM in high dimensions. arXiv preprint arXiv:1402.5131, 2014.
[6] A. Agarwal, S. Negahban, and M. J. Wainwright. Stochastic optimization and sparse statistical
recovery: Optimal algorithms for high dimensions. In NIPS, pages 1547?1555, 2012.
[7] Z. Lin, M. Chen, and Y. Ma. The augmented lagrange multiplier method for exact recovery of
corrupted low-rank matrices. arXiv preprint arXiv:1009.5055, 2010.
[8] T. Goldstein, B. O'Donoghue, and S. Setzer. Fast alternating direction optimization methods. CAM Report 12-35, UCLA, 2012.
[9] W. Deng and W. Yin. On the global and linear convergence of the generalized alternating direction method of multipliers. Technical report, DTIC Document, 2012.
[10] Zhi-Quan Luo. On the linear convergence of the alternating direction method of multipliers.
arXiv preprint arXiv:1208.3922, 2012.
[11] H. Wang and A. Banerjee. Bregman alternating direction method of multipliers. arXiv preprint
arXiv:1306.3203, 2013.
[12] X. Wang, M. Hong, S. Ma, and Z. Luo. Solving multiple-block separable convex minimization problems using two-block alternating direction method of multipliers. arXiv preprint
arXiv:1308.5294, 2013.
[13] A. Agarwal, S. Negahban, and M. Wainwright. Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions. The Annals of Statistics, 40(2):1171?1197, 2012.
[14] S. Negahban, P. Ravikumar, M. Wainwright, and B. Yu. A unified framework for highdimensional analysis of M-estimators with decomposable regularizers. Statistical Science,
27(4):538?557, 2012.
[15] V. Chandrasekaran, S. Sanghavi, Pablo A Parrilo, and A. S Willsky. Rank-sparsity incoherence
for matrix decomposition. SIAM Journal on Optimization, 21(2):572?596, 2011.
[16] J. Tropp. User-friendly tail bounds for sums of random matrices. Foundations of Computational Mathematics, 12(4):389?434, 2012.
[17] Daniel Hsu, Sham M Kakade, and Tong Zhang. Robust matrix decomposition with sparse
corruptions. Information Theory, IEEE Transactions on, 57(11):7221?7234, 2011.
[18] Van H Vu. Spectral norm of random matrices. In Proceedings of the thirty-seventh annual
ACM symposium on Theory of computing, pages 423?430. ACM, 2005.
[19] Roman Vershynin. Introduction to the non-asymptotic analysis of random matrices. arXiv
preprint arXiv:1011.3027, 2010.
[20] Low-rank matrix recovery and completion via convex optimization. http://perception.csl.illinois.edu/matrix-rank/home.html. Accessed: 2014-05-02.
Accelerated Mini-batch Randomized Block
Coordinate Descent Method
Tuo Zhao*†‡ Mo Yu*♯ Yiming Wang† Raman Arora† Han Liu‡
† Johns Hopkins University  ♯ Harbin Institute of Technology  ‡ Princeton University
{tour,myu25,freewym,arora}@jhu.edu, [email protected]
* Both authors contributed equally.
Abstract
We consider regularized empirical risk minimization problems. In particular, we
minimize the sum of a smooth empirical risk function and a nonsmooth regularization function. When the regularization function is block separable, we can solve
the minimization problems in a randomized block coordinate descent (RBCD)
manner. Existing RBCD methods usually decrease the objective value by exploiting the partial gradient of a randomly selected block of coordinates in each
iteration. Thus they need all data to be accessible so that the partial gradient of the
selected block can be exactly obtained. However, such a "batch" setting may be
computationally expensive in practice. In this paper, we propose a mini-batch randomized block coordinate descent (MRBCD) method, which estimates the partial
gradient of the selected block based on a mini-batch of randomly sampled data
in each iteration. We further accelerate the MRBCD method by exploiting the
semi-stochastic optimization scheme, which effectively reduces the variance of
the partial gradient estimators. Theoretically, we show that for strongly convex
functions, the MRBCD method attains lower overall iteration complexity than existing RBCD methods. As an application, we further trim the MRBCD method to
solve regularized sparse learning problems. Our numerical experiments show
that the MRBCD method naturally exploits the sparsity structure and achieves
better computational performance than existing methods.
1 Introduction
Big data analysis challenges both statistics and computation. In the past decade, researchers have
developed a large family of sparse regularized M-estimators, such as Sparse Linear Regression [17,
24], Group Sparse Linear Regression [22], Sparse Logistic Regression [9], Sparse Support Vector
Machine [23, 19], etc. These estimators are usually formulated as regularized empirical risk
minimization problems in a generic form as follows [10],
θ̂ = argmin_θ P(θ) = argmin_θ F(θ) + R(θ),   (1.1)

where θ is the parameter of the working model. Here we assume the empirical risk function F(θ) is smooth, and the regularization function R(θ) is non-differentiable. Some first-order algorithms, mostly variants of proximal gradient methods [11], have been proposed for solving (1.1). For strongly convex P(θ), these methods achieve linear rates of convergence [1].
The proximal gradient methods, though simple, are not necessarily efficient for large problems. Note that the empirical risk function F(θ) is usually composed of many smooth component functions:

F(θ) = (1/n) Σ_{i=1}^{n} f_i(θ)   and   ∇F(θ) = (1/n) Σ_{i=1}^{n} ∇f_i(θ),
where each f_i is associated with a few samples of the whole data set. Since the proximal gradient methods need to calculate the gradient of F in every iteration, the computational complexity scales linearly with the sample size (or the number of component functions). Thus the overall computation can be expensive, especially when the sample size is very large in such a "batch" setting [16].

To overcome the above drawback, recent work has focused on stochastic proximal gradient (SPG) methods, which exploit the additive nature of the empirical risk function F(θ). In particular, the SPG methods randomly sample only a few f_i's to estimate the gradient ∇F(θ): given an index set B, also known as a mini-batch [16], whose elements are independently sampled from {1, ..., n} with replacement, we consider the gradient estimator (1/|B|) Σ_{i∈B} ∇f_i(θ). Calculating such a "stochastic" gradient can be far less expensive than the proximal gradient methods within each iteration. Existing literature has established global convergence results for the stochastic proximal gradient methods [3, 7] based on the unbiasedness of the gradient estimator, i.e.,

E_B[ (1/|B|) Σ_{i∈B} ∇f_i(θ) ] = ∇F(θ)   for all θ ∈ R^d.

However, owing to the variance of the gradient estimator introduced by the stochastic sampling, SPG methods only achieve sublinear rates of convergence even when P(θ) is strongly convex [3, 7].

A second line of research has focused on randomized block coordinate descent (RBCD) methods. These methods exploit the block separability of the regularization function R: given a partition {G₁, ..., G_k} of the d coordinates, we use v_{G_j} to denote the subvector of v with all indices in G_j, and we can then write

R(θ) = Σ_{j=1}^{k} r_j(θ_{G_j})   with   θ = (θ_{G₁}ᵀ, ..., θ_{G_k}ᵀ)ᵀ,

as illustrated by the small sketch below.
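As a tiny illustration of this block structure (ours), a group-lasso penalty evaluates one term per block, which is exactly what lets RBCD touch a single block of coordinates per iteration:

import numpy as np

def group_lasso(theta, groups):
    # R(theta) = sum_j r_j(theta_{G_j}) with r_j = ||.||_2 on each block
    return sum(np.linalg.norm(theta[g]) for g in groups)

theta = np.arange(6, dtype=float)
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]
print(group_lasso(theta, groups))                    # one term per block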
Accordingly, they develop the randomized block coordinate descent (RBCD) methods. In particular,
the block coordinate descent methods randomly select a block of coordinates in each iteration, and
then only calculate the gradient of F with respect to the selected block [15, 13]. Since the variance
introduced by the block selection asymptotically goes to zero, the RBCD methods also attain linear rates of convergence when P(θ) is strongly convex. For sparse learning problems, the RBCD methods have a natural advantage over the proximal gradient methods. Because many blocks of coordinates stay at zero values throughout most iterations, we can integrate the active-set strategy into the computation. The active-set strategy maintains and only iterates over a small subset of all
blocks [2], which greatly boosts the computational performance. Recent work has corroborated the
empirical advantage of RBCD methods over the proximal gradient method [4, 20, 8]. The RBCD
methods, however, still require that all component functions are accessible within every iteration
so that the partial gradient can be exactly obtained.
To address this issue, we propose a stochastic variant of the RBCD methods, which shares the advantage with both the SPG and RBCD methods. More specifically, we randomly select a block of
coordinates in each iteration, and estimate the corresponding partial gradient based on a mini-batch
of fi ?s sampled from all component functions. To address the variance introduced by stochastic sampling, we exploit the semi-stochastic optimization scheme proposed in [5, 6]. The semi-stochastic
optimization scheme contains two nested loops: For each iteration of the outer loop, we calculate
an exact gradient. Then in the follow-up inner loop, we adjust all estimated partial gradients by the
obtained exact gradient. Such a modification, though simple, has a profound impact: the amortized
computational complexity in each iteration is similar to the stochastic optimization, but the rate of
convergence is not compromised. Theoretically, we show that when P(?) is strongly convex, the
MRBCD method attains better overall iteration complexity than existing RBCD methods. We then
apply the MRBCD method combined with the active set strategy to solve the regularized sparse
learning problems. Our numerical experiments shows that the MRBCD method achieves much better computational performance than existing methods.
A closely related method is the stochastic proximal variance reduced gradient method proposed in
[21]. Their method is a variant of the stochastic proximal gradient methods using the same semistochastic optimization scheme as ours, but their method inherits the same drawback as the proximal
gradient method, and does not fully exploit the underlying sparsity structure for large sparse learning
problems. We will compare its computational performance with the MRBCD method in numerical
2
experiments. Note that their method can be viewed as a special example of the MRBCD method
with one single block.
While this paper was under review, we learnt that a similar method was independently proposed by
[18]. They also apply the variance reduction technique into the randomized block coordinate descent
method, and obtain similar theoretical results to ours.
2 Notations and Assumptions
Given a vector v = (v₁, ..., v_d)ᵀ ∈ R^d, we define the vector norms ‖v‖₁ = Σ_j |v_j|, ‖v‖₂² = Σ_j v_j², and ‖v‖∞ = max_j |v_j|. Let {G₁, ..., G_k} be a partition of all d coordinates with |G_j| = p_j and Σ_{j=1}^{k} p_j = d. We use v_{G_j} to denote the subvector of v with all indices in G_j, and v_{\G_j} to denote the subvector of v with all indices in G_j removed.
Throughout the rest of the paper, unless specified otherwise, we make the following assumptions on P(θ).

Assumption 2.1. Each f_i(θ) is convex and differentiable. Given the partition {G₁, ..., G_k}, all the ∇_{G_j} f_i(θ) = [∇f_i(θ)]_{G_j} are Lipschitz continuous, i.e., there exists a positive constant L_max such that for all θ, θ' ∈ R^d with θ_{G_j} ≠ θ'_{G_j}, we have

‖∇_{G_j} f_i(θ) − ∇_{G_j} f_i(θ')‖ ≤ L_max ‖θ_{G_j} − θ'_{G_j}‖.

Moreover, ∇f_i(θ) is Lipschitz continuous, i.e., there exists a positive constant T_max such that for all θ, θ' ∈ R^d with θ ≠ θ', we have

‖∇f_i(θ) − ∇f_i(θ')‖ ≤ T_max ‖θ − θ'‖.

Assumption 2.1 also implies that ∇F(θ) is Lipschitz continuous, and given the tightest T_max and L_max in Assumption 2.1, we have T_max ≤ k L_max.

Assumption 2.2. F(θ) is strongly convex, i.e., for all θ and θ', there exists a positive constant μ such that

F(θ') ≥ F(θ) + ∇F(θ)ᵀ(θ' − θ) + (μ/2)‖θ' − θ‖².

Note that Assumption 2.2 also implies that P(θ) is strongly convex.
Assumption 2.3. $\mathcal{R}(\theta)$ is a simple convex nonsmooth function such that, given some positive constant $\eta$, we can obtain a closed-form solution to the following optimization problem:
$$\mathcal{T}_{\eta, j}(\theta'_{G_j}) = \operatorname*{argmin}_{\theta_{G_j} \in \mathbb{R}^{p_j}}\; \frac{1}{2\eta}\|\theta_{G_j} - \theta'_{G_j}\|^2 + r_j(\theta_{G_j}).$$
Assumptions 2.1-2.3 are satisfied by many popular regularized empirical risk minimization problems. We give some examples in the experiments section.
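For concreteness, the following minimal Python sketch (my own illustration, not code from the paper) implements the blockwise proximal map $\mathcal{T}_{\eta,j}$ for the common special case $r_j(\theta_{G_j}) = \lambda\|\theta_{G_j}\|_1$, whose closed form is entrywise soft-thresholding.

```python
import numpy as np

def prox_l1_block(z_block, eta, lam):
    """T_{eta,j}(z) = argmin_u (1/(2*eta)) * ||u - z||^2 + lam * ||u||_1,
    i.e. entrywise soft-thresholding at level eta * lam."""
    return np.sign(z_block) * np.maximum(np.abs(z_block) - eta * lam, 0.0)
```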
3 Method
The MRBCD method is doubly stochastic, in the sense that we not only randomly select a block of coordinates, but also randomly sample a mini-batch of component functions from all $f_i$'s. The partial gradient of the selected block is estimated based on the selected component functions, which yields a much lower computational complexity than existing RBCD methods in each iteration.
A naive implementation of the MRBCD method is summarized in Algorithm 1. Since the variance introduced by stochastic sampling over component functions does not go to zero as the number of iterations increases, we have to choose a sequence of diminishing step sizes (e.g., $\eta_t = \eta_1 t^{-1}$) to ensure convergence. When $t$ is large, we only gain very limited descent in each iteration. Thus the MRBCD-I method can only attain a sublinear rate of convergence.
Algorithm 1 Mini-batch Randomized Block Coordinate Descent Method-I: A Naive Implementation. The stochastic sampling over component functions introduces variance to the partial gradient estimator. To ensure convergence, we adopt a sequence of diminishing step sizes, which eventually leads to sublinear rates of convergence.
Parameter: step size $\eta_t$
Initialize: $\theta^{(0)}$
For $t = 1, 2, \ldots$
    Randomly sample a mini-batch $B$ from $\{1, \ldots, n\}$ with equal probability
    Randomly sample $j$ from $\{1, \ldots, k\}$ with equal probability
    $\theta^{(t)}_{G_j} \leftarrow \mathcal{T}_{\eta_t, j}\big(\theta^{(t-1)}_{G_j} - \eta_t \nabla_{G_j} f_B(\theta^{(t-1)})\big)$, $\quad \theta^{(t)}_{\setminus G_j} \leftarrow \theta^{(t-1)}_{\setminus G_j}$
End for
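The following Python sketch illustrates one way MRBCD-I could be implemented for the $\ell_1$-regularized case; it is an illustrative reconstruction under that assumption, not the authors' code. Here `grad_fi(theta, idx)` is a user-supplied oracle returning the averaged gradient of the component functions indexed by `idx`.

```python
import numpy as np

def mrbcd_1(grad_fi, theta0, blocks, n, lam, batch_size=10, iters=5000, eta1=0.1, rng=None):
    """MRBCD-I: naive mini-batch randomized block coordinate descent."""
    rng = rng or np.random.default_rng(0)
    theta = theta0.copy()
    for t in range(1, iters + 1):
        eta = eta1 / t                                # diminishing step size
        batch = rng.choice(n, size=batch_size, replace=False)
        j = rng.integers(len(blocks))                 # pick one coordinate block
        g = grad_fi(theta, batch)[blocks[j]]          # mini-batch partial gradient
        z = theta[blocks[j]] - eta * g                # gradient step on the block
        theta[blocks[j]] = np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0)
    return theta
```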
3.1 MRBCD with Variance Reduction
A recent line of work shows how to reduce the variance in the gradient estimation without deteriorating the rate of convergence, using a semi-stochastic optimization scheme [5, 6]. The semi-stochastic optimization contains two nested loops: in each iteration of the outer loop, we calculate an exact gradient; then within the follow-up inner loop, we use the obtained exact gradient to adjust all estimated partial gradients. These adjustments guarantee that the variance introduced by stochastic sampling over component functions asymptotically goes to zero (see [5]).
Algorithm 2 Mini-batch Randomized Block Coordinate Descent Method-II: MRBCD + Variance Reduction. We periodically calculate the exact gradient at the beginning of each outer loop, and then use the obtained exact gradient to adjust all follow-up estimated partial gradients. These adjustments guarantee that the variance introduced by stochastic sampling over component functions asymptotically goes to zero, and help the MRBCD-II method attain linear rates of convergence.
Parameter: update frequency $m$ and step size $\eta$
Initialize: $\tilde\theta^{(0)}$
For $s = 1, 2, \ldots$
    $\tilde\theta \leftarrow \tilde\theta^{(s-1)}$, $\quad \tilde\mu \leftarrow \nabla\mathcal{F}(\tilde\theta^{(s-1)})$, $\quad \theta^{(0)} \leftarrow \tilde\theta^{(s-1)}$
    For $t = 1, 2, \ldots, m$
        Randomly sample a mini-batch $B$ from $\{1, \ldots, n\}$ with equal probability
        Randomly sample $j$ from $\{1, \ldots, k\}$ with equal probability
        $\theta^{(t)}_{G_j} \leftarrow \mathcal{T}_{\eta, j}\big(\theta^{(t-1)}_{G_j} - \eta\,[\nabla_{G_j} f_B(\theta^{(t-1)}) - \nabla_{G_j} f_B(\tilde\theta) + \tilde\mu_{G_j}]\big)$, $\quad \theta^{(t)}_{\setminus G_j} \leftarrow \theta^{(t-1)}_{\setminus G_j}$
    End for
    $\tilde\theta^{(s)} \leftarrow \frac{1}{m}\sum_{l=1}^{m} \theta^{(l)}$
End for
The MRBCD method with variance reduction is summarized in Algorithm 2. In the next section, we will show that the MRBCD-II method attains linear rates of convergence, and that the amortized computational complexity within each iteration is almost the same as that of the MRBCD-I method.
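As an illustration, here is a minimal Python sketch of the variance-reduced update at the heart of Algorithm 2 (again assuming an $\ell_1$-regularized problem and the same hypothetical `grad_fi` oracle as before; not the authors' code). The mini-batch partial gradient is corrected by the difference between the exact snapshot gradient $\tilde\mu$ and its mini-batch estimate.

```python
import numpy as np

def mrbcd_2(grad_fi, theta0, blocks, n, lam, m, eta, outer=30, batch_size=10, rng=None):
    """MRBCD-II: mini-batch RBCD with semi-stochastic variance reduction."""
    rng = rng or np.random.default_rng(0)
    theta_snap = theta0.copy()
    for s in range(outer):
        mu = grad_fi(theta_snap, np.arange(n))        # exact gradient at snapshot
        theta, avg = theta_snap.copy(), np.zeros_like(theta_snap)
        for t in range(m):
            batch = rng.choice(n, size=batch_size, replace=False)
            j = rng.integers(len(blocks))
            # variance-reduced partial gradient: unbiased, variance shrinks
            v = (grad_fi(theta, batch) - grad_fi(theta_snap, batch) + mu)[blocks[j]]
            z = theta[blocks[j]] - eta * v
            theta[blocks[j]] = np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0)
            avg += theta
        theta_snap = avg / m                          # snapshot = inner-loop average
    return theta_snap
```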
Remark 3.1. Another option for variance reduction is the stochastic averaging scheme proposed in [14], which stores the gradients of the most recently subsampled component functions. But the MRBCD method iterates randomly over different blocks of coordinates, which makes the stochastic averaging scheme inapplicable.
3.2 MRBCD with Variance Reduction and Active Set Strategy
When applying the MRBCD-II method to regularized sparse learning problems, we further incorporate the active set strategy to boost the empirical performance. Different from existing RBCD methods, which usually identify the active set by cyclic search, we exploit a proximal gradient pilot to identify the active set. More specifically, within each iteration of the outer loop, we conduct a proximal gradient descent step, and select the support of the resulting solution as the active set. This is very natural for the MRBCD-II method: at the beginning of each outer loop, we always calculate an exact gradient, so delivering a proximal gradient pilot will not introduce much additional computational cost. Once the active set is identified, all randomized block coordinate descent steps within the follow-up inner loop only iterate over blocks of coordinates in the active set.
Algorithm 3 Mini-batch Randomized Block Coordinate Descent Method-III: MRBCD with Variance Reduction and Active Set. To fully take advantage of the obtained exact gradient, we adopt a proximal gradient pilot $\theta^{(0)}$ to identify the active set at each iteration of the outer loop. Then all randomized coordinate descent steps within the follow-up inner loop only iterate over blocks of coordinates in the active set.
Parameter: update frequency $m$ and step size $\eta$
Initialize: $\tilde\theta^{(0)}$
For $s = 1, 2, \ldots$
    $\tilde\theta \leftarrow \tilde\theta^{(s-1)}$, $\quad \tilde\mu \leftarrow \nabla\mathcal{F}(\tilde\theta^{(s-1)})$
    For $j = 1, 2, \ldots, k$
        $\theta^{(0)}_{G_j} \leftarrow \mathcal{T}_{\eta/k,\, j}\big(\tilde\theta_{G_j} - \eta\,\tilde\mu_{G_j}/k\big)$
    End for
    $\mathcal{A} \leftarrow \{\, j \mid \theta^{(0)}_{G_j} \neq 0 \,\}$, $\quad |B| = |\mathcal{A}|$
    For $t = 1, 2, \ldots, m|\mathcal{A}|/k$
        Randomly sample a mini-batch $B$ from $\{1, \ldots, n\}$ with equal probability
        Randomly sample $j$ from $\mathcal{A}$ with equal probability
        $\theta^{(t)}_{G_j} \leftarrow \mathcal{T}_{\eta, j}\big(\theta^{(t-1)}_{G_j} - \eta\,[\nabla_{G_j} f_B(\theta^{(t-1)}) - \nabla_{G_j} f_B(\tilde\theta) + \tilde\mu_{G_j}]\big)$, $\quad \theta^{(t)}_{\setminus G_j} \leftarrow \theta^{(t-1)}_{\setminus G_j}$
    End for
    $\tilde\theta^{(s)} \leftarrow \frac{k}{m|\mathcal{A}|}\sum_{l=1}^{m|\mathcal{A}|/k} \theta^{(l)}$
End for
The MRBCD method with variance reduction and active set strategy is summarized in Algorithm 3. Since we integrate the active set into the computation, $|\mathcal{A}|$ successive coordinate descent iterations in MRBCD-III have similar performance to $k$ iterations in MRBCD-II. Therefore we change the maximum number of iterations within each inner loop to $m|\mathcal{A}|/k$. Moreover, since the support covers only $|\mathcal{A}|$ blocks of coordinates, we only need to take $|B| = |\mathcal{A}|$ to guarantee sufficient variance reduction. These modifications further boost the computational performance of MRBCD-III.
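A small Python sketch of the proximal gradient pilot used to pick the active set (an illustration assuming $\ell_1$ blocks; the interfaces are the same hypothetical ones as in the earlier sketches):

```python
import numpy as np

def active_set_pilot(theta_snap, mu, blocks, eta, lam, k):
    """Proximal gradient pilot: one prox step from the snapshot identifies
    which coordinate blocks are active (nonzero) for the inner loop."""
    active, theta0 = [], theta_snap.copy()
    for j, blk in enumerate(blocks):
        z = theta_snap[blk] - (eta / k) * mu[blk]
        theta0[blk] = np.sign(z) * np.maximum(np.abs(z) - (eta / k) * lam, 0.0)
        if np.any(theta0[blk] != 0):
            active.append(j)
    return theta0, active
```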
Remark 3.2. The exact gradient can also be used to determine the convergence of the MRBCD-III method. We terminate the iteration when the approximate KKT condition is satisfied: $\min_{\xi \in \partial\mathcal{R}(\tilde\theta)} \|\tilde\mu + \xi\|_\infty \le \varepsilon$, where $\varepsilon$ is a positive preset convergence parameter. Since evaluating whether the approximate KKT condition holds is based on the exact gradient obtained at each iteration of the outer loop, it does not introduce much additional computational cost, either.
4 Theory
Before we proceed with our main results for the MRBCD-II method, we first introduce the key lemma for controlling the variance introduced by stochastic sampling.
Lemma 4.1. Let $B$ be a mini-batch sampled from $\{1, \ldots, n\}$. Define
$$v_B = \frac{1}{|B|}\sum_{i \in B} \nabla f_i(\theta^{(t-1)}) - \frac{1}{|B|}\sum_{i \in B} \nabla f_i(\tilde\theta) + \tilde\mu.$$
Conditioning on $\theta^{(t-1)}$, we have $\mathbb{E}_B\, v_B = \nabla\mathcal{F}(\theta^{(t-1)})$ and
$$\mathbb{E}_B\, \|v_B - \nabla\mathcal{F}(\theta^{(t-1)})\|^2 \le \frac{4\,T_{\max}}{|B|}\Big[\mathcal{P}(\theta^{(t-1)}) - \mathcal{P}(\hat\theta) + \mathcal{P}(\tilde\theta) - \mathcal{P}(\hat\theta)\Big].$$
The proof of Lemma 4.1 is provided in Appendix A. Lemma 4.1 guarantees that $v_B$ is an unbiased estimator of $\nabla\mathcal{F}(\theta^{(t-1)})$, and that its variance is bounded by the objective value gap. Therefore we do not need to choose a sequence of diminishing step sizes to reduce the variance.
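The unbiasedness claim is easy to sanity-check numerically; the following self-contained snippet (an illustration, not from the paper) averages $v_B$ over many random mini-batches for a least-squares objective and compares it with the exact gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 20
X, y = rng.normal(size=(n, d)), rng.normal(size=n)
grad_i = lambda th, i: (X[i] @ th - y[i]) * X[i]    # gradient of f_i
grad_F = lambda th: X.T @ (X @ th - y) / n          # exact gradient

theta, theta_snap = rng.normal(size=d), rng.normal(size=d)
mu = grad_F(theta_snap)

draws = []
for _ in range(20000):
    B = rng.choice(n, size=10, replace=False)
    gb = np.mean([grad_i(theta, i) for i in B], axis=0)
    gs = np.mean([grad_i(theta_snap, i) for i in B], axis=0)
    draws.append(gb - gs + mu)                       # v_B from Lemma 4.1

# close to zero, up to Monte Carlo error
print(np.linalg.norm(np.mean(draws, axis=0) - grad_F(theta)))
```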
4.1 Strongly Convex Functions
We then present the concrete rates of convergence of MRBCD-II when $\mathcal{P}$ is strongly convex.
Theorem 4.2. Suppose that Assumptions 2.1–2.3 hold. Let $\tilde\theta^{(s)}$ be a random point generated by the MRBCD-II method in Algorithm 2. Given a large enough batch $B$ and a small enough learning rate $\eta$ such that $|B| \ge T_{\max}/L_{\max}$ and $\eta < L_{\max}^{-1}/4$, we have
$$\mathbb{E}\,\mathcal{P}(\tilde\theta^{(s)}) - \mathcal{P}(\hat\theta) \le \left[\frac{k}{\eta\mu(1 - 4\eta L_{\max})m} + \frac{4\eta L_{\max}(m+1)}{(1 - 4\eta L_{\max})m}\right]^s \big[\mathcal{P}(\tilde\theta^{(0)}) - \mathcal{P}(\hat\theta)\big].$$
Here we only present a sketch; the detailed proof is provided in Appendix B. The expected successive descent of the objective value is composed of two terms: the first is the same as the expected successive descent of the "batch" RBCD methods; the second is the variance introduced by the stochastic sampling. The descent term can be bounded by taking the average of the successive descent over all blocks of coordinates. The variance term can be bounded using Lemma 4.1. The mini-batch sampling and the gradient adjustments guarantee that the variance asymptotically goes to zero at a proper scale. By taking expectations over the randomness of component functions and blocks of coordinates throughout all iterations, we derive a geometric rate of convergence.
The next corollary presents the concrete iteration complexity of the MRBCD-II method.
Corollary 4.3. Suppose that Assumptions 2.1–2.3 hold. Let $|B| = T_{\max}/L_{\max}$, $m = 65 k L_{\max}/\mu$, and $\eta = L_{\max}^{-1}/16$. Given the target accuracy $\epsilon$ and some $\delta \in (0, 1)$, for any
$$s \ge 3\log\big[\big(\mathcal{P}(\tilde\theta^{(0)}) - \mathcal{P}(\hat\theta)\big)/\epsilon\big] + 3\log(1/\delta),$$
we have $\mathcal{P}(\tilde\theta^{(s)}) - \mathcal{P}(\hat\theta) \le \epsilon$ with at least probability $1 - \delta$.
Corollary 4.3 is a direct result of Theorem 4.2 and Markov's inequality. The detailed proof is provided in Appendix C.
To characterize the overall iteration complexity, we count the number of partial gradients we estimate. In each iteration of the outer loop, we calculate an exact gradient; thus the number of estimated partial gradients is $O(nk)$. Within each iteration of the inner loop ($m$ in total), we estimate the partial gradients based on a mini-batch $B$; thus the number of estimated partial gradients is $O(m|B|)$. If we choose $\eta$, $m$, and $B$ as in Corollary 4.3 and consider $\mu$ as a constant, then the iteration complexity of the MRBCD-II method with respect to the number of estimated partial gradients is
$$O\big((nk + k T_{\max}/\mu) \cdot \log(1/\epsilon)\big),$$
which is much lower than that of existing "batch" RBCD methods, $O\big(n k L_{\max}/\mu \cdot \log(1/\epsilon)\big)$.
Remark 4.4 (Connection to the MRBCD-III method). There still exists a gap between the theory and the empirical success of the active set strategy in the existing literature, even for the "batch" RBCD methods. When incorporating the active set strategy into RBCD-style methods, it is known that the empirical performance can be greatly boosted; how to exactly characterize the theoretical speedup is still largely unknown. Therefore Theorem 4.2 and Corollary 4.3 can only serve as an imprecise characterization of the MRBCD-III method. A rough understanding is that if the solution has at most $q$ nonzero entries throughout all iterations, then the MRBCD-III method should have an approximate overall iteration complexity of
$$O\big((nk + q T_{\max}/\mu) \cdot \log(1/\epsilon)\big).$$
4.2 Nonstrongly Convex Functions
When $\mathcal{P}(\theta)$ is not strongly convex, we can adopt a perturbation approach. Instead of solving (1.1), we consider the following minimization problem:
$$\tilde\theta = \operatorname*{argmin}_{\theta \in \mathbb{R}^d}\; \mathcal{F}(\theta) + \gamma\|\theta^{(0)} - \theta\|^2 + \mathcal{R}(\theta), \qquad (4.1)$$
where $\gamma$ is some positive perturbation parameter and $\theta^{(0)}$ is the initial value. If we consider $\tilde{\mathcal{F}}(\theta) = \mathcal{F}(\theta) + \gamma\|\theta^{(0)} - \theta\|^2$ in (4.1) as the smooth empirical risk function, then $\tilde{\mathcal{F}}(\theta)$ is a strongly convex function. Thus Corollary 4.3 can be applied to (4.1): when $B$, $m$, $\eta$, and $s$ are suitably chosen, given
$$s \ge 3\log\big(\big[\mathcal{P}(\theta^{(0)}) - \mathcal{P}(\tilde\theta) - \gamma\|\theta^{(0)} - \tilde\theta\|^2\big]/\epsilon\big) + 3\log(2/\delta),$$
we have $\mathcal{P}(\tilde\theta^{(s)}) - \mathcal{P}(\tilde\theta) - \gamma\|\theta^{(0)} - \tilde\theta\|^2 \le \epsilon/2$ with at least probability $1 - \delta$. We then have
$$\mathcal{P}(\tilde\theta^{(s)}) - \mathcal{P}(\hat\theta) \le \mathcal{P}(\tilde\theta^{(s)}) - \mathcal{P}(\hat\theta) - \gamma\|\theta^{(0)} - \hat\theta\|^2 + \gamma\|\theta^{(0)} - \hat\theta\|^2 \le \mathcal{P}(\tilde\theta^{(s)}) - \mathcal{P}(\tilde\theta) - \gamma\|\theta^{(0)} - \tilde\theta\|^2 + \gamma\|\theta^{(0)} - \hat\theta\|^2 \le \epsilon/2 + \gamma\|\theta^{(0)} - \hat\theta\|^2,$$
where the second inequality comes from the fact that $\mathcal{P}(\tilde\theta) + \gamma\|\theta^{(0)} - \tilde\theta\|^2 \le \mathcal{P}(\hat\theta) + \gamma\|\theta^{(0)} - \hat\theta\|^2$, because $\tilde\theta$ is the minimizer of (4.1). If we choose $\gamma = \epsilon/(2\|\theta^{(0)} - \hat\theta\|^2)$, we have $\mathcal{P}(\tilde\theta^{(s)}) - \mathcal{P}(\hat\theta) \le \epsilon$.
Since $\gamma$ depends on the desired accuracy $\epsilon$, the number of estimated partial gradients also depends on $\epsilon$. Thus if we consider $\|\theta^{(0)} - \hat\theta\|^2$ as a constant, the overall iteration complexity of the perturbation approach becomes $O\big((nk + k T_{\max}/\epsilon) \cdot \log(1/\epsilon)\big)$.
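In code, the perturbation approach is just a wrapper that augments the component gradients with the gradient of the strongly convex term $\gamma\|\theta^{(0)} - \theta\|^2$; a minimal sketch (my own illustration, reusing the hypothetical `mrbcd_2` interface from above):

```python
import numpy as np

def perturbed_grad(grad_fi, theta0, gamma):
    """Wrap component gradients so MRBCD-II sees the strongly convex
    surrogate F(theta) + gamma * ||theta0 - theta||^2 from (4.1)."""
    def grad(theta, idx):
        return grad_fi(theta, idx) + 2.0 * gamma * (theta - theta0)
    return grad

# Usage sketch: pick gamma from the target accuracy, then run MRBCD-II, e.g.
#   gamma = eps / (2 * np.linalg.norm(theta0 - theta_hat_guess) ** 2)
#   theta = mrbcd_2(perturbed_grad(grad_fi, theta0, gamma), theta0, ...)
```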
5 Numerical Simulations
The first sparse learning problem of our interest is the Lasso, which solves
$$\hat\theta = \operatorname*{argmin}_{\theta \in \mathbb{R}^d}\; \frac{1}{n}\sum_{i=1}^n f_i(\theta) + \lambda\|\theta\|_1 \quad \text{with} \quad f_i(\theta) = \frac{1}{2}(y_i - x_i^T\theta)^2. \qquad (5.1)$$
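For reference, the component gradient oracle used by the stochastic methods in this experiment takes the following form (a straightforward sketch, not the authors' code):

```python
import numpy as np

def lasso_grad_fi(X, y):
    """Mini-batch gradient oracle for f_i = 0.5 * (y_i - x_i^T theta)^2."""
    def grad(theta, idx):
        Xb, yb = X[idx], y[idx]
        return Xb.T @ (Xb @ theta - yb) / len(idx)  # average over the mini-batch
    return grad
```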
We set $n = 2000$ and $d = 1000$, and all covariate vectors $x_i$ are independently sampled from a 1000-dimensional Gaussian distribution with mean 0 and covariance matrix $\Sigma$, where $\Sigma_{jj} = 1$ and $\Sigma_{jk} = 0.5$ for all $k \neq j$. The first 50 entries of the regression coefficient vector $\theta$ are independently sampled from a uniform distribution over the support $(-2, -1) \cup (+1, +2)$. The responses $y_i$ are generated by the linear model $y_i = x_i^T\theta + \epsilon_i$, where all $\epsilon_i$ are independently sampled from a standard Gaussian distribution $N(0, 1)$.
We choose $\lambda = \sqrt{\log d/n}$, and compare the proposed MRBCD-I and MRBCD-II methods with the "batch" proximal gradient (BPG) method [11], the stochastic proximal variance reduced gradient method (SPVRG) [21], and the "batch" randomized block coordinate descent (BRBCD) method [12]. We set $k = 100$; all blocks are of the same size (10 coordinates). For BPG, the step size is $1/T$, where $T$ is the largest singular value of $\frac{1}{n}\sum_{i=1}^n x_i x_i^T$. For BRBCD, the step size is $1/L$, where $L$ is the maximum over the largest singular values of $\frac{1}{n}\sum_{i=1}^n [x_i]_{G_j}[x_i]_{G_j}^T$ over all blocks. For SPVRG, we choose $m = n$, and the step size is $1/(4T)$. For MRBCD-I, the step size is $1/(L\lceil t/8000\rceil)$, where $t$ is the iteration index. For MRBCD-II, we choose $m = n$, and the step size is $1/(4L)$. Note that the step size and the number of iterations $m$ within each inner loop for MRBCD-II and SPVRG are tuned over a refined grid such that the best computational performance is obtained.
(a) Comparison between different methods for a single regularization parameter. (b) Comparison between different methods for a sequence of regularization parameters.
Figure 5.1: [a] The vertical axis corresponds to the objective value gap $\mathcal{P}(\theta) - \mathcal{P}(\hat\theta)$ in log scale; the horizontal axis corresponds to the number of partial gradient estimates. [b] The horizontal axis corresponds to the indices of the regularization parameters; the vertical axis corresponds to the number of partial gradient estimates in log scale. We see that MRBCD attains the best performance among all methods in both settings.
We evaluate the computational performance by the number of estimated partial gradients, and the results averaged over 100 replications are presented in Figure 5.1 [a]. As can be seen, MRBCD-II outperforms SPVRG, and attains the best performance among all methods. BRBCD and BPG perform worse than MRBCD-II and SPVRG due to the high computational complexity within each iteration. MRBCD-I is actually the fastest among all methods in the first few iterations, and then falls behind SPG and SPVRG due to its sublinear rate of convergence.
We then compare the proposed MRBCD-III method with SPVRG and BRBCD for a sequence of regularization parameters. The sequence contains 21 regularization parameters $\{\lambda_0, \ldots, \lambda_{20}\}$. We set $\lambda_0 = \|\frac{1}{n}\sum_i y_i x_i\|_\infty$, which yields a null solution (all entries are zero), and $\lambda_{20} = \sqrt{\log d/n}$. For $K = 1, \ldots, 19$, we set $\lambda_K = \vartheta\lambda_{K-1}$, where $\vartheta = (\lambda_{20}/\lambda_0)^{1/20}$. When solving (5.1) with respect to $\lambda_K$, we use the output solution for $\lambda_{K-1}$ as the initial solution. The above setting is often referred to as the warm start scheme in the existing literature, and it is very natural for sparse learning problems, since we always need to tune the regularization parameter to secure good finite sample performance. For each regularization parameter, the algorithm terminates the iteration when the approximate KKT condition is satisfied with $\varepsilon = 10^{-10}$.
The results over 50 replications are presented in Figure 5.1 [b]. As can be seen, MRBCD-III outperforms SPVRG and BRBCD, and attains the best performance among all methods. Since BRBCD is also combined with the active set strategy, it attains better performance than SPVRG. See more detailed results in Table E.1 in Appendix E.
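The warm start scheme is simple to express in code; a short sketch (an illustration under the same hypothetical interfaces as the earlier sketches, where `solver` stands for any of the methods above):

```python
import numpy as np

def warm_start_path(solver, grad_fi, theta_init, lam_max, lam_min, n_lams=21):
    """Solve a geometrically decreasing sequence of regularization parameters,
    reusing each solution as the initial point for the next (warm start)."""
    ratio = (lam_min / lam_max) ** (1.0 / (n_lams - 1))
    lams = lam_max * ratio ** np.arange(n_lams)
    theta, path = theta_init.copy(), []
    for lam in lams:
        theta = solver(grad_fi, theta, lam)   # e.g., MRBCD-III at this lambda
        path.append(theta.copy())
    return lams, path
```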
6 Real Data Example
The second sparse learning problem is the elastic-net regularized logistic regression, which solves
$$\hat\theta = \operatorname*{argmin}_{\theta \in \mathbb{R}^d}\; \frac{1}{n}\sum_{i=1}^n f_i(\theta) + \lambda_1\|\theta\|_1 \quad \text{with} \quad f_i(\theta) = \log\big(1 + \exp(-y_i x_i^T\theta)\big) + \frac{\lambda_2}{2}\|\theta\|_2^2.$$
We adopt the rcv1 dataset with n = 20242 and d = 47236. We set k = 200, and each block contains
approximately 237 coordinates.
We choose $\lambda_2 = 10^{-4}$ and $\lambda_1 = 10^{-4}$, and compare MRBCD-II with SPVRG and BRBCD. For SPVRG, $m = n$ and the step size is $1/(16T)$, where $T$ is the largest singular value of $\frac{1}{n}\sum_{i=1}^n x_i x_i^T$. For MRBCD-II, $m = n$ and the step size is $1/(16T)$. For BRBCD, the step size is $1/(4L)$, where $L = \max_j \frac{1}{n}\sum_{i=1}^n [x_i]_j^2$. Note that the step size and the number of iterations $m$ within each inner loop for MRBCD-II and SPVRG are tuned over a refined grid such that the best computational performance is obtained.
The results averaged over 30 replications are presented in Figure F.1 [a] of Appendix F. As can be seen, MRBCD-II outperforms SPVRG, and attains the best performance among all methods. BRBCD performs worse than MRBCD-II and SPVRG due to the high computational complexity within each iteration.
We then compare the proposed MRBCD-III method with SPVRG and BRBCD for a sequence of regularization parameters. The sequence contains 11 regularization parameters $\{\lambda_0, \ldots, \lambda_{10}\}$. We set $\lambda_0 = \|\frac{1}{n}\sum_i \nabla f_i(0)\|_\infty$, which yields a null solution (all entries are zero), and $\lambda_{10} = 10^{-4}$. For $K = 1, \ldots, 9$, we set $\lambda_K = \vartheta\lambda_{K-1}$, where $\vartheta = (\lambda_{10}/\lambda_0)^{1/10}$. For each regularization parameter, we set $\varepsilon = 10^{-7}$ for the approximate KKT condition.
The results over 30 replications are presented in Figure F.1 [b] of Appendix F. As can be seen, MRBCD-III outperforms SPVRG and BRBCD, and attains the best performance among all methods. Since BRBCD is also combined with the active set strategy, it attains better performance than SPVRG.
Acknowledgements This work is partially supported by the grants NSF IIS1408910, NSF
IIS1332109, NIH R01MH102339, NIH R01GM083084, and NIH R01HG06841. Yu is supported
by China Scholarship Council and by NSFC 61173073.
References
[1] Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[2] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2009.
[3] John Duchi and Yoram Singer. Efficient online and batch learning using forward backward splitting. The Journal of Machine Learning Research, 10:2899–2934, 2009.
[4] Jerome Friedman, Trevor Hastie, Holger Höfling, and Robert Tibshirani. Pathwise coordinate optimization. The Annals of Applied Statistics, 1(2):302–332, 2007.
[5] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems, pages 315–323, 2013.
[6] Jakub Konečný and Peter Richtárik. Semi-stochastic gradient descent methods. arXiv preprint arXiv:1312.1666, 2013.
[7] John Langford, Lihong Li, and Tong Zhang. Sparse online learning via truncated gradient. Journal of Machine Learning Research, 10(777-801):65, 2009.
[8] Han Liu, Mark Palatucci, and Jian Zhang. Blockwise coordinate descent procedures for the multi-task lasso, with applications to neural semantic basis discovery. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 649–656, 2009.
[9] L. Meier, S. Van De Geer, and P. Bühlmann. The group lasso for logistic regression. Journal of the Royal Statistical Society: Series B, 70(1):53–71, 2008.
[10] Sahand N. Negahban, Pradeep Ravikumar, Martin J. Wainwright, and Bin Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Statistical Science, 27(4):538–557, 2012.
[11] Yu Nesterov. Gradient methods for minimizing composite objective function. Technical report, Université catholique de Louvain, Center for Operations Research and Econometrics (CORE), 2007.
[12] Peter Richtárik and Martin Takáč. Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. arXiv preprint arXiv:1107.2848, 2011.
[13] Peter Richtárik and Martin Takáč. Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Mathematical Programming, pages 1–38, 2012.
[14] Nicolas L. Roux, Mark Schmidt, and Francis R. Bach. A stochastic gradient method with an exponential convergence rate for finite training sets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2012.
[15] Shai Shalev-Shwartz and Ambuj Tewari. Stochastic methods for ℓ1-regularized loss minimization. The Journal of Machine Learning Research, 12:1865–1892, 2011.
[16] Suvrit Sra, Sebastian Nowozin, and Stephen J. Wright. Optimization for Machine Learning. MIT Press, 2012.
[17] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58(1):267–288, 1996.
[18] Huahua Wang and Arindam Banerjee. Randomized block coordinate descent for online and stochastic optimization. CoRR, abs/1407.0107, 2014.
[19] Li Wang, Ji Zhu, and Hui Zou. The doubly regularized support vector machine. Statistica Sinica, 16(2):589, 2006.
[20] Tong Tong Wu and Kenneth Lange. Coordinate descent algorithms for lasso penalized regression. The Annals of Applied Statistics, 2:224–244, 2008.
[21] Lin Xiao and Tong Zhang. A proximal stochastic gradient method with progressive variance reduction. arXiv preprint arXiv:1403.4699, 2014.
[22] Ming Yuan and Yi Lin. Model selection and estimation in the Gaussian graphical model. Biometrika, 94(1):19–35, 2007.
[23] Ji Zhu, Saharon Rosset, Trevor Hastie, and Robert Tibshirani. 1-norm support vector machines. In NIPS, volume 15, pages 49–56, 2003.
[24] Hui Zou and Trevor Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):301–320, 2005.
| 5614 |@word norm:2 suitably:1 simulation:1 covariance:1 reduction:11 initial:2 liu:2 contains:5 cyclic:1 series:3 tuned:2 ours:2 past:1 existing:11 outperforms:4 deteriorating:1 john:3 additive:1 periodically:1 numerical:4 partition:3 update:2 selected:5 amir:1 accordingly:1 beginning:2 core:1 iterates:3 characterization:1 successive:4 zhang:4 mathematical:1 direct:1 profound:1 replication:4 yuan:1 doubly:2 introduce:3 manner:1 theoretically:2 expected:2 multi:1 ming:1 xti:5 becomes:1 provided:3 r01mh102339:1 underlying:1 notation:1 moreover:2 bounded:3 null:2 argmin:6 developed:1 unified:1 guarantee:5 every:2 ofling:1 exactly:3 universit:1 biometrika:1 grant:1 positive:6 before:1 nsfc:1 approximately:1 tmax:7 eb:3 china:1 fastest:1 klmax:2 limited:1 averaged:2 practice:1 block:39 procedure:1 empirical:12 jhu:1 attain:3 composite:3 boyd:1 imprecise:1 selection:4 risk:8 applying:1 center:1 go:5 independently:5 convex:15 focused:2 decomposable:1 splitting:1 roux:1 estimator:9 vandenberghe:1 coordinate:35 annals:2 controlling:1 suppose:2 target:1 exact:11 programming:1 amortized:2 element:1 expensive:3 i2b:2 jk:1 econometrics:1 corroborated:1 ep:1 preprint:3 wang:3 calculate:7 richt:3 decrease:1 removed:1 complexity:15 nesterov:1 solving:3 predictive:1 serve:1 inapplicable:1 basis:1 accelerate:1 fast:1 shalev:1 refined:2 solve:3 statistic:3 g1:3 online:3 ldt:1 advantage:4 differentiable:2 sequence:7 net:2 propose:2 loop:18 date:1 bpg:4 achieve:2 exploiting:2 convergence:18 yiming:1 help:1 derive:1 develop:1 ac:2 solves:2 implies:2 come:1 drawback:2 closely:1 owing:1 stochastic:30 bin:1 hold:3 wright:1 exp:1 mo:1 achieves:2 adopt:4 estimation:2 uhlmann:1 council:1 largest:4 gt1:1 minimization:6 rough:1 mit:1 always:2 gaussian:3 arik:3 pn:4 shrinkage:2 boosted:1 corollary:5 inherits:1 quence:1 greatly:2 secure:1 attains:10 sense:1 tional:1 diminishing:3 tak:2 issue:1 overall:6 among:6 special:1 initialize:3 equal:6 once:1 sampling:9 progressive:1 holger:1 yu:4 nonsmooth:2 report:1 few:3 randomly:14 composed:2 maxj:2 subsampled:1 beck:1 replacement:1 n1:5 friedman:1 ab:1 interest:1 adjust:3 introduces:1 pradeep:1 kone:1 behind:1 regularizers:1 partial:22 conduct:1 desired:1 forp:2 theoretical:2 teboulle:1 cost:2 subset:1 entry:4 tour:1 uniform:1 johnson:1 characterize:2 learnt:1 proximal:18 rosset:1 combined:3 unbiasedness:1 international:1 randomized:15 siam:1 accessible:2 stay:1 negahban:1 hopkins:1 concrete:2 satisfied:3 choose:8 worse:2 zhao:1 style:1 li:2 de:2 summarized:3 coefficient:1 depends:2 closed:1 francis:1 start:1 maintains:1 option:1 shai:1 r01hg06841:1 minimize:1 accuracy:2 variance:28 largely:1 yield:3 identify:3 researcher:1 randomness:1 sebastian:1 trevor:3 frequency:2 naturally:1 associated:1 proof:3 sampled:6 gain:1 pilot:3 dataset:1 popular:1 actually:1 follow:5 methodology:1 response:1 rie:1 though:2 strongly:11 jerome:1 langford:1 working:1 sketch:1 horizontal:2 banerjee:1 logistic:3 unbiased:1 regularization:16 nonzero:1 i2:2 semantic:1 sin:1 performs:1 duchi:1 saharon:1 arindam:1 fi:13 recently:1 nih:3 ji:2 conditioning:1 volume:1 r01gm083084:1 cambridge:1 rd:7 grid:2 p20:1 lihong:1 han:2 harbin:1 gj:30 etc:1 recent:3 store:1 suvrit:1 inequality:2 success:1 yi:6 seen:4 additional:1 determine:1 semi:6 ii:22 stephen:1 rj:2 reduces:1 smooth:4 technical:1 ing:1 iis1408910:1 bach:1 lin:2 equally:1 ravikumar:1 impact:1 variant:3 regression:8 ae:1 expectation:1 arxiv:6 iteration:42 palatucci:1 singular:4 jian:1 rest:1 iii:11 enough:2 decent:1 iterate:1 hastie:3 identified:1 
lasso:5 inner:8 reduce:2 cn:1 lange:1 whether:1 accelerating:1 sahand:1 peter:3 proceed:1 jj:1 remark:3 rfi:10 tewari:1 delivering:1 detailed:3 tune:1 reduced:2 nsf:2 iis1332109:1 estimated:8 tibshirani:3 write:1 group:2 pj:3 kenneth:1 backward:1 v1:1 imaging:1 asymptotically:4 sum:1 inverse:1 family:1 throughout:4 almost:1 wu:1 raman:1 appendix:6 vb:3 annual:1 speed:1 rcv1:1 separable:1 martin:3 terminates:1 separability:1 modification:2 computationally:1 eventually:1 count:1 singer:1 end:6 tightest:1 operation:1 apply:2 generic:1 batch:24 schmidt:1 ensure:2 graphical:1 calculating:1 exploit:6 yoram:1 scholarship:1 especially:1 society:3 objective:6 g0:4 strategy:10 gradient:66 vd:1 outer:8 index:7 mini:15 minimizing:3 sinica:1 mostly:1 robert:2 blockwise:1 gk:3 implementation:2 proper:1 unknown:1 contributed:1 perform:1 vertical:2 markov:1 finite:2 descent:26 truncated:1 vj2:1 perturbation:3 tuo:1 introduced:8 meier:1 subvector:3 specified:1 connection:1 huahua:1 louvain:1 established:1 boost:3 nip:1 address:2 usually:4 sparsity:2 challenge:1 ambuj:1 rf:9 royal:3 wainwright:1 natural:3 warm:1 regularized:10 zhu:2 scheme:8 technology:1 arora:2 axis:4 naive:2 review:1 literature:3 geometric:1 understanding:1 acknowledgement:1 discovery:1 fully:2 nonstrongly:1 loss:1 sublinear:4 integrate:2 sufficient:1 xiao:1 thresholding:1 nowozin:1 share:1 lmax:10 penalized:1 supported:2 last:1 catholique:1 institute:1 fall:1 taking:2 sparse:14 van:1 overcome:1 fb:5 author:1 forward:1 rbcd:18 far:1 approximate:5 trim:1 global:1 active:19 kkt:4 xi:7 shwartz:1 continuous:3 compromised:1 search:1 decade:1 iterative:1 table:1 nature:1 terminate:1 nicolas:1 elastic:2 sra:1 necessarily:1 zou:2 spg:5 vj:2 marc:1 pk:1 main:1 statistica:1 linearly:1 big:1 whole:1 gtk:1 referred:1 tong:5 exponential:1 theorem:3 hanliu:1 covariate:1 jakub:1 exists:4 incorporating:1 effectively:1 corr:1 hui:2 nk:4 gap:4 adjustment:3 pathwise:1 partially:1 nested:2 minimizer:1 corresponds:4 viewed:1 formulated:1 lipschitz:3 change:1 specifically:2 averaging:2 preset:1 lemma:5 total:1 geer:1 select:4 support:6 mark:2 accelerated:1 incorporate:1 evaluate:1 princeton:2 |
5,098 | 5,615 | Sparse PCA with Oracle Property
Zhaoran Wang
Department of Operations Research
and Financial Engineering
Princeton University
Princeton, NJ 08544, USA
[email protected]
Quanquan Gu
Department of Operations Research
and Financial Engineering
Princeton University
Princeton, NJ 08544, USA
[email protected]
Han Liu
Department of Operations Research
and Financial Engineering
Princeton University
Princeton, NJ 08544, USA
[email protected]
Abstract
In this paper, we study the estimation of the $k$-dimensional sparse principal subspace of the covariance matrix $\Sigma$ in the high-dimensional setting. We aim to recover the oracle principal subspace solution, i.e., the principal subspace estimator obtained assuming the true support is known a priori. To this end, we propose a family of estimators based on the semidefinite relaxation of sparse PCA with novel regularizations. In particular, under a weak assumption on the magnitude of the population projection matrix, one estimator within this family exactly recovers the true support with high probability, has exact rank $k$, and attains a $\sqrt{s/n}$ statistical rate of convergence with $s$ being the subspace sparsity level and $n$ the sample size. Compared to existing support recovery results for sparse PCA, our approach does not hinge on the spiked covariance model or the limited correlation condition. As a complement to the first estimator that enjoys the oracle property, we prove that another estimator within the family achieves a sharper statistical rate of convergence than the standard semidefinite relaxation of sparse PCA, even when the previous assumption on the magnitude of the projection matrix is violated. We validate the theoretical results by numerical experiments on synthetic datasets.
1 Introduction
Principal Component Analysis (PCA) aims at recovering the top $k$ leading eigenvectors $u_1, \ldots, u_k$ of the covariance matrix $\Sigma$ from the sample covariance matrix $\hat\Sigma$. In applications where the dimension $p$ is much larger than the sample size $n$, classical PCA could be inconsistent [12]. To avoid this problem, one common assumption is that the leading eigenvector $u_1$ of the population covariance matrix $\Sigma$ is sparse, i.e., the number of nonzero elements in $u_1$ is less than the sample size, $s = |\mathrm{supp}(u_1)| < n$. This gives rise to Sparse Principal Component Analysis (SPCA). In the past decade, significant progress has been made toward the methodological development [13, 8, 30, 22, 7, 14, 28, 19, 27] as well as the theoretical understanding [12, 20, 1, 24, 21, 4, 6, 3, 2, 18, 15] of sparse PCA.
However, all the above studies focused on estimating the leading eigenvector $u_1$. When the top $k$ eigenvalues of $\Sigma$ are not distinct, there exist multiple groups of leading eigenvectors that are equivalent up to rotation. In order to address this problem, it is reasonable to de-emphasize eigenvectors and to instead focus on their span $\mathcal{U}$, i.e., the principal subspace of variation. [23, 25, 16, 27] proposed Subspace Sparsity, which defines sparsity on the projection matrix onto the subspace $\mathcal{U}$, i.e., $\Pi^* = UU^T$, as the number of nonzero entries on the diagonal of $\Pi^*$, i.e., $s = |\mathrm{supp}(\mathrm{diag}(\Pi^*))|$.
They proposed to estimate the principal subspace instead of the principal eigenvectors of $\Sigma$, based on $\ell_{1,1}$-norm regularization over a convex set called the Fantope [9], which provides a tight relaxation for simultaneous rank and orthogonality constraints on the positive semidefinite cone. The convergence rate of their estimator is $O\big(\lambda_1/(\lambda_k - \lambda_{k+1}) \cdot s\sqrt{\log p/n}\big)$, where $\lambda_k$, $k = 1, \ldots, p$ is the $k$-th largest eigenvalue of $\Sigma$. Moreover, their support recovery relies on the limited correlation condition (LCC) [16], which is similar to the irrepresentable condition in sparse linear regression. We notice that [1] also analyzed the semidefinite relaxation of sparse PCA. However, they only considered the rank-1 principal subspace and the stringent spiked covariance model, where the population covariance matrix is block diagonal.
In this paper, we aim to recover the oracle principal subspace solution, i.e., the principal subspace estimator obtained assuming the true support is known a priori. Based on recent progress made on penalized M-estimators with nonconvex penalty functions [17, 26], we propose a family of estimators based on the semidefinite relaxation of sparse PCA with novel regularizations. It estimates the $k$-dimensional principal subspace of a population matrix $\Sigma$ based on its empirical version $\hat\Sigma$. In particular, under a weak assumption on the magnitude of the projection matrix, i.e.,
$$\min_{(i,j) \in T} |\Pi^*_{ij}| \ge \nu + \frac{C\sqrt{k}\lambda_1}{\lambda_k - \lambda_{k+1}}\sqrt{\frac{s}{n}},$$
where $T$ is the support of $\Pi^*$, $\nu$ is a parameter from the nonconvex penalty and $C$ is a universal constant,
one estimator within this family exactly recovers the oracle solution with high probability, and is exactly of rank $k$. It is worth noting that unlike the linear regression setting, where the estimators that can recover the oracle solution often have nonconvex formulations, our estimator here is obtained from a convex optimization¹, and has a unique global solution. Compared to existing support recovery results for sparse PCA, our approach does not hinge on the spiked covariance model [1] or the limited correlation condition [16]. Moreover, it attains the same convergence rate as standard PCA as if the support of the true projection matrix were provided a priori. More specifically, the Frobenius norm error of the estimator $\hat\Pi$ is bounded with high probability as follows:
$$\|\hat\Pi - \Pi^*\|_F \le \frac{C\lambda_1}{\lambda_k - \lambda_{k+1}}\sqrt{\frac{ks}{n}},$$
where $k$ is the dimension of the subspace.
As a complement to the first estimator that enjoys the oracle property, we prove that another estimator within the family achieves a sharper statistical rate of convergence than the standard semidefinite relaxation of sparse PCA, even when the previous assumption on the magnitude of the projection matrix is violated. This estimator is based on nonconvex optimization. With a suitable choice of the regularization parameter, we show that any local optimum of the optimization problem is a good estimator for the projection matrix of the true principal subspace. In particular, we show that the Frobenius norm error of the estimator $\hat\Pi$ is bounded with high probability as
$$\|\hat\Pi - \Pi^*\|_F \le \frac{C\lambda_1}{\lambda_k - \lambda_{k+1}}\left(\sqrt{\frac{s_1 s}{n}} + \sqrt{\frac{m_1 m_2 \log p}{n}}\right),$$
where $s_1$, $m_1$, $m_2$ are all no larger than $s$. Evidently, this is sharper than the convergence rate proved in [23]. Note that the above rate consists of two terms: the $O(\sqrt{s_1 s/n})$ term corresponds to the entries of the projection matrix satisfying the previous assumption (i.e., with large magnitude), while the $O(\sqrt{m_1 m_2 \log p/n})$ term corresponds to the entries of the projection matrix violating the previous assumption (i.e., with small magnitude).
Finally, we present numerical experiments on synthetic datasets, which support our theoretical analysis.
The rest of this paper is arranged as follows. Section 2 introduces two estimators for the principal subspace of a covariance matrix. Section 3 analyzes the statistical properties of the two estimators.
We present an algorithm for solving the estimators in Section 4. Section 5 shows the experiments on synthetic datasets. Section 6 concludes this work with remarks.
¹Even though we use a nonconvex penalty, the resulting problem as a whole is still a convex optimization problem, because we add another strongly convex term in the regularization part, i.e., $\tau/2 \cdot \|\Pi\|_F^2$.
Notation. Let $[p]$ be shorthand for $\{1, \ldots, p\}$. For matrices $A$, $B$ of compatible dimension, $\langle A, B\rangle := \mathrm{tr}(A^T B)$ is the Frobenius inner product, and $\|A\|_F^2 = \langle A, A\rangle$ is the squared Frobenius norm. $\|x\|_q$ is the usual $\ell_q$ norm, with $\|x\|_0$ defined as the number of nonzero entries of $x$. $\|A\|_{a,b}$ is the $(a,b)$-norm, defined to be the $\ell_b$ norm of the vector of rowwise $\ell_a$ norms of $A$; e.g., $\|A\|_{1,\infty}$ is the maximum absolute row sum. $\|A\|_2$ is the spectral norm of $A$, and $\|A\|_*$ is the trace norm (nuclear norm) of $A$. For a symmetric matrix $A$, we define $\lambda_1(A) \ge \lambda_2(A) \ge \ldots \ge \lambda_p(A)$ to be the eigenvalues of $A$ with multiplicity. When the context is obvious we write $\lambda_j = \lambda_j(A)$ as shorthand.
2 The Proposed Estimators
In this section, we present a family of estimators based on the semidefinite relaxation of sparse PCA
with novel regularizations, for the principal subspace of the population covariance matrix. Before
going into the details of the proposed estimators, we first present the formal definition of principal
subspace estimation.
2.1 Problem Definition
Let $\Sigma \in \mathbb{R}^{p \times p}$ be an unknown covariance matrix, with eigen-decomposition
$$\Sigma = \sum_{i=1}^p \lambda_i u_i u_i^T,$$
where $\lambda_1 \ge \ldots \ge \lambda_p$ are the eigenvalues (with multiplicity) and $u_1, \ldots, u_p \in \mathbb{R}^p$ are the associated eigenvectors. The $k$-dimensional principal subspace of $\Sigma$ is the subspace spanned by $u_1, \ldots, u_k$. The projection matrix onto the $k$-dimensional principal subspace is
$$\Pi^* = \sum_{i=1}^k u_i u_i^T = UU^T,$$
where $U = [u_1, \ldots, u_k]$ is an orthonormal matrix. The reason why the principal subspace is more appealing is that it avoids the problem of non-identifiability of eigenvectors when the eigenvalues are not distinct. In fact, we only need to assume $\lambda_k - \lambda_{k+1} > 0$ instead of $\lambda_1 > \ldots > \lambda_k > \lambda_{k+1}$. Then the principal subspace $\Pi^*$ is unique and identifiable. We also assume that $k$ is fixed.
Next, we introduce the definition of Subspace Sparsity [25], which can be seen as an extension of the conventional Eigenvector Sparsity used in sparse PCA.
Definition 1. [25] (Subspace Sparsity) The projection $\Pi^*$ onto the subspace spanned by the eigenvectors of $\Sigma$ corresponding to its $k$ largest eigenvalues satisfies $\|U\|_{2,0} \le s$, or equivalently $\|\mathrm{diag}(\Pi^*)\|_0 \le s$.
In the extreme case that k = 1, the support of the projection matrix onto the rank-1 principal subspace
is the same as the support of the sparse leading eigenvector.
The problem definition of principal subspace estimation is: given an i.i.d. sample $\{x_1, x_2, \ldots, x_n\} \subset \mathbb{R}^p$ drawn from an unknown distribution with zero mean and covariance matrix $\Sigma$, we aim to estimate $\Pi^*$ based on the empirical covariance matrix $\hat\Sigma \in \mathbb{R}^{p \times p}$, given by $\hat\Sigma = \frac{1}{n}\sum_{i=1}^n x_i x_i^T$. We are particularly interested in the high dimensional setting, where $p \to \infty$ as $n \to \infty$, in sharp contrast to the conventional setting where $p$ is fixed and $n \to \infty$.
Now we are ready to design a family of estimators for $\Pi^*$.
2.2 A Family of Sparse PCA Estimators
Given a sample covariance matrix $\hat\Sigma \in \mathbb{R}^{p \times p}$, we propose a family of sparse principal subspace estimators $\hat\Pi$, each defined to be a solution of the semidefinite relaxation of sparse PCA
$$\hat\Pi_\tau = \operatorname*{argmin}_{\Pi}\; -\langle\hat\Sigma, \Pi\rangle + \frac{\tau}{2}\|\Pi\|_F^2 + P_\lambda(\Pi), \quad \text{subject to } \Pi \in \mathcal{F}^k, \qquad (1)$$
where $\tau \ge 0$, $\lambda > 0$ is a regularization parameter, $\mathcal{F}^k$ is a convex body called the Fantope [9, 23], defined as
$$\mathcal{F}^k = \{X : 0 \preceq X \preceq I \text{ and } \mathrm{tr}(X) = k\},$$
and $P_\lambda(\Pi)$ is a decomposable nonconvex penalty, i.e., $P_\lambda(\Pi) = \sum_{i,j=1}^p p_\lambda(\Pi_{ij})$. Typical nonconvex
penalties include the smoothly clipped absolute deviation (SCAD) penalty [10] and the minimax concave penalty (MCP) [29], which can eliminate the estimation bias and attain more refined statistical rates of convergence [17, 26]. For example, the MCP penalty is defined as
$$p_\lambda(t) = \lambda\int_0^{|t|}\left(1 - \frac{z}{\lambda b}\right)_+ dz = \left(\lambda|t| - \frac{t^2}{2b}\right)\mathbf{1}(|t| \le b\lambda) + \frac{b\lambda^2}{2}\mathbf{1}(|t| > b\lambda), \qquad (2)$$
where $b > 0$ is a fixed parameter.
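To make the penalty concrete, here is a small Python sketch of the MCP value and of the decomposition $p_\lambda(t) = \lambda|t| + q_\lambda(t)$ used below (my own illustration of equation (2)):

```python
import numpy as np

def mcp(t, lam, b=3.0):
    """MCP penalty p_lambda(t) from (2)."""
    t = np.abs(t)
    return np.where(t <= b * lam, lam * t - t**2 / (2 * b), b * lam**2 / 2)

def mcp_concave_part(t, lam, b=3.0):
    """Concave component q_lambda(t) = p_lambda(t) - lam * |t|."""
    return mcp(t, lam, b) - lam * np.abs(t)
```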
An important property of the nonconvex penalties $p_\lambda(t)$ is that they can be formulated as the sum of the $\ell_1$ penalty and a concave part $q_\lambda(t)$: $p_\lambda(t) = \lambda|t| + q_\lambda(t)$. For example, if $p_\lambda(t)$ is chosen to be the MCP penalty, then the corresponding $q_\lambda(t)$ is
$$q_\lambda(t) = -\frac{t^2}{2b}\mathbf{1}(|t| \le b\lambda) + \left(\frac{b\lambda^2}{2} - \lambda|t|\right)\mathbf{1}(|t| > b\lambda).$$
We rely on the following regularity conditions on $p_\lambda(t)$ and its concave component $q_\lambda(t)$:
(a) $p_\lambda(t)$ satisfies $p'_\lambda(t) = 0$ for $|t| \ge \nu > 0$.
(b) $q'_\lambda(t)$ is monotone and Lipschitz continuous, i.e., for $t' \ge t$, there exists a constant $\zeta_- \ge 0$ such that
$$-\zeta_- \le \frac{q'_\lambda(t') - q'_\lambda(t)}{t' - t}.$$
(c) $q_\lambda(t)$ and $q'_\lambda(t)$ pass through the origin, i.e., $q_\lambda(0) = q'_\lambda(0) = 0$.
(d) $q'_\lambda(t)$ is bounded, i.e., $|q'_\lambda(t)| \le \lambda$ for any $t$.
The above conditions apply to a variety of nonconvex penalty functions. For example, for the MCP in (2), we have $\nu = b\lambda$ and $\zeta_- = 1/b$.
It is easy to show that when $\tau > \zeta_-$, the problem in (1) is strongly convex, and therefore its solution is unique. We notice that [16] also introduced the same regularization term $\tau/2 \cdot \|\Pi\|_F^2$ in their estimator. However, our motivation is quite different from theirs: we introduce this term because it is essential for the estimator in (1) to achieve the oracle property, provided that the magnitude of all the entries in the population projection matrix is sufficiently large. We call (1) the Convex Sparse PCA Estimator.
Note that the constraint $\Pi \in \mathcal{F}^k$ only guarantees that the rank of $\hat\Pi$ is at most $k$. However, we can prove that our estimator is of rank $k$ exactly. This is in contrast to [23], where some post projection is needed to make sure their estimator is of rank $k$.
2.3 Nonconvex Sparse PCA Estimator
In the case that the magnitude of the entries in the population projection matrix $\Pi^*$ violates the previous assumption, (1) with $\tau > \zeta_-$ no longer enjoys the desired oracle property. To this end, we consider another estimator from the family of estimators in (1) with $\tau = 0$:
$$\hat\Pi_{\tau=0} = \operatorname*{argmin}_{\Pi}\; -\langle\hat\Sigma, \Pi\rangle + P_\lambda(\Pi), \quad \text{subject to } \Pi \in \mathcal{F}^k. \qquad (3)$$
Since $-\langle\hat\Sigma, \Pi\rangle$ is an affine function and $P_\lambda(\Pi)$ is nonconvex, the estimator in (3) is nonconvex. We simply refer to it as the Nonconvex Sparse PCA Estimator. We will prove that it achieves a sharper statistical rate of convergence than the standard semidefinite relaxation of sparse PCA [23], even when the previous assumption on the magnitude of the projection matrix is violated.
It is worth noting that although our estimators in (1) and (3) are for the projection matrix $\Pi$ of the principal subspace, we can also provide an estimator of $U$. By definition, the true subspace satisfies $\Pi^* = UU^T$. Thus, the estimator $\hat U$ can be computed from $\hat\Pi$ using eigenvalue decomposition. In detail, we can set the columns of $\hat U$ to be the top $k$ leading eigenvectors of $\hat\Pi$. In case the top $k$ eigenvalues of $\hat\Pi$ are the same, we can follow the standard PCA convention by rotating the eigenvectors with a rotation matrix $R$, such that $(\hat U R)^T \hat\Pi (\hat U R)$ is diagonal. Then $\hat U R$ is the orthonormal basis for the estimated principal subspace, and can be used for visualization and dimension reduction.
the estimated principal subspace, and can be used for visualization and dimension reduction.
3
Statistical Properties of the Proposed Estimators
In this section, we present the statistical properties of the two estimators in the family (1). One is
with ? > ?? , the other is with ? = 0. The proofs are all included in the longer version of this paper.
To evaluate the statistical performance of the principal subspace estimators, we need to define the
estimator error between the estimated projection matrix and the true projection matrix. In our study,
b ? ?? kF .
we use the Frobenius norm error k?
3.1 Oracle Property and Convergence Rate of Convex Sparse PCA
We first analyze the estimator in (1) when $\tau > \zeta_-$. We prove that the estimator $\hat\Pi$ in (1) recovers the support of $\Pi^*$ under suitable conditions on its magnitude. Before we present this theorem, we introduce the definition of an oracle estimator, denoted by $\hat\Pi_O$. Recall that $S = \mathrm{supp}(\mathrm{diag}(\Pi^*))$. The oracle estimator $\hat\Pi_O$ is defined as
$$\hat\Pi_O = \operatorname*{argmin}_{\mathrm{supp}(\mathrm{diag}(\Pi)) \subseteq S,\; \Pi \in \mathcal{F}^k} L(\Pi), \qquad (4)$$
where $L(\Pi) = -\langle\hat\Sigma, \Pi\rangle + \frac{\tau}{2}\|\Pi\|_F^2$. Note that the above oracle estimator is not a practical estimator, because we do not know the true support $S$ in practice.
The following theorem shows that, under suitable conditions, $\hat\Pi$ in (1) is the same as the oracle estimator $\hat\Pi_O$ with high probability, and therefore exactly recovers the support of $\Pi^*$.
Theorem 1 (Support Recovery). Suppose the nonconvex penalty $P_\lambda(\Pi) = \sum_{i,j=1}^p p_\lambda(\Pi_{ij})$ satisfies conditions (a) and (b), and $\Pi^*$ satisfies
$$\min_{(i,j) \in T} |\Pi^*_{ij}| \ge \nu + \frac{C\sqrt{k}\lambda_1}{\lambda_k - \lambda_{k+1}}\sqrt{\frac{s}{n}}.$$
For the estimator in (1) with the regularization parameter $\lambda = C\lambda_1\sqrt{\log p/n}$ and $\tau > \zeta_-$, we have with probability at least $1 - 1/n^2$ that $\hat\Pi = \hat\Pi_O$, which further implies $\mathrm{supp}(\mathrm{diag}(\hat\Pi)) = \mathrm{supp}(\mathrm{diag}(\hat\Pi_O)) = \mathrm{supp}(\mathrm{diag}(\Pi^*))$ and $\mathrm{rank}(\hat\Pi) = \mathrm{rank}(\hat\Pi_O) = k$.
For example, if we use the MCP penalty, the magnitude assumption turns out to be
$$\min_{(i,j) \in T} |\Pi^*_{ij}| \ge C b\lambda_1\sqrt{\frac{\log p}{n}} + \frac{C\sqrt{k}\lambda_1}{\lambda_k - \lambda_{k+1}}\sqrt{\frac{s}{n}}.$$
Note that our proposed estimator in (1) does not rely on any oracle knowledge of the true support. Our theory in Theorem 1 shows that, with high probability, the estimator is identical to the oracle estimator, and thus exactly recovers the true support.
Compared to existing support recovery results for sparse PCA [1, 16], our condition on the magnitude is weaker. Note that the limited correlation condition [16] and the even stronger spiked covariance condition [1] impose constraints not only on the principal subspace corresponding to $\lambda_1, \ldots, \lambda_k$, but also on the "non-signal" part, i.e., the subspace corresponding to $\lambda_{k+1}, \ldots, \lambda_p$. Unlike these conditions, we only impose conditions on the "signal" part, i.e., the magnitude of the projection matrix $\Pi^*$ corresponding to $\lambda_1, \ldots, \lambda_k$. We attribute the oracle property of our estimator to the novel regularizations ($\tau/2 \cdot \|\Pi\|_F^2$ plus the nonconvex penalty).
The oracle property immediately implies that under the above conditions on the magnitude, the
estimator in (1) achieves the convergence rate of standard PCA as if we know the true support S a
priori. This is summarized in the following theorem.
Theorem 2. Under the same conditions as Theorem 1, we have with probability at least $1 - 1/n^2$ that
$$\|\hat\Pi - \Pi^*\|_F \le \frac{C\sqrt{k}\lambda_1}{\lambda_k - \lambda_{k+1}}\sqrt{\frac{s}{n}},$$
for some universal constant $C$.
Evidently, the estimator attains a much sharper statistical rate of convergence than the state-of-the-art
result proved in [23].
3.2 Convergence Rate of Nonconvex Sparse PCA
We now analyze the estimator in (3), which is a special case of (1) with $\tau = 0$. We basically show that any local optimum of the nonconvex optimization problem in (3) is a good estimator. In other words, our theory applies to any projection matrix $\hat\Pi_{\tau=0} \in \mathbb{R}^{p \times p}$ that satisfies the first-order necessary condition (variational inequality) to be a local minimum of (3):
$$\big\langle \hat\Pi_{\tau=0} - \Pi',\; -\hat\Sigma + \nabla P_\lambda(\hat\Pi_{\tau=0})\big\rangle \le 0, \quad \forall\, \Pi' \in \mathcal{F}^k.$$
Recall that $S = \mathrm{supp}(\mathrm{diag}(\Pi^*))$ with $|S| = s$, $T = S \times S$ with $|T| = s^2$, and $T^c = [p] \times [p] \setminus T$. For $(i,j) \in T_1 \subseteq T$ with $|T_1| = t_1$, we assume $|\Pi^*_{ij}| \ge \nu$, while for $(i,j) \in T_2 \subseteq T$ with $|T_2| = t_2$, we assume $|\Pi^*_{ij}| < \nu$. Clearly, we have $s^2 = t_1 + t_2$. There exists a minimal submatrix $A \in \mathbb{R}^{n_1 \times n_2}$ of $\Pi^*$ which contains all the elements in $T_1$, with $s_1 = \min\{n_1, n_2\}$. There also exists a minimal submatrix $B \in \mathbb{R}^{m_1 \times m_2}$ of $\Pi^*$ that contains all the elements in $T_2$.
Note that in general, $s_1 \le s$, $m_1 \le s$ and $m_2 \le s$. In the worst case, we have $s_1 = m_1 = m_2 = s$.
Theorem 3. Suppose the nonconvex penalty $P_\lambda(\Pi) = \sum_{i,j=1}^p p_\lambda(\Pi_{ij})$ satisfies conditions (b), (c) and (d). For the estimator in (3) with regularization parameter $\lambda = C\lambda_1\sqrt{\log p/n}$ and $\zeta_- \le (\lambda_k - \lambda_{k+1})/4$, with probability at least $1 - 4/p^2$, any local optimal solution $\hat\Pi_{\tau=0}$ satisfies
$$\|\hat\Pi_{\tau=0} - \Pi^*\|_F \le \underbrace{\frac{4C\lambda_1\sqrt{s_1}}{\lambda_k - \lambda_{k+1}}\sqrt{\frac{s}{n}}}_{T_1:\, |\Pi^*_{ij}| \ge \nu} + \underbrace{\frac{12C\lambda_1\sqrt{m_1 m_2}}{\lambda_k - \lambda_{k+1}}\sqrt{\frac{\log p}{n}}}_{T_2:\, |\Pi^*_{ij}| < \nu}.$$
Note that the upper bound can be decomposed into two parts according to the magnitude of the entries of the true projection matrix, i.e., $|\Pi^*_{ij}|$, $1 \le i, j \le p$. We have the following comments.
On the one hand, for those strong "signals", i.e., $|\Pi^*_{ij}| \ge \nu$, we are able to achieve the convergence rate of $O\big(\lambda_1\sqrt{s_1}/(\lambda_k - \lambda_{k+1}) \cdot \sqrt{s/n}\big)$. Since $s_1$ is at most equal to $s$, the worst-case rate is $O\big(\lambda_1/(\lambda_k - \lambda_{k+1}) \cdot s/\sqrt{n}\big)$, which is sharper than the rate proved in [23], i.e., $O\big(\lambda_1/(\lambda_k - \lambda_{k+1}) \cdot s\sqrt{\log p/n}\big)$. In the case that $s_1 < s$, the convergence rate could be even sharper.
On the other hand, for those weak "signals", i.e., $|\Pi^*_{ij}| < \nu$, we are able to achieve the convergence rate of $O\big(\lambda_1\sqrt{m_1 m_2}/(\lambda_k - \lambda_{k+1}) \cdot \sqrt{\log p/n}\big)$. Since both $m_1$ and $m_2$ are at most equal to $s$, the worst-case rate is $O\big(\lambda_1/(\lambda_k - \lambda_{k+1}) \cdot s\sqrt{\log p/n}\big)$, which is the same as the rate proved in [23]. In the other case that $m_1 m_2 < s^2$, the convergence rate will be sharper than that in [23].
The above discussion clearly demonstrates the advantage of our estimator, which essentially benefits from the nonconvex penalty.
4 Optimization Algorithm
In this section, we present an optimization algorithm to solve (1) and (3). Since (3) is a special case of (1) with $\tau = 0$, it is sufficient to develop an algorithm for solving (1).
Observing that (1) has both a nonsmooth regularization term and the nontrivial constraint set $\mathcal{F}^k$, it is difficult to directly apply gradient descent and its variants. Following [23], we present an alternating direction method of multipliers (ADMM) algorithm. The proposed ADMM algorithm can efficiently compute the global optimum of (1); it can also find a local optimum of (3). It is worth noting that other algorithms, such as the Peaceman–Rachford splitting method [11], can also be used to solve (1).
We introduce an auxiliary variable $\Phi \in \mathbb{R}^{p \times p}$, and consider an equivalent form of (1) as follows:
$$\operatorname*{argmin}_{\Pi, \Phi}\; -\langle\hat\Sigma, \Pi\rangle + \frac{\tau}{2}\|\Pi\|_F^2 + P_\lambda(\Phi), \quad \text{subject to } \Pi = \Phi,\; \Pi \in \mathcal{F}^k. \qquad (5)$$
The augmented Lagrangian function corresponding to (5) is
$$L(\Pi, \Phi, \Theta) = \mathbf{1}_{\mathcal{F}^k}(\Pi) - \langle\hat\Sigma, \Pi\rangle + \frac{\tau}{2}\|\Pi\|_F^2 + P_\lambda(\Phi) + \langle\Theta, \Pi - \Phi\rangle + \frac{\rho}{2}\|\Pi - \Phi\|_F^2, \qquad (6)$$
where $\Theta \in \mathbb{R}^{p \times p}$ is the Lagrange multiplier associated with the equality constraint $\Pi = \Phi$ in (5), and $\rho > 0$ is a penalty parameter that enforces the equality constraint $\Pi = \Phi$. The detailed update scheme is described in Algorithm 1. In detail, the first subproblem (Line 5 of Algorithm 1) can be solved by projecting $\frac{\rho}{\rho+\tau}\Phi^{(t)} - \frac{1}{\rho+\tau}\Theta^{(t)} + \frac{1}{\rho+\tau}\hat\Sigma$ onto the Fantope $\mathcal{F}^k$. This projection has a simple closed-form solution, as shown by [23, 16]. The second subproblem (Line 6 of Algorithm 1) can be solved by the generalized soft-thresholding operator, as shown by [5, 17].
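For intuition, the Fantope projection amounts to projecting the eigenvalues onto the capped simplex $\{0 \le \gamma_i \le 1,\ \sum_i \gamma_i = k\}$; a Python sketch (my own illustration following the construction in [23], with the root-finding done by bisection) is given below.

```python
import numpy as np

def fantope_projection(A, k, tol=1e-10):
    """Project a symmetric matrix A onto F^k = {0 <= X <= I, tr(X) = k}."""
    evals, evecs = np.linalg.eigh(A)
    # Find theta such that sum_i clip(evals - theta, 0, 1) = k (bisection).
    lo, hi = evals.min() - 1.0, evals.max()
    while hi - lo > tol:
        theta = (lo + hi) / 2.0
        if np.clip(evals - theta, 0.0, 1.0).sum() > k:
            lo = theta
        else:
            hi = theta
    gamma = np.clip(evals - (lo + hi) / 2.0, 0.0, 1.0)
    return (evecs * gamma) @ evecs.T
```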
Algorithm 1 Solving the convex relaxation (5) using ADMM.
1: Input: covariance matrix estimator $\hat\Sigma$
2: Parameter: regularization parameters $\lambda > 0$, $\tau \ge 0$; penalty parameter $\rho > 0$ of the augmented Lagrangian; maximum number of iterations $T$
3: $\Pi^{(0)} \leftarrow 0$, $\Phi^{(0)} \leftarrow 0$, $\Theta^{(0)} \leftarrow 0$
4: For $t = 0, \ldots, T - 1$
5:    $\Pi^{(t+1)} \leftarrow \operatorname*{argmin}_{\Pi \in \mathcal{F}^k} \frac{1}{2}\big\|\Pi - \big(\frac{\rho}{\rho+\tau}\Phi^{(t)} - \frac{1}{\rho+\tau}\Theta^{(t)} + \frac{1}{\rho+\tau}\hat\Sigma\big)\big\|_F^2$
6:    $\Phi^{(t+1)} \leftarrow \operatorname*{argmin}_{\Phi} \frac{1}{2}\big\|\Phi - \big(\Pi^{(t+1)} + \frac{1}{\rho}\Theta^{(t)}\big)\big\|_F^2 + P_{\lambda/\rho}(\Phi)$
7:    $\Theta^{(t+1)} \leftarrow \Theta^{(t)} + \rho\big(\Pi^{(t+1)} - \Phi^{(t+1)}\big)$
8: End For
9: Output: $\Phi^{(T)}$
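Putting the pieces together, a compact Python sketch of Algorithm 1 could look as follows (again my own illustration, reusing the `fantope_projection` sketch above; the MCP proximal step is the generalized soft-thresholding of [5, 17], written here under the assumption $b > 1/\rho$):

```python
import numpy as np

def mcp_prox(z, lam, b, eta):
    """Generalized soft-thresholding: prox of eta * MCP (assumes b > eta)."""
    return np.where(np.abs(z) > b * lam, z,
                    np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0)
                    / (1.0 - eta / b))

def admm_sparse_pca(S_hat, k, lam, tau=1.0, rho=1.0, b=3.0, T=200):
    """ADMM for the convex relaxation (5)."""
    p = S_hat.shape[0]
    Pi, Phi, Theta = np.zeros((p, p)), np.zeros((p, p)), np.zeros((p, p))
    for _ in range(T):
        target = (rho * Phi - Theta + S_hat) / (rho + tau)
        Pi = fantope_projection(target, k)                    # line 5
        Phi = mcp_prox(Pi + Theta / rho, lam, b, 1.0 / rho)   # line 6
        Theta = Theta + rho * (Pi - Phi)                      # line 7
    return Phi
```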
5 Experiments
In this section, we conduct simulations on synthetic datasets to validate the effectiveness of the proposed estimators in Section 2. We generate two synthetic datasets by designing two covariance matrices. The covariance matrix $\Sigma$ is constructed through its eigenvalue decomposition. In detail, for synthetic dataset I, we set $s = 5$ and $k = 1$. The leading eigenvalue of the covariance matrix $\Sigma$ is set as $\lambda_1 = 100$, and its corresponding eigenvector is sparse in the sense that only the first $s = 5$ entries are nonzero and set to $1/\sqrt{5}$. The other eigenvalues are set as $\lambda_2 = \ldots = \lambda_p = 1$, and their eigenvectors are chosen arbitrarily. For synthetic dataset II, we set $s = 10$ and $k = 5$. The top 5 eigenvalues are set as $\lambda_1 = \ldots = \lambda_4 = 100$ and $\lambda_5 = 10$. We generate their corresponding eigenvectors by sampling the nonzero entries from a standard Gaussian distribution, and then orthonormalizing them while retaining the first $s = 10$ rows nonzero. The other eigenvalues are set as $\lambda_6 = \ldots = \lambda_p = 1$, and the associated eigenvectors are chosen arbitrarily. Based on the covariance matrix, the ground-truth rank-$k$ projection matrix $\Pi^*$ can be immediately calculated. Note that synthetic dataset II is more challenging than synthetic dataset I, because the smallest magnitude of $\Pi^*$ in synthetic dataset I is 0.2, while that in synthetic dataset II is much smaller (about $10^{-3}$). We sample $n = 80$ i.i.d. observations from a normal distribution $N(0, \Sigma)$ with $p = 128$, and then calculate the sample covariance matrix $\hat\Sigma$.
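A sketch of how such a covariance could be constructed in Python (an illustration of dataset I; the random seed is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
p, s, n = 128, 5, 80

# Leading sparse eigenvector: first s entries equal to 1/sqrt(s).
u1 = np.zeros(p)
u1[:s] = 1.0 / np.sqrt(s)
# lambda_1 = 100 along u1, remaining eigenvalues equal to 1.
Sigma = 100.0 * np.outer(u1, u1) + (np.eye(p) - np.outer(u1, u1))
Pi_star = np.outer(u1, u1)            # ground-truth rank-1 projection

X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
S_hat = X.T @ X / n                   # sample covariance
```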
Since the focus of this paper is principal subspace estimation rather than principal eigenvector estimation, it is sufficient to compare our proposed estimators (Convex SPCA in (1) and Nonconvex SPCA in (3)) with the estimator proposed in [23], which is referred to as Fantope SPCA. Note that Fantope SPCA is the pioneering and state-of-the-art estimator for principal subspace estimation in SPCA. However, since Fantope SPCA uses the convex penalty $\|\Pi\|_{1,1}$ on the projection matrix $\Pi$, the estimator is biased [29]. We also compare our proposed estimators with the oracle estimator in (4), which is not a practical estimator but provides the optimal results that we could achieve. In our experiments, we need to compare the estimator attained by the algorithmic procedure with the oracle estimator. To obtain the oracle estimator, we apply standard PCA on the submatrix (supported on the true support) of the sample covariance $\hat\Sigma$. Note that the true support is known because we use synthetic datasets here.
In order to evaluate the performance of the above estimators, we look at the Frobenius norm error $\|\hat\Pi - \Pi^*\|_F$. We also use the True Positive Rate (TPR) and the False Positive Rate (FPR) to evaluate the support recovery result. The larger the TPR and the smaller the FPR, the better the support recovery result.
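Concretely, support recovery can be scored on the diagonal support of the estimated projection matrix; a brief sketch (my own illustration, with the thresholding tolerance a hypothetical choice):

```python
import numpy as np

def support_metrics(Pi_hat, Pi_star, tol=1e-6):
    """TPR/FPR of the recovered diagonal support of the projection matrix."""
    est = np.abs(np.diag(Pi_hat)) > tol
    true = np.abs(np.diag(Pi_star)) > tol
    tpr = np.sum(est & true) / max(np.sum(true), 1)
    fpr = np.sum(est & ~true) / max(np.sum(~true), 1)
    return tpr, fpr
```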
Both of our estimators use the MCP penalty, though other nonconvex penalties such as SCAD could be used as well. In particular, we set $b = 3$. For Convex SPCA, we set $\tau = 2/b$. The regularization parameter $\lambda$ in our estimators as well as in Fantope SPCA is tuned by 5-fold cross validation on a held-out dataset. The experiments are repeated 20 times, and the means as well as the standard errors are reported. The empirical results on synthetic datasets I and II are displayed in Table 1.
Table 1: Empirical results for subspace estimation on synthetic datasets I and II.

Synthetic I (n = 80, p = 128, s = 5, k = 1):
                   ||Pi_hat - Pi*||_F    TPR                FPR
Oracle             0.0289 ± 0.0134       1                  0
Fantope SPCA       0.0317 ± 0.0149       1.0000 ± 0.0000    0.0146 ± 0.0218
Convex SPCA        0.0290 ± 0.0132       1.0000 ± 0.0000    0.0000 ± 0.0000
Nonconvex SPCA     0.0290 ± 0.0133       1.0000 ± 0.0000    0.0000 ± 0.0000

Synthetic II (n = 80, p = 128, s = 10, k = 5):
                   ||Pi_hat - Pi*||_F    TPR                FPR
Oracle             0.1487 ± 0.0208       1                  0
Fantope SPCA       0.2788 ± 0.0437       1.0000 ± 0.0000    0.8695 ± 0.1634
Convex SPCA        0.2031 ± 0.0331       1.0000 ± 0.0000    0.5814 ± 0.0674
Nonconvex SPCA     0.2041 ± 0.0326       1.0000 ± 0.0000    0.6000 ± 0.0829
It can be observed that both the Convex SPCA and Nonconvex SPCA estimators greatly outperform the Fantope SPCA estimator [23] on both datasets. In detail, on synthetic dataset I, with relatively large magnitude of $\Pi^*$, our Convex SPCA estimator achieves the same estimation error and perfect support recovery as the oracle estimator. This is consistent with our theoretical results in Theorems 1 and 2. In addition, our Nonconvex SPCA estimator achieves very similar results to Convex SPCA. This is not very surprising, because provided that the magnitude of all the entries in $\Pi^*$ is large, Nonconvex SPCA attains a rate that is only a factor of $\sqrt{s}$ slower than Convex SPCA. Fantope SPCA cannot recover the support perfectly because it detects several false positive supports. This implies that the LCC condition is stronger than our large magnitude assumption, and does not hold on this dataset.
On synthetic dataset II, our Convex SPCA estimator does not perform as well as the oracle estimator. This is because the magnitude of $\Pi^*$ is small (about $10^{-3}$); given the sample size $n = 80$, the conditions of Theorem 1 are violated. But note that Convex SPCA is still slightly better than Nonconvex SPCA, and both of them are much better than Fantope SPCA. This again illustrates the superiority of our estimators over the existing best approach, i.e., Fantope SPCA [23].
6 Conclusion
In this paper, we study the estimation of the $k$-dimensional principal subspace of a population matrix $\Sigma$ based on the sample covariance matrix $\hat\Sigma$. We proposed a family of estimators based on novel regularizations. The first estimator is based on convex optimization, which is suitable for projection matrices with large-magnitude entries; it enjoys the oracle property and the same convergence rate as standard PCA. The second estimator is based on nonconvex optimization, and it also attains a faster rate than existing principal subspace estimators, even when the large magnitude assumption is violated. Numerical experiments on synthetic datasets support our theoretical results.
Acknowledgement
We would like to thank the anonymous reviewers for their helpful comments. This research is
partially supported by the grants NSF IIS1408910, NSF IIS1332109, NIH R01MH102339, NIH
R01GM083084, and NIH R01HG06841.
References
[1] A. Amini and M. Wainwright. High-dimensional analysis of semidefinite relaxations for sparse principal components. The Annals of Statistics, 37(5B):2877–2921, 2009.
[2] Q. Berthet and P. Rigollet. Computational lower bounds for sparse PCA. arXiv preprint arXiv:1304.0828, 2013.
[3] Q. Berthet and P. Rigollet. Optimal detection of sparse principal components in high dimension. The Annals of Statistics, 41(4):1780–1815, 2013.
[4] A. Birnbaum, I. M. Johnstone, B. Nadler, and D. Paul. Minimax bounds for sparse PCA with noisy high-dimensional data. The Annals of Statistics, 41(3):1055–1084, 2013.
[5] P. Breheny and J. Huang. Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection. The Annals of Applied Statistics, 5(1):232–253, 2011.
[6] T. T. Cai, Z. Ma, and Y. Wu. Sparse PCA: Optimal rates and adaptive estimation. The Annals of Statistics, 41(6):3074–3110, 2013.
[7] A. d'Aspremont, F. Bach, and L. Ghaoui. Optimal solutions for sparse principal component analysis. The Journal of Machine Learning Research, 9:1269–1294, 2008.
[8] A. d'Aspremont, L. E. Ghaoui, M. I. Jordan, and G. R. Lanckriet. A direct formulation for sparse PCA using semidefinite programming. SIAM Review, pages 434–448, 2007.
[9] J. Dattorro. Convex Optimization & Euclidean Distance Geometry. Meboo Publishing USA, 2011.
[10] J. Fan and R. Li. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456):1348–1360, 2001.
[11] B. He, H. Liu, Z. Wang, and X. Yuan. A strictly contractive Peaceman–Rachford splitting method for convex programming. SIAM Journal on Optimization, 24(3):1011–1040, 2014.
[12] I. Johnstone and A. Lu. On consistency and sparsity for principal components analysis in high dimensions. Journal of the American Statistical Association, 104(486):682–693, 2009.
[13] I. Jolliffe, N. Trendafilov, and M. Uddin. A modified principal component technique based on the lasso. Journal of Computational and Graphical Statistics, 12(3):531–547, 2003.
[14] M. Journée, Y. Nesterov, P. Richtárik, and R. Sepulchre. Generalized power method for sparse principal component analysis. The Journal of Machine Learning Research, 11:517–553, 2010.
[15] R. Krauthgamer, B. Nadler, and D. Vilenchik. Do semidefinite relaxations really solve sparse PCA? arXiv preprint arXiv:1306.3690, 2013.
[16] J. Lei and V. Q. Vu. Sparsistency and agnostic inference in sparse PCA. arXiv preprint arXiv:1401.6978, 2014.
[17] P.-L. Loh and M. J. Wainwright. Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima. arXiv preprint arXiv:1305.2436, 2013.
[18] K. Lounici. Sparse principal component analysis with missing observations. In High Dimensional Probability VI, pages 327–356. Springer, 2013.
[19] Z. Ma. Sparse principal component analysis and iterative thresholding. The Annals of Statistics, 41(2):772–801, 2013.
[20] D. Paul and I. M. Johnstone. Augmented sparse principal component analysis for high dimensional data. arXiv preprint arXiv:1202.1242, 2012.
[21] D. Shen, H. Shen, and J. Marron. Consistency of sparse PCA in high dimension, low sample size contexts. Journal of Multivariate Analysis, 115:317–333, 2013.
[22] H. Shen and J. Huang. Sparse principal component analysis via regularized low rank matrix approximation. Journal of Multivariate Analysis, 99(6):1015–1034, 2008.
[23] V. Q. Vu, J. Cho, J. Lei, and K. Rohe. Fantope projection and selection: A near-optimal convex relaxation of sparse PCA. In NIPS, pages 2670–2678, 2013.
[24] V. Q. Vu and J. Lei. Minimax rates of estimation for sparse PCA in high dimensions. In International Conference on Artificial Intelligence and Statistics, pages 1278–1286, 2012.
[25] V. Q. Vu and J. Lei. Minimax sparse principal subspace estimation in high dimensions. The Annals of Statistics, 41(6):2905–2947, 2013.
[26] Z. Wang, H. Liu, and T. Zhang. Optimal computational and statistical rates of convergence for sparse nonconvex learning problems. The Annals of Statistics, 42(6):2164–2201, 2014.
[27] Z. Wang, H. Lu, and H. Liu. Nonconvex statistical optimization: Minimax-optimal sparse PCA in polynomial time. arXiv preprint arXiv:1408.5352, 2014.
[28] X.-T. Yuan and T. Zhang. Truncated power method for sparse eigenvalue problems. The Journal of Machine Learning Research, 14(1):899–925, 2013.
[29] C.-H. Zhang. Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics, 38(2):894–942, 2010.
[30] H. Zou, T. Hastie, and R. Tibshirani. Sparse principal component analysis. Journal of Computational and Graphical Statistics, 15(2):265–286, 2006.
Emile Richard
Electrical Engineering
Stanford University
Andrea Montanari
Statistics & Electrical Engineering
Stanford University
Abstract
We consider the Principal Component Analysis problem for large tensors of arbitrary order k under a single-spike (or rank-one plus noise) model. On the one
hand, we use information theory, and recent results in probability theory, to establish necessary and sufficient conditions under which the principal component
can be estimated using unbounded computational resources. It turns out?that this
is possible as soon as the signal-to-noise ratio ? becomes larger than C k log k
(and in particular ? can remain bounded as the problem dimensions increase).
On the other hand, we analyze several polynomial-time estimation algorithms,
based on tensor unfolding, power iteration and message passing ideas from graphical models. We show that, unless the signal-to-noise ratio diverges in the system
dimensions, none of these approaches succeeds. This is possibly related to a fundamental limitation of computationally tractable estimators for this problem.
We discuss various initializations for tensor power iteration, and show that a
tractable initialization based on the spectrum of the unfolded tensor outperforms
significantly baseline methods, statistically and computationally. Finally, we consider the case in which additional side information is available about the unknown
signal. We characterize the amount of side information that allows the iterative
algorithms to converge to a good estimate.
1 Introduction
Given a data matrix X, Principal Component Analysis (PCA) can be regarded as a "denoising" technique that replaces X by its closest rank-one approximation. This optimization problem can be solved efficiently, and its statistical properties are well-understood. The generalization of PCA to tensors is motivated by problems in which it is important to exploit higher order moments, or in which data elements are naturally given more than two indices. Examples include topic modeling, video processing, collaborative filtering in the presence of temporal/context information, community detection [1], and spectral hypergraph theory. Further, finding a rank-one approximation to a tensor is a bottleneck for tensor-valued optimization algorithms using conditional gradient type schemes. While tensor factorization is NP-hard [11], this does not necessarily imply intractability for natural statistical models. Over the last ten years, it was repeatedly observed that either convex optimization or greedy methods yield optimal solutions to statistical problems that are intractable from a worst-case perspective (well-known examples include sparse regression and low-rank matrix completion).
In order to investigate the fundamental tradeoffs between computational resources and statistical power in tensor PCA, we consider the simplest possible model where this arises, whereby an unknown unit vector v0 is to be inferred from noisy multilinear measurements. Namely, for each unordered k-uple {i1, i2, ..., ik} ⊆ [n], we measure

    X_{i1,i2,...,ik} = β (v0)_{i1} (v0)_{i2} ··· (v0)_{ik} + Z_{i1,i2,...,ik},    (1)

with Z Gaussian noise (see below for a precise definition), and wish to reconstruct v0. In tensor notation, the observation model reads (see the end of this section for notations)

    X = β v0^{⊗k} + Z.    (Spiked Tensor Model)
This is analogous to the so-called "spiked covariance model" used to study matrix PCA in high dimensions [12].

It is immediate to see that the maximum-likelihood estimator v̂_ML is given by a solution of the following problem:

    maximize ⟨X, v^{⊗k}⟩,  subject to ‖v‖₂ = 1.    (Tensor PCA)

Solving it exactly is, in general, NP-hard [11].
We next summarize our results. Note that, given a completely observed rank-one symmetric tensor v0^{⊗k} (i.e. for β = ∞), it is easy to recover the vector v0 ∈ R^n. It is therefore natural to ask for which signal-to-noise ratios one can still reliably estimate v0. The answer appears to depend dramatically on the computational resources.¹
Ideal estimation. Assuming unbounded computational resources, we can solve the Tensor PCA optimization problem and hence implement the maximum likelihood estimator v̂_ML. We use recent results in probability theory to show that this approach is successful for β ≥ μ_k = √(k log k)(1 + o_k(1)). In particular, above this threshold² we have, with high probability,

    ‖v̂_ML − v0‖₂² ≲ 2.01 μ_k / β.    (2)

We use an information-theoretic argument to show that no approach can do significantly better: namely, no procedure can estimate v0 accurately for β ≤ c√k (for c a universal constant).
Tractable estimators: Unfolding. We consider two approaches to estimate v0 that can be implemented in polynomial time. The first approach is based on tensor unfolding: starting from the tensor X ∈ ⊗^k R^n, we produce a matrix Mat(X) of dimensions n^q × n^{k−q}. We then perform matrix PCA on Mat(X). We show that this method is successful for β ≳ n^{(⌈k/2⌉−1)/2}. A heuristic argument suggests that the necessary and sufficient condition for tensor unfolding to succeed is indeed β ≳ n^{(k−2)/4} (which is below the rigorous bound by a factor n^{1/4} for k odd). We can indeed confirm this conjecture for k even and under an asymmetric noise model.
Tractable estimators: Warm-start power iteration and Approximate Message Passing. We prove that, when initialized uniformly at random, power iteration converges very rapidly to an accurate estimate provided β ≳ n^{(k−1)/2}. A heuristic argument suggests that the correct necessary and sufficient threshold is given by β ≳ n^{(k−2)/2}. Motivated by the last observation, we consider a "warm-start" power iteration algorithm, in which we initialize power iteration with the output of tensor unfolding. This approach appears to have the same threshold signal-to-noise ratio as simple unfolding, but significantly better accuracy above that threshold. We extend power iteration to an approximate message passing (AMP) algorithm [7, 4]. We show that the behavior of AMP is qualitatively similar to that of naive power iteration. In particular, AMP fails for any β that remains bounded as n → ∞.
Side information. Given the above computational complexity barrier, it is natural to study weaker versions of the original problem. Here we assume that extra information about v0 is available. This can be provided by additional measurements or by approximately solving a related problem, for instance a matrix PCA problem as in [1]. We model this additional information as y = γ v0 + g (with g an independent Gaussian noise vector), and incorporate it in the initial condition of the AMP algorithm. We characterize exactly the threshold value γ* = γ*(β) above which AMP converges to an accurate estimator. The thresholds for various classes of algorithms are summarized below.
¹ Here we write F(n) ≲ G(n) if there exists a constant c independent of n (but possibly dependent on k) such that F(n) ≤ c G(n).
² Note that, for k even, v0 can only be recovered modulo sign. For the sake of simplicity, we assume here that this ambiguity is correctly resolved.
Method                                       Required β (rigorous)    Required β (heuristic)
Tensor Unfolding                             O(n^{(⌈k/2⌉−1)/2})       n^{(k−2)/4}
Tensor Power Iteration (with random init.)   O(n^{(k−1)/2})           n^{(k−2)/2}
Maximum Likelihood                           1                        -
Information-theory lower bound               1                        -
We will conclude the paper with some insights that we believe provide useful guidance for tensor
factorization heuristics. We illustrate these insights through simulations.
1.1 Notations

Given X ∈ ⊗^k R^n a real k-th order tensor, we let {X_{i1,...,ik}} denote its coordinates and define a map X : R^n → R^n by letting, for v ∈ R^n,

    X{v}_i = Σ_{j1,...,j(k−1) ∈ [n]} X_{i,j1,...,j(k−1)} v_{j1} ··· v_{j(k−1)}.    (3)

The outer product of two tensors is X ⊗ Y, and, for v ∈ R^n, we define v^{⊗k} = v ⊗ ··· ⊗ v ∈ ⊗^k R^n as the k-th outer power of v. We define the inner product of two tensors X, Y ∈ ⊗^k R^n as

    ⟨X, Y⟩ = Σ_{i1,...,ik ∈ [n]} X_{i1,...,ik} Y_{i1,...,ik}.    (4)

We define the Frobenius (Euclidean) norm of a tensor X by ‖X‖_F = √⟨X, X⟩, and its operator norm by

    ‖X‖_op ≡ max{ ⟨X, u1 ⊗ ··· ⊗ uk⟩ : ∀i ∈ [k], ‖ui‖₂ ≤ 1 }.    (5)

For the special case k = 2, it reduces to the ordinary ℓ2 matrix operator norm. For π ∈ S_k, we will denote by X^π the tensor with permuted indices X^π_{i1,...,ik} = X_{π(i1),...,π(ik)}. We call the tensor X symmetric if, for any permutation π ∈ S_k, X^π = X. It is proved [23] that, for symmetric tensors, the value of problem Tensor PCA coincides with ‖X‖_op up to a sign. More precisely, for symmetric tensors we have the equivalent representation max{ |⟨X, u^{⊗k}⟩| : ‖u‖₂ ≤ 1 }. We denote by G ∈ ⊗^k R^n a tensor with independent and identically distributed entries G_{i1,...,ik} ~ N(0, 1) (note that this tensor is not symmetric). We define the symmetric standard normal noise tensor Z ∈ ⊗^k R^n by

    Z = (1/k!) √(k/n) Σ_{π ∈ S_k} G^π.    (6)
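To make the notation concrete, here is a minimal NumPy sketch (ours, not the authors' code) that draws the symmetric noise tensor Z of (6) for k = 3, forms X under the Spiked Tensor Model, and applies the map X{v} of (3):

    import itertools, math
    import numpy as np

    def symmetric_noise(n, k=3, seed=0):
        """Z = (1/k!) * sqrt(k/n) * sum_{pi in S_k} G^pi, as in (6)."""
        G = np.random.default_rng(seed).standard_normal((n,) * k)
        Z = sum(np.transpose(G, p) for p in itertools.permutations(range(k)))
        return np.sqrt(k / n) * Z / math.factorial(k)

    def spiked_tensor(v0, beta, seed=0):
        """X = beta * v0^{(x)3} + Z under the Spiked Tensor Model (k = 3)."""
        spike = np.einsum('i,j,l->ijl', v0, v0, v0)
        return beta * spike + symmetric_noise(len(v0), k=3, seed=seed)

    def apply_map(X, v):
        """The map X{v} of (3): contract X against v on all but the first index."""
        return np.einsum('ijl,j,l->i', X, v, v)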
We use the loss function

    Loss(v̂, v0) ≡ min{ ‖v̂ − v0‖₂², ‖v̂ + v0‖₂² } = 2 − 2|⟨v̂, v0⟩|.    (7)

2 Ideal estimation
In this section we consider the problem of estimating v0 under the observation model Spiked Tensor Model, when no constraint is imposed on the complexity of the estimator. Our first result is a lower bound on the loss of any estimator.

Theorem 1. For any estimator v̂ = v̂(X) of v0 from data X, such that ‖v̂(X)‖₂ = 1 (i.e. v̂ : ⊗^k R^n → S^{n−1}), we have, for all n ≥ 4,

    β ≤ (1/10)√k  ⟹  E Loss(v̂, v0) ≥ 1/32.    (8)
In order to establish a matching upper bound on the loss, we consider the maximum likelihood estimator v̂_ML, obtained by solving the Tensor PCA problem. As in the case of matrix denoising, we expect the properties of this estimator to depend on the signal-to-noise ratio β, and on the "norm" of the noise ‖Z‖_op (i.e. on the value of the optimization problem Tensor PCA in the case β = 0). For the matrix case k = 2, this coincides with the largest eigenvalue of Z. Classical random matrix theory shows that, in this case, ‖Z‖_op concentrates tightly around 2 [10, 6, 3].

It turns out that tight results for k ≥ 3 follow immediately from a technically sophisticated analysis of the stationary points of random Morse functions by Auffinger, Ben Arous and Cerny [2].

Lemma 2.1. There exists a sequence of real numbers {μ_k}_{k≥2} such that

    lim sup_{n→∞} ‖Z‖_op ≤ μ_k    (k odd),    (9)
    lim_{n→∞} ‖Z‖_op = μ_k    (k even).    (10)

Further, ‖Z‖_op concentrates tightly around its expectation. Namely, for any n, k,

    P( |‖Z‖_op − E‖Z‖_op| ≥ s ) ≤ 2 e^{−ns²/(2k)}.    (11)

Finally, μ_k = √(k log k)(1 + o_k(1)) for large k.
For instance, a large order-3 Gaussian tensor should have ‖Z‖_op ≈ 2.87, while a large order-10 tensor has ‖Z‖_op ≈ 6.75. As a simple consequence of Lemma 2.1, we establish an upper bound on the error incurred by the maximum likelihood estimator.

Theorem 2. Let μ_k be the sequence of real numbers introduced above. Letting v̂_ML denote the maximum likelihood estimator (i.e. the solution of Tensor PCA), we have, for n large enough and all s > 0,

    β ≥ μ_k  ⟹  Loss(v̂_ML, v0) ≤ 2(μ_k + s)/β,    (12)

with probability at least 1 − 2e^{−ns²/(16k)}.
The following upper bound on the value of the Tensor PCA problem is proved using the Sudakov–Fernique inequality. While it is looser than Lemma 2.1 (corresponding to the case β = 0), we expect it to become sharp when β/μ_k is a suitably large constant.

Lemma 2.2. Under the Spiked Tensor Model, we have

    lim sup_{n→∞} E‖X‖_op ≤ max_{α≥0} { kα/√(1+α²) + β/(1+α²)^{k/2} }.    (13)

Further, for any s ≥ 0,

    P( |‖X‖_op − E‖X‖_op| ≥ s ) ≤ 2 e^{−ns²/(2k)}.    (14)

3 Tensor Unfolding
A simple and popular heuristic to obtain tractable estimators of v0 consists in constructing a suitable matrix with the entries of X, and performing PCA on this matrix.

3.1 Symmetric noise

For an integer k ≥ q ≥ k/2, we introduce the unfolding (also referred to as matricization or reshape) operator Mat_q : ⊗^k R^n → R^{n^q × n^{k−q}} as follows. For any indices i1, i2, ..., ik ∈ [n], we let a = 1 + Σ_{j=1}^{q} (i_j − 1) n^{j−1} and b = 1 + Σ_{j=q+1}^{k} (i_j − 1) n^{j−q−1}, and define

    [Mat_q(X)]_{a,b} = X_{i1,i2,...,ik}.    (15)
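Concretely, Mat_q is just a reshape. Here is a minimal sketch (ours) of the unfolding and of the spectral estimator analyzed below; the reshape order is an implementation choice that leaves the singular values unchanged:

    import numpy as np

    def unfold(X, q):
        """Mat_q(X): reshape a k-th order tensor into n^q rows and n^{k-q} columns.

        The index convention of (15) corresponds to Fortran ('F') order; any fixed
        ordering yields the same singular values, hence the same estimator below.
        """
        n, k = X.shape[0], X.ndim
        return X.reshape(n ** q, n ** (k - q), order='F')

    def unfolding_estimator(X):
        """Top right singular vector of Mat_{ceil(k/2)}(X)."""
        k = X.ndim
        q = -(-k // 2)                         # ceil(k/2)
        _, _, Vt = np.linalg.svd(unfold(X, q), full_matrices=False)
        return Vt[0]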
Standard convex relaxations of the low-rank tensor estimation problem compute factorizations of Mat_q(X) [22, 15, 17, 19]. Not all unfoldings (choices of q) are equivalent. It is natural to expect that this approach will be successful only if the signal-to-noise ratio exceeds the operator norm of the unfolded noise ‖Mat_q(Z)‖_op. The next lemma suggests that the latter is minimal when Mat_q(Z) is "as square as possible". A similar phenomenon was observed in a different context in [17].

Lemma 3.1. For any integer k/2 ≤ q ≤ k we have, for some universal constant C_k,

    (1/√((k−1)!)) n^{(q−1)/2} (1 − n^{−1/max(q,k−q)}) ≤ E‖Mat_q(Z)‖_op ≤ C_k √k ( n^{(q−1)/2} + n^{(k−q−1)/2} ).    (16)

For all n large enough, both bounds are minimized for q = ⌈k/2⌉. Further,

    P( |‖Mat_q(Z)‖_op − E‖Mat_q(Z)‖_op| ≥ t ) ≤ 2 e^{−nt²/(2k)}.    (17)

The last lemma suggests the choice q = ⌈k/2⌉, which we shall adopt in what follows, unless stated otherwise. We will drop the subscript and write Mat.
Let us recall the following standard result, derived directly from Wedin's perturbation theorem [24] and stated in the context of the spiked model.

Theorem 3 (Wedin perturbation). Let M = β u0 w0^T + Δ ∈ R^{m×p} be a matrix with ‖u0‖₂ = ‖w0‖₂ = 1, and let ŵ denote the top right singular vector of M. If β > 2‖Δ‖_op, then

    Loss(ŵ, w0) ≤ 8‖Δ‖²_op / β².    (18)

Theorem 4. Letting w = w(X) denote the top right singular vector of Mat(X), we have the following, for some universal constant C = C_k > 0 and b ≡ (1/2)(⌈k/2⌉ − 1). If β ≥ 5 k^{1/2} n^b then, with probability at least 1 − n^{−2}, we have

    Loss( w, vec(v0^{⊗⌊k/2⌋}) ) ≤ C k n^{2b} / β².    (19)
3.2 Asymmetric noise and recursive unfolding

A technical complication in analyzing the random matrix Mat_q(X) lies in the fact that its entries are not independent, because the noise tensor Z is assumed to be symmetric. In the next theorem we consider the case of non-symmetric noise and even k. This allows us to leverage known results in random matrix theory [18, 8, 5] to obtain: (i) asymptotically sharp estimates on the critical signal-to-noise ratio; (ii) a lower bound on the loss below the critical signal-to-noise ratio. Namely, we consider observations

    X̃ = β v0^{⊗k} + n^{−1/2} G,    (20)

where G ∈ ⊗^k R^n is a standard Gaussian tensor (i.e. a tensor with i.i.d. standard normal entries). Let w = w(X̃) ∈ R^{n^{k/2}} denote the top right singular vector of Mat(X̃). For k ≥ 4 even, define b ≡ (k − 2)/4, as above. By [18, Theorem 4], or [5, Theorem 2.3], we have the following almost sure limits:

    β ≤ (1 − ε) n^b  ⟹  lim_{n→∞} ⟨w(X̃), vec(v0^{⊗(k/2)})⟩ = 0,    (21)
    β ≥ (1 + ε) n^b  ⟹  lim inf_{n→∞} ⟨w(X̃), vec(v0^{⊗(k/2)})⟩ ≥ √(ε/(1+ε)).    (22)

In other words, w(X̃) is a good estimate of v0^{⊗(k/2)} if and only if β is larger than n^b.

We can use w(X̃) ∈ R^{n^{2b+1}} to estimate v0 as follows. Construct the unfolding Mat_1(w) ∈ R^{n × n^{2b}} (slight abuse of notation) of w by letting, for i ∈ [n] and j ∈ [n^{2b}],

    Mat_1(w)_{i,j} = w_{i+(j−1)n},    (23)

and then let v̂ be the principal left singular vector of Mat_1(w). We refer to this algorithm as recursive unfolding.
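The two-step procedure just described admits a compact sketch (ours, assuming k even and the Fortran-order reshape convention for (15) and (23); any fixed ordering gives the same singular vectors up to index relabeling):

    import numpy as np

    def recursive_unfolding(X):
        """Two-step recursive unfolding estimate of v0 (k >= 4, k even)."""
        n, k = X.shape[0], X.ndim
        M = X.reshape(n ** (k // 2), n ** (k // 2), order='F')
        _, _, Vt = np.linalg.svd(M, full_matrices=False)
        w = Vt[0]                                         # estimates vec(v0^{(x)(k/2)})
        W = w.reshape(n, n ** (k // 2 - 1), order='F')    # Mat_1(w), cf. (23)
        U, _, _ = np.linalg.svd(W, full_matrices=False)
        return U[:, 0]                                    # principal left singular vector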
Theorem 5. Let X̃ be distributed according to the non-symmetric model (20) with k ≥ 4 even, and define b ≡ (k − 2)/4. Let v̂ be the estimate obtained by the two-step recursive matricization. If β ≥ (1 + ε) n^b then, almost surely,

    lim_{n→∞} Loss(v̂, v0) = 0.    (24)

We conjecture that the weaker condition β ≳ n^{(k−2)/4} is indeed sufficient also for our original symmetric noise model, both for k even and for k odd.

4 Power Iteration
Iterating over the (multi-)linear maps induced by a tensor is a standard method for finding leading eigenpairs; see [14] and references therein for tensor-related results. In this section we consider a simple power iteration, and then its possible uses in conjunction with tensor unfolding. Finally, we compare our analysis with results available in the literature.

4.1 Naive power iteration

The simplest iterative approach is defined by the following recursion:

    v^0 = y/‖y‖₂,   and   v^{t+1} = X{v^t}/‖X{v^t}‖₂.    (Power Iteration)
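A direct sketch (ours) of this recursion for k = 3, with X{v} computed as in (3):

    import numpy as np

    def tensor_power_iteration(X, y, iters=50):
        """v^{t+1} = X{v^t}/||X{v^t}||_2, started from v^0 = y/||y||_2 (k = 3)."""
        v = y / np.linalg.norm(y)
        for _ in range(iters):
            Xv = np.einsum('ijl,j,l->i', X, v, v)    # the map X{v} of (3)
            v = Xv / np.linalg.norm(Xv)
        return v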
The following result establishes convergence criteria for this iteration, first for generic noise Z and then for standard normal noise (using Lemma 2.1).

Theorem 6. Assume

    β ≥ 2e(k − 1) ‖Z‖_op,    (25)
    ⟨y, v0⟩/‖y‖₂ ≥ ( (k − 1)‖Z‖_op / β )^{1/(k−1)}.    (26)

Then for all t ≥ t0(k), the power iteration estimator satisfies Loss(v^t, v0) ≤ 2e‖Z‖_op/β. If Z is a standard normal noise tensor, then conditions (25), (26) are satisfied with high probability provided

    β ≥ 2ek μ_k = 6√(k³ log k)(1 + o_k(1)),    (27)
    ⟨y, v0⟩/‖y‖₂ ≥ ( k μ_k / β )^{1/(k−1)} = β^{−1/(k−1)}(1 + o_k(1)).    (28)

In Section 6 we discuss two aspects of this result: (i) the requirement of a positive correlation between initialization and ground truth; (ii) possible scenarios under which the assumptions of Theorem 6 are satisfied.
5 Asymptotics via Approximate Message Passing

Approximate message passing (AMP) algorithms [7, 4] proved successful in several high-dimensional estimation problems including compressed sensing, low rank matrix reconstruction, and phase retrieval [9, 13, 20, 21]. An appealing feature of this class of algorithms is that their high-dimensional limit can be characterized exactly through a technique known as "state evolution". Here we develop an AMP algorithm for tensor data, and its state evolution analysis, focusing on the limit of fixed β and n → ∞. Proofs follow the approach of [4] and will be presented in a journal publication.

In a nutshell, our AMP for Tensor PCA can be viewed as a sophisticated version of the power iteration method of the last section. With the notation f(x) = x/‖x‖₂, we define the AMP iteration over vectors v^t ∈ R^n by v^0 = y, f(v^{−1}) = 0, and

    v^{t+1} = X{f(v^t)} − b_t f(v^{t−1}),
    b_t = (k − 1) ⟨f(v^t), f(v^{t−1})⟩^{k−2}.    (AMP)
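A minimal sketch (ours) of this iteration for k = 3; note the correction term b_t f(v^{t−1}), which is what distinguishes AMP from plain power iteration:

    import numpy as np

    def amp_tensor_pca(X, y, iters=50):
        """AMP for tensor PCA with k = 3: v^{t+1} = X{f(v^t)} - b_t f(v^{t-1})."""
        k = 3
        f = lambda x: x / np.linalg.norm(x)
        fv_prev = np.zeros_like(y)                   # f(v^{-1}) = 0
        v = y.copy()                                 # v^0 = y
        for _ in range(iters):
            fv = f(v)
            bt = (k - 1) * np.dot(fv, fv_prev) ** (k - 2)
            v, fv_prev = np.einsum('ijl,j,l->i', X, fv, fv) - bt * fv_prev, fv
        return f(v)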
Our main conclusion is that the behavior of AMP is qualitatively similar to that of power iteration. However, we can establish stronger results in two respects:
1. We can prove that, unless side information is provided about the signal v0, the AMP estimates remain essentially orthogonal to v0, for any fixed number of iterations. This corresponds to a converse to Theorem 6.

2. Since state evolution is asymptotically exact, we can prove sharp phase transition results with explicit characterization of their locations.

We assume that the additional information takes the form of a noisy observation y = γ v0 + z, where z ~ N(0, I_n/n). Our next results summarize the state evolution analysis.
Proposition 5.1. Let k ≥ 2 be a fixed integer. Let {v0(n)}_{n≥1} be a sequence of unit norm vectors v0(n) ∈ S^{n−1}. Let also {X(n)}_{n≥1} denote a sequence of tensors X(n) ∈ ⊗^k R^n generated following the Spiked Tensor Model. Finally, let v^t denote the t-th iterate produced by AMP, and consider its orthogonal decomposition

    v^t = v^t_∥ + v^t_⊥,    (29)

where v^t_∥ is proportional to v0, and v^t_⊥ is perpendicular. Then v^t_⊥ is uniformly random, conditional on its norm. Further, almost surely,

    lim_{n→∞} ⟨v^t, v0⟩ = lim_{n→∞} ⟨v^t_∥, v0⟩ = γ_t,    (30)
    lim_{n→∞} ‖v^t_⊥‖₂ = 1,    (31)

where γ_t is given recursively by letting γ_0 = γ and, for t ≥ 0 (we refer to this as state evolution):

    γ_{t+1}² = β² ( γ_t² / (1 + γ_t²) )^{k−1}.    (32)
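The recursion (32) is a scalar iteration and can be explored directly; the following sketch (ours) returns the limiting normalized overlap γ_t/√(1 + γ_t²) of v^t/‖v^t‖₂ with v0:

    def state_evolution(beta, gamma0, k=3, iters=200):
        """Iterate (32): gamma_{t+1}^2 = beta^2 * (gamma_t^2/(1+gamma_t^2))^{k-1}."""
        g2 = gamma0 ** 2
        for _ in range(iters):
            g2 = beta ** 2 * (g2 / (1.0 + g2)) ** (k - 1)
        return (g2 / (1.0 + g2)) ** 0.5    # limiting normalized overlap

    # state_evolution(3.0, 0.5) ~ 0.934 = sqrt(1/2 + sqrt(1/4 - 1/9)), whereas
    # state_evolution(3.0, 0.1) is essentially 0: the iteration collapses.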
The following result characterizes the minimum amount of additional information γ that allows AMP to escape from the undesired local optima. We will say that {v^t}_t converges almost surely to a desired local optimum if

    lim_{t→∞} lim_{n→∞} Loss(v^t/‖v^t‖₂, v0) ≤ 4/β².

Theorem 7. Consider the Tensor PCA problem with k ≥ 3 and

    β > β_k ≡ √( (k − 1)^{k−1} / (k − 2)^{k−2} ) ≈ √(ek).

Then AMP converges almost surely to a desired local optimum if and only if γ > √(1/ε_k(β) − 1), where ε_k(β) is the largest solution of ε(1 − ε)^{k−2} = β^{−2}.

In the special case k = 3 with β > 2, assuming γ > β(1/2 − √(1/4 − 1/β²)), AMP tends to a desired local optimum. Numerically, β > 2.69 is enough for AMP to achieve ⟨v0, v̂⟩ ≥ 0.9 if γ > 0.45.
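The threshold of Theorem 7 is easy to evaluate numerically; the sketch below (ours) finds ε_k(β) by bisection on the interval where ε(1 − ε)^{k−2} is decreasing, and returns the critical γ. For k = 3 and β = 3 it returns ≈ 0.382, matching the closed form β(1/2 − √(1/4 − 1/β²)).

    import math

    def amp_side_info_threshold(beta, k=3, tol=1e-12):
        """Critical gamma of Theorem 7: sqrt(1/eps_k(beta) - 1).

        eps_k(beta) is the largest root of eps*(1-eps)**(k-2) = beta**(-2),
        found by bisection on [1/(k-1), 1), where the left side is decreasing.
        """
        target = beta ** -2
        phi = lambda e: e * (1.0 - e) ** (k - 2)
        lo, hi = 1.0 / (k - 1.0), 1.0
        assert phi(lo) >= target, "beta is below beta_k: no solution exists"
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if phi(mid) >= target else (lo, mid)
        return math.sqrt(1.0 / lo - 1.0)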
As a final remark, we note that the methods of [16] can be used to show that, under the assumptions of Theorem 7, for β/β_k a sufficiently large constant, AMP asymptotically solves the optimization problem Tensor PCA. Formally, we have, almost surely,

    lim_{t→∞} lim_{n→∞} ( ⟨X, (v^t)^{⊗k}⟩ − ‖X‖_op ) = 0.    (33)

6 Numerical experiments

6.1 Comparison of different algorithms

Our empirical results are reported in the appendix. The main findings are consistent with the theory developed above:

• Tensor power iteration (with random initialization) performs poorly with respect to other approaches that use some form of tensor unfolding. The gap widens as the dimension n increases.
[Figure 1 appears here: four panels (n = 50, n = 200, n = 500, and the n → ∞ theoretical prediction) plotting the correlation |⟨v0, v̂⟩| against α ∈ [0, 2], with curves for Pow. It. (init. y), Pow. It. (random init.), y = Matrix PCA, and Pow. It. (unfold. init.).]
Figure 1: Simultaneous PCA at β = 3. Absolute correlation of the estimated principal component with the truth, |⟨v̂, v0⟩|: simultaneous PCA (black) compared with matrix PCA (green) and tensor PCA (blue).
• All algorithms based on an initial unfolding (comprising PSD-constrained PCA and recursive unfolding) have essentially the same threshold. Above that threshold, those that further process the singular vector (either by recursive unfolding or by tensor power iteration) perform better than the simpler one-step algorithms.

Our heuristic arguments suggest that tensor power iteration with random initialization will work for β ≳ n^{1/2}, while unfolding only requires β ≳ n^{1/4} (our theorems guarantee this for, respectively, β ≳ n and β ≳ n^{1/2}). We plot the average correlation |⟨v̂, v0⟩| versus (respectively) β/n^{1/2} and β/n^{1/4}. The curve superposition confirms that our prediction captures the correct behavior already for n of the order of 50.
6.2 The value of side information

Our next experiment concerns a simultaneous matrix and tensor PCA task: we are given a tensor X ∈ ⊗³ R^n from the Spiked Tensor Model with k = 3, and the signal-to-noise ratio β = 3 is fixed. In addition, we observe M = α v0 v0^T + N, where N ∈ R^{n×n} is a symmetric noise matrix with upper-diagonal elements (i < j) i.i.d. N_{i,j} ~ N(0, 1/n), and the value of α ∈ [0, 2] varies. This experiment mimics a rank-1 version of the topic modeling method presented in [1], where M is a matrix representing pairwise co-occurrences and X triples.

The analysis in the previous sections suggests using the leading eigenvector of M as the initial point of the AMP algorithm for tensor PCA on X. We performed the experiments on 100 randomly generated instances with n = 50, 200, 500 and report in Figure 1 the mean values of |⟨v0, v̂(X)⟩| with confidence intervals.

Random matrix theory predicts lim_{n→∞} ⟨v̂1(M), v0⟩ = √(1 − α^{−2}) [8]. Thus we can set γ = √(1 − α^{−2}) and apply the theory of the previous section. In particular, Proposition 5.1 implies

    lim_{n→∞} ⟨v̂(X), v0⟩ = √( 1/2 + √(1/4 − 1/β²) )   if γ > β( 1/2 − √(1/4 − 1/β²) ),

and lim_{n→∞} ⟨v̂(X), v0⟩ = 0 otherwise. Simultaneous PCA appears vastly superior to simple PCA. Our theory captures this difference quantitatively already for n = 500.
Acknowledgements
This work was partially supported by the NSF grant CCF-1319979 and the grants AFOSR/DARPA
FA9550-12-1-0411 and FA9550-13-1-0036.
References
[1] A. Anandkumar, R. Ge, D. Hsu, S. M. Kakade, and M. Telgarsky. Tensor decompositions for learning latent variable models. arXiv:1210.7559, 2012.
[2] A. Auffinger, G. Ben Arous, and J. Cerny. Random matrices and complexity of spin glasses. Communications on Pure and Applied Mathematics, 66(2):165–201, 2013.
[3] Z. Bai and J. Silverstein. Spectral Analysis of Large Dimensional Random Matrices (2nd edition). Springer, 2010.
[4] M. Bayati and A. Montanari. The dynamics of message passing on dense graphs, with applications to compressed sensing. IEEE Trans. on Inform. Theory, 57:764–785, 2011.
[5] F. Benaych-Georges and R. R. Nadakuditi. The singular values and vectors of low rank perturbations of large rectangular random matrices. Journal of Multivariate Analysis, 111:120–135, 2012.
[6] K. R. Davidson and S. J. Szarek. Local operator theory, random matrices and Banach spaces. In Handbook on the Geometry of Banach Spaces, volume 1, pages 317–366. Elsevier Science, 2001.
[7] D. L. Donoho, A. Maleki, and A. Montanari. Message passing algorithms for compressed sensing. Proceedings of the National Academy of Sciences, 106:18914–18919, 2009.
[8] D. Féral and S. Péché. The largest eigenvalues of sample covariance matrices for a spiked population: diagonal case. Journal of Mathematical Physics, 50:073302, 2009.
[9] A. K. Fletcher, S. Rangan, L. R. Varshney, and A. Bhargava. Neural reconstruction with approximate message passing (NeuRAMP). In Neural Information Processing Systems (NIPS), pages 2555–2563, 2011.
[10] S. Geman. A limit theorem for the norm of random matrices. Annals of Probability, 8:252–261, 1980.
[11] C. Hillar and L. H. Lim. Most tensor problems are NP-hard. Journal of the ACM, 6, 2009.
[12] I. M. Johnstone and A. Y. Lu. On consistency and sparsity for principal components analysis in high dimensions. Journal of the American Statistical Association, 104(486), 2009.
[13] U. Kamilov, S. Rangan, A. K. Fletcher, and M. Unser. Approximate message passing with consistent parameter estimation and applications to sparse learning. In Neural Information Processing Systems (NIPS), pages 2447–2455, 2012.
[14] T. Kolda and J. Mayo. Shifted power method for computing tensor eigenpairs. SIAM Journal on Matrix Analysis and Applications, 32(4):1095–1124, 2011.
[15] J. Liu, P. Musialski, P. Wonka, and J. Ye. Tensor completion for estimating missing values in visual data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1):208–220, 2013.
[16] A. Montanari and E. Richard. Non-negative principal component analysis: Message passing algorithms and sharp asymptotics. arXiv:1406.4775, 2014.
[17] C. Mu, J. Huang, B. Wright, and D. Goldfarb. Square deal: Lower bounds and improved relaxations for tensor recovery. In International Conference on Machine Learning (ICML), 2013.
[18] D. Paul. Asymptotics of sample eigenstructure for a large dimensional spiked covariance model. Statistica Sinica, 17(4):1617, 2007.
[19] B. Romera-Paredes and M. Pontil. A new convex relaxation for tensor completion. In Neural Information Processing Systems (NIPS), 2013.
[20] P. Schniter and V. Cevher. Approximate message passing for bilinear models. In Proc. Workshop Signal Process. Adaptive Sparse Struct. Repr. (SPARS), page 68, 2011.
[21] P. Schniter and S. Rangan. Compressive phase retrieval via generalized approximate message passing. In Communication, Control, and Computing (Allerton), 2012 50th Annual Allerton Conference on, pages 815–822. IEEE, 2012.
[22] R. Tomioka, T. Suzuki, K. Hayashi, and H. Kashima. Statistical performance of convex tensor decomposition. In Neural Information Processing Systems (NIPS), 2011.
[23] W. C. Waterhouse. The absolute-value estimate for symmetric multilinear forms. Linear Algebra and its Applications, 128:97–105, 1990.
[24] P. A. Wedin. Perturbation bounds in connection with singular value decomposition. BIT Numerical Mathematics, 12(1):99–111, 1972.